| url (string, lengths 17–172) | text (string, lengths 44–1.14M) | metadata (string, lengths 820–832) |
|---|---|---|
http://www.nag.com/numeric/CL/nagdoc_cl23/html/S/s10acc.html
|
# NAG Library Function Document: nag_cosh (s10acc)
## 1 Purpose
nag_cosh (s10acc) returns the value of the hyperbolic cosine, $\mathrm{cosh}x$.
## 2 Specification
```
#include <nag.h>
#include <nags.h>

double nag_cosh (double x, NagError *fail)
```
## 3 Description
nag_cosh (s10acc) calculates an approximate value for the hyperbolic cosine, $\mathrm{cosh}x$.
For $\left|x\right|\le {E}_{1}$, (where ${E}_{1}$ is a machine-dependent constant) $\mathrm{cosh}x=\frac{1}{2}\left({e}^{x}+{e}^{-x}\right)$.
For $\left|x\right|>{E}_{1}$, the function fails owing to the danger of overflow in calculating ${e}^{x}$. The result returned for such calls is $\mathrm{cosh}{E}_{1}$, i.e., it returns the result for the nearest valid argument.
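The overflow guard described above can be illustrated outside the library; the following is a minimal Python sketch (not the s10acc implementation), in which `E1` is assumed to be the logarithm of the largest representable double:

```python
import math
import sys

# Assumed stand-in for the machine-dependent constant E_1: the largest |x|
# for which exp(x) still fits in an IEEE 754 double.
E1 = math.log(sys.float_info.max)

def guarded_cosh(x):
    """cosh(x) with the overflow guard described in Section 3."""
    if abs(x) > E1:
        # Mirror the documented failure mode: report the error and return
        # the value at the nearest valid argument, cosh(E1).
        print(f"NE_REAL_ARG_GT: |x| = {abs(x)} exceeds {E1}", file=sys.stderr)
        x = math.copysign(E1, x)
    return 0.5 * (math.exp(x) + math.exp(-x))

print(guarded_cosh(1.0))      # ~1.5430806348152437
print(guarded_cosh(1000.0))   # clamped to cosh(E1), with a warning on stderr
```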
## 4 References
Abramowitz M and Stegun I A (1972) Handbook of Mathematical Functions (3rd Edition) Dover Publications
## 5 Arguments
1: x – double (Input)
On entry: the argument $x$ of the function.
2: fail – NagError * (Input/Output)
The NAG error argument (see Section 3.6 in the Essential Introduction).
## 6 Error Indicators and Warnings
NE_REAL_ARG_GT
On entry, ${\mathbf{x}}=〈\mathit{\text{value}}〉$.
Constraint: $\left|{\mathbf{x}}\right|\le 〈\mathit{\text{value}}〉$.
The function has been called with an argument too large in absolute magnitude. There is a danger of overflow. The result returned is the value of $\mathrm{cosh}x$ at the nearest valid argument.
## 7 Accuracy
If $\delta $ and $\epsilon $ are the relative errors in the argument and result, respectively, then in principle
$$\epsilon \simeq x \tanh x \, \delta .$$
That is, the relative error in the argument, $x$, is amplified by a factor at least $x\mathrm{tanh}x$ in the result. The equality should hold if $\delta $ is greater than the machine precision ($\delta $ is due to data errors etc.), but if $\delta $ is simply a result of round-off in the machine representation of $x$ then it is possible that an extra figure may be lost in internal calculation round-off.
It should be noted that near $x=0$ where this amplification factor tends to zero the accuracy will be limited eventually by the machine precision. Also for $\left|x\right|\gtrsim 2$
$$\epsilon \sim x \, \delta = \Delta$$
where $\Delta $ is the absolute error in the argument $x$.
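As an illustration of this estimate (not part of the original document), one can perturb the argument by a known relative error and compare the observed relative error in the result with $x\tanh x\,\delta$:

```python
import math

x, delta = 3.0, 1e-6                      # argument and its relative error
observed = abs(math.cosh(x * (1 + delta)) - math.cosh(x)) / math.cosh(x)
predicted = x * math.tanh(x) * delta      # epsilon ~= x tanh(x) * delta
print(observed, predicted)                # the two agree to several digits
```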
## 8 Further Comments
None.
## 9 Example
The following program reads values of the argument $x$ from a file, evaluates the function at each value of $x$ and prints the results.
### 9.1 Program Text
Program Text (s10acce.c)
### 9.2 Program Data
Program Data (s10acce.d)
### 9.3 Program Results
Program Results (s10acce.r)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 28, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.5746335983276367, "perplexity_flag": "middle"}
|
http://www.physicsforums.com/showthread.php?p=4068851
|
Physics Forums
## Laurent series for pole to non integer power
Hey all,
I am doing a Schwarz-Christoffel transformation and I am trying to calculate the integral analytically using the residue theorem.
My integral is the following:
$$\int^\zeta _{\zeta_0} (z+1)\frac{1}{(z+2.9)^{{b_1}/\pi}{(z-0.5)^{{b_2}/\pi}}}dz$$
This has two poles at -2.9 and 0.5. $b_1$ and $b_2$ are not integers.
I want to do this integral for a contour that contains both poles. I know how to use the Laurent series to extract the $a_{-1}$ term (residue) needed for the residue theorem for integer powers (which is to take the limit of the derivative of the same power). Does anyone know how I can find the residue for a function where the poles are raised to a non-integer power?
Cheers
P.S. Lately my fraction lines appear in the web browser distorted, anyone knows what's up with that??
When you've got non-integer powers in the denominator, it means you have branch cuts, hence your function is not continuous around z=-2.9 and z=0.5, hence not holomorphic, hence no poles.
Awww that sucks Is there no way to calculate this analytically?
With the hypergeometric function you can.
Ok, that's interesting. Here's the antiderivative via Mathematica where I use just 2 and not 2.9 and the exponents are b and c: $$\begin{multline}\frac{1}{2 (-b+\pi )}\left(-\frac{1}{2}+z\right)^{-\frac{c}{\pi }} (2+z)^{-\frac{b}{\pi }} \left((-b+\pi ) (1-2 z)^{c/\pi } \left(1+\frac{z}{2}\right)^{b/\pi } z^2 \text{AppellF1}\left[2,\frac{c}{\pi },\frac{b}{\pi },3,2 z,-\frac{z}{2}\right]\\ +2 \pi \left(\frac{1}{5}-\frac{2 z}{5}\right)^{c/\pi } (2+z) \text{Hypergeometric2F1}\left[\frac{c}{\pi },\frac{-b+\pi }{\pi },1+\frac{-b+\pi }{\pi },\frac{2 (2+z)}{5}\right]\right) \end{multline}$$

Ok, that antiderivative, call it $M(z)$, is full of multifunctions, and in order to evaluate $$M(z)\biggr|_{z_1}^{z_2}$$ you would have to take analytic extensions over each multifunction between the points $z_1$ and $z_2$. That's quite a challenge I think, which means that before doing this one, you'd best work on some simpler ones where you have to analytically extend the antiderivative. Also, since the antiderivative is multivalued, so too will be the answer: one value for each sheet of each function you integrate over, and if the exponents are irrational, the answer is infinitely-valued.

All in all, a nice problem to work on. Probably take me the entire semester. :)
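For what it's worth, the original integral can at least be checked numerically along any path that avoids the branch cuts, which is useful for validating whatever closed form one obtains. A minimal mpmath sketch (the exponents and endpoints below are placeholders, not values from the thread):

```python
import mpmath as mp

b1, b2 = 1.0, 2.0                      # placeholder exterior angles
z0, z1 = mp.mpc(2, 0), mp.mpc(0, 2)    # placeholder endpoints

def f(z):
    # Principal branches; the straight segment chosen below stays off the
    # branch cuts running left from z = -2.9 and z = 0.5 along the real axis.
    return (z + 1) * (z + mp.mpf('2.9')) ** (-b1 / mp.pi) * (z - mp.mpf('0.5')) ** (-b2 / mp.pi)

# Parametrize the segment z(t) = z0 + t (z1 - z0) and integrate over t in [0, 1].
val = mp.quad(lambda t: f(z0 + t * (z1 - z0)) * (z1 - z0), [0, 1])
print(val)
```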
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.904101550579071, "perplexity_flag": "middle"}
|
http://unapologetic.wordpress.com/2011/12/14/simply-connected-spaces/?like=1&_wpnonce=9d779cb1c0
|
# The Unapologetic Mathematician
## Simply-Connected Spaces
We say that a space is “simply-connected” if any closed curve $c$ with $c(0)=c(1)=p$ is homotopic to a constant curve that stays at the single point $p$. Intuitively, this means that any loop in the space can be “pulled tight” without getting caught up on any “holes”.
It turns out that this is equivalent to saying that every closed curve is the boundary of some parameterized square. Indeed, consider the following diagram I’ve drawn with the help of Geogebra:
This is a picture of the homotopy cylinder. The domain of a curve is the interval $[0,1]$, so the domain of the homotopy cylinder is the square $[0,1]\times[0,1]$. I’ve labeled the sides to describe what the homotopy does to them: the lower edge $(x,0)$ follows the curve $c$; the upper edge $(x,1)$ is the constant point $p$; the two sides are also constant at $p$, meaning that we’re holding the curve’s ends fixed as we perform the homotopy. And so the homotopy is exactly a continuous (or smooth) map from the square into our space, and the boundary of the parameterized square is exactly the curve $c$. The converse — that any parameterized square can be homotoped to look like this — shouldn’t be hard to see.
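In symbols, and just restating what the picture encodes, the homotopy is a map
$$H : [0,1]\times[0,1] \to M, \qquad H(x,0) = c(x), \quad H(x,1) = p, \quad H(0,t) = H(1,t) = p,$$
so its restriction to the boundary of the square traverses $c$ along the bottom and three constant edges at $p$.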
So what does this mean for homology? Well, for cubic singular homology it means that $H_1(M)$ vanishes if $M$ is simply-connected. Indeed, if $C$ is a closed $1$-chain, then it must be made up of a formal sum of curves. Any curve which isn’t already closed must have a start and an end, and the end must be the start of another curve, or else the boundary points of $C$ wouldn’t cancel off. We can break $C$ up — possibly non-uniquely — into a collection of closed curves, each of which is the boundary of some parameterized square, by the above argument. Thus $C$ is itself the boundary of this collection of squares; since all closed $1$-chains are exact, the first homology vanishes.
Posted by John Armstrong | Differential Topology, Topology
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 19, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9393352270126343, "perplexity_flag": "head"}
|
http://meta.math.stackexchange.com/questions/6957/book-recommendations
|
# Book recommendations
There are quite a few soft questions asking for book recommendations. Reading some of these threads one might notice that there are answers containing several books at once. As such this might not raise any suspicion but in some cases one cannot help but wonder whether the recommender has actually read all the books they recommend.
Recommending a book to someone that one has not read oneself without explicitly declaring so makes no sense to me. If I ask someone for a recommendation I (obviously) expect them to have read what they recommend to me.
Do I have to lower my expectations when interacting with the MSE community? Or are there community members that share my view?
Be that as it may, I have trusted sources from which I can get book recommendations, so I don't have to depend on MSE for that. But what is still of slight concern is that one can gain a lot of reputation just by posting lots of recommendations (to maths one knows nothing about). In the worst case, this could be very misleading to inexperienced users and generally to people with little mathematical experience, and this would be the opposite of helpful. As a community I think we are striving to help people rather than the opposite, so perhaps this is an issue one might want to address.
-
We can hardly stop people from posting subpar answers. It would be reasonable for book recommendation questions to be converted to CW by the OP or by a moderator. – user53153 Dec 24 '12 at 17:13
Another point here is that the user may well be familiar with many books on the subject. A professor teaching an advanced undergraduate / beginning graduate course is likely to compare several textbooks in the process of choosing the right one[s] to use in the course. – user53153 Dec 24 '12 at 17:45
@PavelM Of course; as to your second comment, the majority of users here aren't lecturers. – Matt N. Dec 24 '12 at 18:36
@PavelM - to your second point: yes indeed; I do a lot of reading prior to selecting texts to use. And also, for those involved in administrative-curriculum-related decisions, and those who write reviews of texts, exposure to a variety of texts comes with the territory. – amWhy Dec 25 '12 at 0:33
But I do agree with Matt that perhaps more should be stated in a reference-recommendation-post as to why the recommendations are being made, and how or when or in what context one has encountered the text. – amWhy Dec 25 '12 at 0:35
@amWhy I agree, a simple list of books is a bad answer. – Michael Greinecker Dec 25 '12 at 15:11
Look at this post for example! How could anyone be familiar with this entire list! I do hate singling out any one post, but an example can be illustrative. In my own post to that answer, I included links to other posts, where there were more answers and recommendations made by others, to provide more input for the OP. – amWhy Jan 6 at 1:43
## 2 Answers
As somebody who has recently recommended books I have not read (I did acknowledge that I had not read or completely read the books), I would like to say the following:
I would justify my recommendations by pointing out that, as a graduate student in mathematics, I am surrounded by a culture in which a great many books are preceded by their reputations. I wouldn't recommend a book just because I saw it on Amazon, but I would recommend a book that multiple people have recommended to me. Usually, when I start reading a new book, I already have a good idea of what topics it covers, how thorough and elegant the explanations are, and how easy it is to read, even before laying my hands on a copy of the book.
-
But what you say in "()" contradicts your first sentence: I complain about recommending books one has not read without acknowledging it. – Matt N. Dec 27 '12 at 17:13
@MattN. Yes, I've just removed the word "exactly." Implicit in your message was the idea that one cannot accurately recommend a book without having read it first. I'm sure many, perhaps most, MSE users are part of a culture that allows one to be reasonably familiar with many math books they have not read. So I wanted to share that perspective in this discussion. – Brett Frankel Dec 27 '12 at 17:23
Recommending a book to someone that one has not read oneself without explicitly declaring so makes no sense to me.
If you think it's reasonable to start reading a book because someone recommended it to you, why don't you think it's reasonable to recommend a book because someone recommended it to you? When you tell people things, do you generally make a habit of pointing out exactly where you obtained the information you tell them? If not, why do you want people to do this specifically for book recommendations?
I also agree with Pavel that you can't stop people from posting bad answers. If you don't like it when people recommend books without explaining why, then downvote those answers and maybe leave a comment. That's it.
-
Yes, if I tell people "things" I usually point out where I obtained the information or, if I can't remember, I explicitly say that I can't remember. Otherwise what I say would not be a fact but merely a statement of opinion and hence completely irrelevant. As for recommendations: if I ask someone to recommend a book to me on $X$, that someone will (1.) know me well enough to know my taste and (2.) know enough about $X$ to be a credible judge of the book they are going to recommend. You are of course right that one cannot stop people from posting bad answers. – Matt N. Dec 26 '12 at 16:54
(cont'd) But it seems much easier to spot a mistake in a calculation or proof than it is to notice that a list of books isn't suitable because for that one would have to have read the books. – Matt N. Dec 26 '12 at 16:55
I think that your first question misses the real point of Matt’s question. I am perfectly willing to make a second-hand recommendation, but I consider it dishonest to do so without noting that it is second-hand. Yes, I do make a point of telling people whether my knowledge is first-hand or not. – Brian M. Scott Dec 28 '12 at 7:51
@Brian: I just think Matt's declaration that the practice "makes no sense" is far too strong. At worst I would call the behavior Matt is calling out mildly negligent, but Matt is reacting to it as if it were immoral or something. – Qiaochu Yuan Dec 28 '12 at 8:09
I think that the "makes no sense" declaration is on a different axis from Matt’s real concern, which is the dishonesty of making a second-hand recommendation without acknowledging it as such. It’s a relatively minor dishonesty, but it is dishonest, and I don’t really see why anyone would engage in it save by accident - under time pressure, for instance. – Brian M. Scott Dec 28 '12 at 8:16
"If you think it's reasonable to start reading a book because someone recommended it to you, why don't you think it's reasonable to recommend a book because someone recommended it to you?" - In the first case, I am trusting your judgement; in the second case, I am trusting your trust in someone else's judgement. – BlueRaja - Danny Pflughoeft Jan 9 at 21:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9718093276023865, "perplexity_flag": "middle"}
|
http://mathoverflow.net/revisions/26926/list
|
## Return to Answer
2 added 24 characters in body
The volume of the real flag manifold $Fl_R^3 = O(3)/Z_2^3$ can be obtained on one hand by the explicit integration on the $O(N)$ invariant volume element on its big cell:
$\int_{-\infty}^{\infty}dx_1\int_{-\infty}^{\infty}dx_2\int_{-\infty}^{\infty}dx_3(1+x_1^2+(x_3-\frac{x_1x_2}{2})^2)^{-1} (1+x_2^2+(x_3+\frac{x_1x_2}{2})^2)^{-1}$
This integral is not hard to evaluate using elementary integration techniques. The result $2\pi^2$ can also be obtained from: $Vol(Fl_R^3)= Vol(RP^1) Vol(RP^2) = \frac{Vol(S^1)}{2}\frac{Vol(S^2)}{2}$
1
The volume of the real flag manifold $Fl_R^3 = O(3)/Z_2^3$ can be obtained on one hand by the explicit integration on the $O(N)$ invariant volume element on its big cell:
$\int_0^{\infty}dx_1\int_0^{\infty}dx_2\int_0^{\infty}dx_3(1+x_1^2+(x_3-\frac{x_1x_2}{2})^2)^{-1} (1+x_2^2+(x_3+\frac{x_1x_2}{2})^2)^{-1}$
This integral is not hard to evaluate using elementary integration techniques. The result $2\pi^2$ can also be obtained from: $Vol(Fl_R^3)= Vol(RP^1) Vol(RP^2) = \frac{Vol(S^1)}{2}\frac{Vol(S^2)}{2}$
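A numerical sanity check of the claimed value is straightforward to sketch (this is an illustration, not part of the answer; the nested improper integral may be slow, and finite cutoffs can be substituted):

```python
import numpy as np
from scipy import integrate

def integrand(x1, x2, x3):
    a = 1 + x1**2 + (x3 - x1 * x2 / 2) ** 2
    b = 1 + x2**2 + (x3 + x1 * x2 / 2) ** 2
    return 1.0 / (a * b)

val, err = integrate.nquad(integrand, [[-np.inf, np.inf]] * 3)
print(val, 2 * np.pi**2)   # compare against the claimed 2*pi^2 ~ 19.739
```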
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8576430678367615, "perplexity_flag": "head"}
|
http://mathhelpforum.com/advanced-algebra/118964-dimension.html
|
# Thread:
1. ## dimension
Call a polynomial balanced if its average value over every circle centered at zero is zero. The set of all balanced polynomials of degree $\leq 2009$ forms a vector space $V$. Find $\text{dim} \ V$.
So what basis do you use?
2. Originally Posted by Sampras
Call a polynomial balanced if its average value over every circle centered at zero is zero. The set of all balanced polynomials of degree $\leq 2009$ forms a vector space $V$. Find $\text{dim} \ V$.
So what basis do you use?
I used rank nullity.
3. the set of polynomials with degree less than or equal to a certain natural $n$ is a subspace with dimension equal to $n+1$.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8514516949653625, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/tagged/pde+harmonic-analysis
|
# Tagged Questions
2answers
102 views
### Mean Value Property of Harmonic Function on a Square
A friend of mine presented me the following problem a couple days ago: Let $S$ in $\mathbb{R}^2$ be a square and $u$ a continuous harmonic function on the closure of $S$. Show that the average of ...
0answers
74 views
### Can we do some scaling argument in the presence of inhomogeneous norms?
Notation: $B^n_R$ stands for the ball of radius $R$ in $\mathbb{R}^n$. $\hat{f}$ stands for the Fourier transform of $f$. Question. The following inequality holds true for all $f\in ...
1answer
158 views
### Laplace equation Dirichlet problem on punctured unit ball.
Let $\Omega = \{ x \in \mathbb{R}^n: 0<|x|<1 \}$ and consider the Dirichlet problem \begin{align} \Delta u &= 0 \\ u(0) &= 1 \\ u &= 0 ~~~\text{if} ~~|x|=1 \end{align} By considering ...
1answer
56 views
### Simple Harmonic estimate
I know this is simple, and I see all the pieces of the puzzle are there, but I can't seem to get it. Let $u$ be a solution of $$\Delta u = f \;\;\; x \in B_4$$ Then if we can bound \int_{B_4} ...
0answers
36 views
### Strichartz estimates and operator from $L^{2}_{x}$ to $L^{6}_{x,t}$
I want to prove that the operator $T=| \nabla|^{1/6} e^{-t\partial ^{3}_{x}}\tilde{P}_{N}$ takes functions from $L^{2}_{x}$ to $L^{6}_{x,t}$. The hint is to first prove for Schwartz functions, and ...
1answer
51 views
### Alternate definitions of $C^{1,\alpha}(S^1)$ and $C^{1,\alpha}(\bar{D})$ maps
My question is about the precise definitition regarding the following: Let $f$ be an orientation-preserving $C^1$ diffeomorphism of the unit circle $S^1$. So $f'(b)$ exists and can be thought as a ...
1answer
112 views
### Harmonic function with condition on part of its boundary
Suppose $u$ is harmonic in the interior of the unit square $0 \leq x \leq 1$, $0\leq y\leq1$. Suppose furthermore that $u$ and its first derivatives continuously extend to the bottom side $0\leq x ...
2answers
67 views
### Estimate on a simple-looking integral arising from harmonic analysis/harmonic extensions
Let $z\in \mathbb{D}, t\in S^1, \beta\in \mathbb{R}$. I was dealing with the following integral arising from some other calculation regarding harmonic extension on $\mathbb{D}$: ...
1answer
134 views
### On the regularity of the Laplace equations and tensor products and such
To start with, let me apologize for my ignorance as I know next to nothing about partial differential equations. My question is about the tensor product of Banach spaces but actually I do not ...
1answer
155 views
### Nirenberg-Gagliardo- Sobolev inequalities
I need a small help in understanding the following that how "Nirenberg -Gagliardo-Sobolev inequalities" were used. This is a part of the paper. Denote H^1=W^{1, 2}(\Omega)\\ V_1=\{ f\in H^2 ...
0answers
51 views
### Inequality for harmonic extension : Is $\int_{t\in S^1} |t-\zeta|^{\alpha}p(z,t) |dt| \leq K|z-\zeta|^{\alpha}, 0< \alpha < 1$ for uniform $K$?
Let $\zeta\in S^1$(unit circle in the complex plane) and $z\in \mathbb{D}$. Fix $0< \alpha < 1$. Then, is the following true ? (Question 1) Let $p(z,t) = \frac{1}{2\pi}.\frac{1-|z|^2}{|z-t|^2}$ ...
1answer
201 views
### Properties of subharmonic functions
A function $f$ is called subharmonic if $f:U\rightarrow\mathbb R$ (with $U\subset\mathbb R^n$) is upper semi-continuous and \forall\space \mathbb B_r(x)\subset ...
3answers
338 views
### Solution of Laplace's equation in an annulus with constant Dirichlet conditions?
What's the solution to Laplace's equation $\nabla^2V=0$ in the annulus with centre 0, inner radius 1, and outer radius 2, with boundary conditions $V=0$ on the inner boundary and $V=1$ on the outer ...
0answers
89 views
### Curvatures of contours of solutions of 3d Poisson's equation
Let $f(x,y,z)$ be a complex function in a 3d euclidian space that fulfill the Poisson's equation \frac{\partial^2}{\partial x^2} f + \frac{\partial^2}{\partial y^2} f + \frac{\partial^2}{\partial ...
4answers
2k views
### What do modern-day analysts actually do?
In an abstract algebra class, one learns about groups, rings, and fields, and (perhaps naively) conceives of a modern-day algebraist as someone who studies these sorts of structures. One learns about ...
3answers
469 views
### Do discontinuous harmonic functions exist?
A function, $u$, on $\mathbb R^n$ is normally said to be harmonic if $\Delta u=0$, where $\Delta$ is the Laplacian operator $\Delta=\sum_{i=1}^n\frac{\partial^2}{\partial x_i^2}$. So obviously, ...
3answers
159 views
### What is a “domain” in the maximum-minimum principle?
The maximum-minimum principle says that A harmonic function on a domain cannot attain its maximum or its minimum unless it is constant. Here is my question: If we restrict our attention in ...
3answers
157 views
### Why is it useful to express PDE solutions as $L^2$-convergent series?
The existence of an $L^2$ orthonormal basis consisting of eigenfunctions of a Sturm-Liouville equation helps us to express the solutions of various ODEs and PDEs as infinite series. However, in the ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 52, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8972775936126709, "perplexity_flag": "head"}
|
http://en.wikiversity.org/wiki/Continuum_mechanics/Deviatoric_and_volumetric_stress
|
# Continuum mechanics/Deviatoric and volumetric stress
From Wikiversity
## Deviatoric and volumetric stress
Often it is convenient to decompose the stress tensor into volumetric and deviatoric (distortional) parts. Applications of such decompositions can be found in metal plasticity, soil mechanics, and biomechanics.
### Decomposition of the Cauchy stress
The Cauchy stress can be additively decomposed as
$\boldsymbol{\sigma} = \mathbf{s} - p~\boldsymbol{\mathit{1}}$
where $\mathbf{s}$ is the deviatoric stress and $p$ is the pressure and
$\begin{align} p & = - \frac{1}{3}~\text{tr}(\boldsymbol{\sigma}) = -\frac{1}{3}~\boldsymbol{\sigma}:\boldsymbol{\mathit{1}} \\ \mathbf{s} & = \boldsymbol{\sigma} + p~\boldsymbol{\mathit{1}} ~;~~ \mathbf{s}:\boldsymbol{\mathit{1}} = \text{tr}(\mathbf{s}) = 0 \end{align}$
In index notation,
$\begin{align} p & = -\frac{1}{3}~\sigma_{ii} \\ s_{ij} & = \sigma_{ij} - \frac{1}{3}~\sigma_{kk}~\delta_{ij} \end{align}$
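A small numerical sketch of this decomposition (the stress values below are arbitrary, purely for illustration):

```python
import numpy as np

# An arbitrary symmetric Cauchy stress tensor.
sigma = np.array([[10.0,  2.0, 0.0],
                  [ 2.0, -4.0, 1.0],
                  [ 0.0,  1.0, 6.0]])

p = -np.trace(sigma) / 3.0        # p = -(1/3) tr(sigma)
s = sigma + p * np.eye(3)         # s = sigma + p 1

print(p)                          # pressure
print(np.trace(s))                # ~0: the deviator is traceless
```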
### Decomposition of the 2nd P-K stress
The second Piola-Kirchhoff stress can be decomposed into volumetric and distortional parts as
$\boldsymbol{S} = \boldsymbol{S}' - p~J~\boldsymbol{C}^{-1}$
where
$\begin{align} p & = -\frac{1}{3}~J^{-1}~\boldsymbol{S}:\boldsymbol{C} \\ \boldsymbol{S}' & = J~\boldsymbol{F}^{-1}\cdot\mathbf{s}\cdot\boldsymbol{F}^{-T} ~;~~ \boldsymbol{S}':\boldsymbol{C} & = 0 \end{align}$
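The same check can be sketched for the second Piola-Kirchhoff decomposition; the deformation gradient below is an arbitrary illustrative choice:

```python
import numpy as np

F = np.array([[1.10, 0.05, 0.00],      # arbitrary deformation gradient
              [0.00, 0.95, 0.02],
              [0.00, 0.00, 1.03]])
sigma = np.array([[10.0,  2.0, 0.0],   # Cauchy stress from the sketch above
                  [ 2.0, -4.0, 1.0],
                  [ 0.0,  1.0, 6.0]])

J = np.linalg.det(F)
C = F.T @ F
Finv = np.linalg.inv(F)

S = J * Finv @ sigma @ Finv.T          # full 2nd P-K stress
p = -np.tensordot(S, C) / (3.0 * J)    # p = -(1/3) J^{-1} S : C
s = sigma + p * np.eye(3)              # deviatoric Cauchy stress
S_dev = J * Finv @ s @ Finv.T          # S' = J F^{-1} s F^{-T}

print(p)                               # equals -tr(sigma)/3
print(np.tensordot(S_dev, C))          # ~0: S' : C = 0
```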
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8284454345703125, "perplexity_flag": "middle"}
|
http://en.m.wikibooks.org/wiki/Haskell/YAHT/Language_basics/Solutions
|
# Haskell/YAHT/Language basics/Solutions
## Arithmetic
It binds more tightly; actually, function application binds more tightly than anything else. To see this, we can do something like:
Example:
```
Prelude> sqrt 3 * 3
5.19615
```
If multiplication bound more tightly, the result would have been 3.
## Pairs, Triples and More
Solution: `snd (fst ((1,'a'),"foo"))`. This is because first we want to take the first half of the tuple: `(1,'a')`, and then out of this we want to take the second half, yielding just `'a'`.
If you tried `fst (snd ((1,'a'),"foo"))` you will have gotten a type error. This is because the application of `snd` will leave you with `fst "foo"`. However, the string "foo" isn't a tuple, so you cannot apply `fst` to it.
## Lists
### Simple List Functions
Solution: `map Char.isLower "aBCde"`
Solution: `length (filter Char.isLower "aBCde")`
Solution: `foldr max 0 [5,10,2,8,1]`.
You could also use `foldl`. The foldr case is easier to explain: we replace each cons with an application of `max` and the empty list with 0. Thus, the inner-most application will take the maximum of 0 and the last element of the list (if it exists). Then, the next-most inner application will return the maximum of whatever was the maximum before and the second-to-last element. This will continue on, carrying the current maximum all the way back to the beginning of the list.
In the foldl case, we can think of this as looking at each element in the list in order. We start off our "state" with 0. We pull off the first element and check to see if it's bigger than our current state. If it is, we replace our current state with that number and then continue. This happens for each element and thus eventually returns the maximal element.
Solution: `fst (head (tail [(5,'b'),(1,'c'),(6,'a')]))`
## Source Code Files
## Functions
### Infix
## Comments
## Recursion
We can define a fibonacci function as:
```
fib 1 = 1
fib 2 = 1
fib n = fib (n-1) + fib (n-2)
```
We could also write it using explicit if statements, like:
```
fib n =
  if n == 1 || n == 2
    then 1
    else fib (n-1) + fib (n-2)
```
Either is acceptable, but the first is perhaps more natural in Haskell.
We can define:
$a*b = \begin{cases} a & b = 1 \\ a + a*(b-1) & \mbox{otherwise} \\ \end{cases}$
And then type out code:
```
mult a 1 = a
mult a b = a + mult a (b-1)
```
Note that it doesn't matter which of $a$ and $b$ we do the recursion on. We could just as well have defined it as:
```
mult 1 b = b
mult a b = b + mult (a-1) b
```
We can define `my_map` as:
```
my_map f [] = []
my_map f (x:xs) = f x : my_map f xs
```
Recall that the `my_map` function is supposed to apply a function `f` to every element in the list. In the case that the list is empty, there are no elements to apply the function to, so we just return the empty list.
In the case that the list is non-empty, it is an element `x` followed by a list `xs`. Assuming we've already properly applied `my_map` to `xs`, then all we're left to do is apply `f` to `x` and then stick the results together. This is exactly what the second line does.
## Interactivity
The code below appears in `Numbers.hs`. The only tricky parts are the recursive calls in `getNums` and `showFactorials`.
```
module Main
    where

import IO

main = do
  nums <- getNums
  putStrLn ("The sum is " ++ show (sum nums))
  putStrLn ("The product is " ++ show (product nums))
  showFactorials nums

getNums = do
  putStrLn "Give me a number (or 0 to stop):"
  num <- getLine
  if read num == 0
    then return []
    else do rest <- getNums
            return ((read num :: Int):rest)

showFactorials [] = return ()
showFactorials (x:xs) = do
  putStrLn (show x ++ " factorial is " ++
            show (factorial x))
  showFactorials xs

factorial 1 = 1
factorial n = n * factorial (n-1)
```
The idea for `getNums` is just as spelled out in the hint. For `showFactorials`, we consider first the recursive call. Suppose we have a list of numbers, the first of which is `x`. First we print out the string showing the factorial. Then we print out the rest, hence the recursive call. But what should we do in the case of the empty list? Clearly we are done, so we don't need to do anything at all, so we simply `return ()`.
Note that this must be `return ()` instead of just `()` because if we simply wrote `showFactorials [] = ()` then this wouldn't be an IO action, as it needs to be. For more clarification on this, you should probably just keep reading the tutorial.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.90524822473526, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/52371/code-that-produces-all-possible-trees-with-n-nodes
|
code that produces all possible trees with n nodes. [closed]
I'm looking for code that produces all possible trees with no self edges (or their adjacency matrices) with n nodes. Does anyone have any idea if this is written anywhere?
-
See stackoverflow.com. – Ricky Demer Jan 18 2011 at 2:46
To prevent comments such as the one above, ask "An algorithm that..." instead of "code that...". Usually, you'll also get an implementation of that algorithm. – Derrick Stolee Jan 18 2011 at 3:52
Why is this not appropriate? There are plenty of contexts (often involving operads) where it is useful to have lists of trees to test conjectures and so on. – Neil Strickland Jan 18 2011 at 11:35
Dear Neil, if marvin wants to use it for some mathematical reason (for operads, for instance), then he should give background on his application. The more background one gives, the less likely one will be sent to SO. – Harry Gindi Jan 18 2011 at 11:42
The question seems completely fine. Trees are a basic mathematical object; maybe the poster is just interested in properties of the set of trees on n nodes. For the purpose of asking for the code, he really doesn't need to tell us precisely which properties. I don't think being pointed to stackoverflow is helpful: stackoverflow is for questions about programming. It seems just as likely that professional mathematicians will know of a tool for generating lists of trees than that professional programmers will, so mathoverflow seems at least as suitable as stackoverflow, probably more so. – James Martin Jan 18 2011 at 13:01
2 Answers
In sage the command
`list(graphs.trees(9))`
produces a list of all trees on 9 vertices. As sage is open source, the code is available for inspection. The command
`[tt.am() for tt in graphs.trees(9)]`
will provide the adjacency matrices.
-
It is well known that there is a bijection between the set of labelled trees on $n$ nodes and sequences of length $n-2$ with values in $[n]$. These sequences are called Prüfer sequences. Indeed, the wikipedia page has code which will convert any Prüfer sequence into a tree. So a naïve algorithm would be to run the wikipedia algorithm over all Prüfer sequences.
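A minimal sketch of that naïve approach (enumerating labelled trees by decoding every Prüfer sequence; the routine below is the standard decoding written from scratch, not the code from the wikipedia page):

```python
from itertools import product

def prufer_to_tree(seq, n):
    """Decode a Prüfer sequence (length n-2, entries in 1..n) into tree edges."""
    degree = [1] * (n + 1)                 # degree[0] unused
    for v in seq:
        degree[v] += 1
    edges = []
    for v in seq:
        # attach the smallest-numbered current leaf to v
        leaf = min(u for u in range(1, n + 1) if degree[u] == 1)
        edges.append((leaf, v))
        degree[leaf] -= 1
        degree[v] -= 1
    # two vertices of degree 1 remain; join them
    u, w = [v for v in range(1, n + 1) if degree[v] == 1]
    edges.append((u, w))
    return edges

n = 5
trees = [prufer_to_tree(seq, n) for seq in product(range(1, n + 1), repeat=n - 2)]
print(len(trees))                          # n**(n-2) = 125 labelled trees on 5 nodes
```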
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9414250254631042, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/1216/could-gravity-be-an-emergent-property-of-nature/1224
|
# Could gravity be an emergent property of nature?
Sorry if this question is naive. It is just a curiosity that I have.
Are there theoretical or experimental reasons why gravity should not be an emergent property of nature?
Assume a standard model view of the world in the very small. Is it possible that gravity only applies to systems with a scale large enough to encompass very large numbers of particles as an emergent property?
After all: the standard model works very well without gravity; general relativity (and gravity in general) has only been tested down to distances of about a millimeter.
How could gravity emerge? For example, it could be that space-time only gets curved by systems which have measurable properties, or only gets curved by average values. In other words, the stress-energy tensor would have a minimum scale on which it varies.
Edit to explain a bit better what I'm thinking of.
1. We would not have a proper quantum gravity as such. I.e. no unified theory that contains QM and GR at the same time.
2. We could have a "small" (possibly semi-classical) glue theory that only needs to explain how the two theories cross over:
• the conditions and mechanism of wave packet reduction (or the other corresponding phenomena in other QM interpretations, like universe branching or decoherence or whatnot)
• how this is correlated to curvature - how GR phenomena arise at this transition point.
Are there theoretical or experimental reasons why such a reasoning is fundamentally incorrect?
-
What do you mean by "scale"? If it is about spatial dimensions, then I believe what you are saying implies singularities should not have gravitation. Or do you mean a certain amount of mass is required to generate gravity? – Cem Nov 22 '10 at 23:31
What I think my example implies is that there would be no singularities, but some other phenomena which are very similar. They would behave effectively like singularities above a certain scale (i.e. have the same metric outside of, say, a ball around the would-be singularity) – Sklivvz♦ Nov 23 '10 at 7:30
I'm "curious" how you define emergent. At some level, all physical laws of nature are "emergent"! – Noldorin Nov 30 '10 at 0:32
@Noldorin, what I mean in the question is if it is compatible with experiment to theorize a universe where spacetime is flat at the quantum level, and only gets curved in correspondence to "measurement", the assumption being that the Standard Model lives well without GR and vice versa. – Sklivvz♦ Nov 30 '10 at 7:00
@Noldorin, there is an article in wikipedia on emergence. 50 years ago, when I studied, this was called a "cooperative phenomenon" (e.g. ferromagnetic behaviour), which was much more telling and precise. – Georg Mar 17 '11 at 13:04
## 5 Answers
I'm not an expert in gravity, however, this is what I know.
There's a hypothesis about gravity being an entropic property. The paper from Verlinde is available at arXiv. That said, I would be surprised if this were true. The reason is simple. As you probably know, entropy is an emergent property arising from statistical probability. If you have non-interacting, adimensional particles in one half of a box, with the other half empty and separated by a valve, it's probability, thus entropy, that drives the transformation. If you look at it from the energetic point of view, the energy is exactly the same before and after the transformation.

This works nicely for statistical distributions, but when you have to explain why things are attracted to each other statistically, it's much harder. From the probabilistic point of view, it would be the opposite: the more degrees of freedom your particles have, the more entropy they have. A clump has fewer degrees of freedom, hence less entropy, meaning that, in a closed system, the existence of gravity is baffling. This is just my speculation, and I think I am wrong. The paper seems to be a pleasure to read, but I haven't had the chance to go through it.
-
Gravity increases entropy. In other words, a clump must have more entropy than a cloud of the same particles. Otherwise gravity would violate the laws of thermodynamics! The point here is that as matter clumps, it must lose energy. The energy is lost as some form of high entropy radiation. What is interesting is that both gravity and entropy have an asymmetrical direction: gravity is only attractive, entropy increases in the same direction of time. – Sklivvz♦ Feb 5 '11 at 8:00
Despite all I wrote in my other answer, there's a very interesting attempt by Xiao-Gang Wen to come up with emergent models of gravity starting from quantum lattice models with no gravity, and only nearest neighbor interactions. His work can be found at gr-qc/0606100 and arXiv:0907.1203. He managed to show that quasiparticles with no energy gap and a helicity of $\pm 2$ can emerge without being accompanied by helicity $\pm 1$ or $0$ quasiparticles. Whether or not this model can be considered a model of gravity though is another matter.
-
+1 for mentioning Wen's work in this regard. – user346 Jan 9 '11 at 19:37
Thanks for mentioning our work. After 6 years and many journals (Science, PRL, PRB, NJP, JHEP, NPB), one of our papers finally got published in NPB. The referee reports and our replies represent detailed discussions between two different points of view on quantum gravity: emergence vs geometry/gauge points of view. The exchange of opinions is important and helpful for the development of quantum gravity. So I would like to share those exchanges which are related to the question raised here. – Xiao-Gang Wen May 27 '12 at 10:19
Regarding "Whether or not these models can be considered a model of gravity?": it is a very good question. Our models do produce linearized quantum gravity, and they may be the first models to produce linearized quantum gravity (correct me if I am wrong, and also see my question). So the next problem is how non-linear quantum gravity can emerge from some lattice models. In any case, our results do favor an emergent origin of gravity. – Xiao-Gang Wen May 28 '12 at 12:06
Isn't the answer to the question of the title widely believed to be "yes"?
If you believe that searching for what high-energy theorists call a "theory of everything" is a valuable and worthwhile enterprise, then you probably also believe that gravity as we currently understand it (General Relativity, say) "emerges" from some deeper theory (in the effective field theory sense) which unifies it with all other known fundamental forces.
Of course nobody yet knows for sure what that theory is, but I'm told certain flavors of string theory are the most viable candidates as of 2010. You can find some indication towards how gravity emerges from string theory in the first few sentences of this answer by Eric Zaslow.
Perhaps Eric Zaslow or some other expert can give more details at the level of saying, for instance, how Einstein's equations arise from string theory (I would ask this as a question on this site, except that I know I could find the answer in any book on string theory if I cared enough to look). I'm told that it has something to do with the renormalization group equations of the conformal field theory on the worldsheet, but I'm afraid I can't reproduce or explain that argument any further for you here.
-
By my question I mean if it is possible that there is actually NO quantized gravity force field at all. Gravity would only exist classically. – Sklivvz♦ Nov 23 '10 at 7:25
I'm afraid I don't quite understand what that means. Quantum mechanics can't only apply to some of the phenomena in the universe; if it did, there would be a ton of contradictions. – j.c. Nov 23 '10 at 15:45
j.c. all of the flavors of string theory are actually the same thing expressed differently mathematically, as shown by Ed Witten. – Cem Nov 23 '10 at 17:08
@Cem: as conjectured by Ed Witten, you mean – j.c. Nov 24 '10 at 18:43
Right, sorry for an absolute tone. I was not aware there was a controversy about that. – Cem Nov 24 '10 at 20:29
You might want to look up the Weinberg-Witten theorem which shows that's not possible given certain assumptions. If the original model from which quantum gravity is supposed to emerge is an ordinary Poincaré covariant quantum field theory over flat nondynamical Minkowski space, they showed it's not possible for massless helicity $\pm 2$ particles to emerge. As a theory of quantum gravity ought to contain gravitons, this appears to rule out such models. Of course, these assumptions are questionable. For instance, the theory from which gravity emerges might not be a quantum field theory. This is the case for superstring theory.
Another possibility might be the "fundamental" model isn't Lorentz covariant. However, we still need the low energy effective theory to be approximately Lorentz covariant. In typical condensed matter analog models, different quasiparticles couple to different metrics, and there is no universality to the gravitational couplings, or the speed of light. Unless all the quasiparticles co-emerge together, I don't see any way around this problem.
It might be a bit hard to come up with the positive energy theorem in an emergent theory of gravity. The positive energy theorem states that the ADM energy of an asymptotically flat spacetime always has to be nonnegative. In an emergent theory, the ADM energy could just as easily be negative for some states.

To see this, note first that the ADM energy can be defined locally as the limit as we go to spatial infinity of a locally defined integral over an enclosing spatial surface with one spatial and one time codimension. If we assume the "fundamental" theory is local, this means the now emergent ADM flux also has to be defined locally in terms of the more fundamental fields. As the enclosing boundary becomes larger and larger, its extrinsic curvature becomes closer and closer to zero. If we have a positive ADM flux passing through a plane — as defined with respect to a choice of normal vector orientation — a reflection by the plane will give us another state where this ADM flux is now negative. So, we can certainly imagine performing some sort of approximate reflection about the enclosing submanifold on a local patchwork basis, at least for the regions at or around the enclosing surface. We then need to find an interpolation of the resulting state far into the interior, which of course, might not look anything like a reflection at all. But if the fundamental theory also satisfies local independence, that ought to be possible. But the end result of all this construction is a state with negative emergent ADM energy.

I know this argument is very handwavy and nonrigorous, but it sounds plausible. But there might be some loopholes. For instance, the fundamental theory might be local, but the emergent large scale excitations — and hence emergent spacetime — might be delocalized with respect to the underlying background spacetime. Or the underlying fundamental theory might be inherently nonlocal.
-
@Jason, could you elaborate in what sense the Weinberg-Witten theorem forbids emergent gravity? Your second point is a good one with a few caveats. In an emergent theory, quite generally speaking, one would have to sum over states of a microscopic ensemble of spin-networks (or strings) in order to obtain a semi-classical geometry. States in this ensemble could have negative energy w.r.t the quantum operator corresponding to the classical ADM observable. However, one would hope that some "natural" restrictions would disallow such states from contributing to physical observables. – user346 Dec 31 '10 at 6:05
@space_cadet: My comment doesn't apply to loop quantum gravity, or another other theory with Hamiltonian constraints. But Sklivvz appears to be asking about the emergence of quantum gravity from a quantum field theory, which can't admit Hamiltonian constraints. – QGR Jan 14 '11 at 14:25
@QGR #1. all I'm saying is that any "emergent" model will ultimately yield GR in some limit. Whether you speak of the resulting GR in the Hamiltonian or the action (covariant) formulation, the physics will be the same. #2. Just to clarify - are you saying that a QFT cannot admit a constrained formulation? – user346 Jan 14 '11 at 17:33
@space_cadet: A quantum field theory can admit Gauss gauge constraints, but not Hamiltonian constraints. – QGR Jan 16 '11 at 8:16
AdS/CFT circumvents the Weinberg-Witten theorem by producing gravity in d+1 dimensions from a field theory in d dimensions. – pho Jan 16 '11 at 16:01
In December, Carlo Rovelli summarized the last twenty years of the research agenda of a group of researchers in Loop Quantum Gravity theory. In a nutshell, LQG argues that gravity is a property of space-time rather than a quantum field theory mediated by a boson, and that space-time is fundamentally discrete, with point-like locations connected to each other by a network of links. In this approach, described by three main equations, the number of dimensions of space-time is itself emergent, and neither locality nor the dimension of space-time is a well-defined concept at the most fine-grained level. You are at point A and connected to points B, C and D, related by the equations, which when iterated ad infinitum are well approximated by a continuous, four-dimensional space that may satisfy the properties of GR in the classical approximation. As he sums up the research agenda:
"There is substantial circumstantial evidence that the large distance limit of the theory is correctly general relativity, from asymptotic analysis and from large distance calculations of n-point functions and in spinfoam cosmology; and there are open directions of investigations to reinforce this evidence. The degrees of freedom are correct and the theory is generally covariant: the low-energy limit is not likely to be much else than general relativity. But there is no solid proof yet."
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9432570934295654, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/80595?sort=newest
|
## Why study simplicial homotopy groups?
The standard definition for simplicial homotopy groups only works for Kan complexes (cf. http://ncatlab.org/nlab/show/simplicial+homotopy+group). I learned that the hard way, when I tried to compute a very simple example, i.e. the homotopy group of the boundary of the standard 2-simplex. My naive idea to actually compute simplicial homotopy groups for arbitrary simplicial sets was taking the fibrant replacement. But obviously we need a model structure for that. Then again, a weak equivalence in the usual model structure for simplicial sets is precisely a weak equivalence of the geometric realization.(cf. http://ncatlab.org/nlab/show/model+structure+on+simplicial+sets)
As I understand it so far, the only satisfactory way to talk about simplicial homotopy groups requires the notion of "classical" homotopy groups. Hence my question: why does it still make sense to talk about simplicial homotopy groups in the first place?
-
They can also be computed as the homology of a non-abelian chain complex: the Moore complex on Kan's loop simplicial group (which is defined from the simplicial set). This works in the connected case. Otherwise groupoids are necessary. – Fernando Muro Nov 10 2011 at 15:13
Kan computes $\pi_3(S^2)$ via the Moore complex. It takes many pages. The idea is that a subgroup of a free group (which is of course free) has a system of generators that is described by the Nielsen-Schreier theorem. – John Klein Nov 11 2011 at 0:52
## 3 Answers
To compute the homotopy groups of a simplicial set, you need to be able to construct a weak equivalence $X \to Y$ where $Y$ is a Kan complex, and then compute the homotopy groups of $Y$ using the definitions you were discussing.
This might seem circular - you need to detect if $X \to Y$ is an equivalence. However, you can construct $Y$ directly using certain more elementary equivalences. Specifically, for a map of a horn $\Lambda \to X$ we can form the pushout of the diagram $\Delta \leftarrow \Lambda \rightarrow X$, called $X'$; on geometric realizations this is homotopy equivalent because you can construct an explicit retraction. The class of maps $X \to Y$ generated by such pushouts is called the family of anodyne extensions of $X$, and you can always construct an anodyne extension which is a Kan complex by the small object argument (you keep gluing in solutions to every possible horn-filling problem).
If you want a more canonical answer, there is also Kan's $Ex^\infty$ construction.
If you want another reference, there are Kan's older papers, and my recollection is that Joyal and Tierney has quite a number of details as well.
-
If $X$ is a simplicial set which is not Kan, you can compute the homotopy groups of $X$ by choosing a weak homotopy equivalence $f: X \rightarrow Y$ where $Y$ is Kan and then applying the construction you are familiar with to $Y$. There are many ways of characterizing the relationship between $X$ and $Y$ without ever mentioning topology or model categories (though I'm not sure it is so helpful to avoid these). For example, you can take any map $f: X \rightarrow Y$ with the following property: for every Kan complex $Z$, composition with $f$ induces a bijection from $[Y,Z]$ to $[X,Z]$, where $[K,Z]$ denotes the set of maps from $K$ into $Z$ up to (simplicial) homotopy. There are several purely combinatorial constructions of $Y$ from $X$: for example, Kan's $Ex^{\infty}$ functor.
-
So this entire discussion is in Goerss and Jardine's "Simplicial homotopy theory" and also in May's "Simplicial objects in algebraic topology". Also Curtis' papers and monographs are very nice and classical. One aesthetic reason that one may want simplicial homotopy groups is to show that we can calculate homotopy groups within the category of simplicial sets. Thus one sets up this machinery. I think that Milnor proved the comparison between simplicial homotopy groups and, let's say, topological ones (even though this is not quite accurate; I think it is really something like homotopy groups in CGHaus).
Here is how you speak about homotopy groups in the context of simplicial sets: First you need the notion of a horn. A horn is the boundary of an n-simplex with one face removed. Now you must remember that simplicial sets have arrows on their edges, so we have a couple of different horns in each dimension. In dimension 2, for example, we have three different horns (and we will need all three of these horns to define the fundamental group, in addition to a three-dimensional horn to give associativity). So we now define a Kan complex as a simplicial set in which each horn may be filled out to an n-simplex that contains it (this is written as a lifting property).
So we will use this to define the fundamental group, as the higher homotopy groups are analogous. Pick a basepoint, and two loops based at that point (if we do not want to talk about basepoints this discussion works for fundamental groupoids). This can be realized as a map of a 2-horn into the simplicial set. Now pick a horn filling (it doesn't matter which: any two choices differ by a homotopy, exhibited by filling an even higher-dimensional horn. This is similar to higher category theory.). The group operation on the two loops is the new loop created on the boundary of the 2-simplex. Considering the other two horn fillings will give left and right inverses in the group, which must be shown to be homotopy equivalent (by more horn fillings).
All the properties that you would expect of such a composite can be shown to be true by more (generalized) horn fillings. These are called anodyne extensions. It turns out that if you can fill all of your horns, you can fill all of your anodyne extensions. This will show that composition is independent of the choice of representatives.
-
1
Sorry, but I don't see how this answers my question. Yes, that's the way you define the simplicial homotopy groups for a Kan complex. However even very simple simplicial sets (boundaries of standard simplices, which strike me as a naive choice for "spheres") are not Kan. – Simon Markett Nov 10 2011 at 16:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9408219456672668, "perplexity_flag": "head"}
|
http://mathematica.stackexchange.com/questions/tagged/fitting?sort=faq&pagesize=30
|
# Tagged Questions
Questions on the use of Mathematica to construct models for approximating empirical data. (FindFit[], Fit[], LinearModelFit[], NonlinearModelFit[], etc.)
4answers
2k views
### Using FindFit to fit $a\,b^t$: how to avoid introducing complex numbers?
I'd like to find a model $f(t)=a\,b^t$ which matches the following data ...
3answers
763 views
### Mathematica envelope for the bottom of a plot, a generic function
I have the following set-up: xaxis = Table[x, {x, 0, 10, 0.01}]; yaxis = Table [Sin[x] + Abs[RandomReal[{-1, 1}]], {x, 0, 10, 0.01}]; ListLinePlot[Transpose[{ xaxis, yaxis}]] My ...
4answers
2k views
### Problem with NonlinearModelFit
I'm having trouble with a non-linear fit: fit = NonlinearModelFit[data, y0 + A Sin[\[Pi] (x - xc)/w], {y0, xc, A, w}, x] where ...
3answers
803 views
### Fitting fractional complex data with NonlinearModelfit
I've difficulites with the NonlinearModelFit function. In principle Mathematica should be able to deal with complex data. E.g. if I define the following table ...
6answers
1k views
### How to determine the center and radius of a circle given three points in 3D?
I was wondering if anyone could give me a hand with this problem I have. I have six points on a plane, and I am trying to determine if they form a circle or not. I know that any three points in 2D ...
3answers
438 views
### Ηow to create an interpolated CDF from its samples?
I want to use a distribution I have only aggregate statistics on, namely its CDF sampled at certain points. I would like to keep it "nonparametric" (remain noncommittal on the parametric form), but I ...
3answers
838 views
### Data fitting with Image processing feature detection
I have some 2D data that once plotted looks like the following ...
2answers
323 views
### Is it possible to use the LevenbergMarquardt algorithm for fitting a black-box residual function?
I have a black-box multiargument multiparametric function of the type SRD[dataPoint_List,params_List] which accepts experimental data along with the parameters of ...
4answers
818 views
### How to visualize 3D fit
I have a data set of x,y,z values and I fit a function of x,y to the data. This works, but I can't come up with a nice way to ...
4answers
2k views
### Simultaneously fitting multiple datasets
What is the proposed approach if one wants to simultaneously fit multiple functions to multiple datasets with shared parameters? As an example consider the following case: We have to measurements of ...
5answers
2k views
### Estimate error on slope of linear regression given data with associated uncertainty
Given a set of data, is it possible to create a linear regression which has a slope error that takes into account the uncertainty of the data? This is for a high school class, and so the normal ...
4answers
603 views
### Mathematica Implementations of the Random Forest algorithm
Is anyone aware of Mathematica use/implementation of Random Forest algorithm?
3answers
678 views
### How to use FindFit to fit an implicit function?
Now I am trying to fit some data with a implicit model function. Firstly, I tried a toy example. Toy example with input and output ...
1answer
388 views
### How do I find the best parameter to fit my data if the model is a interpolating function?
Hi I have a question regarding to find the best parameters for my model to fit my data. I have 3 ordinary equation, and I now just picked some parameters (...
0answers
191 views
### Mathematica vs Sigmaplot (Non LinearModelFit)
I asked a question in a previous post that was closed because "The title of the question is not correct and the issue here is too trivial to help anyone later". NonLinearModelFit It's possible ...
5answers
411 views
### How to plot Fit functions?
If I do something like this f[x_]:=x^2 Plot[f[x],{x,1,10}] Mathematica plots the function $f(x)=x^2$, as expected. However, ...
0answers
74 views
### Is it possible to marginalize one or several parameters obtained from fitting procedure?
Statistical model analysis package has a few fitting functions. Their arguments are data points, fitting model and parameters in the fit. The result is a set of best values for parameters, which ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8718537092208862, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/196175/what-does-it-mean-mathematically-to-set-some-of-the-integration-constants-in-the/196183
|
# What does it mean mathematically to set some of the integration constants in the general solution to a linear differential equation, equal to zero?
I'm trying to calculate the position of a particle in a quadrupole magnet depending on the entry position $x_0$ and the combined (constant) physical parameter $k$. Given an equation
$$x(t) =\frac{(\frac{x''(t)}{k})''}{k},$$
solving by assuming that $x(t) = e^{\lambda t}$, etc.,
I arrive at the general solution
$$x(t) = c_1\cos(\sqrt{k}\cdot t)+c_2\sin(\sqrt{k}\cdot t)+c_3e^{-\sqrt{k}\cdot t}+c_4e^{\sqrt{k}\cdot t}$$
with $c_1,c_2,c_3,c_4$ arbitrary constants. What would it mean mathematically if I were to set say $c_1,c_2,c_3 = 0$, assuming I don't have other constraints (in my example I would have additional $x(0) = x_0$, but as far as I can see that doesn't forbid it).
Given that they are arbitrary, I can't see a problem with it. Of course, if you have additional starting conditions, you have to set the constants accordingly, but in my example $c_4 = x_0$ seems to do the job and leaves me with a much simpler solution. So why would I ever NOT eliminate every unnecessary term?
-
## 1 Answer
I think you're getting confused about the meaning of "arbitrary constants" here. In some sense, the word "arbitrary" is to blame: I call them "constants of integration". But they are "arbitrary" in the sense that, no matter what they are, the function you have will still satisfy the differential equation. So why not set them all equal to 0?
You need to consider why you're solving this equation. If you just want to find any old solution to that equation, then the solution x = 0 will do. But you don't necessarily just want to find any old solution. If you're talking about a real particle, in a real quadrupole magnet, with a real position that you need to calculate, then you need to know the particular solution of that equation that corresponds to it. Mathematically, the process is simple: find all the solutions, then somehow pick out the one you want.
Constants of integration encode really important information, such as where the particle started, or how fast it was going and in what direction, and so on, all of which are going to affect its eventual position. Mathematically, nothing especially deep is happening. You have found all possible solutions to that equation - parametrised by four constants - in an attempt to find the one that corresponds to the particle you care about. This is a whole bunch of solutions, all of which correspond to hypothetical particles, all of which started in different places, at different speeds, with different accelerations, etc. But then, you decided (for no reason that was anything to do with the behaviour of your particle) that you didn't want to look at all the solutions after all, and threw most of them away. How do you know you didn't throw the one you want away?
So, the answer is: either you're lucky, and the solution you want is indeed (0, 0, 0, $x_0$) (but you should check this!), or you don't have enough constraints to pin down the solution you want exactly, and you can't calculate the position of your particle yet, because you don't know enough about how it started.
-
so if I just have the constraint given in the answer, either any combination which satisfies $x_0=c_1+c_3+c_4$ will describe the location or I can't pin down the exact location? In this specific example, to keep it short, I omitted the second equation $y(t)=\frac{x''(t)}{k}$, which also has a constraint which comes down to $y_0 = k(-c_1+c_3+c_4)$. If these are the only constraints I have, I have 2 equations for 3 variable... constants (you know) with $x_0,y_0$ being parameters. So I will HAVE to just set one, right? The other two will have a fixed relationship. – ananon Sep 15 '12 at 16:02
A reason for setting to zero some constant of integration may be that you want your solution to have particular properties (for example, being differentiable), so that some "ugly" pieces must be discarded. – Andrea Orta Sep 15 '12 at 16:07
Yes - don't forget that, for example, "I don't want this particle to fly off to infinity as t increases" is a constraint too (which would set $c_4 = 0$). – Billy Sep 15 '12 at 16:13
Using your solution, compute $x(0)$, $x'(0)$, $x''(0)$, $x'''(0)$ as a linear combination of the $c_i$. You'll find that nothing less than knowing $x(0)$, $x'(0)$, $x''(0)$, $x'''(0)$ determines the solution. – Hagen von Eitzen Sep 15 '12 at 16:53
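To spell out that last comment (just differentiating the general solution already written above, nothing new assumed):
$$x(0) = c_1 + c_3 + c_4, \qquad x'(0) = \sqrt{k}\,(c_2 - c_3 + c_4),$$
$$x''(0) = k\,(-c_1 + c_3 + c_4), \qquad x'''(0) = k^{3/2}\,(-c_2 - c_3 + c_4).$$
For $k>0$ this is an invertible linear system in $(c_1,c_2,c_3,c_4)$, so prescribing all four initial values singles out exactly one solution, while the single condition $x(0)=x_0$ only cuts the four-parameter family down to a three-parameter one.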
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9662611484527588, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/25527/an-elementary-proof-that-the-degree-of-a-map-of-spheres-determines-its-homotopy-t/25533
|
## An elementary proof that the degree of a map of spheres determines its homotopy type
I'm helping to teach an undergraduate algebraic topology course (out of Hatcher's textbook). We have recently defined the degree of a map of spheres using homology, and the professor and I thought it would be nice if we could give some kind of argument that such a map is determined up to homotopy by its degree. I know two proofs of this: one using the Freudenthal suspension theorem, and the other using the Pontryagin correspondence between homotopy classes of [smooth] maps and framed cobordism classes of framed submanifolds (see Milnor, Topology from the Differentiable Viewpoint). Unfortunately, neither of these arguments would be accessible to our students, who have only seen the fundamental group and homology (no higher homotopy theory) and who are not necessarily expected to know any differential topology.
Thus, I ask the following question:
Is there an elementary argument (i.e., that can be understood by someone who only knows about homology and the fundamental group) that the degree of a map of spheres determines its homotopy type?
More precisely, what we have (or will have) available is most of the material in the first two chapters of Hatcher, not including the "additional topics."
If necessary, I'm willing to make plausible assumptions that the students may not know how to prove, such as
-Replacing $f \colon S^n \to S^n$ by a homotopic map if necessary, we may assume that there exist points with only finitely many preimages, such that $f$ is a homeomorphism locally about each preimage (i.e., regular values).
-Every map of CW complexes is homotopic to a cellular map.
-The degree map $\pi_n(S^n) \to \mathbb{Z}$ is a group homomorphism. [This reduces us to showing that a degree-0 map is nullhomotopic.]
-
## 1 Answer
Take a look at Exercise 15 in Section 4.1, page 359 of the book you're referring to. This outlines an argument that should be the sort of thing you're looking for. The main step is to deform a given map to be linear in a neighborhood of the preimage of a point, using either simplicial approximation or the argument that proves the cellular approximation theorem. Once this is done, the rest is essentially the Pontryagin-Thom argument (in a very simple setting), plus the fact that $GL(n,\mathbb R)$ has just two path-components.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9224029779434204, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/108383/numerical-methods-for-eisenstein-series
|
## Numerical methods for Eisenstein series
Are there any existing numerical libraries for Eisenstein series? In particular I am interested in calculating values of parabolic Eisenstein series on $SL(n,\mathbb Z) \setminus GL(n,\mathbb R) / (O(n,\mathbb R),\mathbb R^{\times})$. If no such libraries exist, might there be a formulation of the series which is more suitable for numerical computations? Ultimately I would like to be able to numerically calculate integrals whose integrand contain such Eisenstein series, which is why I am in search of more efficient methods.
-
2
There is apparently a set of functions for Mathematica for doing some computations for automorphic forms on $GL(n,\mathbb R)$ called GL(n)pack (math.waikato.ac.nz/~kab/glnpack.html). You might also look at Goldfeld's book "Automorphic Forms and L-Functions for the Group $GL(n,\mathbb R)$", which mentions GL(n)pack. I've never used it, so I'm leaving this as a comment. – BR Sep 29 at 4:49
Thanks for your comment. GL(n)pack has various tools for calculating the summands of the series, the Fourier coefficients, and the factors in the functional equation, but does not have a tool to approximate the values of the series. Goldfeld's book is one of my main references, and as far as I know it does not contain any such numerical methods. – R. Rosenbaum Sep 29 at 14:07
Oh, well! Hopefully someone else knows something. By the way, are answers to your question known for GL(2)? Also, if you feel comfortable, it would be interesting to see an example of what you want to accomplish. – BR Sep 29 at 16:13
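Not an answer to the $GL(n)$ question, but since the last comment asks about $GL(2)$: below is a minimal sketch (in Python, purely for illustration) of the naive truncated-sum evaluation of the classical non-holomorphic Eisenstein series $E(z,s)=\tfrac12\sum_{\gcd(c,d)=1} y^s/|cz+d|^{2s}$, which converges (slowly) for $\operatorname{Re}\, s>1$. The cutoff $N$ is an arbitrary placeholder, and nothing here addresses the parabolic $GL(n)$ series or the analytic continuation.

```python
from math import gcd

def eisenstein_gl2(z, s, N=200):
    """Truncated direct sum for E(z, s) = (1/2) * sum_{gcd(c,d)=1} y^s / |c*z + d|^(2s),
    with z = x + i*y, y > 0, and Re(s) > 1.  Truncation error decays only polynomially in N."""
    x, y = z.real, z.imag
    total = 0.0
    for c in range(-N, N + 1):
        for d in range(-N, N + 1):
            if (c, d) == (0, 0) or gcd(abs(c), abs(d)) != 1:
                continue
            total += y**s / ((c * x + d) ** 2 + (c * y) ** 2) ** s
    return 0.5 * total

# example call: eisenstein_gl2(complex(0.1, 1.3), 1.5)
```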
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9383281469345093, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/29100/real-algebraic-geometry-vs-algebraic-geometry/29112
|
## Real algebraic geometry vs. algebraic geometry
This question is predicated on my understanding that real algebraic geometry (henceforth RAG) is the version of algebraic geometry (AG) one gets when replacing (esp. algebraically closed) fields with formally real (esp. real closed) fields. This makes for substantial differences in the theory because such fields can be ordered, and with order comes the notion of a semialgebraic set and a stronger topology.
I am aware that there is a notion of "real spectrum" analogous to the traditional spectrum of a commutative ring, though I'm not terribly familiar with either. I assume this allows one to glue things together and define "real schemes" or some such thing. Or if not, I assume the reason this doesn't work is something one would learn in the study of RAG.
My question: Given the differences in the theories, how well does one need to understand "traditional" AG to study RAG? Are there references (preferably books) which introduce RAG at an abstract level without assuming much knowledge of AG? Or is asking for this like when people ask how they can learn about motives without knowing about AG first?
I already have Basu, Pollack, and Roy's Algorithms in Real Algebraic Geometry but I'm looking for something less algorithmic.
-
7
Have you looked at Bochnak, Coste, and Roy's book "Real Algebraic Geometry"? It seems quite unalgorithmic and very theoretically-oriented, and looks quite self-contained (e.g., not assuming knowledge of schemes). – Boyarsky Jun 22 2010 at 15:34
1
The abstract machineries can be similar, because they are not depending on the particular commutative ring in use. But I do feel RAG is itself a rich subject with many interesting geometric intuitions somehow different from those in Complex AG. A good starting point to demonstrate this will be Hilbert's sixteenth problem. – Bo Peng Jun 22 2010 at 16:06
## 3 Answers
Real algebraic geometry comes with its own set of methods. While keeping in mind the complex picture is sometimes useful (e.g. for any real algebraic variety X, the Smith-Thom inequality asserts that $b(X(\mathbb{R})) \leq b(X(\mathbb{C}))$, where $b(\cdot)$ denotes the sum of the topological Betti numbers with mod 2 coefficients), most of the techniques used are either built from scratch or borrowed from other areas, such as singularity theory or model theory.
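A standard illustration of how that inequality gets used (an added example, not part of the original answer): for a smooth real projective curve $X$ of genus $g$, $X(\mathbb{C})$ is a closed genus-$g$ surface, so $b(X(\mathbb{C})) = 2g+2$, while $X(\mathbb{R})$ is a disjoint union of $s$ circles, so $b(X(\mathbb{R})) = 2s$; Smith-Thom then gives $s \leq g+1$, which for a smooth plane curve of degree $d$ (genus $g=(d-1)(d-2)/2$) is exactly Harnack's bound on the number of connected components.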
The literature is a lot smaller for RAG than for traditional AG; the basic reference is the book by Bochnak, Coste and Roy (preferably the English-language edition, which is more recent by more than 10 years and has been greatly expanded). The book covers in particular the real spectrum, the transfer principle (which makes non-standard methods really easy), stratifications and Nash manifolds, among other topics. Michel Coste also has An Introduction to Semialgebraic Geometry available on his webpage, a very short treatment of some basic results, enough to give you a first impression.
Other interesting books tend to be shorter and more focused than BCR, dealing with a specific aspect; e.g. Prestel's Positive Polynomials (dealing mostly with results such as Schmüdgen's theorem), and Andradas-Brocker-Ruiz Constructible Sets in Real Geometry (dealing mostly with the minimum number of inequalities required to define basic sets). The book by Benedetti and Risler is very interesting and concrete; I found some passages very useful and some results are hard to find in other books (the sections on additive complexity of polynomials are very thorough), but it is a bit scatterbrained for my taste.
As the name indicates, the book by Basu Pollack and Roy is entirely focused on the algorithmic aspects. It's a very good book, and you may still pick up some of the theory in there, but it does not sound like what you are after right now.
As for o-minimality, there again, Michel Coste's webpage contains an introduction that nicely complements van den Dries's book. I would hesitate to bundle o-minimality with real algebraic geometry. In some respects, the two domains are undoubtedly close cousins, and o-minimality can be seen as a wide-ranging generalization of real algebraic structures; on the other hand, each discipline also has its own aspects and problems that do not translate all that well into the other.
I'm being verbose as usual. Still, I hope it helps.
-
1
I should add that both Coste and van den Dries's introductions to o-minimality are highly readable, and essentially self-contained. A real treat! – Thierry Zell Aug 19 2010 at 16:53
There's also Partially Ordered Rings and Semi-Algebraic Geometry by Brumfiel and Real algebraic and semi-algebraic sets by Benedetti and Risler.
-
1
I find the book by Brumfiel very helpful especially in real algebra. Though it has much less real geometry than BCR (Bochnak-Coste-Roy). BCR covers Nash Manifolds and other things in real geometry (like curve-selection lemma and separation of semialgebraic closed sets) – Jose Capco Nov 17 2011 at 11:24
For an easy introduction to RAG, you could read van den Dries's book "Tame topology and o-minimal structures": he treats the more general notion of o-minimal structures instead of real closed fields, and he does not use any tools from AG.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9400273561477661, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/trigonometry/43733-trigo-question-help-needed-need-pass-up-tomorrow.html
|
# Thread:
1. ## Trigo question. Help needed. Need to pass up on tomorrow.
Given that $\cos x = -\frac{2}{3}$ and $\sin y = -\frac{1}{\sqrt 6}$, that $0^\circ\le x \le360^\circ$ and that $x$ and $y$ are in the same quadrant, find, without a calculator, the values of $\cos\frac{x}{2}$.
Man. This question is killing me. I only get to know $x$ and $y$ are in quadrant 3.
2. Hi
Originally Posted by simonsong
Given that $\cos x = -\frac{2}{3}$ and $\sin y = -\frac{1}{\sqrt 6}$, that $0^\circ\le x \le360^\circ$ and that $x$ and $y$ are in the same quadrant, find, without a calculator, the values of $\cos\frac{x}{2}$.
Man. This question is killing me. I only get to know $x$ and $y$ are in quadrant 3.
$\cos x=\cos \left(\frac{x}{2}+\frac{x}{2}\right)=2\cos^2\frac{ x}{2}-1$ hence $\cos \frac{x}{2}=\ldots$
This should give you two values for $\cos \frac{x}{2}$. As you also know in which quadrant is $\frac{x}{2}$, you can choose the right solution among the two you've found.
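Carrying the hint through (a worked completion, using only the quadrant information already established above): $\cos^2\frac{x}{2}=\frac{1+\cos x}{2}=\frac{1-\frac{2}{3}}{2}=\frac{1}{6}$, so $\cos\frac{x}{2}=\pm\frac{1}{\sqrt 6}$. Since $x$ is in quadrant 3, $180^\circ < x < 270^\circ$ gives $90^\circ < \frac{x}{2} < 135^\circ$, so $\frac{x}{2}$ lies in quadrant 2 and $\cos\frac{x}{2}=-\frac{1}{\sqrt 6}$.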
3. Omg. Never thought of that. Haha. Really thanks pal.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.962864100933075, "perplexity_flag": "head"}
|
http://mathhelpforum.com/advanced-algebra/164869-determinant.html
|
Thread:
1. determinant
given that A is an nxn matrix, how do you show that the characteristic polynomial of A is (-1)^n det(A)?
2. Originally Posted by alexandrabel90
given that A is an nxn matrix, how do you show that the characteristic polynomial of A is (-1)^n det(A)?
Doing magic because the above is far from being true. In fact, $(-1)^n\det A$ is not even a polynomial of degree n, let alone
the characteristic polynomial of any nxn matrix. Read carefully again the question and repost.
Tonio
the exact question is: let A be an nxn matrix and the trace of A is defined by tr(A) = a_11 + a_22 + ... + a_nn. Prove that the constant term of the characteristic polynomial is (-1)^n det(A)
4. Originally Posted by alexandrabel90
the exact question is: let A be an nxn matrix and the trace of A is defined by tr(A) = a_11 + a_22 + ... + a_nn. Prove that the constant term of the characteristic polynomial is (-1)^n det(A)
You know that $p(x)=\det\left(Ix-A\right)$, right? And that the constant term of a polynomial $q$ is $q(0)$ and $\det\left(c A\right)=c^n\det(A)$...right?
5. yup, i know that..
so let's say A is similar to an upper triangular matrix, so P(x) = (x-a_11)....(x-a_nn) = x^n + a_(n-1) x^(n-1) + .... + a_0, so P(0) = a_0, and then how do i continue?
6. Originally Posted by alexandrabel90
yup, i know that..
so let's say A is similar to an upper triangular matrix, so P(x) = (x-a_11)....(x-a_nn) = x^n + a_(n-1) x^(n-1) + .... + a_0, so P(0) = a_0, and then how do i continue?
Friend, what if we just plugged $0$ into $p(x)=\det\left(Ix-A\right)$?
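Spelled out (just combining the two facts in the previous post): $p(0)=\det\left(I\cdot 0-A\right)=\det(-A)=(-1)^n\det(A)$. As a quick 2x2 sanity check, $p(x)=x^2-(a_{11}+a_{22})x+\det A$, whose constant term is $\det A=(-1)^2\det A$.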
7. oh crap!!! im so blur! thanks
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8792667388916016, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/72418/what-are-the-best-known-bounds-on-the-number-of-partitions-of-n-into-exactly-k/72491
|
## What are the best known bounds on the number of partitions of $n$ into exactly $k$ distinct parts?
For example, if $n = 10$ and $k = 3$, then the legal partitions are $$10 = 7 + 2 + 1 = 6 + 3 + 1 = 5 + 4 + 1 = 5 + 3 + 2$$ so the answer is $4$. By choosing $k$ random elements of $\{1,\ldots,2n/k\}$, one can easily construct about $(n/k^2)^k$ such partitions. For $k \approx \sqrt{n}$ this is not far from best possible, since the total number of partitions is (by Hardy and Ramanujan's famous theorem) asymptotically $$\frac{1}{4 \sqrt{3} n} \exp\left( \pi \sqrt{ \frac{2n}{3} } \right).$$ Can one do much better than $(n/k^2)^k$ for smaller $k$?
To be precise, writing $p^*_k(n)$ for the number of such partitions, is it true that, for some constant $C$, $$p^*_k(n) \leqslant \left( \frac{Cn}{k^2} \right)^k$$ for every $n,k \in \mathbb{N}$?
-
For a fixed $k$, the number of partitions of $n$ into $k$ parts grows as a polynomial in $n$, of degree $k-1$. – Gerry Myerson Aug 9 2011 at 1:35
Gerry, does this mean that there is a polynomial p(n) such that (for a specified k) the number of partitions of n into exactly k parts is p(n)? I suspect you mean the latter, which is that there are polynomials p(n) and q(n) which serve as upper and lower bounds (when n is sufficiently large) to the desired function. Some clarity would be appreciated. Gerhard "Ask Me About System Design" Paseman, 2011.08.08 – Gerhard Paseman Aug 9 2011 at 4:05
@Gerhard, I mean the number of partitions is $C_kn^{k-1}+O(n^{k-2})$. See also Igor Rivin's answer. What actually happens is that for each $k$ there's an $m=m(k)$ such that there are $m$ polynomials $P_i(x)$ such that if $n\equiv i\pmod m$ then the number of partitions is $P_i(n)$; each $P_i$ has as its leading term the term given in Igor's answer. – Gerry Myerson Aug 9 2011 at 5:47
Cool! I suspected something like that, but it helps a lot to see your phrasing of it. Thanks, Gerry! Gerhard "Ask Me About System Design" Paseman, 2011.08.09 – Gerhard Paseman Aug 9 2011 at 17:49
## 2 Answers
In the 1990 paper by Charles Knessl and Joseph Keller, the authors prove the asymptotic result that (for $n \gg 1$, $k=O(1)$) your number is asymptotic to:
$\dfrac{n^{k-1}}{k\,[(k-1)!]^{2}}.$
They show a number of other related asymptotic results.
EDIT: for $k \ll n$ they have an asymptotic that is too painful to typeset, but you can find it in http://dl.dropbox.com/u/5188175/2101859.pdf, equation (2.27)
-
Thanks! However, I'm really interested in the case $k = k(n) \to \infty$ as $n \to \infty$. An asymptotic result would be great, but I'd be happy even with something much weaker, like the bound I suggested... – Rob Aug 9 2011 at 1:57
To be even more specific, what can one say for $k \approx n^{1/3}$? – Rob Aug 9 2011 at 1:59
See the edit, and enjoy. – Igor Rivin Aug 9 2011 at 2:44
Great, thanks again! At the top of page 327 they state that $p_k(n)$, the number of partitions of $n$ into $k$ (not necessarily distinct) parts, is asymptotically $$\frac{1}{2\pi n} \left( \frac{e^2 n}{k^2} \right)^k,$$ which answers my question in the affirmative! =) – Rob Aug 9 2011 at 4:29
1
One slight problem; as George Andrews points out in the AMS review, the authors make the following (slightly cryptic) comment in the Introduction: Finally we note that all our calculations are formal since we have not proved that they are asymptotic...it should be possible to prove that our results are asymptotic also." This seems to imply that the authors have not proved the claimed bounds. Do you know whether this is the case? If not then I'll email the authors and ask... Thanks again! – Rob Aug 9 2011 at 4:33
On further reflection, there seems to be a very simple (and nice) solution to my question. I'll sketch a proof of the following theorem.
Theorem: There is a constant $C$ such that $$\frac{1}{Cnk} \left( \frac{e^2 n}{k^2} \right)^k \leqslant p_k^*(n) \leqslant \frac{C}{nk} \left( \frac{e^2 n}{k^2} \right)^k.$$
The upper bound follows from the recursion $$p_k^*(n) \leqslant \frac{1}{k} \sum_{a=1}^n p^*_{k-1} (n-a)$$ by a simple induction argument. To see the recursion, simply note that since the elements of the partition are distinct, we count each one exactly $k$ times.
For the lower bound, we use the probabilistic method. Motivated by the calculation above, let's choose a random sequence $A = (a_1,\ldots,a_k)$ by selecting each $a_j$ independently according to the distribution $$\mathbb{P}(a_j = a) \approx \frac{(k-1)(n-a)^{k-2}}{n^{k-1}}.$$ Discard the (few) sequences with repeated elements, and note that the expected value of $\sum a_j$ is $n$. We claim that the probability that $\sum a_j = n$ is roughly $1/(n \sqrt{k})$, and that each such sequence appears with probability at most $$\left( \frac{k-1}{en} \right)^k.$$ It follows that there are at least $$\frac{1}{Cn \sqrt{k}} \left( \frac{en}{k-1} \right)^k$$ such sequences. Dividing by $k!$ gives the desired bound on the number of sets.
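As a numerical sanity check (not part of the proof; the recursion below is the standard one for partitions into exactly $k$ parts, and the shift $n \mapsto n - \binom{k}{2}$ converts between distinct and unrestricted parts):

```python
from functools import lru_cache
from math import exp

@lru_cache(maxsize=None)
def p_exact(n, k):
    """Partitions of n into exactly k positive parts (not necessarily distinct)."""
    if k == 0:
        return 1 if n == 0 else 0
    if n < k:
        return 0
    # either some part equals 1 (drop it), or every part is >= 2 (subtract 1 from each)
    return p_exact(n - 1, k - 1) + p_exact(n - k, k)

def p_star(n, k):
    """Partitions of n into exactly k distinct parts, via the shift n -> n - k(k-1)/2."""
    return p_exact(n - k * (k - 1) // 2, k)

print(p_star(10, 3))                         # 4, matching the example in the question
n, k = 200, 6                                # arbitrary test values
print(p_star(n, k))                          # exact count
print((exp(2) * n / k**2) ** k / (n * k))    # the claimed two-sided bound, up to the constant C
```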
-
I like the argument for the lower bound, but I suspect that some restriction on $k$ is needed. For instance, if $k$ is of the order $\sqrt{n}$ then, by the Birthday Paradox, it will be likely that two of the $a_j$ are equal. Then the error introduced by discarding sequences with repeated elements will be appreciable. – Mark Wildon Aug 10 2011 at 12:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 48, "mathjax_display_tex": 9, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9434361457824707, "perplexity_flag": "head"}
|
http://mathhelpforum.com/math-topics/69547-sequences.html
|
# Thread:
1. ## Sequences
Hi Guys, I desperately need to know if there is a simple formula for the following set of numbers.
3
7
12
18
25
33
42 etc
You can see that the number by which it increases is 1 extra each time but I need to know how to put this into a formula.
Thanks,
Mark
2. Originally Posted by Berwick
Hi Guys, I desperately need to know if there is a simple formula for the following set of numbers.
3
7
12
18
25
33
42 etc
$\frac{n(n+5)}{2}$
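In case it helps to see where that comes from (one way to derive it): the differences are $4, 5, 6, \dots$, so
$a_n = 3+\sum_{m=4}^{n+2} m = 3+\frac{(n+2)(n+3)}{2}-6 = \frac{n(n+5)}{2},$
which indeed gives $3, 7, 12, 18, 25, 33, 42, \dots$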
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9417726993560791, "perplexity_flag": "middle"}
|
http://www.mathplanet.com/education/geometry/area/parallelogram,-triangles-etc
|
# Parallelogram, triangles etc
The area of a parallelogram is determined by multiplying the base, b, with the height, h, of the parallelogram:
$A=b\cdot h$
The area of a triangle is determined by multiplying the base, b, with the height, h, of the triangle and dividing by two:
$A=\frac{b\cdot h}{2}$
The area of a trapezoid is determined by multiplying the mean value of the two bases, b1 and b2, with the height, h, of the trapezoid:
$A=h\cdot \frac{b_{1}+b_{2}}{2}$
The area of a rhombus is determined by half the product of the two diagonals:
$A=\frac{1}{2}(AC)(BD)$
The area of a regular polygon is determined by half the product of the perimeter, P, and what is called the apothem, a:
$A=\frac{Pa}{2}$
The area of a circle is determined by multiplying the square of the radius, r, with the constant π, pi:
$A=\pi r^{2}$
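A quick worked check of the regular-polygon rule (an example added here, not part of the original lesson): a regular hexagon with side 2 has perimeter $P=12$ and apothem $a=\sqrt{3}$, so $A=\frac{Pa}{2}=6\sqrt{3}\approx 10.4$. Splitting the same hexagon into 6 triangles of base 2 and height $\sqrt{3}$ gives $6\cdot \frac{2\cdot \sqrt{3}}{2}=6\sqrt{3}$ as well.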
Video lesson: Find the area of the triangle
Next Class: Area, The surface area and the volume of pyramids, prisms, cylinders and cones
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 6, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.895350456237793, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/tagged/residue-calculus+improper-integrals
|
Tagged Questions
1answer
66 views
Finding a generalization for $\int_{0}^{\infty}e^{- 3\pi x^{2} }\frac{\sinh(\pi x)}{\sinh(3\pi x)}dx$
$\;\;\;\;$I was reading the introduction of Paul J. Nain's book "Dr. Euler's fabulous formula" where he talks about the sense of beauty in mathematics and quotes the G.N.Watson as saying that a ...
3answers
75 views
Cauchy principal value of $\int_{-\infty}^{\infty}e^{-ax^2}\cos(2abx) \,dx$
How do I find out the Cauchy Principal value of $\int_{-\infty}^{\infty}e^{-ax^2}\cos(2abx) \,dx\,\,\,\,\,\,\,\,a,b>0$ using complex integration? The answer is $\sqrt{\frac{\pi}{a}}e^{-ab^2}$, and ...
0answers
62 views
Improper integral equal to -pi with square root and Cauchy principal value
I'd like to know if the following proof for the value of $I$ is correct, and if there is a simpler solution to it. Also, I will probably encounter more improper integrals like this in the future, and ...
3answers
134 views
Integral $\int_0^\infty \exp(ia/x^2+ibx^2)dx$
Compute the integral: \begin{equation} \int_0^\infty \exp\left(\frac{ia}{x^2}+ibx^2\right)\,dx \end{equation} for $a$, $b$ real and positive. I tried complex variables, but don't really know how to ...
1answer
59 views
evaluate $\int_0^\infty \dfrac{dx}{1+x^4}$ using $\int_0^\infty \dfrac{u^{p-1}}{1+u} du$
evaluate $\int_0^\infty \dfrac{dx}{1+x^4}$using $\int_0^\infty \dfrac{u^{p-1}}{1+u} du = \dfrac{\pi}{\sin( \pi p)}$. I am having trouble finding what is $p$. I set $u = x^4$, I figure $du = 4x^3 dx$, ...
3answers
200 views
A generalized integral need help
I was thinking this integral : $$I(\lambda)=\int_0^{\infty}\frac{\ln ^2x}{x^2+\lambda x+\lambda ^2}\text{d}x$$ What I do is use a Reciprocal subsitution, easy to show that: ...
1answer
98 views
Integral using residue theorem (maybe)
I came across the following integral in a book (Kato's Perturbation Theory for Linear Operators, $\S$3.5): $\int_{-\infty}^\infty (a^2+x^2)^{-n/2}\,dx$ where $n$ is a non-negative integer and $a$ is ...
1answer
58 views
Residue Calculus Integral computation
I ran into this problem when I was doing some residue computations. For real $a\neq0$, compute, $$I=\int_{-\infty}^{+\infty} \frac{e^{iax}}{(x+i)^3}$$ Be sure to treat both cases when \$a<0, ...
3answers
202 views
Evalulate $\int_{-\infty}^{\infty}\frac{1}{(1+x^{2n})^2}dx$ by using residue theorem
I know the answer of the integral $$\int_{-\infty}^{\infty}\frac{1}{1+x^{2n}}dx=\frac{\pi}{n\sin\left(\frac{\pi}{2n}\right)}$$where $n\in\mathbb{N}$. But how to evalulate ...
0answers
72 views
What is suitable contour shape for $\int_0^\infty\dfrac{b^2+2ab+k}{b(b^2+ab+l)}e^{bx}~db$
$\int_0^\infty\dfrac{b^2+2ab+k}{b(b^2+ab+l)}e^{bx}~db$ . What kind of contour is suitable for this integral?
1answer
861 views
Calculating the Fourier transform of $\frac{\sinh(kx)}{\sinh(x)}$
I'm trying to compute $$\int_{-\infty}^\infty \frac{\sinh(kx)}{\sinh(x)}e^{-i\omega x} \ dx$$ i.e. the Fourier transform of $x\mapsto \frac{\sinh(kx)}{\sinh(x)}$, where $0<k<1$ is fixed. But ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8957309126853943, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/12905?sort=newest
|
## Set theories that do require the existence of urelements?
I am looking for an axiomatic set theory that not only admits the existence of urelements/atoms (via two-sortedness or an additional unary predicate) but requires it, e.g. by an axiom like "for each set there is an equipollent set of urelements" (= "there are arbitrarily many urelements"). Any references?
-
## 4 Answers
I don't know if this is exactly what you're looking for, but it's a theorem of NFU that $|\mathcal{P}(V)| < |V|$---which has as a corollary not only that there are atoms, but that the set of atoms is equipollent with the universe.
(A somewhat more disquieting way of putting this is that there are more atoms than there are sets.)
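To spell out the counting (a sketch, and, as the comments below note, one should really work in NFU + Choice): in NFU the collection of all sets is exactly $\mathcal{P}(V)$, so the universe splits as $V = \mathcal{P}(V) \cup A$ with $A$ the set of atoms; since $|\mathcal{P}(V)| < |V|$ and, with choice, $|V| = \max(|\mathcal{P}(V)|, |A|)$, it follows that $|A| = |V|$.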
-
Something's wrong. NF extends NFU and proves there are no urelements, hence you are effectively claiming that NF is inconsistent, which is actually a major open problem. – Emil Jeřábek Mar 3 2011 at 16:00
Fair enough; I should more accurately say NFU+Choice. NF itself is inconsistent with choice, which is essentially why NFU was devised. You're right that NF is inconsistent iff NFU can prove the existence of atoms without invoking the axiom of choice. – Ian Maxwell Mar 4 2011 at 20:35
Nominal logic is based on the Frænkel-Mostowski permutation model of set theory. In particular Nominal logic has a freshness axiom that states `${\forall}x. {\exists}a \in \mathbb{A}. a \# x$`, where $\mathbb{A}$ is the set of atoms and `#` is a definable relation which is a bit too complicated to put here.
For reference, see the work by Andrew Pitts and Murdoch Gabbay. For example "A New Approach to Abstract Syntax Involving Binders".
-
Your question is equivalent to asking whether the urelements, or atoms, can form a proper class. This axiom is consistent with ZFA, but usually ZFA is introduced so as to not insist on this (and indeed, not insist on any atoms at all). I believe that many (or most) of the other standard set-theories-with-urelements also allow this.
Andreas Blass has an article here, where he investigates the connection between some theorems in homological algebra and the Axiom of Choice. In his introduction, he states:
In Section 3, we construct a model of set theory with no nontrivial injective abelian groups. It is a permutation model in which the atoms (= urelements) form a proper class;
In contrast, sometimes it is useful to have only a set of atoms, as witnessed by Eric Hall's article, which contains the following remark.
Definitions and Conventions. The theory ZFA is a modification of ZF allowing atoms, also known as urelements. See Jech [4] for a precise definition. A model of ZFA may have a proper class of atoms; however, for this paper we redefine ZFA to include an axiom which says that the class of atoms is a set (always denoted by A).
-
In the end I found such a theory; it's called ZFCUA (= Zermelo-Fraenkel set theory with the axiom of choice and unlimited atoms); see Faithful Representation in Set Theory with Atoms by Harvey Friedman.
The relevant axiom is #11: "There is no set consisting of all atoms."
A consequence of this axiom is, that there definitely are atoms (since the empty set is a set) and furthermore, that there are so many, that the collection of all of them is not a set but a proper class.
-
3
It's not your doing, but I parsed Axiom 11 incorrectly and got very confused. Better I think is: "There is no set which contains every atom." (I originally read the statement as "There is no set $S$ such that every element of $S$ is an atom", which made no sense at all.) – Pete L. Clark Jan 25 2010 at 13:42
Do you agree with my reading, then? – Hans Stricker Jan 25 2010 at 13:50
1
Side note: If categorists tend to see sets as "bags of dots" (and categories as "bags of dots with arrows between them") why not measure category theory against a set theory which explicitly captures "bags of dots". Simply because the category of sets is equivalent to the category of sets with atoms (as I suppose)? – Hans Stricker Jan 25 2010 at 14:01
1
If you assume AC, then every set (even one containing atoms) can be well-ordered and is thus isomorphic to a von Neumann ordinal, which is a pure set (hereditarily contains no atoms). But in the absence of AC things can be more interesting, e.g. consider a permutation model of ZFA. – Mike Shulman Jan 25 2010 at 15:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9467048048973083, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/calculus/115989-easy-way-finding-line-intersection-2-planes.html
|
# Thread:
1. ## easy way of finding line of intersection of 2 planes
I may be confused so I am hoping someone could verify what I am saying is correct.
To find the line of intersection of 2 planes subtract one from the other so that one of the variables cancels out. Then introduce a parameter and then solve for x y and z.
e.g.
$x+2y-z=10$ and $2x+3y+z=0$. Subtracting 2 times the first equation from the second gives $-y+3z=-20$. Let z = t. Solving the equation for y gives y=20+3t. Then from the first equation x=10-2(20+3t)+t
am I correct in believing that I have found the parametric equation of the line of intersection of the 2 planes mentioned above?
The reason I'm doubtful is because when I google searched "line of intersection of two planes" I found a more difficult approach, $r=r_0+tv$, where the cross product is used to find v. I guess this way is used to find the vector equation of the line?
2. Hello, superdude!
To find the line of intersection of 2 planes, subtract one from the other,
so that one of the variables cancels out.
Then introduce a parameter and then solve for x y and z.
.Example: . $\begin{array}{cccc}x+2y-z&=&10 & [1]\\ 2x+3y+z&=&0 & [2]\end{array}$
Let me show you the way I explain it . . .
$\begin{array}{cccccc}\text{Multiply }2\times [1]\!: & 2x + 4y - 2z &=& 20 \\ \text{Subtract [2]:} & 2x + 3y + z &=& 0 \\ \\[-3mm] \text{And we have:} & \qquad\; y - 3z &=& 20 \end{array}$
. . Hence: . $y \:=\:3z + 20$
Substitute into [1]: . $x + 2(3z+20) - z \:=\:10$
. . Hence: . $x \:=\:\text{-}5z-30$
So we have: . $\begin{array}{ccc}x &=& \text{-}5z - 30 \\ y &=& 3z + 20 \\ z &=& z \end{array}$
On the right, replace $z$ with a parameter $t.$
. . and we have: . $\begin{Bmatrix}x &=& \text{-}5t - 30 \\ y &=& 3t + 20 \\ z &=& t \end{Bmatrix}$
The reason I'm doubtful is because when I google searched
"line of intersection of two planes" I found a more difficult approach
where the cross product is used to find v. I guess this way is used
to find the vector equation of the line?
Yes, but you may find this vector approach is faster.
The normal vectors of your two planes are: . $\langle1,2,-1\rangle\,\text{ and }\,\langle2,3,1\rangle$
The cross-product gives the direction of the line of itersection.
. . $\left|\begin{array}{ccc}i & j & k \\ 1 & 2 & \text{-}1 \\ 2 & 3 & 1 \end{array}\right| \;=\;5i - 3j - k \;=\;\langle 5,\text{-}3,\text{-}1\rangle$
Now find any point that lies on both planes . . . For example: . $(0,2,\text{-}6)$
And we have our parametric equations: . $\begin{Bmatrix}x &=& 0 + 5t \\ y &=& 2 -3t \\ z &=& \text{-}6 - t \end{Bmatrix}$
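A quick numerical cross-check that the two parametrisations above really describe the same line (a sketch added here, not from the original post; plain Python just for illustration):

```python
def plane_values(p):
    """Evaluate both plane equations at a point; on the line they should give (10, 0)."""
    x, y, z = p
    return (x + 2 * y - z, 2 * x + 3 * y + z)

for t in (-2.0, 0.0, 1.0, 3.5):
    p1 = (-5 * t - 30, 3 * t + 20, t)    # parametrisation from the elimination method
    p2 = (5 * t, 2 - 3 * t, -6 - t)      # parametrisation from the cross-product method
    print(plane_values(p1), plane_values(p2))   # prints (10, 0) (10, 0) every time
```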
3. Originally Posted by Soroban
Now find any point that lies on both planes . . . For example: . $(0,2,\text{-}6)$
How did you find a point so quickly?
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8768675327301025, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/33846/proof-of-charge-existence-on-a-grounded-conductor/33898
|
# Proof of charge existence on a grounded conductor
A question regarding the existence of charge on grounded conductors is confusing me.
Could there be charge on a grounded conductor? How does this not contradict Gauss's Law?
Since every conductor has some capacitance attributed to it, doesn't the existence of charge on a grounded conductor contradict the equation $Q = CV$?
-
"ground" is an ambiguity... It's where you define V to be zero... But the physical quantity is Potential difference. – Chris Gerig Aug 9 '12 at 22:18
– Qmechanic♦ Aug 9 '12 at 22:51
The answers don't provide a proof, nor do they address the capacitor equation – alqubaisi Aug 9 '12 at 23:58
What does Gauss's law have to do with zero potential, i.e., grounded? – C.R. Sep 9 '12 at 15:16
## 2 Answers
It is almost guaranteed that many macroscopic earth-grounded conductors are not exactly charge-neutral. But they are probably all extremely close to charge-neutral.
The reason is, "ground" has no divine status, it is just a conductor. Usually it's earth-ground, and earth is very close (I imagine) to charge-neutral. But imagine life on a planet that was quite negatively charged (due to some atmospheric process throwing protons but not electrons into space). Then if you connected a conductor to "earth-ground" (i.e. the planet) it would certainly take some of the excess electrons.
You are wondering how the equation Q=CV applies to a capacitor where both sides are charged and both sides are grounded. Well, for the equation Q = CV, Q is not just any old charge, it refers specifically to the so-called "charge on the capacitor", the amount of charge that has been added to one plate but subtracted from the other plate. If the same charge is added to both plates, that charge is not "the charge on the capacitor", it's actually the charge on a different capacitor, the capacitor where one "plate" is both of those conductors, and the other "plate" is wherever the counter-charge is (i.e. the other end of those electric field lines), which would be outer space. So there is no contradiction: The two grounded plates indeed have no voltage difference between them, Q=V=0 in the equation.
For Gauss's law, I'm not sure what you think the contradiction is. There are electric field lines going from space to the planet's surface and to grounded conductors. On the other hand, a grounded conductor in a grounded metal building really would be charge-neutral, because the electric field lines from outer space would terminate on the outside of the building and would not get inside.
-
The Earth can be considered as a large spherical conductor of capacitance C1. When you ground something whose capacitance is C2, the electric charges move from one conductor to the other. The equations are:
Q1 + Q2 = Q'1 + Q'2
V'1 = V'2 = V
Q'1 = C1*V, Q'2 = C2*V
The Earth's capacitance is so huge that, unless you are doing microelectronics, you can consider that all the charge has flowed to it.
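To get a feel for how lopsided the split is, here is a small numerical sketch applying the charge-sharing equations above (the capacitance and charge values are illustrative choices of mine, not from the answer):

```python
# Two conductors brought to the same potential V share charge in proportion
# to their capacitances:  V = (Q1 + Q2) / (C1 + C2),  Q1' = C1*V,  Q2' = C2*V
C1 = 7.1e-4    # farads: rough capacitance of an isolated Earth-sized sphere (4*pi*eps0*R)
C2 = 1.0e-10   # farads: a small isolated conductor (illustrative value)
Q1, Q2 = 0.0, 1e-6   # coulombs: Earth neutral, the object carries 1 microcoulomb

V = (Q1 + Q2) / (C1 + C2)
print(f"common potential: {V:.3e} V")
print(f"charge left on the object: {C2 * V:.3e} C")   # ~1e-13 C: essentially nothing
```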
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9536361694335938, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/tagged/f4
|
## Tagged Questions
2 answers
775 views
### Invariants for the exceptional complex simple Lie algebra $F_4$
This is an edited version of the original question taking into account the comments below by Bruce. The original formulation was imprecise. Let $\mathfrak{g}$ denote a complex si …
4 answers
407 views
### Type of 26-dimensional representation of different real forms of the complex simple Lie algebra $F_4$
The exceptional complex simple Lie algebra $F_4$ has an irreducible 26-dimensional representation $V$ with Dynkin label [0,0,0,1] in the usual ordering of the simple roots one can …
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8344210386276245, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/54415/higgs-boson-graviton
|
# Higgs-Boson/Graviton
The Higgs boson gives particles mass. And the graviton is the theoretical force-carrier of gravity. Gravity depends on mass. So if the Higgs Boson gives things mass, it therefore gives them gravity. Is the Higgs Boson the same thing as a Graviton? Or is there a difference? The only thing I know is that the Higgs Field is something very different from the Gravitational Field. Yet, I'm not satisfied with that fact. I want to know why the Higgs Boson is not the Graviton.
-
– Loourr Feb 19 at 16:12
Ron Maimon's answer to that question is everything the OP needs to read. – Michael Brown Feb 19 at 16:21
## 3 Answers
You say:
Gravity depends on mass
but this is not so. The source of the gravitational field is an object called the stress-energy tensor. One element of this object is the energy density, and mass contributes to this through Einstein's well known equation $E = mc^2$, but mass is not required to generate a gravitational field. Even massless particles like photons generate a gravitational field.
The Higgs boson, as discovered at the LHC, is a low energy effect of the Higgs field, and it's the Higgs field that is responsible for the mass of the elementary particles. So it's not even correct to say that the Higgs boson gives particles mass.
The graviton is the carrier of the gravitational force if you describe that force by a quantum field theory. Whether gravitons are a useful way to describe quantum gravity is not clear. However, what is clear is that the graviton and Higgs boson are entirely unrelated.
-
Thanks! That answers my question quite well! – Ze Photon Feb 19 at 19:22
You basically answered your own question: Mass is the source of gravity. The Higgs particle plays a role in creating this source, while the graviton plays a role in explaining the mechanism of gravity (but not what causes it to be there in the first place).
-
Mass is a measure of the "inductance" of a particle. The inductance filters quantum vacuum frequencies. Gravity is the resulting Casimir force.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9239404797554016, "perplexity_flag": "head"}
|
http://nrich.maths.org/270/index?nomenu=1
|
$A$ and $C$ are the opposite vertices of a square $ABCD$, and have coordinates $(a,b)$ and $(c,d)$, respectively. What are the coordinates of the other two vertices? What is the area of the square? How generalisable are these results?
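A possible route to the answer, sketched here for reference (the midpoint construction is my addition, not part of the problem page): the other two vertices are obtained by rotating the half-diagonal about the midpoint of $AC$ through a quarter turn, which gives

$$B,\,D \;=\; \left(\frac{a+c}{2} \mp \frac{d-b}{2},\; \frac{b+d}{2} \pm \frac{c-a}{2}\right), \qquad \text{Area} \;=\; \tfrac12\left((c-a)^2+(d-b)^2\right).$$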
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9507150650024414, "perplexity_flag": "head"}
|
http://samjshah.com/2012/11/08/will-the-fish-bite/
|
# Will the fish bite?
Posted on November 8, 2012
Today at the end of Precalculus I asked if any kids had any questions/topics they wanted a quick review on for an assessment we’re having tomorrow. (We lost a week due to Hurricane Sandy, so it’s been a while since they’ve worked on some of the topics.) One of the topics was inverse functions, so I gave a quick 3 minute lecture on them, and then we solved a simple “Here is a function. Find the inverse function.” question. They then wanted an example of a more challenging one, so I made up a function:
And then we went through and solved for y. And we found that …
… the function is its own inverse. Yup, when you go through and solve it, you’ll find that is true. (Do it!)
I said: “WAIT! Don’t think this always happens! This is just random! Really! This is random!”
But I just had the thought that this might be good to capitalize on. So I sent my kids the following email:
I don’t know if anyone will bite, but I hope that someone decides to take me up on it. I already have a few ideas on how to have them explore this! (Namely, first exploring $\frac{x-b}{x-d}$ and then exploring $\frac{ax-b}{cx-d}$.)
We’ll see… I’m trying to capitalize on something random from class. I hope it pans out.
This entry was tagged Pre-Calculus.
## 6 thoughts on “Will the fish bite?”
1. Omg, Sam, you are my hero. This happened to me in Pre-Calc this year too with a very similar function. My first thought was, “It can’t be. Re-check the algebra.” Not a lot of algebra to check. Run to calculator. Discover the amazingness. I was stoked.
You’ve probably already thought of this, but one good exploration for the kids (especially after they’ve done some algebra on the functions) would be to graph the functions you gave using sliders for a, b, c, and d. Fun!
Also, abs value functions are sometimes their own inverse, depending on how you restrict the domain. Not quite as challenging as rational functions, but I think it provides a good discussion nonetheless and would be a very nice example to include if someone does bite and decides to write a paper on it.
Please keep us posted on this!!
2. Probably you know this already, but rational functions like the one you found are, in the complex plane, very interesting transformations! Look here: http://en.wikipedia.org/wiki/Möbius_transformation
3. I remember coming across this when I taught pre-Calc and it’s pretty interesting. I was thinking about why this happens and noticed that it’s whenever the horizontal and vertical asymptotes are the same. So with your (ax-b)/(cx-d) the horizontal asymptote is a/c and the vertical asymptote is d/c. They are equal when a=d. As long as a=d this rational function will be its own inverse (a quick symbolic check of this is sketched after the comments below). Thanks for sharing! I hope your kids figure this out. I bet some will.
4. Eric
Hey Sam,
Long time listener, first time caller. Love the Math Journal idea, and everything else you do.
Now a somewhat off topic question, but since you mentioned “review” in this post, I guess it fits. Reading the blog, it seems like review isn’t something that you do formally before you give an assessment, am I right in thinking that? I’m introducing SBG into my school’s math classes for the first time, and haven’t been giving formal reviews sheets prior to assessments since the expectation for my students is that they should always be reviewing a little bit every night. However, I think the power of the Review Sheet has caused them to be lazy brained and a lot of them are doing poorly on their assessments due to their lack of review on their own. Should I go back to holding their hand and give them review sheets or just doing it like I have been? (To be clear, I give them the skill that they will be assessed on, the section it comes from in their notes, the homework problems that they’ve done on it, and other problems in their books to try for practice many days before the assessment.)
Let me know what you think
-Eric
• Hihi-
I don’t review in calculus where I do SBG. However, I am teaching precalculus this year for the first time, and I’m not doing SBG in that class. (To do SBG, I need to get the other teacher on board, and also I don’t feel comfortable switching to SBG until I am comfortable with the course so I can make standards that make sense to me.) Thus, my precalculus class is more traditional.
As for the review sheet, you give them so much, so I don’t think you need to give them a review sheet: “To be clear, I give them the skill that they will be assessed on, the section it comes from in their notes, the homework problems that they’ve done on it, and other problems in their books to try for practice many days before the assessment.”
Best,
Sam
5. One of the questions on my pre-calc test (and it’s a concept important enough I announce beforehand what it will be) is “What happens geometrically when taking the inverse of a graph?”
A natural extension is “what happens geometrically when the inverse of a graph is the same as the original graph?”
For your question, I’d say it is easiest to answer in reverse — figure out how to construct rational functions that have the line y=x as a line of symmetry.
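A quick symbolic check of the a = d observation from comment 3 above; this is my own sketch with SymPy, not part of the post or its comments:

```python
import sympy as sp

a, b, c, d, x = sp.symbols('a b c d x')
f = (a*x - b) / (c*x - d)

# Compose f with itself; the composition simplifies to x exactly when a = d
# (assuming a**2 != b*c, so that f is a genuine Mobius-type map).
ff = sp.simplify(f.subs(x, f))
print(sp.simplify(ff.subs(a, d)))   # prints: x
```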
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 2, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9436560273170471, "perplexity_flag": "middle"}
|
http://www.physicsforums.com/showthread.php?t=586612
|
## is lunar effect on humans...lunacy?
I have done some reading, and found articles with little to no references (to scientific journals) that proclaim that although humans are mostly water...the moon's gravitational pull has very little, if any, effect on our bodies.
I don't believe that there is more crime during full moons, for instance...however, there are a couple of studies out there which seem to say that there is a correlation.
I'm looking for actual scientific evidence that our bodies are not physically affected.
Does anyone know where I should start looking?
thx
The myths about the effects of the full moon are just that. Of course if someone has convinced themselves that they will be negatively affected during a full moon, they might bring trouble upon themselves.
The full moon and ED patient volumes: unearthing a myth. Abstract To determine if there is any effect of the full moon on emergency department (ED) patient volume, ambulance runs, admissions, or admissions to a monitored unit, a retrospective analysis of the hospital electronic records of all patients seen in an ED during a 4-year period was conducted in an ED of a suburban community hospital. A full moon occurred 49 times during the study period. There were 150,999 patient visits to the ED during the study period, of which 34,649 patients arrived by ambulance. A total of 35,087 patients was admitted to the hospital and 11,278 patients were admitted to a monitored unit. No significant differences were found in total patient visits, ambulance runs, admissions to the hospital, or admissions to a monitored unit on days of the full moon. The occurrence of a full moon has no effect on ED patient volume, ambulance runs, admissions, or admissions to a monitored unit.
http://www.ncbi.nlm.nih.gov/pubmed/8924138
Quote by mattbatson I have done some reading, and found articles with little to no references (to scientific journals) that proclaim that although humans are mostly water...the moons gravitational pull has very little, if any, effect on our bodies.
The fact that you find little/no references should tell you all you need to know. Considering all the billions of work hours that have gone into a variety of human biology fields it's pretty much ridiculous to suggest that a very obvious monthly cycle has not been noticed in all humans at some point.
Furthermore it is irrelevant whether or not we are made of water or anything else, gravity acts on mass regardless of what that mass actually is.
Quote by Ryan_m_b Considering all the billions of work hours that have gone into a variety of human biology fields it's pretty much ridiculous to suggest that a very obvious monthly cycle has not been noticed in all humans at some point.
I can think of one found in about half of all humans.
Quote by mattbatson I have done some reading, and found articles with little to no references (to scientific journals) that proclaim that although humans are mostly water...the moons gravitational pull has very little, if any, effect on our bodies.
It's not hard to do a few rough calculations to prove this to yourself. For starters we can make a few approximations, while keeping in the spirit of the topic:
(a) Let's assume the moon is spherical.
(b) Let's assume that the moon is a constant 385 x 10^6 meters away. The moon's actual distance varies slightly (from around 360 x 10^6 to 410 x 10^6 m) due to the fact that it has an elliptical orbit and the fact we are on the Earth, and the Earth has its own radius and is rotating. But 385 x 10^6 meters is a rough approximation.
The gravitational acceleration caused by our moon can be determined by
$$a = G \frac{m}{r^2}$$
where G is Newton's gravitational constant, G = 6.67 x 10^-11 [m^3 kg^-1 s^-2]. The mass of the moon, m, is 7.35 x 10^22 kg. Plugging the numbers in gives us
$$a = (6.67 \times 10^{-11}) \frac{7.35 \times 10^{22}}{(384 \times 10^6)^2} \mathrm{[m \ s^{-2}]}$$
$$= 0.0000332 \ \mathrm{m/s^{2}}$$
The direction of that acceleration depends on where the moon is, of course. If you see the moon near the horizon, the acceleration is coming from that direction. If you see the moon nearly straight up, the acceleration is directed in that direction. If it's a new moon at midnight, the direction is down.
Compare that to the acceleration caused by the Earth's gravitational attraction on our bodies (we being on the surface of the Earth), which is approximately $9.8 \ \mathrm{m/s^{2}}$, straight down. As you can see, the moon's gravitational pull is barely a drop in the bucket, compared to the Earth's.
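The arithmetic is easy to reproduce; a small Python sketch with the same numbers (my own, added for convenience):

```python
G = 6.67e-11       # m^3 kg^-1 s^-2
m_moon = 7.35e22   # kg
r = 385e6          # m, rough Earth-Moon distance used above

a_moon = G * m_moon / r**2
print(a_moon)          # ~3.3e-5 m/s^2
print(a_moon / 9.8)    # ~3e-6: a few millionths of Earth's surface gravity
```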
You might have been curious about the tides. Yes, the gravitational pull from the Moon (and the Sun too) cause tides. And these tides cause the surface level of the water to raise and lower by somewhere on the order of 1 meter (roughly a meter -- of course that depends on some other geographical factors, but let's just say roughly around a meter or so, give or take.) Now consider that the average ocean depth is in the thousands of meters; ~3000-4000 meters is typical. The deepest part of an ocean is over 10,000 meters. So a tidal variation of about a meter isn't all that much. (I don't want to get into the math/physics of the tides though. It's significantly more complicated than the math used above, although it can be done.)
I dont believe that there is more crime during full moon's, for instance...however, there are a couple of studies out there which seem to say that there is a coorelation. I'm looking for actual scientific evidence that our bodies are not physically affected. Does anyone know where I should start looking? thx
I don't know of such sources, but I haven't looked either . If you did start looking yourself, you should make an effort to remove biases such as brightness (i.e. reflection of the Sun's light off the Moon, directed back to Earth). When the moon is full, it's brighter out. It's pretty obvious. That alone might very well have an effect on people's habits and nighttime shenanigans. It seems to me as though you are interested in the variational effects of the moon's gravitational pull, which is far less noticeable to people than the moon's brightness.
Quote by Ryan_m_b The fact that you find little/no references should tell you all you need to know. Considering all the billions of work hours that have gone into a variety of human biology fields it's pretty much ridiculous to suggest that a very obvious monthly cycle has not been noticed in all humans at some point. Furthermore it is irrelevant whether or not we are made of water or anything else, gravity acts on mass regardless of what that mass actually is.
excellent point about the water content...the people I'm discussing this with seem to liken it to the tides, and that is why they kept referring to water content.
They posted some study done in India back in the late 70's that showed a correlation between full moon and crime at three different precincts...study is here...http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1444800/
I know that one statistical study means nothing...but apparently there are a couple more out there somewhere.
I've read many blogs and articles (including one from scientific american) that make some excellent points on how this is just not possible. But, the references are all books. I was hoping to find some actual scientific studies that show there are NO effects.
By the way, due to the Earth's rotation and the moon's revolution, the direction of the moon's gravitational pull circles all the way around in the period of just over a day. This is true even if it's a new moon or a full moon or whatever (e.g. when the moon is full, it rises in the east and sets in the west in a single night -- it doesn't just sit up in the same right ascension all night). If you are looking for any effects of gravitational pull from the moon you should look for effects that have a period of 24.878 hours. (Don't hold your breath trying to find anything though.)
Quote by Evo The myths about the effects of the full moon are just that. Of course if someone has convinced themselves that they will be negatively affected during a full moon, they might bring trouble upon themselves. http://www.ncbi.nlm.nih.gov/pubmed/8924138
thanks so much for that find!
really appreciate it.
Quote by collinsmark By the way, due to the Earth's rotation and the moon's revolution, the direction of the moon's gravitational pull circles all the way around in the period of just over a day. This is true even if it's a new moon or a full moon or whatever (e.g. when the moon is full, it rises in the east and sets in the west in a single night -- it doesn't just sit up in the same right ascension all night). If you looking for any effects of gravitational pull from the moon you should look for effects that have a period of 24.878 hours. (Don't hold your breath trying to find anything though.)
another excellent point
I hate it when facts and logic get in the way
Quote by mattbatson excellent point about the water content...the people I'm discussing this with seem to liken it to the tides, and that is why they kept referring to water content. They posted some study done in india back in the late 70's that showed a coorelation between full moon and crime at three different precincts...study is here...http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1444800/
That was published in the BMJ. I have seen so many crackpot papers posted there that I can't believe that they have not been downgraded yet.
The increased incidence of crimes on full moon days may be due to "human tidal waves" caused by the gravitational pull of the moon.
Hoo boy.
I was hoping to find some actual scientific studies that show there are NO effects.
I gave you one.
I see one paper that talks about crime, and another that addresses hospital activities. It has been suggested before that any evidence of increased criminal activities during a full moon might be attributed to increased light levels. It's easier to commit a crime when you can see - no gravity required.
Quote by Evo That was published in the BMJ. I have seen so many crackpot papers posted there that I can't believe that they have not been downgraded yet.Hoo boy. I gave you one.
you did give me one, and i greatly appreciate it.
So BMJ has been involved in crackpot studies in the past....good to know
thx
Quote by Ivan Seeking I see one paper that talks about crime, and another that addresses hospital acitivities. It has been suggested before that any evidence of increased criminal activities during a full moon might be attributed to increased light levels. It's easier to commit a crime when you can see - no gravity required.
yes i can certainly see that.
thanks so much for all the responses...lots of great information
Quote by mattbatson you did give me one, and i greatly appreciate it. So BMJ has been involved in crackpot studies in the past....good to know thx
They still are. There was a link to a paper in one of their *journals* recently where the editor justified his reason for allowing the junk saying basically *it doesn't matter if it's not true as long as they can make a good argument*. Sorry, it matters at this forum if it is true.
More crackpottery at the BMJ link you posted
We suggested that the impulse to take or to give poisons may be increased on full moon days owing to increased "human tidal waves" caused by the gravitational pull of the moon, which is maximum on that day because earth, moon, and sun lie in a straight line. The water content of the human body exceeds 50- 60% and some tidal wave is generated by the gravitational pull of the moon. These human tidal waves may cause physical, physiological, and biochemical changes in the body resulting in an increased tendency to take poisons.
Quote by Evo They still are. There was a link to a paper in one of their *journals* recently where the editor justified his reason for allowing the junk saying basically *it doesn't matter if it's not true as long as they can make a good argument*. Sorry, it matters at this forum if it is true. More crackpottery at the BMJ link you posted The direct link to your article http://www.bmj.com/highwire/filestre...icle_pdf/0.pdf
However, there is still the distinction between the data, and the suggested explanation. A crackpot hypothesis doesn't necessarily invalidate the data, though it obviously causes great concern over the validity of the entire paper.
Quote by Evo They still are. There was a link to a paper in one of their *journals* recently where the editor justified his reason for allowing the junk saying basically *it doesn't matter if it's not true as long as they can make a good argument*. Sorry, it matters at this forum if it is true. More crackpottery at the BMJ link you posted The direct link to your article http://www.bmj.com/highwire/filestre...icle_pdf/0.pdf
it seems as if the people doing the study had a pretty serious bias from the beginning?
Quote by mattbatson it seems as if the people doing the study had a pretty serious bias from the beginning?
Just a little.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.961316704750061, "perplexity_flag": "middle"}
|
http://mathoverflow.net/revisions/74268/list
|
## Return to Answer
1 [made Community Wiki]
This is Denis-Charles Cisinski's answer, given in the comments:
Yes, the map $\pi_0(W^{-1}C)\to W^{-1}\pi_0(C)$ is always an equivalence of categories (this follows immediately by comparing the corresponding universal properties). The same remains true (for formal reasons as well) if you look at the truncations of hom-spaces defined by $\pi_1$ and work up to an adequate notion of equivalence of 2-categories (in fact, you may as well truncate in dimension $n$ and get an equivalence of (n+1,1)-categories).
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9099781513214111, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/21134/is-there-any-approximated-version-of-hilbert-90
|
## Is there any approximated version of Hilbert 90?
Suppose $K$ is a local field and $L$ a finite cyclic extension of $K$. By Hilbert 90, we know that if an element $a$ in $L$ is such that $N_{L / K}(a) =1$ then $a = b / \sigma(b)$ for some $b$ in $L$ and $\sigma$ a generator of the Galois group.
My question: Suppose $N_{L / K}(a) \simeq 1$, is there some $b$ such that $a \simeq b /\sigma(b)$?
I guess I could have try to make the question more precise, but there seems to be some merits in leaving it a bit vague.
I have accepted the answer by Paul Broussous, which addresses the situation when the extension is unramified. It is because that is what I need. I am still curious whether something can be done when the extension is totally ramified?
-
I think you want $L$ a finite cyclic extension of $K$, not of $L$. – Jamie Weigandt Apr 12 2010 at 18:26
the error is fixed. – Tran Chieu Minh Apr 12 2010 at 19:14
Doesn't that follow from the standard proof? – Felipe Voloch Apr 12 2010 at 19:35
The problem that I found (if it is really a problem) is that there might be no element in the neighborhood of $a$ which has of norm 1. I try to mimic the original proof for the cyclic case but the independence of automorphism does not work anymore, I mean it still work but I can not bound the $b$ given by this process. – Tran Chieu Minh Apr 13 2010 at 1:53
## 1 Answer
Yes, such approximate versions of Hilbert 90 do exist. But you need some technical conditions.
For instance assume that $L/K$ is unramified of degree $d$ and that $a\in {\mathfrak o}_{L}^{\times}$.
Then your condition reads
$N_{L/K}(a)\equiv 1$ modulo $\mathfrak{p}_{K}^{n}$,
for some $n>0$ (I assume that this is what you mean by $\simeq$). This may be rewritten $N_{L/K}(a)=1$ in $U_{L}/U^{n}_L$, where $U$ denotes a unit group. So the map $$\sigma^u\mapsto a\sigma (a) \cdots \sigma^u (a)$$ defines a $1$-cocycle of ${\rm Gal}(L/K)$ in $U_L /U^n_L$ (here $\sigma$ denotes the Frobenius substitution).
So what you want is that this cocycle is split. In fact we have $H^{1}({\rm Gal}(L/K), U_{L}/U^{n}_{L})=1$. This is proved by a standard filtration argument: this is implied by
$$H^{1}({\rm Gal}(L/K), U_L /U^{1}_L ) = H^{1}({\rm Gal}(k_L /k_K ), k_{L}^{\times})=1$$
and
$$H^{1}({\rm Gal}(L/K), U_{L}^{i}/U_{L}^{i+1})=H^{1}({\rm Gal}(k_L /k_K ), k_L )=1$$
here $k$ denotes a residue field. You can find the details of the proof in, I think, Serre's 'Local Fields' or Cassels-Fröhlich's 'Algebraic Number Theory'.
-
Yes, the unramified version is all that I need (but I did not want to ask a very localized question). Homological tools is too much to put in my final year project thesis. But thanks to your comments I have figured out the correct modification of the original proof. – Tran Chieu Minh Apr 13 2010 at 2:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9385292530059814, "perplexity_flag": "head"}
|
http://www.physicsforums.com/showthread.php?t=28085&highlight=0.999
|
## Is 0.999repeating 1?
Very common debate: is 0.999999 repeating 1?
Opinions?
didn't realize that there was another exactly the same question in the logic thread.
Quote by killerinstinct Very common debate: is 0.999999 repeating 1? Opinions?
Yes, it is, as long as we're talking about the normal real number system. I can't see any difference. Except for the typography.
What is the difference, meaning subtract .999... from 1.000.... The difference would be 0.000...1. But it's not valid to put something after the "..." That's asking what comes after infinity, which isn't a valid question. The expression .000...1 is a typographic error, and not something that is even defined in the set of real numbers.
It is not an opinion that they are equal, it is a very easy provable fact and only cranks who don't understand the way mathematics work insist they are different after it has been patiently explained to them.
We mean base ten, work out what the infinite sum 0.999... is, if that doesn't convince you then you need to look up the definitions you don't understand in the phrase:
they represent the same equivalence class in the cauchy sequences of rationals modulo convergence that define the real number system.
Well I'm 99.99999.....% certain it's equal to 1 :D
This has to be the most asked question on this forum.
Quote by JonF This has to be the most asked question on this forum.
JonF I have never been on any message board where there have not been arguments about this and that includes non-maths/sci boards.
In fact I now propose jcsd's theorem:
On any bulletin board, no matter the subject area of that board, sooner rather than later someone will argue that 0.99.. is not equal to one.
.999… not equal to 1? That’s kiddy stuff, just watch me argue that .3333… is not equal to 1/3
Corollary to JCSD's theorem: every bulletin board etc attracts an idiot, a troll, or possibly both.
Why should .9999... equal 1 and not .9999...? Trying to get from .9999... to 1 is just like trying to accelerate your spaceship to the speed of light. You keep getting closer, but you can't get that last little bit.
I knew it.^~ A last little bit poster would have to show up. Do we try to explain it to him? That "last little bit" is $$\frac 1 \infty$$. By the definition of infinity, that last little bit is zero. So essentially this is true by definition, but beyond that it is completely consistent and provable in many different manners. There is no law that says each point on the real number line must have a unique representation. In fact just the opposite is true, every point on the real number line has many (perhaps an infinite) number of different ways to represent it.
Quote by Integral I knew it.^~ A last little bit poster would have to show up. Do we try to explain it to him?
I must say I dislike these kind of comments. I am a 16 yr old student who has not taken a lot of math, certainly not on the subject of infinity. I fail to see why you would judge me as "a last little bit poster" or whatnot, for simply voicing a (to me) logical view. Though these thing may be obvious to you, that is not so for everyone. I find that your post without the two first lines would have been completely satisfactory.
My apologies, having been involved in this same discussion on several different forums over the last 2 or 3 years I do not recall anyone ever saying "oh I see" so perhaps I am a bit cynical about the whole issue.
Quote by Grizzlycomet I must say I dislike these kind of comments. I am a 16 yr old student who has not taken a lot of math, certainly not on the subject of infinity. I fail to see why you would judge me as "a last little bit poster" or whatnot, for simply voicing a (to me) logical view. Though these thing may be obvious to you, that is not so for everyone. I find that your post without the two first lines would have been completely satisfactory.
let x = 0.9999... =>
10x = 9.99999...
10x - x = 9x = 9 =>
x = 1
All we are really saying is:
$$\sum_{n=1}^{\infty} \frac{9}{10^n} = 1$$
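and the partial sums make the convergence explicit (a standard identity, included here for completeness):

$$\sum_{n=1}^{N} \frac{9}{10^n} \;=\; 1 - \frac{1}{10^N} \;\longrightarrow\; 1 \quad \text{as } N \to \infty$$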
I'm sorry for the harsh response to your question, Grizzlycomet. We generally try to not be hard on people because of the questions they ask; the problem is that this particular topic is visited way too often by people trying to push their "new math", "theory of infinity" and whatnot, instead of trying first to understand how standard math deals with the issue.
Quote by Integral My apologies, having been involved in this same discussion on several different forums over the last 2 or 3 years I do not recall anyone ever saying "oh I see" so perhaps am a bit cyncial about the whole issue.
Thank you, apology accepted :) I understand that you may have seen this question many times, thus growing very tired of it. Your explanation was in itself good :)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9696391224861145, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/20333/speed-of-a-fly-inside-a-car
|
Speed of a fly inside a car
A couple of weeks ago I was travelling in a car (120 km/h approximately) and I saw a fly flying in front of me (inside the car, near my nose, windows closed). I wonder how that was possible.
Does it mean it is really flying at 120 km/h, or is the fly being affected by some kind of gravity/force?
-
Outside the car? How far in front of you was it? – David Zaslavsky♦ Jan 31 '12 at 23:46
Sorry, I updated my question – César Jan 31 '12 at 23:50
It flies 120 km/h if it was a convertible without windshield. Did you feel like you were in a hurricane in your car? If not then why should the fly? – Alexander Feb 1 '12 at 0:00
Because the fly can fly. I can't – César Feb 1 '12 at 0:06
But in a 120 km/h wind you are pretty close. It is really not more subtle than this, the wind you feel on your nose is the same wind the fly feels, so in a closed car almost nothing. The fly does not have to be strong at all, it also works with dust in your car, it will stay more or less in one place as you move the whole air inside your car around with you which acts like a 'seat' for the fly. – Alexander Feb 1 '12 at 0:16
1 Answer
Basically, from the frame of reference of your car:
The fly was inside your car, so its speed with respect to the car is zero. It's just as much inside the car as you are. Both are travelling at 120 km/h with respect to any observer on the road, but with respect to anyone inside the car you are both just sitting inside the car.
So the speed of the fly with respect to you is $v=0\,\frac{m}{s}$; with respect to some observer on the road it is $120\,$km/h.
It's no more than a piece of tissue paper you might keep near the steering wheel, in front of you.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9745668768882751, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/calculus/49520-finding-limits-graphically-numerically.html
|
# Thread:
1. ## finding limits graphically and numerically
I understand everything but the part in red. How do you get delta = E/3?
2. ## I think they mean "take delta = 1/3"
On the delta-epsilon formulation, delta is a free variable and you've to prove that for any positive delta you can find an epsilon such that abs(f(x) - limit) < epsilon. You could as well take delta = 1/2 or .0001.
3. My bad - read 1/3 in place of epsilon/3.
In any case I guess they mean you can take epsilon = 3 * delta.
Point is you can choose any positive delta and that determines epsilon.
4. Originally Posted by algebra2
I understand everything but the part in red. how do you get delta = E/3?
in fact, $\delta = \frac {\epsilon}2$ would work i think
anyway, recall what the definition of a limit means.
we want to show that for each $\epsilon > 0$ there is a $\delta > 0$ such that (for $x \in \text{dom}(f)$) $|x - 1| < \delta$ implies $|f(x) - 2| < \epsilon$.
now, we found that $|x - 1| < \frac {\epsilon}{x + 1}$, but we want $|x - 1|< \delta$, so we need to find a $\delta$ that works. now, what does it mean for $x$ to be close to 1? we give ourselves a generous range and say, let it be somewhere between 0 and 2, and those x's are "close". now, to make sure our $\delta$ works, we are cautious and take the worst case in that range, $x = 2$, which is where $\frac {\epsilon}{x + 1}$ is smallest. so, if $x = 2$, we have $\frac {\epsilon}{x + 1} = \frac {\epsilon}3$, and so we choose that as our $\delta$.
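written out compactly (assuming, as the working above suggests, that $|f(x) - 2|$ factors as $|x-1|\,|x+1|$): if we also insist that $|x - 1| < 1$, then $0 < x < 2$, so $x + 1 < 3$, and hence

$$|f(x) - 2| = |x - 1|\,|x + 1| < 3\,|x-1| < 3\delta \le \epsilon \quad \text{whenever } \delta \le \tfrac{\epsilon}{3}$$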
ok, so that was hopelessly confusing, even to me, i know what i want to say, but i am not sure if i said it ok. did you get that?
Originally Posted by hwhelper
My bad - read 1/3 in place of epsilon/3.
In any case I guess they mean you can take epsilon = 3 * delta.
Point is you can choose any positive delta and that determines epsilon.
actually, it is delta that depends on epsilon, not the other way around. we must choose a delta that works for any epsilon we are given
5. yeah, thanks for the explanation
6. Originally Posted by algebra2
yeah, thanks for the explanation
good, i was worried. i hope you realize that things like $|x - 1|< \delta$ is talking about distance. the distance between x and 1 is less than delta, that's what it means. this is why i was talking about x "close" to 1, etc
7. See my signature for more details.
8. Originally Posted by Krizalid
See my signature for more details.
your signature is so functional!
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9621852040290833, "perplexity_flag": "head"}
|
http://mathoverflow.net/revisions/110608/list
|
## Return to Question
2 added 5 characters in body
It is well known that Kirwan's injection theorem gives a ring injection from $H^{\ast}_T(M)$ to $H^{\ast}_T(M^T)$ which is induced by the inclusion $M^T \to M$, where $T$ is a torus acting on manifold $M$ and $M^T$ is the fixed point set of this torus action.
I came across a problem when my professor tried to use Kirwan's injection theorem to explore the ring structure of $\mathbb{CP}^2$. Here $\mathbb{S}^1\times\mathbb{S}^1$ acts on $\mathbb{CP}^2$. The professor just regards $\mathbb{CP}^2$ as a triangle with edges $\mathbb{CP}^1$, with orthogonal axes $u$ and $v$. Then he said on each vertex there is a polynomial since $H^{\ast}_T(M^T)=H^{\ast}(M^T)\otimes\mathbb{C}[u,v]$. Suppose the triangle is placed with two orthogonal edges parallel to the axes $u$ and $v$. Then for the two vertices on the edge in the $u$ direction, set $u=0$ to obtain the relations between coefficients. For the case $\ast=2$, each vertex has a polynomial of the form $au+bv$. So there would be 6 unknowns with 3 equations, which gives the rank of $H^2_T(M^T)$ to be 3, same as for $H^2_T(M)$.
Now my questions are: Firstly, how should I understand the view of $\mathbb{CP}^2$ as a triangle sitting in the orthogonal coordinate system, and why do the $u$ and $v$ here coincide with the coordinate axes? Secondly, what is the interpretation of setting $u=0$ when we are trying to find the structure of the cohomology ring? Hope someone can help me with those questions.
1
# How to calculate the equivariant cohomology ring of $P^2$?
It is well known that Kirwan's injection theorem gives an ring injection from $H^{\ast}_T(M)$ to $H^{\ast}_T(M^T)$ which is induced by the inclusion $M^T \to M$, where $T$ is a torus acting on manifold $M$ and $M^T$ is the fixed point set of this torus action.
I came across a problem when my professor tried to use Kirwan's injection theorem to explore the ring structure of $\mathbb{P}^2$. Here $\mathbb{S}^1\times\mathbb{S}^1$ acts on $\mathbb{P}^2$. The professor just regards $\mathbb{P}^2$ as a triangle with edges $\mathbb{P}^1$, with orthogonal axis $u$ and $v$. Then he said on each vertex there is a polynomial since $H^{\ast}_T(M^T)=H^{\ast}(M^T)\otimes\mathbb{C}[u,v]$. Suppose the triangle is put with two orthogonal edges parallel to the axis $u$ and $v$. Then for the two vertex on the edge of $u$ direction, set $u=0$ to obtain the relations between coefficients. For the case $\ast=2$, each vertex has a polynomial of the form $au+bv$. So there would be 6 unknowns with 3 equations, which gives the rank of $H^2_T(M^T)$ to be 3, same for $H^2_T(M)$.
Now my questions are: Firstly, how should I understand the view of $\mathbb{P}^2$ as a triangle sitting in the orthogonal coordinate system, and why the $u$ and $v$ here coincident with the coordinate axis? Secondly, what is the intepretation of setting $u=0$ when we are trying to find the structure of the cohomology ring? Hope someone can help me with those questions.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 55, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9543885588645935, "perplexity_flag": "head"}
|
http://en.wikipedia.org/wiki/Karhunen%e2%80%93Lo%c3%a8ve_transform
|
# Karhunen–Loève theorem
In the theory of stochastic processes, the Karhunen–Loève theorem (named after Kari Karhunen and Michel Loève) is a representation of a stochastic process as an infinite linear combination of orthogonal functions, analogous to a Fourier series representation of a function on a bounded interval. Stochastic processes given by infinite series of this form were considered earlier by Damodar Dharmananda Kosambi.[1] There exist many such expansions of a stochastic process: if the process is indexed over [a, b], any orthonormal basis of L2([a, b]) yields an expansion thereof in that form. The importance of the Karhunen–Loève theorem is that it yields the best such basis in the sense that it minimizes the total mean squared error.
In contrast to a Fourier series where the coefficients are real numbers and the expansion basis consists of sinusoidal functions (that is, sine and cosine functions), the coefficients in the Karhunen–Loève theorem are random variables and the expansion basis depends on the process. In fact, the orthogonal basis functions used in this representation are determined by the covariance function of the process. One can think that the Karhunen–Loève transform adapts to the process in order to produce the best possible basis for its expansion.
In the case of a centered stochastic process {Xt}t ∈ [a, b] (where centered means that the expectations E(Xt) are defined and equal to 0 for all values of the parameter t in [a, b]) satisfying a technical continuity condition, Xt admits a decomposition
$X_t = \sum_{k=1}^\infty Z_k e_k(t)$
where Zk are pairwise uncorrelated random variables and the functions ek are continuous real-valued functions on [a, b] that are pairwise orthogonal in L2[a, b]. It is therefore sometimes said that the expansion is bi-orthogonal since the random coefficients Zk are orthogonal in the probability space while the deterministic functions ek are orthogonal in the time domain. The general case of a process Xt that is not centered can be brought back to the case of a centered process by considering (Xt − E(Xt)) which is a centered process.
Moreover, if the process is Gaussian, then the random variables Zk are Gaussian and stochastically independent. This result generalizes the Karhunen–Loève transform. An important example of a centered real stochastic process on [0,1] is the Wiener process; the Karhunen–Loève theorem can be used to provide a canonical orthogonal representation for it. In this case the expansion consists of sinusoidal functions.
The above expansion into uncorrelated random variables is also known as the Karhunen–Loève expansion or Karhunen–Loève decomposition. The empirical version (i.e., with the coefficients computed from a sample) is known as the Karhunen–Loève transform (KLT), principal component analysis, proper orthogonal decomposition (POD), Empirical orthogonal functions (a term used in meteorology and geophysics), or the Hotelling transform.
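As a concrete illustration of the empirical version, here is a minimal NumPy sketch (illustrative code, not part of the article): sample paths are stored as rows of a matrix, and the empirical KL basis is read off from the eigenvectors of the sample covariance matrix, exactly as in principal component analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_points = 500, 200

# Sample paths of a centered process (cumulative sums approximating Brownian motion)
X = np.cumsum(rng.normal(scale=(1 / n_points) ** 0.5, size=(n_samples, n_points)), axis=1)

C = (X.T @ X) / n_samples                     # empirical covariance matrix K_X(t_i, t_j)
eigvals, eigvecs = np.linalg.eigh(C)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]   # sort by decreasing eigenvalue

Z = X @ eigvecs                               # empirical KL coefficients (uncorrelated)
X_hat = Z[:, :10] @ eigvecs[:, :10].T         # truncated expansion, 10 leading terms
print("residual mean-square error:", np.mean((X - X_hat) ** 2))
```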
## Formulation
• Throughout this article, we will consider a square integrable zero-mean random process Xt defined over a probability space (Ω,F,P) and indexed over a closed interval [a, b], with covariance function KX(s,t). We thus have:
$\forall t\in [a,b], X_t\in L^2(\Omega,\mathcal{F},\mathrm{P}),$
$\forall t\in [a,b], \mathrm{E}[X_t]=0,$
$\forall t,s \in [a,b], K_X(s,t)=\mathrm{E}[X_s X_t].$
• We associate to KX a linear operator TKX defined in the following way:
$\begin{array}{rrl} T_{K_X}: L^2([a,b]) &\rightarrow & L^2([a,b])\\ f(t) & \mapsto & \int_{[a,b]} K_X(s,t) f(s) ds \end{array}$
Since TKX is a linear operator, it makes sense to talk about its eigenvalues λk and eigenfunctions ek, which are found solving the homogeneous Fredholm integral equation of the second kind
$\int_{[a,b]} K_X(s,t) e_k(s)\,ds=\lambda_k e_k(t)$
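In practice this integral equation is often solved numerically by discretising the kernel on a grid (a Nyström-type scheme). A small illustrative sketch, not part of the article, for the Wiener-process covariance $K_X(s,t)=\min(s,t)$ mentioned above, whose eigenvalues are known to be $\lambda_k = \big((k-\tfrac{1}{2})\pi\big)^{-2}$:

```python
import numpy as np

n = 400
t = (np.arange(n) + 0.5) / n              # midpoint grid on [0, 1]
K = np.minimum.outer(t, t)                # Wiener-process covariance K(s, t) = min(s, t)

# Discretised operator: (T f)(t_i) ~ sum_j K(t_i, t_j) f(t_j) * (1/n)
eigvals = np.linalg.eigvalsh(K / n)[::-1]

k = np.arange(1, 6)
print(eigvals[:5])                        # numerical eigenvalues
print(1.0 / ((k - 0.5) * np.pi) ** 2)     # exact eigenvalues, for comparison
```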
## Statement of the theorem
Theorem. Let Xt be a zero-mean square integrable stochastic process defined over a probability space (Ω,F,P) and indexed over a closed and bounded interval [a, b], with continuous covariance function KX(s,t).
Then KX(s,t) is a Mercer kernel and letting ek be an orthonormal basis of L2([a, b]) formed by the eigenfunctions of TKX with respective eigenvalues λk, Xt admits the following representation
$X_t=\sum_{k=1}^\infty Z_k e_k(t)$
where the convergence is in L2, uniform in t and
$Z_k=\int_{[a,b]} X_t e_k(t)\, dt$
Furthermore, the random variables Zk have zero-mean, are uncorrelated and have variance λk
$\mathrm{E}[Z_k]=0,~\forall k\in\mathbb{N} \quad\quad\mbox{and}\quad\quad \mathrm{E}[Z_i Z_j]=\delta_{ij} \lambda_j,~\forall i,j\in \mathbb{N}$
Note that by generalizations of Mercer's theorem we can replace the interval [a, b] with other compact spaces C and the Lebesgue measure on [a, b] with a Borel measure whose support is C.
## Proof
• The covariance function KX satisfies the definition of a Mercer kernel. By Mercer's theorem, there consequently exists a set {λk,ek(t)} of eigenvalues and eigenfunctions of TKX forming an orthonormal basis of L2([a,b]), and KX can be expressed as
$K_X(s,t)=\sum_{k=1}^\infty \lambda_k e_k(s) e_k(t)$
• The process Xt can be expanded in terms of the eigenfunctions ek as:
$X_t=\sum_{k=1}^\infty Z_k e_k(t)$
where the coefficients (random variables) Zk are given by the projection of Xt on the respective eigenfunctions
$Z_k=\int_{[a,b]} X_t e_k(t) \,dt$
• We may then derive
$\mathrm{E}[Z_k]=\mathrm{E}\left[\int_{[a,b]} X_t e_k(t) \,dt\right]=\int_{[a,b]} \mathrm{E}[X_t] e_k(t) dt=0$
and:
$\begin{array}[t]{rl} \mathrm{E}[Z_i Z_j]&=\mathrm{E}\left[ \int_{[a,b]}\int_{[a,b]} X_t X_s e_j(t)e_i(s) dt\, ds\right]\\ &=\int_{[a,b]}\int_{[a,b]} \mathrm{E}\left[X_t X_s\right] e_j(t)e_i(s) dt\, ds\\ &=\int_{[a,b]}\int_{[a,b]} K_X(s,t) e_j(t)e_i(s) dt \, ds\\ &=\int_{[a,b]} e_i(s)\left(\int_{[a,b]} K_X(s,t) e_j(t) dt\right) ds\\ &=\lambda_j \int_{[a,b]} e_i(s) e_j(s) ds\\ &=\delta_{ij}\lambda_j \end{array}$
where we have used the fact that the ek are eigenfunctions of TKX and are orthonormal.
• Let us now show that the convergence is in L2:
let $S_N=\sum_{k=1}^N Z_k e_k(t)$.
$\begin{align} \mathrm{E}[|X_t-S_N|^2]&=\mathrm{E}[X_t^2]+\mathrm{E}[S_N^2]-2\mathrm{E}[X_t S_N]\\ &=K_X(t,t)+\mathrm{E}\left[\sum_{k=1}^N \sum_{l=1}^N Z_k Z_l e_k(t)e_l(t) \right] -2\mathrm{E}\left[X_t\sum_{k=1}^N Z_k e_k(t)\right]\\ &=K_X(t,t)+\sum_{k=1}^N \lambda_k e_k(t)^2 -2\mathrm{E}\left[\sum_{k=1}^N \int_0^1 X_t X_s e_k(s) e_k(t) ds\right]\\ &=K_X(t,t)-\sum_{k=1}^N \lambda_k e_k(t)^2 \end{align}$
which goes to 0 by Mercer's theorem.
## Properties of the Karhunen–Loève transform
### Special case: Gaussian distribution
Since the limit in the mean of jointly Gaussian random variables is jointly Gaussian, and jointly Gaussian random (centered) variables are independent if and only if they are orthogonal, we can also conclude:
Theorem. The variables Zi have a joint Gaussian distribution and are stochastically independent if the original process {Xt}t is Gaussian.
In the gaussian case, since the variables Zi are independent, we can say more:
$\lim_{N \rightarrow \infty} \sum_{i=1}^N e_i(t) Z_i(\omega) = X_t(\omega)$
almost surely.
### The Karhunen–Loève transform decorrelates the process
This is a consequence of the independence of the Zk.
### The Karhunen–Loève expansion minimizes the total mean square error
In the introduction, we mentioned that the truncated Karhunen–Loeve expansion was the best approximation of the original process in the sense that it reduces the total mean-square error resulting of its truncation. Because of this property, it is often said that the KL transform optimally compacts the energy.
More specifically, given any orthonormal basis {fk} of L2([a, b]), we may decompose the process Xt as:
$X_t(\omega)=\sum_{k=1}^\infty A_k(\omega) f_k(t)$
where $A_k(\omega)=\int_{[a,b]} X_t(\omega) f_k(t)\,dt$
and we may approximate Xt by the finite sum $\hat{X}_t(\omega)=\sum_{k=1}^N A_k(\omega) f_k(t)$ for some integer N.
Claim. Of all such approximations, the KL approximation is the one that minimizes the total mean square error (provided we have arranged the eigenvalues in decreasing order).
[Proof]
Consider the error resulting from the truncation at the N-th term in the following orthonormal expansion:
$\epsilon_N(t)=\sum_{k=N+1}^\infty A_k(\omega) f_k(t)$
The mean-square error $\varepsilon_N^2(t)$ can be written as:
$\begin{align} \varepsilon_N^2(t)&=\mathrm{E}\left[\sum_{i=N+1}^\infty \sum_{j=N+1}^\infty A_i(\omega) A_j(\omega) f_i(t) f_j(t)\right]\\ &=\sum_{i=N+1}^\infty \sum_{j=N+1}^\infty \mathrm{E}\left[\int_{[a, b]}\int_{[a, b]} X_t X_s f_i(t)f_j(s) ds\, dt\right] f_i(t) f_j(t)\\ &=\sum_{i=N+1}^\infty \sum_{j=N+1}^\infty f_i(t) f_j(t) \int_{[a, b]}\int_{[a, b]}K_X(s,t) f_i(t)f_j(s) ds\, dt \end{align}$
We then integrate this last equality over [a, b]. The orthonormality of the fk yields:
$\int_{[a, b]} \varepsilon_N^2(t) dt=\sum_{k=N+1}^\infty \int_{[a, b]}\int_{[a, b]} K_X(s,t) f_k(t)f_k(s) ds\, dt$
The problem of minimizing the total mean-square error thus comes down to minimizing the right hand side of this equality subject to the constraint that the fk be normalized. We hence introduce βk, the Lagrangian multipliers associated with these constraints, and aim at minimizing the following function:
$Er[f_k(t),k\in\{N+1,\ldots\}]=\sum_{k=N+1}^\infty \int_{[a, b]}\int_{[a, b]} K_X(s,t) f_k(t)f_k(s) ds dt-\beta_k \left(\int_{[a, b]} f_k(t) f_k(t) dt -1\right)$
Differentiating with respect to fi(t) and setting the derivative to 0 yields:
$\frac{\partial Er}{\partial f_i(t)}=\int_{[a, b]} \left(\int_{[a, b]} K_X(s,t) f_i(s) ds -\beta_i f_i(t)\right)dt=0$
which is satisfied in particular when $\int_{[a, b]} K_X(s,t) f_i(s) \,ds =\beta_i f_i(t)$, in other words when the fk are chosen to be the eigenfunctions of TKX, hence resulting in the KL expansion.
### Explained variance
An important observation is that since the random coefficients Zk of the KL expansion are uncorrelated, the Bienaymé formula asserts that the variance of Xt is simply the sum of the variances of the individual components of the sum:
$\begin{align} \mbox{Var}[X_t]&=\sum_{k=0}^\infty e_k(t)^2 \mbox{Var}[Z_k]=\sum_{k=1}^\infty \lambda_k e_k(t)^2 \end{align}$
Integrating over [a, b] and using the orthonormality of the ek, we obtain that the total variance of the process is:
$\int_{[a,b]} \mbox{Var}[X_t] dt=\sum_{k=1}^\infty \lambda_k$
In particular, the total variance of the N-truncated approximation is $\sum_{k=1}^N \lambda_k$. As a result, the N-truncated expansion explains $\sum_{k=1}^N \lambda_k/\sum_{k=1}^\infty \lambda_k$ of the variance; and if we are content with an approximation that explains, say, 95% of the variance, then we just have to determine an $N\in\mathbb{N}$ such that $\sum_{k=1}^N \lambda_k/\sum_{k=1}^\infty \lambda_k \geq 0.95$.
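As a quick illustration of this truncation rule, here is a minimal Python sketch; the eigenvalue sequence is an artificial example, not derived from any particular covariance function:

```python
import numpy as np

# Artificial example: eigenvalues of a covariance operator, already sorted
# in decreasing order (a 1/k^2 decay is assumed here purely for illustration).
lam = 1.0 / np.arange(1, 1001) ** 2

# Fraction of the (truncated) total variance explained by the first N terms.
explained = np.cumsum(lam) / lam.sum()

# Smallest N whose truncated expansion explains at least 95% of the variance.
N = int(np.searchsorted(explained, 0.95) + 1)
print(N, explained[N - 1])
```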
### The Karhunen–Loève expansion has the minimum representation entropy property
## Principal Component Analysis
Main article: Principal Component Analysis
We have established the Karhunen–Loève theorem and derived a few properties thereof. We also noted that one hurdle in its application was the numerical cost of determining the eigenvalues and eigenfunctions of its covariance operator through the Fredholm integral equation of the second kind $\int_{[a,b]} K_X(s,t) e_k(s)\,ds=\lambda_k e_k(t)$ .
However, when applied to a discrete and finite process $\left(X_n\right)_{n\in\{1,\ldots,N\}}$, the problem takes a much simpler form and standard algebra can be used to carry out the calculations.
Note that a continuous process can also be sampled at N points in time in order to reduce the problem to a finite version.
We henceforth consider a random N-dimensional vector $X=\left(X_1~X_2~\ldots~X_N\right)^T$. As mentioned above, X could contain N samples of a signal but it can hold many more representations depending on the field of application. For instance it could be the answers to a survey or economic data in an econometrics analysis.
As in the continuous version, we assume that X is centered, otherwise we can let $X:=X-\mu_X$ (where $\mu_X$ is the mean vector of X) which is centered.
Let us adapt the procedure to the discrete case.
### Covariance matrix
Recall that the main implication and difficulty of the KL transformation is computing the eigenvectors of the linear operator associated to the covariance function, which are given by the solutions to the integral equation written above.
Define Σ, the covariance matrix of X. Σ is an N by N matrix whose elements are given by:
$\Sigma_{ij}=E[X_i X_j],\qquad \forall i,j \in \{1,\ldots,N\}$
Rewriting the above integral equation to suit the discrete case, we observe that it turns into:
$\begin{align} &\sum_{i=1}^N \Sigma_{ij} e_j=\lambda e_i\\ \Leftrightarrow \quad& \Sigma e=\lambda e \end{align}$
where $e=(e_1~e_2~\ldots~e_N)^T$ is an N-dimensional vector.
The integral equation thus reduces to a simple matrix eigenvalue problem, which explains why the PCA has such a broad domain of applications.
Since Σ is a positive definite symmetric matrix, it possesses a set of orthonormal eigenvectors forming a basis of $\R^N$, and we write $\{\lambda_i,\phi_i\}_{i\in\{1,\ldots,N\}}$ this set of eigenvalues and corresponding eigenvectors, listed in decreasing values of λi. Let also $\Phi$ be the orthonormal matrix consisting of these eigenvectors:
$\begin{align} \Phi &:=\left(\phi_1~\phi_2~\ldots~\phi_N\right)^T\\ \Phi^T \Phi &=I \end{align}$
### Principal Component Transform
It remains to perform the actual KL transformation which we will call Principal Component Transform in this case. Recall that the transform was found by expanding the process with respect to the basis spanned by the eigenvectors of the covariance function. In this case, we hence have:
$\begin{align} X &=\sum_{i=1}^N \langle \phi_i,X\rangle \phi_i\\ &=\sum_{i=1}^N \phi_i^T X \phi_i \end{align}$
In a more compact form, the Principal Component Transform of X is defined by:
$\left\{ \begin{array}{rl} Y&=\Phi^T X\\ X&=\Phi Y \end{array} \right.$
The i-th component of Y is $Y_i=\phi_i^T X$, the projection of X on $\phi_i$ and the inverse transform $X=\Phi Y$ yields the expansion of $X$ on the space spanned by the $\phi_i$:
$X=\sum_{i=1}^N Y_i \phi_i=\sum_{i=1}^N \langle \phi_i,X\rangle \phi_i$
As in the continuous case, we may reduce the dimensionality of the problem by truncating the sum at some $K\in\{1,\ldots,N\}$ such that $\frac{\sum_{i=1}^K \lambda_i}{\sum_{i=1}^N \lambda_i}\geq \alpha$ where α is the explained variance threshold we wish to set.
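The whole discrete procedure fits in a few lines of linear algebra. Below is a minimal sketch in Python/NumPy; the data matrix is synthetic, the variable names are my own, and samples are assumed to be stored as rows:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 8)) @ rng.standard_normal((8, 8))  # 500 synthetic samples of an 8-dim vector

Xc = X - X.mean(axis=0)                 # center the data
Sigma = Xc.T @ Xc / (len(Xc) - 1)       # sample covariance matrix
lam, Phi = np.linalg.eigh(Sigma)        # eigendecomposition (eigh returns ascending order)
lam, Phi = lam[::-1], Phi[:, ::-1]      # reorder so the eigenvalues decrease

Y = Xc @ Phi                            # principal component transform: each row is Phi^T x

# Keep the smallest K explaining at least 95% of the variance, then reconstruct.
K = int(np.searchsorted(np.cumsum(lam) / lam.sum(), 0.95) + 1)
X_hat = Y[:, :K] @ Phi[:, :K].T
```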
We can also reduce the dimensionality through the use of multilevel dominant eigenvector estimation (MDEE).[2]
## Examples
### The Wiener process
There are numerous equivalent characterizations of the Wiener process which is a mathematical formalization of Brownian motion. Here we regard it as the centered standard Gaussian process Wt with covariance function
$K_{W}(t,s) = \operatorname{Cov}(W_t,W_s) = \min (s,t).$
We restrict the time domain to [a,b]=[0,1] without loss of generality.
The eigenfunctions of the covariance kernel are easily determined. These are
$e_k(t) = \sqrt{2} \sin \left( \left(k - \textstyle\frac{1}{2}\right) \pi t \right)$
and the corresponding eigenvalues are
$\lambda_k = \frac{1}{(k -\frac{1}{2})^2 \pi^2}.$
[Proof]
In order to find the eigenvalues and eigenvectors, we need to solve the integral equation:
$\begin{align} \int_{[a,b]} K_W(s,t) e(s)ds&=\lambda e(t)\qquad \forall t, 0\leq t\leq 1\\ \int_0^1\min(s,t) e(s)ds&=\lambda e(t)\qquad \forall t, 0\leq t\leq 1 \\ \int_0^t s e(s) ds + t \int_t^1 e(s) ds &= \lambda e(t) \qquad \forall t, 0\leq t\leq 1 \end{align}$
differentiating once with respect to t yields:
$\int_{t}^1 e(s) ds=\lambda e'(t)$
a second differentiation produces the following differential equation:
$-e(t)=\lambda e''(t)$
The general solution of which has the form:
$e(t)=A\sin\left(\frac{t}{\sqrt{\lambda}}\right)+B\cos\left(\frac{t}{\sqrt{\lambda}}\right)$
where A and B are two constants to be determined with the boundary conditions. Setting t=0 in the initial integral equation gives e(0)=0 which implies that B=0 and similarly, setting t=1 in the first differentiation yields e' (1)=0, whence:
$\cos\left(\frac{1}{\sqrt{\lambda}}\right)=0$
which in turn implies that eigenvalues of TKX are:
$\lambda_k=\left(\frac{1}{(k-\frac{1}{2})\pi}\right)^2,\qquad k\geq 1$
The corresponding eigenfunctions are thus of the form:
$e_k(t)=A \sin\left((k-\frac{1}{2})\pi t\right),\qquad k\geq 1$
A is then chosen so as to normalize ek:
$\int_0^1 e_k^2(t) dt=1\quad \implies\quad A=\sqrt{2}$
This gives the following representation of the Wiener process:
Theorem. There is a sequence {Zi}i of independent Gaussian random variables with mean zero and variance 1 such that
$W_t = \sqrt{2} \sum_{k=1}^\infty Z_k \frac{\sin \left(\left(k - \frac{1}{2}\right) \pi t\right)}{ \left(k - \frac{1}{2}\right) \pi}.$
Note that this representation is only valid for $t\in[0,1].$ On larger intervals, the increments are not independent. As stated in the theorem, convergence is in the L2 norm and uniform in t.
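A truncated version of this series is easy to simulate. The following sketch draws one approximate sample path of the Wiener process on [0,1]; the truncation order K = 200 is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 1000)
K = 200                                   # truncation order of the expansion

k = np.arange(1, K + 1)[:, None]
Z = rng.standard_normal((K, 1))           # independent N(0,1) coefficients Z_k
W = np.sum(np.sqrt(2) * Z * np.sin((k - 0.5) * np.pi * t) / ((k - 0.5) * np.pi), axis=0)

print(W[:5])                              # values of the approximate path at the first grid points
```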
### The Brownian bridge
Similarly the Brownian bridge $B_t=W_t-tW_1$ which is a stochastic process with covariance function
$K_B(t,s)=\min(t,s)-ts$
can be represented as the series
$B_t = \sum_{k=1}^\infty Z_k \frac{\sqrt{2} \sin(k \pi t)}{k \pi}$
## Applications
Adaptive optics systems sometimes use K–L functions to reconstruct wave-front phase information (Dai 1996, JOSA A).
Karhunen–Loève expansion is closely related to the Singular Value Decomposition. The latter has myriad applications in image processing, radar, seismology, and the like. If one has independent vector observations from a vector valued stochastic process then the left singular vectors are maximum likelihood estimates of the ensemble KL expansion.
### Applications in signal estimation and detection
#### Detection of a known continuous signal S(t)
In communications, we often have to decide whether a signal received from a noisy channel contains valuable information. The following hypothesis test is used to detect a continuous signal s(t) from the channel output X(t), where N(t) is the channel noise, usually assumed to be a zero-mean Gaussian process with correlation function $R_{N} (t, s) = E[N(t)N(s)]$
$H: X(t) = N(t)$,
$K: X(t) = N(t)+s(t), t\in(0,T)$.
#### Signal detection in white noise
When the channel noise is white, its correlation function is $R_{N}(t) = \frac{N_0}{2} \delta (t)$ and it has a constant power spectral density. In a physically practical channel the noise power is finite, so $S_{N}(f) = \frac{N_{0}}{2} \text{ for } |f|<\omega, 0 \text{ for } |f|>\omega$. Then the noise correlation function is a sinc function with zeros at $\frac{n}{2\omega}, n = \ldots,-1,0,1,\ldots$ Since samples taken at these zeros are uncorrelated and Gaussian, they are independent. Thus we can take samples from X(t) with time spacing $\Delta t = \frac{1}{2\omega}$ within (0,T). Let $X_i = X(i\Delta t)$. We have a total of $n = \frac{T}{\Delta t} = T(2\omega) = 2\omega T$ i.i.d. samples $\{X_1, X_2,\ldots,X_n\}$ with which to develop the likelihood ratio test. Define the sampled signal $S_i = S(i\Delta t)$; the problem becomes,
$H: X_i = N_i$,
$K: X_i = N_i + S_i, i = 1,2...n.$
The log-likelihood ratio is $\mathcal{L}(\underline{x}) = \frac{\sum^n_{i=1} (2S_i x_i - S_i^2)}{2\sigma^2}$. Since the $S_i^2$ term does not depend on the data, thresholding $\mathcal{L}$ is equivalent to thresholding $\Delta t \sum ^n_{i = 1} S_i x_i = \sum^n_{i=1} S(i\Delta t)x(i\Delta t)\Delta t \gtrless \lambda_2$. As $\Delta t \rightarrow 0$, let $G = \int^T_0 S(t)x(t)dt$. Then G is the test statistic and the Neyman-Pearson optimum detector is $G(\underline{x}) > G_0 \Rightarrow K$, $< G_0 \Rightarrow H$. As G is Gaussian, we can characterize it by finding its mean and variance. Then we get
$H: G \sim N(0,\frac{N_{0}E}{2})$
$K: G \sim N(E,\frac{N_{0}E}{2})$, where $E = \int^T_{0} S^2(t)dt$ is the signal energy. The false alarm error $\alpha = \int^{\infty}_{G_{0}} N(0,\frac{N_{0}E}{2})dG \Rightarrow G_0 = \sqrt{\frac{N_0 E}{2}} \Phi^{-1}(1-\alpha)$
And the probability of detection: $\beta = \int^{\infty}_{G_0} N(E, \frac{N_0 E}{2})dG = 1-\Phi\left(\frac{G_0 - E}{\sqrt{\frac{N_0 E}{2}}}\right) = \Phi \left[\sqrt{\frac{2E}{N_0}} - \Phi^{-1}(1-\alpha)\right]$, where $\Phi(\cdot)$ is the cdf of the standard normal variable.
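A small Monte-Carlo sketch of this correlator detector is given below. The signal, the noise level N0 and the false-alarm rate are assumed values chosen only for illustration; the discrete noise variance N0/(2Δt) per sample is the usual approximation of continuous white noise of two-sided density N0/2.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
T, n = 1.0, 2000
t = np.linspace(0.0, T, n, endpoint=False)
dt = T / n

S = np.sin(2 * np.pi * 5 * t)                      # assumed known signal s(t)
N0 = 0.1                                           # assumed noise level (two-sided PSD N0/2)
E = np.sum(S ** 2) * dt                            # signal energy
alpha = 0.05                                       # assumed false-alarm rate
G0 = np.sqrt(N0 * E / 2) * norm.ppf(1 - alpha)     # Neyman-Pearson threshold

# One observation simulated under K: x(t) = s(t) + n(t)
x = S + rng.standard_normal(n) * np.sqrt(N0 / (2 * dt))
G = np.sum(S * x) * dt                             # correlator statistic G = integral of s(t) x(t) dt
print("decide K (signal present)" if G > G0 else "decide H (noise only)")
```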
#### Signal detection in color noise
When N(t) is colored (correlated in time) Gaussian noise with zero mean and covariance function $R_N(t,s) = E[N(t)N(s)]$, we cannot obtain independent discrete observations by evenly spaced sampling. Instead, we can use the K–L expansion to decorrelate the noise process and get independent Gaussian observation 'samples'. The K–L expansion of N(t) is $N(t) = \sum^{\infty}_{i=1} N_i \Phi_i(t), 0<t<T$, where $N_i =\int N(t)\Phi_i(t)\,dt$ and the orthonormal basis $\{\Phi_i(t)\}$ is generated by the kernel $R_N(t,s)$, i.e., the solutions to $\int ^T _0 R_N(t,s)\Phi_i(s)ds = \lambda_i \Phi_i(t)$, with $var[N_i] = \lambda_i$. Expanding the signal as $S(t) = \sum^{\infty}_{i = 1}S_i\Phi_i(t)$, where $S_i = \int^T _0 S(t)\Phi_i(t)dt, 0<t<T$, we get $X_i = \int^T _0 X(t)\Phi_i(t) dt = N_i$ under H and $N_i + S_i$ under K. Let $\overline{X} = \{X_1,X_2,\dots\}$; we have
• $\{N_i\}$ are independent Gaussian r.v.'s with variance $\lambda_i$
• under H: $\{X_i\}$ are independent Gaussian r.v.'s. $f_H[x(t)|0<t<T] = f_H(\underline{x}) = \prod^{\infty} _{i=1} \frac{1}{\sqrt{2\pi \lambda_i}}\exp\left[-\frac{x_i^2}{2 \lambda_i}\right]$
• under K: $\{X_i - S_i\}$ are independent Gaussian r.v.'s. $f_K[x(t)|0<t<T] = f_K(\underline{x}) = \prod^{\infty} _{i=1} \frac{1}{\sqrt{2\pi \lambda_i}}\exp\left[-\frac{(x_i - S_i)^2}{2 \lambda_i}\right]$
Hence, the log-LR is given by $\mathcal{L}(\underline{x}) = \sum^{\infty}_{i=1} \frac{2S_i x_i - S_i^2}{2\lambda_i}$ and the optimum detector is $G = \sum^{\infty}_{i=1} \frac{S_i x_i}{\lambda_i} > G_0 \Rightarrow K, < G_0 \Rightarrow H.$ Define $k(t) = \sum^{\infty}_{i=1} \frac{S_i}{\lambda_i} \Phi_i(t), 0<t<T,$ then $G = \int^T _0 k(t)x(t)dt$.
##### How to find k(t)?
Since $\int^T_0 R_N(t,s)k(s)ds = \sum^{\infty}_{i=1} \frac{S_i}{\lambda_i} \int^T _0 R_N(t,s)\Phi_i (s) ds = \sum^{\infty}_{i=1} S_i \Phi_i(t) = S(t)$, k(t) is the solution to $\int^T_0 R_N(t,s)k(s)ds = S(t)$. If N(t) is wide-sense stationary, this becomes $\int^T_0 R_N(t-s)k(s)ds = S(t)$, which is known as the Wiener-Hopf equation. The equation can be solved by taking the Fourier transform, but this is not practically realizable since an infinite spectrum needs spectral factorization. A special case in which k(t) is easy to calculate is white Gaussian noise: $\int^T_0 \frac{N_0}{2}\delta(t-s)k(s)ds = S(t) \Rightarrow k(t) = C S(t), 0<t<T$. The corresponding impulse response is h(t) = k(T-t) = C S(T-t). Letting C = 1, this is just the result we arrived at in the previous section for detecting a signal in white noise.
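In practice the integral equation for k(t) can also be attacked numerically by discretizing it into a linear system. The sketch below does this for an assumed exponential noise covariance and an assumed signal; both choices are illustrative, not taken from the text above.

```python
import numpy as np

# Discretize  integral_0^T R_N(t,s) k(s) ds = S(t)  on a uniform grid.
T, n = 1.0, 400
t = np.linspace(0.0, T, n)
ds = T / (n - 1)

R = np.exp(-np.abs(t[:, None] - t[None, :]))   # assumed covariance R_N(t,s) = exp(-|t-s|)
S = np.sin(2 * np.pi * t)                      # assumed signal s(t)

k = np.linalg.solve(R * ds, S)                 # grid values of k(t) (simple rectangle-rule quadrature)
```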
##### The test threshold for Neyman-Pearson optimum detector?
Since X(t) is a Gaussian process, $G = \int^T_0 k(t)x(t)dt$ is a Gaussian random variable that can be characterized by its mean and variance.
• $E[G|H] = \int^T_0 k(t)E[x(t)|H]dt = 0$
• $E[G|K] = \int^T_0 k(t)E[x(t)|K]dt = \int^T_0 k(t)S(t)dt \equiv \rho$
$E[G^2|H] = \int^T_0\int^T_0 k(t)k(s) R_N(t,s)dtds = \int^T_0 k(t)(\int^T_0 k(s)R_N(t,s)ds)=\int^T_0 k(t)S(t)dt = \rho$
• $var[G|H] = E[G^2|H] - (E[G|H])^2 = \rho$
$E[G^2|K]=\int^T_0\int^T_0k(t)k(s)E[x(t)x(s)]dtds = \int^T_0\int^T_0k(t)k(s)(R_N(t,s) +S(t)S(s))dtds = \rho + \rho^2$
• $var[G|K] = E[G^2|K] - (E[G|K])^2 = \rho + \rho^2 -\rho^2 = \rho.$
Hence, we obtain
$H: G \sim N(0,\rho)$
$K: G \sim N(\rho, \rho)$
The false alarm error is $\alpha = \int^{\infty}_{G_0} N(0,\rho)dG = 1 - \Phi(\frac{G_0}{\sqrt{\rho}})$, so the test threshold for the Neyman-Pearson optimum detector is $G_0 = \sqrt{\rho} \Phi^{-1} (1-\alpha)$. Its power of detection is $\beta = \int^{\infty}_{G_0} N(\rho, \rho)dG = \Phi [\sqrt{\rho} - \Phi^{-1}(1 - \alpha)]$. When the noise is a white Gaussian process, $\rho = \int^T_0 k(t)S(t)dt = \int^T_0 S(t)^2 dt = E$, the signal energy.
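The threshold and the power of this test are one-liners once ρ and α are fixed. A tiny sketch with assumed values for both:

```python
from scipy.stats import norm

rho, alpha = 4.0, 0.01                               # assumed values of rho and the false-alarm rate
G0 = rho ** 0.5 * norm.ppf(1 - alpha)                # Neyman-Pearson threshold
beta = norm.cdf(rho ** 0.5 - norm.ppf(1 - alpha))    # probability of detection
print(G0, beta)
```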
##### Prewhitening
For some types of colored noise, a typical practice is to add a prewhitening filter before the matched filter to transform the colored noise into white noise. For example, suppose N(t) is a wide-sense stationary colored noise with correlation function $R_N(\tau) = \frac{B N_0}{4} e^{-B|\tau|}$ and power spectral density $S_N(f) = \frac{N_0}{2(1+(\frac{w}{B})^2)}$. The transfer function of the prewhitening filter is $H(f) = 1 + j \frac{w}{B}$.
#### Detection of a Gaussian random signal in AWGN
When the signal we want to detect from the noisy channel is also random, for example a white Gaussian process X(t), we can still apply the K–L expansion to obtain an independent sequence of observations. In this case, the detection problem is described as follows:
$H_0 : Y(t) = N(t)$
$H_1 : Y(t) = N(t) + X(t), 0<t<T.$ X(t) is a random process with correlation function $R_X(t,s) = E\{X[t]X[s]\}$
The K–L expansion of X(t) is $X(t) = \sum^{\infty}_{i=1} X_i \Phi_i(t)$, where $X_i =\int^T_0 X(t) \Phi_i(t)dt$ and the $\Phi_i(t)$'s are solutions to $\int^T_0 R_X(t,s)\Phi_i(s)ds= \lambda_i \Phi_i(t)$. So the $X_i$'s are an independent sequence of r.v.'s with zero mean and variance $\lambda_i$. Expanding Y(t) and N(t) by $\Phi_i(t)$, we get $Y_i = \int^T_0 Y(t)\Phi_i(t)dt = \int^T_0 [N(t) + X(t)]\Phi_i(t)dt = N_i + X_i$, where $N_i = \int^T_0 N(t)\Phi_i(t)dt.$ As N(t) is Gaussian white noise, the $N_i$'s are an i.i.d. sequence of r.v.'s with zero mean and variance $\frac{N_0}{2}$, so the problem is simplified as follows:
$H_0: Y_i = N_i$
$H_1: Y_i = N_i + X_i$
The Neyman-Pearson optimal test: $\Lambda = \frac{f_Y|H_1}{f_Y|H_0} = Ce^{\sum^{\infty}_{i=1}\frac{y_i^2}{2} \frac{\lambda_i}{\frac{N_0}{2}(\frac{N_0}{2} + \lambda_i)}}$, so the log-likelihood ratio is $\mathcal{L} = \ln(\Lambda) = K +\sum^{\infty}_{i=1}\frac{y_i^2}{2} \frac{\lambda_i}{\frac{N_0}{2}(\frac{N_0}{2} + \lambda_i)}$. Since $\hat{X_i} = \frac{\lambda_i}{\frac{N_0}{2} + \lambda_i}Y_i$ is just the minimum-mean-square estimate of $X_i$ given $Y_i$, $\mathcal{L} = K + \frac{1}{N_0} \sum^{\infty}_{i=1} Y_i \hat{X_i}$. The K–L expansion has the following property: if $f(t) = \sum f_i \Phi_i(t), g(t) = \sum g_i \Phi_i(t)$, where $f_i = \int_0^T f(t) \Phi_i(t)dt, g_i = \int_0^T g(t)\Phi_i(t)dt$, then $\sum^{\infty}_{i=1} f_i g_i = \int^T_0 g(t)f(t)dt$. So letting $\hat{X}(t|T) = \sum^{\infty}_{i=1} \hat{X_i}\Phi_i(t)$, we get $\mathcal{L} = K + \frac{1}{N_0} \int^T_0 Y(t) \hat{X}(t|T)dt$. A noncausal filter Q(t,s) can be used to obtain the estimate through $\hat{X}(t|T) = \int^T_0 Q(t,s)Y(s)ds$. By the orthogonality principle, Q(t,s) satisfies $\int^T_0 Q(t,s)R_X(s,\lambda)ds + \frac{N_0}{2} Q(t, \lambda) = R_X(t, \lambda), 0 < \lambda < T, 0<t<T.$ However, for practical reasons, it is necessary to further derive the causal filter h(t,s), where h(t,s) = 0 for s > t, to get the estimate $\hat{X}(t|t)$. Specifically, $Q(t,s) = h(t,s) + h(s, t) - \int^T_0 h(\lambda, t)h(s, \lambda)d\lambda$.
## Notes
1. Kosambi, D. D. (1943), "Statistics in Function Space", Journal of the Indian Mathematical Society 7: 76–88, MR9816 .
2. X. Tang, “Texture information in run-length matrices,” IEEE Transactions on Image Processing, vol. 7, No. 11, pp. 1602- 1609, Nov. 1998
## References
• Stark, Henry; Woods, John W. (1986). Probability, Random Processes, and Estimation Theory for Engineers. Prentice-Hall, Inc. ISBN 0-13-711706-X.
• Ghanem, Roger; Spanos, Pol (1991). Stochastic finite elements: a spectral approach. Springer-Verlag. ISBN 0-387-97456-3.
• Guikhman, I.; Skorokhod, A. (1977). Introduction a la Théorie des Processus Aléatoires. Éditions MIR.
• Simon, B. (1979). Functional Integration and Quantum Physics. Academic Press.
• Karhunen, Kari (1947). "Über lineare Methoden in der Wahrscheinlichkeitsrechnung". Ann. Acad. Sci. Fennicae. Ser. A. I. Math.-Phys. 37: 1–79.
• Loève, M. (1978). Probability theory. Vol. II, 4th ed. Graduate Texts in Mathematics 46. Springer-Verlag. ISBN 0-387-90262-7.
• Dai, G. (1996). "Modal wave-front reconstruction with Zernike polynomials and Karhunen–Loeve functions". JOSA A 13 (6): 1218. Bibcode:1996JOSAA..13.1218D. doi:10.1364/JOSAA.13.001218.
• Wu B., Zhu J., Najm F.(2005) "A Non-parametric Approach for Dynamic Range Estimation of Nonlinear Systems". In Proceedings of Design Automation Conference(841-844) 2005
• Wu B., Zhu J., Najm F.(2006) "Dynamic Range Estimation". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 25 Issue:9 (1618-1636) 2006
• Jorgensen, Palle E. T.; Song, Myung-Sin (2007). "Entropy Encoding, Hilbert Space and Karhunen–Loeve Transforms". arXiv:math-ph/0701056. Bibcode 2007JMP....48j3503J.
• Mathar, Richard J. (2008). "Karhunen–Loeve basis functions of Kolmogorov turbulence in the sphere". Baltic Astronomy 17 (3/4): 383–398. arXiv:0805.3979. Bibcode:2008BaltA..17..383M.
• Mathar, Richard J. (2009). "Modal decomposition of the von-Karman covariance of atmospheric turbulence in the circular entrance pupil". arXiv:0911.4710 [astro-ph.IM]. Bibcode 2009arXiv0911.4710M.
• Mathar, Richard J. (2010). "Karhunen–Loeve basis of Kolmogorov phase screens covering a rectangular stripe". Waves in Random and Complex Media 20 (1): 23–35. Bibcode:2010WRCM...20...23M. doi:10.1080/17455030903369677.
|
http://mathhelpforum.com/calculus/29479-total-revenue.html
|
# Thread:
1. ## Total revenue from
Assume the total revenue from the sale of x items is given by R(x) = ln(8x+1),
while the total cost to produce x items is C(x) = x/5. Find the approximate number of items that should be manufactured so that the profit, R(x) - C(x), is maximum.
Thank You all.
2. Originally Posted by ArmiAldi
Assume the total revenue from the sale of x items is given by R(x) = ln(8x+1),
while the total cost to produce x items is C(x) = x/5. Find the approximate number of items that should be manufactured so that the profit, R(x) - C(x), is maximum.
Thank You all.
Differentiate the function $P(x) = R(x) - C(x) = \ln (8x + 1) - \frac{x}{5}$ with respect to x. (You know how to differentiate this, right?)
Put that derivative equal to zero to find the x-coordinate of the stationary point. Test the nature of this solution to prove that this stationary point is a maximum turning point.
Then the value of x found is the answer to the question.
3. Originally Posted by mr fantastic
Differentiate the function $P(x) = R(x) - C(x) = \ln (8x + 1) - \frac{x}{5}$ with respect to x. (You know how to differentiate this, right?)
Put that derivative equal to zero to find the x-coordinate of the stationary point. Test the nature of this solution to prove that this stationary point is a maximum turning point.
Then the value of x found is the answer to the question.
$\frac{dP}{dx} = \frac{8}{8x+1} - \frac{1}{5}$.
$\frac{dP}{dx} = 0 \Rightarrow 0 = \frac{8}{8x+1} - \frac{1}{5} \Rightarrow \frac{8}{8x+1} = \frac{1}{5} \Rightarrow 40 = 8x + 1 \Rightarrow .....$.
The sign test shows that the value of x corresponds to a maximum turning point.
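For anyone who wants to verify the working symbolically, a short sympy sketch (my own cross-check, using the same P(x) as above):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
P = sp.log(8 * x + 1) - x / 5                      # profit P(x) = R(x) - C(x)
crit = sp.solve(sp.Eq(sp.diff(P, x), 0), x)        # stationary point of P
second = sp.diff(P, x, 2).subs(x, crit[0])         # second derivative there (negative => maximum)
print(crit, second < 0)
```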
|
http://nrich.maths.org/2151
|
# Circles in Circles
##### Stage: 5 Challenge Level:
Take three unit circles, each touching the other two. Construct three circles $C_1$, $C_2$ and $C_3$, with radii $r_1$, $r_2$ and $r_3$, respectively, as in the figure below. The circles that are tangent to all three unit circles are $C_1$ and $C_3$, with $C_1$ the smaller of these. The circle through the three points of tangency of the unit circles is $C_2$. Find the radii $r_1$, $r_2$ and $r_3$, and show that $r_1r_3=r_2^2$.
|
http://nrich.maths.org/1944/solution
|
# Napoleon's Theorem
##### Stage: 4 and 5 Challenge Level:
Triangle $ABC$ has equilateral triangles drawn on its edges. Points $P$, $Q$ and $R$ are the centres of the equilateral triangles. Experimentation with the interactive diagram leads to the conjecture that $PQR$ is an equilateral triangle. There are many ways to prove this result. Here we have chosen two methods, one which uses only the cosine rule and one which uses complex numbers to represent vectors, and multiplication by complex numbers to rotate the vectors by 60 degrees.
Another proof using a tessellation of the plane is discussed on the Cut-the-knot website.
First the proof using the Cosine Rule.
The sides of triangle $ABC$ are written as $a, b$ and $c$. Centroids of equilateral triangles are at the intersection of the altitudes so $\angle PAB$ and $\angle RAC$ are both 30 degrees. Hence
$$AP = {2\over 3}.{\sqrt 3 c\over 2}= {c\over \sqrt 3}$$ and $$AR = {2\over 3}.{\sqrt 3 b\over 2}= {b\over \sqrt 3}.$$
It follows that $\angle PAR = (\angle A + 60)$ degrees. By the cosine rule
$$PR^2 = AP^2 + AR^2 - 2AP.AR \cos (\angle A+60) = {1\over 3}\left(c^2 + b^2 - 2bc \cos (\angle A + 60)\right) \quad (1).$$
Now $\cos (\angle A + 60) = {1\over 2}\cos A - {\sqrt 3\over 2}\sin A$ and, from $\triangle ABC$: $\cos A = {b^2 + c^2 - a^2 \over 2bc}$ and $\sin A = {2{\rm Area}\triangle ABC\over bc}$. Substituting for $\cos (\angle A + 60)$ in (1) and simplifying the expression gives:
$$PR^2 = {1\over 3}\left[{a^2 + b^2 + c^2\over 2} + 2\sqrt 3 {\rm Area}\triangle ABC\right].$$
This formula is completely symmetric in $a, b$ and $c$ and it follows that $RQ^2$ and $QP^2$ have the same value and that $\triangle PQR$ is equilateral.
Next the proof using complex numbers as vectors.
We use $\lambda = e^{\pi i/3}$ so that $\lambda ^2 = \lambda - 1$.
Also multiplying a complex number by $\lambda$ rotates it by 60 degrees.
Referring to the given diagram let $A, B, C$ be represented by the complex numbers $a, b, c$. The third vertex of the equilateral triangle drawn on $AB$ is represented by the complex number $a+ \lambda (b-a)$. Therefore the centre of this triangle P is represented by $p$ where
$$p = {1\over 3}([2 - \lambda ]a +[1 +\lambda ]b).$$ Similarly $$q = {1\over 3}([2 - \lambda ]b +[1 + \lambda ]c),$$
and
$$r = {1\over 3}([2 - \lambda ]c +[1 + \lambda ]a).$$
To show that $PQR$ is equilateral it is sufficient to show that $r - q = \lambda [p - q]$ and this follows using simple algebra and $\lambda ^ 2 = \lambda - 1$.
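A quick numerical sanity check of the complex-number argument, with three randomly chosen vertices (the variable names follow the proof above):

```python
import numpy as np

rng = np.random.default_rng(3)
a, b, c = rng.standard_normal(3) + 1j * rng.standard_normal(3)   # random triangle ABC in the plane

lam = np.exp(1j * np.pi / 3)                  # multiplication by lam rotates a vector by 60 degrees
p = ((2 - lam) * a + (1 + lam) * b) / 3
q = ((2 - lam) * b + (1 + lam) * c) / 3
r = ((2 - lam) * c + (1 + lam) * a) / 3

print(abs(p - q), abs(q - r), abs(r - p))     # the three side lengths agree, so PQR is equilateral
```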
|
http://mathhelpforum.com/pre-calculus/22938-linear-programming-need-help-one-problem-print.html
|
# Linear Programming-Need help with one problem
• November 16th 2007, 08:49 PM
soly_sol
Linear Programming-Need help with one problem
An objective function and a system of linear inequalities representing constraints are given. Graph the system of inequalities representing the constraints. Find the value of the objective function at each corner of the graphed region. Use these values to determine the maximum value of the objective function and the values of x and y for which the maximum occurs.
Objective Function z = 19x + 4y
Constraints 0 < or equal to x < or equal to 10
0 < or equal to y < or equal to 5
3x + 2y > or equal to 6
• November 17th 2007, 04:44 AM
earboth
1 Attachment(s)
Quote:
Originally Posted by soly_sol
An objective function and a system of linear inequalities representing constraints are given. Graph the system of inequalities representing the constraints. Find the value of the objective function at each corner of the graphed region. Use these values to determine the maximum value of the objective function and the values of x and y for which the maximum occurs.
Objective Function z = 19x + 4y
Constraints 0 < or equal to x < or equal to 10
0 < or equal to y < or equal to 5
3x + 2y > or equal to 6
Hello,
the objective function has a straight line as it's graph and the value for z correspond with the y-intercept of this line:
$z = 19x + 4y~\implies~y=-\frac{19}{4}x+\underbrace{\frac z4}_{\text{y-intercept}}$ . That means: Take the y-intercept from the drawing and multiply it by 4 to get z.
Constraints:
$\left \{ \begin{array}{l}x\geq 0 \wedge x \leq 10 \\ y\geq0 \wedge y\leq 5 \\y \geq -\frac32 x + 3\end{array} \right.$ These inequalities will give a pentagon.
Now draw parallel lines through the vertices of the pentagon which have the slope $m = -\frac{19}{4}$. The greater the y-intercept the greater the value for z. If you use the point (10, 5) then the function with the greatest y-intercept is:
$y=-\frac{19}{4}x+\frac {210}{4}$ . Because $\frac14 z = \frac {210}{4} ~\implies~z=210$
Remark:
(1) I forgot to draw the line: $y = -\frac{19}{4} x +5$
(2) the axes have different scales!
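A numerical cross-check of this graphical solution with scipy (maximizing 19x + 4y is done by minimizing its negative):

```python
from scipy.optimize import linprog

res = linprog(c=[-19, -4],                    # minimize -z = -19x - 4y
              A_ub=[[-3, -2]], b_ub=[-6],     # 3x + 2y >= 6 rewritten as -3x - 2y <= -6
              bounds=[(0, 10), (0, 5)])
print(res.x, -res.fun)                        # optimum at x = 10, y = 5 with z = 210
```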
|
http://mathhelpforum.com/calculus/25836-finding-maximum-minimum.html
|
# Thread:
1. ## Finding the maximum and minimum
Use analytic methods to find the extreme values of the function on the interval and where they occur.
Extreme values includes local and absolute max and min.
Analytic methods meaning I can't graph anything.
$g(x)= \sin (x + \frac{\pi}{4})$
To find the critical points i need to find when y'= 0.
$g'(x)=\cos(x + \frac{\pi}{4})$
I got the points $\frac{\pi}{4}, \frac{5\pi}{4}$
When plugged into g(x) they yield 1 and -1, respectively.
These are only 2 critical points; I need 4.
Thanks for the help!
2. ## Another Problem
Derivative is $x^{2/3} + \frac{2(x+2)}{3x^{1/3}}$
The original function is $x^{2/3}(x+2)$
The answer says the critical points are 0 and -4/5.
The points appear to be 0, and -2.
What did I do wrong?
3. Originally Posted by Truthbetold
Derivative is $x^{2/3} + \frac{2(x+2)}{3x^{1/3}}$
The original function is $x^{2/3}(x+2)$
The answer says the critical points are 0 and -4/5.
The points appear to be 0, and -2.
What did I do wrong?
$x^{2/3} + \frac{2(x+2)}{3x^{1/3}} = 0$
Multiply both sides by $x^{1/3}$. (Note that x = 0 is not in the domain of the derivative, so we lose nothing by doing this.)
$x + \frac{2}{3}(x + 2) = 0$
So I'm getting that x = -4/5 is a critical point.
x = 0 is defined to be a critical point since the derivative doesn't exist there.
-Dan
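For completeness, the algebra above can be checked with a short sympy computation (x = 0 is handled separately, since the derivative is undefined there):

```python
import sympy as sp

x = sp.symbols('x')
# After multiplying the derivative by x**(1/3) (valid for x != 0), the equation to solve is:
print(sp.solve(sp.Eq(x + sp.Rational(2, 3) * (x + 2), 0), x))   # -> [-4/5]
```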
|
http://math.stackexchange.com/questions/254240/finding-the-spectrum-of-the-schrodinger-operator
|
# Finding the spectrum of the Schrodinger operator
Let $H(f) = -f'' + V(x) f$ be the Schrodinger operator on $\mathbb R$. I am trying to calculate the spectrum (eigenvalues) of the operator $H$ in $L^2(\mathbb R)$ for various choices of $V$. In particular, how does one calculate the spectrum of $H$ if $V(x) = - \frac{C_1}{\cosh^2(C_2 x)}$, $C_1 , C_2 > 0$, or $V(x) = e^x$?
I know that one can find the spectrum by explicitly solving the differential equation $H(f) = Ef$, but I am not sure how to do so.
-
## 1 Answer
You have to solve the time-independent Schroedinger equation,
$$H[f_n](x) = E_n f_n(x)$$
where $E_n$ is an eigenvalue and $f_n(x)$ is the corresponding eigenfunction. Just write out the equation as a plain old differential equation and use the standard techniques you know for solving it. Each value of $E_n$ for which a solution exists is an eigenvalue, and the corresponding solution is the eigenfunction.
If you're doing the analogous problem with multiple variables, you solve it as a partial differential equation instead, i.e. you'll usually have to perform separation of variables first.
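If a closed-form solution proves elusive, the discrete (bound-state) part of the spectrum can at least be approximated by discretizing the operator on a large interval with finite differences. A rough sketch, taking $C_1 = C_2 = 1$ as assumed values (the $\cosh^{-2}$ well is of Pöschl–Teller type, for which exact bound states are known and can be used to check the numerics):

```python
import numpy as np

# Finite-difference approximation of H f = -f'' + V(x) f on a large box
# with Dirichlet boundary conditions (illustrative only; C1 = C2 = 1 assumed).
L, n = 20.0, 2000
x = np.linspace(-L, L, n)
h = x[1] - x[0]

V = -1.0 / np.cosh(x) ** 2
H = (np.diag(2.0 / h**2 + V)
     + np.diag(-np.ones(n - 1) / h**2, 1)
     + np.diag(-np.ones(n - 1) / h**2, -1))

E = np.linalg.eigvalsh(H)
print(E[:3])        # negative values approximate bound-state energies; the rest mimic the continuum
```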
-
How does one solve the equation $H(f_n) = E_n f_n$? I tried entering this in Wolfram alpha for various values of $E_n$ and it does not seem to give a closed form solution. – user15464 Dec 9 '12 at 13:28
Ah, well now you're asking the question "How do I solve this differential equation?" for a couple of specific differential equations. I'll try to expand on that more soon, when I have time. It would help if you edit your question to make it more clear that you're looking for help solving these specific differential equations. – David Zaslavsky Dec 9 '12 at 20:42
|
http://physics.stackexchange.com/questions/24842/why-is-this-charge-distribution-in-correct-capacitors-in-parallel-and-series
|
# Why is this charge distribution in correct? Capacitors in parallel and series
Please consider the circuit diagram below, in particular, look at the capacitors enclosed by the green loop.
*Note that the green loop and the (+) and (-) charges on the plates were drawn by me, the original circuit has none of these.
Is the charge distribution on the plates correct? The two capacitors in the green loop have a + and - plate connected by a wire, which by definition should be in series. The three -s that join up at point d evenly distribute themselves (split) into two paths and add charge to the two plates near point b (sorry for the poor terminology here).
But the book treats the green loop capacitors in parallel, which must imply the configuration.
I am awfully confused how $\ C_1$'s right (+) plate managed to induce $\ C_2$'s right plate to (-) and how the left plate of $\ C_2$ has (+) plate when it's connected to the negative terminal of the battery.
Also, I've noticed that a lot of people often just attack these problems based on the geometry of the circuits and I think they would make the same mistake as my book.
-
## 1 Answer
Your charge distribution is correct, though I usually don't focus on using a "correct" $\pm$ distribution in capacitors--if your sign is incorrect, you get a negative value of charge on the + plate. No biggie. In complex situations, it is sometimes even impossible to predict charge distribution without solving the circuit.
And your book is incorrect. $C_1-C_2$ are in series, though the entire branch is in parallel with $C_3$ and its opposite branch.
Parallel is when the current is split, while series is when current is constant. Series is a single wire with an in and an out, with components along the wire. Parallel is when there are many wires with their ends twisted together. Current goes in/out through the twisted ends.
You seem to have grasped that, though :)
Out of curiosity, which book is this? (the capacitors look like they're from Resnick)
-
I think they are, but they have no solutions. I grabbed from another university's website lol. But how do you know whether it is in series/parallel if you don't do what I do? Because the geometry of this circuit could be very misleading as many people would fall for. – jak May 4 '12 at 18:09
@jak look at my wires explanation. Here, you have one wire threading through $C_1,C_2$. Current goes in one end and out the other. No other inlets/outlets. So series. Just check if an ant walking down one end is guaranteed to come out the other, and will pass each component exactly once during its journey. – Manishearth♦ May 4 '12 at 18:40
|
http://crypto.stackexchange.com/questions/3153/sha-256-vs-any-256-bits-of-sha-512-which-is-more-secure/3156
|
“SHA-256” vs “any 256 bits of SHA-512”, which is more secure?
In terms of security strength, Is there any difference in using the SHA-256 algorithm vs using any random 256 bits of the output of the SHA-512 algorithm?
Similarly, what is the security difference between using SHA-224 and using any random 224 bits of the SHA-256 output?
-
This may be a partial dupe, but I think this does get to levels which are better suited over on Crypto. – Rory Alsop Jul 6 '12 at 10:03
2 Answers
SHA-512 truncated to 256 bits is as safe as SHA-256 as far as we know. The NIST did basically that with SHA-512/256 introduced March 2012 in FIPS 180-4 (because it is faster than SHA-256 when implemented in software on many 64-bit CPUs). SHA-224 is just as safe as using 224 bits of SHA-256, because that's basically how SHA-224 is constructed. What bits are kept (provided that's fixed) is immaterial to security, but for compliance to NIST specification, the left bits shall be kept.
As stated in this other answer, in the general case of a hash function only assumed to be collision-resistant or preimage-resistant, restricting its output can make it entirely insecure. A trivial example is the 512-bit function obtained by appending 256 zeros to the output of SHA-256, which is both collision-resistant and preimage-resistant, but trivially insecure when restricted to its 256 right bits.
The stated design goal of SHA-2 functions are preimage-resistance and collision-resistance: "The hash algorithms specified in this Standard are called secure because, for a given algorithm, it is computationally infeasible 1) to find a message that corresponds to a given message digest, or 2) to find two different messages that produce the same message digest". Therefore, truncation of SHA-2 functions is not playing by the book.
Update: FIPS 180-4, which defines SHA-2 functions SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224 and SHA-512/256, explicitly endorses truncation in its section 7: "Some application may require a hash function with a message digest length different than those provided by the hash functions in this Standard. In such cases, a truncated message digest may be used, whereby a hash function with a larger message digest length is applied to the data to be hashed, and the resulting message digest is truncated by selecting an appropriate number of the leftmost bits".
Truncation of SHA-2 functions is safe. That's the very principle used to construct SHA-224 from a slight variant of SHA-256, as well as SHA-384, SHA-512/224, and SHA-512/256 from a slight variant of SHA-512 (the variant being to change the internal initialization vector, in order to avoid that the output of one function reveals bits of the output of another one).
The reason why SHA-2 functions can be safely truncated is that these functions have another unstated design goal, that they reach as far as we know, which is: being computationally indistinguishable from a random function, except for being that particular function. That strong property is necessary to be able to use the hash with confidence in proofs of protocols made in the Random Oracle Model, and implies collision-resistance and preimage-resistance (the reverse is not true). Truncation (by keeping any fixed subset of their output bits) of a function indistinguishable from a random function also is indistinguishable from a random function (proof sketch: any hypothetical distinguisher for the truncated function is easily converted into a distinguisher for the original function, with the same effort and advantage).
The principle can be extended to any size; e.g. SHA-512 truncated to 128 bits is, as far as we know, as fine a 128-bit hash as can be (regardless of which bits we keep), and unquestionably much preferable security-wise to MD5 (another 128-bit hash, which collision resistance is badly broken). However, collision for an $n$-bit hash can be found in about $2^{(n/2)+0.33}$ hashes, little memory, and efficient parallelization (see Parallel Collision Search with Cryptanalytic Applications). Hence for 80-bit level security (often considered an absolute minimum nowadays), when collision resistance is necessary, we should keep at least 160 bits; and when only preimage-resistance is necessary, we should keep at least 80 bits.
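For what it's worth, the difference between plain truncation and the dedicated SHA-512/256 variant is easy to see in code. A small Python sketch (the `sha512_256` constructor is only available when the underlying OpenSSL build provides it):

```python
import hashlib

msg = b"example message"

truncated = hashlib.sha512(msg).digest()[:32]        # keep the leftmost 256 bits, as FIPS 180-4 prescribes
native = hashlib.new("sha512_256", msg).digest()     # SHA-512/256, if the OpenSSL build supports it

print(truncated.hex())
print(native.hex())   # differs from the simple truncation because SHA-512/256 uses a different IV
```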
-
I'd even say it's more secure, since state collisions become much harder, protecting against a certain class of multi-collisions. It also prevents length-extension. – CodesInChaos Jul 6 '12 at 13:42
Can we take it furthur than 224? Or is 224 the absolute minimum? Also, does this mean that SHA-224 is just as secure as SHA-256, or SHA-256 is still more secure than SHA-224? – Pacerier Jul 7 '12 at 9:15
@Pacerier: yes we can do further down than 224 bits. See my addition. – fgrieu Jul 7 '12 at 9:59
@Pacerier: I quoted the NIST prescribing the use of leftmost bits. They most likely did it in order to avoid a multiplication of diverging implementations; but there is no reason to believe we need to do this for security. – fgrieu Jul 8 '12 at 18:12
The term is common, yes. However it usually does not refer to hash-functions, but to other cryptographic schemes proven secure in the ROM. The hash function (that is used to replace the RO) itself does not even exist in the ROM. It only comes into play, once you try to implement a scheme. – Maeher Jul 10 '12 at 6:36
If we are talking about collision resistance, then - in general for any hash function - using only part of the output may be a problem.
Given an arbitrary collision resistant hash function $H$, such that $H(m)=x||b$ (where b is only a single bit), we construct the hash function $H'$, such that $H'(m)=x$, i.e. it gives the same output as $H$ but without the last bit.
We can prove, that collision resistance of $H$ does not imply collision resistance of $H'$. To do that, we construct a specific collision resistant hash function $H$, such that $H'$ is not collision resistant.
For that we assume existence of a third collision resistant hash function $H''$ and define $H(m||b)=H''(m)||b$. $H$ is still collision resistant, because any collision under $H$ would also yield a collision under $H''$. However it now holds that for any $m\in \{0,1\}^*$ $H'(m||0)=H''(m)=H'(m||1)$ and $H'$ is therefore not collision resistant.
This may seem a bit counterintuitive and we assume (and hope) that this is not the case for hash functions that are commonly used (such as SHA-512).
So in short: For a hash function like SHA-512 you will probably be ok, but there is absolutely no guarantee. So don't mess with its output unless it's absolutely necessary and you really know what you are doing. Just use a tool that was made for the job.
-
Saying that there is no assurance of the safety of SHA-512 truncated to 256 bits is being overly prudent; the NIST itself used that very technique to build SHA-512/256, part of the March 2012 FIPS 180-4; see my answer. – fgrieu Jul 6 '12 at 15:41
|
http://physics.stackexchange.com/questions/3531/photon-wave-packets-from-distant-stars
|
# Photon wave packets from distant stars
A distant star like the sun, thousands of light years away, could be so faint that only one photon might arrive per square meter every few hundred seconds. How can we think about such an arriving photon in wave packet terms?
Years ago, in a popularisation entitled “Quantum Reality”, Nick Herbert suggested that the photon probability density function in such a case would be a macroscopic entity, something like a pancake with a diameter of metres in the direction transverse to motion, but very thin. (I know the wave packet is a mathematical construct, not a physical entity).
I have never understood how such a calculation could have been derived. After such a lengthy trip, tight lateral localisation suggests a broad transverse momentum spectrum. And since we know the photon’s velocity is c, the reason for any particular pancake “thickness” in the direction of motion seems rather obscure.
(Herbert then linked the wave packet width to the possibilities of stellar optical interferometry).
-
Just to add a little to the wave packet thickness issue. In the direction of motion, E = cp. Since E = hf, if the photon frequency is well-defined then so is its momentum and by the uncertainty principle its localisation would be rather undefined in the direction of travel. Wouldn't this make the pancake thick? – Nigel Seel Jan 21 '11 at 20:35
## 3 Answers
first of all, the shape of the wave function of a photon that is emitted by an atom is independent of the number of photons because the photons are almost non-interacting and the atoms that emit them are pretty much independent of each other. So if an atom on the surface of a star spontaneously emits a photon, the photon is described by pretty much the same wave function as a single photon from a very dim, distant source. The wave function of many photons emitted by different atoms is pretty much the tensor product of many copies of the wave function for a single photon: they're almost independent, or unentangled, if you wish.
The direction of motion of the photon is pretty much completely undetermined. It is just a complete nonsense that the wave function of a photon coming from distant galaxies will have the transverse size of several meters. The Gentleman clearly doesn't know what he is talking about. If the photon arrives from the distance of billions of light years, the size of the wave function in the angular directions will be counted in billions of light years, too.
I think it's always the wrong "classical intuition" that prevents people from understanding that wave functions of particles that are not observed are almost completely delocalized. You would need a damn sharp LASER - one that we don't possess - to keep photons in a few-meter-wide region after a journey that took billions of years. Even when we shine our sharpest lasers to the Moon which is just a light second away, we get a one-meter-wide spot on the Moon. And yes, this size is what measures the size of the wave function. For many photons created in similar ways, the classical electromagnetic field pretty much copies the wave function of each photon when it comes to the spatial extent.
Second, the thickness of the wave packet. Well, you may just Fourier-transform the wave packet and determine the composition of individual frequencies. If the frequency i.e. the momentum of the photon were totally well-defined, the wave packet would have to be infinitely thick. In reality, the width in the frequency space is determined up to $\Gamma$ which is essentially equal to the inverse lifetime of the excited state. The Fourier transform back to the position space makes the width in the position space close to $c$ times the lifetime of the excited state or so.
It's not surprising: when the atom is decaying - emitting a photon - it is gradually transforming to a wave function in which the photon has already been emitted, aside from the original wave function in which it has not been emitted. (This gradually changing state is used in the Schrödinger cat thought experiment.) Tracing over the atom, we see that the photon that is being created has a wave function that is being produced over the lifetime of the excited state of the atom. So the packet created in this way travels $c$ times this lifetime - and this distance will be the approximate thickness of the packet.
An excited state that lives for 1 millisecond in average will create a photon wave packet whose thickness will be about 300 kilometers. So the idea that the thickness is tiny is just preposterous. Of course, we ultimately detect the photon at a sharp place and at a sharp time but the wave function is distributed over a big portion of the spacetime and the rules of quantum mechanics guarantee that the wave function knows about the probabilistic distribution where or when the photon will be detected.
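As a rough numerical sketch of this estimate (assuming, for illustration, a 1 ms lifetime; this is just arithmetic in SI units):

```python
import math

tau = 1e-3                    # assumed lifetime of the excited atomic state, in seconds
c = 299_792_458.0             # speed of light, m/s

packet_length = c * tau                      # thickness of the packet along the direction of motion
linewidth = 1.0 / (2.0 * math.pi * tau)      # order-of-magnitude natural linewidth, in Hz

print(f"packet thickness ~ {packet_length / 1e3:.0f} km")   # roughly 300 km
print(f"frequency width  ~ {linewidth:.1f} Hz")
```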
The thickness essentially doesn't change with time because massless fields or massless particles' wave functions propagate simply by moving uniformly by the speed $c$.
Cheers LM
Thanks for this very helpful answer. If a series of stellar processes eventually result in a photon exiting the star on its way to earth, then perhaps at some initial point the photon wave function is spherically symmetric about its point of origin. However, it will soon encounter a stray proton in space in one direction or another. Won't such occasional interactions localise the photon's wave function pretty quickly (cf. a cloud chamber)? – Nigel Seel Jan 22 '11 at 10:29
Dear @Nigel Seel, thanks for your interest. The wave function of a photon can't be entirely spherically symmetric - because a photon carries a spin (transverse polarization) and one can't comb the sphere. However, it's morally true that the wave function occupies almost the whole solid angle $4\pi$. And no, the number of interactions with charged particles (e.g. proton) that a photon coming from star has participated in is exactly zero. If it were not zero, it would change the direction. So no, the photons are not measured at all before they arrive to the telescope. No interference is lost. – Luboš Motl Jan 22 '11 at 10:37
Nigel,
I would like to make another suggestion concerning this question. I have not read the 1985 book by Nick Herbert, but from online links I see that he was discussing the "Hanbury Brown and Twiss" mechanism for optical interferometry. Although this sounds like quite a mouthful and not very fundamental - in fact it is.
The phenomenon has to do with photon ie boson quantum entanglement. The entanglement is between two photons from opposite sides of a star, allowing the star width to be measured. This technique was counter-intuitive apparently even to quantum physicists at the time. Here is a quote from Penrose 2004 (p598) on the matter:
"When their method was first proposed, it met with great opposition from many (even distinguished) physicists, who argued that 'photons can only interfere with themselves, not with other photons'; but they had overlooked the fact that the 'other photons' were part of a boson entangled whole."
It is possible that NH is making some geometric claim about the shape of this photon entangled state.
You can follow the Wikipedia links on the HBT (Hanbury Brown and Twiss) effect, or raise another Stack question.
Roy.
The answers posted by Lubos and Nigel are not bad but I think there is a point that needs to be added. The "photon" we detect on our photographic plate cannot be identified with a single photon that was emitted by a particular atom in the distant star. It must be thought of as the superposition of trillions of photons emitted at the same time from all over the surface of the star. These photons do not maintain separate identities as they spread through space.
In fact, these photons cannot do anything that cannot also be accounted for by ordinary light waves. I used to believe the argument that the power density of the light from a distant star was far too weak to provide the necessary energy for the reduction of a single atom silver bromide to metallic silver, the minimum energy needed to make a dot on the photographic plate. But last year I realized that this argument is invalid: to calculate the energy required for the transition, we must consider not a single atom but the thermodynamics of an entire silver bromide crystal. If we treat the crystal as a solid solution of silver bromide/metallic silver, then at the extremely low concentrations of metallic silver present in the unexposed crystal, the equilibrium point is shifted so far to the left that the conversion is virtually spontaneous. So you don't really need a full "photon's" worth of energy to drive the transition.
I explain this in more detail in these articles on my physics blog, "Why I Hate Physics": The Collapse of the Wave Function, and also Quantum Siphoning
I don't know what the business about dyes has to do with my "Theory". I only claimed to analyze the thermodynamics of the silver bromide=> metallic silver conversion. You call my analysis "amateurish": are you saying it's wrong? – Marty Green Nov 8 '11 at 8:24
Okay. So what exactly is the role of light in the photographic process if not the reduction of silver bromide? – Marty Green Nov 8 '11 at 10:18
Okay. So what exactly is the role of light in the photographic process if not the reduction of silver bromide? – Marty Green Nov 8 '11 at 13:08
For the record, I see that I have two comments in a row which are both identical. Just to explain how this happened: Georg said it was "silly" to think that silver bromide was converted to silver by light. So I asked: what then is the role of light in the process? Georg gave some kind of answer which didn't address my question, so I repeated the question. Then Georg came back and erased his previous answer, leaving my two questions just sitting there. – Marty Green Nov 8 '11 at 20:42
Additional for the record: Georg had a series of comments where he ridiculed my explanation, but he has erased them all now. – Marty Green Nov 9 '11 at 9:53
http://en.wikiversity.org/wiki/Nonlinear_finite_elements/Tensors
# Introduction to Elasticity/Tensors
From Wikiversity
(Redirected from Nonlinear finite elements/Tensors)
# Tensors in Solid Mechanics
A sound understanding of tensors and tensor operations is essential if you want to read and understand modern papers on solid mechanics and finite element modeling of complex material behavior. This brief introduction gives you an overview of tensors and tensor notation. For more details you can read A Brief on Tensor Analysis by J. G. Simmonds, the appendix on vector and tensor notation from Dynamics of Polymeric Liquids - Volume 1 by R. B. Bird, R. C. Armstrong, and O. Hassager, and the monograph by R. M. Brannon. An introduction to tensors in continuum mechanics can be found in An Introduction to Continuum Mechanics by M. E. Gurtin. Most of the material in this page is based on these sources.
## Notation
The following notation is usually used in the literature:
$\begin{align} s & = ~\text{scalar (lightface italic small)} \\ \mathbf{v} & = ~\text{vector (boldface roman small)} \\ \boldsymbol{\sigma} & = ~\text{second-order tensor (boldface Greek)} \\ \boldsymbol{A} & = ~\text{third-order tensor (boldface italic capital)} \\ \boldsymbol{\mathsf{A}} & = ~\text{fourth-order tensor (sans-serif capital)} \end{align}$
## Motivation
A force $\mathbf{f}\,$ has a magnitude and a direction, can be added to another force, be multiplied by a scalar and so on. These properties make the force $\mathbf{f}\,$ a vector.
Similarly, the displacement $\mathbf{u}$ is a vector because it can be added to other displacements and satisfies the other properties of a vector.
However, a force cannot be added to a displacement to yield a physically meaningful quantity. So the physical spaces that these two quantities lie on must be different.
Recall that a constant force $\mathbf{f}$ moving through a displacement $\mathbf{u}\,$ does $\mathbf{f}\bullet\mathbf{u}$ units of work. How do we compute this product when the spaces of $\mathbf{f}\,$ and $\mathbf{u}\,$ are different? If you try to compute the product on a graph, you will have to convert both quantities to a single basis and then compute the scalar product.
An alternative way of thinking about the operation $\mathbf{f}\bullet\mathbf{u}$ is to think of $\mathbf{f}\,$ as a linear operator that acts on $\mathbf{u}$ to produce a scalar quantity (work). In the notation of sets we can write
$\mathbf{f}\bullet\mathbf{u} ~~~\equiv~~~\mathbf{f} : \mathbf{u} \rightarrow \mathbb{R}^{}~.$
A first order tensor is a linear operator that sends vectors to scalars.
Next, assume that the force $\mathbf{f}\,$ acts at a point $\mathbf{x}\,$. The moment of the force about the origin is given by $\mathbf{x}\times\mathbf{f}\,$ which is a vector. The vector product can be thought of as a linear operation too. In this case the effect of the operator is to convert a vector into another vector.
A second order tensor is a linear operator that sends vectors to vectors.
According to Simmonds, "the name tensor comes from elasticity theory where in a loaded elastic body the stress tensor acting on a unit vector normal to a plane through a point delivers the tension (i.e., the force per unit area) acting across the plane at that point."
Examples of second order tensors are the stress tensor, the deformation gradient tensor, the velocity gradient tensor, and so on.
Another type of tensor that we encounter frequently in mechanics is the fourth order tensor that takes strains to stresses. In elasticity, this is the stiffness tensor.
A fourth order tensor is a linear operator that sends second order tensors to second order tensors.
## Tensor algebra
A tensor $\boldsymbol{A}\,$ is a linear transformation from a vector space $\mathcal{V}$ to $\mathcal{V}$. Thus, we can write
$\boldsymbol{A} : \mathbf{u} \in \mathcal{V} \rightarrow \mathbf{v} \in \mathcal{V}~.$
More often, we use the following notation:
$\mathbf{v} = \boldsymbol{A} \mathbf{u} \equiv \boldsymbol{A}(\mathbf{u}) \equiv \boldsymbol{A}\bullet\mathbf{u}~.$
I have used the "dot" notation in this handout. None of the above notations is obviously superior to the others and each is used widely.
### Addition of tensors
Let $\boldsymbol{A}\,$ and $\boldsymbol{B}\,$ be two tensors. Then the sum $(\boldsymbol{A} + \boldsymbol{B})\,$ is another tensor $\boldsymbol{C}\,$ defined by
$\boldsymbol{C} = \boldsymbol{A} + \boldsymbol{B} \implies \boldsymbol{C}\bullet\mathbf{v} = (\boldsymbol{A} + \boldsymbol{B})\bullet\mathbf{v} = \boldsymbol{A}\bullet\mathbf{v} + \boldsymbol{B}\bullet\mathbf{v} ~.$
### Multiplication of a tensor by a scalar
Let $\boldsymbol{A}\,$ be a tensor and let $\lambda\,$ be a scalar. Then the product $\boldsymbol{C} = \lambda \boldsymbol{A}\,$ is a tensor defined by
$\boldsymbol{C} = \lambda \boldsymbol{A} \implies \boldsymbol{C}\bullet\mathbf{v} = (\lambda \boldsymbol{A})\bullet\mathbf{v} = \lambda (\boldsymbol{A}\bullet\mathbf{v}) ~.$
### Zero tensor
The zero tensor $\boldsymbol{\mathit{0}}\,$ is the tensor which maps every vector $\mathbf{v}\,$ into the zero vector.
$\boldsymbol{\mathit{0}}\bullet\mathbf{v} = \mathbf{0} ~.$
### Identity tensor
The identity tensor $\boldsymbol{\mathit{I}}\,$ takes every vector $\mathbf{v}\,$ into itself.
$\boldsymbol{\mathit{I}}\bullet\mathbf{v} = \mathbf{v} ~.$
The identity tensor is also often written as $\boldsymbol{\mathit{1}}\,$.
### Product of two tensors
Let $\boldsymbol{A}\,$ and $\boldsymbol{B}\,$ be two tensors. Then the product $\boldsymbol{C} = \boldsymbol{A}\bullet\boldsymbol{B}$ is the tensor that is defined by
$\boldsymbol{C} = \boldsymbol{A}\bullet\boldsymbol{B} \implies \boldsymbol{C}\bullet\mathbf{v} = (\boldsymbol{A}\bullet\boldsymbol{B})\bullet{\mathbf{v}} = \boldsymbol{A}\bullet(\boldsymbol{B}\bullet{\mathbf{v}}) ~.$
In general $\boldsymbol{A}\bullet\boldsymbol{B} \ne \boldsymbol{B}\bullet\boldsymbol{A}$.
### Transpose of a tensor
The transpose of a tensor $\boldsymbol{A}\,$ is the unique tensor $\boldsymbol{A}^T\,$ defined by
$(\boldsymbol{A}\bullet\mathbf{u})\bullet\mathbf{v} = \mathbf{u}\bullet(\boldsymbol{A}^T\bullet\mathbf{v})~.$
The following identities follow from the above definition:
$\begin{align} (\boldsymbol{A} + \boldsymbol{B})^T & = \boldsymbol{A}^T + \boldsymbol{B}^T ~, \\ (\boldsymbol{A}\bullet\boldsymbol{B})^T & = \boldsymbol{B}^T\bullet\boldsymbol{A}^T ~, \\ (\boldsymbol{A}^T)^T & = \boldsymbol{A} ~. \end{align}$
### Symmetric and skew tensors
A tensor $\boldsymbol{A}\,$ is symmetric if
$\boldsymbol{A} = \boldsymbol{A}^T ~.$
A tensor $\boldsymbol{A}\,$ is skew if
$\boldsymbol{A} = -\boldsymbol{A}^T ~.$
Every tensor $\boldsymbol{A}\,$ can be expressed uniquely as the sum of a symmetric tensor $\boldsymbol{E}\,$ (the symmetric part of $\boldsymbol{A}\,$) and a skew tensor $\boldsymbol{W}\,$ (the skew part of $\boldsymbol{A}\,$).
$\boldsymbol{A} = \boldsymbol{E} + \boldsymbol{W} ~;~~ \boldsymbol{E} = \cfrac{\boldsymbol{A} + \boldsymbol{A}^T}{2} ~;~~ \boldsymbol{W} = \cfrac{\boldsymbol{A} - \boldsymbol{A}^T}{2} ~.$
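This decomposition is easy to verify numerically for the component matrix of a tensor. A minimal NumPy sketch (the matrix below is an arbitrary example):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])

E = 0.5 * (A + A.T)   # symmetric part
W = 0.5 * (A - A.T)   # skew part

assert np.allclose(E, E.T)      # E is symmetric
assert np.allclose(W, -W.T)     # W is skew
assert np.allclose(A, E + W)    # the decomposition recovers A
```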
### Tensor product of two vectors
The tensor (or dyadic) product $\mathbf{a}\mathbf{b}\,$ (also written $\mathbf{a}\otimes\mathbf{b}\,$) of two vectors $\mathbf{a}\,$ and $\mathbf{b}\,$ is a tensor that assigns to each vector $\mathbf{v}\,$ the vector $(\mathbf{b}\bullet\mathbf{v})\mathbf{a}$.
$(\mathbf{a}\mathbf{b})\bullet\mathbf{v} = (\mathbf{a}\otimes\mathbf{b})\bullet\mathbf{v} = (\mathbf{b}\bullet\mathbf{v})\mathbf{a} ~.$
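In components the dyadic product is just the outer product of the coefficient arrays, so the defining property can be checked directly (NumPy sketch with arbitrary example vectors):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
v = np.array([0.5, -1.0, 2.0])

ab = np.outer(a, b)                            # components (a b)_{ij} = a_i b_j

assert np.allclose(ab @ v, np.dot(b, v) * a)   # (a⊗b)·v = (b·v) a
```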
Notice that all the above operations on tensors are remarkably similar to matrix operations.
## Spectral theorem
The spectral theorem for tensors is widely used in mechanics. We will start off by defining eigenvalues and eigenvectors.
Let $\boldsymbol{S}$ be a second order tensor. Let $\lambda$ be a scalar and $\mathbf{n}$ be a vector such that
$\boldsymbol{S}\cdot\mathbf{n} = \lambda~\mathbf{n}$
Then $\lambda$ is called an eigenvalue of $\boldsymbol{S}$ and $\mathbf{n}$ is an eigenvector .
A second order tensor has three eigenvalues and three eigenvectors, since the space is three-dimensional. Some of the eigenvalues might be repeated. The number of times an eigenvalue is repeated is called multiplicity.
In mechanics, many second order tensors are symmetric and positive definite. Note the following important properties of such tensors:
1. If $\boldsymbol{S}$ is positive definite, then $\lambda > 0$.
2. If $\boldsymbol{S}$ is symmetric, the eigenvectors $\mathbf{n}$ are mutually orthogonal.
For more on eigenvalues and eigenvectors see Applied linear operators and spectral methods.
### Spectral theorem
Let $\boldsymbol{S}$ be a symmetric second-order tensor. Then
1. the normalized eigenvectors $\mathbf{n}_1, \mathbf{n}_2, \mathbf{n}_3$ form an orthonormal basis.
2. if $\lambda_1, \lambda_2, \lambda_3$ are the corresponding eigenvalues then $\boldsymbol{S} = \sum_{i=1}^3 \lambda_i \mathbf{n}_i \otimes \mathbf{n}_i$.
This relation is called the spectral decomposition of $\boldsymbol{S}$.
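For a symmetric component matrix the spectral decomposition can be reproduced with a standard symmetric eigensolver. A minimal NumPy sketch (the matrix is an arbitrary symmetric example):

```python
import numpy as np

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])          # symmetric second order tensor (components)

lam, n = np.linalg.eigh(S)               # eigenvalues and orthonormal eigenvectors (columns)

S_rebuilt = sum(lam[i] * np.outer(n[:, i], n[:, i]) for i in range(3))
assert np.allclose(S, S_rebuilt)         # S = sum_i lambda_i n_i ⊗ n_i
assert np.allclose(n.T @ n, np.eye(3))   # the eigenvectors form an orthonormal basis
```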
## Polar decomposition theorem
Let $\boldsymbol{F}$ be second order tensor with $\det\boldsymbol{F} > 0$. Then
1. there exist positive definite, symmetric tensors $\boldsymbol{U}$,$\boldsymbol{V}$ and a rotation (orthogonal) tensor $\boldsymbol{R}$ such that $\boldsymbol{F} = \boldsymbol{R}\cdot \boldsymbol{U} = \boldsymbol{V} \cdot \boldsymbol{R}$.
2. also each of these decompositions is unique.
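Numerically, one convenient way to obtain the polar decomposition is through the singular value decomposition $\boldsymbol{F} = \boldsymbol{W}\,\boldsymbol{\Sigma}\,\boldsymbol{V}^T$, which gives $\boldsymbol{R} = \boldsymbol{W}\boldsymbol{V}^T$, the right stretch $\boldsymbol{V}\boldsymbol{\Sigma}\boldsymbol{V}^T$ and the left stretch $\boldsymbol{W}\boldsymbol{\Sigma}\boldsymbol{W}^T$. A NumPy sketch with an arbitrary example satisfying $\det\boldsymbol{F} > 0$:

```python
import numpy as np

F = np.array([[1.1, 0.2, 0.0],
              [0.1, 0.9, 0.3],
              [0.0, 0.1, 1.2]])          # arbitrary example with det(F) > 0

Wsvd, sig, Vt = np.linalg.svd(F)
R = Wsvd @ Vt                            # rotation (orthogonal) tensor
U = Vt.T @ np.diag(sig) @ Vt             # right stretch: symmetric, positive definite
Vleft = Wsvd @ np.diag(sig) @ Wsvd.T     # left stretch: symmetric, positive definite

assert np.allclose(F, R @ U)             # F = R . U
assert np.allclose(F, Vleft @ R)         # F = V . R
assert np.allclose(R @ R.T, np.eye(3))   # R is orthogonal
```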
## Principal invariants of a tensor
Let $\boldsymbol{S}$ be a second order tensor. Then the determinant of $\boldsymbol{S} - \lambda~\boldsymbol{\mathit{I}}$ can be expressed as
$\det(\boldsymbol{S} - \lambda~\boldsymbol{\mathit{I}}) = -\lambda^3 + I_1(\boldsymbol{S})~\lambda^2 - I_2(\boldsymbol{S})~\lambda + I_3(\boldsymbol{S})$
The quantities $I_1, I_2, I_3\,$ are called the principal invariants of $\boldsymbol{S}$. Expressions of the principal invariants are given below.
Principal invariants of $\boldsymbol{S}$ $\begin{align} I_1 & = \text{tr}~ \boldsymbol{S} = \lambda_1 + \lambda_2 + \lambda_3 \\ I_2 & = \cfrac{1}{2}\left[ (\text{tr}~ \boldsymbol{S})^2 - \text{tr}(\boldsymbol{S^2})\right] = \lambda_1~\lambda_2 + \lambda_2~\lambda_3 + \lambda_3~\lambda_1\\ I_3 & = \det\boldsymbol{S} = \lambda_1~\lambda_2~\lambda_3 \end{align}$
Note that $\lambda$ is an eigenvalue of $\boldsymbol{S}$ if and only if
$\det(\boldsymbol{S} - \lambda~\boldsymbol{\mathit{1}}) = 0$
The resulting equation is called the characteristic equation and is usually written in expanded form as
$\lambda^3 - I_1(\boldsymbol{S})~\lambda^2 + I_2(\boldsymbol{S})~\lambda -I_3(\boldsymbol{S}) = 0$
## Cayley-Hamilton theorem
The Cayley-Hamilton theorem is a very useful result in continuum mechanics. It states that
Cayley-Hamilton theorem If $\boldsymbol{S}$ is a second order tensor then it satisfies its own characteristic equation $\boldsymbol{S}^3 - I_1(\boldsymbol{S})~\boldsymbol{S}^2 + I_2(\boldsymbol{S})~\boldsymbol{S} -I_3(\boldsymbol{S})~\boldsymbol{\mathit{1}} = \boldsymbol{\mathit{0}}$
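Both the principal invariants and the Cayley-Hamilton theorem are easy to check numerically on a component matrix. A NumPy sketch with an arbitrary example tensor:

```python
import numpy as np

S = np.array([[2.0, 1.0, 0.5],
              [0.3, 3.0, 1.0],
              [0.2, 0.1, 1.5]])
One = np.eye(3)

I1 = np.trace(S)
I2 = 0.5 * (np.trace(S) ** 2 - np.trace(S @ S))
I3 = np.linalg.det(S)

# the invariants are the elementary symmetric functions of the eigenvalues
lam = np.linalg.eigvals(S)
assert np.allclose([I1, I2, I3],
                   [lam.sum(),
                    lam[0] * lam[1] + lam[1] * lam[2] + lam[2] * lam[0],
                    lam.prod()])

# Cayley-Hamilton: S^3 - I1 S^2 + I2 S - I3 1 = 0
CH = S @ S @ S - I1 * (S @ S) + I2 * S - I3 * One
assert np.allclose(CH, np.zeros((3, 3)))
```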
## Index notation
All the equations so far have made no mention of the coordinate system. When we use vectors and tensor in computations we have to express them in some coordinate system (basis) and use the components of the object in that basis for our computations.
Commonly used bases are the Cartesian coordinate frame, the cylindrical coordinate frame, and the spherical coordinate frame.
A Cartesian coordinate frame consists of an orthonormal basis $(\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3)\,$ together with a point $\mathbf{o}\,$ called the origin. Since these vectors are mutually perpendicular, we have the following relations:
$\begin{align}\text{(1)} \qquad \mathbf{e}_1\bullet\mathbf{e}_1 & = 1 ~;~~ \mathbf{e}_1\bullet\mathbf{e}_2 = 0 ~;~~ \mathbf{e}_1\bullet\mathbf{e}_3 = 0 ~; \\ \mathbf{e}_2\bullet\mathbf{e}_1 & = 0 ~;~~ \mathbf{e}_2\bullet\mathbf{e}_2 = 1 ~;~~ \mathbf{e}_2\bullet\mathbf{e}_3 = 0 ~;\\ \mathbf{e}_3\bullet\mathbf{e}_1 & = 0 ~;~~ \mathbf{e}_3\bullet\mathbf{e}_2 = 0 ~;~~ \mathbf{e}_3\bullet\mathbf{e}_3 = 1 ~. \end{align}$
### Kronecker delta
To make the above relations more compact, we introduce the Kronecker delta symbol
${ \delta_{ij} = \begin{cases} 1 & ~\rm{if}~ i = j~. \\ 0 & ~\rm{if}~ i \ne j ~. \end{cases} }$
Then, instead of the nine equations in (1) we can write (in index notation)
$\mathbf{e}_i\bullet\mathbf{e}_j = \delta_{ij} ~.$
### Einstein summation convention
Recall that the vector $\mathbf{u}\,$ can be written as
$\text{(2)} \qquad \mathbf{u} = u_1 \mathbf{e}_1 + u_2 \mathbf{e}_2 + u_3 \mathbf{e}_3 = \sum_{i=1}^3 u_i \mathbf{e}_i ~.$
In index notation, equation (2) can be written as
${ \mathbf{u} = u_i \mathbf{e}_i~. }$
This convention is called the Einstein summation convention. If indices are repeated, we understand that to mean that there is a sum over the indices.
### Components of a vector
We can write the Cartesian components of a vector $\mathbf{u}\,$ in the basis $(\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3)\,$ as
$u_i = \mathbf{e}_i\bullet\mathbf{u} ~,~~~i = 1, 2, 3~.$
### Components of a tensor
Similarly, the components of $A_{ij}\,$ of a tensor $\boldsymbol{A}\,$ are defined by
${ A_{ij} = \mathbf{e}_i\bullet(\boldsymbol{A}\bullet\mathbf{e}_j)~. }$
Using the definition of the tensor product, we can also write
$\boldsymbol{A} = \sum_{i,j=1}^3 A_{ij} \mathbf{e}_i\mathbf{e}_j \equiv \sum_{i,j=1}^3 A_{ij} \mathbf{e}_i\otimes\mathbf{e}_j ~.$
Using the summation convention,
${ \boldsymbol{A} = A_{ij} \mathbf{e}_i\mathbf{e}_j \equiv A_{ij} \mathbf{e}_i\otimes\mathbf{e}_j~. }$
In this case, the bases of the tensor are $\{\mathbf{e}_i\otimes\mathbf{e}_j\}$ and the components are $A_{ij}\,$.
### Operation of a tensor on a vector
From the definition of the components of tensor $\boldsymbol{A}\,$, we can also see that (using the summation convention)
${ \mathbf{v} = \boldsymbol{A}\bullet\mathbf{u} ~~~\equiv~~~ v_i = A_{ij} u_j~. }$
### Dyadic product
Similarly, the dyadic product can be expressed as
${ (\mathbf{a}\mathbf{b})_{ij} \equiv (\mathbf{a}\otimes\mathbf{b})_{ij} = a_i b_j ~. }$
### Matrix notation
We can also write a tensor $\boldsymbol{A}$ in matrix notation as
$\boldsymbol{A} = A_{ij}\mathbf{e}_i\mathbf{e}_j = A_{ij}\mathbf{e}_i\otimes\mathbf{e}_j \implies \mathbf{A} = \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{bmatrix} ~.$
Note that the Kronecker delta represents the components of the identity tensor in a Cartesian basis. Therefore, we can write
$\boldsymbol{I} = \delta_{ij}\mathbf{e}_i\mathbf{e}_j = \delta_{ij}\mathbf{e}_i\otimes\mathbf{e}_j \implies \mathbf{I} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} ~.$
### Tensor inner product
The inner product $\boldsymbol{A} : \boldsymbol{B}\,$ of two tensors $\boldsymbol{A}\,$ and $\boldsymbol{B}\,$ is an operation that generates a scalar. We define (summation implied)
${ \boldsymbol{A} : \boldsymbol{B} = A_{ij} B_{ij} ~. }$
The inner product can also be expressed using the trace:
${ \boldsymbol{A} : \boldsymbol{B} = Tr(\boldsymbol{A^T} \bullet \boldsymbol{B}) ~. }$
Proof, using the definition of the trace given below:
${ Tr(\boldsymbol{A^T} \bullet \boldsymbol{B}) = \boldsymbol{I}:(\boldsymbol{A^T} \bullet \boldsymbol{B})=\delta_{ij}\mathbf{e}_i\otimes\mathbf{e}_j :(A_{lk}\mathbf{e}_k \otimes \mathbf{e}_l \bullet B_{mn}\mathbf{e}_m\otimes \mathbf{e}_n) = \delta_{ij}\mathbf{e}_i\otimes\mathbf{e}_j : (A_{mk}B_{mn}\mathbf{e}_k \otimes \mathbf{e}_n) = A_{mk}B_{mn}\delta_{ij}\delta_{ik}\delta_{jn}=A_{mk}B_{mn}\delta_{kn} = A_{mn}B_{mn} = \boldsymbol{A}:\boldsymbol{B} }$
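A quick numerical confirmation of the inner product and the trace identity (NumPy sketch with random example matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 3))
B = rng.random((3, 3))

inner = np.sum(A * B)                              # A : B = A_ij B_ij
assert np.isclose(inner, np.trace(A.T @ B))        # A : B = Tr(A^T . B)
assert np.isclose(inner, np.einsum('ij,ij->', A, B))
```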
### Trace of a tensor
The trace of a tensor is the scalar given by
$\text{Tr}(\boldsymbol{A}) = \boldsymbol{I}:\boldsymbol{A} = \delta_{ij}\mathbf{e}_i\otimes\mathbf{e}_j:A_{mn}\mathbf{e}_m\otimes\mathbf{e}_n = \delta_{ij}\delta_{im}\delta_{jn}A_{mn} = A_{ii}$
More intrinsically, the trace is the unique linear operation on second order tensors that satisfies $\text{Tr}(\mathbf{a}\otimes\mathbf{b}) = \mathbf{a}\bullet\mathbf{b}$ for all vectors $\mathbf{a}$ and $\mathbf{b}$; the component expression $\text{Tr}(\boldsymbol{A}) = A_{ii}$ above follows from this definition.
### Magnitude of a tensor
The magnitude of a tensor $\boldsymbol{A}\,$ is defined by
$\Vert \boldsymbol{A} \Vert = \sqrt{\boldsymbol{A}:\boldsymbol{A}} \equiv \sqrt{A_{ij}A_{ij}} ~.$
### Tensor product of a tensor with a vector
Another tensor operation that is often seen is the cross product of a tensor with a vector. Let $\boldsymbol{A}\,$ be a tensor and let $\mathbf{v}\,$ be a vector. Then the cross product gives a tensor $\boldsymbol{C}\,$ defined by
${ \boldsymbol{C} = \boldsymbol{A}\times\mathbf{v} \implies C_{ij} = e_{klj} A_{ik} v_{l} ~. }$
### Permutation symbol
The permutation symbol $e_{ijk}\,$ is defined as
${ e_{ijk} = \begin{cases} 1 & ~\text{if}~ ijk = 123, 231, ~\text{or}~ 312 \\ -1 & ~\text{if}~ ijk = 321, 132, ~\text{or}~ 213 \\ 0 & ~\text{if any two indices are alike} \end{cases} }$
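The permutation symbol, and the cross product of a tensor with a vector defined above, translate directly into `numpy.einsum`. A sketch (the array `e` holds the components $e_{ijk}$, and the result is checked row by row against the ordinary vector cross product):

```python
import numpy as np

# build the permutation (Levi-Civita) symbol e_ijk
e = np.zeros((3, 3, 3))
e[0, 1, 2] = e[1, 2, 0] = e[2, 0, 1] = 1.0
e[0, 2, 1] = e[2, 1, 0] = e[1, 0, 2] = -1.0

A = np.random.rand(3, 3)
v = np.random.rand(3)

# C_ij = e_klj A_ik v_l
C = np.einsum('klj,ik,l->ij', e, A, v)

# sanity check: row i of C is the cross product of the i-th row of A with v
for i in range(3):
    assert np.allclose(C[i], np.cross(A[i], v))
```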
## Identities in tensor algebra
Let $\boldsymbol{A}$, $\boldsymbol{B}$ and $\boldsymbol{C}$ be three second order tensors. Then
$\boldsymbol{A}:(\boldsymbol{B}\cdot\boldsymbol{C}) = (\boldsymbol{C}\cdot\boldsymbol{A}^T):\boldsymbol{B}^T = (\boldsymbol{B}^T\cdot\boldsymbol{A}):\boldsymbol{C}$
Proof:
It is easiest to show these relations by using index notation with respect to an orthonormal basis. Then we can write
$\boldsymbol{A}:(\boldsymbol{B}\cdot\boldsymbol{C}) \equiv A_{ij} (B_{ik}~C_{kj}) = C_{kj}~A^T_{ji}~B^T_{ki} \equiv (\boldsymbol{C}\cdot\boldsymbol{A}^T):\boldsymbol{B}^T$
Similarly,
$\boldsymbol{A}:(\boldsymbol{B}\cdot\boldsymbol{C}) \equiv A_{ij} (B_{ik}~C_{kj}) = B^T_{ki}~A_{ij}~C_{kj} \equiv (\boldsymbol{B}^T\cdot\boldsymbol{A}):\boldsymbol{C}$
## Tensor calculus
Recall that the vector differential operator (with respect to a Cartesian basis) is defined as
$\boldsymbol{\nabla}{} = \cfrac{\partial }{\partial x_1}\mathbf{e}_1+\cfrac{\partial }{\partial x_2}\mathbf{e}_2+\cfrac{\partial }{\partial x_3}\mathbf{e}_3 \equiv \cfrac{\partial }{\partial x_i}\mathbf{e}_i ~.$
In this section we summarize some operations of $\boldsymbol{\nabla}{}$ on vectors and tensors.
### The gradient of a vector field
The dyadic product $\boldsymbol{\nabla}{\mathbf{v}}\,$ (or $\boldsymbol{\nabla}{}\otimes\mathbf{v}$) is called the gradient of the vector field $\mathbf{v}\,$. Therefore, the quantity $\boldsymbol{\nabla}{\mathbf{v}}$ is a tensor given by
${ \boldsymbol{\nabla}{\mathbf{v}} = \sum_i\sum_j \cfrac{\partial v_j}{\partial x_i} \mathbf{e}_i \mathbf{e}_j \equiv v_{j,i} \mathbf{e}_i \mathbf{e}_j ~. }$
In the alternative dyadic notation,
${ \boldsymbol{\nabla}{\mathbf{v}} \equiv \boldsymbol{\nabla}{}\otimes\mathbf{v} = \sum_i\sum_j \cfrac{\partial v_j}{\partial x_i} \mathbf{e}_i\otimes\mathbf{e}_j \equiv v_{j,i} \mathbf{e}_i\otimes\mathbf{e}_j ~. }$
Warning: Some authors define the $ij$ component of $\boldsymbol{\nabla}{\mathbf{v}}$ as $\partial v_i/\partial x_j = v_{i,j}$.
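With the convention used here, $(\boldsymbol{\nabla}\mathbf{v})_{ij} = \partial v_j/\partial x_i$, the gradient at a point can be approximated by central differences. A NumPy sketch for an arbitrary smooth example field:

```python
import numpy as np

def v(x):
    # an arbitrary smooth vector field v(x1, x2, x3)
    return np.array([x[0] * x[1], np.sin(x[1]) * x[2], x[0] + x[2] ** 2])

def grad_v(x, h=1e-6):
    # (grad v)_{ij} = d v_j / d x_i, by central differences
    g = np.zeros((3, 3))
    for i in range(3):
        dx = np.zeros(3)
        dx[i] = h
        g[i, :] = (v(x + dx) - v(x - dx)) / (2.0 * h)
    return g

x0 = np.array([1.0, 2.0, 3.0])
# analytic gradient at x0, with the same index convention
exact = np.array([[x0[1], 0.0, 1.0],
                  [x0[0], np.cos(x0[1]) * x0[2], 0.0],
                  [0.0, np.sin(x0[1]), 2.0 * x0[2]]])

assert np.allclose(grad_v(x0), exact, atol=1e-5)
```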
### The divergence of a tensor field
Let $\boldsymbol{A}\,$ be a tensor field. Then the divergence of the tensor field is a vector $\boldsymbol{\nabla}\bullet{\boldsymbol{A}}$ given by
${ \boldsymbol{\nabla}\bullet{\boldsymbol{A}} = \sum_j \left[\sum_i \cfrac{\partial A_{ij}}{\partial x_i}\right] \mathbf{e}_j \equiv \cfrac{\partial A_{ij}}{\partial x_i} \mathbf{e}_j = A_{ij,i} \mathbf{e}_j~. }$
To fix the definition of divergence of a general tensor field (possibly of higher order than 2), we use the relation
$(\boldsymbol{\nabla}\bullet{\boldsymbol{A}})\bullet\mathbf{c} = \boldsymbol{\nabla}\bullet(\boldsymbol{A}\bullet\mathbf{c})$
where $\mathbf{c}$ is an arbitrary constant vector.
### The Laplacian of a vector field
The Laplacian of a vector field is given by
${ \nabla^2{\mathbf{v}} = \boldsymbol{\nabla}\bullet{\boldsymbol{\nabla}{\mathbf{v}}} = \sum_j \left[\sum_i \cfrac{\partial^2 v_j}{\partial x_i^2}\right] \mathbf{e}_j \equiv v_{j,ii} \mathbf{e}_j ~. }$
## Tensor Identities
Some important identities involving tensors are:
1. $\boldsymbol{\nabla}\bullet{\boldsymbol{\nabla}{\mathbf{v}}} = \boldsymbol{\nabla}{(\boldsymbol{\nabla}\bullet{\mathbf{v}})} - \boldsymbol{\nabla}\times{(\boldsymbol{\nabla}\times{\mathbf{v}})}$.
2. $\mathbf{v}\bullet\boldsymbol{\nabla}{\mathbf{v}} = \frac{1}{2}\boldsymbol{\nabla}{(\mathbf{v}\bullet\mathbf{v})} - \mathbf{v}\times(\boldsymbol{\nabla}\times{\mathbf{v})}$ .
3. $\boldsymbol{\nabla}\bullet{(\mathbf{v}\otimes\mathbf{w})} = \mathbf{v}\bullet\boldsymbol{\nabla}{\mathbf{w}} + \mathbf{w}(\boldsymbol{\nabla}\bullet{\mathbf{v}})$ .
4. $\boldsymbol{\nabla}\bullet{(\varphi\boldsymbol{A})} = \boldsymbol{\nabla}{\varphi}\bullet\boldsymbol{A} + \varphi\boldsymbol{\nabla}\bullet{\boldsymbol{A}}$ .
5. $\boldsymbol{\nabla}{(\mathbf{v}\bullet\mathbf{w})} = (\boldsymbol{\nabla}{\mathbf{v}})\bullet\mathbf{w} + (\boldsymbol{\nabla}{\mathbf{w}})\bullet\mathbf{v}$ .
6. $\boldsymbol{\nabla}\bullet{(\boldsymbol{A}\bullet\mathbf{w})} = (\boldsymbol{\nabla}\bullet{\boldsymbol{A}})\bullet\mathbf{w} + \boldsymbol{A}^T:(\boldsymbol{\nabla}{\mathbf{w}})$ .
## Integral theorems
The following integral theorems are useful in continuum mechanics and finite elements.
### The Gauss divergence theorem
If $\Omega$ is a region in space enclosed by a surface $\Gamma\,$ and $\boldsymbol{A}\,$ is a tensor field, then
${ \int_{\Omega} \boldsymbol{\nabla}\bullet{\boldsymbol{A}} ~dV = \int_{\Gamma} \mathbf{n}\bullet\boldsymbol{A} ~dA }$
where $\mathbf{n}\,$ is the unit outward normal to the surface.
### The Stokes curl theorem
If $\Gamma\,$ is a surface bounded by a closed curve $\mathcal{C}$, then
$\int_{\Gamma} \mathbf{n}\bullet(\boldsymbol{\nabla}\times{\boldsymbol{A})}~dA = \oint_{\mathcal{C}} \mathbf{t}\bullet\boldsymbol{A}~ ds$
where $\boldsymbol{A}\,$ is a tensor field, $\mathbf{n}\,$ is the unit normal vector to $\Gamma\,$ in the direction of a right-handed screw motion along $\mathcal{C}$, and $\mathbf{t}\,$ is a unit tangential vector in the direction of integration along $\mathcal{C}$.
### The Leibniz formula
Let $\Omega$ be a closed moving region of space enclosed by a surface $\Gamma\,$. Let the velocity of any surface element be $\mathbf{v}\,$. Then if $\boldsymbol{A}(\mathbf{x},t)\,$ is a tensor function of position and time,
$\cfrac{\partial }{\partial t} \int_{\Omega} \boldsymbol{A}~dV = \int_{\Omega} \cfrac{\partial \boldsymbol{A}}{\partial t}~dV + \int_{\Gamma} \boldsymbol{A}(\mathbf{v}\bullet\mathbf{n})~dA$
where $\mathbf{n}\,$ is the outward unit normal to the surface $\Gamma\,$.
## Directional derivatives
We often have to find the derivatives of vectors with respect to vectors and of tensors with respect to vectors and tensors. The directional derivative provides a systematic way of finding these derivatives.
The definitions of directional derivatives for various situations are given below. It is assumed that the functions are sufficiently smooth that derivatives can be taken.
### Derivatives of scalar valued functions of vectors
Let $f(\mathbf{v})$ be a real valued function of the vector $\mathbf{v}$. Then the derivative of $f(\mathbf{v})$ with respect to $\mathbf{v}$ (or at $\mathbf{v}$) in the direction $\mathbf{u}$ is the vector defined as
$\frac{\partial f}{\partial \mathbf{v}}\cdot\mathbf{u} = Df(\mathbf{v})[\mathbf{u}] = \left[\frac{\partial }{\partial \alpha}~f(\mathbf{v} + \alpha~\mathbf{u})\right]_{\alpha = 0}$
for all vectors $\mathbf{u}$.
Properties:
1) If $f(\mathbf{v}) = f_1(\mathbf{v}) + f_2(\mathbf{v})$ then $\frac{\partial f}{\partial \mathbf{v}}\cdot\mathbf{u} = \left(\frac{\partial f_1}{\partial \mathbf{v}} + \frac{\partial f_2}{\partial \mathbf{v}}\right)\cdot\mathbf{u}$
2) If $f(\mathbf{v}) = f_1(\mathbf{v})~ f_2(\mathbf{v})$ then $\frac{\partial f}{\partial \mathbf{v}}\cdot\mathbf{u} = \left(\frac{\partial f_1}{\partial \mathbf{v}}\cdot\mathbf{u}\right)~f_2(\mathbf{v}) + f_1(\mathbf{v})~\left(\frac{\partial f_2}{\partial \mathbf{v}}\cdot\mathbf{u} \right)$
3) If $f(\mathbf{v}) = f_1(f_2(\mathbf{v}))$ then $\frac{\partial f}{\partial \mathbf{v}}\cdot\mathbf{u} = \frac{\partial f_1}{\partial f_2}~\frac{\partial f_2}{\partial \mathbf{v}}\cdot\mathbf{u}$
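A quick numerical illustration of this definition for the simple choice $f(\mathbf{v}) = \mathbf{v}\bullet\mathbf{v}$, whose derivative is $2\mathbf{v}$ (NumPy sketch):

```python
import numpy as np

def f(v):
    return np.dot(v, v)          # f(v) = v . v, so df/dv = 2v

v = np.array([1.0, 2.0, 3.0])
u = np.array([0.5, -1.0, 0.25])
h = 1e-6

num = (f(v + h * u) - f(v - h * u)) / (2.0 * h)   # [d/dalpha f(v + alpha u)] at alpha = 0
exact = np.dot(2.0 * v, u)                        # (df/dv) . u

assert np.isclose(num, exact)
```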
### Derivatives of vector valued functions of vectors
Let $\mathbf{f}(\mathbf{v})$ be a vector valued function of the vector $\mathbf{v}$. Then the derivative of $\mathbf{f}(\mathbf{v})$ with respect to $\mathbf{v}$ (or at $\mathbf{v}$) in the direction $\mathbf{u}$ is the second order tensor defined as
$\frac{\partial \mathbf{f}}{\partial \mathbf{v}}\cdot\mathbf{u} = D\mathbf{f}(\mathbf{v})[\mathbf{u}] = \left[\frac{\partial }{\partial \alpha}~\mathbf{f}(\mathbf{v} + \alpha~\mathbf{u})\right]_{\alpha = 0}$
for all vectors $\mathbf{u}$.
Properties:
1) If $\mathbf{f}(\mathbf{v}) = \mathbf{f}_1(\mathbf{v}) + \mathbf{f}_2(\mathbf{v})$ then $\frac{\partial \mathbf{f}}{\partial \mathbf{v}}\cdot\mathbf{u} = \left(\frac{\partial \mathbf{f}_1}{\partial \mathbf{v}} + \frac{\partial \mathbf{f}_2}{\partial \mathbf{v}}\right)\cdot\mathbf{u}$
2) If $\mathbf{f}(\mathbf{v}) = \mathbf{f}_1(\mathbf{v})\times\mathbf{f}_2(\mathbf{v})$ then $\frac{\partial \mathbf{f}}{\partial \mathbf{v}}\cdot\mathbf{u} = \left(\frac{\partial \mathbf{f}_1}{\partial \mathbf{v}}\cdot\mathbf{u}\right)\times\mathbf{f}_2(\mathbf{v}) + \mathbf{f}_1(\mathbf{v})\times\left(\frac{\partial \mathbf{f}_2}{\partial \mathbf{v}}\cdot\mathbf{u} \right)$
3) If $\mathbf{f}(\mathbf{v}) = \mathbf{f}_1(\mathbf{f}_2(\mathbf{v}))$ then $\frac{\partial \mathbf{f}}{\partial \mathbf{v}}\cdot\mathbf{u} = \frac{\partial \mathbf{f}_1}{\partial \mathbf{f}_2}\cdot\left(\frac{\partial \mathbf{f}_2}{\partial \mathbf{v}}\cdot\mathbf{u} \right)$
### Derivatives of scalar valued functions of tensors
Let $f(\boldsymbol{S})$ be a real valued function of the second order tensor $\boldsymbol{S}$. Then the derivative of $f(\boldsymbol{S})$ with respect to $\boldsymbol{S}$ (or at $\boldsymbol{S}$) in the direction $\boldsymbol{T}$ is the second order tensor defined as
$\frac{\partial f}{\partial \boldsymbol{S}}:\boldsymbol{T} = Df(\boldsymbol{S})[\boldsymbol{T}] = \left[\frac{\partial }{\partial \alpha}~f(\boldsymbol{S} + \alpha~\boldsymbol{T})\right]_{\alpha = 0}$
for all second order tensors $\boldsymbol{T}$.
Properties:
1) If $f(\boldsymbol{S}) = f_1(\boldsymbol{S}) + f_2(\boldsymbol{S})$ then $\frac{\partial f}{\partial \boldsymbol{S}}:\boldsymbol{T} = \left(\frac{\partial f_1}{\partial \boldsymbol{S}} + \frac{\partial f_2}{\partial \boldsymbol{S}}\right):\boldsymbol{T}$
2) If $f(\boldsymbol{S}) = f_1(\boldsymbol{S})~ f_2(\boldsymbol{S})$ then $\frac{\partial f}{\partial \boldsymbol{S}}:\boldsymbol{T} = \left(\frac{\partial f_1}{\partial \boldsymbol{S}}:\boldsymbol{T}\right)~f_2(\boldsymbol{S}) + f_1(\boldsymbol{S})~\left(\frac{\partial f_2}{\partial \boldsymbol{S}}:\boldsymbol{T} \right)$
3) If $f(\boldsymbol{S}) = f_1(f_2(\boldsymbol{S}))$ then $\frac{\partial f}{\partial \boldsymbol{S}}:\boldsymbol{T} = \frac{\partial f_1}{\partial f_2}~\left(\frac{\partial f_2}{\partial \boldsymbol{S}}:\boldsymbol{T} \right)$
### Derivatives of tensor valued functions of tensors
Let $\boldsymbol{F}(\boldsymbol{S})$ be a second order tensor valued function of the second order tensor $\boldsymbol{S}$. Then the derivative of $\boldsymbol{F}(\boldsymbol{S})$ with respect to $\boldsymbol{S}$ (or at $\boldsymbol{S}$) in the direction $\boldsymbol{T}$ is the fourth order tensor defined as
$\frac{\partial \boldsymbol{F}}{\partial \boldsymbol{S}}:\boldsymbol{T} = D\boldsymbol{F}(\boldsymbol{S})[\boldsymbol{T}] = \left[\frac{\partial }{\partial \alpha}~\boldsymbol{F}(\boldsymbol{S} + \alpha~\boldsymbol{T})\right]_{\alpha = 0}$
for all second order tensors $\boldsymbol{T}$.
Properties:
1) If $\boldsymbol{F}(\boldsymbol{S}) = \boldsymbol{F}_1(\boldsymbol{S}) + \boldsymbol{F}_2(\boldsymbol{S})$ then $\frac{\partial \boldsymbol{F}}{\partial \boldsymbol{S}}:\boldsymbol{T} = \left(\frac{\partial \boldsymbol{F}_1}{\partial \boldsymbol{S}} + \frac{\partial \boldsymbol{F}_2}{\partial \boldsymbol{S}}\right):\boldsymbol{T}$
2) If $\boldsymbol{F}(\boldsymbol{S}) = \boldsymbol{F}_1(\boldsymbol{S})\cdot\boldsymbol{F}_2(\boldsymbol{S})$ then $\frac{\partial \boldsymbol{F}}{\partial \boldsymbol{S}}:\boldsymbol{T} = \left(\frac{\partial \boldsymbol{F}_1}{\partial \boldsymbol{S}}:\boldsymbol{T}\right)\cdot\boldsymbol{F}_2(\boldsymbol{S}) + \boldsymbol{F}_1(\boldsymbol{S})\cdot\left(\frac{\partial \boldsymbol{F}_2}{\partial \boldsymbol{S}}:\boldsymbol{T} \right)$
3) If $\boldsymbol{F}(\boldsymbol{S}) = \boldsymbol{F}_1(\boldsymbol{F}_2(\boldsymbol{S}))$ then $\frac{\partial \boldsymbol{F}}{\partial \boldsymbol{S}}:\boldsymbol{T} = \frac{\partial \boldsymbol{F}_1}{\partial \boldsymbol{F}_2}:\left(\frac{\partial \boldsymbol{F}_2}{\partial \boldsymbol{S}}:\boldsymbol{T} \right)$
4) If $f(\boldsymbol{S}) = f_1(\boldsymbol{F}_2(\boldsymbol{S}))$ then $\frac{\partial f}{\partial \boldsymbol{S}}:\boldsymbol{T} = \frac{\partial f_1}{\partial \boldsymbol{F}_2}:\left(\frac{\partial \boldsymbol{F}_2}{\partial \boldsymbol{S}}:\boldsymbol{T} \right)$
## Derivative of the determinant of a tensor
Derivative of the determinant of a tensor The derivative of the determinant of a second order tensor $\boldsymbol{A}$ is given by $\frac{\partial }{\partial \boldsymbol{A}}\det(\boldsymbol{A}) = \det(\boldsymbol{A})~[\boldsymbol{A}^{-1}]^T ~.$ In an orthonormal basis the components of $\boldsymbol{A}$ can be written as a matrix $\mathbf{A}$. In that case, the right hand side corresponds to the matrix of cofactors of $\mathbf{A}$.
Proof:
Let $\boldsymbol{A}$ be a second order tensor and let $f(\boldsymbol{A}) = \det(\boldsymbol{A})$. Then, from the definition of the derivative of a scalar valued function of a tensor, we have
$\begin{align} \frac{\partial f}{\partial \boldsymbol{A}}:\boldsymbol{T} & = \left.\cfrac{d}{d\alpha} \det(\boldsymbol{A} + \alpha~\boldsymbol{T}) \right|_{\alpha=0} \\ & = \left.\cfrac{d}{d\alpha} \det\left[\alpha~\boldsymbol{A}\left(\cfrac{1}{\alpha}~\boldsymbol{\mathit{1}} + \boldsymbol{A}^{-1}\cdot\boldsymbol{T}\right) \right] \right|_{\alpha=0} \\ & = \left.\cfrac{d}{d\alpha} \left[\alpha^3~\det(\boldsymbol{A})~ \det\left(\cfrac{1}{\alpha}~\boldsymbol{\mathit{1}} + \boldsymbol{A}^{-1}\cdot\boldsymbol{T}\right)\right] \right|_{\alpha=0} ~. \end{align}$
Recall that we can expand the determinant of a tensor in the form of a characteristic equation in terms of the invariants $I_1,I_2,I_3$ using (note the sign of $\lambda$)
$\det(\lambda~\boldsymbol{\mathit{1}} + \boldsymbol{A}) = \lambda^3 + I_1(\boldsymbol{A})~\lambda^2 + I_2(\boldsymbol{A})~\lambda + I_3(\boldsymbol{A}) ~.$
Using this expansion we can write
$\begin{align} \frac{\partial f}{\partial \boldsymbol{A}}:\boldsymbol{T} & = \left.\cfrac{d}{d\alpha} \left[\alpha^3~\det(\boldsymbol{A})~ \left(\cfrac{1}{\alpha^3} + I_1(\boldsymbol{A}^{-1}\cdot\boldsymbol{T})~\cfrac{1}{\alpha^2} + I_2(\boldsymbol{A}^{-1}\cdot\boldsymbol{T})~\cfrac{1}{\alpha} + I_3(\boldsymbol{A}^{-1}\cdot\boldsymbol{T})\right) \right] \right|_{\alpha=0} \\ & = \left.\det(\boldsymbol{A})~\cfrac{d}{d\alpha} \left[ 1 + I_1(\boldsymbol{A}^{-1}\cdot\boldsymbol{T})~\alpha + I_2(\boldsymbol{A}^{-1}\cdot\boldsymbol{T})~\alpha^2 + I_3(\boldsymbol{A}^{-1}\cdot\boldsymbol{T})~\alpha^3 \right] \right|_{\alpha=0} \\ & = \left.\det(\boldsymbol{A})~\left[I_1(\boldsymbol{A}^{-1}\cdot\boldsymbol{T}) + 2~I_2(\boldsymbol{A}^{-1}\cdot\boldsymbol{T})~\alpha + 3~I_3(\boldsymbol{A}^{-1}\cdot\boldsymbol{T})~\alpha^2 \right] \right|_{\alpha=0} \\ & = \det(\boldsymbol{A})~I_1(\boldsymbol{A}^{-1}\cdot\boldsymbol{T}) ~. \end{align}$
Recall that the invariant $I_1$ is given by
$I_1(\boldsymbol{A}) = \text{tr}{\boldsymbol{A}} ~.$
Hence,
$\frac{\partial f}{\partial \boldsymbol{A}}:\boldsymbol{T} = \det(\boldsymbol{A})~\text{tr}(\boldsymbol{A}^{-1}\cdot\boldsymbol{T}) = \det(\boldsymbol{A})~[\boldsymbol{A}^{-1}]^T : \boldsymbol{T} ~.$
Invoking the arbitrariness of $\boldsymbol{T}$ we then have
$\frac{\partial f}{\partial \boldsymbol{A}} = \det(\boldsymbol{A})~[\boldsymbol{A}^{-1}]^T ~.$
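The identity can be checked by comparing a central-difference approximation of the directional derivative with the closed form (NumPy sketch, arbitrary well-conditioned example):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 3)) + 3.0 * np.eye(3)     # well-conditioned, det(A) != 0
T = rng.random((3, 3))                       # arbitrary direction
h = 1e-6

# directional derivative of det at A in the direction T, by central differences
num = (np.linalg.det(A + h * T) - np.linalg.det(A - h * T)) / (2.0 * h)

# the closed form: det(A) * [A^{-1}]^T : T
exact = np.linalg.det(A) * np.sum(np.linalg.inv(A).T * T)

assert np.isclose(num, exact, rtol=1e-6)
```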
## Derivatives of the invariants of a tensor
Derivatives of the principal invariants of a tensor The principal invariants of a second order tensor are $\begin{align} I_1(\boldsymbol{A}) & = \text{tr}{\boldsymbol{A}} \\ I_2(\boldsymbol{A}) & = \frac{1}{2} \left[ (\text{tr}{\boldsymbol{A}})^2 - \text{tr}{\boldsymbol{A}^2} \right] \\ I_3(\boldsymbol{A}) & = \det(\boldsymbol{A}) \end{align}$ The derivatives of these three invariants with respect to $\boldsymbol{A}$ are $\begin{align} \frac{\partial I_1}{\partial \boldsymbol{A}} & = \boldsymbol{\mathit{1}} \\ \frac{\partial I_2}{\partial \boldsymbol{A}} & = I_1~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T \\ \frac{\partial I_3}{\partial \boldsymbol{A}} & = \det(\boldsymbol{A})~[\boldsymbol{A}^{-1}]^T = I_2~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T~(I_1~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T) = (\boldsymbol{A}^2 - I_1~\boldsymbol{A} + I_2~\boldsymbol{\mathit{1}})^T \end{align}$
Proof:
From the derivative of the determinant we know that
$\frac{\partial I_3}{\partial \boldsymbol{A}} = \det(\boldsymbol{A})~[\boldsymbol{A}^{-1}]^T ~.$
For the derivatives of the other two invariants, let us go back to the characteristic equation
$\det(\lambda~\boldsymbol{\mathit{1}} + \boldsymbol{A}) = \lambda^3 + I_1(\boldsymbol{A})~\lambda^2 + I_2(\boldsymbol{A})~\lambda + I_3(\boldsymbol{A}) ~.$
Using the same approach as for the determinant of a tensor, we can show that
$\frac{\partial }{\partial \boldsymbol{A}}\det(\lambda~\boldsymbol{\mathit{1}} + \boldsymbol{A}) = \det(\lambda~\boldsymbol{\mathit{1}} + \boldsymbol{A})~[(\lambda~\boldsymbol{\mathit{1}}+\boldsymbol{A})^{-1}]^T ~.$
Now the left hand side can be expanded as
$\begin{align} \frac{\partial }{\partial \boldsymbol{A}}\det(\lambda~\boldsymbol{\mathit{1}} + \boldsymbol{A}) & = \frac{\partial }{\partial \boldsymbol{A}}\left[ \lambda^3 + I_1(\boldsymbol{A})~\lambda^2 + I_2(\boldsymbol{A})~\lambda + I_3(\boldsymbol{A}) \right] \\ & = \frac{\partial I_1}{\partial \boldsymbol{A}}~\lambda^2 + \frac{\partial I_2}{\partial \boldsymbol{A}}~\lambda + \frac{\partial I_3}{\partial \boldsymbol{A}}~. \end{align}$
Hence
$\frac{\partial I_1}{\partial \boldsymbol{A}}~\lambda^2 + \frac{\partial I_2}{\partial \boldsymbol{A}}~\lambda + \frac{\partial I_3}{\partial \boldsymbol{A}} = \det(\lambda~\boldsymbol{\mathit{1}} + \boldsymbol{A})~[(\lambda~\boldsymbol{\mathit{1}}+\boldsymbol{A})^{-1}]^T$
or,
$(\lambda~\boldsymbol{\mathit{1}}+\boldsymbol{A})^T\cdot\left[ \frac{\partial I_1}{\partial \boldsymbol{A}}~\lambda^2 + \frac{\partial I_2}{\partial \boldsymbol{A}}~\lambda + \frac{\partial I_3}{\partial \boldsymbol{A}}\right] = \det(\lambda~\boldsymbol{\mathit{1}} + \boldsymbol{A})~\boldsymbol{\mathit{1}} ~.$
Expanding the right hand side and separating terms on the left hand side gives
$(\lambda~\boldsymbol{\mathit{1}} +\boldsymbol{A}^T)\cdot\left[ \frac{\partial I_1}{\partial \boldsymbol{A}}~\lambda^2 + \frac{\partial I_2}{\partial \boldsymbol{A}}~\lambda + \frac{\partial I_3}{\partial \boldsymbol{A}}\right] = \left[\lambda^3 + I_1~\lambda^2 + I_2~\lambda + I_3\right] \boldsymbol{\mathit{1}}$
or,
$\begin{align} \left[\frac{\partial I_1}{\partial \boldsymbol{A}}~\lambda^3 \right.& \left.+ \frac{\partial I_2}{\partial \boldsymbol{A}}~\lambda^2 + \frac{\partial I_3}{\partial \boldsymbol{A}}~\lambda\right]\boldsymbol{\mathit{1}} + \boldsymbol{A}^T\cdot\frac{\partial I_1}{\partial \boldsymbol{A}}~\lambda^2 + \boldsymbol{A}^T\cdot\frac{\partial I_2}{\partial \boldsymbol{A}}~\lambda + \boldsymbol{A}^T\cdot\frac{\partial I_3}{\partial \boldsymbol{A}} \\ & = \left[\lambda^3 + I_1~\lambda^2 + I_2~\lambda + I_3\right] \boldsymbol{\mathit{1}} ~. \end{align}$
If we define $I_0 := 1$ and $I_4 := 0$, we can write the above as
$\begin{align} \left[\frac{\partial I_1}{\partial \boldsymbol{A}}~\lambda^3 \right.& \left.+ \frac{\partial I_2}{\partial \boldsymbol{A}}~\lambda^2 + \frac{\partial I_3}{\partial \boldsymbol{A}}~\lambda + \frac{\partial I_4}{\partial \boldsymbol{A}}\right]\boldsymbol{\mathit{1}} + \boldsymbol{A}^T\cdot\frac{\partial I_0}{\partial \boldsymbol{A}}~\lambda^3 + \boldsymbol{A}^T\cdot\frac{\partial I_1}{\partial \boldsymbol{A}}~\lambda^2 + \boldsymbol{A}^T\cdot\frac{\partial I_2}{\partial \boldsymbol{A}}~\lambda + \boldsymbol{A}^T\cdot\frac{\partial I_3}{\partial \boldsymbol{A}} \\ &= \left[I_0~\lambda^3 + I_1~\lambda^2 + I_2~\lambda + I_3\right] \boldsymbol{\mathit{1}} ~. \end{align}$
Collecting terms containing various powers of $\lambda$, we get
$\begin{align} \lambda^3&\left(I_0~\boldsymbol{\mathit{1}} - \frac{\partial I_1}{\partial \boldsymbol{A}}~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T\cdot\frac{\partial I_0}{\partial \boldsymbol{A}}\right) + \lambda^2\left(I_1~\boldsymbol{\mathit{1}} - \frac{\partial I_2}{\partial \boldsymbol{A}}~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T\cdot\frac{\partial I_1}{\partial \boldsymbol{A}}\right) + \\ &\qquad \qquad\lambda\left(I_2~\boldsymbol{\mathit{1}} - \frac{\partial I_3}{\partial \boldsymbol{A}}~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T\cdot\frac{\partial I_2}{\partial \boldsymbol{A}}\right) + \left(I_3~\boldsymbol{\mathit{1}} - \frac{\partial I_4}{\partial \boldsymbol{A}}~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T\cdot\frac{\partial I_3}{\partial \boldsymbol{A}}\right) = 0 ~. \end{align}$
Then, invoking the arbitrariness of $\lambda$, we have
$\begin{align} I_0~\boldsymbol{\mathit{1}} - \frac{\partial I_1}{\partial \boldsymbol{A}}~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T\cdot\frac{\partial I_0}{\partial \boldsymbol{A}} & = 0 \\ I_1~\boldsymbol{\mathit{1}} - \frac{\partial I_2}{\partial \boldsymbol{A}}~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T\cdot\frac{\partial I_1}{\partial \boldsymbol{A}} & = 0 \\ I_2~\boldsymbol{\mathit{1}} - \frac{\partial I_3}{\partial \boldsymbol{A}}~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T\cdot\frac{\partial I_2}{\partial \boldsymbol{A}} & = 0 \\ I_3~\boldsymbol{\mathit{1}} - \frac{\partial I_4}{\partial \boldsymbol{A}}~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T\cdot\frac{\partial I_3}{\partial \boldsymbol{A}} & = 0 ~. \end{align}$
This implies that
$\begin{align} \frac{\partial I_1}{\partial \boldsymbol{A}} &= \boldsymbol{\mathit{1}} \\ \frac{\partial I_2}{\partial \boldsymbol{A}} & = I_1~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T\\ \frac{\partial I_3}{\partial \boldsymbol{A}} & = I_2~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T~(I_1~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T) = (\boldsymbol{A}^2 - I_1~\boldsymbol{A} + I_2~\boldsymbol{\mathit{1}})^T \end{align}$
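These three formulas can be verified the same way, by finite differences of the invariants in an arbitrary direction $\boldsymbol{T}$ (NumPy sketch):

```python
import numpy as np

def invariants(A):
    I1 = np.trace(A)
    I2 = 0.5 * (np.trace(A) ** 2 - np.trace(A @ A))
    I3 = np.linalg.det(A)
    return np.array([I1, I2, I3])

rng = np.random.default_rng(1)
A = rng.random((3, 3)) + 2.0 * np.eye(3)
T = rng.random((3, 3))
Id = np.eye(3)
h = 1e-6

num = (invariants(A + h * T) - invariants(A - h * T)) / (2.0 * h)

dI1 = Id
dI2 = invariants(A)[0] * Id - A.T
dI3 = np.linalg.det(A) * np.linalg.inv(A).T

exact = np.array([np.sum(dI1 * T), np.sum(dI2 * T), np.sum(dI3 * T)])
assert np.allclose(num, exact, rtol=1e-5)
```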
## Derivative of the identity tensor
Let $\boldsymbol{\mathit{1}}$ be the second order identity tensor. Then the derivative of this tensor with respect to a second order tensor $\boldsymbol{A}$ is given by
$\frac{\partial \boldsymbol{\mathit{1}}}{\partial \boldsymbol{A}}:\boldsymbol{T} = \boldsymbol{\mathsf{0}}:\boldsymbol{T} = \boldsymbol{\mathit{0}}$
This is because $\boldsymbol{\mathit{1}}$ is independent of $\boldsymbol{A}$.
## Derivative of a tensor with respect to itself
Let $\boldsymbol{A}$ be a second order tensor. Then
$\frac{\partial \boldsymbol{A}}{\partial \boldsymbol{A}}:\boldsymbol{T} = \left[\frac{\partial }{\partial \alpha} (\boldsymbol{A} + \alpha~\boldsymbol{T})\right]_{\alpha = 0} = \boldsymbol{T} = \boldsymbol{\mathsf{I}}:\boldsymbol{T}$
Therefore,
$\frac{\partial \boldsymbol{A}}{\partial \boldsymbol{A}} = \boldsymbol{\mathsf{I}}$
Here $\boldsymbol{\mathsf{I}}$ is the fourth order identity tensor. In index notation with respect to an orthonormal basis
$\boldsymbol{\mathsf{I}} = \delta_{ik}~\delta_{jl}~\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\otimes\mathbf{e}_l$
This result implies that
$\frac{\partial \boldsymbol{A}^T}{\partial \boldsymbol{A}}:\boldsymbol{T} = \boldsymbol{\mathsf{I}}^T:\boldsymbol{T} = \boldsymbol{T}^T$
where
$\boldsymbol{\mathsf{I}}^T = \delta_{jk}~\delta_{il}~\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\otimes\mathbf{e}_l$
Therefore, if the tensor $\boldsymbol{A}$ is symmetric, then the derivative is also symmetric and we get
$\frac{\partial \boldsymbol{A}}{\partial \boldsymbol{A}} = \boldsymbol{\mathsf{I}}^{(s)} = \frac{1}{2}~(\boldsymbol{\mathsf{I}} + \boldsymbol{\mathsf{I}}^T)$
where the symmetric fourth order identity tensor is
$\boldsymbol{\mathsf{I}}^{(s)} = \frac{1}{2}~(\delta_{ik}~\delta_{jl} + \delta_{il}~\delta_{jk}) ~\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\otimes\mathbf{e}_l$
## Derivative of the inverse of a tensor
Derivative of the inverse of a tensor Let $\boldsymbol{A}$ and $\boldsymbol{T}$ be two second order tensors, then $\frac{\partial }{\partial \boldsymbol{A}} \left(\boldsymbol{A}^{-1}\right) : \boldsymbol{T} = - \boldsymbol{A}^{-1}\cdot\boldsymbol{T}\cdot\boldsymbol{A}^{-1}$ In index notation with respect to an orthonormal basis $\frac{\partial A^{-1}_{ij}}{\partial A_{kl}}~T_{kl} = - A^{-1}_{ik}~T_{kl}~A^{-1}_{lj} \implies \frac{\partial A^{-1}_{ij}}{\partial A_{kl}} = - A^{-1}_{ik}~A^{-1}_{lj}$ We also have $\frac{\partial }{\partial \boldsymbol{A}} \left(\boldsymbol{A}^{-T}\right) : \boldsymbol{T} = - \boldsymbol{A}^{-T}\cdot\boldsymbol{T}\cdot\boldsymbol{A}^{-T}$ In index notation $\frac{\partial A^{-1}_{ji}}{\partial A_{kl}}~T_{kl} = - A^{-1}_{jk}~T_{kl}~A^{-1}_{li} \implies \frac{\partial A^{-1}_{ji}}{\partial A_{kl}} = - A^{-1}_{li}~A^{-1}_{jk}$ If the tensor $\boldsymbol{A}$ is symmetric then $\frac{\partial A^{-1}_{ij}}{\partial A_{kl}} = -\cfrac{1}{2}\left(A^{-1}_{ik}~A^{-1}_{jl} + A^{-1}_{il}~A^{-1}_{jk}\right)$
Proof:
Recall that
$\frac{\partial \boldsymbol{\mathit{1}}}{\partial \boldsymbol{A}}:\boldsymbol{T} = \boldsymbol{\mathit{0}}$
Since $\boldsymbol{A}^{-1}\cdot\boldsymbol{A} = \boldsymbol{\mathit{1}}$, we can write
$\frac{\partial }{\partial \boldsymbol{A}}(\boldsymbol{A}^{-1}\cdot\boldsymbol{A}):\boldsymbol{T} = \boldsymbol{\mathit{0}}$
Using the product rule for second order tensors
$\frac{\partial }{\partial \boldsymbol{S}}[\boldsymbol{F}_1(\boldsymbol{S})\cdot\boldsymbol{F}_2(\boldsymbol{S})]:\boldsymbol{T} = \left(\frac{\partial \boldsymbol{F}_1}{\partial \boldsymbol{S}}:\boldsymbol{T}\right)\cdot\boldsymbol{F}_2 + \boldsymbol{F}_1\cdot\left(\frac{\partial \boldsymbol{F}_2}{\partial \boldsymbol{S}}:\boldsymbol{T}\right)$
we get
$\frac{\partial }{\partial \boldsymbol{A}}(\boldsymbol{A}^{-1}\cdot\boldsymbol{A}):\boldsymbol{T} = \left(\frac{\partial \boldsymbol{A}^{-1}}{\partial \boldsymbol{A}}:\boldsymbol{T}\right)\cdot\boldsymbol{A} + \boldsymbol{A}^{-1}\cdot\left(\frac{\partial \boldsymbol{A}}{\partial \boldsymbol{A}}:\boldsymbol{T}\right) = \boldsymbol{\mathit{0}}$
or,
$\left(\frac{\partial \boldsymbol{A}^{-1}}{\partial \boldsymbol{A}}:\boldsymbol{T}\right)\cdot\boldsymbol{A} = - \boldsymbol{A}^{-1}\cdot\boldsymbol{T}$
Therefore,
$\frac{\partial }{\partial \boldsymbol{A}} \left(\boldsymbol{A}^{-1}\right) : \boldsymbol{T} = - \boldsymbol{A}^{-1}\cdot\boldsymbol{T}\cdot\boldsymbol{A}^{-1}$
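A direct finite-difference check of this result (NumPy sketch, arbitrary invertible example):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.random((3, 3)) + 3.0 * np.eye(3)   # invertible example
T = rng.random((3, 3))
h = 1e-6

num = (np.linalg.inv(A + h * T) - np.linalg.inv(A - h * T)) / (2.0 * h)
exact = -np.linalg.inv(A) @ T @ np.linalg.inv(A)

assert np.allclose(num, exact, atol=1e-6)
```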
## Remarks
The boldface notation that I've used is called the Gibbs notation. The index notation that I have used is also called Cartesian tensor notation.
http://physics.stackexchange.com/questions/51346/no-magnetic-field-from-a-static-charge-is-there-a-simple-physical-argument-to
# No magnetic field from a static charge - Is there a simple physical argument to show why?
For a charge moving in an electric field $\vec E$, its equation of motion is given by the electric part of the Lorentz force $$\frac d {dt}\gamma m \vec v = e\vec E$$This comes from the conservation of relativistic energy in a static electric field. But a magnetic field would still make this conservation law true since the magnetic force is always orthogonal to the velocity of the charge and therefore doesn't change its energy.
Is there a simply physical argument that shows why a static charge doesn't create a magnetic field?
I don't really get what the preliminary discussion has to do with the question at the end. If you're interested in a static charge, why are we talking about accelerating charges? – Mark Eichenlaub Jan 15 at 23:56
Whatever arguments people put forth will hopefully not be inconsistent with the fact that an electron at rest DOES have a magnetic field. :-D – Steve B Jan 16 at 0:20
@MarkEichenlaub you're right that it doesn't really need the preamble. But I thought it worth putting in to emphasise that the conservation of relativistic energy partly explains the form of the electric Lorentz force created by a static electric change, and perhaps give a clue to the additional physical argument that is needed to show why it mustn't create a magnetic field. – John McVirgo Jan 16 at 0:24
@SteveB well this question is about a static charge, i.e. an ideal electric monopole, not an electron. – David Zaslavsky♦ Jan 16 at 1:00
@Chris -- It is not true that "magnetic fields can only be caused by time varying electric fields". One counterexample is a loop of wire carrying a DC current. Another counterexample is an electron at rest (since it's at rest, its electric field is not changing over time). – Steve B Feb 1 at 16:08
## 4 Answers
The electric and magnetic fields form one object, the electromagnetic field tensor. This tensor represents an oriented plane at each point in space(time). An easy way to visualize this is to think of the magnetic field vector. Instead of the vector, think about the plane to which that vector is normal. This is the fundamental nature of the EM field.
The electric field is like this, too, except the planes have one direction along the time axis and one direction along a spatial axis.
When a point charge moves through space and time, it traces out a plane. One of the plane's directions is the direction it moves--a stationary charge "moves" through time. The other direction is based on the direction between it and an observer.
Since a charge at rest only moves through time, it sweeps out planes that have at least one timelike direction. This means its EM field contains only electric--not magnetic--components.
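A small numerical sketch of this picture, using one common convention ($F^{0i} = -E_i$, $F^{ij} = -\epsilon_{ijk}B_k$, units with $c = 1$): the field tensor of a charge at rest has only electric entries, and a Lorentz boost mixes them into magnetic entries.

```python
import numpy as np

# field tensor of a static charge at some field point: only electric components
Ex, Ey, Ez = 1.0, 0.5, 0.0
F = np.array([[0.0, -Ex, -Ey, -Ez],
              [Ex, 0.0, 0.0, 0.0],
              [Ey, 0.0, 0.0, 0.0],
              [Ez, 0.0, 0.0, 0.0]])      # contravariant F^{mu nu}

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta ** 2)
L = np.array([[gamma, -gamma * beta, 0.0, 0.0],
              [-gamma * beta, gamma, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])     # boost along x

Fp = L @ F @ L.T                         # F'^{mu nu} = L^mu_a L^nu_b F^{ab}

print("B in rest frame :", F[3, 2], F[1, 3], F[2, 1])    # all zero
print("B after boost   :", Fp[3, 2], Fp[1, 3], Fp[2, 1]) # nonzero components appear
```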
As a guess, I'd say for a static charge, the conservation of momentum requires the direction of the Lorentz force to be independent of the direction of the other moving charge's velocity. This then implies a magnetic field can't be created by a static electric charge. I haven't got a clue how to prove this though lol.
A static charge only produces an electric field; cf. Maxwell's equations, Gauss' law.
The fact is, magnetism is nothing more than electrostatics combined with relativistic motion. Other definitions fall into two classes:
1. Really advanced speculation in the form of theory-of-everything-ness that I won't go into here.
2. Empirical observations from circa the year 1900, where people thought "huh, isn't it nifty that electromagnetism turns out to be relativistically invariant?" If you really believe that the magnetic field is some magical, independent entity that has to be tested in every conceivable configuration, then sure, it's possible that a static charge (or a flux capacitor or a unicorn) will produce a magnetic field.
But, these days we understand that there is only electrostatic attraction/repulsion between charges. A particle's acceleration can always be seen to be due solely to electrostatic effects if you transform into its rest frame. Thus you shouldn't be looking at any energy-conserving rule to derive the Lorentz force. You should start with relativistic mechanics and $F \propto Qq/r^2$, and all of Maxwell's/Lorentz's laws will follow.
http://mathhelpforum.com/math-challenge-problems/117745-geometry-2.html
Thread:
1. Originally Posted by Drexel28
Question:
Spoiler:
What do you mean by "lying in $\mathbb{Q}^2$"? Do you mean that the vertices of the triangle are rational points or that literally every point of the triangle is rational? Because I have a problem with the second one...
Guess, you can't have that. Points in a line segment are not countable - so every point can't be rational. Correct?
Also, what do you mean by tangent of the angle?
2. Originally Posted by aman_cc
Guess, you can't have that. Points in a line segment are not countable - so every point can't be rational. Correct?
Also, what do you mean by tangent of the angle?
Sure you're right, I meant the vertices are in $\mathbb{Q}^2$, my mistake for not paying enough attention to wording.
If a is an interior angle of the triangle, tan(a) is what I meant by the tangent of the angle, and the proposition claims that it is rational.
In the original case of the equilateral triangle, it turns out no equilateral triangle exists with vertices in $\mathbb{Q}^2$, as tan(60 deg) = sqrt(3).
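If anyone wants to poke at the claim numerically before a proof turns up, here is a quick sketch (my own illustration, not part of the original problem) that computes the interior-angle tangents as |u×v|/(u·v) for the two edge vectors at each vertex; the example triangle is made up, and rational vertices can always be rescaled to integer ones without changing the angles:

```python
from fractions import Fraction

def tan_at_vertex(P, Q, R):
    # tan of the interior angle at P, from the edge vectors P->Q and P->R:
    # the cross product is proportional to sin, the dot product to cos,
    # with the same factor |u||v|, so their ratio is exactly tan
    ux, uy = Q[0] - P[0], Q[1] - P[1]
    vx, vy = R[0] - P[0], R[1] - P[1]
    cross = ux * vy - uy * vx
    dot = ux * vx + uy * vy
    return None if dot == 0 else Fraction(abs(cross), dot)   # None = right angle

A, B, C = (0, 0), (3, 1), (1, 4)   # any triangle with vertices in Q^2, scaled to Z^2
print([tan_at_vertex(*t) for t in ((A, B, C), (B, C, A), (C, A, B))])
# -> [Fraction(11, 7), Fraction(11, 3), Fraction(11, 10)], all rational
```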
3. Originally Posted by aman_cc
Guess, you can't have that. Points in a line segment are not countable - so they can't all be rational. Correct?
Also, what do you mean by tangent of the angle?
That is precisely why! When speaking of triangles, we must first talk about the edges of the triangle. These edges are connected and may be interpreted as being homeomorphic to some interval in $\mathbb{R}$, and since any interval in $\mathbb{R}$ is uncountable we can't speak of lines in a countable space such as $\mathbb{Q}^2$.
P.S. I didn't check that my language above was precise, but you get the idea.
4. I almost forgot about the first problem you stated.
The solution:
Spoiler:
Let O be the center of the regular polygon (the center of the circumscribed circle), and let R be the distance from O to the vertices.
The angle between two lines beginning in O and ending in two adjacent vertices is thus $\frac{2\pi}{n}$, and by the formula for the area of a triangle with two given sides and the included angle, the area of the triangle formed by two such lines is $\frac{1}{2}R^2\sin{\frac{2\pi}{n}} = R^2\sin{\frac{\pi}{n}}\cos{\frac{\pi}{n}}$
But there are n such triangles in the regular polygon, so the area of the polygon is $n R^2\sin{\frac{\pi}{n}}\cos{\frac{\pi}{n}}$
Furthermore, if we drop an altitude to the base in one of those isosceles triangles, we come up with $R = \frac{\frac{1}{2} a}{\sin{\frac{\pi}{n}}}$ , and after substitution in the expression for the area we get $n (\frac{\frac{1}{2} a}{\sin{\frac{\pi}{n}}})^2\sin{\frac{\pi}{n}}\cos{ \frac{\pi}{n}} = \frac{1}{4} a^2 n \cot{\frac{\pi}{n}}$ .
Q.E.D
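A quick numerical cross-check of the closed form against the shoelace formula (just an illustration; the polygon placement follows the circumradius used above, and the side length 1 is arbitrary):

```python
import numpy as np

def area_closed_form(n, a):
    # the formula derived above: (1/4) a^2 n cot(pi/n)
    return 0.25 * a * a * n / np.tan(np.pi / n)

def area_shoelace(n, a):
    # place the polygon on its circumscribed circle (R as in the proof)
    # and apply the shoelace formula to its vertices
    R = 0.5 * a / np.sin(np.pi / n)
    t = 2.0 * np.pi * np.arange(n) / n
    x, y = R * np.cos(t), R * np.sin(t)
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

for n in (3, 4, 6, 12):
    print(n, area_closed_form(n, 1.0), area_shoelace(n, 1.0))   # the two agree
```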
5. Originally Posted by Unbeatable0
I almost forgot about the first problem you stated.
The solution:
Spoiler:
Let O be the center of the regular polygon (the center of the circumscribed circle), and let R be the distance from O to the vertices.
The angle between two lines beginning in O and ending in two adjacent vertices is thus $\frac{2\pi}{n}$, and by the formula for the area of a triangle with two given sides and the included angle, the area of the triangle formed by two such lines is $\frac{1}{2}R^2\sin{\frac{2\pi}{n}} = R^2\sin{\frac{\pi}{n}}\cos{\frac{\pi}{n}}$
But there are n such triangles in the regular polygon, so the area of the polygon is $n R^2\sin{\frac{\pi}{n}}\cos{\frac{\pi}{n}}$
Furthermore, if we drop an altitude to the base in one of those isosceles triangles, we come up with $R = \frac{\frac{1}{2} a}{\sin{\frac{\pi}{n}}}$ , and after substitution in the expression for the area we get $n (\frac{\frac{1}{2} a}{\sin{\frac{\pi}{n}}})^2\sin{\frac{\pi}{n}}\cos{ \frac{\pi}{n}} = \frac{1}{4} a^2 n \cot{\frac{\pi}{n}}$ .
Q.E.D
Note now that with this and Pick's theorem we can say that any regular $n$-gon with $\cot\left(\frac{\pi}{n}\right)\notin\mathbb{Q}$ cannot have all its vertices on lattice points. I just came to that conclusion! Sweet! Suck that pentagon!
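Spelling out the lattice-point consequence (a sketch; it uses the standard fact — e.g. via Niven's theorem — that $\tan\frac{\pi}{n}$, and hence $\cot\frac{\pi}{n}$, is rational for $n\ge 3$ only when $n=4$): for a regular $n$-gon with all vertices on lattice points,
$$A \;=\; I + \tfrac{B}{2} - 1 \;\in\; \tfrac{1}{2}\mathbb{Z} \quad\text{(Pick)}, \qquad A \;=\; \tfrac{1}{4}\,a^{2}\,n\cot\tfrac{\pi}{n}, \qquad a^{2}\in\mathbb{Z}\ \text{(squared length of a lattice vector)},$$
so $\cot\frac{\pi}{n}$ would have to be rational, which forces $n=4$.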
6. Originally Posted by Drexel28
Note now that with this and Pick's theorem we can say that any regular $n$-gon with $\cot\left(\frac{\pi}{n}\right)\notin\mathbb{Q}$ cannot have all its vertices on lattice points. I just came to that conclusion! Sweet! Suck that pentagon!
Actually, from the proposition I posted before, it follows that for any regular n-sided polygon, if $\cot{\frac{\pi}{n}}$ is irrational, then it can't have three vertices in $\mathbb{Q}^2$, two of which are adjacent.
Nobody is up to prove the claim in post #14?
7. Originally Posted by Unbeatable0
Actually, from the proposition I posted before, it follows that for any regular n-sided polygon, if $\cot{\frac{\pi}{n}}$ is irrational, then it can't have three vertices in $\mathbb{Q}^2$, two of which are adjacent.
Nobody is up to prove the claim in post #14?
I would like to give it a try, but I am swamped right now. Give me, or someone else, a day or so and then post up the solution otherwise!
8. Originally Posted by Unbeatable0
Prove that in any triangle in $\mathbb{R}^2$ whose vertices are in $\mathbb{Q}^2$, the tangents
of the angles are rational.
I allow myself to post a solution (not mine), after seeing that the problem has not caught too much interest. The proof is in the attached image.
Attached Thumbnails
9. Originally Posted by Unbeatable0
I allow myself to post a solution (not mine), after seeing that the problem has not caught too much interest. The proof is in the attached image.
That was close to my solution! Which I actually logged onto to post! I didn't have a picture though..so this one is decidedly better. Sorry about not doing it. I had Putnam, finals, etc. Thanks for the solution!
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 22, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9619936943054199, "perplexity_flag": "head"}
|
http://www.impan.pl/cgi-bin/dict?convergence
|
## convergence
Moreover, one has estimates on the rate at which this convergence is taking place.
Addressing this issue requires using the convergence properties of Fourier series.
The convergence of the sum on the left is of course a weaker statement than the convergence of (2).
We give $X$ the topology of uniform convergence on compact subsets of $A$.
Almost everywhere convergence is the best we can hope for.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8228174448013306, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/12584?sort=oldest
|
## When does collection imply replacement?
In ordinary membership-based set theory, the axiom schema of replacement states that if $\phi$ is a first-order formula, and $A$ is a set such that for any $x\in A$ there exists a unique $y$ such that $\phi(x,y)$, then there exists a set $B$ such that $y\in B$ if and only if $\phi(x,y)$ for some $x\in A$. That is, $B$ is the "image" of $A$ under the "definable class function" $\phi$.
The related axiom schema of collection modifies this by not requiring $y$ to be unique, but only requiring $B$ to contain some $y$ for each $x$ rather than all of them. However, there are at least two different versions of this.
1. If for all $x\in A$ there exists a $y$ with $\phi(x,y)$, then there exists a set $B$ such that for all $x\in A$ there is a $y\in B$ with $\phi(x,y)$ (this is Wikipedia's version; I'll call it "weak collection").
2. If for all $x\in A$ there exists a $y$ with $\phi(x,y)$, then there exists a set $B$ such that (1) for all $x\in A$ there is a $y\in B$ with $\phi(x,y)$, and (2) for all $y\in B$ there is an $x\in A$ with $\phi(x,y)$ (I'll call this "strong collection").
The third possibly relevant axiom is the axiom schema of separation, which states that for any $\phi$ and any set $A$ there exists a set $B\subseteq A$ such that $x\in B$ if and only if $x\in A$ and $\phi(x)$.
I know the following implications between these axioms:
• Strong collection implies weak collection, since it has the same hypotheses and a stronger conclusion.
• Strong collection implies replacement, since it has a weaker hypothesis and the same conclusion.
• Replacement implies separation (assuming excluded middle): apply replacement to the formula "($\phi(x)$ and $y=\lbrace x\rbrace$) or ($\neg\phi(x)$ and $y=\emptyset$)" and take the union of the resulting set.
• Together with AC and foundation, replacement implies weak collection: let $\psi(x,V)$ assert that $V=V_\alpha$ is the smallest level of the von Neumann hierarchy such that there exists a $y\in V_\alpha$ with $\phi(x,y)$, apply replacement to $\psi$ and take the union of all the resulting $V_\alpha$.
• Weak collection and separation together imply strong collection: separation cuts out the subset of $B$ consisting of those $y$ such that $\phi(x,y)$ for some $x\in A$.
My question is: does weak collection imply replacement (and hence also separation and strong collection) without assuming separation to hold a priori? Feel free to assume all the other axioms of ZFC (including $\Delta_0$-separation). I'm fairly sure the answer is "no," but several sources I've read seem to assume that it does. Can someone give a definitive answer, and ideally a reference?
-
1
That's a neat question! I know the answer is generally no in weak set theories, where such things actually do matter. The answer is no without pairing. Namely the class of ordinals satisfies all the axioms except pairing, replacement, and separation, but it does satisfy foundation, collection, union and powerset (which is just the ordinal successor!). I'll have to think about combining pairing and powerset. By the way, it probably won't cause any confusion but "strong collection" already has a standard meaning, which is different from yours. – François G. Dorais♦ Jan 22 2010 at 2:08
## 2 Answers
The answer is no, if you allow me to adopt some weak-but-equivalent forms of the other axioms. And the reason is interesting:
• A shocking number of the axioms of set theory are true in the non-negative real line R+, with the usual order < being used to interpret set membership. (!)
Let's just check. For example, Extensionality holds, because if two real numbers have the same predecessors, then they are equal. The emptyset axiom holds, since there are no non-negative reals below 0. The Union axiom holds, since for any real x, the reals less than x are precisely the reals that are less than something less than x. (Thus, every set is its own union.) A weakened version of the Power set axiom holds, the Proper Power set, which asserts that for every x, there is a set p whose elements are the strict subsets of x. This is because for any real number x, the reals below x are precisely the reals y (other than x), all of whose predecessors are less than x. (Thus, every real is its own proper power set.) An alternative weakening of power set would say: for every x, there is p such that y subset x implies y in p. This is true in the reals by using any p > x. A weakened pairing axiom states similarly: for every x,y, there is z with x ε z and y ε z. This is true in the reals by using any z above both x and y. The Foundation axiom is no problem, since 0 is in every nonempty set. Also, similarly AC holds in the form about families of disjoint nonempty sets, since this never occurs in this model. The Weak Collection Axiom holds since if every y < x has phi(x,y,w), then in fact any y in the same interval will work (since this structure has many automorphisms), and so we may collect witnesses with any B above x and w. Note that Separation fails, since, for example, {0} does not exist in this model. Also, Replacement fails for the same reason.
Similar interesting models can be built by considering the structure (ORD,<) built using only ordinals, or the class { Vα | α in ORD }. These also satisfy all of the weakened forms of the ZFC axioms without Separation, using Collection in place of Replacement.
Thus, part of the answer to your question is that it depends on what you mean by the "other axioms of ZFC".
Apart from this, however, let me say that the term Weak Collection is usually used to refer to the axioms that restrict the complexity of the formulas in the usual Collection scheme, rather than the axiom that you state. For example, in Kripke Platek set theory KP, a weak fragment of ZFC, one has collection only for Sigma_1 formulas, and this is described as a Weak Collection axiom. (What you call Weak Collection is usually just called Collection.) And there is a correspondingly weakened version of Separation in KP.
But I am happy to adopt your terminology here. You stated that AC plus Replacement implies Weak Collection, but this is not quite right. You don't need any AC. Instead, as your argument shows, what you need is the cumulative Vα hierarchy, which is built on the Power set axiom, not AC. That is, if you have Replacement and Power set and enough else to build the Vα hierarchy, then you get Collection as you described, even if AC fails. For example, ZF can be axiomatized equivalently with either Collection or Replacement.
Your question is a bit unusual, since usually Separation is regarded as a more fundamental axiom than Replacement and Collection, and more in keeping with what we mean by set theory. After all, if one has a set A and a property phi(x), particularly when phi is very simple, it is one of the most basic set theoretic constructions to be able to form { x in A | phi(x) }, and any set theory violating this is not very set-like. We don't really want to consider models of set theory where many instances of Separation fail (for example, Separation for atomic formula is surely elemental).
Incidentally, there is another version of a weakening of Collection in the same vein as what you are considering. Namely, consider the scheme of assertions, whenever phi(x,y) is a property, that says for every set A, there is a set B such that for every a in A, if there is b with phi(a,b), then there is b in B such that phi(a,b).
OK, let me now give a positive answer, with what I think is a more sensible interpretation of your question. I take what I said above (and Dorais's comment) to show that we shouldn't consider set theories where the Separation Axiom fails utterly. Rather, what we want is some very weak set theory, such as the Kripke Platek axioms, and then ask about the relationship between Weak Collection and Replacement over those axioms. And here, you get the positive result as desired.
Theorem. If KP holds, then Weak Collection implies Replacement.
Proof. Assume KP plus (Weak) Collection. First, I claim that this is enough to prove a version of the Reflection Principle, since that proof amounts to taking successive upward Skolem hulls, which is what Collection allows. That is, I claim that for every set x and any formula phi, there is a transitive set Y such that x in Y and phi(w) is absolute between Y and the universe V. This will in effect turn any formula into a Delta0 formula using parameter Y.
Applying this, suppose we have a set A and every a in A has a unique b such that phi(a,b). By the Reflection Principle, let Y be a large transitive set containing A such that phi(a,b) and "exists b phi(a,b)" are absolute between Y and V. So Y has all the desired witnesses b for a in A. But also, now { b | exists a in A phi(a,b) } is a Delta0 definable subset of Y, since we can bound the quantifier again by Y. So the set exists. So Replacement holds. QED
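For concreteness (just restating the last step in symbols, nothing beyond the argument above): the set obtained at the end is
$$B \;=\; \{\, b \in Y \;:\; \exists a \in A\ \varphi(a,b) \,\},$$
and once the quantifiers of $\varphi$ and the outer $\exists a \in A$ are bounded by the reflecting set $Y$, this is a $\Delta_0$ definition with parameter $Y$, so $B$ exists by $\Delta_0$-separation and witnesses the instance of Replacement.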
I think we can get away with much less than KP. Perhaps one way to do the argument is to just prove Separation by induction on the complexity of formulas. One can collect witnesses by (weak) Collection, and this turns the formulas into lower complexity, using the new bound as a bound on the quantifiers.
-
You can get around reflection using the powerset (assuming Delta0 separation). If φ(x) is Delta0 with parameters in a transitive set a, then if there is an x which satisfies φ(x) then there is such an x in the powerset of a. This iterates, adding one more powerset for each quantifier depth. You have to modify the formula a little so the quantifiers range over sets of the appropriate powerset level. – François G. Dorais♦ Jan 22 2010 at 3:06
@Dorais: Is that right? What if phi(x) says "x has a member, which has a member that has a member that is y", where y is the parameter. This seems to push you up several levels beyond the parameters. – Joel David Hamkins Jan 22 2010 at 3:13
@Joel: Ah, right! I should be more careful when switching contexts like that. I was using phi^a(x) when I used this trick before. – François G. Dorais♦ Jan 22 2010 at 3:25
But I think a version of your trick works, where you find the witnesses in H(|a|+). That is, if there is a witness, then take a Skolem hull and collapse things down so that the witness is small. And H(|a|+) is something very like P(a), as you said. For example, every element in H(|a|+) is coded in P(a). – Joel David Hamkins Jan 22 2010 at 3:30
Yes, of course, H(|a|+) is Sigma_1-elementary in V. That could work... – François G. Dorais♦ Jan 22 2010 at 4:07
Since I am currently unable to comment, I will post as an answer.
Suppose in weak collection, we were able to extend phi(x,y) to phi(x,y) and (y in A). I don't know if this needs a parameterized form of collection or what, but it seems one step closer to separation. Perhaps looking at forms of collection involving parameters would help.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 54, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9431218504905701, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/22028/diameter-of-a-metric-on-orbits-under-affine-bijections-of-n-dimensional-convex
|
## Diameter of a metric on orbits under affine bijections of $n-$dimensional convex compact sets
Given two $n-$dimensional convex compact sets $A,B$, we define $d(A,B)$ as $\log({\mathrm{Vol}}(\alpha_2(A)))-\log(\mathrm{Vol}(\alpha_1(A)))$ where $\alpha_1,\alpha_2$ are two affine bijections such that $\alpha_1(A)\subset B\subset\alpha_2(A)$ and such that the ratio $\mathrm{Vol}(\alpha_2(A))/\mathrm{Vol}(\alpha_1(A))$ is minimal. The function $d$ is symmetric, satisfies the triangle inequality, is well-defined for orbits of convex sets under affine bijections and $d(A,B)=0$ if and only if $A$ and $B$ are in the same orbit under affine bijections.
The function $d$ defines thus a distance on the set $\mathcal C_n$ of orbits under affine bijections of $n-$dimensional convex compact sets.
What is the diameter of the metric space $\mathcal C_n$? (It is easy to see that $\mathcal C_n$ is of bounded diameter.) A natural guess is that the diameter is achieved by the distance of (the orbit of) an $n-$dimensional ball to (the orbit of) the $n-$dimensional simplex.
-
Added the banach-spaces tag, even though no Banach spaces appear explicitly in the question, because Banach space specialists are likely to have the most information about it. – Mark Meckes Apr 21 2010 at 13:43
## 1 Answer
I assume you also want your compact sets to have non-empty interior, hence positive volume.
The literature mostly deals with the related Banach-Mazur metric $d_{BM}(A,B)$, in which it is assumed that $\alpha_1(A)$ and $\alpha_2(A)$ are homothetic, so $d_{BM}(A,B) \ge d(A,B)$. (Here I'm following your convention and making $d_{BM}$ a metric, as opposed to the usual definition which makes its logarithm a metric.) Here's a little of what's known about that related to your question:
If $B$ is a Euclidean ball, then $d_{BM}(A,B) \le \log n$, with equality achieved exactly when $A$ is a simplex. Thus the diameter of `$(\mathcal{C}_n, d_{BM})$` is at most $2\log n$. I believe the exact diameter is an open question.
Let $\mathcal{C}_n^0$ be the set of affine equivalence classes of centrally symmetric convex bodies. Then if $B$ is a Euclidean ball, `$d_{BM}(A,B) \le \log \sqrt{n}$`, with equality achieved when $A$ is a cube or a crosspolytope (but not only then); therefore the diameter of `$(\mathcal{C}_n^0,d_{BM})$` is at most $2\log\sqrt{n} = \log n$. Gluskin proved that the diameter of `$(\mathcal{C}_n^0,d_{BM})$` is at least $\log n - c$ for a constant $c$ independent of $n$, by in fact proving the same lower bound for the diameter of `$(\mathcal{C}_n^0,d)$`.
-
Thank you for these precisions. The two metrics are however of a different nature since the underlying sets are different: The Banach-Mazur metric is defined on the set of convex compact sets while the above metric is defined for orbits under affine bijections of convex compact sets. By the way, one of my questions can be easily reformulated for the Banach-Mazur metric: Is it true that the $n-$dimensional simplex is at maximal distance (measured by $d_{BM}$) from the $n-$dimensional ball? – Roland Bacher Apr 21 2010 at 15:06
The Banach-Mazur distance is also defined on the set of affine equivalence classes. A cousin of it, defined on convex bodies themselves without minimizing over affine transformations, is sometimes called "geometric distance". The answer to your last question, as I indicated in the answer above, is yes. – Mark Meckes Apr 21 2010 at 15:13
You are of course right. – Roland Bacher Apr 22 2010 at 16:09
I accept this answer although it does not answer my question completely, the diameter of these metric spaces is still unknown to me. By the way, this question is related to another question (more interesting, in my eyes) on which I am also completely clueless: Can a discrete set of the plane of uniform density intersect all large triangles? – Roland Bacher Apr 26 2010 at 14:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9327362179756165, "perplexity_flag": "head"}
|
http://scicomp.stackexchange.com/questions/5092/a-better-fast-marching-method
|
# A better Fast Marching Method?
I am using the Fast Marching Method (FMM) to calculate shortest "distance" (traveltime) from some points.
The way FMM works is: I keep a velocity function in RAM: V(xi,yj,zk). I also keep a priority queue of all points on the front, sorted on their tentative traveltime value. I repeatedly propagate the first point in this queue one step outwards. As I do this I remove the used point from the queue and insert those "touched" by the front.
My current implementation has two problems:
I. I must keep the whole cost (slowness) function in RAM. This limits the size of the cost function I can use.
II. I would like it to be even faster.
Any suggestions on how to improve my current implementation? For instance would it be possible to implement this on the GPU?
-
Could you give us more information? We can't really suggest improvements to your implementation without knowing which library you got it from, or how you implemented it yourself. – Godric Seer Jan 23 at 13:42
I am the OP. I added a bit extra info (it is being peer-reviewed). – Andy Jan 24 at 8:58
1
– Thomas Klimpel Jan 24 at 15:42
## 1 Answer
My current implementation has two problems:
I. I must keep the whole cost (slowness) function in RAM. This limits the size of the cost function I can use.
II. I would like it to be even faster.
The first point is a bit ambiguous. Just computing the cost function and the traveltime for each grid cell and storing them somewhere should not be an issue. The issue is the access pattern of the fast marching method to this data, which is very cache unfriendly and prevents effective parallelization. For a straightforward implementation, the priority queue is the worst offender for the cache. There are cache optimal priority queues, but even then the memory access pattern of the updates is still very irregular.
The second point is probably most related to the parallelization potential of the algorithm. At least for the second point, one is probably forced to give up on fast marching methods and look at fast sweeping methods instead.
The observation behind the fast sweeping methods is that for the updates it is only important in which of $2^n$ "general directions" the characteristics point. Especially any region where this "general direction" is constant can be updated by a sweep. Still, there are many different ways to turn this observation into an algorithm, and which strategy will be most efficient depends a bit on whether your velocity function leads to large regions with constant "general direction", or whether it creates more a sort of labyrinth.
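To make the discussion concrete, here is a minimal first-order sketch (an illustration in Python/NumPy, not the OP's implementation; the grid size, speed array and source list are placeholders) of the classic FMM loop — tentative values in a binary heap, lazy deletion of stale entries, and the upwind quadratic update:

```python
import heapq
import numpy as np

def fast_marching_2d(speed, sources, h=1.0):
    """Solve |grad T| = 1/speed on a regular grid, first-order upwind,
    starting from the source cells where T = 0."""
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    accepted = np.zeros((ny, nx), dtype=bool)
    heap = []
    for i, j in sources:
        T[i, j] = 0.0
        heapq.heappush(heap, (0.0, i, j))

    def update(i, j):
        # smallest current neighbour value along each grid axis
        tx = min(T[i, j - 1] if j > 0 else np.inf,
                 T[i, j + 1] if j < nx - 1 else np.inf)
        ty = min(T[i - 1, j] if i > 0 else np.inf,
                 T[i + 1, j] if i < ny - 1 else np.inf)
        f = h / speed[i, j]                      # grid spacing * slowness
        a, b = sorted((tx, ty))
        if b - a >= f:                           # only one axis contributes
            return a + f
        # larger root of (t - a)^2 + (t - b)^2 = f^2
        return 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))

    while heap:
        t, i, j = heapq.heappop(heap)            # lazy deletion of stale entries
        if accepted[i, j]:
            continue
        accepted[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx and not accepted[ni, nj]:
                t_new = update(ni, nj)
                if t_new < T[ni, nj]:
                    T[ni, nj] = t_new
                    heapq.heappush(heap, (t_new, ni, nj))
    return T

# uniform speed, single point source: T approximates the Euclidean distance
T = fast_marching_2d(np.ones((101, 101)), [(50, 50)])
```

Even in this small form the memory access pattern of the heap and of the neighbour updates is visibly irregular, which is exactly what makes the method hard to cache-optimize or parallelize.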
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9143530130386353, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/36289/simulating-a-car-in-an-intersection
|
# Simulating a car in an intersection
I'm somewhat confused. I want to simulate in real time an intersection where cars have to turn left, turn right or go straight. What I have are two waypoints: one at the beginning of the intersection on the incoming street and the other at the end of the intersection on the outgoing street. As I know the next waypoint on the outgoing street, I know which direction the car should be pointing.
How would I slow the car down to the optimal speed, calculate its steering angle and correct it in a time interval so that the car drives an optimal curve?
A resource I have found that seems quite good for this is the following paper.
I just don't really understand the first part of the paper where the circular track is calculated. At which point is the steering angle applied?
-
1
It is not clear to me that this is a physics question. Could you say a little more about exactly what you are trying to model here and why? – dmckee♦ Sep 13 '12 at 13:49
I do not agree with the paper in that people are not proportional controllers in adjusting their speed. I think a more reasonable assumption is a) constant (fractional) power acceleration and b) constant value deceleration. – ja72 Sep 13 '12 at 15:30
1
I think you need a diagram or a sketch in order to define the problem as you have it. – ja72 Sep 13 '12 at 16:40
## 1 Answer
Consider a steady corner shown below:
The coordinates of any point along the curve are $x = r - r \cos \varphi$ and $y = r \sin \varphi$. The angle $\varphi$ is computed from the distance traveled $s = r \varphi$ where $r$ is the cornering radius.
If the final point $B$, and orientation $\theta$ are given, then the radius is
$$r = \frac{x_B-x_A}{1-\cos\theta}$$
and
$$y_B-y_A = r \sin \theta$$
The velocity and acceleration vectors at $P$ are:
$$\vec{v} = v(t) (\sin \frac{s(t)}{r}, \cos \frac{s(t)}{r})$$ $$\vec{a} =( \dot{v} \sin \frac{s(t)}{r}+\frac{{v(t)}^2}{r} \cos \frac{s(t)}{r}, \dot{v} \cos \frac{s(t)}{r}-\frac{{v(t)}^2}{r} \sin \frac{s(t)}{r} )$$
Now, since most people don't corner with more than $a_N =\frac{{v(t)}^2}{r}= 0.2g$, the target cornering speed is
$$v(t) = \sqrt{ a_N \; r }$$
If you are accelerating ($\dot v > 0$) or decelerating ($\dot v <0$) to reach the target speed, then make sure you do not exceed the desired cornering acceleration $a_N$ by checking
$$\left(\frac{v^2}{r}\right)^2 + \left(\dot{v}\right)^2 \le a_N^2$$
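As a rough sketch of how the formulas above might be used in a simulator (assumptions: the car enters the corner at A heading along the +y axis as in the derivation, the lateral-acceleration budget is 0.2 g, and the waypoint numbers are made up):

```python
import numpy as np

G = 9.81
A_N = 0.2 * G                                # assumed comfortable lateral acceleration

def corner(xA, yA, xB, theta, n=50):
    """Sample a constant-radius corner that starts at A heading along +y and
    turns through the angle theta (radians, theta > 0); yB is then fixed by
    yB = yA + r*sin(theta)."""
    r = (xB - xA) / (1.0 - np.cos(theta))    # radius from the geometry above
    v_target = np.sqrt(A_N * r)              # keep v(t)^2 / r <= A_N
    phi = np.linspace(0.0, theta, n)         # turned angle (= heading) along the arc
    x = xA + r * (1.0 - np.cos(phi))
    y = yA + r * np.sin(phi)
    return x, y, phi, v_target

x, y, heading, v = corner(0.0, 0.0, 4.0, np.radians(90.0))
print(v)   # target cornering speed in m/s for a 4 m lateral offset, 90 degree turn
```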
-
Most people corner under 0.2g? That explains a lot. – Colin K Oct 13 '12 at 20:50
@ColinK: If you have a smartphone and can see the accelerometer values, take a ride with your friends and make some observations of your own. – ja72 Oct 14 '12 at 0:16
I've gotten up to 0.5 on streets. But I think 0.2 is probably right for normal people. I max out at 0.96 steady state, >1 peak, on street tires in competition. So my standards are not normal :) – Colin K Oct 14 '12 at 3:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9427999258041382, "perplexity_flag": "middle"}
|
http://mathoverflow.net/revisions/29988/list
|
## Return to Question
6 Addendum mentioning new AMM paper.
It is not uncommon to see in a science museum a bicycle with square wheels that rides smoothly over a washboard-like surface made from inverted catenary curves (e.g., at the Munich museum). The square wheel may be generalized to any regular polygon (except the triangle), which rolls on a similar curve without slippage. Here, for example, is a nice Mathematica demo.
My question is: For which wheel shapes does there exist a matching road shape that permits the wheel to roll over it without slippage so that: (a) the wheel center remains horizontal throughout its motion, (b) the wheel can turn at constant angular velocity, and (c) if possible, the wheel center also moves at constant horizontal velocity?
The square satisfies (a) and (b), but only regular hexagons and beyond satisfy (c). If you've experienced a square-wheel bicycle ride, you can feel it jerk because (c) fails to hold. It would be interesting to know the class of closed wheel curves that satisfy (a) and (b), and also those that in addition satisfy (c). For example, must all (a,b) curves be star-shaped from the wheel center $x$? (star-shaped: every point of the curve is visible from $x$).
This is probably all known, so an appropriate reference may suffice.
Addendum
Addendum1 (1July10). The delightful Hall-Wagon paper that user abel found (below) answers many of my questions, and may be the last word (or the most recent work) on the topic. However, it does not seem to address the broader question I posed: For which class of wheel shape curves is a such a wheel-road construction possible? I'll update further if anything comes to light.
Addendum2 (8June11). A paper just appeared in the Amer. Math. Monthly (Vol.118, No.6, 2011), "Roads and Wheels, Roulettes and Pedals," by Fred Kuczmarski, which seems to establish that a wheel-road construction is possible for every
continuously differentiable plane curve such that the angle of rotation of its tangent lines, as measured relative to some initial position, is a strictly monotonic function of arc length. We call such curves rollable. The monotonic condition implies that rollable curves have no inflection points, while the strictness of the monotonicity precludes rollable curves from containing line segments.
Certainly this is not the full class (as he mentions), but he has a nice theorem that constructs a road for any rollable-curve wheel.
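As a concrete illustration of the road–wheel correspondence used by Hall and Wagon (the normalization below is a paraphrase, so treat it as an assumption): if the axle travels along the x-axis above a road $y(x)<0$ and the wheel rolls without slipping with the contact point directly below the axle, the wheel's polar profile about the axle satisfies $r(\theta) = -y(x)$ with $d\theta/dx = -1/y(x)$. The sketch below integrates this for an inverted catenary and checks that the resulting wheel piece is a straight (polygon-side-like) segment:

```python
import numpy as np

def wheel_from_road(road, x):
    """Given a road y = road(x) < 0 and an axle moving along y = 0, rolling
    without slipping (contact point directly below the axle) gives a wheel
    profile r(theta) = -road(x) with dtheta/dx = -1/road(x)."""
    y = road(x)
    r = -y
    dtheta_dx = -1.0 / y
    # trapezoidal integration of dtheta/dx along the road
    theta = np.concatenate(([0.0],
        np.cumsum(0.5 * (dtheta_dx[1:] + dtheta_dx[:-1]) * np.diff(x))))
    return theta, r

# inverted-catenary road: the matching wheel piece should be a straight segment,
# i.e. r(theta)*cos(theta) should be constant (a flat side at depth 1 below the axle)
x = np.linspace(0.0, 0.8, 2001)
theta, r = wheel_from_road(lambda t: -np.cosh(t), x)
print(np.ptp(r * np.cos(theta)))   # ~1e-8: the wheel side is flat
```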
5 Tag added
4 Summarized status in an Addendum. Added the plane-geom tag.
3 added 22 characters in body
2 Added geometry tag.
1
# Generalizing square wheels rolling on inverted catenaries
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9460374712944031, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/4558/treatment-of-boundary-terms-when-applying-the-variational-principle
|
# Treatment of boundary terms when applying the variational principle
One of the main sources of subtlety in the AdS/CFT correspondence is the role played by boundary terms in the action. For example, for a scalar field in AdS there is range of masses just above the Breitenlohner-Freedman bound where there are two possible quantizations and which one you get depends on what boundary terms you add to the action. Boundary terms are also essential in the treatment of first-order Lagrangians for fermions and self-dual tensor fields. These all involve the "UV" boundary as $z \rightarrow 0$ in Poincare coordinates. Then there are dual models of QCD like the hard-wall model where one imposes an IR cutoff and imposes boundary conditions at the IR boundary and/or adds IR boundary terms to the action. My question is a bit vague, but basically I would like references to reviews, books or papers that give a good general treatment of the variational principle when one has to be careful about boundary terms. It would help if they clearly distinguish the requirements that follow from mathematical consistency from those that are imposed because of a desire to model the physics in a certain way.
-
UV boundary terms for AdS at conformal infinity aren't problematic. But in Randall-Sundrum models with a Planck brane, things can get a little bit tricky after quantization. – QGR Feb 4 '11 at 17:08
OK, but I don't find this too helpful without some explanation of what the trickiness is or a reference to the literature. – pho Feb 4 '11 at 18:06
## 2 Answers
I'm not sure how far back you want to go, but one of the earliest "careful" treatments (in the Hamiltonian formalism) is in the paper by Regge and Teitelboim:
• T. Regge and C. Teitelboim, “Role Of Surface Integrals In The Hamiltonian Formulation Of General Relativity,” Annals Phys. 88, 286 (1974)
Analogous work in the Lagrangian formalism didn't happen until much more recently. See the paper by Mann and Marolf, or the treatment in this relevant but shameless self-promotion:
• Mann, Marolf, McNees, and Virmani, "On the Stress Tensor for Asymptotically Flat Gravity," (http://arxiv.org/abs/0804.2079)
As far as calculations specific to AAdS spacetimes there is the paper by Skenderis and Papadimitriou
• Papadimitriou and Skenderis, "Thermodynamics of Asymptotically Locally AdS Spacetimes," (http://arxiv.org/abs/hep-th/0505190)
which attempts to formulate things in a language similar to the Wald paper referenced by Moshe. All of these papers are concerned with the consistent treatment of the variational problem on spacetimes with a spatial infinity. The references of the last two will turn up additional relevant works.
You should immediately discount any paper which claims that the inclusion of the Gibbons-Hawking-York surface term gives a good variational problem for a gravitational theory.
This topic -- the proper variational formulation of gravitational theories -- is one of my main interests. I'm trying to take the Red Line from Loyola down to U of C for seminars this term (as teaching allows), so perhaps we can talk about it some time.
-
Thanks Robert. Let me know next time you come down to U of C. – pho Feb 4 '11 at 13:27
I'll let you know next time I head down, which will hopefully be soon. – Robert McNees Feb 4 '11 at 13:45
@Robert what is the deficiency with the Gibbons-Hawking-York term? – user346 Feb 4 '11 at 17:19
@Robert, yes I'm curious about your comment on G-H-Y as well. That approach seems to be what Wald does in Appendix E and he is usually quite careful about getting things right. – pho Feb 4 '11 at 18:05
The problem isn't with GHY. It's the claim that adding GHY to the EH action leads to a "well-defined variational problem". Take the EH action and add the GHY term. The Schwarzschild solution should be a stationary point of this action, right? Now consider the change in the action due to a small variation of the metric. You will find that the surface terms in the first-order change in the action only vanish if $\delta g$ falls off faster than 1/r. You have an 'action' that allows Dirichlet boundary conditions, but it can't tell between Schwarzschild and other metrics with 1/r asymptotics. – Robert McNees Feb 4 '11 at 19:32
Here is a general treatment, Ishibashi and Wald, in a slightly more formal language than normal in this type of discussion. Not sure if this is what you are looking for though.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9292240738868713, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/54954?sort=oldest
|
## Why is the dimension of Gaussian variables bounded by the dimension of the space?
I'm looking at a probabilistic proof of a local version of Dvoretzky's theorem in Pisier's manuscript "Probabilistic Methods in the Geometry of Banach Spaces."
For each $\epsilon >0$ there is a number $\eta^\prime(\epsilon) > 0$ with the following property. Let $X$ be a Gaussian r.v. with values in a Banach space $B$ of dimension $N.$ Then $B$ contains a subspace $F$ of dimension $n = [\eta^\prime(\epsilon) d(X)]$ which is $(1+\epsilon)$-isomorphic to $\ell^n_2.$ Conversely, if $B$ contains a subspace $F$ with $F \stackrel{1+\epsilon}{\sim} \ell^n_2,$ then there is a $B$-valued Gaussian r.v. $X$ such that $d(X) \geq (1+\epsilon)^{-2}n.$
This statement uses the "dimension" $d(X)$ of a Gaussian variable, $$d(X) = \mathbb{E}\|X\|^2/\sigma(X)^2,$$ where `$$ \sigma(X)^2 = \sup \{ \mathbb{E} \xi(X)^2 \mid \|\xi\|_{B^\star} \leq 1 \} $$` is the weak variance of $X.$
For this to match the usual $n = O(\log N)$ statement of the theorem, you'd need a lower bound on $d(X)$ of order $\log N,$ and as a sanity check an upper bound of $O(N).$
Any hints or references on how to show these two bounds on $d(X)$? Pisier states that the upper bound $d(X) \leq N$ is easy to show, but I've not been able to prove even that.
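As a sanity check of the two scales mentioned above (standard computations, stated here for illustration rather than taken from Pisier's text): for $X$ a standard Gaussian vector,
$$\ell_2^N:\ \ \mathbb{E}\|X\|_2^2=N,\quad \sigma(X)^2=\sup_{\|\xi\|_2\le1}\mathbb{E}\,\xi(X)^2=1,\quad d(X)=N; \qquad\qquad \ell_\infty^N:\ \ \mathbb{E}\|X\|_\infty^2\asymp\log N,\quad \sigma(X)^2=1,\quad d(X)\asymp\log N,$$
so the bound $d(X)\le N$ is attained by the Euclidean norm, and the $\log N$ scale of Dvoretzky's theorem already appears for the sup norm.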
-
## 1 Answer
None of these estimates is trivial. The upper bound follows from John's theorem that the Banach-Mazur distance between an N dimensional normed space and an N dimensional Euclidean space is at most $\sqrt N$. The lower bound is more involved and uses the Dvoretzky-Rogers lemma. A good reference for the Gaussian approach to Dvoretzky's theorem is Pisier's book: The volume of convex bodies and Banach space geometry. For a more geometrical approach see: Milman and Schechtman: Asymptotic theory of finite-dimensional normed spaces. The theorem and its proof are presented in other books as well.
-
Thanks! I'll check out the mentioned references. – Alex Gittens Feb 11 2011 at 2:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9073767066001892, "perplexity_flag": "head"}
|
http://en.wikipedia.org/wiki/Bell's_Theorem
|
# Bell's theorem
Bell's theorem is a no-go theorem famous for drawing an important line in the sand between quantum mechanics (QM) and the world as we know it classically. In its simplest form, Bell's theorem states:[1]
No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics.
When the new quantum theory was introduced in 1927, its philosophical implications were troubling to many prominent physicists of the day, including Albert Einstein. In a well-known 1935 paper, Einstein and co-authors Boris Podolsky and Nathan Rosen (collectively EPR) argued, by means of a paradox, that QM was incomplete. This provided hope that a more complete (and less troubling) theory might one day be discovered. But that conclusion rested on the seemingly reasonable assumptions of locality and realism (together called "local realism" or "local hidden variables", often interchangeably). In the vernacular of Einstein: locality meant no instantaneous ("spooky") action at a distance; realism meant the moon is there even when not being observed. These assumptions were hotly debated within the physics community, notably between Nobel laureates Einstein and Niels Bohr.
In his groundbreaking 1964 paper, "On the Einstein Podolsky Rosen paradox", physicist John Stewart Bell presented an analogy (based on spin measurements on pairs of entangled electrons) to EPR's hypothetical paradox. Using their reasoning, he said, a choice of measurement setting here should not affect the outcome of a measurement there (and vice versa). After providing a mathematical formulation of locality and realism based on this, he showed specific cases where this would be inconsistent with the predictions of QM.
In experimental tests following Bell's example, now using quantum entanglement of photons instead of electrons, John Clauser and Stuart Freedman (1972) and Alain Aspect et al. (1981) convincingly demonstrated that the predictions of QM are correct in this regard. While this does not demonstrate QM is complete, one is forced to reject either locality or realism (or both).
Cornell solid-state physicist David Mermin has described the various appraisals of the importance of Bell's theorem within the physics community as ranging from "indifference" to "wild extravagance".[2] Lawrence Berkeley particle physicist Henry Stapp declared: “Bell’s theorem is the most profound discovery of science.”[3]
## Overview
Bell’s theorem states that the concept of local realism, favoured by Einstein,[4] yields predictions that disagree with those of quantum mechanical theory. Because numerous experiments agree with the predictions of quantum mechanical theory, and show correlations that are, according to Bell, greater than could be explained by local hidden variables, the experimental results have been taken by many as refuting the concept of local realism as an explanation of the physical phenomena under test. For a hidden variable theory, if Bell's conditions are correct, then the results which are in agreement with quantum mechanical theory appear to evidence superluminal effects, in contradiction to the principle of locality.
Illustration of Bell test for particles such as photons. A source produces a singlet pair, one particle is sent to one location, and the other is sent to another location. A measurement of the entangled property is performed at various angles at each location.
The theorem applies to any quantum system of two entangled qubits. The most common examples concern systems of particles that are entangled in spin or polarization.
Following the argument in the Einstein–Podolsky–Rosen (EPR) paradox paper (but using the example of spin, as in David Bohm's version of the EPR argument[5][6]), Bell considered an experiment in which there are "a pair of spin one-half particles formed somehow in the singlet spin state and moving freely in opposite directions."[5] The two particles travel away from each other to two distant locations, at which measurements of spin are performed, along axes that are independently chosen. Each measurement yields a result of either spin-up (+) or spin-down (−).
The probability of the same result being obtained at the two locations varies, depending on the relative angles at which the two spin measurements are made, and is subject to some uncertainty for all relative angles other than perfectly parallel or anti-parallel alignments (0° or 180°). Bell's theorem thus applies only to the statistical results from many trials of the experiment. For this reason, the terms "correlated", "anti-correlated", and "uncorrelated" apply only to sets of many pairs of measurements. The correlation of two binary variables can be defined as the average of the product of the two outcomes over the pairs of measurements; this definition is in accordance with the definition of covariance between real-valued random variables. Using this definition, if the pairs of outcomes are always the same, the correlation will be +1, whichever value each pair of outcomes takes. If the pairs of outcomes are always opposite, the correlation will be −1. Finally, if the pairs of outcomes agree half of the time and disagree half of the time, the correlation, being an average, will be 0. Measuring the spins of the entangled particles along anti-parallel directions, i.e. along the same axis but in opposite directions, yields results that are perfectly correlated. Measurements performed along parallel directions always yield opposite results, so that set of measurements shows perfect anti-correlation. Finally, measurements at perpendicular directions have a 50% chance of matching, and the total set of measurements is uncorrelated. These basic cases are illustrated in the tables below.
| Anti-parallel | Pair 1 | Pair 2 | Pair 3 | Pair 4 | … | Pair n |
|---------------|--------|--------|--------|--------|---|--------|
| Alice, 0°     | +      | −      | +      | +      | … | −      |
| Bob, 180°     | +      | −      | +      | +      | … | −      |

Correlation = ( +1 +1 +1 +1 … +1 ) / n = +1 (100% identical)

| Parallel         | Pair 1 | Pair 2 | Pair 3 | Pair 4 | … | Pair n |
|------------------|--------|--------|--------|--------|---|--------|
| Alice, 0°        | +      | −      | −      | +      | … | +      |
| Bob, 0° or 360°  | −      | +      | +      | −      | … | −      |

Correlation = ( −1 −1 −1 −1 … −1 ) / n = −1 (100% opposite)

| Orthogonal       | Pair 1 | Pair 2 | Pair 3 | Pair 4 | … | Pair n |
|------------------|--------|--------|--------|--------|---|--------|
| Alice, 0°        | +      | −      | +      | −      | … | −      |
| Bob, 90° or 270° | −      | −      | +      | +      | … | −      |

Correlation = ( −1 +1 +1 −1 … +1 ) / n = 0 (50% identical, 50% opposite)
The local realist prediction (solid lines) for quantum correlation for spin (assuming 100% detector efficiency). The quantum mechanical prediction is the dotted (cosine) curve. In this plot the opposite convention with respect to the values used in the text is adopted: "-1" for correlation and "1" for anti-correlation
With the measurements oriented at intermediate angles between these basic cases, the existence of local hidden variables would imply a linear variation in the correlation. However, according to quantum mechanical theory, the correlation varies as the cosine of the angle. Experimental results match the curve predicted by quantum mechanics.[1]
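To make the linear-versus-cosine comparison concrete, here is a small Monte Carlo sketch (an illustration of one simple local hidden-variable model, not of any particular experiment): a shared random axis λ determines both outcomes, which reproduces the perfect (anti-)correlations at 0° and 180° but only a correlation that is linear in the angle in between, whereas the quantum singlet prediction is the cosine curve.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
lam = rng.uniform(0.0, 2.0 * np.pi, n)       # shared hidden variable, one per pair

for deg in range(0, 181, 30):
    th = np.radians(deg)                     # angle between the two analysers
    A = np.sign(np.cos(0.0 - lam))           # Alice's outcome, +/-1
    B = -np.sign(np.cos(th - lam))           # Bob's outcome in this singlet-like model
    lhv = np.mean(A * B)                     # comes out linear: 2*th/pi - 1
    qm = -np.cos(th)                         # quantum singlet prediction
    print(f"{deg:3d} deg   LHV {lhv:+.3f}   QM {qm:+.3f}")
```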
Bell achieved his breakthrough by first deriving the results that he posits local realism would necessarily yield. Bell claimed that, without making any assumptions about the specific form of the theory beyond requirements of basic consistency, the mathematical inequality he discovered was clearly at odds with the results (described above) predicted by quantum mechanics and, later, observed experimentally. If correct, Bell's theorem appears to rule out local hidden variables as a viable explanation of quantum mechanics (though it still leaves the door open for non-local hidden variables). Bell concluded:
In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements, without changing the statistical predictions, there must be a mechanism whereby the setting of one measuring device can influence the reading of another instrument, however remote. Moreover, the signal involved must propagate instantaneously, so that a theory could not be Lorentz invariant.
—[5]
Over the years, Bell's theorem has undergone a wide variety of experimental tests. However, various common deficiencies in the testing of the theorem have been identified, including the detection loophole[7] and the communication loophole.[7] Over the years experiments have been gradually improved to better address these loopholes, but no experiment to date has simultaneously fully addressed all of them.[7] However, it is generally considered unreasonable that such an experiment, if conducted, would give results that are inconsistent with the prior experiments. For example, Anthony Leggett has commented:
[While] no single existing experiment has simultaneously blocked all of the so-called ‘‘loopholes’’, each one of those loopholes has been blocked in at least one experiment. Thus, to maintain a local hidden variable theory in the face of the existing experiments would appear to require belief in a very peculiar conspiracy of nature.[8]
To date, Bell's theorem is generally regarded as supported by a substantial body of evidence and is treated as a fundamental principle of physics in mainstream quantum mechanics textbooks.[9][10]
## Importance of the theorem
Bell's theorem, derived in his seminal 1964 paper titled On the Einstein Podolsky Rosen paradox,[5] has been called, on the assumption that the theory is correct, "the most profound in science".[11] Perhaps of equal importance is Bell's deliberate effort to encourage and bring legitimacy to work on the completeness issues, which had fallen into disrepute.[12] Later in his life, Bell expressed his hope that such work would "continue to inspire those who suspect that what is proved by the impossibility proofs is lack of imagination."[13]
The title of Bell's seminal article refers to the famous paper by Einstein, Podolsky and Rosen[14] that challenged the completeness of quantum mechanics. In his paper, Bell started from the same two assumptions as did EPR, namely (i) reality (that microscopic objects have real properties determining the outcomes of quantum mechanical measurements), and (ii) locality (that reality in one location is not influenced by measurements performed simultaneously at a distant location). Bell was able to derive from those two assumptions an important result, namely Bell's inequality, implying that at least one of the assumptions must be false.
In two respects Bell's 1964 paper was a step forward compared to the EPR paper: firstly, it considered more hidden variables than merely the element of physical reality in the EPR paper; and secondly, Bell's inequality was, in part, amenable to experimental test, thus raising the possibility of testing the local realism hypothesis. Limitations on such tests to date are noted below. Whereas Bell's paper deals only with deterministic hidden variable theories, Bell's theorem was later generalized to stochastic theories[15] as well, and it was also realised[16] that the theorem is not so much about hidden variables as about the outcomes of measurements which could have been done instead of the one actually performed. Existence of these variables is called the assumption of realism, or the assumption of counterfactual definiteness.
After the EPR paper, quantum mechanics was in an unsatisfactory position: either it was incomplete, in the sense that it failed to account for some elements of physical reality, or it violated the principle of a finite propagation speed of physical effects. In a modified version of the EPR thought experiment, two hypothetical observers, now commonly referred to as Alice and Bob, perform independent measurements of spin on a pair of electrons, prepared at a source in a special state called a spin singlet state. It is the conclusion of EPR that once Alice measures spin in one direction (e.g. on the x axis), Bob's measurement in that direction is determined with certainty, as being the opposite outcome to that of Alice, whereas immediately before Alice's measurement Bob's outcome was only statistically determined (i.e., was only a probability, not a certainty); thus, either the spin in each direction is an element of physical reality, or the effects travel from Alice to Bob instantly.
In QM, predictions are formulated in terms of probabilities — for example, the probability that an electron will be detected in a particular place, or the probability that its spin is up or down. The idea persisted, however, that the electron in fact has a definite position and spin, and that QM's weakness is its inability to predict those values precisely. The possibility existed that some unknown theory, such as a hidden variables theory, might be able to predict those quantities exactly, while at the same time also being in complete agreement with the probabilities predicted by QM. If such a hidden variables theory exists, then because the hidden variables are not described by QM the latter would be an incomplete theory.
Two assumptions drove the desire to find a local realist theory:
1. Objects have a definite state that determines the values of all other measurable properties, such as position and momentum.
2. Effects of local actions, such as measurements, cannot travel faster than the speed of light (in consequence of special relativity). Thus if observers are sufficiently far apart, a measurement made by one can have no effect on a measurement made by the other.
In the form of local realism used by Bell, the predictions of the theory result from the application of classical probability theory to an underlying parameter space. By a simple argument based on classical probability, he showed that correlations between measurements are bounded in a way that is violated by QM.
Bell's theorem seemed to put an end to local realism. This is because, if the theorem is correct, then either quantum mechanics or local realism is wrong, as they are mutually exclusive. The paper noted that "it requires little imagination to envisage the experiments involved actually being made",[5] to determine which of them is correct. It took many years and many improvements in technology to perform tests along the lines Bell envisaged. The tests are, in theory, capable of showing whether local hidden variable theories as envisaged by Bell accurately predict experimental results. The tests are not capable of determining whether Bell has accurately described all local hidden variable theories.
The Bell test experiments have been interpreted as showing that the Bell inequalities are violated in favour of QM. The no-communication theorem shows that the observers cannot use the effect to communicate (classical) information to each other faster than the speed of light, but the ‘fair sampling’ and ‘no enhancement’ assumptions require more careful consideration (below). That interpretation follows not from any clear demonstration of super-luminal communication in the tests themselves, but solely from Bell's theory that the correctness of the quantum predictions necessarily precludes any local hidden-variable theory. If that theoretical contention is not correct, then the "tests" of Bell's theory to date do not show anything either way about the local or non-local nature of the phenomena.
## Bell inequalities
Bell inequalities concern measurements made by observers on pairs of particles that have interacted and then separated. According to quantum mechanics they are entangled, while local realism would limit the correlation of subsequent measurements of the particles.
Different authors subsequently derived inequalities similar to Bell's original inequality, and these are here collectively termed Bell inequalities. All Bell inequalities describe experiments in which the predicted result from quantum entanglement differs from that flowing from local realism. The inequalities assume that each quantum-level object has a well-defined state that accounts for all its measurable properties and that distant objects do not exchange information faster than the speed of light. These well-defined states are typically called hidden variables, the properties that Einstein posited when he stated his famous objection to quantum mechanics: "God does not play dice."
Bell showed that under quantum mechanics, the mathematics of which contains no local hidden variables, the Bell inequalities can nevertheless be violated: the properties of a particle are not clear, but may be correlated with those of another particle due to quantum entanglement, allowing their state to be well defined only after a measurement is made on either particle. That restriction agrees with the Heisenberg uncertainty principle, a fundamental concept in quantum mechanics.
In Bell's words:
Theoretical physicists live in a classical world, looking out into a quantum-mechanical world. The latter we describe only subjectively, in terms of procedures and results in our classical domain. (…) Now nobody knows just where the boundary between the classical and the quantum domain is situated. (…) More plausible to me is that we will find that there is no boundary. The wave functions would prove to be a provisional or incomplete description of the quantum-mechanical part. It is this possibility, of a homogeneous account of the world, which is for me the chief motivation of the study of the so-called "hidden variable" possibility.
(…) A second motivation is connected with the statistical character of quantum-mechanical predictions. Once the incompleteness of the wave function description is suspected, it can be conjectured that random statistical fluctuations are determined by the extra "hidden" variables — "hidden" because at this stage we can only conjecture their existence and certainly cannot control them.
(…) A third motivation is in the peculiar character of some quantum-mechanical predictions, which seem almost to cry out for a hidden variable interpretation. This is the famous argument of Einstein, Podolsky and Rosen. (…) We will find, in fact, that no local deterministic hidden-variable theory can reproduce all the experimental predictions of quantum mechanics. This opens the possibility of bringing the question into the experimental domain, by trying to approximate as well as possible the idealized situations in which local hidden variables and quantum mechanics cannot agree.[17]
In probability theory, repeated measurements of system properties can be regarded as repeated sampling of random variables. In Bell's experiment, Alice can choose a detector setting to measure either $A(a)$ or $A(a')$ and Bob can choose a detector setting to measure either $B(b)$ or $B(b')$. Measurements of Alice and Bob may be somehow correlated with each other, but the Bell inequalities say that if the correlation stems from local random variables, there is a limit to the amount of correlation one might expect to see.
### Original Bell's inequality
The original inequality that Bell derived was:[5]
$1 + \operatorname{\rho}(B, C) \geq |\operatorname{\rho}(A, B) - \operatorname{\rho}(A, C)|,$
where ρ is the "correlation" of the particle pairs and A, B and C settings of the apparatus. This inequality is not used in practice. For one thing, it is true only for genuinely "two-outcome" systems, not for the "three-outcome" ones (with possible outcomes of zero as well as +1 and −1) encountered in real experiments. For another, it applies only to a very restricted set of hidden variable theories, namely those for which the outcomes on both sides of the experiment are always exactly anticorrelated when the analysers are parallel, in agreement with the quantum mechanical prediction.
Nevertheless, a simple limit of Bell's inequality has the virtue of being quite intuitive. If the result of three different statistical coin-flips A, B, and C have the property that:
1. A and B are the same (both heads or both tails) 99% of the time
2. B and C are the same 99% of the time,
then A and C are the same at least 98% of the time. The number of mismatches between A and B (1/100) plus the number of mismatches between B and C (1/100) are together the maximum possible number of mismatches between A and C (a simple Boole–Fréchet inequality).
In quantum mechanics, however, by letting A, B, and C be the values of the spin of two entangled particles measured relative to some axis at 0 degrees, θ degrees, and 2θ degrees respectively, the overlap of the wavefunction between the different angles is proportional to cos(Sθ) ≈ 1 − S²θ²/2. The probability that A and B give the same answer is 1 − ε², where ε is proportional to θ. This is also the probability that B and C give the same answer.
But A and C are the same 1 − (2ε)² of the time. Choosing the angle so that ε = 0.1, A and B are 99% correlated, B and C are 99% correlated, but now A and C are only 96% correlated!
Imagine that two entangled particles in a spin singlet are shot out to two distant locations, and the spins of both are measured in the direction A. The spins are 100% correlated (actually, anti-correlated, but for this argument that is equivalent). The same is true if both spins are measured in directions B or C. It is safe to conclude that any hidden variables that determine the A, B, and C measurements in the two particles are 100% correlated, and can be used interchangeably. If A is measured on one particle and B on the other, the correlation between them is 99%. If B is measured on one and C on the other, the correlation is 99%. This allows us to conclude that the hidden variables determining A and B are 99% correlated, and B and C are 99% correlated.
But if A is measured in one particle and C in the other, the quantum mechanical results are only 96% correlated, which is a contradiction. This intuitive formulation is due to David Mermin, while the small-angle limit is emphasized in Bell's original article.
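The numbers in this argument are easy to reproduce. The following minimal Python sketch is an illustration only; it uses the standard quantum mechanical mismatch probability sin²(δ/2) for two singlet measurements whose axes differ by δ, with one side's outcomes sign-flipped so that identical settings always agree. It recovers the 99% / 99% / 96% pattern quoted above:

```python
import numpy as np

def mismatch(delta):
    """Probability that the two (sign-adjusted) singlet outcomes differ
    when the measurement axes are an angle `delta` apart."""
    return np.sin(delta / 2) ** 2

# Choose theta so that neighbouring settings (0, theta) and (theta, 2*theta)
# disagree exactly 1% of the time, as in the text.
theta = 2 * np.arcsin(np.sqrt(0.01))

p_ab = mismatch(theta)        # ~0.01  -> A and B agree 99% of the time
p_bc = mismatch(theta)        # ~0.01  -> B and C agree 99% of the time
p_ac = mismatch(2 * theta)    # ~0.04  -> A and C agree only ~96% of the time

print(p_ab, p_bc, p_ac)
print("classical bound on p_ac:", p_ab + p_bc)   # 0.02, clearly exceeded
```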
### CHSH inequality
Main article: CHSH inequality
In addition to Bell's original inequality,[5] the form given by John Clauser, Michael Horne, Abner Shimony and R. A. Holt,[18] (the CHSH form) is especially important,[18] as it gives classical limits to the expected correlation for the above experiment conducted by Alice and Bob:
$\ (1) \quad \mathbf{C}[A(a), B(b)] + \mathbf{C}[A(a), B(b')] + \mathbf{C}[A(a'), B(b)] - \mathbf{C}[A(a'), B(b')] \leq 2$
where C denotes correlation.
Correlation of observables X, Y is defined as
$\mathbf{C}(X,Y) = \operatorname{E}(X Y)$
Where $\operatorname{E}(Z)$ represents the expected or average value of $Z$
This is a non-normalized form of the correlation coefficient considered in statistics (see Quantum correlation).
To formulate Bell's theorem, we formalize local realism as follows:
1. There is a probability space $\Lambda$ and the observed outcomes by both Alice and Bob result by random sampling of the parameter $\lambda \in \Lambda$.
2. The values observed by Alice or Bob are functions of the local detector settings and the hidden parameter only. Thus
• Value observed by Alice with detector setting $\scriptstyle a$ is $\scriptstyle A(a,\lambda)$
• Value observed by Bob with detector setting $\scriptstyle b$ is $\scriptstyle B(b,\lambda)$
Implicit in assumption 1) above, the hidden parameter space $\scriptstyle\Lambda$ has a probability measure $\scriptstyle\rho$ and the expectation of a random variable X on $\scriptstyle\Lambda$ with respect to $\scriptstyle\rho$ is written
$\operatorname{E}(X) = \int_\Lambda X(\lambda) \rho(\lambda) d \lambda$
where for accessibility of notation we assume that the probability measure has a density.
Bell's inequality. The CHSH inequality (1) holds under the hidden variables assumptions above.
For simplicity, let us first assume the observed values are +1 or −1; we remove this assumption in Remark 1 below.
Let $\lambda \in \Lambda$. Then at least one of
$B(b, \lambda) + B(b', \lambda), \quad B(b, \lambda) - B(b', \lambda)$
is 0. Thus
$\begin{align} &\quad A(a, \lambda) B(b, \lambda) + A(a, \lambda) B(b', \lambda) + A(a', \lambda) B(b, \lambda) - A(a', \lambda) \ B(b', \lambda)\\ &= A(a, \lambda) \left[B(b, \lambda) + B(b', \lambda)\right] + A(a', \lambda) \left[B(b, \lambda) - B(b', \lambda)\right]\\ &\leq 2 \end{align}$
and therefore
$\begin{align} &\quad \mathbf{C}(A(a), B(b)) + \mathbf{C}(A(a), B(b')) + \mathbf{C}(A(a'), B(b)) - \mathbf{C}(A(a'), B(b'))&\\ &= \int_\Lambda A(a, \lambda) B(b, \lambda) \rho(\lambda) d \lambda + \int_\Lambda A(a, \lambda) B(b', \lambda) \rho(\lambda) d \lambda + \int_\Lambda A(a', \lambda) B(b, \lambda) \rho(\lambda) d \lambda - \int_\Lambda A(a', \lambda) B(b', \lambda) \rho(\lambda) d \lambda&\\ &= \int_\Lambda \big\{ A(a, \lambda) B(b, \lambda) + A(a, \lambda) B(b', \lambda) + A(a', \lambda) B(b, \lambda) - A(a', \lambda) B(b', \lambda) \big\} \rho(\lambda) d \lambda&\\ &= \int_\Lambda \big\{ A(a, \lambda) \left[ B(b, \lambda) + B(b', \lambda) \right] + A(a', \lambda) \left[ B(b, \lambda) - B(b', \lambda) \right] \big\} \rho(\lambda) d \lambda\\ &\leq 2 \end{align}$
Remark 1
The correlation inequality (1) still holds if the variables $A(a,\lambda)$, $B(b,\lambda)$ are allowed to take on any real values between −1 and +1. Indeed, the relevant idea is that each summand in the above average is bounded above by 2. This is easily seen as true in the more general case:
$\begin{align} &\quad A(a, \lambda) B(b, \lambda) + A(a, \lambda) B(b', \lambda) + A(a', \lambda) B(b, \lambda) - A(a', \lambda) B(b', \lambda)\\ &= A(a, \lambda) \left[B(b, \lambda) + B(b', \lambda)\right] + A(a', \lambda) \left[B(b, \lambda) - B(b', \lambda)\right]\\ &\leq \big| A(a, \lambda) \left[B(b, \lambda) + B(b', \lambda)\right] + A(a', \lambda) \left[B(b, \lambda) - B(b', \lambda)\right] \big|\\ &\leq \big| A(a, \lambda) \left[B(b, \lambda) + B(b', \lambda)\right] \big| + \big| A(a', \lambda) \left[B(b, \lambda) - B(b', \lambda)\right] \big|\\ &\leq \big| B(b, \lambda) + B(b', \lambda) \big| + \big| B(b, \lambda) - B(b', \lambda) \big| \leq 2 \end{align}$
To justify the upper bound 2 asserted in the last inequality, without loss of generality, we can assume that
$B(b, \lambda) \geq B(b', \lambda) \geq 0$
In that case
$\begin{align} & \big|B(b, \lambda) + B(b', \lambda)\big| + \big|B(b, \lambda) - B(b', \lambda)\big|\\ &= B(b, \lambda) + B(b', \lambda) + B(b, \lambda) - B(b', \lambda)\\ &= 2B(b, \lambda) \leq 2 \end{align}$
Remark 2
Though the important component of the hidden parameter $\scriptstyle\lambda$ in Bell's original proof is associated with the source and is shared by Alice and Bob, there may be others that are associated with the separate detectors, these others being conditionally independent given the first, and with conditional probability distributions only depending on the corresponding local setting (if dependent on the settings at all). This argument was used by Bell in 1971, and again by Clauser and Horne in 1974,[15] to justify a generalisation of the theorem forced on them by the real experiments, in which detectors were never 100% efficient. The derivations were given in terms of the averages of the outcomes over the local detector variables. The formalisation of local realism was thus effectively changed, replacing A and B by averages and retaining the symbol $\scriptstyle\lambda$ but with a slightly different meaning. It was henceforth restricted (in most theoretical work) to mean only those components that were associated with the source.
However, with the extension proved in Remark 1, CHSH inequality still holds even if the instruments themselves contain hidden variables. In that case, averaging over the instrument hidden variables gives new variables:
$\overline{A}(a, \lambda), \quad \overline{B}(b, \lambda)$
on $\scriptstyle\Lambda$, which still have values in the range [−1, +1] to which we can apply the previous result.
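Before turning to the quantum mechanical prediction, it is instructive to see the bound hold in a toy model. The Python sketch below uses an illustrative deterministic response function and four illustrative settings (neither taken from Bell's paper); it samples a shared hidden parameter λ, estimates the four correlations by averaging exactly as in the integral above, and confirms that the CHSH combination never exceeds 2:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = rng.uniform(0, 2 * np.pi, 200_000)   # shared hidden parameter, sampled from rho

def outcome(setting, lam):
    # A local deterministic model: each side's +/-1 outcome depends only on
    # its own setting and on lambda, as locality requires.
    return np.sign(np.cos(setting - lam))

a, a2 = 0.0, np.pi / 2             # Alice's two settings
b, b2 = np.pi / 4, 3 * np.pi / 4   # Bob's two settings

def C(x, y):
    # Empirical estimate of the correlation E[A(x, lam) * B(y, lam)].
    return np.mean(outcome(x, lam) * outcome(y, lam))

S = C(a, b) + C(a, b2) + C(a2, b) - C(a2, b2)
print(S, abs(S) <= 2)              # stays within the CHSH bound of 2
```

Other choices of local response functions or settings give different values of S, but never one larger than 2 in absolute value.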
## Bell inequalities are violated by quantum mechanical predictions
In the usual quantum mechanical formalism, the observables X and Y are represented as self-adjoint operators on a Hilbert space. To compute the correlation, assume that X and Y are represented by matrices in a finite dimensional space and that X and Y commute; this special case suffices for our purposes below. The von Neumann measurement postulate states: a series of measurements of an observable X on a series of identical systems in state $\scriptstyle\phi$ produces a distribution of real values. By the assumption that observables are finite matrices, this distribution is discrete. The probability of observing λ is non-zero if and only if λ is an eigenvalue of the matrix X and moreover the probability is
$\|\operatorname{E}_X(\lambda) \phi\|^2$
where EX (λ) is the projector corresponding to the eigenvalue λ. The system state immediately after the measurement is
$\|\operatorname{E}_X(\lambda) \phi\|^{-1} \operatorname{E}_X(\lambda) \phi.$
From this, we can show that the correlation of commuting observables X and Y in a pure state $\scriptstyle\psi$ is
$\langle X Y \rangle = \langle X Y \psi \mid \psi \rangle$
We apply this fact in the context of the EPR paradox. The measurements performed by Alice and Bob are spin measurements on electrons. Alice can choose between two detector settings labelled a and a′; these settings correspond to measurement of spin along the z or the x axis. Bob can choose between two detector settings labelled b and b′; these correspond to measurement of spin along the z′ or x′ axis, where the x′ – z′ coordinate system is rotated 135° relative to the x – z coordinate system. The spin observables are represented by the 2 × 2 self-adjoint matrices:
$S_x = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, S_z = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$
These are the Pauli spin matrices normalized so that the corresponding eigenvalues are +1, −1. As is customary, we denote the eigenvectors of Sx by
$\left|+x\right\rang, \quad \left|-x\right\rang.$
Let $\scriptstyle\phi$ be the spin singlet state for a pair of electrons discussed in the EPR paradox. This is a specially constructed state described by the following vector in the tensor product
$\left|\phi\right\rang = \frac{1}{\sqrt{2}} \left(\left|+x\right\rang \otimes \left|-x\right\rang - \left|-x\right\rang \otimes \left|+x\right\rang \right)$
Now let us apply the CHSH formalism to the measurements that can be performed by Alice and Bob.
Illustration of Bell test for spin 1/2 particles. Source produces spin singlet pairs, one particle of each pair is sent to Alice and the other to Bob. Each performs one of the two spin measurements.
$\begin{align} A(a) &= S_z \otimes I\\ A(a') &= S_x \otimes I\\ B(b) &= -\frac{1}{\sqrt{2}} \ I \otimes (S_z + S_x)\\ B(b') &= \frac{1}{\sqrt{2}} \ I \otimes (S_z - S_x) \end{align}$
The operators $\scriptstyle B(b')$, $\scriptstyle B(b)$ correspond to Bob's spin measurements along x′ and z′. Note that the A operators commute with the B operators, so we can apply our calculation for the correlation. In this case, we can show that the CHSH inequality fails. In fact, a straightforward calculation shows that
$\langle A(a) B(b) \rangle = \langle A(a') B(b) \rangle = \langle A(a') B(b') \rangle = \frac{1}{\sqrt{2}}$
and
$\langle A(a) B(b') \rangle = -\frac{1}{\sqrt{2}}$
so that
$\langle A(a) B(b) \rangle + \langle A(a') B(b') \rangle + \langle A(a') B(b) \rangle - \langle A(a) B(b') \rangle = \frac{4}{\sqrt{2}} = 2 \sqrt{2} > 2$
Bell's Theorem: If the quantum mechanical formalism is correct, then the system consisting of a pair of entangled electrons cannot satisfy the principle of local realism. Note that $\scriptstyle 2 \sqrt{2}$ is indeed the upper bound for quantum mechanics called Tsirelson's bound. The operators giving this maximal value are always isomorphic to the Pauli matrices.
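The value 2√2 is straightforward to verify numerically. Here is a short NumPy sketch; the singlet is written in the Sz eigenbasis, which is equivalent to the form given above since the singlet takes the same expression in every basis:

```python
import numpy as np

Sx = np.array([[0., 1.], [1., 0.]])
Sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

# Spin singlet state (|01> - |10>) / sqrt(2)
phi = (np.kron([1., 0.], [0., 1.]) - np.kron([0., 1.], [1., 0.])) / np.sqrt(2)

A_a  = np.kron(Sz, I2)
A_a2 = np.kron(Sx, I2)
B_b  = -np.kron(I2, Sz + Sx) / np.sqrt(2)
B_b2 =  np.kron(I2, Sz - Sx) / np.sqrt(2)

def corr(A, B):
    # Correlation <phi| A B |phi> of two commuting observables in the singlet state.
    return phi @ (A @ B @ phi)

S = corr(A_a, B_b) + corr(A_a2, B_b2) + corr(A_a2, B_b) - corr(A_a, B_b2)
print(S, 2 * np.sqrt(2))   # both print ~2.828, exceeding the classical bound of 2
```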
## Practical experiments testing Bell's theorem
Scheme of a "two-channel" Bell test
The source S produces pairs of "photons", sent in opposite directions. Each photon encounters a two-channel polariser whose orientation (a or b) can be set by the experimenter. Emerging signals from each channel are detected and coincidences of four types (++, −−, +− and −+) counted by the coincidence monitor.
Main article: Bell test experiments
Experimental tests can determine whether the Bell inequalities required by local realism hold up to the empirical evidence.
Bell's inequalities are tested by "coincidence counts" from a Bell test experiment such as the optical one shown in the diagram. Pairs of particles are emitted as a result of a quantum process, analysed with respect to some key property such as polarisation direction, then detected. The setting (orientations) of the analysers are selected by the experimenter.
Bell test experiments to date overwhelmingly violate Bell's inequality. Indeed, a table of Bell test experiments performed prior to 1986 is given in section 4.5 of Redhead, 1987.[19] Of the thirteen experiments listed, only two reached results contradictory to quantum mechanics; moreover, according to the same source, when the experiments were repeated, "the discrepancies with QM could not be reproduced".
Nevertheless, the issue is not conclusively settled. According to Shimony's 2004 Stanford Encyclopedia overview article:[7]
Most of the dozens of experiments performed so far have favored Quantum Mechanics, but not decisively because of the 'detection loopholes' or the 'communication loophole.' The latter has been nearly decisively blocked by a recent experiment and there is a good prospect for blocking the former.
To explore the 'detection loophole', one must distinguish the classes of homogeneous and inhomogeneous Bell inequality.
The standard assumption in Quantum Optics is that "all photons of given frequency, direction and polarization are identical" so that photodetectors treat all incident photons on an equal basis. Such a fair sampling assumption generally goes unacknowledged, yet it effectively limits the range of local theories to those that conceive of the light field as corpuscular. The assumption excludes a large family of local realist theories, in particular, Max Planck's description. We must remember the cautionary words of Albert Einstein[20] shortly before he died: "Nowadays every Tom, Dick and Harry ('jeder Kerl' in German original) thinks he knows what a photon is, but he is mistaken".
Those who maintain the concept of duality, or simply of light being a wave, recognize the possibility or actuality that the emitted atomic light signals have a range of amplitudes and, furthermore, that the amplitudes are modified when the signal passes through analyzing devices such as polarizers and beam splitters. It follows that not all signals have the same detection probability.[21]
### Two classes of Bell inequalities
The fair sampling problem was faced openly in the 1970s. In early designs of their 1973 experiment, Freedman and Clauser[22] used fair sampling in the form of the Clauser–Horne–Shimony–Holt (CHSH[18]) hypothesis. However, shortly afterwards Clauser and Horne[15] made the important distinction between inhomogeneous (IBI) and homogeneous (HBI) Bell inequalities. Testing an IBI requires that we compare certain coincidence rates in two separated detectors with the singles rates of the two detectors. Nobody needed to perform the experiment, because the singles rates of all detectors available in the 1970s were at least ten times the coincidence rates. So, taking into account this low detector efficiency, the QM prediction actually satisfied the IBI. To arrive at an experimental design in which the QM prediction violates IBI we require detectors whose efficiency exceeds 82% for singlet states and which also have very low dark rates and short dead and resolving times. This is well above the 30% achievable,[23] so Shimony's optimism in the Stanford Encyclopedia, quoted in the preceding section, appears over-stated.
### Practical challenges
Main article: Loopholes in Bell test experiments
Because detectors don't detect a large fraction of all photons, Clauser and Horne[15] recognized that testing Bell's inequality requires some extra assumptions. They introduced the No Enhancement Hypothesis (NEH):
A light signal, originating in an atomic cascade for example, has a certain probability of activating a detector. Then, if a polarizer is interposed between the cascade and the detector, the detection probability cannot increase.
Given this assumption, there is a Bell inequality between the coincidence rates with polarizers and coincidence rates without polarizers.
The experiment was performed by Freedman and Clauser,[22] who found that Bell's inequality was violated. So the no-enhancement hypothesis cannot be true in a local hidden variables model. The Freedman–Clauser experiment reveals that local hidden variables imply the new phenomenon of signal enhancement:
In the total set of signals from an atomic cascade there is a subset whose detection probability increases as a result of passing through a linear polarizer.
This is perhaps not surprising, as it is known that adding noise to data can, in the presence of a threshold, help reveal hidden signals (this property is known[24] as stochastic resonance). One cannot conclude that this is the only local-realist alternative to Quantum Optics, but it does show that the word loophole is biased. Moreover, the analysis leads us to recognize that the Bell-inequality experiments, rather than showing a breakdown of realism or locality, are capable of revealing important new phenomena.
## Theoretical challenges
Most advocates of the hidden variables idea believe that experiments have ruled out local hidden variables. They are ready to give up locality, explaining the violation of Bell's inequality by means of a non-local hidden variable theory, in which the particles exchange information about their states. This is the basis of the Bohm interpretation of quantum mechanics, which requires that all particles in the universe be able to instantaneously exchange information with all others. A 2007 experiment ruled out a large class of non-Bohmian non-local hidden variable theories.[25]
If the hidden variables can communicate with each other faster than light, Bell's inequality can easily be violated. Once one particle is measured, it can communicate the necessary correlations to the other particle. Since in relativity the notion of simultaneity is not absolute, this is unattractive. One idea is to replace instantaneous communication with a process that travels backwards in time along the past light cone. This is the idea behind a transactional interpretation of quantum mechanics, which interprets the statistical emergence of a quantum history as a gradual coming to agreement between histories that go both forward and backward in time.[26]
A few advocates of deterministic models have not given up on local hidden variables. For example, Gerard 't Hooft has argued that the superdeterminism loophole cannot be dismissed.[27][28]
The quantum mechanical wavefunction can also provide a local realistic description, if the wavefunction values are interpreted as the fundamental quantities that describe reality. Such an approach is called a many-worlds interpretation of quantum mechanics. In this view, two distant observers both split into superpositions when measuring a spin. The Bell inequality violations are no longer counterintuitive, because it is not clear which copy of the observer B observer A will see when going to compare notes. If reality includes all the different outcomes, locality in physical space (not outcome space) places no restrictions on how the split observers can meet up.
This implies that there is a subtle assumption in the argument that realism is incompatible with quantum mechanics and locality. The assumption, in its weakest form, is called counterfactual definiteness. This states that if the results of an experiment are always observed to be definite, there is a quantity that determines what the outcome would have been even if you don't do the experiment.
Many worlds interpretations are not only counterfactually indefinite, they are factually indefinite. The results of all experiments, even ones that have been performed, are not uniquely determined.
E. T. Jaynes[29] pointed out two hidden assumptions in Bell Inequality that could limit its generality. According to him:
1. Bell interpreted conditional probability P(X|Y) as a causal inference, i.e. Y exerted a causal influence on X in reality. However, P(X|Y) actually only means logical inference (deduction). Causes cannot travel faster than light or backward in time, but deduction can.
2. Bell's inequality does not apply to some possible hidden variable theories. It only applies to a certain class of local hidden variable theories. In fact, it might have just missed the kind of hidden variable theories that Einstein was most interested in.
## Final remarks
The violations of Bell's inequalities, due to quantum entanglement, provide the definitive demonstration of something that was already strongly suspected: that quantum physics cannot be represented by any version of the classical picture of physics.[30] Some earlier elements that had seemed incompatible with classical pictures included apparent complementarity and (hypothesized) wavefunction collapse. Complementarity is now seen not as an independent ingredient of the quantum picture but rather as a direct consequence of the quantum decoherence expected from the quantum formalism itself. The possibility of wavefunction collapse is now seen as one possible problematic ingredient of some interpretations, rather than as an essential part of quantum mechanics. The Bell violations show that no resolution of such issues can avoid the ultimate strangeness of quantum behavior.[31]
The EPR paper "pinpointed" the unusual properties of the entangled states, e.g. the above-mentioned singlet state, which is the foundation for present-day applications of quantum physics, such as quantum cryptography; one application involves the measurement of quantum entanglement as a physical source of bits for Rabin's oblivious transfer protocol. This strange non-locality was originally supposed to be a Reductio ad absurdum, because the standard interpretation could easily do away with action-at-a-distance by simply assigning to each particle definite spin-states. Bell's theorem showed that the "entangledness" prediction of quantum mechanics has a degree of non-locality that cannot be explained away by any local theory.
In well-defined Bell experiments (see the paragraph on "test experiments") one can now falsify either quantum mechanics or Einstein's quasi-classical assumptions: currently many experiments of this kind have been performed, and the experimental results support quantum mechanics, though some believe that detectors give a biased sample of photons, so that until nearly every photon pair generated is observed there will be loopholes.
What is powerful about Bell's theorem is that it doesn't refer to any particular physical theory. What makes Bell's theorem unique and powerful is that it shows that nature violates the most general assumptions behind classical pictures, not just details of some particular models. No combination of local deterministic and local random variables can reproduce the phenomena predicted by quantum mechanics and repeatedly observed in experiments.[32]
## Notes
1. ^ a b C.B. Parker (1994). McGraw Hill Encyclopaedia of Physics (2nd ed.). McGraw Hill. p. 542. ISBN 0-07-051400-3.
2. N. David Mermin, "Is the moon there when nobody looks? Reality and the quantum theory", Physics Today, April 1985, pp. 38–47. PDF.
3. Henry Stapp, Nuovo Cimento 40B, 191 (1977). PDF.
4. C.B. Parker (1994). McGraw Hill Encyclopaedia of Physics (2nd ed.). McGraw Hill. p. 541. ISBN 0-07-051400-3.
5. Bell, John (1964). "On the Einstein Podolsky Rosen Paradox". Physics 1 (3): 195–200.
6. Bohm, David Quantum Theory. Prentice−Hall, 1951.
7. ^ a b c d Article on Bell's Theorem by Abner Shimony in the Stanford Encyclopedia of Philosophy, (2004).
8. Leggett, Anthony (2003). "Nonlocal Hidden-Variable Theories and Quantum Mechanics: An Incompatibility Theorem". Foundations of Physics 33 (10): 1469–1493. doi:10.1023/A:1026096313729.
9. Griffiths, David J. (1998). Introduction to Quantum Mechanics (2nd ed.). Pearson/Prentice Hall. p. 423.
10. Merzbacher, Eugene (2005). Quantum Mechanics (3rd ed.). John Wiley & Sons. pp. 18, 362.
11. Stapp, 1975
12. Bell, JS, "On the impossible pilot wave." Foundations of Physics (1982) 12:989–99. Reprinted in Speakable and unspeakable in quantum mechanics: collected papers on quantum philosophy. CUP, 2004, p. 160.
13. Bell, JS, "On the impossible pilot wave." Foundations of Physics (1982) 12:989–99. Reprinted in Speakable and unspeakable in quantum mechanics: collected papers on quantum philosophy. CUP, 2004, p. 161.
14. Einstein, A.; Podolsky, B.; Rosen, N. (1935). "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?". Physical Review 47 (10): 777. Bibcode:1935PhRv...47..777E. doi:10.1103/PhysRev.47.777.
15. ^ a b c d Clauser, John F. (1974). "Experimental consequences of objective local theories". Physical Review D 10 (2): 526. Bibcode:1974PhRvD..10..526C. doi:10.1103/PhysRevD.10.526.
16. Eberhard, P. H. (1977). "Bell's theorem without hidden variables". Il Nuovo Cimento B 38: 75–80. Bibcode:1977NCimB..38...75E. doi:10.1007/BF02726212.
17. Bell, JS, Speakable and unspeakable in quantum mechanics: Introduction remarks at Naples–Amalfi meeting., 1984. Reprinted in Speakable and unspeakable in quantum mechanics: collected papers on quantum philosophy. CUP, 2004, p. 29.
18. ^ a b c Clauser, John; Horne, Michael; Shimony, Abner; Holt, Richard (1969). "Proposed Experiment to Test Local Hidden-Variable Theories". Physical Review Letters 23 (15): 880. Bibcode:1969PhRvL..23..880C. doi:10.1103/PhysRevLett.23.880.
19. M. Redhead, Incompleteness, Nonlocality and Realism, Clarendon Press (1987)
20. A. Einstein in Correspondance Einstein–Besso, p.265 (Herman, Paris, 1979)
21. Marshall and Santos, Semiclassical optics as an alternative to nonlocality Recent Research Developments in Optics 2:683–717 (2002) ISBN 81-7736-140-6
22. ^ a b Freedman, Stuart J.; Clauser, John F. (1972). "Experimental Test of Local Hidden-Variable Theories". Physical Review Letters 28 (14): 938. Bibcode:1972PhRvL..28..938F. doi:10.1103/PhysRevLett.28.938.
23. Giorgio Brida; Marco Genovese; Marco Gramegna; Fabrizio Piacentini; Enrico Predazzi; Ivano Ruo-Berchera (2007). "Experimental tests of hidden variable theories from dBB to Stochastic Electrodynamics". Journal of Physics: Conference Series 67 (12047): 012047. arXiv:quant-ph/0612075. Bibcode:2007JPhCS..67a2047G. doi:10.1088/1742-6596/67/1/012047.
24. Gammaitoni, Luca; Hänggi, Peter; Jung, Peter; Marchesoni, Fabio (1998). "Stochastic resonance". Reviews of Modern Physics 70: 223. Bibcode:1998RvMP...70..223G. doi:10.1103/RevModPhys.70.223.
25. Gröblacher, Simon; Paterek, Tomasz; Kaltenbaek, Rainer; Brukner, Časlav; Żukowski, Marek; Aspelmeyer, Markus; Zeilinger, Anton (2007). "An experimental test of non-local realism". Nature 446 (7138): 871–5. arXiv:0704.2529. Bibcode:2007Natur.446..871G. doi:10.1038/nature05677. PMID 17443179.
26. Cramer, John (1986). "The transactional interpretation of quantum mechanics". Reviews of Modern Physics 58 (3): 647. Bibcode:1986RvMP...58..647C. doi:10.1103/RevModPhys.58.647.
27. Gerard 't Hooft (2009). "Entangled quantum states in a local deterministic theory". arXiv:0908.3408 [quant-ph].
28. Gerard 't Hooft (2007). "The Free-Will Postulate in Quantum Mechanics". arXiv:quant-ph/0701097 [quant-ph].
29. Jaynes, E. T. (1989). "Clearing up Mysteries—The Original Goal". Maximum Entropy and Bayesian Methods: 12.
30. Roger Penrose (2007). The Road to Reality. Vintage books. p. 583. ISBN 0-679-77631-1.
31. E. Abers (2004). Quantum Mechanics. Addison Wesley. pp. 193–195. ISBN 9780131461000.
32. R.G. Lerner, G.L. Trigg (1991). Encyclopaedia of Physics (2nd ed.). VHC publishers. p. 495. ISBN 0-89573-752-3.
## References
• A. Aspect et al., Experimental Tests of Realistic Local Theories via Bell's Theorem, Phys. Rev. Lett. 47, 460 (1981)
• A. Aspect et al., Experimental Realization of Einstein–Podolsky–Rosen–Bohm Gedankenexperiment: A New Violation of Bell's Inequalities, Phys. Rev. Lett. 49, 91 (1982).
• A. Aspect et al., Experimental Test of Bell's Inequalities Using Time-Varying Analyzers, Phys. Rev. Lett. 49, 1804 (1982).
• A. Aspect and P. Grangier, About resonant scattering and other hypothetical effects in the Orsay atomic-cascade experiment tests of Bell inequalities: a discussion and some new experimental data, Lettere al Nuovo Cimento 43, 345 (1985)
• B. D'Espagnat, The Quantum Theory and Reality, Scientific American, 241, 158 (1979)
• J. S. Bell, On the problem of hidden variables in quantum mechanics, Rev. Mod. Phys. 38, 447 (1966)
• J. S. Bell, On the Einstein Podolsky Rosen Paradox, Physics 1, 3, 195–200 (1964)
• J. S. Bell, Introduction to the hidden variable question, Proceedings of the International School of Physics 'Enrico Fermi', Course IL, Foundations of Quantum Mechanics (1971) 171–81
• J. S. Bell, Bertlmann’s socks and the nature of reality, Journal de Physique, Colloque C2, suppl. au numero 3, Tome 42 (1981) pp C2 41–61
• J. S. Bell, Speakable and Unspeakable in Quantum Mechanics (Cambridge University Press 1987) [A collection of Bell's papers, including all of the above.]
• J. F. Clauser and A. Shimony, Bell's theorem: experimental tests and implications, Reports on Progress in Physics 41, 1881 (1978)
• J. F. Clauser and M. A. Horne, Phys. Rev D 10, 526–535 (1974)
• E. S. Fry, T. Walther and S. Li, Proposal for a loophole-free test of the Bell inequalities, Phys. Rev. A 52, 4381 (1995)
• E. S. Fry, and T. Walther, Atom based tests of the Bell Inequalities — the legacy of John Bell continues, pp 103–117 of Quantum [Un]speakables, R.A. Bertlmann and A. Zeilinger (eds.) (Springer, Berlin-Heidelberg-New York, 2002)
• R. B. Griffiths, Consistent Quantum Theory', Cambridge University Press (2002).
• L. Hardy, Nonlocality for 2 particles without inequalities for almost all entangled states. Physical Review Letters 71 (11) 1665–1668 (1993)
• M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press (2000)
• P. Pearle, Hidden-Variable Example Based upon Data Rejection, Physical Review D 2, 1418–25 (1970)
• A. Peres, Quantum Theory: Concepts and Methods, Kluwer, Dordrecht, 1993.
• P. Pluch, Theory of Quantum Probability, PhD Thesis, University of Klagenfurt, 2006.
• B. C. van Frassen, Quantum Mechanics, Clarendon Press, 1991.
• M.A. Rowe, D. Kielpinski, V. Meyer, C.A. Sackett, W.M. Itano, C. Monroe, and D.J. Wineland, Experimental violation of Bell's inequalities with efficient detection,(Nature, 409, 791–794, 2001).
• S. Sulcs, The Nature of Light and Twentieth Century Experimental Physics, Foundations of Science 8, 365–391 (2003)
• S. Gröblacher et al., An experimental test of non-local realism,(Nature, 446, 871–875, 2007).
• D. N. Matsukevich, P. Maunz, D. L. Moehring, S. Olmschenk, and C. Monroe, Bell Inequality Violation with Two Remote Atomic Qubits, Phys. Rev. Lett. 100, 150404 (2008).
• The comic Dilbert, by Scott Adams, refers to Bell's Theorem in the 1992-09-21 and 1992-09-22 strips.
## Further reading
The following are intended for general audiences.
• Amir D. Aczel, Entanglement: The greatest mystery in physics (Four Walls Eight Windows, New York, 2001).
• A. Afriat and F. Selleri, The Einstein, Podolsky and Rosen Paradox (Plenum Press, New York and London, 1999)
• J. Baggott, The Meaning of Quantum Theory (Oxford University Press, 1992)
• N. David Mermin, "Is the moon there when nobody looks? Reality and the quantum theory", in Physics Today, April 1985, pp. 38–47.
• Louisa Gilder, The Age of Entanglement: When Quantum Physics Was Reborn (New York: Alfred A. Knopf, 2008)
• Brian Greene, The Fabric of the Cosmos (Vintage, 2004, ISBN 0-375-72720-5)
• Nick Herbert, Quantum Reality: Beyond the New Physics (Anchor, 1987, ISBN 0-385-23569-0)
• D. Wick, The infamous boundary: seven decades of controversy in quantum physics (Birkhauser, Boston 1995)
• R. Anton Wilson, Prometheus Rising (New Falcon Publications, 1997, ISBN 1-56184-056-4)
• Gary Zukav "The Dancing Wu Li Masters" (Perennial Classics, 2001, ISBN 0-06-095968-1)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 49, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8981660604476929, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/calculus/190952-these-derivatives-correct.html
|
# Thread:
1. ## Are these derivatives correct?
Find the derivative of $(xy+1)^3 = x - y^2 + 8$ using implicit differentiation. Do not simplify.
$(xy+1)^3 = x - y^2 + 8$
$3(xy+1)^2 \cdot (y + x \cdot \frac{dy}{dx}) = 1 - 2y \cdot \frac{dy}{dx}$
$y + x \cdot \frac{dy}{dx} + \frac{2y \cdot \frac{dy}{dx}}{3(xy+1)^2} = \frac {1}{3(xy+1)^2}$
$x \cdot \frac{dy}{dx} + \frac{2y \cdot \frac{dy}{dx}}{3(xy+1)^2} = \frac {1}{3(xy+1)^2} -y$
$\frac{dy}{dx}[x + \frac{2y}{3(xy+1)^2}] = \frac {1}{3(xy+1)^2} -y$
$\frac{dy}{dx} = \frac {\frac {1}{3(xy+1)^2} -y}{x + \frac{2y}{3(xy+1)^2}}$
The question stated not to simplify, so this would be my final answer.
Find the derivative of $\sin(xy) + x = e^y$ using implicit differentiation. Do not simplify.
$\sin(xy) + x = e^y$
$\cos(xy) \cdot (y + x \cdot \frac{dy}{dx}) + 1 = e^y \cdot \frac{dy}{dx}$
$\cos(xy) \cdot (y + x \cdot \frac{dy}{dx}) - e^y \cdot \frac{dy}{dx} = -1$
$x \cdot \frac{dy}{dx} - \frac{e^y \cdot \frac{dy}{dx}}{\cos(xy)} = \frac{-1}{\cos(xy)} - y$
$\frac{dy}{dx}[x - \frac{e^y}{\cos(xy)}] = \frac{-1}{\cos(xy)} - y$
$\frac{dy}{dx} = \frac{\frac{-1}{\cos(xy)} - y}{[x - \frac{e^y}{\cos(xy)}]}$
I opted to not use logarithmic differentiation for some reason. Is this solution still valid?
Thanks a lot!
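(For anyone who wants a machine check of answers like these, here is a minimal SymPy sketch, assuming SymPy is available, that solves each equation for dy/dx by implicit differentiation. After clearing the compound fractions, the results it prints agree with the answers above.)

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)
dy = sp.Derivative(y, x)

def implicit_dydx(lhs, rhs):
    # Differentiate lhs - rhs = 0 with respect to x and solve for dy/dx.
    return sp.solve(sp.diff(lhs - rhs, x), dy)[0]

d1 = implicit_dydx((x*y + 1)**3, x - y**2 + 8)
d2 = implicit_dydx(sp.sin(x*y) + x, sp.exp(y))

print(sp.simplify(d1))
print(sp.simplify(d2))
```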
2. ## Re: Are these derivatives correct?
They look fine to me.
I don't think that logarithmic differentiation would have helped .
3. ## Re: Are these derivatives correct?
Thanks for the response!
The other people in my class that I talked to said that they used logarithmic differentiation; I haven't tried it with this one yet.
Though, as long as they're correct, that's all that matters at the moment.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9612994194030762, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/61528?sort=votes
|
Is the direct image of a constant sheaf a constant sheaf?
Is the direct image of a constant sheaf a constant sheaf? I'm not an expert on sheaf theory and can't find this anywhere
-
4 Answers
Notation: $f:X \to Y$ is the map we're pushing forward along, and $F$ is our sheaf on $X$. In general the stalks of $f_*F$ at different points will not be isomorphic. For instance if $f$ misses the point $y \in Y$ and your space is sufficiently separated then the stalk of $f_*F$ at $y$ will be 0 while it will be nonzero for points in the image.
An extreme case is when the map has image a point. Then you get a skyscraper sheaf, which is very far from constant on most spaces and most points (Note: if you're hitting the generic point of $Y$ then the direct image will in fact be constant).
Edit: Another extreme case is when $X$ is a large discrete space. Then one can get direct image sheaves where no stalk is isomorphic to any other stalk. For instance this happens if all the fibers of $f$ have different cardinalities. I think you would usually need the axiom of choice to even define such a map.
-
Thanks for your answer K. J. – Mario Carrasco Jul 26 2011 at 14:06
Not in general. For example take the double cover of the unit circle $S^1$ by the connected helix, say $H$, and call the projection $\pi:H\rightarrow S^1$. Then the sections of the direct image of the constant sheaf $\mathcal A$ over any connected open set $U$ of $S^1$ will give the direct product of two copies of the group $A$, while the constant sheaf over $U$ would give only one copy of $A$ as its group of sections.
-
The point is that for U=S^1 you get only one copy of A but for, say connected nonempty U you get two copies of A. – Jan Weidner Apr 13 2011 at 11:16
Thanks so much Jan – Mario Carrasco Jul 26 2011 at 14:06
As HNuer's example shows, the pushforward of a constant sheaf isn't constant anymore. However, one also doesn't get arbitrary sheaves this way. The direct image is still a constructible sheaf, which means that your space is a finite disjoint union of locally closed pieces on which the sheaf is locally constant. For example, the sheaf in HNuer's example is locally constant (but not constant!) on the whole space. The pushforward of a vector space on a point to some space in the example of K. J. Moi would be constant on the point and its complement.
Edit: Of course one needs assumptions here. For example everything works for pushforward along morphisms of complex algebraic varieties (sheaves considered in analytic topology!).
-
Great, thanks again – Mario Carrasco Jul 26 2011 at 14:06
When the map is well behaved, say a locally trivial fibration over a base $B$ with fiber $F$, then the direct image of a constant sheaf will be locally constant (all of the fibers of this sheaf will be equal to the cohomology of the constant sheaf on $F$), but won't be constant in general. The obstruction is the monodromy, the defect of compatibility between all local trivializations along a loop in the base space.
-
What is a constant if not the sheaf of locally constant functions? – bavajee Apr 13 2011 at 16:52
Sorry, I meant to write: What is a constant sheaf, if not a sheaf of locally constant functions? – bavajee Apr 13 2011 at 16:53
@bavajee: "Locally constant sheaf" is not the same as "sheaf of locally constant functions", in the same way that a covering space is a local homeomorphism but not a homeomorphism on connected components. Actually, the second is an example of the first, if you construct the sheaf of maps from the base to the covering space. This is, basically, what ACL is saying in his answer. – Ryan Reich Apr 13 2011 at 17:32
2
@Ryan: I see. Constant sheaf = sheaf associated to constant presheaf. Locally constant sheaf = sheaf that is locally isomorphic to a constant sheaf. Thanks. – bavajee Apr 13 2011 at 20:02
Thank you guys for your answers – Mario Carrasco Jul 26 2011 at 14:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9230603575706482, "perplexity_flag": "head"}
|
http://sagemath.org/doc/constructions/linear_algebra.html
|
# Linear algebra¶
## Vector spaces¶
The VectorSpace command creates a vector space class, from which one can create a subspace. Note the basis computed by Sage is “row reduced”.
```sage: V = VectorSpace(GF(2),8)
sage: S = V.subspace([V([1,1,0,0,0,0,0,0]),V([1,0,0,0,0,1,1,0])])
sage: S.basis()
[
(1, 0, 0, 0, 0, 1, 1, 0),
(0, 1, 0, 0, 0, 1, 1, 0)
]
sage: S.dimension()
2
```
## Matrix powers¶
How do I compute matrix powers in Sage? The syntax is illustrated by the example below.
```sage: R = IntegerModRing(51)
sage: M = MatrixSpace(R,3,3)
sage: A = M([1,2,3, 4,5,6, 7,8,9])
sage: A^1000*A^1007
[ 3 3 3]
[18 0 33]
[33 48 12]
sage: A^2007
[ 3 3 3]
[18 0 33]
[33 48 12]
```
## Kernels¶
The kernel is computed by applying the kernel method to the matrix object. The following examples illustrate the syntax.
```sage: M = MatrixSpace(IntegerRing(),4,2)(range(8))
sage: M.kernel()
Free module of degree 4 and rank 2 over Integer Ring
Echelon basis matrix:
[ 1 0 -3 2]
[ 0 1 -2 1]
```
A kernel of dimension one over $$\QQ$$:
```sage: A = MatrixSpace(RationalField(),3)(range(9))
sage: A.kernel()
Vector space of degree 3 and dimension 1 over Rational Field
Basis matrix:
[ 1 -2 1]
```
A trivial kernel:
```sage: A = MatrixSpace(RationalField(),2)([1,2,3,4])
sage: A.kernel()
Vector space of degree 2 and dimension 0 over Rational Field
Basis matrix:
[]
sage: M = MatrixSpace(RationalField(),0,2)(0)
sage: M
[]
sage: M.kernel()
Vector space of degree 0 and dimension 0 over Rational Field
Basis matrix:
[]
sage: M = MatrixSpace(RationalField(),2,0)(0)
sage: M.kernel()
Vector space of degree 2 and dimension 2 over Rational Field
Basis matrix:
[1 0]
[0 1]
```
Kernel of a zero matrix:
```sage: A = MatrixSpace(RationalField(),2)(0)
sage: A.kernel()
Vector space of degree 2 and dimension 2 over Rational Field
Basis matrix:
[1 0]
[0 1]
```
Kernel of a non-square matrix:
```sage: A = MatrixSpace(RationalField(),3,2)(range(6))
sage: A.kernel()
Vector space of degree 3 and dimension 1 over Rational Field
Basis matrix:
[ 1 -2 1]
```
The 2-dimensional kernel of a matrix over a cyclotomic field:
```sage: K = CyclotomicField(12); a = K.gen()
sage: M = MatrixSpace(K,4,2)([1,-1, 0,-2, 0,-a^2-1, 0,a^2-1])
sage: M
[ 1 -1]
[ 0 -2]
[ 0 -zeta12^2 - 1]
[ 0 zeta12^2 - 1]
sage: M.kernel()
Vector space of degree 4 and dimension 2 over Cyclotomic Field of order 12
and degree 4
Basis matrix:
[ 0 1 0 -2*zeta12^2]
[ 0 0 1 -2*zeta12^2 + 1]
```
A nontrivial kernel over a complicated base field.
```sage: K = FractionField(PolynomialRing(RationalField(),2,'x'))
sage: M = MatrixSpace(K, 2)([[K.gen(1),K.gen(0)], [K.gen(1), K.gen(0)]])
sage: M
[x1 x0]
[x1 x0]
sage: M.kernel()
Vector space of degree 2 and dimension 1 over Fraction Field of Multivariate
Polynomial Ring in x0, x1 over Rational Field
Basis matrix:
[ 1 -1]
```
Other methods for integer matrices are elementary_divisors, smith_form (for the Smith normal form), echelon_form for the Hermite normal form, frobenius for the Frobenius normal form (rational canonical form).
There are many methods for matrices over a field such as $$\QQ$$ or a finite field: row_span, nullity, transpose, swap_rows, matrix_from_columns, matrix_from_rows, among many others.
See the file matrix.py for further details.
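As a quick illustration of a few of these methods (a small sketch; exact output formatting may vary slightly between Sage versions):

```sage: A = matrix(ZZ, [[1,2], [3,4]])
sage: A.elementary_divisors()
[1, 2]
sage: A.echelon_form()   # Hermite normal form over the integers
[1 0]
[0 2]
sage: D, U, V = A.smith_form()
sage: U*A*V == D
True
```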
## Eigenvectors and eigenvalues¶
How do you compute eigenvalues and eigenvectors using Sage?
Sage has a full range of functions for computing eigenvalues and both left and right eigenvectors and eigenspaces. If our matrix is $$A$$, then the eigenmatrix_right (resp. eigenmatrix_left) command also gives matrices $$D$$ and $$P$$ such that $$AP=PD$$ (resp. $$PA=DP$$.)
```sage: A = matrix(QQ, [[1,1,0],[0,2,0],[0,0,3]])
sage: A
[1 1 0]
[0 2 0]
[0 0 3]
sage: A.eigenvalues()
[3, 2, 1]
sage: A.eigenvectors_right()
[(3, [
(0, 0, 1)
], 1), (2, [
(1, 1, 0)
], 1), (1, [
(1, 0, 0)
], 1)]
sage: A.eigenspaces_right()
[
(3, Vector space of degree 3 and dimension 1 over Rational Field
User basis matrix:
[0 0 1]),
(2, Vector space of degree 3 and dimension 1 over Rational Field
User basis matrix:
[1 1 0]),
(1, Vector space of degree 3 and dimension 1 over Rational Field
User basis matrix:
[1 0 0])
]
sage: D, P = A.eigenmatrix_right()
sage: D
[3 0 0]
[0 2 0]
[0 0 1]
sage: P
[0 1 1]
[0 1 0]
[1 0 0]
sage: A*P == P*D
True
```
For eigenvalues outside the fraction field of the base ring of the matrix, you can choose to have all the eigenspaces output when the algebraic closure of the field is implemented, such as the algebraic numbers, QQbar. Or you may request just a single eigenspace for each irreducible factor of the characteristic polynomial, since the others may be formed through Galois conjugation. The eigenvalues of the matrix below are $$\pm\sqrt{-3}$$ and we exhibit each possible output.
Also, currently Sage does not implement multiprecision numerical eigenvalues and eigenvectors, so calling the eigen functions on a matrix from CC or RR will probably give inaccurate and nonsensical results (a warning is also printed). Eigenvalues and eigenvectors of matrices with floating point entries (over CDF and RDF) can be obtained with the “eigenmatrix” commands.
```sage: MS = MatrixSpace(QQ, 2, 2)
sage: A = MS([1,-4,1, -1])
sage: A.eigenspaces_left(format='all')
[
(-1.732050807568878?*I, Vector space of degree 2 and dimension 1 over Algebraic Field
User basis matrix:
[ 1 -1 - 1.732050807568878?*I]),
(1.732050807568878?*I, Vector space of degree 2 and dimension 1 over Algebraic Field
User basis matrix:
[ 1 -1 + 1.732050807568878?*I])
]
sage: A.eigenspaces_left(format='galois')
[
(a0, Vector space of degree 2 and dimension 1 over Number Field in a0 with defining polynomial x^2 + 3
User basis matrix:
[ 1 a0 - 1])
]
```
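For floating-point matrices over RDF or CDF, the "eigenmatrix" commands mentioned above give numerically reliable results. A brief sketch, checking the factorization numerically rather than printing the implementation-dependent eigenvector matrix:

```sage: A = matrix(RDF, [[1,2], [3,4]])
sage: D, P = A.eigenmatrix_right()
sage: max(abs(t) for t in (A*P - P*D).list()) < 1e-10
True
```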
Another approach is to use the interface with Maxima:
```sage: A = maxima("matrix ([1, -4], [1, -1])")
sage: eig = A.eigenvectors()
sage: eig
[[[-sqrt(3)*%i,sqrt(3)*%i],[1,1]],[[[1,(sqrt(3)*%i+1)/4]],[[1,-(sqrt(3)*%i-1)/4]]]]
```
This tells us that $$\vec{v}_1 = [1,(\sqrt{3}i + 1)/4]$$ is an eigenvector of $$\lambda_1 = - \sqrt{3}i$$ (which occurs with multiplicity one) and $$\vec{v}_2 = [1,(-\sqrt{3}i + 1)/4]$$ is an eigenvector of $$\lambda_2 = \sqrt{3}i$$ (which also occurs with multiplicity one).
Here are two more examples:
```sage: A = maxima("matrix ([11, 0, 0], [1, 11, 0], [1, 3, 2])")
sage: A.eigenvectors()
[[[2,11],[1,2]],[[[0,0,1]],[[0,1,1/3]]]]
sage: A = maxima("matrix ([-1, 0, 0], [1, -1, 0], [1, 3, 2])")
sage: A.eigenvectors()
[[[-1,2],[2,1]],[[[0,1,-1]],[[0,0,1]]]]
```
Warning: Notice how the ordering of the output is reversed, though the matrices are almost the same.
Finally, you can use Sage’s GAP interface as well to compute “rational” eigenvalues and eigenvectors:
```sage: print gap.eval("A := [[1,2,3],[4,5,6],[7,8,9]]")
[ [ 1, 2, 3 ], [ 4, 5, 6 ], [ 7, 8, 9 ] ]
sage: print gap.eval("v := Eigenvectors( Rationals,A)")
[ [ 1, -2, 1 ] ]
sage: print gap.eval("lambda := Eigenvalues( Rationals,A)")
[ 0 ]
```
## Row reduction
The row reduced echelon form of a matrix is computed as in the following example.
```sage: M = MatrixSpace(RationalField(),2,3)
sage: A = M([1,2,3, 4,5,6])
sage: A
[1 2 3]
[4 5 6]
sage: A.parent()
Full MatrixSpace of 2 by 3 dense matrices over Rational Field
sage: A[0,2] = 389
sage: A
[ 1 2 389]
[ 4 5 6]
sage: A.echelon_form()
[ 1 0 -1933/3]
[ 0 1 1550/3]
```
## Characteristic polynomial
The characteristic polynomial is a Sage method for square matrices.
First a matrix over $$\ZZ$$:
```sage: A = MatrixSpace(IntegerRing(),2)( [[1,2], [3,4]] )
sage: f = A.charpoly()
sage: f
x^2 - 5*x - 2
sage: f.parent()
Univariate Polynomial Ring in x over Integer Ring
```
We compute the characteristic polynomial of a matrix over the polynomial ring $$\ZZ[a]$$:
```sage: R = PolynomialRing(IntegerRing(),'a'); a = R.gen()
sage: M = MatrixSpace(R,2)([[a,1], [a,a+1]])
sage: M
[ a 1]
[ a a + 1]
sage: f = M.charpoly()
sage: f
x^2 + (-2*a - 1)*x + a^2
sage: f.parent()
Univariate Polynomial Ring in x over Univariate Polynomial Ring in a over
Integer Ring
sage: M.trace()
2*a + 1
sage: M.determinant()
a^2
```
We compute the characteristic polynomial of a matrix over the multi-variate polynomial ring $$\ZZ[u,v]$$:
```sage: R.<u,v> = PolynomialRing(ZZ,2)
sage: A = MatrixSpace(R,2)([u,v,u^2,v^2])
sage: f = A.charpoly(); f
x^2 + (-v^2 - u)*x - u^2*v + u*v^2
```
It’s a little difficult to distinguish the variables. To fix this, we might want to rename the indeterminate “Z”, which we can easily do as follows:
```sage: f = A.charpoly('Z'); f
Z^2 + (-v^2 - u)*Z - u^2*v + u*v^2
```
## Solving systems of linear equations
Using maxima, you can easily solve linear equations:
```sage: var('a,b,c')
(a, b, c)
sage: eqn = [a+b*c==1, b-a*c==0, a+b==5]
sage: s = solve(eqn, a,b,c); s
[[a == (25*I*sqrt(79) + 25)/(6*I*sqrt(79) - 34),
b == (5*I*sqrt(79) + 5)/(I*sqrt(79) + 11),
c == 1/10*I*sqrt(79) + 1/10],
[a == (25*I*sqrt(79) - 25)/(6*I*sqrt(79) + 34),
b == (5*I*sqrt(79) - 5)/(I*sqrt(79) - 11),
c == -1/10*I*sqrt(79) + 1/10]]
```
You can even nicely typeset the solution in LaTeX:
```sage.: print latex(s)
...```
To have the above appear onscreen via xdvi, type view(s).
You can also solve linear equations symbolically using the solve command:
```sage: var('x,y,z,a')
(x, y, z, a)
sage: eqns = [x + z == y, 2*a*x - y == 2*a^2, y - 2*z == 2]
sage: solve(eqns, x, y, z)
[[x == a + 1, y == 2*a, z == a - 1]]
```
Here is a numerical Numpy example:
```sage: from numpy import arange, eye, linalg
sage: A = eye(10) ## the 10x10 identity matrix
sage: b = arange(1,11)
sage: x = linalg.solve(A,b)
```
Another way to solve a system numerically is to use Sage’s octave interface:
```sage: M33 = MatrixSpace(QQ,3,3)
sage: A = M33([1,2,3,4,5,6,7,8,0])
sage: V3 = VectorSpace(QQ,3)
sage: b = V3([1,2,3])
sage: octave.solve_linear_system(A,b) # optional - octave
[-0.33333299999999999, 0.66666700000000001, 0]
```
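The same system can also be solved exactly within Sage itself, staying in the Rational Field; a minimal sketch, assuming the standard solve_right method of Sage matrices:
```sage: A.solve_right(b)  # exact solution: (-1/3, 2/3, 0)
```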
http://math.stackexchange.com/questions/185276/total-function-and-termination
# Total function and termination
If we have a total function, is it by default a terminating function? How can we prove termination for such a total function?
-
Not by default, but by definition. – MJD Aug 22 '12 at 2:08
What is the definition of a terminating function? – William Aug 22 '12 at 2:17
Can you provide the definitions of "total function" and "terminating function" you are using here? – Rahul Narain Aug 22 '12 at 22:45
There are total functions that are not computable, but in this case one only has (at best) a non-effective description of the function, and saying it is terminating does not really make sense. – Marc van Leeuwen Aug 28 '12 at 14:31
## 2 Answers
Since the definitions of "total function" say that it is defined for all possible inputs, it seems that, yes, it must terminate (otherwise it would not be defined for any input that it does not terminate for).
-
Could you please give an example of this ? – Abufouda Aug 22 '12 at 2:06
Consider the function $f$ which adds 1 to its argument, unless the argument is 0, in which case it goes into an infinite loop and never yields a result. This is not a terminating function, since it will fail to terminate when given the argument 0. It is also not a total function, since it fails to yield a result when given the argument 0. – MJD Aug 22 '12 at 2:10
To echo @William: what does talk of 'termination' mean here?
It suggests (at least to this reader) the idea of a terminating computation. But even if we restrict ourselves to total functions that map numbers to numbers, not all such functions are computable. So on one very natural reading, not every total function $f \colon \mathbb{N} \to \mathbb{N}$ can be said to be terminating (if that means that there is a way of computing $f$ which terminates for every input).
-
http://planetmath.org/CartesianClosedCategory
# Cartesian closed category
A category $\mathcal{C}$ with finite products is said to be Cartesian closed if each of the following functors has a right adjoint
1. $\textbf{0}:\mathcal{C}\to\textbf{1}$, where 1 is the trivial category with one object $0$, and $\textbf{0}(A)=0$
2. the diagonal functor $\delta:\mathcal{C}\to\mathcal{C}\times\mathcal{C}$, where $\delta(A)=(A,A)$, and
3. for any object $B$, the functor $(-\times B):\mathcal{C}\to\mathcal{C}$, where $(-\times B)(A)=A\times B$, the product of $A$ and $B$.
Furthermore, we require that the corresponding right adjoints for these functors be
1. any functor $\textbf{1}\to\mathcal{C}$, where $0$ is mapped to an object $T$ in $\mathcal{C}$. $T$ is necessarily a terminal object of $\mathcal{C}$.
2. the product (bifunctor) $(-\times-):\mathcal{C}\times\mathcal{C}\to\mathcal{C}$ given by $(-\times-)(A,B)\mapsto A\times B$, the product of $A$ and $B$.
3. for any object $B$, the exponential functor $(-^{B}):\mathcal{C}\to\mathcal{C}$ given by $(-^{B})(A)=A^{B}$, the exponential object from $B$ to $A$.
In other words, a Cartesian closed category $\mathcal{C}$ is a category that has finite products, a terminal object, and exponentials. It can be shown that a Cartesian closed category is the same as a finitely complete category having exponentials.
Examples of Cartesian closed categories are the category of sets Set (terminal object: any singleton; product: any Cartesian product of a finite number of sets; exponential object: the set of functions from one set to another), the category of small categories Cat (terminal object: any trivial category; product: any finite product of categories; exponential object: any functor category), and every elementary topos.
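For instance, in Set the adjunction in the third condition is the familiar currying bijection $\operatorname{Hom}(A\times B,C)\cong\operatorname{Hom}(A,C^{B})$, natural in $A$ and $C$, which sends $f\colon A\times B\to C$ to the map $a\mapsto f(a,-)$.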
## Mathematics Subject Classification
18D15 Closed categories (closed monoidal and Cartesian closed categories, etc.)
http://cms.math.ca/cmb/v54/n1/
Canadian Mathematical Society
www.cms.math.ca
Volume 54 Number 1 (Mar 2011)
Page
Contents
3
Bakonyi, M.; Timotin, D.
Let $S$ be a subset of an amenable group $G$ such that $e\in S$ and $S^{-1}=S$. The main result of this paper states that if the Cayley graph of $G$ with respect to $S$ has a certain combinatorial property, then every positive definite operator-valued function on $S$ can be extended to a positive definite function on $G$. Several known extension results are obtained as corollaries. New applications are also presented.
12
Bingham, N. H.; Ostaszewski, A. J.
The Kestelman--Borwein--Ditor Theorem, on embedding a null sequence by translation in (measure/category) "large" sets, has two generalizations. Miller replaces the translated sequence by a sequence "homotopic to the identity". The authors, in a previous paper, replace points by functions: a uniform functional null sequence replaces the null sequence, and translation receives a functional form. We give a unified approach to results of this kind. In particular, we show that (i) Miller's homotopy version follows from the functional version, and (ii) the pointwise instance of the functional version follows from Miller's homotopy version.
21
Bouali, S.; Ech-chad, M.
Let $H$ be a separable, infinite-dimensional, complex Hilbert space and let $A, B\in{\mathcal L }(H)$, where ${\mathcal L}(H)$ is the algebra of all bounded linear operators on $H$. Let $\delta_{AB}\colon {\mathcal L}(H)\rightarrow {\mathcal L}(H)$ denote the generalized derivation $\delta_{AB}(X)=AX-XB$. This note will initiate a study on the class of pairs $(A,B)$ such that $\overline{{\mathcal R}(\delta_{AB})}= \overline{{\mathcal R}(\delta_{A^{\ast}B^{\ast}})}$.
28
Chang, Yu-Hsien; Hong, Cheng-Hong
The purpose of this paper is to show the existence of a generalized solution of the photon transport problem. By means of the theory of equicontinuous $C_{0}$-semigroup on a sequentially complete locally convex topological vector space we show that the perturbed abstract Cauchy problem has a unique solution when the perturbation operator and the forcing term function satisfy certain conditions. A consequence of the abstract result is that it can be directly applied to obtain a generalized solution of the photon transport problem.
39
Chapman, S. T.; García-Sánchez, P. A.; Llena, D.; Marshall, J.
Questions concerning the lengths of factorizations into irreducible elements in numerical monoids have gained much attention in the recent literature. In this note, we show that a numerical monoid has an element with two different irreducible factorizations of the same length if and only if its embedding dimension is greater than two. We find formulas in embedding dimension three for the smallest element with two different irreducible factorizations of the same length and the largest element whose different irreducible factorizations all have distinct lengths. We show that these formulas do not naturally extend to higher embedding dimensions.
44
Cheung, Wai-Shun; Tam, Tin-Yau
Given a complex semisimple Lie algebra $\mathfrak{g}=\mathfrak{k}+i\mathfrak{k}$ ($\mathfrak{k}$ is a compact real form of $\mathfrak{g}$), let $\pi\colon\mathfrak{g}\to \mathfrak{h}$ be the orthogonal projection (with respect to the Killing form) onto the Cartan subalgebra $\mathfrak{h}:=\mathfrak{t}+i\mathfrak{t}$, where $\mathfrak{t}$ is a maximal abelian subalgebra of $\mathfrak{k}$. Given $x\in \mathfrak{g}$, we consider $\pi(\mathop{\textrm{Ad}}(K) x)$, where $K$ is the analytic subgroup $G$ corresponding to $\mathfrak{k}$, and show that it is star-shaped. The result extends a result of Tsing. We also consider the generalized numerical range $f(\mathop{\textrm{Ad}}(K)x)$, where $f$ is a linear functional on $\mathfrak{g}$. We establish the star-shapedness of $f(\mathop{\textrm{Ad}}(K)x)$ for simple Lie algebras of type $B$.
56
Dinh, Thi Anh Thu
Let $\mathcal{A}$ be a line arrangement in the complex projective plane $\mathbb{P}^2$, having the points of multiplicity $\geq 3$ situated on two lines in $\mathcal{A}$, say $H_0$ and $H_{\infty}$. Then we show that the non-local irreducible components of the first resonance variety $\mathcal{R}_1(\mathcal{A})$ are 2-dimensional and correspond to parallelograms $\mathcal{P}$ in $\mathbb{C}^2=\mathbb{P}^2 \setminus H_{\infty}$ whose sides are in $\mathcal{A}$ and for which $H_0$ is a diagonal.
68
Eilers, Søren; Restorff, Gunnar; Ruiz, Efren
A. Bonkat obtained a universal coefficient theorem in the setting of Kirchberg's ideal-related $KK$-theory in the fundamental case of a $C^*$-algebra with one specified ideal. The universal coefficient sequence was shown to split, unnaturally, under certain conditions. Employing certain $K$-theoretical information derivable from the given operator algebras using a method introduced here, we shall demonstrate that Bonkat's UCT does not split in general. Related methods lead to information on the complexity of the $K$-theory which must be used to classify $*$-isomorphisms for purely infinite $C^*$-algebras with one non-trivial ideal.
82
Emerson, Heath
Using Poincaré duality, we obtain a formula of Lefschetz type that computes the Lefschetz number of an endomorphism of a separable nuclear $C^*$-algebra satisfying Poincaré duality and the Künneth theorem. (The Lefschetz number of an endomorphism is the graded trace of the induced map on $\textrm{K}$-theory tensored with $\mathbb{C}$, as in the classical case.) We then examine endomorphisms of Cuntz--Krieger algebras $O_A$. An endomorphism has an invariant, which is a permutation of an infinite set, and the contracting and expanding behavior of this permutation describes the Lefschetz number of the endomorphism. Using this description, we derive a closed polynomial formula for the Lefschetz number depending on the matrix $A$ and the presentation of the endomorphism.
100
Fan, Dashan; Wu, Huoxiong
A class of generalized Marcinkiewicz integral operators is introduced, and, under rather weak conditions on the integral kernels, the boundedness of such operators on $L^p$ and Triebel--Lizorkin spaces is established.
113
Hytönen, Tuomas P.
The generalized Beurling-Ahlfors operator $S$ on $L^p(\mathbb{R}^n;\Lambda)$, where $\Lambda:=\Lambda(\mathbb{R}^n)$ is the exterior algebra with its natural Hilbert space norm, satisfies the estimate $$\|S\|_{\mathcal{L}(L^p(\mathbb{R}^n;\Lambda))}\leq(n/2+1)(p^*-1),\quad p^*:=\max\{p,p'\}$$ This improves on earlier results in all dimensions $n\geq 3$. The proof is based on the heat extension and relies at the bottom on Burkholder's sharp inequality for martingale transforms.
126
Jin, Yongyang; Zhang, Genkai
We prove that the fundamental solutions of Kohn sub-Laplacians $\Delta + i\alpha \partial_t$ on the anisotropic Heisenberg groups are tempered distributions and have meromorphic continuation in $\alpha$ with simple poles. We compute the residues and find the partial fundamental solutions at the poles. We also find formulas for the fundamental solutions for some matrix-valued Kohn type sub-Laplacians on H-type groups.
141
Kim, Sang Og; Park, Choonkil
For $C^*$-algebras $\mathcal{A}$ of real rank zero, we describe linear maps $\phi$ on $\mathcal{A}$ that are surjective up to ideals $\mathcal{I}$, and $\pi(A)$ is invertible in $\mathcal{A}/\mathcal{I}$ if and only if $\pi(\phi(A))$ is invertible in $\mathcal{A}/\mathcal{I}$, where $A\in\mathcal{A}$ and $\pi:\mathcal{A}\to\mathcal{A}/\mathcal{I}$ is the quotient map. We also consider similar linear maps preserving zero products on the Calkin algebra.
147
Nelson, Sam
We define a family of generalizations of the two-variable quandle polynomial. These polynomial invariants generalize in a natural way to eight-variable polynomial invariants of finite biquandles. We use these polynomials to define a family of link invariants that further generalize the quandle counting invariant.
159
Sababheh, Mohammad
We prove that some inequalities, which are considered to be generalizations of Hardy's inequality on the circle, can be modified and proved to be true for functions integrable on the real line. In fact we would like to show that some constructions that were used to prove the Littlewood conjecture can be used similarly to produce real Hardy-type inequalities. This discussion will lead to many questions concerning the relationship between Hardy-type inequalities on the circle and those on the real line.
172
Shayya, Bassam
We prove that if the Fourier transform of a compactly supported measure is in $L^2$ of a half-space, then the measure is absolutely continuous to Lebesgue measure. We then show how this result can be used to translate information about the dimensionality of a measure and the decay of its Fourier transform into geometric information about its support.
180
Spurný, J.; Zelený, M.
An important conjecture in the theory of Borel sets in non-separable metric spaces is whether any point-countable Borel-additive family in a complete metric space has a $\sigma$-discrete refinement. We confirm the conjecture for point-countable $\mathbf\Pi_3^0$-additive families, thus generalizing results of R. W. Hansell and the first author. We apply this result to the existence of Borel measurable selectors for multivalued mappings of low Borel complexity, thus answering in the affirmative a particular version of a question of J. Kaniewski and R. Pol.
http://hbfs.wordpress.com/2012/05/29/cubic-interpolation-interpolation-part-ii/
# Harder, Better, Faster, Stronger
Explorations in better, faster, stronger code.
## Cubic Interpolation (Interpolation, part II)
In a previous entry, we had a look at linear interpolation and concluded that we should prefer some kind of smooth interpolation, maybe a polynomial.
However, we must use a polynomial of sufficient degree so that neighboring patches do not exhibit very different slopes on either side of known points. This pretty much rules out quadratic polynomials, because polynomials of the form $ax^2+bx+c$ are only capable of expressing (strictly) convex (or concave) patches. A quadratic piece-wise function would look something like:
Here, the solid lines represent the part of the patch used for interpolation, while the dashed line show the extent of the quadratic patch. We see that this yields a saw-tooth pattern, clearly something we do not want for “smooth” interpolation.
Since a cubic polynomial is capable of (slightly more) expressiveness, namely it can have two bumps (in opposing directions), and uses four points (instead of just three as with the quadratic patches), it captures possibly better the oscillations in the (unknown) underlying function. On the same points, maybe cubic patches would yield the following interpolation:
Maybe the slope doesn't quite match on either side of the points, but at least it doesn't change as wildly as the quadratic patches do.
Let again $\{(x_i,y_i)\}_{i=1}^n$ be a series of known points between which we wish to interpolate. So between any two points $(x_i,y_i)$ and $(x_{i+1},y_{i+1})$, we will use a cubic polynomial to interpolate. However, a degree $m$ polynomial needs $m+1$ points to be uniquely defined (that’s the unisolvence theorem), and we will need, in addition to $(x_i,y_i)$ and $(x_{i+1},y_{i+1})$, the points $(x_{i-1},y_{i-1})$ and $(x_{i+2},y_{i+2})$. We need to find the parameters to the equation
$ax^3+bx^2+cx+d=y$
that passes through all four points. That gives us four equations, four unknowns that we must solve:
$ax_{i-1}^3+bx_{i-1}^2+cx_{i-1}+d=y_{i-1}$
$ax_{i}^3+bx_{i}^2+cx_{i}+d=y_{i}$
$ax_{i+1}^3+bx_{i+1}^2+cx_{i+1}+d=y_{i+1}$
$ax_{i+2}^3+bx_{i+2}^2+cx_{i+2}+d=y_{i+2}$
But writing equations this way is cumbersome, and not propitious to fast and efficient solution. Noticing that
$ax^3+bx^2+cx+d=y$
can be rewritten as
$\hat{y}(x)=\left[~x^3~x^2~x~1~\right]~\left[\begin{array}{c} a\\ b\\ c\\ d\end{array}\right]$
That is, as a dot product. We can rewrite the four equations as a matrix/vector system of the form $Ma=y$:
$\left[\begin{array}{cccc} x_{i-1}^3 & x_{i-1}^2 & x_{i-1} & 1\\ x_{i}^3 & x_{i}^2 & x_{i} & 1\\ x_{i+1}^3 & x_{i+1}^2 & x_{i+1} & 1\\ x_{i+2}^3 & x_{i+2}^2 & x_{i+2} & 1 \end{array}\right]~\left[\begin{array}{c} a\\ b\\ c\\ d\end{array}\right]=\left[\begin{array}{c} y_{i-1}\\ y_{i}\\ y_{i+1}\\ y_{i+2}\end{array}\right]$
and we solve for $\left[~a~b~c~d~\right]$. Fortunately, it’s not too hard:
$\left[\begin{array}{c} a\\ b\\ c\\ d\end{array}\right]= \left[\begin{array}{cccc} x_{i-1}^3 & x_{i-1}^2 & x_{i-1} & 1\\ x_{i}^3 & x_{i}^2 & x_{i} & 1\\ x_{i+1}^3 & x_{i+1}^2 & x_{i+1} & 1\\ x_{i+2}^3 & x_{i+2}^2 & x_{i+2} & 1 \end{array}\right]^{-1} \left[\begin{array}{c} y_{i-1}\\ y_{i}\\ y_{i+1}\\ y_{i+2} \end{array}\right]$
Matrix inverses are really painful to compute in general (not even when using pen-and-paper computation) but if we have the special case where $x_{i-1}=-1$, $x_{i}=0$, $x_{i+1}=1$, and $x_{i+2}=2$, then the matrix simplifies to
$M=\left[\begin{array}{cccc} -1 & 1 & -1 & 1\\ 0 & 0 & 0 & 1\\ 1 & 1 & 1 & 1\\ 8 & 4 & 2 & 1 \end{array}\right]$
with the inverse
$M^{-1}=\displaystyle\frac{1}{6}\left[\begin{array}{cccc} -1 & 3 & -3 & 1\\ 3 & -6 & 3 & 0\\ -2 & -3 & 6 & -1\\ 0 & 6 & 0 & 0 \end{array}\right]$
and solving for the parameters are obtained by
$\left[\begin{array}{c} a\\ b\\ c\\ d \end{array}\right]=M^{-1}~\left[\begin{array}{c} y_{i-1}\\ y_{i}\\ y_{i+1}\\ y_{i+2}\end{array}\right]$
*
* *
We can compute the product $M^{-1}y$ in much less than $4\times{4}$ (scalar) products. First, some entries are zero, others are $\pm{1}$ and therefore reduce to simple addition/subtraction; then we notice that some are similar except for sign. With constant folding, we can get something efficient.
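To make the folding concrete, here is a minimal C++ sketch (names are illustrative, not taken from elsewhere in this series) that evaluates the patch through the four points $(-1,y_{i-1})$, $(0,y_i)$, $(1,y_{i+1})$, $(2,y_{i+2})$ at a parameter $t\in[0,1]$, that is, between the two middle points:

```
// Evaluates the cubic through (-1, ym1), (0, y0), (1, y1), (2, y2) at t,
// with t in [0,1] interpolating between the two middle points.
// The coefficients come from the inverse matrix above, with the 1/6
// factor folded into the arithmetic.
double cubic_patch(double ym1, double y0, double y1, double y2, double t)
{
    const double a = (-ym1 + 3.0 * y0 - 3.0 * y1 + y2) / 6.0;
    const double b = (ym1 - 2.0 * y0 + y1) / 2.0;
    const double c = (-2.0 * ym1 - 3.0 * y0 + 6.0 * y1 - y2) / 6.0;
    const double d = y0;

    return ((a * t + b) * t + c) * t + d; // Horner's scheme
}
```

As a check, the sketch returns $y_i$ at $t=0$ and $y_{i+1}$ at $t=1$, as it should.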
*
* *
So we have a rather efficient way of fitting a cubic through four points, but the cubic patches still do not quite fit together as nicely as they should. In particular, the derivative around $(x_i,y_i)$ is not the same whether we consider the patch interpolating between $(x_{i-1},y_{i-1})$ and $(x_i,y_i)$ and the patch interpolating between $(x_i,y_i)$ and $(x_{i+1},y_{i+1})$. So how do we force them to match?
Well, we can add the constraints explicitly in the equation system and solve for different parameters. One simple way of doing so is to use the Hermite splines.
To be continued…
http://www.johnmyleswhite.com/
# John Myles White
## What’s Next
By John Myles White on 5.9.2013
The last two weeks have been full of changes for me. For those who’ve been asking about what’s next, I thought I’d write up a quick summary of all the news.
(1) I successfully defended my thesis this past Monday. Completing a Ph.D. has been a massive undertaking for the past five years, and it’s a major relief to be done. From now on I’ll be (perhaps undeservedly) making airline and restaurant reservations under the name Dr. White.
(2) As announced last week, I’ll be one of the residents at Hacker School this summer. The list of other residents is pretty amazing, and I’m really looking forward to meeting the students.
(3) In addition to my residency at Hacker School, I’ll be a temporary postdoc in the applied math department at MIT, where I’ll be working on Julia full-time. Expect to see lots of work on building up the core data analysis infrastructure.
(4) As of today I’ve accepted an offer to join Facebook’s Data Science team in the fall. I’ll be moving out to the Bay Area in November.
That’s all so far.
## Using Norms to Understand Linear Regression
By John Myles White on 3.22.2013
### Introduction
In my last post, I described how we can derive modes, medians and means as three natural solutions to the problem of summarizing a list of numbers, $$(x_1, x_2, \ldots, x_n)$$, using a single number, $$s$$. In particular, we measured the quality of different potential summaries in three different ways, which led us to modes, medians and means respectively. Each of these quantities emerged from measuring the typical discrepancy between an element of the list, $$x_i$$, and the summary, $$s$$, using a formula of the form,
\sum_i |x_i - s|^p,
where $$p$$ was either $$0$$, $$1$$ or $$2$$.
### The $$L_p$$ Norms
In this post, I’d like to extend this approach to linear regression. The notion of discrepancies we used in the last post is very closely tied to the idea of measuring the size of a vector in $$\mathbb{R}^n$$. Specifically, we were minimizing a measure of discrepancies that was almost identical to the $$L_p$$ family of norms that can be used to measure the size of vectors. Understanding $$L_p$$ norms makes it much easier to describe several modern generalizations of classical linear regression.
To extend our previous approach to the more standard notion of an $$L_p$$ norm, we simply take the sum we used before and rescale things by taking a $$p^{th}$$ root. This gives the formula for the $$L_p$$ norm of any vector, $$v = (v_1, v_2, \ldots, v_n)$$, as,
|v|_p = (\sum_i |v_i|^p)^\frac{1}{p}.
When $$p = 2$$, this formula reduces to the familiar formula for the length of a vector:
|v|_2 = \sqrt{\sum_i v_i^2}.
In the last post, the vector we cared about was the vector of elementwise discrepancies, $$v = (x_1 - s, x_2 - s, \ldots, x_n - s)$$. We wanted to minimize the overall size of this vector in order to make $$s$$ a good summary of $$x_1, \ldots, x_n$$. Because we were interested only in the minimum size of this vector, it didn't matter that we skipped taking the $$p^{th}$$ root at the end, because one vector, $$v_1$$, has a smaller norm than another vector, $$v_2$$, only when the $$p^{th}$$ power of that norm is smaller than the $$p^{th}$$ power of the other. What was essential wasn't the scale of the norm, but rather the value of $$p$$ that we chose. Here we'll follow that approach again. Specifically, we'll again be working consistently with the $$p^{th}$$ power of an $$L_p$$ norm:
|v|_p^p = (\sum_i |v_i|^p).
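As a concrete illustration, here is a minimal R sketch of these two quantities (just to fix ideas; the function names are mine and nothing below depends on them):

```
lp_norm <- function(v, p) sum(abs(v)^p)^(1 / p)
lp_norm_p <- function(v, p) sum(abs(v)^p)  # the p-th power we actually minimize

v <- c(3, -4)
lp_norm(v, 2) # 5, the usual Euclidean length
lp_norm(v, 1) # 7, the sum of absolute values
```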
### The Regression Problem
Using $$L_p$$ norms to measure the overall size of a vector of discrepancies extends naturally to other problems in statistics. In the previous post, we were trying to summarize a list of numbers by producing a simple summary statistic. In this post, we’re instead going to summarize the relationship between two lists of numbers in a form that generalizes traditional regression models.
Instead of a single list, we’ll now work with two vectors: $$(x_1, x_2, \ldots, x_n)$$ and $$(y_1, y_2, \ldots, y_n)$$. Because we like simple models, we’ll make the very strong (and very convenient) assumption that the second vector is, approximately, a linear function of the first vector, which gives us the formula:
y_i \approx \beta_0 + \beta_1 x_i.
In practice, this linear relationship is never perfect, but only an approximation. As such, for any specific values we choose for $$\beta_0$$ and $$\beta_1$$, we have to compute a vector of discrepancies: $$v = (y_1 - (\beta_0 + \beta_1 x_1), \ldots, y_n - (\beta_0 + \beta_1 x_n))$$. The question then becomes: how do we measure the size of this vector of discrepancies? By choosing different norms to measure its size, we arrive at several different forms of linear regression models. In particular, we'll work with three norms: the $$L_0$$, $$L_1$$ and $$L_2$$ norms.
As we did with the single vector case, here we’ll define discrepancies as,
d_i = |y_i - (\beta_0 + \beta_1 x_i)|^p,
and the total error as,
E_p = \sum_i |y_i - (\beta_0 + \beta_1 x_i)|^p,
which is just the $$p^{th}$$ power of the $$L_p$$ norm.
### Several Forms of Regression
In general, we want to estimate a set of regression coefficients that minimize this total error. Different forms of linear regression appear when we alter the values of $$p$$. As before, let's consider three settings:
E_0 = \sum_i |y_i - (\beta_0 + \beta_1 x_i)|^0
E_1 = \sum_i |y_i - (\beta_0 + \beta_1 x_i)|^1
E_2 = \sum_i |y_i - (\beta_0 + \beta_1 x_i)|^2
What happens in these settings? In the first case, we select regression coefficients so that the line passes through as many points as possible. Clearly we can always select a line that passes through any pair of points. And we can show that there are data sets in which we cannot do better. So the $$L_0$$ norm doesn’t seem to provide a very useful form of linear regression, but I’d be interested to see examples of its use.
In contrast, minimizing $$E_1$$ and $$E_2$$ define quite interesting and familiar forms of linear regression. We’ll start with $$E_2$$ because it’s the most familiar: it defines Ordinary Least Squares (OLS) regression, which is the one we all know and love. In the $$L_2$$ case, we select $$\beta_0$$ and $$\beta_1$$ to minimize,
E_2 = \sum_i (y_i - (\beta_0 + \beta_1 x_i))^2,
which is the summed squared error over all of the $$(x_i, y_i)$$ pairs. In other words, Ordinary Least Squares regression is just an attempt to find an approximating linear relationship between two vectors that minimizes the $$L_2$$ norm of the vector of discrepancies.
Although OLS regression is clearly king, the coefficients we get from minimizing $$E_1$$ are also quite widely used: using the $$L_1$$ norm defines Least Absolute Deviations (LAD) regression, which is also sometimes called Robust Regression. This approach to regression is robust because large outliers that would produce errors greater than $$1$$ are not unnecessarily augmented by the squaring operation that’s used in defining OLS regression, but instead only have their absolute values taken. This means that the resulting model will try to match the overall linear pattern in the data even when there are some very large outliers.
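To see that robustness in action, here is a small R sketch; it assumes the quantreg package, whose rq function fits the $$L_1$$ (median) regression:

```
library(quantreg)

set.seed(1)
x <- 1:100
y <- 2 + 3 * x + rnorm(100)
y[100] <- 10000 # one wild outlier

coef(lm(y ~ x))            # OLS: the outlier pulls the fit away from (2, 3)
coef(rq(y ~ x, tau = 0.5)) # LAD: stays close to the true intercept and slope
```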
We can also relate these two approaches to the strategy employed in the previous post. When we use OLS regression (which would be better called $$L_2$$ regression), we predict the mean of $$y_i$$ given the value of $$x_i$$. And when we use LAD regression (which would be better called $$L_1$$ regression), we predict the median of $$y_i$$ given the value of $$x_i$$. Just as I said in the previous post, the core theoretical tool that we need to understand is the $$L_p$$ norm. For single number summaries, it naturally leads to modes, medians and means. For simple regression problems, it naturally leads to LAD regression and OLS regression. But there’s more: it also leads naturally to the two most popular forms of regularized regression.
### Regularization
If you’re not familiar with regularization, the central idea is that we don’t exclusively try to find the values of $$\beta_0$$ and $$\beta_1$$ that minimize the discrepancy between $$\beta_0 + \beta_1 x_i$$ and $$y_i$$, but also simultaneously try to satisfy a competing requirement that $$\beta_1$$ not get too large. Note that we don’t try to control the size of $$\beta_0$$ because it describes the overall scale of the data rather than the relationship between $$x$$ and $$y$$.
Because these objectives compete, we have to combine them into a single objective. We do that by working with a linear sum of the two objectives. And because both the discrepancy objective and the size of the coefficients can be described in terms of norms, we’ll assume that we want to minimize the $$L_p$$ norm of the discrepancies and the $$L_q$$ norm of the $$\beta$$’s. This means that we end up trying to minimize an expression of the form,
(\sum_i |y_i - (\beta_0 + \beta_1 x_i)|^{p}) + \lambda (|\beta_1|^q).
In most regularized regression models that I've seen in the wild, people tend to use $$p = 2$$ and $$q = 1$$ or $$q = 2$$. When $$q = 1$$, this model is called the LASSO. When $$q = 2$$, this model is called ridge regression. In a future post, I'll try to describe why the LASSO and ridge regression produce such different patterns of coefficients.
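In R, both penalties are available through the glmnet package; a minimal sketch, assuming glmnet is installed (its alpha argument selects between the two):

```
library(glmnet)

set.seed(1)
X <- matrix(rnorm(100 * 10), 100, 10)
y <- X[, 1] - 2 * X[, 2] + rnorm(100)

lasso <- glmnet(X, y, alpha = 1) # q = 1 penalty
ridge <- glmnet(X, y, alpha = 0) # q = 2 penalty

coef(lasso, s = 0.1) # several coefficients are exactly zero
coef(ridge, s = 0.1) # coefficients are shrunk but stay nonzero
```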
## Modes, Medians and Means: A Unifying Perspective
By John Myles White on 3.22.2013
### Introduction / Warning
Any traditional introductory statistics course will teach students the definitions of modes, medians and means. But, because introductory courses can’t assume that students have much mathematical maturity, the close relationship between these three summary statistics can’t be made clear. This post tries to remedy that situation by making it clear that all three concepts arise as specific parameterizations of a more general problem.
To do so, I’ll need to introduce one non-standard definition that may trouble some readers. In order to simplify my exposition, let’s all agree to assume that $$0^0 = 0$$. In particular, we’ll want to assume that $$|0|^0 = 0$$, even though $$|\epsilon|^0 = 1$$ for all $$\epsilon > 0$$. This definition is non-standard, but it greatly simplifies what follows and emphasizes the conceptual unity of modes, medians and means.
### Constructing a Summary Statistic
To see how modes, medians and means arise, let’s assume that we have a list of numbers, $$(x_1, x_2, \ldots, x_n)$$, that we want to summarize. We want our summary to be a single number, which we’ll call $$s$$. How should we select $$s$$ so that it summarizes the numbers, $$(x_1, x_2, \ldots, x_n)$$, effectively?
To answer that, we'll assume that $$s$$ is an effective summary of the entire list if the typical discrepancy between $$s$$ and each of the $$x_i$$ is small. With that assumption in place, we only need to do two things: (1) define the notion of discrepancy between two numbers, $$x_i$$ and $$s$$; and (2) define the notion of a typical discrepancy. Because each number $$x_i$$ produces its own discrepancy, we'll need to introduce a method for aggregating the individual discrepancies in order to say something about the typical discrepancy.
### Defining a Discrepancy
We could define the discrepancy between a number $$x_i$$ and another number $$s$$ in many ways. For now, we'll consider only three possibilities. All three of these options satisfy a basic intuition we have about the notion of discrepancy: we expect that the discrepancy between $$x_i$$ and $$s$$ should be $$0$$ if $$|x_i - s| = 0$$ and that the discrepancy should be greater than $$0$$ if $$|x_i - s| > 0$$. That leaves us with one obvious question: how much greater should the discrepancy be when $$|x_i - s| > 0$$?
To answer that question, let’s consider three definitions of the discrepancy, $$d_i$$:
1. $$d_i = |x_i - s|^0$$
2. $$d_i = |x_i - s|^1$$
3. $$d_i = |x_i - s|^2$$
How should we think about these three possible definitions?
The first definition, $$d_i = |x_i - s|^0$$, says that the discrepancy is $$1$$ if $$x_i \neq s$$ and is $$0$$ only when $$x_i = s$$. This notion of discrepancy is typically called zero-one loss in machine learning. Note that this definition implies that anything other than exact equality produces a constant measure of discrepancy. Summarizing $$x_i = 2$$ with $$s = 0$$ is neither better nor worse than using $$s = 1$$. In other words, the discrepancy does not increase at all as $$s$$ gets further and further from $$x_i$$. You can see this reflected in the far-left column of the image below:
The second definition, $$d_i = |x_i - s|^1$$, says that the discrepancy is equal to the distance between $$x_i$$ and $$s$$. This is often called an absolute deviation in machine learning. Note that this definition implies that the discrepancy should increase linearly as $$s$$ gets further and further from $$x_i$$. This is reflected in the center column of the image above.
The third definition, $$d_i = |x_i - s|^2$$, says that the discrepancy is the squared distance between $$x_i$$ and $$s$$. This is often called a squared error in machine learning. Note that this definition implies that the discrepancy should increase super-linearly as $$s$$ gets further and further from $$x_i$$. For example, if $$x_i = 1$$ and $$s = 0$$, then the discrepancy is $$1$$. But if $$x_i = 2$$ and $$s = 0$$, then the discrepancy is $$4$$. This is reflected in the far right column of the image above.
When we consider a list with a single element, $$(x_1)$$, these definitions all suggest that we should choose the same number: namely, $$s = x_1$$.
### Aggregating Discrepancies
Although these definitions do not differ for a list with a single element, they suggest using very different summaries of a list with more than one number in it. To see why, let’s first assume that we’ll aggregate the discrepancy between $$x_i$$ and $$s$$ for each of the $$x_i$$ into a single summary of the quality of a proposed value of $$s$$. To perform this aggregation, we’ll sum up the discrepancies over each of the $$x_i$$ and call the result $$E$$.
In that case, our three definitions give three interestingly different possible definitions of the typical discrepancy, which we’ll call $$E$$ for error:
E_0 = \sum_{i} |x_i - s|^0.
E_1 = \sum_{i} |x_i - s|^1.
E_2 = \sum_{i} |x_i - s|^2.
When we write down these expressions in isolation, they don’t look very different. But if we select $$s$$ to minimize each of these three types of errors, we get very different numbers. And, surprisingly, each of these three numbers will be very familiar to us.
### Minimizing Aggregate Discrepancies
For example, suppose that we try to find $$s_0$$ that minimizes the zero-one loss definition of the error of a single number summary. In that case, we require that,
s_0 = \arg \min_{s} \sum_{i} |x_i - s|^0.
What value should $$s_0$$ take on? If you give this some extended thought, you’ll discover two things: (1) there is not necessarily a single best value of $$s_0$$, but potentially many different values; and (2) each of these best values is one of the modes of the $$x_i$$.
In other words, the best single number summary of a set of numbers, when you use exact equality as your metric of error, is one of the modes of that set of numbers.
What happens if we consider some of the other definitions? Let’s start by considering $$s_1$$:
s_1 = \arg \min_{s} \sum_{i} |x_i - s|^1.
Unlike $$s_0$$, $$s_1$$ is a unique number: it is the median of the $$x_i$$. That is, the best summary of a set of numbers, when you use absolute differences as your metric of error, is the median of that set of numbers.
Since we’ve just found that the mode and the median appear naturally, we might wonder if other familiar basic statistics will appear. Luckily, they will. If we look for,
s_2 = \arg \min_{s} \sum_{i} |x_i - s|^2,
we’ll find that, like $$s_1$$, $$s_2$$ is again a unique number. Moreover, $$s_2$$ is the mean of the $$x_i$$. That is, the best summary of a set of numbers, when you use squared differences as your metric of error, is the mean of that set of numbers.
To sum up, we’ve just seen that the three most famous single number summaries of a data set are very closely related: they all minimize the average discrepancy between $$s$$ and the numbers being summarized. They only differ in the type of discrepancy being considered:
1. The mode minimizes the number of times that one of the numbers in our summarized list is not equal to the summary that we use.
2. The median minimizes the average distance between each number and our summary.
3. The mean minimizes the average squared distance between each number and our summary.
In equations,
1. $$\text{The mode of } x_i = \arg \min_{s} \sum_{i} |x_i - s|^0$$
2. $$\text{The median of } x_i = \arg \min_{s} \sum_{i} |x_i - s|^1$$
3. $$\text{The mean of } x_i = \arg \min_{s} \sum_{i} |x_i - s|^2$$
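These claims are easy to check numerically. Here is a small R sketch that scores every candidate summary on a grid (using the $$0^0 = 0$$ convention from above for the zero-one loss):

```
x <- c(1, 2, 2, 3, 7)
candidates <- seq(0, 10, by = 0.5)

e0 <- sapply(candidates, function(s) sum(x != s))     # zero-one loss
e1 <- sapply(candidates, function(s) sum(abs(x - s))) # absolute deviations
e2 <- sapply(candidates, function(s) sum((x - s)^2))  # squared errors

candidates[which.min(e0)] # 2, the mode
candidates[which.min(e1)] # 2, the median
candidates[which.min(e2)] # 3, the mean
```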
### Summary
We’ve just seen that the mode, median and mean all arise from a simple parametric process in which we try to minimize the average discrepancy between a single number $$s$$ and a list of numbers, $$x_1, x_2, \ldots, x_n$$ that we try to summarize using $$s$$. In a future blog post, I’ll describe how the ideas we’ve just introduced relate to the concept of $$L_p$$ norms. Thinking about minimizing $$L_p$$ norms is a generalization of taking modes, medians and means that leads to almost every important linear method in statistics — ranging from linear regression to the SVD.
### Thanks
Thanks to Sean Taylor for reading a draft of this post and commenting on it.
## Writing Better Statistical Programs in R
By John Myles White on 1.24.2013
A while back a friend asked me for advice about speeding up some R code that they’d written. Because they were running an extensive Monte Carlo simulation of a model they’d been developing, the poor performance of their code had become an impediment to their work.
After I looked through their code, it was clear that the performance hurdles they were stumbling upon could be overcome by adopting a few best practices for statistical programming. This post tries to describe some of the simplest best practices for statistical programming in R. Following these principles should make it easier for you to write statistical programs that are both highly performant and correct.
### Write Out a DAG
Whenever you’re running a simulation study, you should appreciate the fact that you are working with a probabilistic model. Even if you are primarily focused upon the deterministic components of this model, the presence of any randomness in the model means that all of the theory of probabilistic models applies to your situation.
Almost certainly the most important concept in probabilistic modeling when you want to write efficient code is the notion of conditional independence. Conditional independence is important because many probabilistic models can be decomposed into simple pieces that can be computed in isolation. Although your model contains many variables, any one of these variables may depend upon only a few other variables in your model. If you can organize all of the variables in your model based on their dependencies, it will be easier to exploit two computational tricks: vectorization and parallelization.
Let’s go through an example. Imagine that you have the model shown below:
X \sim \text{Normal}(0, 1)
Y1 \sim \text{Uniform}(X, X + 1)
Y2 \sim \text{Uniform}(X - 1, X)
Z \sim \text{Cauchy}(Y1 + Y2, 1)
In this model, the distribution of Y1 and Y2 depends only on the value of X. Similarly, the distribution of Z depends only on the values of Y1 and Y2. We can formalize this notion using a DAG, which is a directed acyclic graph that depicts which variables depend upon which other variables. It will help you appreciate the value of this format if you think of the arrows in the DAG below as indicating the flow of causality:
Having this DAG drawn out for your model will make it easier to write efficient code, because you can generate all of the values of a variable V simultaneously once you’ve computed the values of the variables that V depends upon. In our example, you can generate the values of X for all of your different simulations at once and then generate all of the Y1′s and Y2′s based on the values of X that you generate. You can then exploit this stepwise generation procedure to vectorize and parallelize your code. I’ll discuss vectorization to give you a sense of how to exploit the DAG we’ve drawn to write faster code.
### Vectorize Your Simulations
Sequential dependencies are a major bottleneck in languages like R and Matlab that cannot perform loops efficiently. Looking at the DAG for the model shown above, you might think that you can't get around writing a "for" loop to generate samples of this model because some of the variables need to be generated before others.
But, in reality, each individual sample from this model is independent of all of the others. As such, you can draw all of the X’s for all of your different simulations using vectorized code. Below I show how this model could be implemented using loops and then show how this same model could be implemented using vectorized operations:
#### Loop Code
```run.sims <- function(n.sims)
{
results <- data.frame()
for (sim in 1:n.sims)
{
x <- rnorm(1, 0, 1)
y1 <- runif(1, x, x + 1)
y2 <- runif(1, x - 1, x)
z <- rcauchy(1, y1 + y2, 1)
results <- rbind(results, data.frame(X = x, Y1 = y1, Y2 = y2, Z = z))
}
return(results)
}
b <- Sys.time()
run.sims(5000)
e <- Sys.time()
e - b```
#### Vectorized Code
```run.sims <- function(n.sims)
{
x <- rnorm(n.sims, 0, 1)
y1 <- runif(n.sims, x, x + 1)
y2 <- runif(n.sims, x - 1, x)
z <- rcauchy(n.sims, y1 + y2, 1)
results <- data.frame(X = x, Y1 = y1, Y2 = y2, Z = z)
return(results)
}
b <- Sys.time()
run.sims(5000)
e <- Sys.time()
e - b```
The performance gains for this example are substantial when you move from the naive loop code to the vectorized code. (NB: There are also some gains from avoiding the repeated calls to `rbind`, although they are less important than one might think in this case.)
We could go further and parallelize the vectorized code, but this can be tedious to do in R.
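If you do want to parallelize, the parallel package that ships with R is one relatively low-friction option; a rough sketch (mclapply forks processes, so this particular approach only works on Unix-like systems):

```
library(parallel)

# Run four chunks of 1,250 simulations each on separate cores, then stack them.
chunks <- mclapply(rep(1250, 4), run.sims, mc.cores = 4)
results <- do.call(rbind, chunks)
```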
### The Data Generation / Model Fitting Cycle
Vectorization can make code in languages like R much more efficient. But speed is useless if you’re not generating correct output. For me, the essential test of correctness for a probabilistic model only becomes clear after I’ve written two complementary functions:
1. A data generation function that produces samples from my model. We can call this function `generate`. The arguments to `generate` are the parameters of my model.
2. A model fitting function that estimates the parameters of my model based on a sample of data. We can call this function `fit`. The arguments to `fit` are the data points we generated using `generate`.
The value of these two functions is that they can be set up to feed back into one another in the cycle shown below:
I feel confident in the quality of statistical code when these functions interact stably. If the parameters inferred in a single pass through this loop are close to the original inputs, then my code is likely to work correctly. This amounts to a specific instance of the following design pattern:
```data <- generate(model, parameters)
inferred.parameters <- fit(model, data)
reliability <- error(model, parameters, inferred.parameters)```
To see this pattern in action, let's step through a process of generating data from a normal distribution and then fitting a normal to the data we generate. You can think of this as a form of "currying" in which we hardcode the value of the parameter `model`:
```n.sims <- 100
n.obs <- 100
generate.normal <- function(parameters)
{
return(rnorm(n.obs, parameters[1], parameters[2]))
}
fit.normal <- function(data)
{
return(c(mean(data), sd(data)))
}
distance <- function(true.parameters, inferred.parameters)
{
return((true.parameters - inferred.parameters)^2)
}
reliability <- data.frame()
for (sim in 1:n.sims)
{
parameters <- c(runif(1), runif(1))
data <- generate.normal(parameters)
inferred.parameters <- fit.normal(data)
recovery.error <- distance(parameters, inferred.parameters)
reliability <- rbind(reliability,
data.frame(True1 = parameters[1],
True2 = parameters[2],
Inferred1 = inferred.parameters[1],
Inferred2 = inferred.parameters[2],
Error1 = recovery.error[1],
Error2 = recovery.error[2]))
}```
If you generate data this way, you will see that our inference code is quite reliable. And you can see that it becomes better if we set `n.obs` to a larger value like 100,000.
I expect this kind of performance from all of my statistical code. I can’t trust the quality of either `generate` or `fit` until I see that they play well together. It is their mutual coherence that inspires faith.
### General Lessons
#### Speed
When writing code in R, you can improve performance by searching for every possible location in which vectorization is possible. Vectorization essentially replaces R’s loops (which are not efficient) with C’s loops (which are efficient) because the computations in a vectorized call are almost always implemented in a language other than R.
#### Correctness
When writing code for model fitting in any language, you should always ensure that your code can infer the parameters of models when given simulated data with known parameter values.
## Americans Live Longer and Work Less
By John Myles White on 1.21.2013
Today I saw an article on Hacker News entitled, “America’s CEOs Want You to Work Until You’re 70″. I was particularly surprised by this article appearing out of the blue because I take it for granted that America will eventually have to raise the retirement age to avoid bankruptcy. After reading the article, I wasn’t able to figure out why the story had been run at all. So I decided to do some basic fact-checking.
I tracked down some time series data about life expectancies in the U.S. from Berkeley and then found some time series data about the average age at retirement from the OECD. Plotting just these two bits of information, as shown below, makes it clear that Americans are spending a larger proportion of their life in retirement.
Perhaps I’m just naive, but it seems obvious to me that we can’t afford to take on several additional years of retirement pension liabilities for every living American. If Americans are living longer, we will need them to work longer in order to pay our bills.
## Symbolic Differentiation in Julia
By John Myles White on 1.7.2013
### A Brief Introduction to Metaprogramming in Julia
In contrast to my previous post, which described one way in which Julia allows (and expects) the programmer to write code that directly employs the atomic operations offered by computers, this post is meant to introduce newcomers to some of Julia’s higher level functions for metaprogramming. To make metaprogramming more interesting, we’re going to build a system for symbolic differentiation in Julia.
Like Lisp, the Julia interpreter represents Julian expressions using normal data structures: every Julian expression is represented using an object of type `Expr`. You can see this by typing something like `:(x + 1)` into the Julia REPL:
```julia> :(x + 1)
:(+(x,1))
julia> typeof(:(x+1))
Expr```
Looking at the REPL output when we enter an expression quoted using the `:` operator, we can see that Julia has rewritten our input expression, originally written using infix notation, as an expression that uses prefix notation. This standardization to prefix notation makes it easier to work with arbitrary expressions because it removes a needless source of variation in the format of expressions.
To develop an intuition for what this kind of expression means to Julia, we can use the `dump` function to examine its contents:
```julia> dump(:(x + 1))
Expr
head: Symbol call
args: Array(Any,(3,))
1: Symbol +
2: Symbol x
3: Int64 1
typ: Any```
Here you can see that a Julian expression consists of three parts:
1. A `head` symbol, which describes the basic type of the expression. For this blog post, all of the expressions we’ll work with have `head` equal to `:call`.
2. An `Array{Any}` that contains the arguments of the `head`. In our example, the `head` is `:call`, which indicates a function call is being made in this expression. The arguments for the function call are:
1. `:+`, the symbol denoting the addition function that we are calling.
2. `:x`, the symbol denoting the variable `x`
3. `1`, the number 1 represented as a 64-bit integer.
3. A `typ` which stores type inference information. We’ll ignore this information as it’s not relevant to us right now.
Because each expression is built out of normal components, we can construct one piecemeal:
```
julia> Expr(:call, {:+, 1, 1}, Any)
:(+(1,1))
```
Because this expression only depends upon constants, we can immediately evaluate it using the `eval` function:
```
julia> eval(Expr(:call, {:+, 1, 1}, Any))
2
```
### Symbolic Differentiation in Julia
Now that we know how Julia expressions are built, we can design a very simple prototype system for doing symbolic differentiation in Julia. We’ll build up our system in pieces using some of the most basic rules of calculus:
1. The Constant Rule: `d/dx c = 0`
2. The Symbol Rule: `d/dx x = 1`, `d/dx y = 0`
3. The Sum Rule: `d/dx (f + g) = (d/dx f) + (d/dx g)`
4. The Subtraction Rule: `d/dx (f - g) = (d/dx f) - (d/dx g)`
5. The Product Rule: `d/dx (f * g) = (d/dx f) * g + f * (d/dx g)`
6. The Quotient Rule: `d/dx (f / g) = [(d/dx f) * g - f * (d/dx g)] / g^2`
Implementing these operations is quite easy once you understand the data structure Julia uses to represent expressions. And some of these operations would be trivial regardless.
For example, here’s the Constant Rule in Julia:
```
differentiate(x::Number, target::Symbol) = 0
```
And here’s the Symbol rule:
```
function differentiate(s::Symbol, target::Symbol)
    if s == target
        return 1
    else
        return 0
    end
end
```
The first two rules of calculus don’t actually require us to understand anything about Julian expressions. But the interesting parts of a symbolic differentiation system do. To see that, let’s look at the Sum Rule:
```
function differentiate_sum(ex::Expr, target::Symbol)
    n = length(ex.args)
    new_args = Array(Any, n)
    new_args[1] = :+
    for i in 2:n
        new_args[i] = differentiate(ex.args[i], target)
    end
    return Expr(:call, new_args, Any)
end
```
The Subtraction Rule can be defined almost identically:
```
function differentiate_subtraction(ex::Expr, target::Symbol)
    n = length(ex.args)
    new_args = Array(Any, n)
    new_args[1] = :-
    for i in 2:n
        new_args[i] = differentiate(ex.args[i], target)
    end
    return Expr(:call, new_args, Any)
end
```
The Product Rule is a little more interesting because we need to build up an expression whose components are themselves expressions:
```
function differentiate_product(ex::Expr, target::Symbol)
    n = length(ex.args)
    res_args = Array(Any, n)
    res_args[1] = :+
    for i in 2:n
        new_args = Array(Any, n)
        new_args[1] = :*
        for j in 2:n
            if j == i
                new_args[j] = differentiate(ex.args[j], target)
            else
                new_args[j] = ex.args[j]
            end
        end
        res_args[i] = Expr(:call, new_args, Any)
    end
    return Expr(:call, res_args, Any)
end
```
Last, but not least, here’s the Quotient Rule, which is a little more complex. We can code this rule up in a more explicit fashion that doesn’t use any loops so that we can directly see the steps we’re taking:
```
function differentiate_quotient(ex::Expr, target::Symbol)
    return Expr(:call,
                {
                    :/,
                    Expr(:call,
                         {
                             :-,
                             Expr(:call,
                                  {
                                      :*,
                                      differentiate(ex.args[2], target),
                                      ex.args[3]
                                  },
                                  Any),
                             Expr(:call,
                                  {
                                      :*,
                                      ex.args[2],
                                      differentiate(ex.args[3], target)
                                  },
                                  Any)
                         },
                         Any),
                    Expr(:call,
                         {
                             :^,
                             ex.args[3],
                             2
                         },
                         Any)
                },
                Any)
end
```
Now that we have all of those basic rules of calculus implemented as functions, we’ll build up a lookup table that we can use to tell our final `differentiate` function where to send new expressions, based on the kind of function that’s being differentiated during each call to `differentiate`:
```
differentiate_lookup = {
    :+ => differentiate_sum,
    :- => differentiate_subtraction,
    :* => differentiate_product,
    :/ => differentiate_quotient
}
```
With all of the core machinery in place, the final definition of `differentiate` is very simple:
```
function differentiate(ex::Expr, target::Symbol)
    if ex.head == :call
        if has(differentiate_lookup, ex.args[1])
            return differentiate_lookup[ex.args[1]](ex, target)
        else
            error("Don't know how to differentiate $(ex.args[1])")
        end
    else
        # Fall back to differentiating the head symbol, passing the target along.
        return differentiate(ex.head, target)
    end
end
```
I’ve put all of these snippets together in a single GitHub Gist. To try out this new differentiation function, let’s copy the contents of that GitHub gist into a file called `differentiate.jl`. We can then load the contents of that file into Julia at the REPL using `include`, which will allow us to try out our differentiation tool:
```
julia> include("differentiate.jl")

julia> differentiate(:(x + x*x), :x)
:(+(1,+(*(1,x),*(x,1))))

julia> differentiate(:(x + a*x), :x)
:(+(1,+(*(0,x),*(a,1))))
```
While the expressions that are constructed by our `differentiate` function are ugly, they are correct: they just need to be simplified so that things like `*(0, x)` are replaced with `0`. If you’d like to see how to write code to perform some basic simplifications, you can see the `simplify` function I’ve been building for Julia’s new Calculus package. That codebase includes all of the functionality shown here for `differentiate`, along with several other rules that make the system more powerful.
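To give a flavor of what such simplification looks like, here is a toy sketch I’m adding purely for illustration (it is not the actual code in the Calculus package): a single rule that collapses any product containing a literal zero.
```
# Toy simplification rule: a product with a literal 0 argument collapses to 0.
# The real simplify() handles many more rules and recurses into sub-expressions.
function simplify(ex::Expr)
    if ex.head == :call && ex.args[1] == :*
        for arg in ex.args[2:end]
            if arg == 0
                return 0
            end
        end
    end
    return ex
end
simplify(x) = x  # numbers and symbols are already as simple as possible
```
With this in place, `simplify(:(0 * x))` returns `0`, which is exactly the kind of cleanup needed on the sub-expressions produced by `differentiate` above.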
What I love about Julia is the ease with which one can move from low-level bit operations like those described in my previous post to high-level operations that manipulate Julian expressions. By allowing the programmer to manipulate expressions programmatically, Julia has copied one of the most beautiful parts of Lisp.
Posted in Programming, Statistics | 2 Responses
## Computers are Machines
1.3.2013
When people try out Julia for the first time, many of them are worried by the following example:
```
julia> factorial(n) = n == 0 ? 1 : n * factorial(n - 1)

julia> factorial(20)
2432902008176640000

julia> factorial(21)
-4249290049419214848
```
If you’re not familiar with computer architecture, this result is very troubling. Why would Julia claim that the factorial of 21 is a negative number?
The answer is simple, but depends upon a set of concepts that are largely unfamiliar to programmers who, like me, grew up using modern languages like Python and Ruby. Julia thinks that the factorial of 21 is a negative number because computers are machines.
Because they are machines, computers represent numbers using many small groups of bits. Most modern machines work with groups of 64 bits at a time. If an operation has to work with more than 64 bits at a time, that operation will be slower than a similar operation that only works with 64 bits at a time.
As a result, if you want to write fast computer code, it helps to only execute operations that are easily expressible using groups of 64 bits.
Arithmetic involving small integers fits into the category of operations that only require 64 bits at a time. Every integer between `-9223372036854775808` and `9223372036854775807` can be expressed using just 64 bits. You can see this for yourself by using the `typemin` and `typemax` functions in Julia:
```
julia> typemin(Int64)
-9223372036854775808

julia> typemax(Int64)
9223372036854775807
```
If you do things like the following, the computer will quickly produce correct results:
```
julia> typemin(Int64) + 1
-9223372036854775807

julia> typemax(Int64) - 1
9223372036854775806
```
But things go badly if you try to break out of the range of numbers that can be represented using only 64 bits:
```
julia> typemin(Int64) - 1
9223372036854775807

julia> typemax(Int64) + 1
-9223372036854775808
```
The reasons for this are not obvious at first, but make more sense if you examine the actual bits being operated upon:
```
julia> bits(typemax(Int64))
"0111111111111111111111111111111111111111111111111111111111111111"

julia> bits(typemax(Int64) + 1)
"1000000000000000000000000000000000000000000000000000000000000000"

julia> bits(typemin(Int64))
"1000000000000000000000000000000000000000000000000000000000000000"
```
When it adds 1 to a number, the computer blindly uses a simple arithmetic rule for individual bits that works just like the carry system you learned as a child. This carrying rule is very efficient, but works poorly if you end up flipping the very first bit in a group of 64 bits. The reason is that this first bit represents the sign of an integer. When this special first bit gets flipped by an operation that overflows the space provided by 64 bits, everything else breaks down.
The special interpretation given to certain bits in a group of 64 is the reason that factorial of 21 is a negative number when Julia computes it. You can confirm this by looking at the exact bits involved:
```
julia> bits(factorial(20))
"0010000111000011011001110111110010000010101101000000000000000000"

julia> bits(factorial(21))
"1100010100000111011111010011011010111000110001000000000000000000"
```
Here, as before, the computer has just executed the operations necessary to perform multiplication by 21. But the result has flipped the sign bit, which causes the result to appear to be a negative number.
There is a way around this: you can tell Julia to work with groups of more than 64 bits at a time when expressing integers using the `BigInt` type:
```
julia> require("BigInt")

julia> BigInt(typemax(Int))
9223372036854775807

julia> BigInt(typemax(Int)) + 1
9223372036854775808

julia> BigInt(factorial(20)) * 21
51090942171709440000
```
Now everything works smoothly. By working with `BigInt`‘s automatically, languages like Python avoid these concerns:
```
>>> def factorial(n):
...     return 1 if n == 0 else n * factorial(n - 1)
...
>>> factorial(20)
2432902008176640000
>>> factorial(21)
51090942171709440000L
```
The `L` at the end of the numbers here indicates that Python has automatically converted a normal integer into something like Julia’s `BigInt`. But this automatic conversion comes at a substantial cost: every operation that stays within the bounds of 64-bit arithmetic is slower in Python than Julia because of the time required to check whether an operation might go beyond the 64-bit bound.
Python’s automatic conversion approach is safer, but slower. Julia’s approach is faster, but requires that the programmer understand more about the computer’s architecture. Julia achieves its performance by confronting the fact that computers are machines head on. This is confusing at first and frustrating at times, but it’s a price that you have to pay for high performance computing. Everyone who grew up with C is used to these issues, but they’re largely unfamiliar to programmers who grew up with modern languages like Python. In many ways, Julia sets itself apart from other new languages by its attempt to recover some of the power that was lost in the transition from C to languages like Python. But the transition comes with a substantial learning curve.
And that’s why I wrote this post.
Posted in Programming, Statistics | 8 Responses
## What is Correctness for Statistical Software?
12.14.2012
### Introduction
A few months ago, Drew Conway and I gave a webcast that tried to teach people about the basic principles behind linear and logistic regression. To illustrate logistic regression, we worked through a series of progressively more complex spam detection problems.
The simplest data set we used was the following:
This data set has one clear virtue: the correct classifier defines a decision boundary that implements a simple `OR` operation on the values of `MentionsViagra` and `MentionsNigeria`. Unfortunately, that very simplicity causes the logistic regression model to break down, because the MLE coefficients for `MentionsViagra` and `MentionsNigeria` should be infinite. In some ways, our elegantly simple example for logistic regression is actually the statistical equivalent of a SQL injection.
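To make the setup concrete, here is a rough R sketch (the data frame below is a reconstruction for illustration, not the exact data set from the webcast) that builds a perfectly separable version of this data and fits a logistic regression to it:
```
# Reconstructed toy spam data: the label is an exact OR of the two predictors,
# so the classes are perfectly separable and the theoretical MLE coefficients
# are infinite.
spam <- data.frame(
  MentionsViagra  = rep(c(0, 0, 1, 1), times = 25),
  MentionsNigeria = rep(c(0, 1, 0, 1), times = 25)
)
spam$IsSpam <- as.numeric(spam$MentionsViagra | spam$MentionsNigeria)

# glm() still returns finite coefficients, but should emit the cryptic warning
# discussed below during the fitting step.
fit <- glm(IsSpam ~ MentionsViagra + MentionsNigeria,
           data = spam, family = binomial())
summary(fit)
```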
In our webcast, Drew and I decided to ignore that concern because R produces a useful model fit despite the theoretical MLE coefficients being infinite:
Although R produces finite coefficients here despite theory telling us to expect something else, I should note that R does produce a somewhat cryptic warning during the model fitting step that alerts the very well-informed user that something has gone awry:
```
glm.fit: fitted probabilities numerically 0 or 1 occurred
```
It seems clear to me that R’s warning would be better off if it were substantially more verbose:
```
Warning from glm.fit():
Fitted probabilities could not be distinguished from 0's or 1's
under finite precision floating point arithmetic. As a result, the
optimization algorithm for GLM fitting may have failed to converge.
You should check whether your data set is linearly separable.
```
### Broader Questions
Although I’ve started this piece with a very focused example of how R’s implementation of logistic regression differs from the purely mathematical definition of that model, I’m not really that interested in the details of how different pieces of software implement logistic regression. If you’re interested in learning more about that kind of thing, I’d suggest reading the excellent piece on R’s logistic regression function that can be found on the Win-Vector blog.
Instead, what interests me right now are a set of broader questions about how statistical software should work. What is the standard for correctness for statistical software? And what is the standard for usefulness? And how closely related are those two criteria?
Let’s think about each of them separately:
• Usefulness: If you want to simply make predictions based on your model, then you want R to produce a fitted model for this data set that makes reasonably good predictions on the training data. R achieves that goal: the fitted predictions for R’s logistic regression model are numerically almost indistinguishable from the 0/1 values that we would expect from a maximum likelihood algorithm. If you want useful algorithms, then R’s decision to produce some model fit is justified.
• Correctness: If you want software to either produce mathematically correct answers or to die trying, then R’s implementation of logistic is not for you. If you insist on theoretical purity, it seems clear that R should not merely emit a warning here, but should instead throw an inescapable error rather than return an imperfect model fit. You might even want R to go further and to teach the end-user about the virtues of SVM’s or the general usefulness of parameter regularization. Whatever you’d like to see, one thing is sure: you definitely do not want R to produce model fits that are mathematically incorrect.
It’s remarkable that such a simple example can bring the goals of predictive power and theoretical correctness into such direct opposition. In part, the conflict arises here because those purely theoretical concerns are linked by a third consideration: computer algorithms are not generally equivalent to their mathematical idealizations. Purely computational concerns involving floating-point imprecision and finite compute time mean that we cannot generally hope for computers to produce answers similar to those prescribed by theoretical mathematics.
What’s fascinating about this specific example is that there’s something strangely desirable about floating-point numbers having finite precision: no one with any practical interest in modeling is likely to be interested in fitting a model with infinite-valued parameters. R’s decision to blindly run an optimization algorithm here unwittingly achieves a form of regularization like that employed in early stopping algorithms for fitting neural networks. And that may be a good thing if you’re interested in using a fitted model to make predictions, even though it means that R produces quantities like standard errors that have no real coherent interpretation in terms of frequentist estimators.
Whatever your take is on the virtues or vices of R’s implementation of logistic regression, there’s a broad take away from this example that I’ve been dealing with constantly while working on Julia: any programmer designing statistical software has to make decisions that involve personal judgment. The requirement for striking a compromise between correctness and usefulness is so nearly omnipresent that one of the most popular pieces of statistical software on Earth implements logistic regression using an algorithm that a pure theorist could argue is basically broken. But it produces an answer that has practical value. And that might just be the more important thing for statistical software to do.
Posted in Statistics | 6 Responses
## What is Economics Studying?
12.10.2012
Having spent all five of my years as a graduate student trying to get psychologists and economists to agree on basic ideas about decision-making, I think the following two pieces complement one another perfectly:
• Cosma Shalizi’s comments on rereading Blanchard and Fischer’s “Lectures on Macroeconomics”:
Blanchard and Fischer is about “modern” macro, models based on agents who know what the economy is like optimizing over time, possible under some limits. This is the DSGE style of macro. which has lately come into so much discredit — thoroughly deserved discredit. Chaikin and Lubensky is about modern condensed matter physics, especially soft condensed matter, based on principles of symmetry-breaking and phase transitions. Both books are about building stylized theoretical models and solving them to see what they imply; implicitly they are also about the considerations which go into building models in their respective domains.
What is very striking, looking at them side by side, is that while these are both books about mathematical modeling, Chaikin and Lubensky presents empirical data, compares theoretical predictions to experimental results, and goes into some detail into the considerations which lead to this sort of model for nematic liquid crystals, or that model for magnetism. There is absolutely nothing like this in Blanchard and Fischer — no data at all, no comparison of models to reality, no evidence of any kind supporting any of the models. There is not even an attempt, that I can find, to assess different macroeconomic models, by comparing their qualitative predictions to each other and to historical reality. I presume that Blanchard and Fischer, as individual scholars, are not quite so indifferent to reality, but their pedagogy is.
I will leave readers to draw their own morals.
• Itzhak Gilboa’s argument that economic theory is a rhetoric apparatus rather than a set of direct predictions about the world in which we live.
Posted in Economics, Psychology | Leave a response
## A Cheap Criticism of p-Values
12.6.2012
One of these days I am going to finish my series on problems with how NHST is used in the social sciences. Until then, I came up with a cheap criticism of p-values today.
To make sense of my complaint, you’ll want to head over to Andy Gelman’s blog and read the comments on his recent blog post about p-values. Reading them makes one thing clear: not even a large group of stats wonks can agree on how to think about p-values. How could we ever hope for understanding from the kind of people who are only reporting p-values because they’re forced to do so by their fields?
Posted in Psychology, Statistics | 2 Responses
http://math.stackexchange.com/questions/tagged/trigonometry+geometry
# Tagged Questions
3answers
13 views
### Right-angled isosceles triangles
If a right-angled triangle is isosceles then the other two angles must be equal to $45^\circ$ ? Is this always the case or are there other possible right-angled isosceles triangles?
0answers
15 views
### Find next point in ellipse given the chord length
I would like to draw a cloud programmatically. For this reason I need to know where to draw the next circle around the ellipse. Given the chord (circle radius), how can I calculate the next point in ...
1answer
32 views
### Find the value of $\tan^2\alpha+\cot^2\beta$
A circle with centre O has two chords AC and BD, which intersect each other at P. If $\angle AOB=15^\circ$ and $\angle APB=30^\circ$, then find the value of $$\tan^2\angle APB+\cot^2\angle COD$$ ...
1answer
47 views
### Drawing an arc between two points
I was writing a java program to draw an arc. Arc2D.Double(int x,int y,int width,int height,int startAngle,int arcAngle,int type); Since, I'm not familiar with the mathematics behind drawing arc, I'm ...
1answer
61 views
### How to find a point on the tangent line whos length is 1?
I'm trying to figure out a formula to find the point (x,y) on a tangent line whose length is between 0 and 1 while it rotates around the unit circle uniformly, so the point would either be right on the ...
3answers
60 views
### Distance between two antennas
I am trying to find out the formula to calculate how high antennas need to be for Line of Sight (LoS) propagation. I found: d = 3.57sqrt(h) also ...
2answers
43 views
### Calculate new position of rectangle corners based on angle.
I am trying to make a re-sizable touch view with rotation in android. I re-size rectangle successfully. You can find code here It has 4 corners. You can re-size that rectangle by dragging one of ...
3answers
40 views
### Please help me find a formula to find the 3rd point in a right triangle
I'm trying to figure out how to plot a 3rd point on a graph Given the following line segments and angles Is there a formula for the 3rd point? Note: This image is just for an example. The base ...
1answer
27 views
### Please help me to find an equation to find the 3rd point in an arc.
Long story short, I want to animate the rotation of an object that's based off a circle. Given the center point of the circle, the radius, and one of the points in the arc, is it possible to find the ...
0answers
20 views
### How can I align the angle between points with the magnetic heading as the points move?
I have 3 robots which must track a point. The distance between all the robots and the point is known so a triangle can be formed between any 2 robots and the point. If I find the angles in the ...
1answer
38 views
### Optimal rotation to align a circle with external points
I have a circle $C$ with radius $r$ and a set of finite points $P=\left \{ p_1,p_2,\ldots,p_n \right \}$ are identified external to the circle $C$. These points may lie on the exterior or the interior ...
1answer
60 views
### finding Length of a diagonal
Given Quadrilateral ABCD in such that $AB<BC<CD$ creating increasing arithmetic progression with sum of $27$ cm. $\measuredangle BCD=60^{0}$. the diagonal $BD=\sqrt{133}$ cm, and it divided ...
2answers
116 views
### Calculating circle radius from two points and arc length
For a simulation I want to convert between different kind of set point profiles with one being set points based on steering angles and one being based on circle radius. I have 2 way points the ...
1answer
18 views
### What is Angle(A,b) about something.
I was reading a paper and came across a notation saying .... Angle = Angle(A,B) about C. Can anybody tell me what exactly it means. Thanks, Harsha
1answer
21 views
### Need “up” vector to calculate distance from a focal plane given world coordinates (SOLVED)
I have a RGB image, and for each pixel in the image I also have its real world coordinate. I also have the location (real world coordinate) yaw, pitch and roll of the camera. I am trying to produce ...
0answers
43 views
### How to find the maximum diagonal length inside a dodecahedron
I am trying to find the maximum length of a diagonal inside a dodecahedron with a side length of 2.319914107*10^89 meters. I am not sure if any other information than that is needed, if it is I ...
1answer
56 views
### Euclidean triangle. Does this one exist
Does $\exists$ a Euclidean triangle $ABC$ with $\sin(A) : \sin(B) : \sin(C) = \frac{1}{4} : \frac{1}{3} : \frac{1}{2}$?
1answer
26 views
### Largest Quadrilateral from a Set of Points
I posted the below on StackOverflow but was directed here as this may be more mathematical problem but I was looking to implement an algorithm.... I have a discrete set of points. From this set of ...
0answers
12 views
### Is there a formula to get the changes in ship course from wind and current?
Anyone know how to get the changes of degree's in ship course that affected by wind and current? I thinks it maybe related with the speed and degree of WIND and CURRENT. But I don't know how to ...
1answer
62 views
### How can I calculate the angle of a slice of an ellipse?
I'm attempting to draw a pie-chart programmatically, using an ellipse instead of a circle, but I'm having trouble calculating the correct angles for the slices. If it were a circle, I could use the ...
1answer
17 views
### Coordinates of all 'N' points, equidistant from each other , on a circle of radius 'R' whose center is (h,v) from the origin?
How would I calculate the coordinates of all 'n points' equidistant from each other on a circle of radius r and the center coordinates of (h,v) from the origin .
1answer
40 views
### How to find a new point on rectangle based on an known point on the same?
I have rotated a rectangle a certain amount of degree and got the point(x,y)=(130,40) which was previously (152,60). Now i want to find the x,y(marked as red) value at another location based on the ...
0answers
34 views
### Geometry question
The sides of a triangle are given to be $x^2+x+1$ , $2x+1$ and $x^2-1$. Then the largest of the three angles of the triangle is a)75 degree b)$\dfrac{x}{x+\pi}$ c)120 degree d)135 degree please ...
1answer
98 views
### How to simplify this trigonometric expression?
I was trying to solve a problem taken from an Physics Olympiad when I came across a curious and complex mathematical expression. I can not prove with what I know so far about mathematics, does could ...
1answer
59 views
### Calculating mean velocity of an orbiting body as it moves towards a point.
I'm making a game, in the game planets orbit a central point in circular orbits, they move directly towards their targets and the vector is simply added to their orbital path. Whilst not realistic it ...
1answer
34 views
### Trig problem, finding angles and ranges
I have what may well be a simple problem, but it's been too long since I've done this type of problem. From a fixed point (intersection of all the lines), the angles to 3 other fixed points $a,b,c$ ...
2answers
49 views
### How do I find the surface area of an angled conic base?
Thank you for viewing my question. I need help creating a formula for finding the surface area of a conic base. (eg. I install a flood light on my roof, I want to know how much surface area it will ...
3answers
113 views
### How can I find the points of intersection between the curves $r=1+\sin\theta$ and $r=1-\sin\theta$?
Find the points of intersection for the curve $r=a(1+\sin\theta)$ and $r=a(1-\sin\theta)$ My book says the answer is $(0,0),(a,0),(a,\pi)$. However I calculated $(a,0),(a,\pi),(a,2\pi)$.
3answers
58 views
### Proofs on equilateral triangles
Let $\Delta$ be the set of all triangles with two equal edges and be inscribed in a circle of radius $R$. So, how do I show that: Equilateral triangle in $\Delta$ is maximizing the area? and this ...
2answers
131 views
### Hard proof concerning the periodicity of trigonometrical functions. Is that a challenge or just trivial
i want to know if exist or if you can develop or give me ideas of a proof to show that the least number for which sine is periodic is $2\pi$ \neg \{\exists n\in \mathbb{R} \wedge n < 2\pi: ...
1answer
87 views
### Is this a valid proof of the derivatives of the trigonometric functions?
For the sake of this proof, the trigonometric functions $\cos$ and $\sin$ are defined as the coordinates of a point on the unit circle, rather than any of the modern analytic definitions. Let \$\vec ...
2answers
89 views
### Can find the angles of the triangle created by 3 points if I have each points compass bearing?
I am currently researching using magnetometers and radio field strength of 3 points for localisation. Is it possible to use the compass heading of 3 points to work out the angles of the triangle they ...
3answers
104 views
### How to prove $\cos\left(\pi\over7\right)-\cos\left({2\pi}\over7\right)+\cos\left({3\pi}\over7\right)=\cos\left({\pi}\over3 \right)$
Is there an easy way to prove the identity? \cos \left ( \frac{\pi}{7} \right ) - \cos \left ( \frac{2\pi}{7} \right ) + \cos \left ( \frac{3\pi}{7} \right ) = \cos \left (\frac{\pi}{3} \right ...
0answers
19 views
### Distance from a point outside of a sphere to two related points on the sphere, also angles needed
I'm a chemist with a solid background in maths, but I would really appreciate some help with the following problem: I have a point $M$ and a sphere. I only know the distance from the point $M$ to the ...
1answer
58 views
### $\tan B\cdot \frac{BM}{MA}+\tan C\cdot \frac{CN}{NA}=\tan A.$
Let $\triangle ABC$ be a triangle and $H$ be the orthocenter of the triangle. If $M\in AB$ and $N \in AC$ such that $M,N,H$ are collinear prove that : \tan B\cdot \frac{BM}{MA}+\tan C\cdot ...
2answers
71 views
### $\sin{\frac{A+B}{2}}+\sin{\frac{B+C}{2}}+\sin{\frac{C+A}{2}} > \sin{A}+\sin{B}+\sin{C}.$
Help me please to prove that: for any $\triangle ABC$ we have the following inequality: $$\sin{\frac{A+B}{2}}+\sin{\frac{B+C}{2}}+\sin{\frac{C+A}{2}} > \sin{A}+\sin{B}+\sin{C}.$$ It's about ...
1answer
25 views
### The law of cosines for a sphere
$\cos(c) = \cos(a)\cos(b) + \sin(a)\sin(b)\cos(C)$ Prove that if $a$, $b$, and $c$ is approximately $0$, then $c^2 = a^2 + b^2 - 2ab~\cos(C)$. I wasn't sure how to prove this. One thought I had was ...
5answers
151 views
### elegant proof that $\sin(x)\cdot\cos(x)=\sin(2x)/2$
I tried for a few days to prove the identity $\sin(x)\cos(x)=\frac{\sin(2x)}{2}$ and finally got the following proof. I wanted to know if someone knew a simpler or more elegant way to proof it. ...
2answers
42 views
### Area of a rectangular triangle
We need to calculate the area of the triangle shown in figure: The text of the problem also says that: $\sin \alpha =2 \sin \beta$. What is the area of the triangle?
0answers
114 views
### Finding side and angle of isosceles triangle inside two circles
I'm having a problem that I'm not sure how to solve (or if it's even possible). It's not homework, just something i'm struggling with for a project. :) Basically, there are two circles, represented ...
1answer
72 views
### A controlled trapezoid transformation with perspective projecton
I'm trying to implement a controlled trapezoid transformation in Adobe Flash's ActionScript using the built-in perspective projection facility. To give you an idea of how the effect looks like: ...
1answer
59 views
### Area of a Quadrilateral proof
Prove that the area of a quadrilateral is one half the product of the lengths of its diagonals and the sine of the angle between the diagonals.
2answers
48 views
### Duplicate quadratic Bézier curve with new start point?
I have Bézier curve as shown by the wikipedia gif here: I would like to create a new curve that is a segment of the old one. For example, in this gif (from the same article): .. if I wanted B to ...
2answers
152 views
### The distance from a point to a line segment
I'm pretty sure this may be a duplicate post somewhere, but I've searched all through the internet looking for a definite formula to calculate the distance between a point and a line segment. There ...
1answer
47 views
### Finding the coordinates of a point five units along the line perpendicular to a midpoint?
I've been doing some personal math stuff and have spent the last few hours trying to figure this out with no success. I want to find the coordinates of the point at the end of the small line segment ...
1answer
134 views
### Trigonometry / Geometry Puzzle with a Circle Inscribed within a Square
Point P is any point on the inscribed circle. You must prove that (tan(a))^2 + (tan(B))^2 = 8 I first moved point P down to the point where the square would be tangent to the curve to make the ...
1answer
32 views
### Given a unit circle, is there a diameter that intersect it in one single point?
This is false but this is what I have come up with: The circle can be written as $x=\cos\phi$, $y=\sin\phi$, $\phi \in \left [ 0,2\pi \right ]$ Denote now $t=\tan{\frac{\phi}{2}}$ and it follows ...
2answers
78 views
### radian measure problem help.
Find, in radians, the angle between the tangents to a circle at two points whose distance apart, measured on the circumference of the circle is 350 ft., the radius of the circle being 800 ft. so ...
1answer
41 views
### Determine theta/radius line parameters from line segment endpoints
I've been working on this for the past few hours and am quite stuck! As part of a computer vision exercise I've build a Hough transform that maps between the (x,y) space of an image, and a parameter ...
1answer
99 views
### Trigonometry and Geometry
I have no idea on how to solve this question so can someone please assist me. My son brought it from school and he is really struggling with the question. Consider a triangle ABC with line segments ...
http://physics.stackexchange.com/questions/34713/if-the-nucleation-bubble-radius-is-greater-than-the-desitter-radius-does-that-m
If the nucleation bubble radius is greater than the deSitter radius, does that make the de Sitter space stable?
In our de Sitter phase, the cosmological constant is tiny, about $10^{-123}M_P^4$. Suppose there is another phase with a lower vacuum energy. Is the de Sitter phase still stable? The tunneling bubble radius has to exceed the de Sitter radius. Suppose a metastable decay to such a bubble happened. Take that final state, and evolve back in time. It's unlikely to tunnel back because of exponential suppression factors. Light cones are dragged outward in expanding de Sitter at such radii, so, by causality, the bubble radius has to keep shrinking back in time until at least the de Sitter radius. This contradicts our earlier assumption.
What about engineering a phase transition? Form a small bubble and stuff it with enough matter in the new phase with sufficient interior pressure to keep the bubble from shrinking. It collapses to a black hole if the radius R is much greater than $M_P^2/T$, which is much less than the de Sitter radius. The black hole then evaporates.
Even if the cosmological constant in the new phase is large and negative, the tunneling radius still has to be larger than the de Sitter radius because the hyperbolic geometry of AdS means the volume of the new phase is only proportional to the domain wall area?
If we assume there is a large matter density in the de Sitter phase, it has to be very very large to make the tunneling radius smaller than the de Sitter radius.
Is this de Sitter phase stable and not metastable?
This is a good question, it can be resolved by considering the thermal nature of deSitter and the continuation to a sphere--- you just thermally mix the two vacua, you go back and forth. The second part of the question is not so sensible, though. What are you doing with the bubble radius? The pressure is not up to you to control, it's determined by the new vacuum energy and the instanton structure. – Ron Maimon Aug 23 '12 at 6:15
To what extent can we be confident we're not sitting in some metastable vacuum that might yet decay (eg. we're in the symmetric phase of a Higgs potential which will fall apart once the universe cools enough - or alternatively, dark energy is some slow-rolling scalar field that will eventually reach its minimum)? – James Aug 30 '12 at 12:45
1 Answer
No, de Sitter is still metastable. Our minds evolved to visualize flat Euclidean geometry in 2D and 3D only and have problems visualizing higher dimensional, highly warped Lorentzian geometries, which is why so much confusion exists. To aid our visual intuition, we have to "pretend" and project out a few extra dimensions, while imagining curved spacetime as a curved 2D surface embedded within a flat 3D Euclidean space which really transforms as a 2+1D Lorentzian space.
To aid this intuition, embed 3+1D de Sitter space within 4+1D Minkowski space with Cartesian coordinates T, W, X, Y and Z. Only the first is timelike. de Sitter space is the induced surface satisfying $-T^2 + W^2 + X^2 + Y^2 + Z^2 = 1/k^2$.
The solution that is tunneled to corresponds to cutting off de Sitter space to $W\leqslant -\sqrt{1/k^2 -c^2M_P^4/T_w^2}$ where $T_w$ is the domain wall tension, and attaching a flat Minkowski space at $W=\sqrt{1/k^2 -c^2M_P^4/T_w^2}$ bounded by its intersection with de Sitter.
Let $r^2 = X^2+Y^2+Z^2$, $Z=r\cos\theta$, $X=r\sin\theta \cos\phi$, $Y=r\sin\theta \sin\phi$.
In the coordinate system $(\tau=\sinh^{-1}(kT)/k,r,\Omega)$, the minimum bubble radius is $cM_P^2/T_w$, but there is inside-outside reversal for de Sitter space so that the remaining de Sitter space outside is compact with size $cM_P^2/T_w$ as well. This is what naive Wick rotation and the Coleman-de Luccia analysis gives.
Expanding de Sitter coordinates only cover a patch of half of de Sitter space corresponding to $W+T > 0$. Let $kt=\ln (k(W+T))$. In this coordinate system, $-(e^{kt}/k+\sqrt{1/k^2 -c^2M_P^4/T_w^2})^2+(1/k^2 -c^2M_P^4/T_w^2)+r^2 =1/k^2$. As $t\to -\infty$, the bubble radius approaches $1/k$ from the outside. In this coordinate system, it looks as though, going back in time, the wall asymptotically approaches the de Sitter radius but never gets smaller than that. This is just a coordinate singularity.
There is a tunneling from expanding de Sitter space everywhere to this solution. The tunneling doesn't happen at $T=0$ as suggested by Wick rotation, but at a tiny positive value of $W+T$, to accommodate the bubble wall thickness just outside the de Sitter radius in the expanding de Sitter coordinates. Wick rotation gets it wrong because the Killing vector in the expanding de Sitter coordinates isn't globally timelike, and also because the Euclidean instanton solution doesn't apply to this steady state with no time reversal invariance in this coordinate system for $t$.
It's not true that the tunneling radius has to be around $T/k^2M_P^2$. That's what the naive Minkowski analysis gives.
http://aimath.org/textbooks/beezer/Bsection.html
Bases
A basis of a vector space is one of the most useful concepts in linear algebra. It often provides a concise, finite description of an infinite vector space.
## Bases
We now have all the tools in place to define a basis of a vector space.
Definition B (Basis) Suppose $V$ is a vector space. Then a subset $S\subseteq V$ is a basis of $V$ if it is linearly independent and spans $V$.
So, a basis is a linearly independent spanning set for a vector space. The requirement that the set spans $V$ insures that $S$ has enough raw material to build $V$, while the linear independence requirement insures that we do not have any more raw material than we need. As we shall see soon in Section D:Dimension, a basis is a minimal spanning set.
You may have noticed that we used the term basis for some of the titles of previous theorems (i.e. Theorem BNS, Theorem BCS, Theorem BRS) and if you review each of these theorems you will see that their conclusions provide linearly independent spanning sets for sets that we now recognize as subspaces of $\complex{m}$. Examples associated with these theorems include Example NSLIL, Example CSOCD and Example IAS. As we will see, these three theorems will continue to be powerful tools, even in the setting of more general vector spaces.
Furthermore, the archetypes contain an abundance of bases. For each coefficient matrix of a system of equations, and for each archetype defined simply as a matrix, there is a basis for the null space, three bases for the column space, and a basis for the row space. For this reason, our subsequent examples will concentrate on bases for vector spaces other than $\complex{m}$.
Notice that Definition B does not preclude a vector space from having many bases, and this is the case, as hinted above by the statement that the archetypes contain three bases for the column space of a matrix. More generally, we can grab any basis for a vector space, multiply any one basis vector by a non-zero scalar and create a slightly different set that is still a basis. For "important" vector spaces, it will be convenient to have a collection of "nice" bases. When a vector space has a single particularly nice basis, it is sometimes called the standard basis though there is nothing precise enough about this term to allow us to define it formally --- it is a question of style. Here are some nice bases for important vector spaces.
Theorem SUVB (Standard Unit Vectors are a Basis) The set of standard unit vectors for $\complex{m}$ (Definition SUV), $B=\set{\vectorlist{e}{m}}=\setparts{\vect{e}_i}{1\leq i\leq m}$ is a basis for the vector space $\complex{m}$.
Proof.
Example BP: Bases for $P_n$.
Example BM: A basis for the vector space of matrices.
The bases described above will often be convenient ones to work with. However a basis doesn't have to obviously look like a basis.
Example BSP4: A basis for a subspace of $P_4$.
Example BSM22: A basis for a subspace of $M_{22}$.
Example BC: Basis for the crazy vector space.
We have seen that several of the sets associated with a matrix are subspaces of vector spaces of column vectors. Specifically these are the null space (Theorem NSMS), column space (Theorem CSMS), row space (Theorem RSMS) and left null space (Theorem LNSMS). As subspaces they are vector spaces (Definition S) and it is natural to ask about bases for these vector spaces. Theorem BNS, Theorem BCS, Theorem BRS each have conclusions that provide linearly independent spanning sets for (respectively) the null space, column space, and row space. Notice that each of these theorems contains the word "basis" in its title, even though we did not know the precise meaning of the word at the time. To find a basis for a left null space we can use the definition of this subspace as a null space (Definition LNS) and apply Theorem BNS. Or Theorem FS tells us that the left null space can be expressed as a row space and we can then use Theorem BRS.
Theorem BS is another early result that provides a linearly independent spanning set (i.e. a basis) as its conclusion. If a vector space of column vectors can be expressed as a span of a set of column vectors, then Theorem BS can be employed in a straightforward manner to quickly yield a basis.
## Bases for Spans of Column Vectors
We have seen several examples of bases in different vector spaces. In this subsection, and the next (Subsection B.BNM:Bases: Bases and Nonsingular Matrices), we will consider building bases for $\complex{m}$ and its subspaces.
Suppose we have a subspace of $\complex{m}$ that is expressed as the span of a set of vectors, $S$, and $S$ is not necessarily linearly independent, or perhaps not very attractive. Theorem REMRS says that row-equivalent matrices have identical row spaces, while Theorem BRS says the nonzero rows of a matrix in reduced row-echelon form are a basis for the row space. These theorems together give us a great computational tool for quickly finding a basis for a subspace that is expressed originally as a span.
Example RSB: Row space basis.
Example IAS provides another example of this flavor, though now we can notice that $X$ is a subspace, and that the resulting set of three vectors is a basis. This is such a powerful technique that we should do one more example.
Example RS: Reducing a span.
## Bases and Nonsingular Matrices
A quick source of diverse bases for $\complex{m}$ is the set of columns of a nonsingular matrix.
Theorem CNMB (Columns of Nonsingular Matrix are a Basis) Suppose that $A$ is a square matrix of size $m$. Then the columns of $A$ are a basis of $\complex{m}$ if and only if $A$ is nonsingular.
Proof.
Example CABAK: Columns as Basis, Archetype K.
Perhaps we should view the fact that the standard unit vectors are a basis (Theorem SUVB) as just a simple corollary of Theorem CNMB? (See technique LC.)
With a new equivalence for a nonsingular matrix, we can update our list of equivalences.
Theorem NME5 (Nonsingular Matrix Equivalences, Round 5) Suppose that $A$ is a square matrix of size $n$. The following are equivalent.
1. $A$ is nonsingular.
2. $A$ row-reduces to the identity matrix.
3. The null space of $A$ contains only the zero vector, $\nsp{A}=\set{\zerovector}$.
4. The linear system $\linearsystem{A}{\vect{b}}$ has a unique solution for every possible choice of $\vect{b}$.
5. The columns of $A$ are a linearly independent set.
6. $A$ is invertible.
7. The column space of $A$ is $\complex{n}$, $\csp{A}=\complex{n}$.
8. The columns of $A$ are a basis for $\complex{n}$.
Proof.
## Orthonormal Bases and Coordinates
We learned about orthogonal sets of vectors in $\complex{m}$ back in Section O:Orthogonality, and we also learned that orthogonal sets are automatically linearly independent (Theorem OSLI). When an orthogonal set also spans a subspace of $\complex{m}$, then the set is a basis. And when the set is orthonormal, then the set is an incredibly nice basis. We will back up this claim with a theorem, but first consider how you might manufacture such a set.
Suppose that $W$ is a subspace of $\complex{m}$ with basis $B$. Then $B$ spans $W$ and is a linearly independent set of nonzero vectors. We can apply the Gram-Schmidt Procedure (Theorem GSP) and obtain a linearly independent set $T$ such that $\spn{T}=\spn{B}=W$ and $T$ is orthogonal. In other words, $T$ is a basis for $W$, and is an orthogonal set. By scaling each vector of $T$ to norm 1, we can convert $T$ into an orthonormal set, without destroying the properties that make it a basis of $W$. In short, we can convert any basis into an orthonormal basis. Example GSTV, followed by Example ONTV, illustrates this process.
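For a quick informal illustration of this process, take the basis $B=\set{\vect{v}_1,\,\vect{v}_2}$ of $\complex{2}$ with $\vect{v}_1=(1,\,1)$ and $\vect{v}_2=(1,\,0)$ (written as rows to save space). The Gram-Schmidt Procedure keeps $\vect{u}_1=\vect{v}_1$ and replaces $\vect{v}_2$ by \begin{equation*} \vect{u}_2=\vect{v}_2-\frac{\innerproduct{\vect{u}_1}{\vect{v}_2}}{\innerproduct{\vect{u}_1}{\vect{u}_1}}\vect{u}_1=(1,\,0)-\frac{1}{2}(1,\,1)=\left(\frac{1}{2},\,-\frac{1}{2}\right) \end{equation*} so that $\innerproduct{\vect{u}_1}{\vect{u}_2}=0$. Scaling each vector to norm 1 then produces the orthonormal basis $\set{\frac{1}{\sqrt{2}}(1,\,1),\,\frac{1}{\sqrt{2}}(1,\,-1)}$ of $\complex{2}$, which is exactly the kind of basis that makes the coordinate formula of Theorem COB below so convenient.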
Unitary matrices (Definition UM) are another good source of orthonormal bases (and vice versa). Suppose that $Q$ is a unitary matrix of size $n$. Then the $n$ columns of $Q$ form an orthonormal set (Theorem CUMOS) that is therefore linearly independent (Theorem OSLI). Since $Q$ is invertible (Theorem UMI), we know $Q$ is nonsingular (Theorem NI), and then the columns of $Q$ span $\complex{n}$ (Theorem CSNM). So the columns of a unitary matrix of size $n$ are an orthonormal basis for $\complex{n}$.
Why all the fuss about orthonormal bases? Theorem VRRB told us that any vector in a vector space could be written, uniquely, as a linear combination of basis vectors. For an orthonormal basis, finding the scalars for this linear combination is extremely easy, and this is the content of the next theorem. Furthermore, with vectors written this way (as linear combinations of the elements of an orthonormal set) certain computations and analysis become much easier. Here's the promised theorem.
Theorem COB (Coordinates and Orthonormal Bases) Suppose that $B=\set{\vectorlist{v}{p}}$ is an orthonormal basis of the subspace $W$ of $\complex{m}$. For any $\vect{w}\in W$, \begin{equation*} \vect{w}= \innerproduct{\vect{w}}{\vect{v}_1}\vect{v}_1+ \innerproduct{\vect{w}}{\vect{v}_2}\vect{v}_2+ \innerproduct{\vect{w}}{\vect{v}_3}\vect{v}_3+ \cdots+ \innerproduct{\vect{w}}{\vect{v}_p}\vect{v}_p \end{equation*}
Proof.
Example CROB4: Coordinatization relative to an orthonormal basis, $\complex{4}$.
A slightly less intimidating example follows, in three dimensions and with just real numbers.
Example CROB3: Coordinatization relative to an orthonormal basis, $\complex{3}$.
Not only do the columns of a unitary matrix form an orthonormal basis, but there is a deeper connection between orthonormal bases and unitary matrices. Informally, the next theorem says that if we transform each vector of an orthonormal basis by multiplying it by a unitary matrix, then the resulting set will be another orthonormal basis. And more remarkably, any matrix with this property must be unitary! As an equivalence (technique E) we could take this as our defining property of a unitary matrix, though it might not have the same utility as Definition UM.
Theorem UMCOB (Unitary Matrices Convert Orthonormal Bases) Let $A$ be an $n\times n$ matrix and $B=\set{\vectorlist{x}{n}}$ be an orthonormal basis of $\complex{n}$. Define
\begin{align*} C&=\set{A\vect{x}_1,\,A\vect{x}_2,\,A\vect{x}_3,\,...,\,A\vect{x}_n} \end{align*}
Then $A$ is a unitary matrix if and only if $C$ is an orthonormal basis of $\complex{n}$.
Proof.
http://cs.stackexchange.com/questions/7072/not-self-reducible-np-problem
# Not self-reducible NP problem
I am interested in proving that there is no search problem that is polynomial bounded and self-reducible, as long as ${\sf P} \neq {\sf NP} \cap {\sf coNP}$.
The problem is I don't know how to approach the proof, below I wrote few ideas with open questions.
We can start by denoting the search problem in set ${\sf NP} \cap {\sf coNP}$ in terms of search problem relations $R_1$ and $R_2$ such that $S = \left \{ x:R_1(x) \neq \emptyset \right \} = \left \{ x:R_2(x) = \emptyset \right \}$. But I don't know how to show that the decision problem $S$ is not in ${\sf P}$ (and it seems to be crucial to show that $S$ is not in ${\sf P}$).
Having defined $S$ the next step would be to show that there is a relation $R$ that is self-reducible to $S$, but is not polynomial bounded.
In short, the question is how to define a relation $R$ that is self-reducible to $S$, and how to prove that $R$ is not polynomial bounded. Actually, proving that $R$ is polynomial bounded may be redundant because $S$ is in ${\sf NP} \cap {\sf coNP}$ and it's given that ${\sf NP} \cap {\sf coNP} \neq {\sf P}$.
Addendum: I was given a hint
$R = \left \{ (x,1y):(x,y) \in R_1 \right \} \cup \left \{ (x,0y):(x,y) \in R_2 \right \}$
If I am able to show that the search problem relation $R$ is self-reducible to $S$, then I think the problem is solved.
http://mathoverflow.net/questions/25211?sort=newest
## Entire function interpolation with control over multiplicities/derivatives
Let's say I have a multiset of complex numbers $\lbrace a_1,\cdots,a_n\rbrace$ (so some of the elements may be repeated) and I would like to construct an entire function $p(z)$ with those numbers as zeroes. However, I also have a multiset of complex numbers $B = \lbrace b_1,\cdots,b_n \rbrace$ such that I wish $p(b_i) = 1$, and $p$ is only $1$ on the $b_i$'s.
It seems like trying to use Lagrange's polynomial interpolation formula gives you a polynomial with too high a degree (greater than $n$ and less than or equal to $2n$), and then there's the possibility that $p^{-1}(1) \nsubseteq B$.
I've been thinking about doing the following:
Let $g(z) = (z-a_1) \cdots (z - a_n)$, and then via Weierstrass construct an entire function $h(z)$ such that $e^{h(b_i)} = 1/g(b_i)$. Then it seems like the entire function $e^{h(z)}g(z)$ is getting somewhat closer to what I want - but then again I don't know if there are any other $\alpha$'s such that $e^{h(\alpha)}g(\alpha) = 1$ where $\alpha \notin B$.
The problem of polynomial interpolation and fitting seems very well studied; however, I can't seem to find a reference for this particular puzzle.
Thanks in advance!
You're imposing too many conditions. The space of polynomials of degree at most $n$ has dimension $n+1$. You are trying to impose $2n$ linear conditions on that space, which when $n > 1$ is more conditions than the dimension of your space. So there will be no solution in general. – Pete L. Clark May 19 2010 at 8:14
Based on a closer reading of your question, it sounds like you are aware of what I said in my previous comment. But then I can't figure out what you're asking: of course you can interpolate by an entire function, but not by a polynomial in general. – Pete L. Clark May 19 2010 at 8:16
Ah, I guess I was not clear at all. I'm not looking for a polynomial (because of what you just said), but rather an entire function with 0's at only those places (the $a_i$'s), and 1's at those places (the $b_i$'s). I know I can construct a Weierstrass entire function with the specified zeros, but can I force the entire function to have 1's at only those places? – Henry Yuen May 19 2010 at 8:26
## 3 Answers
If I read you right, you want an entire function that takes the values $0$ and $1$ at only finitely many (specified) points. This implies that the function must be a polynomial, by Picard's great theorem, since there will be deleted neighbourhoods of infinity where the function misses two values.
Then, based on Pete Clark's comment above, I'm imposing too many conditions on the polynomial for it to exist (in general)? – Henry Yuen May 19 2010 at 9:08
Henry, yes you are imposing too stringent conditions. – Robin Chapman May 19 2010 at 9:16
In your statement, you do not say explicitly whether $p$ is allowed to have zeros other than those in the set $A$.
If you want to construct an entire function with zeros and ones exactly prescribed, this is clearly impossible when your sets $A$ and $B$ are both finite, for the reason explained by Robin Chapman.
If you want ones to be exactly prescribed, and function having zeros on the set $A$, and perhaps other zeros, then this is possible: take $p(z)=1+(z-b_1)...(z-b_n)\exp g(z)$ and use interpolation for $g$.
-
Some very nice instances of your problem (but of course not all) are solved by so-called Shabat polynomials, i.e., by polynomials such that $p^{-1}([0,1])$ is a tree and such that $\{0,1,\infty\}$ are the only critical values. Every planar tree can be realized by an (essentially unique up to affine transformation) Shabat polynomial. You have thus a polynomial solution if your points $a_i$ and $b_i$ form a bipartition of the vertices of such a "Shabat tree".
Let me add that Shabat polynomials are the simplest instances of "dessins d'enfants" defined by Grothendieck in the hope of understanding the absolute Galois group. (Suitably normalized Shabat polynomials have algebraic coefficients and the action of the absolute Galois group preserves them and acts thus on the corresponding trees by permuting them.)
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9517719149589539, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/10478?sort=votes
|
## An everywhere locally trivial line bundle
Is there a variety $X$ over $\mathbb{Q}$ and a line bundle $L$ over $X$ (other than the trivial line bundle $\mathcal{O}_X$ ) such that $L_v$ is the trivial line bundle over $X_v=X\times_{\mathbb{Q}}\mathbb{Q}_v$ for every place $v$ of $\mathbb{Q}$ ?
(Answer known. There is a pun on "locally trivial" in the title.)
-
Forgive me for asking, but if the answer is known, could you show it to us? – Hailong Dao Jan 2 2010 at 16:18
I thought people would like to think about it. – Chandan Singh Dalawat Jan 3 2010 at 3:06
The moral of this one seems to me "don't let on that you know the answer" :-/ Maybe it's time you answered your own question? – Kevin Buzzard Jan 6 2010 at 16:00
Sorry for having kept everyone waiting ! I had to be away three days... – Chandan Singh Dalawat Jan 7 2010 at 9:32
Others may disagree, but I think it's against the spirit of things here to ask a question to which you already know the answer (however nice the question is). It might seem like an abuse of people's willingness to help. – Tom Leinster Jan 7 2010 at 14:11
## 1 Answer
The following example was provided to me by Colliot-Thélène some years ago : Let $X$ be the complement in $\mathbb{P}_{1,\mathbb{Q}}$ of the three closed points defined by $x^2=13$, $x^2=17$, $x^2=221$. Then $\operatorname{Pic}(X)=\mathbb{Z}/2\mathbb{Z}$ but $\operatorname{Pic}(X_v)=0$ for every place $v$ of $\mathbb{Q}$.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9567009806632996, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/69274/best-theorem-for-eulerian-paths-with-open-ends
|
## BEST theorem for Eulerian paths with open ends
In R.P. Stanley's book, Enumerative Combinatorics, Vol.2, paragraph 5.6, there is an intuitive proof of the BEST theorem, which states that the number of Eulerian tours in a balanced digraph $D$ with vertices in $V$ is given by
$\epsilon(D) = t(D) \prod_{v' \in V} (\mathrm{out}_{v'}(D)-1)!$
Here $\mathrm{out}_u(D)$ is the outdegree of a vertex, which equals the indegree $\mathrm{in}_u(D)$, and $t(D)$ is the number of arborescences, or spanning oriented rooted trees, which for balanced digraphs turns out to be independent of the root.
I was looking for a generalization to Eulerian paths with open ends. Since such paths have to be drawn without lifting the pencil, the digraph on which they take place must have all vertices balanced except for two vertices $u$ and $v$ (respectively the starting and the arrival vertices), such that
$\mathrm{out}_u(D) - \mathrm{in}_u(D)=+1, \quad \mathrm{out}_v(D) - \mathrm{in}_v(D)=-1$.
Now, it seems to me that Stanley's proof works out equally well; I can't see any obvious impediment, so one should end up with a formula like
$\epsilon_{v,u}(D) = t_v(D) \prod_{v' \in V} (\mathrm{out}_{v'}(D)-1)!$
where now, since the graph is unbalanced, $t_v(D)$ will depend on the root. However, I couldn't find references for this, and the dedicated literature on Eulerian trails seems to be concerned with other kinds of problems, which I have no intuition for. What do you think?
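One low-tech way to test any candidate formula is to brute-force count Eulerian open trails on small unbalanced digraphs; here is a minimal sketch (the toy digraph and the function name are my own):

```python
from collections import defaultdict

def count_eulerian_trails(edges, start):
    """Count Eulerian open trails: edge sequences starting at `start`
    that use every directed edge exactly once (parallel edges distinct)."""
    out = defaultdict(list)
    for idx, (u, v) in enumerate(edges):
        out[u].append((idx, v))
    used = [False] * len(edges)

    def walk(u, remaining):
        if remaining == 0:
            return 1
        total = 0
        for idx, v in out[u]:
            if not used[idx]:
                used[idx] = True
                total += walk(v, remaining - 1)
                used[idx] = False
        return total

    return walk(start, len(edges))

# toy digraph: out - in = +1 at vertex 0 and -1 at vertex 2, balanced at vertex 1
edges = [(0, 1), (1, 2), (2, 0), (0, 2)]
print(count_eulerian_trails(edges, start=0))   # prints 2
```

For the four-edge example the two trails are easy to list by hand, so the counter can be sanity-checked before comparing against a formula.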
-
Try drawing a line segment between the two unbalanced vertices, and then use the previous result. Gerhard "The Shortest Distance Between Results..." Paseman, 2011.07.01 – Gerhard Paseman Jul 1 2011 at 20:21
## 1 Answer
The question seems to be sinking into the depths of Lethe; here is a styrofoam noodle for it.
Every Eulerian path on G can be completed to an Eulerian tour on G' which is G augmented with the edge (v,u). This correspondence is easily seen to be 1-1, so the number of desired paths on G is the formula you mention above, applied to the graph G'.
Unless I'm misunderstanding something, that should do it.
Gerhard "Email Me About System Design" Paseman, 2011.07.05
-
It seems to work. Then my formula is wrong, since deletion-contraction formulas for $t_v(G')$ imply that I'm forgetting a piece; I'll have to make up my mind where Stanley's proof fails for unbalanced digraphs. – tomate Jul 6 2011 at 8:21
Sleep on it for a night. If it still doesn't work out tomorrow, try a new question to ask for help with the sticky bits. Gerhard "Email Me About System Design" Paseman, 2011.07.06 – Gerhard Paseman Jul 6 2011 at 8:30
Wait, now I see why this is wrong. It's not true that the correspondence is 1-1. Eulerian paths on G with open ends are 1-1 with those eulerian cycles on G+(v,u) whose final edge is (v,u)! But internal cycles in an eulerian cycle can be walked in any desired order! I'm more and more convinced that my guess is correct. – tomate Jul 22 2011 at 13:34
I am not sure what you are counting. The edge removal does matter if you are counting a traversal of an Eulerian tour with a given start/endpoint. If you consider two traversals equivalent if the sequence of edges traveled differs by a cyclic permutation, then I think the (set of) equivalence classes of traversals is equinumerous with the paths. If you make clear what is being counted (I thought it was equiv. classes), then I may adjust my answer as needed. Gerhard "Ask Me About System Design" Paseman, 2011.07.22 – Gerhard Paseman Jul 22 2011 at 23:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.940304160118103, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/107319/sanity-check-about-wikipedia-definition-of-differentiable-manifold-as-a-locally/107432
|
# Sanity check about Wikipedia definition of differentiable manifold as a locally ringed space
Most textbooks introduce differentiable manifolds via atlases and charts. This has the advantage of being concrete, but the disadvantage that the local coordinates are usually completely irrelevant: the choice of atlas and chart is arbitrary, and rarely if ever seems to play any role in differential geometry/topology.
There is a much better definition of differentiable manifolds, which I don't know a good textbook reference for, via sheaves of local rings. This definition does not involve any strange arbitrary choices, and is coordinate free. Paragraph 3 in Wikipedia (which is the actual definition) states:
A differentiable manifold (of class $C_k$) consists of a pair $(M, \mathcal{O}_M)$ where $M$ is a topological space, and $\mathcal{O}_M$ is a sheaf of local $R$-algebras defined on $M$, such that the locally ringed space $(M,\mathcal{O}_M)$ is locally isomorphic to $(\mathbb{R}^n, \mathcal{O})$.
This confuses me, because I don't see why such a sheaf should be acyclic, or where conditions like "paracompact" or "complete metric space" or "second countable Hausdorff" are implicit. So either:
1. The wikipedia entry has a mistake (I would want to sanity-check this before editing the entry, because this is such a fundamental definition which thousands must have read).
2. Somewhere in that definition, the condition that $M$ be paracompact is implicit.
Question: Should the definition above indeed require that $M$ be second-countable Hausdorff or paracompact or whatever? Or is it somehow implicit somewhere, and if so, where?
Also, is this definition given carefully in any textbook?
Update: I have edited the Wikipedia article to require that $M$ be second-countable Hausdorff. But I'm still wondering if there is a textbook covering this stuff, and whether requiring the sheaf to be acyclic might have worked instead as an alternative.
-
Does being locally isomorphic to $(\mathbb{R}^n,\mathcal{O})$ require the topology to be paracompact, second-countable, and Hausdorff? – Neal Feb 9 '12 at 3:57
What about "when O_M is an acyclic sheaf of local R-algebras". Would that also do the job? – Daniel Moskovich Feb 9 '12 at 5:43
@Qiaochu: being locally isomorphic to $(\mathbb{R}^n,\mathcal{O})$ certainly does not imply that the space is Hausdorff. There are standard examples of "non-Hausdorff differentiable manifolds". For a slightly non-standard example, see p. 4 of math.uga.edu/~pete/modularcurves.pdf -- these are notes for lectures I gave last week in a course on modular curves. The better part of a week was spent nailing down conditions for the quotient under a group action to be Hausdorff! – Pete L. Clark Feb 9 '12 at 6:09
@Pete: whoops. Of course. – Qiaochu Yuan Feb 9 '12 at 6:30
## 3 Answers
To expand a bit on my comment above:
Being isomorphic as a locally ringed space to $(\mathbb{R}^n,\mathcal{O})$ doesn't impose additional conditions on the underlying topological space of a locally ringed space beyond requiring it to be locally homeomorphic to $\mathbb{R}^n$. (Well, that's a lie: a differentiable structure does of course place limitations on the topology of a manifold, but only very subtle ones: it doesn't impose either of the limitations you are asking about. See below!)
Thus, if you want your definition of a manifold to include Hausdorff and second countable and/or paracompact, you had better put that in explicitly. (And, although it's a matter of taste and terminology, in my opinion you do want this.)
I think you will find these lecture notes enlightening on these points. In particular, on page 4 I give an example (taken from Thurston's book on 3-manifolds!) of a Galois covering map where the total space is a manifold but the quotient space is not Hausdorff. (When I gave this example I mentioned that I wish someone had told me that covering maps could destroy the Hausdorff property! And indeed the audience looked suitably shaken.)
With regard to your other question ("Also, is this definition given carefully in any textbook?")...I completely sympathize. When I was giving these lectures I found that I really wanted to speak in terms of locally ringed spaces! See in particular Theorem 9 in my notes, which contains the unpleasantly anemic statement: "If $X$ has extra local structure, then $\Gamma \backslash X$ canonically inherits this structure." What I really wanted to say is that if $\pi: X \rightarrow \Gamma \backslash X$, then $\mathcal{O}_{\Gamma \backslash X} = \pi_* \mathcal{O}_X$! (I am actually not the kind of arithmetic geometer who has to express everything in sheaf-theoretic language, but come on -- this is clearly the way to go in this instance: that one little equation is worth a thousand words and a lot of hand waving about "local structure".)
What is even more ironic is that my course is being taken by students almost all of whom have taken a full course on sheaves in the context of algebraic geometry. But whatever differential / complex geometry / topology they know, they know in the classical language of coordinate charts and matrices of partial derivatives. It's really kind of a strange situation.
I fantasize about teaching a year long graduate course called "modern geometry" where we start off with locally ringed spaces and use them in the topological / smooth / complex analytic / Riemannian categories as well as just for technical, foundational things in a third course in algebraic geometry. (As for most graduate courses I want to teach, improving my own understanding is a not-so-secret ulterior motive.) In recent years many similar fantasies have come true, but this time there are two additional hurdles: (i) this course cuts transversally across several disciplines so implicitly "competes" with other graduate courses we offer and (ii) this should be a course for early career students, and at a less than completely fancy place like UGA such a highbrow approach would, um, raise many eyebrows.
-
I wish I had taken a class like your proposed "modern geometry" class. Off hand are there any books you would recommend (aside from your forthcoming lecture notes `:-)`) for this unified treatment? – Willie Wong♦ Feb 9 '12 at 9:22
Thank you for this answer. Some points of confusion: 1) Which was my other question? 2) Would requiring the sheaf O_M to be acyclic solve the problem? The course sounds interesting- I wish I could listen in :). Maybe you could upload video lectures if you give it? Also, indeed sheaf theory language somehow seems more natural and enlightening for fundamentals of differential geometry, for me at the moment anyway. – Daniel Moskovich Feb 9 '12 at 10:05
@Willie: I have no (existing!) texts to recommend for this, but other people have recommended some texts in their answers. (Wait, I thought of something: Wells's Differential analysis on complex manifolds has at least some of the desired material.) They are worth looking into. – Pete L. Clark Feb 9 '12 at 14:00
@Daniel: I edited my answer to clarify which question I was referring to. Also, off the top of my head I don't see why requiring acyclicity of the structure sheaf should help, but I don't really know or have any particular insight there. If this is a serious question, you may want to ask again (maybe on MO: it sounds "research level" to me). – Pete L. Clark Feb 9 '12 at 14:03
1) Godement has written a book on Lie groups where he defines and uses manifolds through sheaves.
This is not surprising since he wrote a treatise on sheaf theory nearly sixty years ago, which surprisingly is still the standard reference on the subject.
The sheaf theory is very easy, since the structural sheaf is a ring of functions and thus automatically separated (= satisfies first axiom for a presheaf to be a sheaf).
The book has no English translation (to my knowledge), but if you can overcome that hurdle you will be able to savour Godement's inimitably idiosyncratic style, as well as the expertise of this great mathematician.
2) Since you mention acyclicity, let me remark that it does not follow from either definition but is a theorem.
It is a consequence of the existence of partitions of unity, which implies that the structural sheaf $\mathcal C^k_M$ is fine, hence acyclic.
However partitions of unity require $M$ to be paracompact, which might be an argument for including paracompactness (or equivalent conditions) in the definition.
-
Thanks!! Glancing briefly through Godement's book, isn't this the "structure sheaf" definition? en.wikipedia.org/wiki/Differentiable_manifold#Structure_sheaf – Daniel Moskovich Feb 9 '12 at 12:56
If I demand that the sheaf of local R-algebras be fine, and local isomorphism to $(\mathbb{R}^n,O)$, does that imply that M must be paracompact Hausdorff? (at least for M connected?) If so, that looks like a nice alternative formulation. – Daniel Moskovich Feb 9 '12 at 13:04
Dear @Daniel: 1) yes, Godement's definition is the structure sheaf definition. I mentioned his book because you asked "is this definition given carefully in any textbook ?" and Godement is a very careful mathematician, as you can expect from a collaborator of Bourbaki. 2) Whether you use the atlas definition or the sheaf definition, it's up to you to demand that a manifold be Hausdorff or paracompact: it doesn't follow from either definition. 3) I don't know if it is sufficient to assume that the structure sheaf be acyclic to ensure paracompactness. It would indeed be nice if it were the case – Georges Elencwajg Feb 9 '12 at 15:22
Thanks you very much for this answer! I'll ask about whether sheaf conditions suffice on MO. – Daniel Moskovich Feb 9 '12 at 23:29
I'm afraid I don't know the answer to your main question, but I would like to mention a textbook that approaches manifolds from the sheaf-theoretic perspective: Ramanan's Global Calculus.
He explicitly includes the Hausdorff + second-countable conditions, defining a manifold as follows:
Definition. A differential manifold $M$ (of dimension $n$) consists of
a) a topological space which is Hausdorff and admits a countable base for open sets, and
b) a sheaf $\mathcal{A}^M=\mathcal{A}$ of subalgebras of the sheaf of continuous functions on $M$.
These are required to satisfy the following local condition. For any $x\in M$, there is an open neighborhood $U$ of $x$ and a homeomorphism of $U$ with an open set $V$ in $\mathbb{R}^n$ such that the restriction of $\mathcal{A}$ to $U$ is the inverse image of the sheaf of differentiable functions on $V$.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.943213939666748, "perplexity_flag": "middle"}
|
http://www.physicsforums.com/showthread.php?s=63e02ad402531fe0baedcad40f8d8166&p=4268830
|
Physics Forums
## Norm vs. a Metric
Are the axioms of a Norm different from those of a Metric?
For instance Wikipedia says:
a NORM is a function p: V → R s.t. V is a Vector Space, with the following properties:
For all a ∈ F and all u, v ∈ V, p(av) = |a| p(v), (positive homogeneity or positive scalability).
p(u + v) ≤ p(u) + p(v) (triangle inequality or subadditivity).
If p(v) = 0 then v is the zero vector (separates points).
While a metric is defined as a distance function between elements of a set with the 3 familiar axioms, of positive definition, triangular inequality and symmetry.
The terms are however used loosely in many math books and make you think they are one and the same.
The reason I am asking is I am faced with a question where given a norm definition on the set ##\mathbb{Q}## I am asked to determine if the operation defines a norm? So I was wondering which axioms do I check, those of the Metric or those of the VS norm?
Thank you
A norm and a metric are two different things. The norm is measuring the size of something, and the metric is measuring the distance between two things.

A metric can be defined on any set ##S##. It is simply a function ##d## which assigns a distance (i.e. a non-negative real number) ##d(x,y)## to any two elements ##x,y \in S##. It must satisfy three criteria in order to qualify as a metric. For all ##x,y,z\in S##, (1) ##d(x,y) \geq 0##, and ##d(x,y) = 0## if and only if ##x = y##; (2) ##d(x,y) = d(y,x)##; (3) ##d(x,z) \leq d(x,y) + d(y,z)##.

If ##S## happens to be a vector space, meaning that there is a way to add the elements together and multiply them by scalars, then we can define a norm. This is a function ##\rho## which maps each element to a nonnegative real number. It must satisfy the following rules for all ##x,y \in S## and all scalar ##\alpha##: (1) ##\rho(x) \geq 0##, and ##\rho(x) = 0## if and only if ##x = 0##; (2) ##\rho(\alpha x) = |\alpha| \rho(x)##; (3) ##\rho(x+y) \leq \rho(x) + \rho(y)##.

If we have a vector space with a norm ##\rho##, it is always possible to define a metric in terms of that norm by putting ##d(x,y) = \rho(x-y)##. But not every metric is defined in terms of a norm, or even CAN be defined in terms of a norm. An example is the metric $$d(x,y) = \begin{cases} 1 & \textrm{ if }x \neq y \\ 0 & \textrm{ if }x = y \\ \end{cases}$$ It is easy to check that this satisfies the three conditions for a metric. But even if we define this metric on a vector space, there is no norm ##\rho## satisfying ##d(x,y) = \rho(x-y)##. To see this, suppose there were such a ##\rho##, and let ##\alpha## be any scalar such that ##\alpha \neq 0## and ##|\alpha| \neq 1##. Choose any ##x,y## such that ##x \neq y##. Then ##\alpha x \neq \alpha y##, and we must have $$1 = d(\alpha x, \alpha y) = \rho(\alpha x - \alpha y) = \rho(\alpha(x-y)) = |\alpha| \rho(x-y) = |\alpha| d(x,y) = |\alpha|$$ which is a contradiction.
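A small numerical illustration of the two points above (my own snippet; these are spot checks, not proofs):

```python
import numpy as np

def norm_to_metric(rho):
    """Any norm rho induces a metric d(x, y) = rho(x - y)."""
    return lambda x, y: rho(x - y)

def discrete_metric(x, y):
    return 0.0 if np.array_equal(x, y) else 1.0

rng = np.random.default_rng(0)
x, y, z = rng.standard_normal((3, 4))        # three random vectors in R^4

d = norm_to_metric(np.linalg.norm)           # Euclidean norm -> Euclidean metric

# spot-check the three metric axioms for the induced metric
assert d(x, x) == 0 and d(x, y) > 0
assert np.isclose(d(x, y), d(y, x))
assert d(x, z) <= d(x, y) + d(y, z) + 1e-12

# the discrete metric cannot come from a norm: a norm-induced metric
# must satisfy d(a*x, a*y) = |a| * d(x, y), but here we get 1.0 vs 3.0
a = 3.0
print(discrete_metric(a * x, a * y), abs(a) * discrete_metric(x, y))
```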
Quote by jbunniii (the full post above)
Thank you for clarifying this: so which axioms do I have to check for the norm?
Given a norm, |v|, we can define a metric: d(u, v)= |u- v|. However, there exist metric spaces which do NOT correspond to a norm. The simplest is the "discrete metric": for any set, define d(x, y)= 0, if x= y, 1, otherwise.
It is also true that, given any inner product, <u, v>, we can define a norm: $|v|= \sqrt{<v, v>}$, and so a metric. But there exist norms that do not correspond to any inner product.
(Edited thanks to micromass.)
Quote by HallsofIvy But there exist inner product spaces which cannot be given norms.
I guess you mean this backwards: there exists normed spaces which cannot be given an inner product.
Quote by Bachelier Thank you for clarifying this: so which axioms do I have to check for the norm?
Check that it satisfies the conditions that I mentioned above:
If ##S## happens to be a vector space, meaning that there is a way to add the elements together and multiply them by scalars, then we can define a norm. This is a function ##\rho## which maps each element to a nonnegative real number. It must satisfy the following rules for all ##x,y \in S## and all scalar ##\alpha##: (1) ##\rho(x) \geq 0##, and ##\rho(x) = 0## if and only if ##x = 0##; (2) ##\rho(\alpha x) = |\alpha| \rho(x)##; (3) ##\rho(x+y) \leq \rho(x) + \rho(y)##.
Norms respect the linear structure: they scale homogeneously when you scale the vector. Metrics only need to fulfil the triangle inequality (together with symmetry and positivity). If you want to see a cool metric that cannot be derived from a norm, look up the French railroad metric.
If you're asked to check if a given function is a norm, you must of course (directly or indirectly) check if the function satisfies the conditions listed in the definition of "norm". Anything else would be absurd. If there had been a theorem that says that every metric is a norm, then it would of course have been sufficient to check that the given function is a metric. But there's no such theorem. The conditions that a metric is required to satisfy do not even mention a vector space structure.

Another thing worth noting is that the term "metric" is also used as a short form of "metric tensor field" in differential geometry. That kind of metric is a function that takes each point p in a manifold M to something very similar to an inner product on the tangent space of M at p. (The tangent space at p is a vector space). So I would say that inner products are more closely related to the "metrics" of differential geometry than the metrics of metric spaces.
Quote by micromass I guess you mean this backwards: there exists normed spaces which cannot be given an inner product.
Yes, thanks for catching that. I have gone back and edited it.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8692821860313416, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/36451/holomorphic-function-on-upper-half-plane-must-be-rational
|
# Holomorphic function on Upper Half Plane must be rational
Let $f$ be holomorphic on the upper half plane and continuous on $\mathbb{R}$, with $|f(r)|=1$ for all $r\in\mathbb{R}$. Prove that $f$ is rational.
I was playing around with conformal maps and $\overline{f(\bar{z})}$, but I would really like a hint on how exactly "rationality" comes up. I'm guessing Schwarz Lemma is involved?
-
how about $e^{ix}$? It doesn't look very rational. – user8268 May 2 '11 at 16:01
$e^{ix}$ is not holomorphic on the upper half plane. – ergo May 2 '11 at 16:35
but it's the composition of two holomorphic functions? – quanta May 2 '11 at 16:37
@ergo: user8268 means $f(z) = e^{iz}$. – Robert Israel May 2 '11 at 16:42
Note that a rational function holomorphic on the upper half plane and such that $|f(r)|=1$ for all $r\in\mathbb{R}$ is a product of $z \mapsto (z-\alpha)/(z-\bar{\alpha})$ for $\alpha$ in the upper half plane. – Plop May 2 '11 at 19:33
## 1 Answer
I think you also want $\lim_{r \to +\infty} f(r)$ and $\lim_{r \to -\infty} f(r)$ to exist and be equal. Schwarz Reflection principle shows $f$ is meromorphic on $\mathbb C$ with $f(\overline{z}) = 1/\overline{f(z)}$. Same applies to $f(1/z)$. So $f$ is an analytic function from the Riemann sphere to itself, and such functions are rational.
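To spell out the reflection step (the notation $F$ is mine): since $|f(r)| = 1$ on $\mathbb{R}$, one can set $$F(z) = \frac{1}{\overline{f(\overline{z})}}, \qquad \operatorname{Im} z < 0,$$ which is holomorphic away from the reflections of the zeros of $f$ and agrees with $f$ on $\mathbb{R}$ because $f(r)\overline{f(r)} = |f(r)|^2 = 1$; Schwarz reflection then glues $f$ and $F$ into a single meromorphic function on $\mathbb{C}$, with poles at the reflections of the zeros of $f$.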
-
Isn't it necessary that $f(r)\in\mathbb{R}$ for all $r\in\mathbb{R}$ in order to apply the Schwarz Reflection principle? – ergo May 2 '11 at 16:46
@Theo letting $\varphi(z)=\frac{i(z+1)}{z-1}$ (the inverse of the Cayley transform), for $f \circ \varphi$ to be holomorphic we need $f(z) \neq 1$ for all $z$, and that is not true in general. I think an ad hoc version of the Schwarz reflection principle is needed here (to allow meromorphic functions). – Plop May 2 '11 at 19:06
@Robert why don't we need the stronger condition that $\lim_{|z| \rightarrow + \infty} f(z)$ exists? – Plop May 2 '11 at 19:09
@Plop: Right, I should have formulated this a bit more carefully. But you can simply exclude the discrete set of points that are mapped to one. The Schwarz reflection principle applies to all sets $U$ that are open in the closed upper half plane and only take real values on $U \cap \mathbb{R}$. – t.b. May 2 '11 at 19:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9401419162750244, "perplexity_flag": "head"}
|
http://mathhelpforum.com/calculus/81667-calculator-help.html
|
# Thread:
1. ## Calculator help
I am trying to graph r^2=2cos(2theta)
in my answer key it's supposed to be an infinity symbol but I get a 4 leaf lemniscate. I am in polar mode so what can be the error?
2. Originally Posted by aaronb
I am trying to graph r^2=2cos(2theta)
in my answer key it's supposed to be an infinity symbol but I get a 4 leaf lemniscate. I am in polar mode so what can be the error?
If $\theta$ lies between $\pi/4$ and $3\pi/4$ (or between $-3\pi/4$ and $-\pi/4$) then $\cos(2\theta)$ is negative and will not have a square root. So the top and bottom leaves of the lemniscate ought not to be there. Maybe you should rewrite the equation as $r = \sqrt{2\cos(2\theta)}$ and see what the calculator makes of that version.
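If it helps to see what the domain restriction does outside the calculator, here is a rough Python sketch (not calculator syntax; variable names are mine):

```python
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0, 2 * np.pi, 2000)
r_squared = 2 * np.cos(2 * theta)
r = np.sqrt(np.where(r_squared >= 0, r_squared, np.nan))  # undefined where cos(2*theta) < 0

for sign in (1, -1):          # r^2 = ... allows both signs of r
    plt.plot(sign * r * np.cos(theta), sign * r * np.sin(theta), "b")
plt.gca().set_aspect("equal")
plt.show()                    # two lobes along the x-axis (the "infinity" shape)
```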
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9495788812637329, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/81223/minimizing-functions-over-simple-matrix-inequalities
|
minimizing functions over simple matrix inequalities
I'm wondering if anything is known about minimizing convex, not necessarily linear functions subject to "simple" matrix equalities. To be precise, consider the following example:
$\min \sum_i x_i \ln x_i$ such that $Ax=b$
Where $x$ is the (positive) vector variable, $b$ is a given real vector, and $A$ is a real matrix.
At one extreme, if $A$ has a single row, the convexity of $x_i \ln x_i$ means that this problem is very easy to solve. One can introduce a Lagrange multiplier $\lambda$ so that the problem becomes $\min \sum_i x_i \ln x_i + \lambda Ax$. Then, one can use e.g. bisection to compute $\lambda$ such that $Ax=b$. If we suppose that the number of bisection steps required is independent of the length of the vector $x$, then solving this minimization problem takes O(n) time for vectors of length n.
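A minimal sketch of this single-constraint case (my own function and toy example; it assumes the search interval brackets the multiplier): stationarity gives $x_i(\lambda) = e^{-1-\lambda a_i}$, and $\lambda \mapsto a \cdot x(\lambda)$ is strictly decreasing, so bisection applies.

```python
import numpy as np

def entropy_min_single_row(a, b, lam_lo=-50.0, lam_hi=50.0, tol=1e-12, max_iter=200):
    """Minimize sum_i x_i*ln(x_i) subject to the single constraint a.x = b.

    Stationarity gives x_i(lam) = exp(-1 - lam*a_i), and g(lam) = a.x(lam)
    is strictly decreasing in lam, so bisection on lam finds the multiplier
    (provided [lam_lo, lam_hi] brackets it).
    """
    x_of = lambda lam: np.exp(-1.0 - lam * a)
    g = lambda lam: a @ x_of(lam) - b
    assert g(lam_lo) > 0 > g(lam_hi), "interval does not bracket the multiplier"
    while lam_hi - lam_lo > tol and max_iter > 0:
        mid = 0.5 * (lam_lo + lam_hi)
        if g(mid) > 0:
            lam_lo = mid
        else:
            lam_hi = mid
        max_iter -= 1
    lam = 0.5 * (lam_lo + lam_hi)
    return x_of(lam), lam

a = np.ones(5)                                # toy constraint: sum(x) = 1
x, lam = entropy_min_single_row(a, 1.0)
print(x, a @ x)                               # ~[0.2 0.2 0.2 0.2 0.2], ~1.0
```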
At the other extreme, if $A$ is an arbitrary real matrix, the minimization problem above is at least as difficult as solving the matrix equation itself, since it contains that problem. Solving the matrix equation is doable in polynomial time (e.g. via the Ellipsoid algorithm) but in the absence of any additional structure, cannot be expected to be solved in O(n) time, since at the very least the entries of $A$ have to be read, and there are in general more than O(n) of these. (And I guess I should mention that this is why I'm interested in minimizing convex functions subject to matrix constraints, since if the functions weren't convex, minimizing them would in general be NP-hard.)
What I would like to ask is: what is known about cases between these two extremes?
As a motivating example, consider minimizing $\sum_i x_i \ln x_i$ such that $Ax=b$, where $A$ has O(n) nonzero entries, in the following pattern:
```|*** |
|* ** |
| * ** |
| * **|
```
This matrix has n non-zeros in the first row, plus 3 non-zeros in the subsequent n rows, i.e. 4n non-zeros in total.
Is there an efficient way to minimize functions subject to simple matrix constraints like these? (I'm well and truly hand-waving now, but maybe something like the ellipsoid method can be applied O(n) times to smaller sub-problems of O(1) size?)
-
What's the application? This looks like a standard maximum entropy optimization. cf www-stat.stanford.edu/~donoho/Reports/Oldies/MENBO.pdf – rcompton Nov 18 2011 at 7:44
I mean, you can reduce the problem subject to solving a system of equations saying that the $d f(x_i)/dx_i=(A^Tc)_i$, which you can invert, and separately $Ax=b$. Due to convexity you can invert the first equation to write $x$ as a monotone function of the $c$. Shouldn't standard equation-solving methods work here? – Will Sawin Nov 18 2011 at 7:48
As a general statement: objective function is convex and constraints are affine ---> it can be solved by either interior point methods or gradient methods. On the other hand, your statement: if A is an arbitrary real matrix, the minimization problem above is at least as difficult as solving the matrix equation itself, since it contains that problem, I disagree. Indeed, in the gradient method is used as a projector and the problem it is not solved though. – mikitov Nov 18 2011 at 7:52
@rcompton: the application that got me thinking about this indeed involved entropy optimization. There were two problems in particular. One had the simple 'probability simplex' constraint that all $x_i$ summed to 1. This was solvable very rapidly, by (more or less) bisecting on the value of each $x_i$. The other had the slightly more complex constrainto f 'nested simplices', where the $x_i$ were partitioned into (disjoint) sets. A single "primary" set was constrained to sum to 1 as before, and the remaining sets were all constrained to sum to elements $x_i$ of the primary set. – Fumiyo Eda Nov 18 2011 at 8:22
@mikitov: absolutely, interior-point or gradient methods will work. Indeed, I am currently using a gradient method to solve the 'nested simplices' problem mentioned in my comment to rcompton. It works, but I can't help but feel there is a more computationally efficient way to solve the problem. – Fumiyo Eda Nov 18 2011 at 8:23
1 Answer
As far as numerical solutions are concerned, your problem seems to be a good candidate for the following approaches:
1. Bregman's algorithm (essentially a dual-coordinate ascent procedure). Closely related is the method called MART: "Multiplicative algebraic reconstruction technique"
2. Alternatively, you could try using an Augmented Lagrangian Method to solve your problem.
For both choices above, there also exist parallel approaches that might be relevant if the problem size becomes very large.
-
Survit, can you say a little bit more about why you've made these suggestions? The "alternating direction method of multipliers" mentioned in your second reference seems applicable, though I am wondering if the idea is to decompose the problem above repeatedly until only one multiplier for each row of $A$ is updated at every step. Is this what you are suggesting? – Fumiyo Eda Nov 18 2011 at 22:32
@Fumiyo: MART allows you to go thru $A$ one row at a time; the benefit being extremely simple updates. The cost of each update should be proportional to the number of nonzeros in the row of $A$ being used. The Augmented Lagrangian version will allow you to obtain a method that uses the entire matrix $A$ at one shot. MART will be by far the simplest, so one can always try it out! – S. Sra Nov 18 2011 at 22:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9474579691886902, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/49517/a-hyperbola-as-a-constant-difference-of-distances/49526
|
# A hyperbola as a constant difference of distances
I understand that a hyperbola can be defined as the locus of all points on a plane such that the absolute value of the difference between the distances to the two foci is $2a$, the distance between the two vertices.
In the simple case of a horizontal hyperbola centred on the origin, we have the following:
1. $\frac{x^2}{a^2} - \frac{y^2}{b^2} = 1$
2. $c = \sqrt{a^2 + b^2} = a\varepsilon = a\sqrt{1 + \frac{b^2}{a^2}}$
The foci lie at $(\pm c, 0)$.
Now, if I'm not wrong about that, then this should be pretty basic algebra, but I can't see how to get from the above to an equation given a point $(x,y)$ describing the difference in distances to the foci as being $2a$. While I actually do care about the final result, how to get there is more important.
Why do I want to know this? Well, I'd like to attempt trilateration based off differences in distance rather than fixed radii.
-
What is "an equation that results in $2a$ given x and y"? – anon Jul 5 '11 at 1:29
@anon I've reworded that bit in a way that might make more sense. – Iskar Jarak Jul 5 '11 at 2:13
## 2 Answers
We will use a little trick to avoid work. We want to have $$\sqrt{(x+c)^2+y^2} -\sqrt{(x-c)^2+y^2}=\pm 2a.\qquad\text{(Equation 1)}$$
Rationalize the numerator, by multiplying "top" and "bottom" by $\sqrt{(x+c)^2+y^2} +\sqrt{(x-c)^2+y^2}.$ After the (not very dense) smoke clears, we get $$\frac{4xc}{\sqrt{(x+c)^2+y^2} +\sqrt{(x-c)^2+y^2}}=\pm 2a.$$ Flip it over, do some easy algebra. We get $$\sqrt{(x+c)^2+y^2} +\sqrt{(x-c)^2+y^2}=\pm \frac{2cx}{a}.\qquad\text{(Equation 2)}$$ From Equations 1 and 2, by adding, we get $$2\sqrt{(x+c)^2+y^2}=\pm 2\left(a+ \frac{xc}{a}\right).$$ Cancel the $2$'s, square. We get $$x^2+2cx+c^2+y^2=a^2+ 2cx+ \frac{c^2x^2}{a^2}.$$ Now it's basically over, the $2cx$ terms cancel. Multiply through by $a^2$, put $c^2=a^2+b^2$, and rearrange.
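As a quick numerical sanity check of the equivalence (my own snippet; the parametrization of the right branch and the particular $a,b$ are arbitrary choices):

```python
import numpy as np

a, b = 3.0, 2.0
c = np.hypot(a, b)                       # c = sqrt(a^2 + b^2)

t = np.linspace(-2, 2, 7)                # parametrize the right branch
x, y = a * np.cosh(t), b * np.sinh(t)

d_left  = np.hypot(x + c, y)             # distance to the focus (-c, 0)
d_right = np.hypot(x - c, y)             # distance to the focus (+c, 0)

print(np.allclose(np.abs(d_left - d_right), 2 * a))    # True
print(np.allclose(x**2 / a**2 - y**2 / b**2, 1.0))      # True
```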
-
+1 much better than mine – Ross Millikan Jul 5 '11 at 3:33
@Ross Millikan: Not really. But "rationalizing the numerator" does have a number of uses, so it is nice to be able to mention it. – André Nicolas Jul 5 '11 at 3:38
Thanks, that was a big help. Totally forgot about that rationalising trick. – Iskar Jarak Jul 5 '11 at 3:51
If we write the equation for what you said, a point $(x,y)$ on the hyperbola, taking $x \gt 0$ for convenience, must have $\sqrt{(x+c)^2+y^2}-\sqrt{(x-c)^2+y^2}=2a$. Squaring, $(x+c)^2+2y^2+(x-c)^2-2\sqrt{((x-c)^2+y^2)((x+c)^2+y^2)}=4a^2$. Then if you isolate the radical and square again, you should be able to cancel a lot of terms and get to the form you want.
-
This was still helpful although not as detailed as the answer I accepted - I'd upvote if I could. – Iskar Jarak Jul 5 '11 at 3:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9509273767471313, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/statistics/34320-permutations-bottles.html
|
# Thread:
1. ## Permutations of Bottles
In how many different ways can I arrange 7 green and 8 brown bottles so that exactly one pair of green bottles are side by side?
I've been trying to do this all day, am I right for separating the answers into 14 cases of where the 2 green bottles can be across 15 places?
2. Hello, mixtapevanity!
In how many different ways can I arrange 7 green and 8 brown bottles
so that exactly one pair of green bottles are side by side?
Duct-tape two green bottles together.
Now we have 14 units to arrange:
. . $\boxed{GG}\,, G, G, G, G, G, B, B, B, B, B, B,B,B$
Now place the eight brown bottles in a row.
. . Note that there are spaces before, after and between them.
. . . $\_\,B\,\_\,B\,\_\,B\,\_\,B\,\_\,B\,\_\,B\,\_\,B\,\_\,B\,\_$
We will take the six green units and place them in six of the nine spaces.
And there are: . ${8\choose6} \:=\:\boxed{84}$ ways.
3. Thanks for your reply! That is along the lines of what I was thinking. But can I ask what did you mean by (8 6) = 84?
4. That is a mere typo. It should be $\binom{9}{6}=84$
Nine blanks choosing six can be done in 84 ways.
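For anyone who wants to double-check by brute force, here is a short script (mine, with bottles of the same colour treated as identical). By my count it prints $504 = 9\binom{8}{5}$ rather than $84$, which would mean the choice of which occupied gap holds the taped pair needs to be counted as well.

```python
from itertools import combinations

GREEN, BROWN = 7, 8
total = GREEN + BROWN

count = 0
for positions in combinations(range(total), GREEN):
    greens = set(positions)
    pairs = sum(1 for i in range(total - 1) if i in greens and i + 1 in greens)
    if pairs == 1:                 # exactly one adjacent green pair
        count += 1
print(count)                       # 504 by my count, i.e. 9 * C(8,5)
```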
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9373209476470947, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/3222?sort=newest
|
## Finiteness of Obstruction to a Local-Global Principle
Say that a projective variety V over Q satisfies the local-global principle up to finite obstruction (#) if there are only finitely many isomorphism classes of projective varieties over Q that are not isomorphic to V over Q despite being isomorphic to V over every completion of Q.
In section 7 of Barry Mazur's 1993 article titled On the Passage From Local to Global in Number Theory, Mazur describes his attempt to prove that (#) for abelian varieties over Q implies (#) for all projective varieties over Q, and a partial result that he, Yevsey Nisnevich and Ofer Gabber achieved in this direction. Has there been further progress in this direction since 1993?
My understanding is that an effective version of (#) for genus 1 curves (an effective bound on certain Tate-Shafarevich groups) gives a finite algorithm (of a priori bounded running time) for determining whether a genus 1 curve has a rational point, and also that such an effective bound on Tate-Shafarevich groups is expected.
Is an effective version of (#) for general projective varieties over Q expected? If so, how does this relate to Hilbert's 10th problem over Q (which Bjorn Poonen has conjectured to be undecidable)?
-
Automorphism schemes of projective varieties intervene in the method (prior to intervention of abelian varieties!), so unless one knows results on finite generation of their component groups (an open problem in most cases) there's no way one can make such an algorithm. Over global function fields the relatively new theory of pseudo-reductive groups has brought us to the same degree of progress as the number field case. As best I can tell, finiteness properties of component groups of Aut-schemes are intractable in general. – BCnrd Apr 25 2010 at 0:41
## 1 Answer
"Has there been further progress in this area since 1993?"
So far as I know, there has been no direct progress. I feel semi-confident that I would know if there had been a big breakthrough: Mazur was my adviser, this is one of my favorite papers of his, and I still work in this field. Also, I just checked MathReviews and none of the citations to this paper makes a big advance on the problem, although two are somewhat relevant:
MR1905389 Thắng, Nguyễn Quốc On isomorphism classes of Zariski dense subgroups of semisimple algebraic groups with isomorphic $p$-adic closures. Proc. Japan Acad. Ser. A Math. Sci. 78 (2002), no. 5, 60--62.
MR2376817 (2009f:14040) Borovoi, M.; Colliot-Thélène, J.-L.; Skorobogatov, A. N. The elementary obstruction and homogeneous spaces. Duke Math. J. 141 (2008), no. 2, 321--364.
I'm not sure what you mean by an effective bound on Shafarevich-Tate groups (henceforth "Sha"). It is certainly expected that the Sha of any abelian variety over a global field is finite. If this is true, then in any given case one can, "in principle", give an explicit upper bound on Sha by the method of n-descents for increasingly large n. (In practice, even for elliptic curves reasonable algorithms have been implemented only for small values of n.) I really can't imagine any algorithm having to do with Sha that has "a priori bounded running time". What do you have in mind here?
As to the final question, let me start by saying that it seems reasonable at least that the set of "companion varieties" (i.e., Q-isomorphism classes of varieties everywhere locally isomorphic to the given variety) of a projective variety V/Q is finite: as above, we believe this for abelian varieties, and Barry Mazur proved in this paper a lot of results in the direction that the conjecture for abelian varieties implies it for arbitrary varieties. (For instance, quoting from memory, I believe he proved the implication for all varieties of general type.)
Here is a key point: suppose you are given a variety V/Q and you are wondering whether it has rational points. If V is itself a torsor under an abelian variety (e.g. a genus one curve), then if you can compute Sha of the Albanese abelian variety of V, you can use this to determine whether or not V has a Q-rational point. In general, the connection between computation of sets of companion varieties of V and deciding whether V has a Q-rational point is less straightforward. If V is a curve, then there are theorems in the direction of the fact that finiteness of Sha(Jac(V)) implies that the Brauer-Manin obstruction is the only one to the existence of rational points on V. In particular, people who believe this (including Bjorn Poonen, I think), believe that there is an algorithm for deciding the existence of rational points on curves. But nowadays we know examples of varieties where the Brauer-Manin obstruction is not sufficient to explain failure of rational points.
So, in summary, it is a perfectly tenable position to believe that companion sets are always finite, even effectively computable, but still there is no algorithm to decide the existence of Q-points on an arbitrary variety.
-
Thanks for your response. What I had in mind in referring to an effective bound on Sha and an a priori bound on running time is the strong from of the Birch and Swinnerton-Dyer conjecture - for example, I have the impression that in the analytic rank zero case it's possible to compute the size of Sha by computing the central critical value of the L-function attached to the elliptic curve to high precision (with running time bounded a priori) using the fact that elliptic curves are modular. But maybe I'm just confused. – Jonah Sinick Oct 29 2009 at 8:20
Ah, I see what you are saying. Yes, if the full BSD is given to you, you have a different way to compute the order of Sha. In the case where the analytic rank is zero, this seems to have a priori bounded running time. But if the analytic rank is greater than 1, how are you going to compute it (rigorously) without computing the Mordell-Weil rank instead? Anyway, I am far from an expert in the algorithmic aspects here. I hope someone else will weigh in. – Pete L. Clark Oct 29 2009 at 14:20
I don't have any idea of how one would treat the case with analytic rank > 1 - maybe it's not reasonable to expect an analogous approach in this case. Still, this different way of computing Sha is interesting even in the rank 0 and 1 cases in light of the fact that most elliptic curves over Q seem to have analytic rank 0 or 1. It would be interesting to identify a large and natural family of projective varieties over Q with the property that Hilbert's 10th problem is decidable for this class. (This would be analogous to Gromov's discovery of hyperbolic groups in relation to the word problem.) – Jonah Sinick Oct 29 2009 at 22:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.933002769947052, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/190981/how-to-be-good-at-proving
|
# how to be good at proving?
I'm starting my Discrete Math class, and I was taught proving techniques such as proof by contradiction, contrapositive proof, proof by construction, direct proof, equivalence proof etc.
I know how the proving system works and I can understand the sample proofs in my text to a sufficient extent. However, whenever I tried proving on my own, I got stuck, with no advancement of ideas in my head. How do you remedy this situation? Should I practise proving as much as possible?
So far I've been googling proofs for my homework questions. But the final exam got proving questions (closed-book) so I need to come up with the proofs myself.
We mainly focus on proving questions related to number theory. Should I read up on number theory and get acquainted with the properties of integers? I don't know how I should go about becoming proficient in proving. Can you guys share your experience on overcoming such an obstacle? What kind of resources do you use for this purpose?
Thank you!
-
Practice, practice, practice, searching the internet for proofs of homework questions may be convenient but by doing this you prevent yourself from developing your intuition when it comes to proof techniques, even if you understand the proof fully when shown it. I would suggest going back through homework questions without the internet or books and trying to work your way through them on your own. – Alex J Best Sep 4 '12 at 15:09
Much like watching cooking shows won't make you a good chef, and watching the Beer Channel won't make you a good drunk, reading proofs won't make you good at proving. You gotta do it yourself, and again and again and then some more. – Asaf Karagila Sep 4 '12 at 15:11
1
Mathematics is not a spectator sport. – Michael Greinecker Sep 4 '12 at 15:22
4
There is a Beer Channel?! – Mariano Suárez-Alvarez♦ Sep 4 '12 at 17:30
1
@Mariano: We're all allowed to have dreams... :-) – Asaf Karagila Sep 4 '12 at 17:34
## 7 Answers
I do not consider myself "good" at proving things. However, I know that I have gotten better. The key to writing a proof is understanding what you are trying to prove, which is harder than it may seem.
Know your definitions. Often, I have been hampered or seen students hampered by not really knowing all of the definitions in the problem statement.
Work with others. Look at what someone else has done in a proof and ask questions. Ask how they came up with the idea, ask that person to explain the proof to you. Also, do the same for them. Explain your proofs to a classmate and have them ask you questions.
Try everything. Students often get stuck on proofs because they try one idea that does not work and give up. I often go through several bad ideas before getting anywhere on a proof. Another good strategy is to work with specific examples until you understand the problem. Plug in numbers and see why the theorem seems to be true. Also, try to construct a counterexample. The reason counterexamples fail often leads to a way to prove the statement.
-
While you mention proof methods, what you seem to need are proof-finding strategies. That's a large field. Here are just a few hints:
• Make yourself acquainted with the premises. How can the statement fail if a single premise is left out?
• Find yourself a specific numerical example of the problem statement and check the conditions. Maybe you note a way how the premises enforce the validity of the statement for this example.
• Try to find a counterexample. You (probably) won't find one, but you might notice what kind of obstacles prevent you from finding it.
• Check extremes. If the statement says "For all real numbers with $0<r<2$ ...", then check what would happen with $r=0$ and $r=2$.
-
Practice, Practice, Practice!
Get books in the class you are doing, review the proofs. Learn to look at a theorem and see if you can figure out a proof approach.
There are also books that may help along these lines with general proof approaches.
General Proof Strategies:
• How to Solve It: A New Aspect of Mathematical Method (Princeton Science Library), G. Polya
• How to Prove It: A Structured Approach, Daniel J. Velleman
• The Nuts and Bolts of Proofs, Third Edition: An Introduction to Mathematical Proofs, Antonella Cupillari
• How to Read and Do Proofs: An Introduction to Mathematical Thought Processes, Daniel Solow
Discrete Math:
• Lecture notes on induction: http://www.cs.dartmouth.edu/~ac/Teach/CS19-Winter06/SlidesAndNotes/lec12induction.pdf
• Discrete Mathematics with Proof, Eric Gossett
• Discrete Mathematics: Mathematical Reasoning and Proof with Puzzles, Patterns, and Games, Douglas E. Ensley and J. Winston Crawley
• Schaum's Outline of Discrete Mathematics, Revised Third Edition (Schaum's Outline Series), Seymour Lipschutz and Marc Lipson (Aug 26, 2009)
• 2000 Solved Problems in Discrete Mathematics, Seymour Lipschutz (Oct 1, 1991)
• Concrete Mathematics: A Foundation for Computer Science (2nd Edition), Ronald L. Graham, Donald E. Knuth and Oren Patashnik (Mar 10, 1994)
• Finite and Discrete Math Problem Solver (REA) (Problem Solvers Solution Guides), The Editors of REA and Lutfi A. Lutfiyya (Jan 25, 1985)
The problem books above would also be useful references for working problems and proofs.
HTH ~A
-
+1 for the list. – Rick Decker Sep 4 '12 at 19:25
I second most of what the other answers have said, and would like to add a technique that I think is very useful for people first learning how to prove things:
If you are trying to "prove statement X," take the point of view that you are unsure if statement X is true. Then, try to decide if it is true or not. Seek counterexamples, as Hagen von Eitzen suggested. Seek evidence that might suggest X is true as well. If at some point you become convinced that X is actually false, great! Try to convince somebody else. If, on the other hand, you become convinced that X is true, great! However you became convinced can be the basis of your proof.
The heart of this piece of advice is: you need your proof-writing skills to be linked to the process by which you come to believe what's true and what isn't. Learning how to prove is nothing more than learning how to write down an absolutely convincing argument. Math has developed a lot of techniques, tricks, common argument patterns, etc., giving the impression that there is a whole body of stuff one has to master, but at its heart, a proof is nothing more than a logical argument that serves to convince everybody that something is true. To learn how to make good arguments, you need to be tuned into what is convincing and what isn't, and the authentic way to do this is to stay tuned in to what convinces you and what doesn't. So in trying to create a proof, the best thing is to take the point of view that you aren't sure if it is even true, and actually decide for yourself if and why you think it is, being as skeptical as you possibly can. If you become convinced it is true, no matter how skeptical you try to be, then whatever convinced you can be turned into your proof.
As an aside, I believe that those of us who are experienced at writing proofs have all, at least on some (conscious or unconscious) level, developed this habit of taking the point of view that we are not sure if it is true. Then we write the proof to convince ourselves.
-
First of all, I suggest you learn (and understand!) the definitions of the terms in the theorem.
Then, try to understand what the theorem actually means. (This is the harder bit!) In the proof, check you have exhausted all the assumptions (no theorem will have more assumptions than needed!).
If you can't prove a theorem using any of the usual methods (contrapositive, reductio ad absurdum, etc), "proof directly from the definitions" usually does the trick.
The only way to become good at proving theorems is to really understand what you are asked to prove!
-
Prove.
-
1
So, I'll guess that by this you mean prove in the sense of "test". Is that correct? If so, other answers do cohere with this and elaborate on it, but the semantics of the word "prove" in that sense don't quite match the demonstrative aspect of mathematical proofs... they just show what happens after the testing has taken place. – Doug Spoonwood Sep 23 '12 at 21:32
Huh? I honestly have no idea where you got that I meant that (nor, really, do I have any idea what the rest of your comment even means...) I meant prove, as in prove theorems: practice makes perfect. Wasn't it pretty obvious?! – Mariano Suárez-Alvarez♦ Sep 23 '12 at 23:33
2
Suarez-Alvarez I didn't find your meaning obvious at all. You only say "prove", which in many other contexts means "to test", as well as the Latin term meaning "to test". Some of the answers given here also suggest what I mean by "testing", such as Kris Williams and Hagen's answers. – Doug Spoonwood Sep 23 '12 at 23:42
Doug, the context of what I wrote is a question, «how to be good at proving?», in which the verb to prove has an obvious meaning. I see not even the faintest hint of those two answers you mention having taken the meaning you seem to have taken. In any case, this is rather off-topic: please do not add noise. – Mariano Suárez-Alvarez♦ Sep 23 '12 at 23:50
Suarez-Alvarez Those answers suggest trying to find a counterexample, finding examples, and checking extremes. Testing involves attempting to confirm and dis-confirm a statement, as those answers suggest. – Doug Spoonwood Sep 23 '12 at 23:54
show 1 more comment
Watch Suits, the USA Network TV series. They continuously say you should "press where it hurts", and that's the exact thing you do in mathematics. The border cases are always interesting, and the conditions for a theorem are almost always where a proof starts. For example:
A continuous function $f:\mathbb{R}\to\mathbb{R}$ which is negative at 0 and positive at 1 (that is, $f(0)<0$ and $f(1)>0$) has a zero somewhere between 0 and 1.
What you should use to prove this theorem is that the function is continuous, that it is negative at 0 and that it is positive at 1. Now, first think about what these things really tell you: what does it mean for a function to be continuous, and how can you use negative and positive? Why wouldn't the theorem hold for noncontinuous functions? Can you give a counterexample if the function is continuous, but positive at both 0 and 1?
Such questions give you hints as to the reason the theorem is true, and might just lead you to a proof.
NB: they actually said something along the lines of "press until it hurts" instead of "press where it hurts", but it is a nice series nonetheless.
-
|
http://mathhelpforum.com/advanced-algebra/203329-generalization-unit-elements-print.html
|
# a generalization of unit elements
Printable View
• September 12th 2012, 01:56 AM
ymar
a generalization of unit elements
If I have a monoid $S$, and an element $u\in S$, then the following equivalence holds:
$u$ is a unit iff $f(x)=xu$ and $g(x)=ux$ are bijections onto $S$.
Proof. Suppose $u$ is a unit. Let $x,y\in S.$ Then we have
1. $x=xu^{-1}u=f(xu^{-1});$
2. if $f(x)=f(y),$ then $xu=yu$ so $x=xuu^{-1}=yuu^{-1}=y.$
Conversely, if $f$ and $g$ are bijections onto $S,$ then there exists $x\in S$ such that $xu=f(x)=1$ and $y\in S$ such that $ux=g(x)=1.$ Thus $u$ has a left inverse and a right inverse, which is known to imply that $u$ is a unit.
Now let's say $S$ is a semigroup without an identity element. Can there be an element $u\in S$ such that $f(x)=xu$ and $g(x)=ux$ are bijections? Is it possible when $S$ is finite? (Then it's equivalent to asking whether a semigroup without identity can have a cancellable element.) Is it possible when $S$ is infinite? Is it possible when we only demand that at least one of $f,g$ be a bijection onto $S?$
• September 12th 2012, 08:02 AM
Deveno
Re: a generalization of unit elements
a partial answer:
if such bijections x→xu and x→ux exists for some element u, then for some element x:
xu = u, and since for ANY y in S we have y = ua (for some a): xy = x(ua) = (xu)a = ua = y. thus x is a left-identity for S.
similarly, there must be some z in S with uz = u, and thus (writing y = bu), yz = (bu)z = b(uz) = bu = y, so z is a right-identity for S.
but then x = xz = z, so S possesses an identity element, which we can call e.
so in THAT case, we have xu = e and uy = e for some x,y in S, so u is indeed a unit (and x = y: x = xe = x(uy) = (xu)y = ey = y).
note that above, we only require that x→ux, x→xu be surjective to show that a right- or left-identity exists, uniqueness of this identity follows (for one of left- or right-) if one of those maps is injective as well (if S is finite, then surjective = bijective) (if x→ux is injective, then we have a unique right-identity, if x→xu is injective, we have a unique left-identity).
i am unsure of how you would define a unit without the presence of a two-sided identity.
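As a sanity check on this argument, here is a small brute-force sketch (my own illustration, not part of the original discussion, assuming Python is acceptable): it enumerates every associative multiplication table on a 3-element set and confirms that whenever some u has both translation maps bijective, a two-sided identity exists.

```python
from itertools import product

# Enumerate all magmas on {0,1,2}, keep the associative ones (semigroups),
# and check: if some u has both x -> x*u and x -> u*x bijective, then the
# semigroup necessarily has a two-sided identity element.
S = range(3)
hits = 0
for flat in product(S, repeat=9):
    op = {(a, b): flat[3 * a + b] for a in S for b in S}
    if any(op[(op[(a, b)], c)] != op[(a, op[(b, c)])]
           for a in S for b in S for c in S):
        continue  # not associative, skip
    for u in S:
        if len({op[(x, u)] for x in S}) == 3 and len({op[(u, x)] for x in S}) == 3:
            has_identity = any(all(op[(e, x)] == x and op[(x, e)] == x for x in S)
                               for e in S)
            assert has_identity, "counterexample found!"
            hits += 1
print(hits, "cancellable elements found; every one lives in a monoid")
```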
• September 13th 2012, 04:12 AM
ymar
Re: a generalization of unit elements
Thanks!
All times are GMT -8. The time now is 01:51 AM.
|
http://math.stackexchange.com/questions/86687/limit-of-1-x2-apostol-3-2-example-4/86692
|
# Limit of $1/x^2$ - Apostol 3.2, Example 4
In Apostol, One Variable Calculus Volume 1, section 3.2, page 130, he gives the following example (roughly paraphrased):
Let $f(x) = \frac{1}{x^2}$ if $x \neq 0$, and let $f(0) = 0$. To prove rigorously that there is no real number $A$ such that $\lim_{x\to0^+} f(x)=A$, we may argue as follows: Suppose there were such an $A$, say $A>0$. Choose a neighborhood $N(A)$ of length $1$.
In the interval $0 < x < \frac{1}{A + 2}$, we have $f(x) = \frac{1}{x^2} > (A + 2)^2 > (A + 2)$, so $f(x)$ cannot lie in the neighborhood $N(A)$. Thus, every neighborhood $N(0)$ contains points $x > 0$ for which $f(x)$ is outside $N(A)$, so (3.3) is violated for this choice of $N(A)$. Hence $f$ has no right-hand limit at $0$.
While I intuitively understand why the function has no limit, I'm completely lost on his proof. What justifies him from moving from $f(x)$ lies outside of $N(A)$ for the neighborhood $0 < x < \frac{1}{A+2}$ to $f(x)$ lies outside every neighborhood?
The way I've been thinking about it is to translate the proof into terms from the $ε-δ$ definition. Thus, when he says "in the interval $0 < x < \frac{1}{A+2}$", he's setting $ε = \frac{1}{A+2}$, and then showing that $f(x)$ lies outside of $|f(x) - A| < ε$ for $|x-0| < δ$. But, if we were to prove that there is no limit, we have to show that for some $ε$, no $δ$ works. He says this in, "Thus, every neighborhood $N(0)$ contains points...", but I don't see how he can move from "this $δ$ doesn't work" to "no $δ$ works".
The best I can come up with is that either I'm making a mistake in thinking he's choosing $δ = \frac{1}{A+2}$, or that particular $δ$ is supposed to be a catch all. That is, if any $δ$ will work, this one should. But if that is the case, I don't see why this $δ$ has to be the one that works.
Thanks in advance.
-
I've added LaTeX formatting to your question; take a look at the source to see how it works. In general, if you see a piece of LaTeX you want to know the code for on the site, you can right click on it and choose "Show Source" - this is a good way of picking up how to do things. – Zev Chonoles♦ Nov 29 '11 at 10:19
By the way, I would also like to applaud your in-depth explanation of your thoughts about your question - it is something that is unfortunately too rare. – Zev Chonoles♦ Nov 29 '11 at 10:21
Thanks for the formatting. I briefly tried to get it into LaTex, but I gave up after nothing worked. Haha. I've tried to ask questions where I didn't explain as much, and the person I was asking either refused to help or started talking about a part of the problem that I wasn't asking about. In this case, I had already burned through all of my mathy friends, but they are all at least two years out from real analysis, and couldn't help very much. They did help me get this far, though. – Nathan Nov 29 '11 at 11:06
## 2 Answers
He shows for any $x$ with $0<x<{1\over A+2}$ that $f(x)>A+2$.
Now, for any neighborhood $N(0)$ you can select an $x\in N(0)$ with $0<x<{1\over A+2}$ ($N(0)$ has some radius $\delta$. Just choose some positive $x$ in the nhood that is less than ${1\over A+2}$). Then, from the above, $f(x)>A+2$ .
This is saying the same thing as "for any $\delta>0$ there is an $x$ with $|x-0|<\delta$ with $f(x)>A+2$".
-
Oh! I think I get it. Any neighborhood N(0) will contain points such that $0<x<\frac{1}{A+2}$, and for those points f(x) > A+2. You can't get around that, no matter what δ you choose. What makes me sure that I get it is that now I don't understand why I didn't see that in the first place. – Nathan Nov 29 '11 at 11:15
Great. And, I agree with the others. Thank you for a nicely thought out post. – David Mitra Nov 29 '11 at 11:16
First of all, I join Zev in appreciating your work.
The best I can come up with is that either I'm making a mistake in thinking he's choosing $\delta = \frac{1}{A+2}$, or that particular $\delta$ is supposed to be a catch all. That is, if any $\delta$ will work, this one should. But if that is the case, I don't see why this $\delta$ has to be the one that works.
Your second guess is almost correct. Let's see what exactly is going on.
Suppose some $\delta > 0$ works. That means that for all $x$ such that $0 < |x| < \delta$, $f(x)$ lies in the $1$-neighborhood of $A$. Now pick any $x$ such that $0 < |x| < \delta$ and $0 < x < \frac{1}{A+2}$ are both satisfied; this is equivalent to saying $$0 < x < \min \left\{ \delta, \frac{1}{A+2} \right\}.$$ Certainly there is at least one such $x$ (in fact, there are infinitely many $x$'s possible). For this $x$,
• since $0 < x < \frac{1}{A+2}$, we already know that $f(x)$ does not lie in the $1$-neighborhood of $A$.
• since $0 < |x| < \delta$ also holds, by our assumption, $f(x)$ lies in the $1$-neighborhood of $A$.
Obviously, these two conclusions contradict each other, which implies that our starting assumption must be wrong. That is why no $\delta > 0$ can work.
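To make the choice of $x$ concrete, here is a tiny numeric illustration (my own addition, not part of the proof, assuming Python): for any $\delta$, the point $x = \tfrac12\min\{\delta, \frac{1}{A+2}\}$ satisfies both constraints and $f(x) > A+2$.

```python
# Illustration: whatever delta > 0 we try, there is an x with 0 < x < delta
# for which f(x) = 1/x^2 exceeds A + 2, so f(x) escapes the 1-neighborhood of A.
def f(x):
    return 1.0 / x**2

A = 3.0  # any candidate limit A > 0 would do
for delta in [1.0, 0.1, 0.01, 1e-6]:
    x = 0.5 * min(delta, 1.0 / (A + 2))   # satisfies 0 < x < delta and 0 < x < 1/(A+2)
    assert 0 < x < delta and f(x) > A + 2
    print(f"delta = {delta:g}: x = {x:g}, f(x) = {f(x):.4g} > {A + 2}")
```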
-
I think I get it. Is this like saying that for a given δ1 such that for 0 < x < δ1, if δ1 does not work, then no δ > δ1 will work? – Nathan Nov 29 '11 at 11:03
@Nathan Your observation is accurate. But I am not sure that is very related to the answer to your question. – Srivatsan Nov 29 '11 at 11:06
On reflection, I'm not sure either. This is helping, though. Thanks! – Nathan Nov 29 '11 at 11:11
|
http://mathoverflow.net/questions/52893/is-it-possible-to-construct-without-choice-even-a-non-finitely-generated-grou
|
## Is it possible to construct (without choice, even?) a non-finitely-generated group with no proper non-finitely-generated subgroup?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Is there a non-finitely-generated group each of whose proper subgroups is finitely generated? If so, what form of choice (if any) is required to construct such a group?
-
14
The Prufer p-group. en.wikipedia.org/wiki/Pr%C3%BCfer_group – George Lowther Jan 23 2011 at 1:21
## 1 Answer
(CW since this is just expanding on George Lowther’s comment to the question, which could really have been an answer in the first place; if George L wants to convert his answer to a comment himself, I can delete this one.)
For any prime $p$, the Prüfer $p$-group is as desired.
There are several constructions of this; a good one for present purposes is $$\mathbb{Z}[1/p]\ /\ \mathbb{Z}$$ i.e. rationals with denominator a power of $p$, modulo the integers.
To see that this works, note that it is the union of the linearly ordered chain of finitely generated (indeed, cyclic) subgroups $H_i := \{ [a / p^i]\ |\ 0 \leq a < p^i \}$, over $i \in \mathbb{N}$.
Now any element of $H_{i+1}$ not in $H_{i}$ must be of the form $[a/p^{i+1}]$ with $a$ coprime to $p$, and hence generates the whole of $H_{i+1}$. So any subgroup is either equal to some $H_i$, or else contains them all and is the whole group.
On the other hand, the entire group is clearly not finitely generated since any finite set of elements is contained in some $H_i$.
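For what it's worth, the chain structure can also be checked by brute force for small parameters (a sketch of mine, not part of the original answer, assuming Python; $p$ and $i$ are deliberately tiny):

```python
from fractions import Fraction

p, i = 3, 2   # small values just for illustration

def H(j):
    """The cyclic subgroup H_j = { a / p^j mod 1 : 0 <= a < p^j }."""
    return {Fraction(a, p**j) for a in range(p**j)}

def generated(x):
    """Cyclic subgroup of Z[1/p]/Z generated by x (addition mod 1)."""
    seen, g = set(), Fraction(0)
    while g not in seen:
        seen.add(g)
        g = (g + x) % 1
    return seen

# every element of H_{i+1} that is not in H_i generates all of H_{i+1}
for x in H(i + 1) - H(i):
    assert generated(x) == H(i + 1)
print("checked", len(H(i + 1) - H(i)), "elements; each generates H_{i+1}")
```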
-
1
Sweet... didn't know this one. I'm quite charmed. – Todd Trimble Jan 23 2011 at 18:33
@Peter: Thanks. That's just what I would have said. No need to delete this answer, as 6 people have already bothered upvoting it. – George Lowther Jan 24 2011 at 0:24
1
Also, every proper subgroup is cyclic, and not just finitely generated. – George Lowther Jan 24 2011 at 0:26
Excellent example and explanation---thanks! It's still not clear to me to what extent choice is necessary for this example, but I'll look through the argument more closely later. (I was expecting a less clear-cut construction, I suppose.) – Z Norwood Jan 24 2011 at 4:26
This group is isomorphic to the group of p-th power roots of unity in the complex numbers (restrict the isomorphism of Q/Z with the roots of unity to the subgroup of elements with p-power order). This is a more concrete model for the group. That every subgroup of this group is cyclic is related to the fact that any finite subgroup of the nonzero elements of a field is a cyclic group: any proper subgroup is missing some root of unity of order, say, $p^n$ and therefore has no root of unity of order $p^n$ or higher, which means it is a finite subgroup. – KConrad Jan 24 2011 at 9:04
show 5 more comments
|
http://crypto.stackexchange.com/questions/3864/why-is-there-a-strong-distinction-between-stream-and-block-ciphers?answertab=active
|
# Why is there a strong distinction between stream and block ciphers?
If I don't err, in the literature a stream cipher is one in which each plaintext bit is processed individually, commonly via xor-ing with one bit of a random or pseudo-random bit stream, while a block cipher (in ECB mode, i.e. ignoring the add-on processing operations like CBC etc.) operates on n bits at a time with operations that are the same for a given key, i.e. each successive groups of n bits are processed in exactly the same manner.
If one looks at the corresponding literatures, one finds in the first case mostly works on linear or non-linear feedback shift registers and in the second case ideas of design of a rather different genre. That could justify the differentiation in terminology, however IMHO only on the assumption that no cipher designs could lie in between the two categories. But that assumption seems to me to be an invalid one. Anyway, let's consider the classical ciphers. Is a Vigenere cipher or an (additive) cipher with a running key a stream cipher or a block cipher? That question does not seem to be answered very satisfactorily, I am afraid.
-
– Ilmari Karonen Sep 22 '12 at 14:11
"in the first case the dynamics or variability" What's that supposed to mean? – CodesInChaos Sep 22 '12 at 14:29
As for your own cipher, why do you think a cipher that's 10000 times as slow as AES is acceptable? There are dozens of open source implementations of AES, so your "commercial proprietary (black-box) IT-security software, which generally have very excellent computing efficiency but which are absolutely unknown (since by definition un-knowable) of being free or not of dormant backdoors implanted by mafias or malicious agencies of certain mighty pseudo-righteous pseudo-humanitarian pseudo-peace-loving pseudo-democratic regimes of the world" argument certainly doesn't apply to AES. – CodesInChaos Sep 22 '12 at 14:39
Welcome to Cryptography Stack Exchange. Please note that this is not a discussion forum, but a question-and-answer site, and as such we prefer constructive questions, as mentioned by Ilmari. There is a constructive core in your question, but towards the end it is mostly argumenting. – Paŭlo Ebermann♦ Sep 22 '12 at 15:51
1
In an effort to keep this question from being closed, I've edited it to remove the argumentative last paragraph, leaving only the actual question. Further improvements are welcome. – Ilmari Karonen Sep 22 '12 at 16:34
show 1 more comment
## 2 Answers
A block cipher by itself does map `n` bits to `n` bits using a key. i.e. it's a keyed pseudo-random permutation. It cannot accept longer or shorter texts.
To actually encrypt a message you always need a chaining mode. ECB is one such chaining mode (and a really bad one), and it's not the pure block cipher. Even ECB consists of "add-on processing operations". These chaining modes can have quite different properties.
One of the most popular chaining modes, Counter mode (CTR) constructs a synchronous stream cipher from a block cipher. Another mode, CFB constructs a self synchronizing stream cipher, with properties somewhere between those of CBC and a synchronous stream cipher.
So your assumption that there are no ciphers between stream and block ciphers isn't really true. Cryptographers just prefer building them from the well understood block cipher primitive, instead of creating a completely new system.
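To make the CTR construction concrete, here is a minimal sketch (my own, not from the original answer, assuming Python; SHA-256 of key‖nonce‖counter merely stands in for a real block cipher such as AES, since only the structure of counter mode is being illustrated, and this is not a production cipher):

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # CTR idea: run the block primitive over successive counter blocks and
    # concatenate the outputs. SHA-256(key || nonce || counter) is only a
    # stand-in for a real block cipher here.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(16, "big")).digest()
        counter += 1
    return out[:length]

def ctr_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"a stream cipher built from a block-style primitive"
ct = ctr_xor(b"k" * 16, b"nonce-01", msg)
assert ctr_xor(b"k" * 16, b"nonce-01", ct) == msg   # the same operation decrypts
```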
I'd call Vigenère a stream cipher, albeit one with a much too short period. It uses a 26 symbol encoding instead of a 2 symbol encoding, but that doesn't mean it's not a stream cipher. Look at Solitaire/Pontifex for a modern construction of a stream cipher with 26 symbols.
-
If I don't err, "chaining" in block encryption is normally employed in the context of "block chaining", i.e. rendering the successive blocks dependent on one another so as to make the analysis more difficult. So IMHO ECB would have by definition no chaining effect as such. – Mok-Kong Shen Sep 22 '12 at 17:20
Mathematically, a block cipher is just a keyed pseudorandom permutation family on the set $\{0,1\}^n$ of $n$-bit blocks. (In practice, we usually also require an efficient way to compute the inverse permutation.) A block cipher on its own is not very useful for practical cryptography, at least unless you just happen to need to encrypt small messages that each fit into a single block.
However, it turns out that block ciphers are extremely versatile building blocks for constructing other cryptographic tools: once you have a good block cipher, you can easily build anything from stream ciphers to hash functions, message authentication codes, key derivation functions, pseudorandom number generators, entropy pools, etc. based on just one block cipher.
Not all of these applications necessarily need a block cipher; for example, many of them could be based on any pseudorandom function which need not be a permutation (but, conveniently, there's a lemma that says a pseudorandom permutation will, nonetheless, work). Also, many of the constructions are indirect; for example, you can construct a key derivation function from a message authentication code, which you can construct from a hash function, which you can — but don't have to — construct from a block cipher. But still, if you have a block cipher, you can build all the rest out of it.
Furthermore, these constructions typically come with (conditional) security proofs that reduce the security of the constructed functions to that of the underlying block cipher. Thus, you don't need to carry out the laborious and unreliable task of cryptanalyzing each of these functions separately — instead, you're free to concentrate all your efforts on the block cipher, knowing that any confidence you'll have on the security of the block cipher directly translates into confidence on all the functions based on it.
Obviously, all this is very convenient if you're, say, working on a small embedded platform where including efficient and secure code for lots of separate crypto primitives could be difficult and expensive. But even if you're not on such a constrained platform, writing and analyzing low-level crypto code can be laborious due to the need to pay attention to things like side-channel attacks. It's easier to restrict yourself to a limited number of low-level building blocks and to build everything you need out of those.
Also, even on fast platforms with lots of memory, like desktop CPUs, implementing low-level crypto operations directly in hardware can be much faster than doing them in software — but it's not practical to do that for more than a few of them. Due to their versatility, block ciphers are excellent candidates for hardware implementation (as in the AES instruction set for modern x86 CPUs).
Mathematically, a stream cipher — in the most general sense of the term — is also a keyed invertible pseudorandom function family, but on the set $\{0,1\}^*$ of arbitrary-length bitstrings rather than on blocks of limited length.
(There are some subtleties here; for example, most stream cipher constructions require the input to include a unique nonce value, and do not guarantee security — in the sense of indistinguishability from a truly random function — if the same nonce is used for two different inputs. Also, as there is no uniform distribution on invertible functions from $\{0,1\}^*$ to itself to choose random functions from, we need to define carefully just what it means for a stream cipher to look "indistinguishable from random", and this definition does have practical security implications — for example, most stream ciphers leak the length of the message. Practically, we usually also require that stream ciphers, in fact, be "streaming", in the sense that arbitrarily long input bitstreams can be encrypted — and decrypted — using only constant storage and time linear in the message length.)
Of course, stream ciphers are much more immediately useful than block ciphers: you can use them directly to encrypt messages of any length. However, it turns out that they're also much less useful as building blocks for other cryptographic tools: if you have a block cipher, you can easily turn it into a stream cipher, whereas turning an arbitrary stream cipher into a block cipher is difficult if not impossible.
So why do people bother designing dedicated stream ciphers at all, then, if block ciphers can do the job just as well? Mostly, the reason is speed: sometimes, you need a fast cipher to encrypt lots of data, and there are some really fast dedicated stream cipher designs out there. Some of these designs are also designed to be very compact to implement, either in software or hardware or both, so that if you really only need a stream cipher, you can save on code/circuit size by using one of those ciphers instead of a general block cipher based one.
However, what you gain in speed and compactness, you lose in versatility. For example, there doesn't seem to be any simple way to make a hash function out of a stream cipher, so if you need one of those (and you often do, because hash functions, besides being useful on their own, are also common building blocks for other crypto tools), you'll have to implement them separately. And, guess what, most hash functions are based on block ciphers, so if you have one, you might as well reuse the same block cipher for encryption too (unless you really need the raw speed of the dedicated stream cipher).
-
I questioned whether it is necessary to have two different terms. According to what you explained, a stream cipher is simply a special case of a block cipher, i.e. one for the limiting case where the n in the set {0,1}^n is 1. So I would argue for not maintaining the current distinction of terminologies. – Mok-Kong Shen Sep 22 '12 at 15:42
@Mok-KongShen Actually, a stream cipher is not simply a block cipher with block size 1 (other than classic monoalphabetic ciphers, which can be assumed to be both). A stream cipher usually translates the bits/bytes/... of the stream differently, depending on the current internal state of the cipher, while a block cipher for same input has same output (and thus is usually used in a "mode of operation" to create a stream cipher). – Paŭlo Ebermann♦ Sep 22 '12 at 15:57
@PauloEbermann. IMHO you answered for me a question of CodesinChaos conscerning "dynamics and variability". – Mok-Kong Shen Sep 22 '12 at 16:50
@Mok-KongShen No he didn't. The only advantage a dedicated stream cipher has over a block cipher in an appropriate mode is performance. You can't disregard chaining modes, since nobody sane uses block ciphers without appropriate chaining. – CodesInChaos Sep 22 '12 at 17:48
@CodesInChaos. Different applications have different performance requirements. To encrypt e.g. an email, one doesn't need the performance that would be desirable for encryption of, say, a video-file. – Mok-Kong Shen Sep 22 '12 at 18:02
show 3 more comments
|
http://math.stackexchange.com/questions/251491/two-normal-subgroups-with-trivial-intersection-one-is-characteristic-what-abou?answertab=oldest
|
# Two normal subgroups with trivial intersection, one is characteristic, what about the other?
Let $G$ be a group, $N, M$ normal subgroups with $N \cap M = \{1\}$ and $G = NM$. I know $N$ is a characteristic subgroup of $G$. How could I show that $M$ is characteristic as well?
Thank you.
P.S.: I also know that G is Abelian, but perhaps this fact isn't needed!?
-
Well, if $\,G\,$ is abelian then any subgroup is normal, so why is that even mentioned? – DonAntonio Dec 5 '12 at 11:49
Because the statement would be a lot stronger if that wasn't needed. – Boris Dec 5 '12 at 12:00
## 1 Answer
This isn't true. For example, consider $G=\mathbb{Z}\times\mathbb{Z_2}$. $0_{\mathbb{Z}}\times \mathbb{Z}_2$ is a characteristic subgroup of $G$, but $\mathbb{Z} \times 0_{\mathbb{Z}_2}$ is not.
-
I'm sorry, but I don't see why $\mathbb{Z} \times 0_{\mathbb{Z}_2}$ is not characteristic. Could you help me? – Boris Dec 5 '12 at 14:11
@Boris There is an automorphism of $G$ that doesn't preserve $\mathbb{Z}\times 0_{\mathbb{Z}_2}$. This automorphism sends $(a, b)$ to $(a, b + [a])$ for every $a \in \mathbb{Z}$ and every $b \in \mathbb{Z}_2$. Here $[a]$ denotes the class of number $a$ modulo $2$. – Dan Shved Dec 5 '12 at 15:28
Thank you Dan. I understand it now. – Boris Dec 9 '12 at 13:13
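For anyone who wants to see the automorphism in action, here is a small spot-check (my own sketch, not part of the original thread, assuming Python; since $G$ is infinite it only samples finitely many elements):

```python
from itertools import product

def add(x, y):                 # group law on G = Z x Z_2
    return (x[0] + y[0], (x[1] + y[1]) % 2)

def phi(x):                    # the map (a, b) -> (a, b + [a]) from the comment
    return (x[0], (x[1] + x[0]) % 2)

sample = list(product(range(-5, 6), [0, 1]))
for x in sample:
    for y in sample:
        assert phi(add(x, y)) == add(phi(x), phi(y))   # homomorphism on the sample
assert all(phi(phi(x)) == x for x in sample)           # phi is its own inverse
assert phi((1, 0)) == (1, 1)   # so Z x {0} is not mapped into itself
```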
|
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Ordinary_differential_equations
|
# Ordinary differential equation
In mathematics, and in particular analysis, an ordinary differential equation (or ODE) is an equation that involves the derivatives of an unknown function of one variable. A simple example of an ordinary differential equation is
$f' = f,\,$
where f is an unknown function, and f' is its derivative.
See differential calculus and integral calculus for basic calculus background.
## Definition
Let y represent an unknown function of x, and let
$y', y'',\ \dots,\ y^{(n)}$
denote the derivatives
$\frac{dy}{dx},\ \frac{d^{2}y}{dx^2},\ \dots,\ \frac{d^{n}y}{dx^{n}}.$
An ordinary differential equation (ODE) is an equation involving
$x,\ y,\ y',\ y'',\ \dots$.
The order of a differential equation is the order n of the highest derivative that appears.
A solution of an ODE is a function y(x) whose derivatives satisfy the equation. Such a function is not guaranteed to exist and, if it does exist, is usually not unique.
When a differential equation of order n has the form
$F(x, y, y', y'',\ \dots,\ y^{(n)}) = 0$
it is called an implicit differential equation whereas the form
$F(x, y, y', y'',\ \dots,\ y^{(n-1)}) = y^{(n)}$
is called an explicit differential equation.
A differential equation not depending on x is called autonomous, and one with no terms depending only on x is called homogeneous.
## General application
An important special case is when the equations do not involve x. These differential equations may be represented as vector fields. This type of differential equations has the property that space can be divided into equivalence classes based on whether two points lie on the same solution curve. Since the laws of physics are believed not to change with time, the physical world is governed by such differential equations. (See also symplectic topology for abstract discussion.)
The problem of solving a differential equation is to find the function y whose derivatives satisfy the equation. For example, the differential equation
$y'' + y = 0 \, \!$
has the general solution
$y = A \cos{x} + B \sin{x} \, \!$,
where A, B are constants determined from boundary conditions. In the case where the equations are linear, this can be done by breaking the original equation down into smaller equations, solving those, and then adding the results back together. Unfortunately, many of the interesting differential equations are non-linear, which means that they cannot be broken down in this way. There are also a number of techniques for solving differential equations using a computer (see numerical ordinary differential equations).
Ordinary differential equations are to be distinguished from partial differential equations where y is a function of several variables, and the differential equation involves partial derivatives.
## Types of differential equations with some history
The influence of geometry, physics, and astronomy, starting with Newton and Leibniz, and further manifested through the Bernoullis, Riccati, and Clairaut, but chiefly through d'Alembert and Euler, has been very marked, and especially on the theory of linear partial differential equations with constant coefficients.
### Linear ODEs with constant coefficients
The first method of integrating linear ordinary differential equations with constant coefficients is due to Euler, who made the solution of the form
$\frac {d^{n}y} {dx^{n}} + A_{1}\frac {d^{n-1}y} {dx^{n-1}} + \cdots + A_{n}y = 0$
depend on that of the algebraic equation of the nth degree,
$F(z) = z^{n} + A_{1}z^{n-1} + \cdots + A_n = 0$
in which $z^k$ takes the place of
$\frac {d^{k}y} {dx^{k}}\quad\quad(k = 1, 2, \cdots, n).$
This equation, F(z) = 0, is the "characteristic" equation considered later by Monge and Cauchy. If z is a (possibly complex) zero of F(z) of multiplicity m and $k\in\{0,1,\dots,m-1\}$, then $y = x^k e^{zx}$ is a solution of the ODE.
If the Ai are real then real-valued solutions are preferable. Since the complex z values will come in conjugate pairs, so will their corresponding y values; replace each pair with their linear combinations $\Re y$ and $\Im y$.
A case that involves complex ($\mathbb{C}$) roots can be solved with the aid of Euler's formula. Recall that the Maclaurin series are defined as:
$e^x = \sum_{k = 0}^\infty {\frac{{x^{k} }}{{k!}}}$,
$\cos x = \sum_{k = 0}^\infty {\frac{{\left( { - 1} \right)^k x^{2k} }}{{\left( {2k} \right)!}}}$, $\sin x = \sum_{k = 0}^\infty {\frac{{\left( { - 1} \right)^k }}{{\left( {2k + 1} \right)!}}x^{2k + 1} }$
And since
$\begin{matrix} i = \sqrt { - 1} \\ i^2 = - 1 \\ i^3 = - i \\ i^4 = 1 \\ \end{matrix}, e^{i\theta } = \sum_{k = 0}^\infty {\frac{{\left( i \right)^k }}{{k!}}\theta ^k = } \sum_{k = 0}^\infty {\frac{{\left( { - 1} \right)^k }}{{\left( {2k} \right)!}}\theta ^{2k} + i} \sum_{k = 0}^\infty {\frac{{\left( { - 1} \right)^k }}{{\left( {2k + 1} \right)!}}\theta ^{2k + 1} = } \cos \theta + i\sin \theta$
This gives Euler's formula, $e^{i\theta } = \cos \theta + i\sin \theta$.
• Example: Suppose $P(D)y = 0$ for $P(D) = D^2 - 4D + 5$
(Note: here operator notation is used to represent the linear ODE $y'' - 4y' + 5y = 0$).
Complete the square to find the $\mathbb{C}$ roots by writing the above equation in the form:
$P(D)=\left[ {D - a} \right]^2 + b^2$; the roots are $r = a \pm bi.$
$P(D) = \left[ {D^2 - 4D + 4} \right] + 1 = \left[ {D - 2} \right]^2 + 1^2.\ \mathrm{Here}\ r = 2 \pm i$
are the characteristic roots. Hence solutions of the form $y = e^{rx}$ are written as
$e^{\left( {2 + i} \right)x} = e^{2x + ix} = e^{2x} e^{ix} = e^{2x} \left( {\cos x + i\sin x} \right) = e^{2x} \cos x + ie^{2x} \sin x$
We treat $r = 2 \pm i$ as a conjugate pair of roots. Seeking two linearly independent solutions to the above equation yields:
$\left\{ {\begin{matrix} {y_1 = e^{2x} \cos x} \\ {y_2 = e^{2x} \sin x} \\\end{matrix}} \right.$
Any other solution to the equation has the form $y_c = c_1 e^{2x}\cos x + c_2 e^{2x}\sin x$. Note that the arbitrariness of $c_1$ and $c_2$ absorbs the $\pm i$.
Also, for repeated complex roots, multiply $y_1$ and $y_2$ repeatedly by x to generate a family of solutions, but only up to the multiplicity.
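A quick symbolic check of this worked example (a sketch of mine, not part of the original article, assuming Python with sympy installed):

```python
from sympy import symbols, Function, dsolve, Eq, exp, cos, diff, simplify

x = symbols('x')
y = Function('y')

# y'' - 4y' + 5y = 0 has characteristic roots 2 +/- i, so we expect
# y = e^{2x}(C1 cos x + C2 sin x); dsolve reproduces that form.
ode = Eq(y(x).diff(x, 2) - 4*y(x).diff(x) + 5*y(x), 0)
print(dsolve(ode, y(x)))

# direct verification that y1 = e^{2x} cos x solves the equation
y1 = exp(2*x) * cos(x)
assert simplify(diff(y1, x, 2) - 4*diff(y1, x) + 5*y1) == 0
```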
### Linear ODEs with variable coefficient
Natural oscillations (whether mechanical or in an electrical circuit) exhibit a forcing function due to friction, a dashpot, or circuit resistance.
Suppose we model this forcing function as f(t); a linear ODE with this added nonhomogeneous term now takes the form
$A_n \frac{{d^n y}}{{dt^n }} + A_{n - 1} \frac{{d^{n - 1} y}}{{dt^{n - 1} }} + \cdots + A_1 \frac{{dy}}{{dt}} + A_0 y = f\left( t \right),$
or simply (in standard form),
$a_n y^{(n)} + a_{n - 1} y^{(n - 1)} + \cdots + a_1 y' + a_0 y = f\left( t \right).\,$
In the case of a non-homogeneous linear ODE (non-HLDE) where the input function is polynomial, sinusoidal, exponential, or any product of the three, we seek the solution to the equation above in the form $y_G = y_c + y_p$, where
• $y_G$ denotes the general solution;
• $y_c$ denotes the complementary (homogeneous) solution;
• $y_p$ denotes a particular solution.
#### Method of undetermined coefficients
The method of undetermined coefficients (MoUC) is useful in finding a solution for $y_p$. Given $P(D)y = f(t)$, find the annihilator $A(D)$ for $f(t)$, i.e. an operator with $A(D)f(t) = 0$; then apply $A(D)$ to both sides of $P(D)y = f(t)$ to obtain $A(D)P(D)y = A(D)f = 0$, a HLDE with constant coefficients (cc) which can then be readily solved using the technique for constant coefficients described above. Note that, by convention, writing f(t) usually means the equation is time-dependent, whereas f(x) and the like denote time-independence.
Suppose that f(x) = 1 − 2x; A(D) has the following family of solutions:
Recall: $r = 0$ gives $e^{0} = 1,\ x,\ x^2,\ x^3,\ \dots$
Thus, since both a constant and an x term appear, the root $r = 0$ is repeated twice. With this in mind, $A(D) = D^2$, i.e. the root has multiplicity 2.
Similarly, case of complex roots is based on sin or cos.
• Example: $f(x) = \sin x - x\cos 2x$
1. $\sin x$ comes from a complex root with real part 0, because $e^{0} = 1$ (the sin and cos are multiplied by 1).
2. A(D) then has root of $0 \pm i$ (simply $\pm i$) with multiplicity 1.
3. Also $r=\pm 2i$ with multiplicity 2.
• Example: $\left[ {D^2 - D} \right]y = 1 - 2x$
Here $r = 0$ gives $e^{0} = 1,\ x,\ x^2,\ \dots,\ x^n$ and $r = 1$ gives $e^x,\ xe^x,\ \dots,\ x^n e^x$. Note that once a distinct root is used, it may not be used again, in order to keep the solutions linearly independent.
$y_c = c_1 y_1 + c_2 y_2 = c_1 \left( 1 \right) + c_2 \left( {e^x } \right)$. The annihilator $A(D) = D^2$ has the root $0$ with multiplicity 2.
$\left. {\begin{matrix} {Y_p = Ax + Bx^2 } \\ {Y_p ^\prime = A + 2Bx} \\ {Y_p ^{\prime \prime } = 2B} \\ \end{matrix}} \right\}2B - \left[ {A + 2Bx} \right] = \left[ {2B - A} \right] - 2Bx = 1 - 2x$
Equating coefficients: the $x$ terms give $-2B = -2$, so $B = 1$; the constant terms give $2B - A = 1$, so $A = 1$. Therefore $y_p = Ax + Bx^2 = x + x^2$. The solution hence becomes $y = y_c + y_p = C_1 + C_2 e^x + x + x^2$. If we did not keep deleting our used roots, we might instead write $y = y_c + y_p = C_1 + C_2 e^x + 1 + x^2$, which would be incorrect, since $C_1$ already absorbs the arbitrary constant term (here $1$); this violates linear independence.
• Example: $\left[ {D^2 - D} \right]y = x - 2e^x$ (same as $y'' - y' = x - 2e^x$)
In this case, we have roots $r \in \{0, 1\}$, which yield families of solutions such as
$\begin{matrix} r = 0:1,x,x^2,x^3,... \\ r = 1:e^x,xe^x,x^2 e^x,... \\ \end{matrix}$
Therefore $y_1 = 1$, $y_2 = e^x$, and $y_c = C_1(1) + C_2 e^x$. Since $A(D)$ has $\left. \begin{matrix} r = 0\,\,{\rm{of\ multiplicity\ 2}} \\ r = 1\,\,{\rm{of\ multiplicity\ 1}} \\ \end{matrix} \right\}$ giving the form $\left. \begin{matrix} Y_p = Ax + Bx^2 + Cxe^x \\ Y_p ^\prime = A + 2Bx + C(1 + x)e^x \\ Y_p ^{\prime \prime } = 2B + C(2 + x)e^x \\ \end{matrix} \right\},$ substituting into the original equation gives $\left[ {2B - A} \right] - 2Bx + Ce^x = x - 2e^x.$
Equating coefficients, $\begin{matrix} 2B - A = 0\,\,{\rm{so }}\,{\rm{A = 2B}} \Rightarrow A = - 1 \\ - 2B = 1 \Rightarrow B = - \frac{1}{2};\ C = - 2 \\ \end{matrix}$
Thus $y_p = Ax + Bx^2 + Cxe^x = - x - \frac{1}{2}x^2 - 2xe^x$
• Example: $\left[ {D^2 + 1} \right]y = f = \sec x$. What roots would give rise to the solution of the form $f\left( x \right) = \sec \left( x \right)$ ?
Solution: No roots. $f\left( x \right) = \sec \left( x \right)$ is not a sinusoid, rather the reciprocal of a sinusoid. So this method would not apply, and 2nd-order variation-of-parameters (VoP) must be used to solve these types of problems (no valid finite linear combination could be tried in this case).
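The first worked example above, $\left[D^2 - D\right]y = 1 - 2x$, can be checked symbolically (a sketch of mine, not part of the original article, assuming Python with sympy):

```python
from sympy import symbols, Function, dsolve, Eq, diff, simplify

x = symbols('x')
y = Function('y')

# y'' - y' = 1 - 2x; the text found y_p = x + x^2 and y = C1 + C2 e^x + x + x^2
ode = Eq(y(x).diff(x, 2) - y(x).diff(x), 1 - 2*x)
print(dsolve(ode, y(x)))        # general solution, equivalent up to constants

yp = x + x**2                   # the particular solution from the example
assert simplify(diff(yp, x, 2) - diff(yp, x) - (1 - 2*x)) == 0
```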
#### Method of variation of parameters
As explained above, the general solution to a non-homogeneous, linear differential equation y''(x) + p(x)y'(x) + q(x)y(x) = g(x) can be expressed as the sum of the general solution yh(x) to the corresponding homogeneous, linear differential equation y''(x) + p(x)y'(x) + q(x)y(x) = 0 and any one solution yp(x) to y''(x) + p(x)y'(x) + q(x)y(x) = g(x).
Like the method of undetermined coefficients, described above, the method of variation of parameters is a method for finding one solution to y''(x) + p(x)y'(x) + q(x)y(x) = g(x), having already found the general solution to y''(x) + p(x)y'(x) + q(x)y(x) = 0. Unlike the method of undetermined coefficients, which fails except with certain specific forms of g(x), the method of variation of parameters will always work; however, it is significantly more difficult to use.
For a second-order equation, the method of variation of parameters makes use of the following fact:
##### Fact
Let p(x), q(x), and g(x) be functions, and let y1(x) and y2(x) be solutions to the homogeneous, linear differential equation y''(x) + p(x)y'(x) + q(x)y(x) = 0. Further, let u(x) and v(x) be functions such that u'(x)y1(x) + v'(x)y2(x) = 0 and u'(x)y1'(x) + v'(x)y2'(x) = g(x) for all x, and define yp(x) = u(x)y1(x) + v(x)y2(x). Then yp(x) is a solution to the non-homogeneous, linear differential equation y''(x) + p(x)y'(x) + q(x)y(x) = g(x).
##### Proof
yp(x) = u(x)y1(x) + v(x)y2(x)
yp'(x) = u'(x)y1(x) + u(x)y1'(x) + v'(x)y2(x) + v(x)y2'(x) = 0 + u(x)y1'(x) + v(x)y2'(x)
yp''(x) = u'(x)y1'(x) + u(x)y1''(x) + v'(x)y2'(x) + v(x)y2''(x) = g(x) + u(x)y1''(x) + v(x)y2''(x)
yp''(x) + p(x)y'p(x) + q(x)yp(x) = g(x) + u(x)y1''(x) + v(x)y2''(x) + p(x)u(x)y1'(x) + p(x)v(x)y2'(x) + q(x)u(x)y1(x) + q(x)v(x)y2(x) = g(x) + u(x)(y1''(x) + p(x)y1'(x) + q(x)y1(x)) + v(x)(y2''(x) + p(x)y2'(x) + q(x)y2(x)) = g(x) + 0 + 0 = g(x)
##### Usage
To solve the second-order, non-homogeneous, linear differential equation y''(x) + p(x)y'(x) + q(x)y(x) = g(x) using the method of variation of parameters, use the following steps:
1. Find the general solution to the corresponding homogeneous equation y''(x) + p(x)y'(x) + q(x)y(x) = 0. Specifically, find two linearly independent solutions y1(x) and y2(x).
2. Since y1(x) and y2(x) are linearly independent solutions, their Wronskian y1(x)y2'(x) - y1'(x)y2(x) is nonzero, so we can compute $-\frac{g(x) y_2(x)}{y_1(x) y_2'(x) - y_1'(x) y_2(x)}$ and $\frac{g(x) y_1(x)}{y_1(x) y_2'(x) - y_1'(x) y_2(x)}$. If the former is equal to u'(x) and the latter to v'(x), then u and v satisfy the two constraints given above: that u'(x)y1(x) + v'(x)y2(x) = 0 and that u'(x)y1'(x) + v'(x)y2'(x) = g(x).
3. Integrate $-\frac{g(x) y_2(x)}{y_1(x) y_2'(x) - y_1'(x) y_2(x)}$ and $\frac{g(x) y_1(x)}{y_1(x) y_2'(x) - y_1'(x) y_2(x)}$ to obtain u(x) and v(x), respectively. (Note that we only need one choice of u and v, so there is no need for constants of integration.)
4. Compute yp(x) = u(x)y1(x) + v(x)y2(x). The function yp is one solution of y''(x) + p(x)y'(x) + q(x)y(x) = g(x).
5. The general solution is c1y1(x) + c2y2(x) + yp(x), where c1 and c2 are arbitrary constants.
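The steps above translate almost line-for-line into a short symbolic computation (my own sketch, not part of the original article, assuming Python with sympy; the helper name vop_particular is mine), shown here on the sec x example treated below:

```python
from sympy import symbols, integrate, simplify, diff, cos, sin

x = symbols('x')

def vop_particular(y1, y2, g):
    """Steps 2-4: particular solution of y'' + p y' + q y = g from two
    independent homogeneous solutions y1, y2 (variation of parameters)."""
    W = y1 * diff(y2, x) - diff(y1, x) * y2      # Wronskian (step 2)
    u = integrate(-g * y2 / W, x)                # step 3
    v = integrate( g * y1 / W, x)
    return simplify(u * y1 + v * y2)             # step 4

# example: y'' + y = sec x, with y1 = cos x, y2 = sin x
g = 1 / cos(x)                                   # sec x
yp = vop_particular(cos(x), sin(x), g)
print(yp)                                        # x*sin(x) + cos(x)*log(cos(x))
assert simplify(diff(yp, x, 2) + yp - g) == 0
```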
##### Higher-order equations
The method of variation of parameters can also be used with higher-order equations. For example, if y1(x), y2(x), and y3(x) are linearly independent solutions to y'''(x) + p(x)y''(x) + q(x)y'(x) + r(x)y(x) = 0, then there exist functions u(x), v(x), and w(x) such that u'(x)y1(x) + v'(x)y2(x) + w'(x)y3(x) = 0, u'(x)y1'(x) + v'(x)y2'(x) + w'(x)y3'(x) = 0, and u'(x)y1''(x) + v'(x)y2''(x) + w'(x)y3''(x) = g(x). Having found such functions (by solving algebraically for u'(x), v'(x), and w'(x), then integrating each), we have yp(x) = u(x)y1(x) + v(x)y2(x) + w(x)y3(x), one solution to the equation y'''(x) + p(x)y''(x) + q(x)y'(x) + r(x)y(x) = g(x).
##### Example
Solve the previous example, $y'' + y = \sec x$. Recall $\sec x = \frac{1}{{\cos x}} = f$. Using the technique for constant coefficients above, the LHS has roots $r = \pm i$ that yield $y_c = C_1\cos x + C_2\sin x$ (so $y_1 = \cos x$, $y_2 = \sin x$), and the derivatives $\left\{ {\begin{matrix} {\dot u = \frac{{ - y_2 f}}{W} = \frac{{ - \sin x}}{{\cos x}} = -\tan x} \\ {\dot v = \frac{{y_1 f}}{W} = \frac{{\cos x}}{{\cos x}} = 1} \\ \end{matrix}} \right.$ where the Wronskian $W\left( {y_1,y_2 :x} \right) = \left| {\begin{matrix} {\cos x} & {\sin x} \\ { - \sin x} & {\cos x} \\ \end{matrix}} \right| = 1$ was computed in order to solve for those derivatives. Upon integration, $\left\{ \begin{matrix} u = - \int {\tan x\,dx = - \ln \left| {\sec x} \right| + C} \\ v = \int {1\,dx = x + C} \\ \end{matrix} \right.$ Computing $y_p$ and $y_G$: $\begin{matrix} y_p = uy_1 + vy_2 = \cos x\ln \left| {\cos x} \right| + x\sin x \\ y_G = y_c + y_p = C_1 \cos x + C_2 \sin x + x\sin x + \cos x\ln \left( {\cos x} \right) \\ \end{matrix}$
### General solution method for first-order linear ODEs
For a first-order linear ODE, with coefficients that may or may not vary with t:
$x'(t) + p(t) \times x(t) = r(t)$
Then:
$x=e^{-C}(\int{r(t) \times e^{C}dt} + \kappa)$
where κ is the constant of integration, and:
$C=\int{p(t)\,dt}$
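A concrete check of this formula (a sketch of mine, not part of the original article, assuming Python with sympy; p(t) = 2 and r(t) = t are arbitrary illustrative choices):

```python
from sympy import symbols, exp, integrate, diff, simplify

t, kappa = symbols('t kappa')

p, r = 2, t                       # arbitrary concrete choices of p(t), r(t)
C = integrate(p, t)               # C = integral of p dt
x = exp(-C) * (integrate(r * exp(C), t) + kappa)

# the formula really does satisfy x'(t) + p(t) x(t) = r(t)
assert simplify(diff(x, t) + p * x - r) == 0
print(simplify(x))                # t/2 - 1/4 + kappa*exp(-2*t), up to ordering
```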
#### Proof
This proof comes from Jean Bernoulli. Let
$x^\prime + px = r$
Suppose for some unknown functions u(t) and v(t) that x = uv.
Then
$x^\prime = u^\prime v + u v^\prime$
Substituting into the differential equation,
$u^\prime v + u v^\prime + puv = r$
Now, the most important step: Since the differential equation is linear we can split this into two independent equations and write
$u^\prime v + puv = 0$
$u v^\prime = r$
Since v is not zero, the top equation becomes
$u^\prime + pu = 0$
The solution of this is
$u = e^{ - \int p dt }$
Substituting into the second equation
$v = \int r e^{ \int p dt } + C$
Since x = uv, for arbitrary constant C
$x =e^{ - \int p dt } \left( \int r e^{ \int p dt } + C \right)$
#### First order differential equation with constant coefficients
As an illustrative example, consider a first order differential equation with constant coefficients:
$a\frac{dx}{dt} + bx = Af(t).$
This equation is particularly relevant to first order systems such as RC circuits, mass-damper systems.
After nondimensionalization, the equation becomes
$\frac{d \chi}{d \tau} + \chi = F(\tau).$
In this case, $p(\tau) = 1$ and $r(\tau) = F(\tau)$.
Hence its solution is
$\chi (\tau) = e^{-\tau} \left( \int F(\tau)e^{\tau} \, d\tau + C \right).$
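As a numerical cross-check (my own sketch, not part of the original article, assuming Python; F(τ) ≡ 1 and χ(0) = 0 are illustrative choices), a crude forward-Euler integration of dχ/dτ + χ = F(τ) tracks the exact step response 1 − e^{−τ}:

```python
import math

def euler_step_response(tau_end=5.0, h=1e-3):
    chi, tau = 0.0, 0.0
    while tau < tau_end:
        chi += h * (1.0 - chi)     # d(chi)/d(tau) = F(tau) - chi with F identically 1
        tau += h
    return chi

approx = euler_step_response()
exact = 1.0 - math.exp(-5.0)       # exact solution at tau = 5 for F = 1, chi(0) = 0
print(approx, exact)               # both approximately 0.9933
assert abs(approx - exact) < 1e-2
```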
### Linear PDEs
The theory of linear partial differential equations may be said to begin with Lagrange (1779 to 1785). Monge (1809) treated ordinary and partial differential equations of the first and second order, uniting the theory to geometry, and introducing the notion of the "characteristic", the curve represented by F(z) = 0, which was investigated by Darboux, Levy, and Lie.
### First-order PDEs
Pfaff (1814, 1815) gave the first general method of integrating partial differential equations of the first order, of which Gauss (1815) gave an analysis. Cauchy (1819) gave a simpler method, attacking the subject from the analytical standpoint, but using the Monge characteristic. Cauchy also first stated the theorem (now called the Cauchy-Kovalevskaya theorem) that every analytic differential equation defines an analytic function, expressible by means of a convergent series.
Jacobi (1827) also gave an analysis of Pfaff's method, besides developing an original one (1836) which Clebsch published (1862). Clebsch's own method appeared in 1866, and others are due to Boole (1859), Korkine (1869), and A. Mayer (1872). Pfaff's problem (on total differential equations) was investigated by Natani (1859), Clebsch (1861, 1862), DuBois-Reymond (1869), Cayley, Baltzer, Frobenius, Morera, Darboux, and Lie.
The next great improvement in the theory of partial differential equations of the first order was made by Lie (1872), who placed the whole subject on a solid foundation. After about 1870, Darboux, Kovalevsky, Méray, Mansion, Graindorge, and Imschenetsky became prominent in this line. The theory of partial differential equations of the second and higher orders, beginning with Laplace and Monge, was notably advanced by Ampère (1840).
The integration of partial differential equations with three or more variables was the object of elaborate investigations by Lagrange, and his name became connected with certain subsidiary equations. It was he and Charpit who originated one of the methods for integrating the general equation with two variables; a method which now bears Charpit's name.
### Singular solutions
The theory of singular solutions of ordinary and partial differential equations was a subject of research from the time of Leibniz, but only since the middle of the nineteenth century did it receive special attention. A valuable but little-known work on the subject is that of Houtain (1854). Darboux (starting in 1873) was a leader in the theory, and in the geometric interpretation of these solutions he opened a field which was worked by various writers, notably Casorati and Cayley. To the latter is due (1872) the theory of singular solutions of differential equations of the first order as accepted circa 1900.
### Reduction to quadratures
The primitive attempt in dealing with differential equations had in view a reduction to quadratures. As it had been the hope of eighteenth-century algebraists to find a method for solving the general equation of the nth degree, so it was the hope of analysts to find a general method for integrating any differential equation. Gauss (1799) showed, however, that the differential equation meets its limitations very soon unless complex numbers are introduced. Hence analysts began to substitute the study of functions, thus opening a new and fertile field. Cauchy was the first to appreciate the importance of this view. Thereafter the real question was to be, not whether a solution is possible by means of known functions or their integrals, but whether a given differential equation suffices for the definition of a function of the independent variable or variables, and if so, what are the characteristic properties of this function.
### The Fuchsian theory
Two memoirs by Fuchs (Crelle, 1866, 1868), inspired a novel approach, subsequently elaborated by Thomé and Frobenius. Collet was a prominent contributor beginning in 1869, although his method for integrating a non-linear system was communicated to Bertrand in 1868. Clebsch (1873) attacked the theory along lines parallel to those followed in his theory of Abelian integrals. As the latter can be classified according to the properties of the fundamental curve which remains unchanged under a rational transformation, so Clebsch proposed to classify the transcendent functions defined by the differential equations according to the invariant properties of the corresponding surfaces f = 0 under rational one-to-one transformations.
## Lie's theory
From 1870 Lie's work put the theory of differential equations on a more satisfactory foundation. He showed that the integration theories of the older mathematicians can, by the introduction of what are now called Lie groups, be referred to a common source; and that ordinary differential equations which admit the same infinitesimal transformations present comparable difficulties of integration. He also emphasized the subject of transformations of contact (Berührungstransformationen).
http://mathoverflow.net/revisions/51664/list
## Return to Answer
2 fixed grammar
The compact group $F_4$ is the group of isometries of the octonionic projective plane $\mathbb{OP}^2$ endowed with an analog of the Fubini-Study metric. I suspect the other real groups of type $F_4$ are the isometry groups of the octonionic hyperbolic plane and of the analogous objects built from split octonions. (Related question on mathoverflow.) One of the noncompact real forms of $E_6$ is the group of projective transformations (collineations) of $\mathbb{OP}^2$. The groups of type $F_4$ and $E_6$ arise in this context because of their close relationship to the exceptional Jordan algebras of hermitian three by three matrices over octonions. Indeed -- the group $E_6$ preserves the determinant of these matrices and $F_4$ preserves the determinant and the trace.
The most geometric approach to the exceptional groups that I am aware of (and which goes in this direction) is that of Rosenfeld. Unfortunately I don't have that book. He interprets groups of type $E_7$ and $E_8$ in a similar manner for $(\mathbb{C}\otimes\mathbb{O})\mathbb{P}^2$ and $(\mathbb{H}\otimes\mathbb{O})\mathbb{P}^2$. Some details and an introduction to the subject are in Baez.
http://mathoverflow.net/revisions/70285/list
## Return to Answer
2 edited body
In a nutshell, do a spline interpolation, resample, and then compute a DFT. You will encounter difficulties if the $t_{k+1}- t_k$ vary over several orders of magnitude because the smallest such gap dictates the interval width for resampling. You will also have immense trouble if the data are noisy - in that case you could try to denoise them first, e.g. using wavelets. Finally you will have to handle endpoint problems.
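A minimal sketch of that recipe in Python; the synthetic signal, the cubic spline, and the specific SciPy/NumPy calls are illustrative choices, and for noisy data a different interpolant plus denoising would be needed as noted above.

```python
# Spline-interpolate irregular samples (t, y), resample uniformly, then take a DFT.
# The signal here is synthetic; in practice t and y come from the measurement.
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 10.0, 300))     # irregular sample times
y = np.sin(2 * np.pi * 1.5 * t)              # example signal at 1.5 Hz

dt = np.min(np.diff(t))                      # smallest gap dictates the resampling step
t_uniform = np.arange(t[0], t[-1], dt)
y_uniform = CubicSpline(t, y)(t_uniform)     # interpolate and resample

spectrum = np.fft.rfft(y_uniform)
freqs = np.fft.rfftfreq(len(y_uniform), d=dt)
print(freqs[1 + np.argmax(np.abs(spectrum[1:]))])   # dominant frequency, close to 1.5
```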
http://math.stackexchange.com/questions/98536/how-to-use-a-character-table-to-get-the-centre
# How to use a character table to get the centre
I have been given a character table and I need to find from the table the centre of each character. I don't know how to do this. If someone could please explain how I can find the centre by looking at the character table.
-
Are you just trying to find the centre of each character, or the centre of the whole group? – user16299 Jan 12 '12 at 18:49
1
Do you know what the centre of a character is? Can you tell us what your definition is? The definition I know of makes it obvious how to find it if you are given the character table! – Mariano Suárez-Alvarez♦ Jan 12 '12 at 18:50
thank you guys. sorry for the lack of clarity in the question. i think i have it figured out now. thanks again :) – sarah jamal Jan 13 '12 at 3:17
## 1 Answer
Let $G$ be some finite group and $\text{irr}(G)$ the set of irreducible characters of $G$. For $\chi\in\text{irr}(G)$ define $\mathbf{Z}(\chi)=\left\{g\in G:|\chi(g)|=\chi(1)\right\}$ (called the center of the character). Then, it's a common fact that
$$\mathbf{Z}(G)=\bigcap_{\chi\in\text{irr}(G)}\mathbf{Z}(\chi)$$
So, if you are given a character table, then for each row you can look at the entries for which the modulus of that entry matches with the first entry of that row (assuming you are writing character tables with $\{1\}$ corresponding to the first column), and take the union over the conjugacy classes those entries sit below, call this union $Z_k$ if $k$ is the row we are in. Then, the above theorem says that $\mathbf{Z}(G)=Z_1\cap\cdots\cap Z_n$ if you have $n$ rows.
For proofs of the above statements you can see my blog post here, or for a more comprehensive source you can see Isaacs's Character Theory of Finite Groups.
EDIT: Thankfully Yemon Choi has pointed out that you were just looking how to obtain the center of the CHARACTER from the character table, and not the center of the group. This is implicitly stated in the second paragraph of the above. Namely, to find $\mathbf{Z}(\chi)$, locate the row corresponding to $\chi$, and then take the union of the conjugacy classes lying above the row entries whose modulus equals $\chi(1)$.
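To make this concrete, here is a small Python sketch; the character table of $S_3$ and the class labels are hard-coded purely as an illustration.

```python
# Z(χ) = union of classes where |χ(g)| = χ(1); Z(G) = intersection over all irreducible χ.
# Illustration with the character table of S3; columns are the classes [e], [(12)], [(123)].
classes = ["e", "(12)", "(123)"]
char_table = [
    [1,  1,  1],   # trivial character
    [1, -1,  1],   # sign character
    [2,  0, -1],   # 2-dimensional character
]

def centre_of_character(row):
    return {classes[j] for j, value in enumerate(row) if abs(value) == row[0]}

centres = [centre_of_character(row) for row in char_table]

for row, z in zip(char_table, centres):
    print(row, "->", sorted(z))
print("Z(G) classes:", sorted(set.intersection(*centres)))   # ['e'], since Z(S3) is trivial
```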
-
1
The OP's question seems to ask the different (and easier) question of determining each $Z(\chi)$, though it is not clear what definition she is using – user16299 Jan 12 '12 at 18:48
@YemonChoi Thank you for pointing that out. I didn't even catch that--it's sort of an odd thing to ask. I have edited accordingly. – Alex Youcis Jan 12 '12 at 18:50
http://mathhelpforum.com/calculus/86069-missing-apex-parallelogram.html
1. ## Missing apex on a parallelogram
Let A=(1,0,-2), B=(-1,1,2),C=(3,4,0)
(a) Find the point D such that ABCD is a parallelogram where the apex D is opposite to A.
(b) Write parametric equations and symmetric equations of the line L passing through D and parallel to the segment BC.
2. Hello, Undefdisfigure!
Let: . $A=(1,0,\text{-}2),\;\; B=(\text{-}1,1,2),\;\;C=(3,4,0)$
(a) Find point $D$ such that $ABCD$ is a parallelogram where vertex $D$ is opposite $A.$
We would find vector $AB$ like this . . .
. . $\overrightarrow{AB} \;=\;\underbrace{(\text{-}1,1,2)}_B - \underbrace{(1,0,\text{-}2)}_A \;=\;\langle\text{-}2,1,4\rangle$
Let $D \,=\,(x,y,z)$
We know that: . $\overrightarrow{AB} \,=\,\overrightarrow{CD}$
We would find vector CD like this:
. . $CD \;=\;(x,y,z) - (3,4,0) \:=\:\langle \text{-}2,1,4\rangle \quad\Rightarrow\quad \langle x-3,y-4,z-0\rangle \:=\:\langle \text{-}2,1,4\rangle$
So we have: . $\begin{array}{ccc}x-3 \:=\:\text{-}2 & \Rightarrow & x \:=\:1 \\ y-4\:=\:1 & \Rightarrow & y \:=\:5 \\ z - 0 \:=\:4 & \Rightarrow & z \:=\:4 \end{array}$
Therefore: . $D(1,5,4)$
(b) Write parametric equations and symmetric equations of the line $L$
passing through $D$ and parallel to the segment $BC.$
The direction of $\overrightarrow{BC}$ is: . $\vec v \;=\;(3,4,0) - (\text{-}1,1,2) \;=\;\langle 4,3,\text{-}2\rangle$
The line through $D(1,5,4)$ with direction $\vec v \:=\:\langle 4,3,\text{-}2\rangle$ has equations:
. . . . $\begin{array}{ccc}x &=& 1 + 4t \\ y &=& 5 + 3t \\ z &=& 4 - 2t \end{array}$ . . and . . $\frac{x-1}{4} \:=\:\frac{y-5}{3} \:=\:\frac{z-4}{-2}$
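A quick numerical check of the answer above (purely a verification sketch; the three values of $t$ are arbitrary):

```python
# Verify D = B + C - A (so that AB = CD) and sample the line through D with direction BC.
import numpy as np

A = np.array([1, 0, -2])
B = np.array([-1, 1, 2])
C = np.array([3, 4, 0])

D = C + (B - A)            # AB = CD  =>  D = C + (B - A)
print(D)                   # [1 5 4]

v = C - B                  # direction of BC: [4 3 -2]
for t in (0.0, 1.0, -2.5):
    P = D + t * v          # points on L: (1 + 4t, 5 + 3t, 4 - 2t)
    # each symmetric-form ratio should equal t
    print((P[0] - 1) / 4, (P[1] - 5) / 3, (P[2] - 4) / -2)
```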
http://jdh.hamkins.org/victoria-gitman/
# Victoria Gitman
Posted on August 13, 2012 by Joel David Hamkins
Victoria Gitman earned her Ph.D. under my supervision at the CUNY Graduate Center in June, 2007. For her dissertation work, Victoria had chosen a very difficult problem, the 1962 question of Dana Scott to characterize the standard systems of models of Peano Arithmetic, a question in the field of models of arithmetic that had been open for over forty years. Victoria was able to make progress, now published in several papers, by using an inter-disciplinary approach, applying set-theoretic ideas—including a use of the proper forcing axiom PFA—to the problem in the area of models of arithmetic, where such methods hadn’t often yet arisen. Ultimately, she showed under PFA that every arithmetically closed proper Scott set is the standard system of a model of PA. This result extends the classical result to a large new family of Scott sets, providing for these sets an affirmative solution to Scott’s problem. In other dissertation work, Victoria untangled the confusing mass of ideas surrounding various Ramsey-like large cardinal concepts, ultimately separating them into a beautiful hierarchy, a neighborhood of the vast large cardinal hierarchy intensely studied by set theorists. (Please see the diagram in her dissertation.) Victoria holds a tenure-track position at the New York City College of Technology of CUNY.
Victoria Gitman
Victoria Gitman, “Applications of the Proper Forcing Axiom to Models of Peano Arithmetic,” Ph.D. dissertation for the Graduate Center of the City University of New York, June 2007.
Abstract. In Chapter 1, new results are presented on Scott’s Problem in the subject of models of Peano Arithmetic. Some forty years ago, Dana Scott showed that countable Scott sets are exactly the countable standard systems of models of PA, and two decades later, Knight and Nadel extended his result to Scott sets of size $\omega_1$. Here it is shown that assuming the Proper Forcing Axiom, every arithmetically closed proper Scott set is the standard system of a model of PA. In Chapter 2, new large cardinal axioms, based on Ramsey-like embedding properties, are introduced and placed within the large cardinal hierarchy. These notions generalize the seldom encountered embedding characterization of Ramsey cardinals. I also show how these large cardinals can be used to obtain indestructibility results for Ramsey cardinals.
http://planetmath.org/freesemigroupwithinvolution
# free semigroup with involution
Let $X,X^{\ddagger}$ be two disjoint sets in bijective correspondence given by the map ${}^{\ddagger}:X\rightarrow X^{\ddagger}$. Denote by $Y=X\amalg X^{\ddagger}$ (here we use $\amalg$ instead of $\cup$ to remind that the union is actually a disjoint union) and by $Y^{+}$ the free semigroup on $Y$. We can extend the map ${}^{\ddagger}$ to an involution ${}^{\ddagger}:Y^{+}\rightarrow Y^{+}$ on $Y^{+}$ in the following way: given $w\in Y^{+}$, we have $w=w_{1}w_{2}...w_{k}$ for some letters $w_{i}\in Y$; then we define
$w^{\ddagger}=w_{k}^{\ddagger}w_{k-1}^{\ddagger}\cdots w_{2}^{\ddagger}w_{1}^{\ddagger}.$
It is easily verified that this is the unique way to extend ${}^{\ddagger}$ to an involution on $Y^{+}$. Thus, the semigroup $(X\amalg X^{\ddagger})^{+}$ with the involution ${}^{\ddagger}$ is a semigroup with involution. Moreover, it is the free semigroup with involution on $X$, in the sense that it solves the following universal problem: given a semigroup with involution $S$ and a map $\Phi:X\rightarrow S$, there exists a unique homomorphism of semigroups with involution $\overline{\Phi}:(X\amalg X^{\ddagger})^{+}\rightarrow S$ such that the following diagram commutes:
$\xymatrix{ & X \ar[r]^{\iota} \ar[d]_{\Phi} & (X\amalg X^{\ddagger})^{+} \ar[dl]^{\overline{\Phi}} \\ & S & }$
where $\iota:X\rightarrow(X\amalg X^{\ddagger})^{+}$ is the inclusion map. It is well known from universal algebra that $(X\amalg X^{\ddagger})^{+}$ is unique up to isomorphisms.
If we use $Y^{*}$ instead of $Y^{+}$, where $Y^{*}=Y^{+}\cup\{\varepsilon\}$ and $\varepsilon$ is the empty word (i.e. the identity of the monoid $Y^{*}$), we obtain a monoid with involution $(X\amalg X^{\ddagger})^{*}$ that is the free monoid with involution on $X$.
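As an illustration of the construction, here is a small Python sketch. The representation is an ad-hoc choice made here (a letter `"a"` is stored as a string and its formal partner as `"a'"`, and words are tuples of letters, with the empty tuple playing the role of the empty word $\varepsilon$); it is not part of the entry above.

```python
# Free monoid with involution on X: words over Y = X ⊔ X‡, with (w1...wk)‡ = wk‡...w1‡.
# Ad-hoc representation for this sketch: the partner of letter "a" is "a'".
def dagger_letter(letter):
    return letter[:-1] if letter.endswith("'") else letter + "'"

def dagger_word(word):                 # word = tuple of letters; () is the empty word ε
    return tuple(dagger_letter(l) for l in reversed(word))

def concat(u, v):                      # the monoid operation
    return u + v

u = ("a", "b'", "a")
v = ("c",)
assert dagger_word(dagger_word(u)) == u                                     # ‡ is an involution
assert dagger_word(concat(u, v)) == concat(dagger_word(v), dagger_word(u))  # anti-homomorphism
print(dagger_word(u))                  # ("a'", "b", "a'")
```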
## Mathematics Subject Classification
20M10 General structure theory
http://mathoverflow.net/questions/79461/positive-hermitian-elements-in-m-n-mathbbc/79508
## positive hermitian elements in $M_n(\mathbb{C})$
Elements of the set $P$ of positive hermitian $n\times n$ matrices over complex numbers have some special properties:
(i) they are closed under sum,
(ii) they are closed under multiplication by positive scalars,
(iii) the spectrum of every matrix is positive (all eigenvalues are nonnegative, and not all are equal to 0),
(iv) $P+(-P)+iP+(-iP)=M_n(\mathbb{C})$.
Does any other subset of matrix algebra $M_n(\mathbb{C})$ satisfy these properties except for $tPt^{-1}$, where $t$ is an invertible element in $M_n(\mathbb{C})$?
-
$X^*AX \ge 0$ for all $X$ if $A \ge 0$. – S. Sra Oct 29 2011 at 13:07
But $x^∗ax$ is also hermitian matrix if $a$ is. So $x^∗Px⊂P$, and $x^∗M_n(\mathbb{C})x=M_n(\mathbb{C})$ iff $x$ is invertible. So $x^∗Px$ either does not satisfy (iv) or equals $P$. – spelas Oct 29 2011 at 13:54
ah, ok. i did not read (iv) at all :-) – S. Sra Oct 29 2011 at 14:21
The set of upper (lower) triangular matrices with non-negative diagonals satisfies (i), (ii), and (iii) trivially since the eigenvalues lie on the diagonal. If we call the set of such upper triangular matrices $\mathcal U$, and the set of such lower triangular matrices $\mathcal L$, then we have a variant of (iv) which is $\mathcal U + -\mathcal U + i \mathcal U + -i \mathcal U + \mathcal L + -\mathcal L + i \mathcal L + -i \mathcal L = M_n(\mathbb C)$. – Jack Poulson Oct 29 2011 at 19:48
## 1 Answer
I think I recall seeing this question in a Halmos book on linear algebra, either "Finite Dimensional Vector Spaces" or the "Linear Algebra Problem Book", but I don't remember which, and I don't have them on hand.
Here are some subsets which satisfy 3 out of 4 conditions:
Jack Poulson already mentioned upper triangular matrices, which only violate (iv).
The set of all Hermitian matrices only violates (iii).
The set of Hermitian matrices $P_r$, where all eigenvalues are greater than some positive real $r$, is closed under addition (but not under positive scaling), and every matrix can be written as an element of $P_r+(-P_r)+iP_r+(-iP_r)$. This set is a strict subset of $P$, and any element of $P \setminus P_r$ is not contained in $tP_rt^{-1}$ for any invertible $t \in M_n(\mathbb{C})$ (consider diagonalization).
The set of non-diagonalizable matrices with real, non-negative eigenvalues satisfies everything but (i). For $M_2$ explictly, consider matrices of the form $$A = \left[ {\begin{array}{cc} r_1 & z \\ c\bar{z} & r_2 \\ \end{array} } \right]$$ where $r_1$, $r_2$ are real, $r_1 + r_2 > 0$, $z \neq 0$, and $c = -\left(\frac{r_1 - r_2}{2|z|}\right)^2$. Then $A$ has one repeated eigenvalue, $\frac{r_1 + r_2}{2}$, and one linearly independent eigenvector $(z, \frac{r_2-r_1}{2})$. The set of all such matrices satisfies (ii), (iii), (iv), and is not conjugate to $P$ — since everything in $P$ is diagonalizable — but is not closed under addition.
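A quick numerical illustration of this last family (the parameter values below are arbitrary choices for the check, not from the answer itself):

```python
# With c = -((r1 - r2)/(2|z|))^2 the matrix has the single repeated eigenvalue (r1 + r2)/2
# and a one-dimensional eigenspace, hence it is not diagonalizable.
import numpy as np

r1, r2, z = 3.0, 1.0, 1.0 + 2.0j       # arbitrary values with r1 + r2 > 0 and z != 0
c = -((r1 - r2) / (2 * abs(z))) ** 2
A = np.array([[r1, z], [c * np.conj(z), r2]])

print(np.linalg.eigvals(A))            # both eigenvalues ≈ (r1 + r2)/2 = 2

lam = (r1 + r2) / 2
print(np.linalg.matrix_rank(A - lam * np.eye(2)))   # 1, so only one independent eigenvector
```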
-
I cannot find the question in Halmos books. There were some other useful information. Thank you. For case $n=2$, might Mathematica be able to compute this? – spelas Oct 30 2011 at 16:02
http://physics.aps.org/synopsis-for/print/10.1103/PhysRevB.83.220508
# Synopsis: Nodes or no nodes
#### Relation between nodes and 2Δ/Tc on the hole Fermi surface in iron-based superconductors
Saurabh Maiti and Andrey V. Chubukov
Published June 16, 2011
In iron-based superconductors, the existence of nodes in the superconducting gap and the symmetry of the order parameter are topics of considerable debate. The Fermi surface in these materials consists of hole pockets at the center of the Brillouin zone and electron pockets at the zone corners. Many experiments suggest that the symmetry of the superconducting gap is $s$-wave-like and changes sign between the electron and hole pockets. While $s$-wave symmetry is not usually associated with nodes, accidental nodes can occur due to a modulation of the gap on the electron pockets. Thus far, angle-resolved photoemission spectroscopy (ARPES) has succeeded in determining the superconducting gap on the hole Fermi surface but cannot resolve the two gaps on the electron pockets to see whether there are nodes.
Writing in Physical Review B, Saurabh Maiti and Andrey Chubukov at the University of Wisconsin in Madison have made a connection between the measured gap on the hole Fermi surface and the possible presence of superconducting gap nodes on the electron Fermi surface. The study is based upon a model that considers an angle-dependent interaction between the electron and hole Fermi pockets. Maiti and Chubukov are able to predict the presence or absence of gap nodes on the electron Fermi surface based solely upon whether the ratio of the measured gap on the hole Fermi surface to the superconducting transition temperature is above or below a certain threshold. This study might be useful in classifying the nodal behavior of the growing zoo of iron superconductors. – Hari Dahal
http://mathoverflow.net/questions/27851?sort=oldest
## Polynomials having a common root with their derivatives
Here is a question someone asked me a couple of years ago. I remember having spent a day or two thinking about it but did not manage to solve it. This may be an open problem, in which case I'd be interested to know the status of it.
Let $f$ be a one variable complex polynomial. Supposing $f$ has a common root with every $f^{(i)},i=1,\ldots,\deg f-1$, does it follow that $f$ is a power of a degree 1 polynomial?
upd: as pointed out by Pedro, this is indeed a conjecture (which makes me feel less badly about not being able to do it). But still the question about its status remains.
-
Is your "with any" a 'there exists' or a 'for all'? – Mark Jun 11 2010 at 19:24
Mark -- "any" here means "every". – algori Jun 11 2010 at 20:05
Related question: mathoverflow.net/questions/52006/… – Felipe Voloch May 14 2011 at 22:30
## 2 Answers
That is known as the Casas-Alvero conjecture. Check this out, for instance:
http://front.math.ucdavis.edu/0605.5090
Not sure of its current status, though.
-
Thanks, Pedro! – algori Jun 11 2010 at 20:18
5
It is still open. – quim Jun 11 2010 at 21:08
The strongest result in this direction that I've heard of is Sudbery's theorem (which was originally conjectured by Popoviciu and Erdös).
Theorem. Let $P(z)$ be a polynomial of degree $n\geq 2$ and let $\Pi(z)=\prod\limits_{k=0}^{n-1}P^{(k)}(z)$ where $P^{(k)}$ is the $k$th derivative of $P$. Then either $\Pi(z)$ has exactly one distinct root or $\Pi(z)$ has at least $n+1$ distinct roots.
See the original paper by Sudbery.
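As a small empirical illustration of the theorem (not part of the original answer; the degree-4 polynomial below is an arbitrary choice), one can count the distinct roots of $\Pi$ numerically:

```python
# Count distinct roots of Π = P · P' · ... · P^(n-1) for an arbitrary degree-4 polynomial.
# Sudbery's theorem says the count is either exactly 1 or at least n + 1 = 5.
import numpy as np

P = np.array([1.0, 2.0, 0.0, -1.0, 5.0])      # z^4 + 2z^3 - z + 5
derivatives = [P]
for _ in range(3):                            # P', P'', P'''
    derivatives.append(np.polyder(derivatives[-1]))

roots = np.concatenate([np.roots(q) for q in derivatives])
distinct = []
for r in roots:                               # cluster numerically equal roots
    if not any(abs(r - s) < 1e-8 for s in distinct):
        distinct.append(r)
print(len(distinct))                          # ≥ 5 here; it would be 1 for P = (z - a)^4
```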
-
Thanks, Andrey! – algori Jun 11 2010 at 20:30
You're welcome. – Andrey Rekalo Jun 11 2010 at 20:31
For those who do not have an access to the journal, there is an AoPS discussion at artofproblemsolving.com/Forum/… which contains enough information to recover the full proof :) – fedja May 29 2011 at 0:24
http://math.stackexchange.com/questions/tagged/combinatorics?page=3&sort=newest&pagesize=15
# Tagged Questions
Permutations, combinations, bijective proofs, generating functions
2answers
57 views
### Binomial probability with summation
Show that $$\sum_{k=0}^{m} \frac{m!(n-k)!}{n!(m-k)!} = \frac{n+1}{n-m+1}$$ Attempt: It becomes: $$\sum_{k=0}^{m } \frac{\binom{m}{k}}{\binom{n}{k}}$$ Telescoping, pairing, binomial theorem don't ...
2answers
45 views
### $a_{k+1}-a_k = a_2 - a_1$,$\sum \limits_{k=1}^{n}{a_k}$=?
I need to find an explicit formula for the sum $\sum \limits_{k=1}^{n}{a_k}$ where $(a_k)_{k∈ℕ}∈ℚ^ℕ$ with $a_{k+1}-a_k = a_2 - a_1$ for all k∈ℕ I would love to start with by collecting the values of ...
1answer
140 views
### No of labeled trees with n nodes such that certain pairs of labels are not adjacent.
Moderator Note: This is a current contest question on codechef.com. What is the number of trees possible with $n$ nodes where the $i$th and $(i+1)$th node are not adjacent to each other for \$i ...
2answers
59 views
### Permutations of a queue of interlaced boys and girls.
Suppose $5$ boys and $4$ girls are to be arranged in a queue such that between any two boys there is at least one girl. Find the number of such arrangements possible. What i think is $5$ boys ...
4answers
145 views
### Find a ternary $4\times 39$ matrix satisfying the conditions below
Can you find a matrix $A_{4\times39}$ with elements from $\{-1,0,1\}$ so that No column is all zero. All columns are different. No column is $-1$ times another column. Each row consists of $13$ of ...
1answer
42 views
### total number of different mixes
| Patient Age | Avg Visits / Year |
| --- | --- |
| <1 year | 7.5 |
| 1-4 years | 3.0 |
| 5-14 years | 1.8 |
| 15-24 years | 1.7 |
| 25-44 years | 2.6 |
| 45-64 years | ... |
0answers
140 views
### Possible Playable Chords on a Guitar
Fingerstyle Guitar Chord Diversity Check Considering a $20$-fret $6$-string acoustic guitar and supposing that the fretting range (inclusive of the fingered notes) for an average hand is $4$ frets in ...
1answer
33 views
### how often does a value appear in a combination
Say I have a set of numbers 1,2,3,4,5,6,7,8,9,10 and I say 10 C 4 I know that equals 210. But lets say I want to know how often 3 appears in those combinations how do I determine that?
2answers
154 views
### Choosing a linear map $(\mathbb{Z}/2\mathbb{Z})^n \rightarrow \mathbb{Z}/2\mathbb{Z}$ which is nonzero on half of a sequence of vectors
Let $v_1,\ldots,v_m \in (\mathbb{Z}/2\mathbb{Z})^n$ be nonzero vectors. Is it always possible to choose a linear map $f : (\mathbb{Z}/2\mathbb{Z})^n \rightarrow \mathbb{Z}/2\mathbb{Z}$ such that $f$ ...
0answers
211 views
+500
### Conjecture regarding trapping rational numbers in some special intervals
Conjecture: Let $b\in\mathbb{N}_{\geq3}$ and $\{x_i\}$ be a collection of $b−2$ rational numbers greater than $1$. Does there always exist a natural number $a$ such that for all $i$ there exists some ...
6answers
60 views
### How to prove a limit with a recurrence?
$s_1 = 1$ and $s_{n+1} = \dfrac{s_n + 1}{3}$ for $n \in \Bbb N$. How do you find $\displaystyle \lim_{x\to \infty} s_n$? Then how do you prove that the value is the limit using the definition of the ...
1answer
52 views
### Counting 0-1 matrices up to symmetry
I'm interested in counting the number of n×n 0-1 matrices with a given number of 1s up to rotation and reflection. What is the best way to do this if n is not too small? For example, consider ...
1answer
35 views
### Proof of bipartite graphs with $k$ edges
Let $b_k(n)$ be the number of bipartite graphs (without multiple edges) with $k$ edges on the vertex set $[n]$. Show that: \sum_{n\geq 0}\sum_{k\geq 0}b_k(n)q^k\frac{x^n}{n!}=\sqrt{\sum_{n\geq ...
3answers
121 views
### Evaluate a sum with binomial coefficients
$$\text{Find} \ \ \sum_{k=0}^{n} (-1)^k k \binom{n}{k}^2$$ I expanded the binomial coefficients within the sum and got $$\binom{n}{0}^2 + \binom{n}{1}^2 + \binom{n}{2}^2 + \dots + \binom{n}{n}^2$$ ...
1answer
71 views
### 65-card deck consisting of 13 ranks and 5 suits
** I FIGURED OUT 15 out of 16 cases. I don't understand the last case of RUNT. Anyone helps? I recently went to a math event and one person presented a weird card deck, consisting of 13 ranks and 5 ...
0answers
25 views
### Probability question using PIE
Five people check identical suitcases before boarding an airplane. At the baggage claim, each person takes one of the five suitcases at random. What is the probability that every person ends up with ...
6answers
48 views
### calculate the number of possible number of words
If one word can be at most 63 characters long. It can be combination of : letters from a to z numbers from 0 to 9 hyphen - but only if not in the first or the last character of the word I'm trying ...
2answers
33 views
### Combinatorics/Probability - Multiple Groups Example Problem
Joe, an avid and properly licensed sportsman, is in his hunting blind when he locates 20 Canada geese, 25 Mallard ducks, 40 Bald Eagles, 10 Whopping Cranes, and 5 Flamingos. Joe randomly selects ...
2answers
119 views
### Derivative of Schur function
In his answer to http://mathoverflow.net/questions/129854, R. Stanley says that the partial derivative (over the relevant x[i]) of the Schur function of a partition lambda of n equals the sum the ...
1answer
17 views
### Prove that $h_r(x_1,\dots,x_n)=\sum^n_{k=1}x^{n-1+r}_k\prod_{i\neq k}(x_k-x_i)^{-1}$
How do I show that $$h_r(x_1,\dots,x_n)=\sum^n_{k=1}x^{n-1+r}_k\prod_{i\neq k}(x_k-x_i)^{-1}$$ Can anyone just give me like a hint or "headstart"? Thanks!
1answer
41 views
### Growth rate of formula
I have formula: $\frac{(m+n)!}{m!n!}$ I am wondering what is growth rate of it. Can I say that it grows exponentially with m and n? Or maybe this is different growth rate? Greetings, Rnd
0answers
25 views
### Sets of numbers satisfying a simple additive property
There are four sets of size $N$ in the integers, say $A_1,A_2,A_3,A_4$. And for at least $\epsilon N^3$ of the tuples $(a_1,a_2,a_3,a_4) \in A_1 \times A_2 \times A_3 \times A_4$ it is true that \$a_1 ...
3answers
65 views
### Factorial Equality Problem
I'm stuck on this problem, any help would be appreciated. Find all $n \in \mathbb{Z}$ which satisfy the following equation: $${12 \choose n} = \binom{12}{n-2}$$ I have tried to put each of them ...
1answer
44 views
### binary circle - difficult question
I ran into this question and I'm not really sure how to start. we are looking at 100 0/1's that are written arround a circle. for a binary sequence $w$, we'll define $n_{w}$ as the number of times ...
1answer
66 views
### Probability to complete a sequence with two attempts
Imagine a slot machine with $N$ reels. I want to calculate the probability $P$ that a player hits a certain sequence $A$, if the player has given the possibility to spin again (and only once again), ...
0answers
57 views
### Ball and holder problem [duplicate]
I am trying to solve this but having a tough time deriving the formula. There are $X$ ball and $Y$ holders $Y \leq X$. Out of the $X$ balls, $N$ are red and $X-N$ are blue. What is the probability ...
1answer
50 views
### Ferrers Diagram Partitions
Using Ferrer's diagram, prove that the number of partitions of n in which each part is 1 or 2 is equal to the number of partitions of n+3 which has exactly two distinct parts. Any help please, all I ...
2answers
52 views
### Solution gives wrong answer to probability problem
Great Northern Airlines flies small planes in northern Canada and Alaska. Their largest plane can seat 16 passengers seated in 8 rows of 2. On a certain flight flown on this plane, they have 12 ...
1answer
28 views
### Combinatorial Techniques: Putting two and two together
This is a $3$-part question. I got the first two parts, but could not get the third part (which uses the first two parts): Pick sequence of $8$ coins from sack of $40$ coins, containing $10$ pennies, ...
0answers
106 views
### (3n,n)-Turán graph [closed]
I'm working on a problem regarding (kn,n)-Turán graphs. The (2n,n)-Turán graph, also known as the cocktail party graph, has a closed formula for its number of spanning trees. I want to know if there ...
2answers
179 views
### A card game with no decisions
A friend showed me a mindless card game he plays, in which the initial state of the deck completely determines whether he wins or loses. The game is played as follows: Shuffle a standard $52$ card ...
2answers
38 views
### Probability/Combinatorics Problem - Old Maid Cards
A special deck of Old Maid cards consist of 25 pairs and a single old maid card. All 51 cards evenly between you and two other players – 17 cards for each player. (a) how many different ...
0answers
42 views
### Calculating a probability
Given $m\cdot e$ balls, $b$ of which are black (suppose the rest are white balls). Randomly put the balls into $m$ baskets, with $e$ balls in each basket. What is the probability of the event that ...
2answers
53 views
### A probability question: a building and an elevator.
Suppose that 7 people waiting for an elevator in a building with 14 flours. Q: What is the probability that every person get out in different flour? My attempt: There is \$14 \cdot 13 \cdot 12 \cdot ...
0answers
66 views
### A combinatorial problem.
Let be $(X, \mathbb{A}, \mu)$ a measure space, a partition of $X$ is a disjoint family $\xi=\{P_1,\ldots,P_k \}$ of measurable sets such tath $\bigcup P_i=X\pmod0).$ If \$\xi=\{P_1,\ldots,P_k ...
1answer
58 views
### Distributing objects in boxes
In how many way can we distribute: 7 objects in 3 boxes; provided that: 1) objects are distinct, boxes are distinct and boxes may be empty; 2) objects are distinct, boxes are distinct and boxes may ...
3answers
107 views
### How can one show $100!=100 \cdot 99!$ by combinatorial arguments
How does one show $100!=100\cdot 99!$ by using combinatorial arguments?
0answers
32 views
### Different Perspectives of Multinomial Theorem & Partitions
There are 2 important interpretations of the multinomial theorem and coefficients. 1: Determining the number of ordered strings that can be formed using a set of letters. For example, with 1 m, 4 ...
1answer
53 views
### What is the probability that, given the smallest of 50 random integers(>0), it will be the smallest of 50 other random integers (one being itself)?
More generally, if an array of random integers (size N), and another array of random integers (size M), "overlap" by R numbers (have them in common): What is the chance that the smallest of one is the ...
2answers
37 views
### Probability of selecting correct answer in 15 out of 25 exercises with 0.25 chance
There are 25 exercises, each one consists of answers: a, b, c, d and only one answer is correct. My question is what is the probability of selecting correct answer in 15 out of 25 exercises. My idea: ...
2answers
39 views
### How many different 2-regular graphs are there with 5 vertices?
How many different 2-regular (simple) graphs are there with 5 vertices? I just asked a very similar question, and I actually already understand the answer of this question. I think there are ...
0answers
34 views
### Is there a two name Wikipedia pangram? [closed]
Benjamin Franklin Goodrich and François-Xavier Wurth-Paquet are people in Wikipedia with the letters A-O and N-X. Is there a pair of names in Wikipedia that has all the letters A-Z? I use ...
0answers
35 views
### which kind of data related to permutation group and SSYT [closed]
if do not have these books, then just focus on which data related to permutation group and SSYT page 309 in enumerative combinatorics book volume 2 (old edition) prepare to apply this chapter, which ...
1answer
22 views
### Partitioning of subsets
This is a previous exam question. Let $S$ be a subset of $\{10,11,...,99\}$ containing 10 elements. Show that there will exist two disjoint subsets $A$ and $B$ of $S$ such that sum of the elements of ...
0answers
225 views
### Counting number of spanning trees in $(3n,n)$-turan graph [closed]
Moderator Note: This is a current contest question on codechef.com. I'm working on a problem regarding $(kn,n)$-Turán graphs. The $(2n,n)$-Turán graph, also known as the cocktail party graph, has ...
3answers
50 views
### About ascending numbers
I have that a positive integer d is said to be ascending if in its decimal representation: $$d=d_md_{m-1}\cdots d_2d_1$$ we have $$0<d_m\leq d_{m-1}\leq \cdots \leq d_2\leq d_1.$$ How can I find ...
1answer
187 views
### What can we say about the size of $HK\cap KH$ when $HK\neq KH$?
If $G$ is a finite group, and $H$, $K$ are proper subgroups of $G$, then it is not necessary that $HK=KH$. But, these two subsets have same size. The question I would like to ask, then, is If ...
2answers
85 views
### How does this “combinatorial proof” work?
For any non-integer $n$, $$(1+x)^n=\sum_{k=0}^{n}\binom{n}{k}x^k$$ Let $y_1,\dots,y_n$ be variables and, for any subset $S$ of $\{1,\dots,n\}$, let $y^S$ denote the product of the $y_i$'s for each ...
1answer
67 views
### Conditional probability Bayes Theorem
I am trying to solve this problem but I am not sure how to obtain the formula given below. Any help would be appreciated. A boy is selected at random from among the children belonging to families ...
1answer
30 views
### $\frac{1}{4^n}\binom{1/2}{n} \stackrel{?}{=} \frac{1}{1+2n}\binom{n+1/2}{2n}$ - An identity for fractional binomial coefficients
In trying to write an answer to this question: calculate the roots of $z = 1 + z^{1/2}$ using Lagrange expansion I have come across the identity \frac{1}{4^n}\binom{1/2}{n} = ...
http://mathoverflow.net/revisions/27901/list
## Return to Question
# Does Cauchy continuity imply uniform continuity? [No.]
4 added 75 characters in body
It is well known that if $X$ is a first countable topological space and $Y$ is a topological space, then $f : X \rightarrow Y$ is continuous iff
`$$\forall x \in {\rm map}(\mathbb{N},X),\forall p \in X \quad x_{n} \rightarrow p \Rightarrow f(x_{n}) \rightarrow f(p)$$`
It is also well known that if $X$ and $Y$ are metric spaces and $f : X \rightarrow Y$ is uniformly continuous, then $f$ maps Cauchy sequences to Cauchy sequences.
By analogy it seems plausible that if a function between metric spaces maps Cauchy sequences to Cauchy sequences then it must be uniformly continuous. However, mimicking the proof of the analogous result for continuous maps doesn't work, which makes me think the result is false. Does anyone know any counterexamples?
Also on the uniform continuity wikipedia page, it says that the result is true if $X$ and $Y$ are subsets of $\mathbb{R}^{n}$. EDIT: It actually doesn't say this, I misread the page.