url (stringlengths 17-172) | text (stringlengths 44-1.14M) | metadata (stringlengths 820-832)
---|---|---|
http://www.fields.utoronto.ca/programs/scientific/12-13/Back2Fields/wick.html
|
# SCIENTIFIC ACTIVITIES
May 18, 2013
FIELDS 20th ANNIVERSARY Back2Fields COLLOQUIUM SERIES
June 20, 2012 at 3:30 p.m., Fields Institute, Room 230
Brett Wick, Georgia Institute of Technology
Function Theory meets Operator Theory: The Corona Problem and Bilinear Forms
Abstract: During my time as the Jerrold E. Marsden Postdoctoral Fellow at the Fields Institute, I was exposed to many interesting mathematical questions in function theory, operator theory and harmonic analysis. In this talk, I will discuss two interesting and important questions that I was pointed to during my time there and was fortunate to have a hand in solving.
In the first question, extensions of Carleson's Corona Theorem will be discussed. The Corona Theorem has served as a major motivation for many results in complex function theory, operator theory and harmonic analysis. In a simple form, the result states that for $N$ bounded analytic functions $f_1,\ldots,f_N$ on the unit disc such that $\inf \left\vert f_1\right\vert+\cdots+\left\vert f_N\right\vert\geq\delta>0$ it is possible to find $N$ other bounded analytic functions $g_1,\ldots,g_N$ such that $f_1g_1+\cdots+f_Ng_N = 1$. Moreover, the functions $g_1,\ldots,g_N$ can be chosen with some norm control in terms of $\delta$. Extensions of this result to several variables and different spaces of analytic functions will be discussed.
Motivated by questions in operator theory and partial differential equations, one frequently encounters bilinear forms on various spaces of functions. It is interesting to determine the behavior of this form (e.g., boundedness, compactness, etc.) in terms of function theoretic information about a naturally associated symbol of this operator. For the second question, I will talk about necessary and sufficient conditions in order to have a bounded bilinear form on the Dirichlet space. This condition will be expressed in terms of a Carleson measure condition for the Dirichlet space.
The connection between both these problems is a certain family of spaces of analytic functions and some fundamental ideas in harmonic analysis. This talk will illustrate the usefulness of these ideas through the resolution of these two mathematical problems.
The Back2Fields Colloquium Series celebrates the accomplishments of former postdoctoral fellows of Fields Institute thematic programs. Over the past two decades, these programs attracted the rising stars of their fields and often launched very distinguished research careers. As part of the 20th anniversary celebrations, this series of colloquium talks will allow a general mathematical public to become familiar with some of their work.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9300842881202698, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/186335/method-for-expanding-an-expression-with-function-composition?answertab=active
|
# Method for expanding an expression with function composition
Sorry for the bad title; I couldn't think of a good way to phrase it. Someone more knowledgeable could edit it if that helps. I'm an engineer, not a mathematician, so my level of rigor is somewhat lacking.
This question is motivated by my answer to this question at the Signal Processing Stack Exchange site. I currently have only a poor, handwaving explanation of how to express a function that is defined as the composition of two functions solely in terms of the independent variable used in the innermost layer of the composition. More precisely:
$$Y_D(z) = X\left(z^{1/M}\right) = f(g(z))$$
where
$$f(\alpha) = X(\alpha)$$
$$g(\beta) = \beta^{1/M}$$
(with $z, \alpha, \beta \in \mathbb{C}$)
Specifically, I would like to arrive at an expression for $Y_D(z)$ that is solely in terms of the intermediate function $X(z)$. The question references a paper that indicates that it can be expressed as:
$$Y_D(z)=\frac{1}{M}\sum_{k=0}^{M-1}X(z^{1/M} e^{j2\pi k/M})$$
Which seems to make sense to me, but I'm not sure how to rigorously arrive at that result. In the past, I recall using a method to derive expressions for the pdf of functions of a random variable. That method involved something like breaking the function applied to the random variable into sections that are one-to-one and treating them separately. The individual results were summed together, with the absolute value of the derivative of the mapping function used as a scaling factor in there somewhere.
It seems to me that a similar method could be applied here; I understand that for any particular value $z$ that we would want to evaluate $Y_D(z)$ at, there are $M$ distinct values $z^{1/M}$ in the domain of $X(z^{1/M})$ that would map to it, hence the sum, with the $M$ roots of unity included to catch all of the $M$-th roots of $z$. I'm just not sure how to cleanly get there.
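As a sanity check, the identity can be verified numerically for a finite sequence (a rough sketch; the test sequence and evaluation points below are arbitrary, and $Y_D$ is taken to be the $z$-transform of the downsampled sequence $y[n]=x[Mn]$):

```python
# Numerically check Y_D(z) = (1/M) * sum_k X(z^{1/M} * exp(j*2*pi*k/M))
# for a finite sequence x[n], where y[n] = x[M*n] and X, Y_D are finite z-transforms.
import numpy as np

def z_transform(seq, z):
    # Finite z-transform: sum_n seq[n] * z^{-n}
    return sum(c * z ** (-n) for n, c in enumerate(seq))

M = 3
x = np.array([1.0, -2.0, 0.5, 3.0, 1.5, -1.0, 2.0, 0.25, -0.5])
y = x[::M]                                   # downsampled sequence y[n] = x[M*n]

rng = np.random.default_rng(0)
for _ in range(5):
    z = rng.normal() + 1j * rng.normal()     # arbitrary nonzero test point
    lhs = z_transform(y, z)
    root = z ** (1.0 / M)                    # one fixed M-th root of z
    rhs = sum(z_transform(x, root * np.exp(2j * np.pi * k / M))
              for k in range(M)) / M
    assert np.isclose(lhs, rhs)
print("Identity verified at the sampled test points.")
```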
Does this seem to ring a bell to anyone? Is there a particular name for the method that would be applied to this sort of problem?
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9625308513641357, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/141375/doubt-sin2n-1-cos4n
|
# Doubt $\sin(2n)/(1+\cos^4(n))$
So my doubt is for this comment of this video:
Is ANYONE going to notice the fact that the kid's solution to the problem given at 9:35 is completely invalid? I mean, $\sin(2n)/(1+\cos^4(n))$ fails to be either positive or monotonically decreasing, meaning it fails ALL the conditions of the Integral Test, making his solution completely invalid. To make matters worse, the corresponding function is straight-up periodic, which means that a quick limit to infinity and invocation of the $N$th Term Test would've sufficed to show it FAILS to converge.
If you could come up with a more explicit example I would appreciate it, including both a valid example of the question and an indication of whether the child's example could perhaps have been taken to a feasible point of what he wanted to do. Thanks.
-
What exactly is your question? It seems to me that the comment is right, and the kid is wrong. If we let $n \to \infty$, the terms don't converge to $0$, so how can the series converge? – TMM May 5 '12 at 13:31
– dato May 5 '12 at 13:40
– TMM May 5 '12 at 13:50
thanks for that link.,now i understand pretty clear – Voislav Sauca May 5 '12 at 14:32
you are welcome – dato May 5 '12 at 14:45
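As a quick numerical illustration of that point, the terms $\sin(2n)/(1+\cos^4(n))$ stay of order 1 for arbitrarily large $n$, so the series fails the $n$th-term test (a rough check; the window of indices is arbitrary):

```python
# Check that a_n = sin(2n) / (1 + cos(n)**4) does not tend to 0 as n grows.
import numpy as np

n = np.arange(10**6, 10**6 + 50)          # a window of large indices
a = np.sin(2 * n) / (1 + np.cos(n) ** 4)
print(np.max(np.abs(a)))                  # stays of order 1, so the nth-term test fails
```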
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9573761820793152, "perplexity_flag": "middle"}
|
http://crypto.stackexchange.com/questions/5953/what-is-blinding-used-for-in-cryptography?answertab=votes
|
# What is “Blinding” used for in cryptography?
What does "blinding" mean in cryptography, and where do we usually use it? Can you describe a sample implementation?
-
– CodesInChaos Jan 9 at 10:32
– Amzoti Jan 10 at 16:33
## 1 Answer
As @CodesInChaos explains:
• It might refer to blind signatures.
• It also might refer to a method to harden (typically) RSA implementations against timing/side-channel attacks, by blinding the data before operating on it.
Example: suppose you are writing code to decrypt data, i.e., to compute $y=x^d \bmod n$, given the input $x$. The naive way to do it is just to compute $x^d \bmod n$; but it turns out this can be vulnerable to timing and other side-channel attacks. One defense is to blind the data before raising it to the $d$th power. In more detail, pick a random number $r$; compute $s=r^e \bmod n$; compute $X=xs \bmod n$ and then $Y=X^d \bmod n$ and then $y=Y/r \bmod n$. You can check that $Y/r=X^d/r=(xs)^d/r = x^d s^d/r = x^d r/r = x^d \bmod n$, which is what we wanted to compute. However, this process makes it hard for an attacker to learn anything about $d$ using a timing attack, because the exponentiation works on a random value $X$ that is not known to the attacker, rather than on the known value $x$.
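To make the steps concrete, here is a toy sketch of that blinding procedure with deliberately tiny parameters (illustration only; real implementations should use a vetted library with constant-time arithmetic):

```python
# Toy RSA blinding: decrypt x by blinding it with a random r before exponentiating.
import secrets
from math import gcd

p, q = 61, 53
n = p * q                     # modulus
phi = (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)           # private exponent

def blinded_decrypt(x, d, e, n):
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break
    s = pow(r, e, n)          # s = r^e mod n
    X = (x * s) % n           # blind the input
    Y = pow(X, d, n)          # the exponentiation sees only the random value X
    return (Y * pow(r, -1, n)) % n   # unblind: y = Y / r mod n

x = 1234
assert blinded_decrypt(x, d, e, n) == pow(x, d, n)
```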
-
Then again, it is usually not the value $x$ to-be-raised that has to be blinded, but the private exponent $d$. Your method does not blind $d$, so if the timing attack works independently of the value-to-be-raised, it has no effect at all. – Henrick Hellström Mar 2 at 10:40
@HenrickHellström, the defense I have described (namely, blinding $x$) is a standard defense against timing attacks on RSA. To my knowledge, this method of blinding defends against all known timing attacks against RSA (i.e., against all attacks that are capable of recovering $d$). I do not know of any timing attack that works if $x$ is blinded in this way (i.e., any timing attack that can recover $d$ without knowledge of the value-to-be-raised). If you know of anything that contradicts this, I'd certainly be interested to hear. – D.W. Mar 2 at 11:14
– Henrick Hellström Mar 2 at 11:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9413812756538391, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/differential-equations/146037-differential-equation-method-do-i-use-print.html
|
# Which Differential Equation Method Do I use?
• May 22nd 2010, 11:14 PM
lanczlot
Which Differential Equation Method Do I use?
i need to solve this differential equation (3y^3-xy)dx-(x^2+6xy^2)dy=0
I tried using Bernoulli, but it doesn't work. Also tried separating the variables but i cant seem to separate it. i also tried using the 3 cases for exact equation but to no avail. the 3rd case may work but i don't know how to simplify it so i can integrate it.
• May 23rd 2010, 12:09 AM
mr fantastic
Quote:
Originally Posted by lanczlot
i need to solve this differential equation (3y^3-xy)dx-(x^2+6xy^2)dy=0
I tried using Bernoulli, but it doesn't work. Also tried separating the variables but i cant seem to separate it. i also tried using the 3 cases for exact equation but to no avail. the 3rd case may work but i don't know how to simplify it so i can integrate it.
Are you sure it's not (3y^2 - xy) dx -(x^2 + 6xy^2) dy=0?
• May 23rd 2010, 05:29 AM
lanczlot
the problem was hand written, so it may have been a mistake. what if it was squared instead of cubed? how do you solve it?
• May 23rd 2010, 05:34 AM
mr fantastic
Quote:
Originally Posted by lanczlot
the problem was hand written, so it may have been a mistake. what if it was squared instead of cubed? how do you solve it?
Divide through by y^2. Now read this: Homogeneous Ordinary Differential Equation -- from Wolfram MathWorld
The technique will be in your classnotes and textbook.
• May 23rd 2010, 08:07 AM
Danny
Quote:
Originally Posted by lanczlot
i need to solve this differential equation (3y^3-xy)dx-(x^2+6xy^2)dy=0
I tried using Bernoulli, but it doesn't work. Also tried separating the variables but i cant seem to separate it. i also tried using the 3 cases for exact equation but to no avail. the 3rd case may work but i don't know how to simplify it so i can integrate it.
If you write the ODE as
$\frac{dy}{dx} = \frac{3y^3-xy}{x^2 + 6xy^2}$
multiply both sides by 2y
$2y\frac{dy}{dx} = \frac{6y^4-2xy^2}{x^2 + 6xy^2}$
and let $u = y^2$ then
$\frac{du}{dx} = \frac{6u^2-2xu}{x^2 + 6xu}$ (this is homogeneous).
• May 23rd 2010, 09:53 AM
lanczlot
wait... how is the differential equation with u homogeneous?
• May 23rd 2010, 11:55 AM
Danny
Divide everything on the rhs by $x^2$ so
$\frac{du}{dx} = \frac{6\dfrac{u^2}{x^2}-2\dfrac{u}{x}}{1 + 6\dfrac{u}{x}}$
which is of the form $\frac{du}{dx} = f\left(\frac{u}{x}\right)$.
• May 23rd 2010, 12:23 PM
Krizalid
the ODE also admits an integrating factor of the form $u(x,y)=x^my^n.$
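As a sanity check on that claim, a short SymPy sketch can recover the exponents by imposing the exactness condition (purely illustrative; the printed values are whatever the solver returns):

```python
# Find m, n so that mu = x**m * y**n is an integrating factor for
# (3y^3 - x*y) dx - (x^2 + 6x*y^2) dy = 0, i.e. d(mu*P)/dy == d(mu*Q)/dx.
import sympy as sp

x, y, m, n = sp.symbols('x y m n')
mu = x**m * y**n
P = 3*y**3 - x*y            # coefficient of dx
Q = -(x**2 + 6*x*y**2)      # coefficient of dy

cond = sp.expand((sp.diff(mu*P, y) - sp.diff(mu*Q, x)) / mu)   # polynomial in x, y
print(sp.solve([cond.coeff(y, 2), cond.coeff(x, 1)], [m, n]))  # expect {m: -2, n: -1}
```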
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.938732922077179, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-algebra/193955-orthonormal-set-eigenvectors-print.html
|
Orthonormal Set of Eigenvectors of A
• December 10th 2011, 06:13 PM
divinelogos
Orthonormal Set of Eigenvectors of A
Problem:
http://img46.imageshack.us/img46/5769/capture2br.jpg
Part (a): This part seems simple. I just find the eigenvectors of the matrix A, use the GS process, and then normalize to get an orthonormal set. Is this correct?
Part (b): I'm not sure how to do this part. Is "S" referring to [S]B? That is, do I use the eigenvectors of A=[S]B? I'm pretty lost here...
Part (c): Not sure how to do this without completing part b.
Any help will be appreciated and I'll click the thank you button for you! Thanks! :)
• December 10th 2011, 08:22 PM
Drexel28
Re: Orthonormal Set of Eigenvectors of A
Quote:
Originally Posted by divinelogos
Problem:
http://img46.imageshack.us/img46/5769/capture2br.jpg
Part (a): This part seems simple. I just find the eigenvectors of the matrix A, use the GS process, and then normalize to get an orthonormal set. Is this correct?
Part (b): I'm not sure how to do this part. Is "S" referring to [S]B? That is, do I use the eigenvectors of A=[S]B? I'm pretty lost here...
Part (c): Not sure how to do this without completing part b.
Any help will be appreciated and I'll click the thank you button for you! Thanks! :)
I'm sorry, is this for a takehome exam?
• December 10th 2011, 09:43 PM
divinelogos
Re: Orthonormal Set of Eigenvectors of A
Quote:
Originally Posted by Drexel28
I'm sorry, is this for a takehome exam?
Nope. It's a study guide for the final. I can send you the whole thing if you need to verify that. Thanks! :)
• December 11th 2011, 12:43 AM
FernandoRevilla
Re: Orthonormal Set of Eigenvectors of A
(a) The eigenvalues of $A$ are $\lambda=1$ (double) and $\lambda=-1$ (simple). We have $\ker (A-I) \equiv \begin{cases}-x_1+x_3=0\\x_1-x_3=0\end{cases}$ and an orthogonal basis is $\{(1,0,1)^t,(0,1,0)^t\}$.
Now, $\ker (A+I) \equiv \begin{cases}x_1+x_3=0\\2x_2=0\\x_1+x_3=0\end{cases}$ and a basis is $\{(1,0,-1)^t\}$.
Hence, $B=\{(1/\sqrt{2},0,1/\sqrt{2})^t,(0,1,0)^t,(1/\sqrt{2},0,-1/\sqrt{2})^t\}$ is an orthonormal basis of eigenvectors of $A$ .
(b) $(1/\sqrt{2},0,1/\sqrt{2})^t\equiv (1/\sqrt{2})\vec{u_1}+(1/\sqrt{2})\vec{u_3}=\ldots$
Hope you can continue.
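For reference, the matrix $A$ itself appears only in the linked image; reconstructing it from the kernel equations above (so treat the matrix below as an assumption), a quick NumPy check of the orthonormal eigenbasis looks like this:

```python
# Check that the columns of B are orthonormal eigenvectors of the reconstructed A.
import numpy as np

A = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0]])      # assumed from ker(A-I) and ker(A+I) above

s = 1.0 / np.sqrt(2.0)
B = np.column_stack([[s, 0.0, s],    # eigenvalue +1
                     [0.0, 1.0, 0.0],  # eigenvalue +1
                     [s, 0.0, -s]])  # eigenvalue -1
lams = np.array([1.0, 1.0, -1.0])

assert np.allclose(A @ B, B * lams)      # columns are eigenvectors
assert np.allclose(B.T @ B, np.eye(3))   # columns are orthonormal
print("Orthonormal eigenbasis verified.")
```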
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8858265280723572, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/tagged/thermodynamics%20ideal-gas
|
# Tagged Questions
0answers
44 views
### Air pressure in balloon
I have to calculate the air pressure inside of an hot air balloon. After some searching I found out that I can use the ideal gas law: PV = nRT (from Wikipedia) So to get the pressure in the balloon I ...
0answers
23 views
### Finding rms velocity in isothermal process [closed]
I think none of these options are correct, I just need someone to confirm. Please, as no option is matching with $V_{rms}$.
2answers
44 views
### What are the units of these virial coefficients?
I'm reading some papers for calculating the vapor pressure of alkali metals as a function of temperature, and I've come across some familiar-looking virial expansions, but when I tried to work out the ...
2answers
76 views
### Temperature change inside pressure chamber
Let's say there is a pressure chamber with some sort of sample / specimen (e.g. protein crystal) in it. Now I apply a certain amount of gas pressure, e.g. 10 or 20 atm. Let's say I use xenon as a gas. ...
1answer
56 views
### In $PdV$, what is the value of $P$? $P_1$ or $P_2$?
Say I have an ideal gas that has a known $P_1$, $P_2$, $T_1$, and $T_2$ undergoing a reversible adiabatic process. I want to find the work done so I must use $PV = RT$ to get the change in $V$, so ...
1answer
124 views
### Ideal gas temperature and pressure gradients?
Consider an ideal gas in a $d\times d\times L$ box with the $L$ dimension in the $x$-direction. Suppose that the opposite $d\times d$ sides of the box are held at temperatures $T_1$ and $T_2$ with ...
2answers
172 views
### Ideal gas concentration under temperature gradient
I'm trying to calculate the concentration of an ideal gas in an adiabatic container as a function of position where the top and bottom plates of the container are fixed at temperatures $T_1$ and ...
2answers
135 views
### With ideal gases, varying quantity of moles, and having a constant volume how do temperature and pressure behave?
I'm trying to build a simulation of gases so I ended-up trying to use law of ideal gases ($PV = nRT$). In my scenario: volume is constant ($V=1\rm{m}^3$); a known quantity of moles are being added ...
3answers
184 views
### Understanding mathematically the free expansion process of an ideal gas
I'm trying to understand mathematically that for the free expansion of an ideal gas the internal energy $E$ just depends on temperature $T$ and not volume $V$. In the free expansion process the ...
0answers
93 views
### Is it possible to add heat to a monoatomic ideal gas without increasing entropy? [closed]
The Sackur-Tetrode equation expresses the entropy of a monoatomic ideal gas: [Equation from HyperPhysics]
0answers
58 views
### Using thermodynamics and Kinematics together to solve a parachuter problem?
I need to find a parachutist's displacement after a given height (nearly 37000m) and at a given latitude. I have his mass, area, parachute area, drop height, parachute deployment height, data about ...
1answer
54 views
### Problem evaluating moles in a an isochor transformation
I have a problem with an isochor transformation. Me and my group of study made an experiment that want to check Gay-Lussac’s law. We registered the equilibrium states and fitted the $P = nRT / V$, ...
1answer
67 views
### Thermodynamic process when nebula is heated
The basic thermodynamics problem is stated as follows. The nebula contains a very tenuous gas of a given number density (atoms per volume) that is being heated to a given temperature. What is the ...
2answers
783 views
### Calculating work done on an ideal gas
I am trying to calculate the work done on an ideal gas in a piston set up where temperature is kept constant. I am given the volume, pressure and temperature. I know from Boyle's law that volume is ...
1answer
362 views
### Adiabatic expansion [closed]
I'll start off by saying this is homework, but I ask because I don't understand how the math should work (I don't just want an answer, I'd like an explanation if possible). I understand if this is ...
1answer
49 views
### Performing work on a box of gas by lifting it, and first law of thermodynamics
What happens if we lift a box of ideal gas? Work is done to the box but no heat is getting into it. So does its internal energy increase by the amount of work done? Or is it that lifting is not ...
1answer
128 views
### Ideal gas and diatomic gas with same temperature
If a box of ideal gas and another box of diatomic gas are in thermal equilibrium, does it mean that the average translational energy of ideal gas particle (A) is the same as that of diatomic gas ...
1answer
1k views
### Work Done by an Adiabatic Expansion
I am given the information that a parcel of air expands adiabatically (no exchange of heat between parcel and its surroundings) to five times its original volume, and its initial temperature is 20° C. ...
1answer
588 views
### Work Done in an Isobaric Process
I am given the information that an air parcel undergoes isobaric heating from 0° C to 20° C, and that's all I'm given. I have to determine the work done by the parcel on its surroundings. I know that ...
1answer
669 views
### Work on ideal gas by piston
Imagine a thermally insulated cylinder containing a ideal gas closed at one end by a piston. If the piston is moved rapidly, so the gas expands from $V_i$ to $V_f$. The expanding gas will do work ...
2answers
243 views
### Is it Possible to have Adiabatic Processes other than $PV^\gamma$ for the ideal Gas?
Is it possible to represent an adiabatic process for an ideal gas by a formula other than $PV^\gamma=Const$?: Relevant Considerations: We always need to connect a pair of arbitrary points/states ...
2answers
104 views
### May molecules of ideal gases have an inner structure?
The following question is probably very elementary: whether molecules of ideal gases may have optic properties? As far as I understand, when one discusses optic properties, one assumes that molecules ...
4answers
6k views
### Why are volume and pressure inversely proportional to each other?
It makes sense, that if you have a balloon and press it down with your hands, the volume will decrease and the pressure will increase. This confirms Boyle's Law, $pV=k=nRT$. But what if the ...
0answers
154 views
### Centrifugal Compressor Flow Rate
For a centrifugal compressor, as found in most turbochargers on internal combustion engines, is there a noticeable change in flow rate versus a naturally aspirated flow rate? In other words, does the ...
1answer
216 views
### Black body balloon in vacuum [closed]
The problem statement, all variables and given/known data There is a perfectly spherical balloon with surface painted black. It is placed in a perfect vacuum. It is gently inflated with an ideal ...
3answers
477 views
### What does it take to derive the ideal gas law in themodynamics?
How can the ideal gas law be derived from the following assumptions/observations/postulates, and these only? I'm able to measure pressure $P$ and volume $V$ for gases. I noticed that if ...
1answer
2k views
...
1answer
121 views
### Expansion of Helmholtz energy
To get an expansion of Helmholtz energy of a) an ideal gas b) a Van der waals gas we must integrate $\left ( \frac{\delta A }{\delta V} \right )_{T}=-P$ I saw the solution is : Can you ...
1answer
219 views
### Energy formula for separating $O_2$ from mixture of $O_2$, $NH_3$ and $H_2O$
I have a physics problem I'd like to make sure I get correct. The practical aspect of this problem is that the photosynthetic efficiency of algae is inhibited with dissolved O2 in the growth medium, ...
3answers
643 views
### When a gas expands against an external pressure of 0, must the stopper on the cylinder be massless?
Basically, I need to conceptually understand why the work a gas does is the integral of pressure external * dv and is 0 when pressure external is 0. I understand why dw = - p external * dv and so ...
3answers
1k views
### Differentiating the ideal gas law
In reading Fermi's Thermodynamics, to show that $C_p = C_v + R$, the author differentiates the ideal gas law for a mole of gas ($PV = RT$) to obtain: $PdV + VdP = RdT$. Now, the only way I am able to ...
1answer
3k views
### Why does a gas get hot when suddenly compressed? What is happening at the molecular level?
My guess is that the molecules of gas all have the same speed as before, but now there are much more collisions per unit area onto the thermometer, thus making the thermometer read a higher ...
3answers
2k views
### What type of substances allows the use of the Ideal Gas Law?
I know that I can use the ideal gas law with pure gases or pure liquids. But can I also use the ideal gas law at saturated gases and saturated liquids as long as they aren't two phase substances?
4answers
799 views
### How slow is a reversible adiabatic expansion of an ideal gas?
A truly reversible thermodynamic process needs to be infinitesimally displaced from equilibrium at all times and therefore takes infinite time to complete. However, if I execute the process slowly, I ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 39, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.93109530210495, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-algebra/196540-one-one-ness-isomorphisms-between-polynomial-spaces.html
|
# Thread:
1. ## one to one-ness of isomorphisms between polynomial spaces
Hey guys, in showing the 1-1-ness of a (supposed) isomorphism I reach asub1(x) + bsub1(x^2) = asub2(x) + bsub2(x^2).
Now according to my Linear Algebra book two polynomials like these are equal only if their respective coefficients are equal, but if we pick x to be, say, 2, then asub1 can be 2 and bsub1 can be 4, whereas on the right side asub2 can be 4 and bsub2 can be 3, so the coefficients are not equal. Please help, thanks
2. ## Re: one to one-ness of isomorphisms between polynomial spaces
Given two polynomials p(x) and q(x), the book's claim is that if p(x) = q(x) for all x, then the corresponding coefficients of p(x) and q(x) are the same. Of course, p(2) = q(2) is a weaker statement than p(x) = q(x) for all x (the latter implies the former but not the other way around), so it is not sufficient to guarantee that p(x) and q(x) have the same coefficients.
3. ## Re: one to one-ness of isomorphisms between polynomial spaces
polynomials are functions. so an expression like:
p(x) = ax^2 + bx + c really means:
p is the function that, when given a certain value for x, squares x and multiplies it by the constant value a, then adds that to b times the original value x, and then adds the constant c:
p = a*(squaring function) + b*(identity function) + c
notice in the line above, no mention of what "x" is, is made.
now, for a GIVEN x, p(x) is just a number. it is easy to confuse the function p with its value at x, p(x). but it should not be hard to see that the two FUNCTIONS:
p = a*(squaring function) + b*(identity function) + c, and
q = d*(squaring function) + e*(identity function) + f
will only be the same function if a = d, b = e, and c = f.
put another way, we can identify p and q with the graphs of x vs. p(x) and x vs. q(x). even though the two graphs may have points in common (that is, may match for certain x's), they are not the same graph unless they match at every single point (that is, for ALL x's).
4. ## Re: one to one-ness of isomorphisms between polynomial spaces
so tell me if i got this right: although polynomials at some specific x might be equal without the coefficients being equal, for the polynomials to be equal everywhere (thinking graphically) the coefficients must be the same. If so, does this apply to this example also?: asub0sin(x)+asub1cos(x)=bsub0sin(x)+bsub1cos(x) at x = pi/4 the coefficients don't necessarily have to be equal, but for them to be equal for any x the coefficients have to be equal
5. ## Re: one to one-ness of isomorphisms between polynomial spaces
Originally Posted by siryog90
so tell me if i got this right: although polynomials at some specific x might be equal without the coefficients being equal, for the polynomials to be equal everywhere (thinking graphically) the coefficients must be the same.
Yes, this is correct.
Originally Posted by siryog90
If so, does this apply to this example also?: asub0sin(x)+asub1cos(x)=bsub0sin(x)+bsub1cos(x) at x = pi/4 the coefficients don't necessarily have to be equal, but for them to be equal for any x the coefficients have to be equal
First, you can write [tex]a_0\sin(x)+a_1\cos(x)[/tex] to get $a_0\sin(x)+a_1\cos(x)$. See the sticky threads in the LaTeX Help subforum for more information. In plain text, it is customary to write a_0 sin(x) or a_0 * sin(x) or just a0 sin(x).
Yes, $a_0\sin(x)+a_1\cos(x)=b_0\sin(x)+b_1\cos(x)$ for all x implies $a_0=b_0$ and $a_1=b_1$. In general, if you have linearly independent vectors (see below) $e_1,\dots,e_n$, then
for any numbers $a_1,\dots,a_n,b_1,\dots,b_n$, if $a_1e_1+\dots+a_ne_n=b_1e_1+\dots+b_ne_n$, then $a_1=b_1, \dots, a_n=b_n$ (*)
Here "vectors" does not necessarily means two- or three-dimensional Euclidean vectors, but anything that can be added and multiplied by a number. For example, functions from real numbers to real numbers are vectors. Note that, as Deveno wrote, you multiply a function as a whole, not just its value at one particular point. E.g., if $f:x\mapsto x^2$ is a function, then $5f$ is also a function defined for all x, namely, $5f:x\mapsto5x^2$.
A set $\{e_1,\dots,e_n\}$ of vectors is linearly independent if
for all numbers $a_1,\dots,a_n$, $a_1e_1+\dots+a_ne_n=0$ implies $a_1=\dots =a_n=0$ (**)
For example, 1, x, ..., $x^n$ are linearly independent: e.g., $a_0+a_1x+a_2x^2=0$ for all x implies $a_0=a_1=a_2=0$. Also, sin(x) and cos(x) are linearly independent. Indeed, suppose $a\sin(x)+b\cos(x)=0$ for all x. If $a\neq 0$, then $\tan(x)=-b/a$ for all x, a contradiction; if $a=0$, then $b\cos(x)=0$ for all x forces $b=0$. In fact, the whole set sin(x), sin(2x), ..., cos(x), cos(2x), ... is linearly independent, which is very important for Fourier analysis.
You can show that (**) implies (*), which is not hard and answers affirmatively your second question. For more information, see Vector space and Linear independence in Wikipedia.
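A tiny SymPy illustration of the point (the particular numbers and sample points are just examples): agreeing at one point does not force equal coefficients, but requiring the identity at enough well-chosen points already does:

```python
# Two distinct polynomials that agree at x = 2, and a sin/cos identity whose
# coefficients are pinned down by evaluating at two points.
import sympy as sp

x, a0, a1, b0, b1 = sp.symbols('x a0 a1 b0 b1')

p = 2*x + 4*x**2
q = 4*x + 3*x**2
print(p.subs(x, 2) == q.subs(x, 2))        # True: they agree at the single point x = 2
print(sp.simplify(p - q) == 0)             # False: they are not the same function

eq = sp.Eq(a0*sp.sin(x) + a1*sp.cos(x), b0*sp.sin(x) + b1*sp.cos(x))
print(sp.solve([eq.subs(x, 0), eq.subs(x, sp.pi/2)], [a0, a1]))   # {a0: b0, a1: b1}
```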
6. ## Re: one to one-ness of isomorphisms between polynomial spaces
got it, thanks a lot
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 19, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8945063352584839, "perplexity_flag": "middle"}
|
http://elitistjerks.com/f31/t22891-math_stat_equivalence_multi_ability_rotations/print/
|
Elitist Jerks (http://elitistjerks.com/forums.php)
- Class Mechanics (http://elitistjerks.com/f31/)
- - [Math] Stat Equivalence in Multi-ability "Rotations" (http://elitistjerks.com/f31/t22891-math_stat_equivalence_multi_ability_rotations/)
Muphrid 03/12/08 10:19 AM
[Math] Stat Equivalence in Multi-ability "Rotations"
It is fairly straightforward to theorycraft the equivalence of stats for a given ability. Variables that don't change often cancel out nicely, yielding a clean, concise result. This post will extend those general methods to multi-ability "rotations" (which, in fact, need not be periodic at all).
The Absolute Difference Formulation
Let us consider an abstract objective function $F$ that is a function of stats and abilities represented by a vector $\vec u$. That is, we're considering the effects of different stats on the value of $F(\vec u)$. Any change in stats or circumstances can be expressed as $\Delta \vec u$, and the magnitude of the effect of that change is given by $F(\vec u + \Delta \vec u) - F(\vec u)= \Delta F(\vec u, \Delta \vec u)$. For brevity, I'll abbreviate this as $\Delta F_u$.
For some combination of abilities used, we have...
$F= n_1 F_1 + n_2 F_2 + \cdots + n_k F_k = \sum_{j=1}^k n_j F_j = \vec n \cdot \vec F$
...where $n_k$ is a weighting constant. The meaning of $n_k$ can vary based on the problem: in a calculation of just raw damage for the whole rotation or sequence, $n_k$ represents the number of times the kth ability is used. In a calculation of overall DPS, it would represent the proportion of time spent using the kth ability.
Due to the linearity of the problem, we can conclude that $\Delta F_u= \vec n \cdot \Delta \vec F_u$. Ultimately, what we want to do is substitute some expression $\vec f(\Delta u)= \Delta \vec F_u$. The most convenient case is when $\vec f(\Delta u)= \Delta u \vec \kappa = \Delta \vec F_u$, where $\vec \kappa$ is a constant vector. This is the linear case, where the stat has no quadratic or higher-order effects on the objective function. In practice, for sufficiently small stat changes, we can approximate the problem as completely linear without much loss of precision.
Now then, what we ultimately have is $\Delta F_u= \Delta u ( \vec n \cdot \vec \kappa_u )$. If we consider some other possible stat change $\Delta \vec v$, that would look like $\Delta F_v= \Delta v ( \vec n \cdot \vec \kappa_v )$. The whole idea of stat equivalence is that we set $\Delta F_u= \Delta F_v$, which leads us to...
$\Delta u (\vec n \cdot \vec \kappa_u)= \Delta v (\vec n \cdot \vec \kappa_v) \Rightarrow \Delta u = \frac{\vec n \cdot \vec \kappa_v}{\vec n \cdot \vec \kappa_u} \Delta v$
It behooves us, for the moment, to go back to the one-ability case. In this case, we deal with not vectors but constants: $\Delta u= \frac{n \kappa_v}{n \kappa_u} \Delta v = \frac{\kappa_v}{\kappa_u} \Delta v$. A good result: it means that the constants we're using are based on the one-ability case.
The Relative Difference Formulation
It's also useful to consider $\frac{\Delta F_u}{F}= \nabla F_u$, the relative difference instead of just the absolute difference (I use the nabla symbol to denote this, though that's much less standard than the delta for the absolute difference, and it risks confusion with the gradient). We have...
$\nabla F_u= \frac{\Delta F_u}{F} = \frac{\vec n \cdot \Delta \vec F_u}{\vec n \cdot \vec F_u} = \nabla \vec F_u \cdot \vec N$
...where $\vec N= (n_1 F_1, n_2 F_2, \cdots, n_k F_k)/(\vec n \cdot \vec F)$
The advantage of this approach is that $\vec N$ has a special meaning: the kth element of this vector is the proportion of $F$ contributed by $n_k F_k$, or by the kth ability in the "rotation" or sequence. For example, if we're dealing in expected damage or DPS functions, this value would be equal to the proportion of total damage done by a given ability, which is something you could obtain without calculation but from a wws parse.
As before, we can approximate $\nabla F_u$ as linear, meaning $\nabla \vec F_u= \Delta u \vec K_u$. Thus, we now have a different version of our expression for stat equivalence:
$\Delta u (\vec N \cdot \vec K_u)= \Delta v (\vec N \cdot \vec K_v) \Rightarrow \Delta u = \frac{\vec N \cdot \vec K_v}{\vec N \cdot \vec K_u} \Delta v$
An Example: +spell power vs. +spell crit
At this point, it's easy to get lost in the notation or the formulas, so I'll provide a concrete example: let's consider an Arcane/Frost mage (I choose this example for simplicity; more common Fire builds have problems with Ignite when considering practical application of this procedure--more general problems involving 3+ abilities or the damage of a melee class would be that much more cumbersome).
We'll assume that this mage uses Arcane Blast and Frostbolt, with Arcane Blast being 69% of the damage done and Frostbolt 31%. Let AB have a 41% crit rate, Frostbolt 40%. Assume 1400 +damage on average. Let's examine the equivalence between +damage and +crit.
Obviously, $\vec N= (.69, .31)$ (order is not relevant, as long as we're consistent). Note that, in general...
$K_{d,k}= \frac{1}{\frac{m_k}{r_k} + d_k}$
...where m = average base damage, r = +damage coefficient, and d = +damage. m = 700 ish for AB, r = .714. m = 623.5 ish for Frostbolt, and r = .814. d = 1400 for both. Using these values, we get $\vec K_d= (4.20, 4.62)\cdot 10^{-4}$.
In general...
$K_{c,k}= \frac{1}{\frac{1}{b_k} + c_k}$
Where b is the crit bonus and c the crit chance. b = .75 for AB, 1.25 for an Arc/Frost Frostbolt. c was given to us above. $\vec K_c= (.574, .833)$.
$1 \mbox{ spell crit rating } \equiv \frac{\vec N \cdot \vec K_c}{\vec N \cdot \vec K_d} \frac{1}{2208}= \frac{.654}{.000433} \frac{1}{2208} = .684 \mbox{ +damage}$
Which is a fairly sensible result.
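For readers who want to reproduce the arithmetic, here is a short script (a rough sketch of the example above; the damage constants are the approximate values quoted, not exact game data):

```python
# Reproduce the +crit vs +spell-damage equivalence for the AB/Frostbolt example.
# N, m, r, d, b, c follow the definitions in the post; 22.08 crit rating = 1% crit.
import numpy as np

N = np.array([0.69, 0.31])            # share of damage: [Arcane Blast, Frostbolt]
m = np.array([700.0, 623.5])          # average base damage
r = np.array([0.714, 0.814])          # +damage coefficients
d = 1400.0                            # average +damage
b = np.array([0.75, 1.25])            # crit bonus
c = np.array([0.41, 0.40])            # crit chance

K_d = 1.0 / (m / r + d)               # relative gain per point of +damage
K_c = 1.0 / (1.0 / b + c)             # relative gain per point of crit chance

crit_chance_per_rating = 1.0 / 2208.0 # 1 rating = 1/22.08 of a percent of crit
plus_damage_per_crit_rating = (N @ K_c) / (N @ K_d) * crit_chance_per_rating
print(round(plus_damage_per_crit_rating, 3))   # ~0.684 +damage per point of crit rating
```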
Application with WWS?
When I started to tackle the problem of multi-ability equivalences, I doubted there would be any practical application to the math behind it. However, I knew it would be interesting to start with the expressions for relative differences, as the percentage of one's damage or DPS from a given spell or ability could be easily estimated by a damage meter or WWS. WWS, of course, has its share of problems: any given boss fight constitutes a somewhat small sample size, which means big error bars (even larger still since we can't know one's average spell damage or attack power directly like we can with crit chances).
To be entirely truthful, though, the only power this has is in how WWS hands you $\vec N$ on a silver platter. I'm certain that this can be useful, but I'm unsure quite how useful it will turn out to be. In short, I suppose it depends largely on whether it makes sense to use WWS as an estimate of average parameters, given the variance inherent to those estimates.
alienangel 03/12/08 4:59 PM
I think I lost you a bit in the calculation of actual values for K in the example :S
Regarding using WWS to obtain the N values, yes, WWS seems a bit dubious for that, since I know my damage proportions vary a fair bit from night to night based on random crit rates/miss rates/debuffs/heroism timing etc, so it's still not really handing N to us "on a silver platter". Obtaining N from a simulation would be better, but if we're at the point of having a simulation worth trusting we can work out stat equivalences iteratively instead of analytically anyway :S
Regardless, interesting work :) I'm impressed at the quality of the notation too, is that actually a feature provided by the forums? Time to experiment with the code shown in the tooltips for the images :)
Muphrid 03/12/08 5:07 PM
Quote:
Originally Posted by alienangel (Post 672955) I think I lost you a bit in the calculation of actual values for K in the example :S
Sorry about that; those expressions for each K are based on a simple expected damage/cast formula:
$E= hq(m+rd)(1+bc)$
From here, you can see that...
$\Delta E_d= hq(1+bc)r \Delta d \Rightarrow \frac{\Delta E_d}{E} = \frac{r \Delta d}{m+rd} = \frac{\Delta d}{\frac{m}{r}+d}$
Which makes it somewhat clear where I got the expression for K_d from. Similar logic gets you K_c.
Quote:
Regarding using WWS to obtain the N values, yes, WWS seems a bit dubious for that, since I know my damage proportions vary a fair bit from night to night based on random crit rates/miss rates/debuffs/heroism timing etc, so it's still not really handing N to us "on a silver platter". Obtaining N from a simulation would be better, but if we're at the point of having a simulation worth trusting we can work out stat equivalences iteratively instead of analytically anyway :S
You're right, of course. This analysis has, however, motivated me to brush up on statistics and see whether it would be worthwhile to approach this not as an exact calculation but an estimate with the appropriate error bars.
Quote:
Regardless, interesting work :) I'm impressed at the quality of the notation too, is that actually a feature provided by the forums? Time to experiment with the code shown in the tooltips for the images :)
The forums themselves have $\LaTeX$ built in; I've found it a good way to be clear with symbolic math and practice at the same time.
Kavan 03/12/08 7:26 PM
Another thing to consider is that N is not an independent variable, but depends on stats. This is especially true for arcane builds, where exactly how long a fight is or how much mana you have will determine the weights of each cycle. I guess for a first-order approximation these can be neglected, but they definitely play a role.
Muphrid 03/12/08 8:27 PM
Quote:
Originally Posted by Kavan (Post 673124) Another thing to consider is that N is not an independent variable, but depends on stats. This is especially true for arcane builds, where exactly how long a fight is or how much mana you have will determine the weights of each cycle. I guess for a first-order approximation these can be neglected, but they definitely play a role.
Oh yes, it's only the meaning of N that is convenient; it is by no means a constant, and closed-form expressions for its components will not be pretty.
I would say that the first-order approximation means that the effect of stats on what your overall rotation is is negligible, yeah, just because we're dealing with small stat changes.
alienangel 03/12/08 9:04 PM
Quote:
Originally Posted by Muphrid (Post 673178) Oh yes, it's only the meaning of N that is convenient; it is by no means a constant, and closed-form expressions for its components will not be pretty. I would say that the first-order approximation means that the effect of stats on what your overall rotation is is negligible, yeah, just because we're dealing with small stat changes.
In WoW terms this isn't necessarily the case though. A difference of a few haste rating can (in theory anyway) change the ratio of specials to autoattacks quite dramatically for classes whose specials interfere with their autoattacks, which tends to have a larger effect on N values than just changing things like crit/hit/+dmg. The hunter threads are currently a depressing and painful mess trying to sort this kind of thing out.
Kamma 03/13/08 3:01 AM
I think it would be useful to take a step back from $N$ by using specific damage functions for each ability to transform it into a vector representing the probability of using each ability - i.e., normalizing it to the proportion of the time a spell is used, not the proportion of damage done by the spell.
This would be useful in making more accurate extrapolations. Just a thought.
Muphrid 03/13/08 12:53 PM
Proportion that a spell is used in terms of time? Using DPS per casting time, you can do this with the absolute difference approach.
Hm, I have a hunch that you could probably do it with the relative differences as well, but that's going to take some thought.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 40}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9413500428199768, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/math-topics/41963-light.html
|
# Thread:
1. ## Light
If we treat light as a wave, sin(nx),
do we say white light consists of many such functions where n can be an element of the positive real numbers?
or are they integers?
2. Originally Posted by Charbel
If we treat light as a wave, sin(nx),
do we say white light consists of many such functions where n can be an element of the positive real numbers?
or are they integers?
White light at a point is a combination of many terms of the form $\sin(2 \pi f t +\phi)$, but the frequencies can be any real number (that is, we have a continuous spectrum).
A casual observer cannot tell the difference between a continuous and a discrete spectrum because of the way the eye works, so what you think is white light need not have a continuous spectrum.
RonL
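To illustrate the continuous-versus-discrete distinction numerically (a rough sketch; the frequency band, line frequencies, and threshold are arbitrary choices):

```python
# Superpose many sinusoids sin(2*pi*f*t + phi): a dense band of random frequencies
# approximates a continuous spectrum, while a handful of fixed frequencies gives
# a discrete (line) spectrum.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 4096, endpoint=False)

f_band = rng.uniform(50.0, 150.0, size=500)          # ~continuous band of frequencies
f_lines = np.array([60.0, 90.0, 120.0])              # a few discrete lines

def superpose(freqs):
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(freqs))
    return sum(np.sin(2.0 * np.pi * f * t + p) for f, p in zip(freqs, phases))

for name, sig in [("band", superpose(f_band)), ("lines", superpose(f_lines))]:
    spec = np.abs(np.fft.rfft(sig))
    strong = (spec > 0.1 * spec.max()).sum()         # rough count of strong components
    print(name, strong)                              # band: many; lines: a few
```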
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9406881928443909, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/79785/geometry-of-the-hilbert-sphere
|
## Geometry of the Hilbert sphere
Let $X$ be the unit sphere in $\ell^2$, i.e. $X=\{x\in\ell^2: \|x\|=1\}$. Let the metric on $X$ be the geodesic metric, i.e. $d(x,y)=\cos^{-1}\langle x,y\rangle$. Call a set a ball-intersection if it is an intersection of closed balls with centers in $X$.
Does there exist a decreasing sequence of nonempty ball-intersections in $X$ with void intersection?
If we let $S_0=\cap_i B[e_i,\pi/2]=\{x\in X: x_i\ge 0 \forall i \},$ $S_1=\{x\in S_0: x_1=0\},$ $S_2=\{x\in S_0: x_1=x_2=0\}, \cdots$, then $\cap_i S_i=\emptyset$. But $S_i$ for $i\neq 0$ are not ball-intersections.
-
$S_n=S_0\cap B[-e_1,\pi/2]\cap\dots\cap B[-e_n,\pi/2]$, so it is a ball-intersection. – Anton Petrunin Nov 2 2011 at 1:41
@Anton. Thanks for pointing this out. – TCL Nov 2 2011 at 2:05
## 1 Answer
I think yes. Your balls are of the form $X\cap S(x,r)$ where $x\in X$ and $S(x,r)$ is the slice $\{y: \|y\|\le 1 \ \text{and} \ \langle x,y\rangle \ge r\}$ of the unit ball. Note that if $y$ is in a slice, so is $y/\|y\|$. The slices have non-empty intersection because they are weakly compact.
EDIT: This argument looks OK if all $r$ are positive, but $r$ can be negative, so some further thought is needed.
-
@Bill. Fortunately, for my purpose $r\ge 0$ is what I need. Thank you. – TCL Nov 2 2011 at 1:35
@TCL, it works only for $r>0$. – Anton Petrunin Nov 2 2011 at 1:49
@Anton. You are right. $r>0$ is what I need. – TCL Nov 2 2011 at 2:03
Good, then I won't think about the general case. – Bill Johnson Nov 2 2011 at 2:46
The idea works for closed geodesic convex subsets of $X$ with diameter $\le k<\pi/2$. – TCL Nov 2 2011 at 11:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9020098447799683, "perplexity_flag": "middle"}
|
http://www.physicsforums.com/showthread.php?p=1950583
|
Physics Forums
## Two similar Bayesian problems. Did I get them right?
I would appreciate it if someone could help me solve and understand these two problems. The first version of this post contained my attempts to solve them, but I have deleted those parts because I saw that I had messed up badly. I really suck at this type of problem, but perhaps someone can show me a good way to approach them.
Edit: OK, now I think I get it. If I still think I'm right in 20 minutes I'll probably edit this post again and add my new attempts to solve the problems.
Another edit: I have to go to bed, so I don't have time to post the explanations, but the results I get are 10/11 for the first problem and 1/2 for the second. Does that sound right?
This is not homework by the way. Oh, and since I'm asking about "Bayesian" probabilities, I would also appreciate if someone could tell me how that word is supposed to be pronounced. Baysian? Buy-eeshan? Buy-eezian? Bay-eezian? I've been wondering about that for years.
Problem 1 Two identical boxes. One of them contains 10 balls numbered 1-10. The other one contains 100 balls numbered 1-100. You don't know if the box on the left contains 10 or 100. You use a coin flip to choose one of the boxes and ask a friend to pick a ball at random from it. The ball he picks has the number 9 written on it. What is the probability that the box you chose contained 10 balls?
Problem 2 Two identical buildings. Both of them contain 100 rooms numbered 1-100. 110 people are blindfolded and randomly put into rooms 1-10 of building A, and rooms 1-100 of building B. You're one of those people, and you're told that your room number is 9. What's the probability that you're in building A?
"Bayesian" comes from a guy named "Bayes", which would be pronounced like "Bays". So "Bayesian" is pronounced like "Bays-Ian". You can actually hear it here.
Anyways, the idea is to use the idea of conditional probability: $$P(A | B) = \frac{P(A \cap B)}{P(B)}$$
So the probability of A, given that B has occurred, is equal to the probability that both A and B occur, divided by the probability that B occurs. In problem 1, you want to calculate the probability that the box chosen had 10 balls, given that the ball that was picked had 9 written on it.
Thanks AKG. At least now I know how to pronounce Bayesian.
I'm not sure I understand the formula for the conditional probability though. A and B are obviously not independent in the formula. If they were, the formula would be kind of pointless, since we would have P(A|B)=P(A) and P(A and B)=P(A)*P(B). What I don't understand is what P(A and B) means when A and B are not independent. Anyway, that doesn't matter much right now, since I believe my solutions are correct. If I'm wrong, I hope someone will tell me. These are my solutions:
Problem 1
There was a 1/2 probability that you picked the box with 10 balls and a 1/2 probability that you picked the box with 100 balls. If you picked the box with 10 balls, it was certain that the ball your friend picked would have a number less than 11. If you picked the box with 100 balls, there was only a 1/10 chance that he would pick a ball with a number less than 11, and a 9/10 chance that he would not. From this we get the probabilities for each possibility:

| | Small number | Large number |
|---|---|---|
| Box with 10 | 1/2 * 1 = 1/2 | 1/2 * 0 = 0 |
| Box with 100 | 1/2 * 1/10 = 1/20 | 1/2 * 9/10 = 9/20 |

If we do this a large number of times, we will get a ball with a small number from the box with 10 balls 1/2 of the time, and we will get a small number 1/2 + 1/20 = 11/20 of the time. The probability we seek is the first of those numbers divided by the second: (1/2)/(11/20) = 10/11.
Problem 2
You were assigned a room that was chosen at random from a set of 110 rooms, 10 of which are in building A, so there was a 1/11 probability that you ended up in building A and a 10/11 probability that you ended up in building B. If you ended up in building A, it was certain that you would get a low room number. If you ended up in building B, there was a 1/10 probability that you would get a low room number and a 9/10 probability that you would get a high room number. From this we get the probabilities for each possibility:

| | Small number | Large number |
|---|---|---|
| Building A | 1/11 * 1 = 1/11 | 1/11 * 0 = 0 |
| Building B | 10/11 * 1/10 = 1/11 | 10/11 * 9/10 = 9/11 |

If you do this a large number of times, you will find yourself in a room with a small number in building A 1/11 of the time, and you will find yourself in a room with a small number 2/11 of the time. The probability we seek is the first of those numbers divided by the second: (1/11)/(2/11) = 1/2.
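A quick Monte Carlo sanity check of both answers (a rough simulation sketch; trial count and random seed are arbitrary):

```python
# Simulate both problems and estimate the conditional probabilities directly.
import random

random.seed(0)
trials = 200_000

# Problem 1: pick a box by coin flip, draw a ball; condition on the ball being 9.
hits = small_box = 0
for _ in range(trials):
    box10 = random.random() < 0.5
    ball = random.randint(1, 10) if box10 else random.randint(1, 100)
    if ball == 9:
        hits += 1
        small_box += box10
print(small_box / hits)          # ~10/11 ~ 0.909

# Problem 2: 110 people into rooms 1-10 of A and 1-100 of B; condition on room 9.
hits = in_A = 0
for _ in range(trials):
    person = random.randint(1, 110)          # uniformly one of the 110 slots
    building_A = person <= 10
    room = person if building_A else person - 10
    if room == 9:
        hits += 1
        in_A += building_A
print(in_A / hits)               # ~1/2
```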
## Two similar Bayesian problems. Did I get them right?
I would suggest that you use the formula:
$$P(A|B)= \frac{P(B)P(B|A)}{P(A)}$$
instead. In the first example $$A$$ is "the box contains 10 balls" and $$B$$ is "You pick ball number 9", we get $$P(B)= \frac{1}{2}(\frac{1}{10}+\frac{1}{100})$$, $$P(B|A)= \frac{1}{10}$$, and $$P(A)=\frac{1}{2}$$. Hence $$P(A|B)= \frac{10}{11}$$.
Quote by DavidK I would suggest that you use the formula: $$P(A|B)= \frac{P(B)P(B|A)}{P(A)}$$
Did you mean P(A|B) = P(A)P(B|A)/P(B)?
Quote by EnumaElish Did you mean P(A|B) = P(A)P(B|A)/P(B)?
Yes I did
I believe the answer to both problems is 10/11. I think it's straightforward in the first case, whereas in the second one the probability of picking room no. 9, given that the person is in building A, is only 1/10, because only 10 rooms were occupied and we chose an occupied room.
http://mathoverflow.net/questions/19021/avoiding-minkowskis-theorem-in-algebraic-number-theory/19027
## Avoiding Minkowski’s theorem in algebraic number theory.
For any course in algebraic number theory, one must prove the finiteness of class number and also Dirichlet's unit theorem. The standard proof uses Minkowski's theorem. Is there a way to avoid it?
The reasons I am asking this question are the following.
1. Minkowski lived long after Dirichlet and Dedekind (esp. Dirichlet). So the original proof cannot likely have used Minkowski's theorem as such. If the original proof did use Minkowski's theorem, then it was of course found by someone else, most probably Dirichlet, and it is unfair to use the name Minkowski's theorem.
2. Even more importantly, the finiteness of the class number and some version of the unit theorem is true (at least I hope so) for all global fields. And there of course one cannot talk of Minkowski's theorem.
The objection I have for Minkowski's theorem is that it seems to be ad hoc, coming out of nowhere. And it seems that not much work is going on nowadays in the subject of geometry of numbers.
So it will be really nice to have a method which would feel more natural and is perhaps more general.
-
Great question, but I would like to debate the statement that "there is not much work going on in the subject of geometry of numbers". Curt Mcmullen just recently proved Minkowski's conjecture in 6 dimensions, and reduced all higher dimensions to a simpler problem. Geometry of numbers is still studied quite a bit; it is frequently a study of geodesics now, but lattices, lattice packings and geodesics (especially on hyperbolic space) are all geometry of numbers and still very much studied. – Ben Weiss Mar 22 2010 at 15:34
While I don't share your dislike for Minkowski's theorem, I also wonder whether there is a natural way to avoid it. I've never read the literature on higher regulators, and I wonder how the necessary finiteness theorems are proven, say for the regulators of Borel and Beilinson for $K_3$. Perhaps if one understands the proofs for higher regulators, a "natural-feeling" method might become evident. – Marty Mar 22 2010 at 15:40
Minkowski's theorem and its application are no doubt great. It is not that I dislike it. It is just a quest in another direction. – Regenbogen Mar 22 2010 at 15:56
@Ben Weiss. Thank you. – Regenbogen Mar 22 2010 at 15:57
There appears to be a proof on PlanetMath: planetmath.org/encyclopedia/… – Tyler Lawson Mar 22 2010 at 16:30
## 4 Answers
In the end all you need is the pigeonhole principle. Minkowski's theorem is just a sharpening of it. If you want a uniform proof for number fields and function fields for both the unit theorem and finiteness of class number which uses just the pigeonhole principle and is likely close to the original ones, see:
Axiomatic characterization of fields by the product formula for valuations E. Artin and G. Whaples, Bull. AMS 51 (1945) 469-492.
-
Ah! So it is there in Artin-Whaples? Thank you! – Regenbogen Mar 22 2010 at 15:56
For a course on algebraic number theory, you certainly can prove the finiteness of the class group without Minkowski's theorem. For example, if you look in Ireland-Rosen's book you will find a proof there which they attribute to Hurwitz. It gives a worse constant (which depends on a choice of $\mathbf Z$-basis for the ring of integers of the number field; changing the basis can shrink the constant, but it's still generally worse than Minkowski's) but it is computable and you can use it to show, say, that $\mathbf Z[\sqrt{-5}]$ has class number 2.
As for the history of the proof of the unit theorem, it was proved by Dirichlet using the pigeonhole principle. If you think about it, Minkowski's convex body theorem is a kind of pigeonhole principle (covering the convex body by translates of a fundamental domain for the lattice and look for an overlap). You can find a proof along these lines in Koch's book on algebraic number theory, published by the AMS. Incidentally, Dirichlet himself proved the unit theorem for rings of the form $\mathbf Z[\alpha]$; the unit theorem is true for orders as much as for the full ring of integers (think about Pell's equation $x^2 - dy^2 = 1$ and the ring $\mathbf Z[\sqrt{d}]$, which need not be the integers of $\mathbf Q(\sqrt{d})$), even though some books only focus on the case of a full ring of integers. Dirichlet didn't have the general conception of a full ring of integers.
One result which Minkowski was able to prove with his convex body theorem that had not previously been resolved by other techniques was Kronecker's conjecture (based on the analogy between number fields and Riemann surfaces, with $\mathbf Q$ being like the projective line over $\mathbf C$) that every number field other than $\mathbf Q$ is ramified at some prime.
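As a small side illustration of the Pell's equation example mentioned above (this snippet is mine, not part of the answer): a brute-force search for the fundamental solution of $x^2 - dy^2 = 1$ with $y \ge 1$, i.e. a fundamental unit of the order $\mathbf Z[\sqrt d]$, assuming $d$ is a small non-square.

```python
from math import isqrt

def pell(d, y_limit=10**6):
    """Smallest (x, y) with x^2 - d*y^2 = 1 and y >= 1, for a non-square d."""
    for y in range(1, y_limit):
        x_squared = d * y * y + 1
        x = isqrt(x_squared)
        if x * x == x_squared:
            return x, y

print(pell(2), pell(3), pell(5))   # (3, 2), (2, 1), (9, 4)
```

The existence of such a solution for every non-square $d$ is exactly the rank-one case of the unit theorem for the order $\mathbf Z[\sqrt d]$.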
-
The proof in Ireland-Rosen is essentially due to Kronecker (his thesis), and predates even the introduction of ideal numbers. Kronecker gave his proof in the case of cyclotomic fields; the proof goes through in general, however, once you know what an integral basis is. – Franz Lemmermeyer Mar 22 2010 at 19:27
Aha! Saying this proof of class number finiteness goes back to Hurwitz is repeated in another book also. If it is originally due to Kronecker, do you know why it is attributed to Hurwitz? – KConrad Mar 22 2010 at 19:50
Probably because hardly anyone ever read Kronecker's thesis. It is in Latin, and his main result is "Dirichlet's unit theorem" for cyclotomic fields. – Franz Lemmermeyer Mar 23 2010 at 6:12
I found the thesis (Crelle vol. 93 pp. 1--52, or visible at the link gdz.sub.uni-goettingen.de/dms/load/img/…) and on page 15 the "Hurwitz" argument jumps out from the Latin. It looks like here Kronecker is working in a subfield (with degree $\lambda$) of $\mathbf Q(\zeta_p)$ where $p$ is prime, rather than in a general cyclotomic field. – KConrad Mar 23 2010 at 15:52
That nobody read Kronecker's thesis isn't a complete explanation for why he doesn't get credit on this bound related to the class group. Kronecker reproduces the argument in his long paper on arithmetic in polynomial rings (Crelle 92 1882, see pp. 64--65) and points out there that the basic idea was already in his thesis. Nobody at the time understood this paper very well, so one should also say Kronecker doesn't get the credit because his 1882 paper was not widely read either. (In Dedekind's XI-th supplement, 4th ed., Sect. 181, Kronecker's argument is used without attribution.) – KConrad Jun 17 2010 at 1:51
You might want to read the early parts of Basic Number Theory, by A. Weil. Weil shows how to do all of these proofs in a very clean way which is uniform between the number field and the function field case; his proofs are all based on local compactness.
However, I don't think that Weil's proofs are morally different from the standard ones. In my, perhaps limited, experience, all proofs of these results are based on the pigeonhole principle (including the extension that, in a measure space of measure $1$, any two open sets whose measure add up to more than $1$ must meet.)
As a warning, remember that these results are false for function fields over $\mathbb{C}$. If $X$ is an affine algebraic curve over $\mathbb{C}$ with positive genus, then the class group of $\mathcal{O}_X$ is infinite and the unit group may have rank less than the number of punctures minus $1$.
-
@DS: The unit group of an affine algebraic curve certainly contains $\mathbb{C}^{\times}$, so has infinite rank: i.e., more than the number of punctures minus $1$. Or do you mean something different by "rank" here? – Pete L. Clark Apr 6 2010 at 11:29
Thanks, good catch. I meant the rank of $\mathcal{O}_X^{\times}/\mathbb{C}^{\times}$. The units of the base field should be considered as analogous to the roots of unity in the standard Dirichlet unit theorem. – David Speyer Apr 6 2010 at 12:08
Yes, there is a way to get the finiteness of class number as well as the S-unit theorem, avoiding using Minkowski's theorem explicitly. However, it doesn't give you the Minkowski bound. The idea is to carry out the work of Minkowski's theorem on convex bodies in the adele ring of your number field instead of in $\mathbb{R}^{r_1+2r_2}$. Basically you prove what Brian Conrad calls an "adelic Minkowski lemma" involving the Haar measure of subsets of the adele ring. Using this, you can prove the compactness of the group $\mathbb{J}_K^1/K^\times$, where $\mathbb{J}_K^1$ is the kernel of the continuous idelic norm and $K^\times$ is the diagonally embedded discrete image of $K^\times$ in the idele group. The S-unit theorem and finiteness of class number are straightforward consequences of this compactness result. You can find a proof along these lines in Cassels and Frohlich. The finiteness of class number is easier, and comes from a natural surjective, continuous homomorphism from the idele group to the ideal class group (the ideal class group is given the discrete topology); you show that this gives you a continuous surjective map from a quotient of $\mathbb{J}_K^1/K^\times$ and you get that the ideal class group is compact and discrete, hence finite. Tom Weston used to have a writeup of this stuff on his website that I really liked, but I'm not sure if it's still there.
-
I should add that the combined S-unit theorem and finiteness of class number are equivalent (in the number field case) to the compactness of $\mathbb{J}_K^1/K^\times$. I think the compactness of the latter group is usually proved using the S-unit theorem and the finiteness of class number, not the other way around. – Keenan Kidwell Mar 22 2010 at 16:43
http://mathhelpforum.com/advanced-statistics/47952-exponential-distribution-question-print.html
# exponential distribution question
• September 6th 2008, 04:10 PM
lllll
exponential distribution question
Let $Y$ be an exponentially distributed random variable with mean $\beta$. Define a random variable $X$ in the following way: $X=k$ if $k-1 \leq Y \leq k \ \ \mbox{for} \ \ k =1,2,...,n$
a) Find $P(X=k) \ \ \forall \ \ k=1,2,...,n$
b) Show that your answer to part a) can be written as:
$P(X=k) =(e^{-\frac{1}{\beta}})^{k-1}(1-e^{-\frac{1}{\beta}}) \ \ \forall \ \ k =1,2,...,n$
I would think for a) it would be $\int_{k-1}^{k} \frac{1}{\beta} e^{-\frac{x}{\beta}} dx = -e^{-\frac{x}{\beta}} \bigg{|}^{k}_{k-1} = -e^{-\frac{k}{\beta}}+e^{-\frac{k-1}{\beta}}$, but this doesn't seem right since k is not continuous.
and for b) I would think that you have to manipulate the function you got in a) to get what is shown, but am clueless on how to do so. Any help would be greatly appreciated.
• September 6th 2008, 05:42 PM
Laurent
Your answer to a) is correct.
Besides, this will be confirmed by b). You need to expand $(e^{-\frac{1}{\beta}})^{k-1}(1-e^{-\frac{1}{\beta}})$ (into a sum of two terms) and use the properties $e^ae^b=e^{a+b}$ and $\left(e^a\right)^b=e^{ab}$, in order to write it like your result in a).
(And if you can do that, then you can use the same computations in the reverse order to go from the result a) to the expected formula, if you prefer)
• September 6th 2008, 05:46 PM
mr fantastic
Quote:
Originally Posted by lllll
I would think for a) it would be $\int_{k-1}^{k} \frac{1}{\beta} e^{-\frac{x}{\beta}} dx = -e^{-\frac{x}{\beta}} \bigg{|}^{k}_{k-1} = -e^{-\frac{k}{\beta}}+e^{-\frac{k-1}{\beta}}$, but this doesn't seem right since k is not continuous.
and for b) I would think that you have to manipulate the function you got in a) to get what is shown, but am clueless on how to do so. Any help would be greatly appreciated.
Your answer to a) is correct. Why does the fact that k is discrete bother you?
b) Note that $e^{-(k-1)/\beta} - e^{-k/\beta} = e^{-(k-1)/\beta} - e^{-k/\beta} \, e^{1/\beta} \, e^{-1/\beta}$
$= e^{-(k-1)/\beta} - e^{-(k-1)/\beta} \, e^{-1/\beta} = e^{-(k-1)/\beta} \left( 1 - e^{-1/\beta}\right)$.
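A quick numerical check that the two expressions agree (my own sketch, not part of the original replies), using an arbitrary value of $\beta$:

```python
import numpy as np

beta = 2.0                      # arbitrary choice of the mean
k = np.arange(1, 9)
p_a = np.exp(-(k - 1) / beta) - np.exp(-k / beta)                 # result from part a)
p_b = np.exp(-1 / beta) ** (k - 1) * (1 - np.exp(-1 / beta))      # form asked for in part b)
print(np.allclose(p_a, p_b))    # True
```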
http://math.stackexchange.com/questions/253902/sum-m-0n-m-np2-n-choose-m-pm-qn-m-npq
# $\sum_{m=0}^n (m-np)^2 {n \choose m} p^m q^{n-m} = npq$
How to show that:
$\sum_{m=0}^n (m-np)^2 {n \choose m} p^m q^{n-m} = npq$
-
do you really want to show it this way or can you use the binomial random variable? – Jean-Sébastien Dec 8 '12 at 18:11
## 2 Answers
Here is a way to see it without having to expand anything in the sum. Consider a random variable $X$ following a binomial distribution with parameters $n,p$. By definition, it is the sum of $n$ i.i.d. Bernoulli$(p)$ random variables, which have expectation $p$ and variance $pq$. So since $$X=X_1+X_2+\cdots+X_n,$$ its expectation is the sum of the expectations of all the $X_m$, so $E[X]=\sum_{m=1}^nE[X_m]=np$. Since the $X_m$'s are independent, the variance of their sum is the sum of their variances, so $$Var[X]=npq,$$ $(q=1-p)$. By definition of the variance, you also have $$Var[X]=\sum_{m=0}^{n}(m-np)^2\binom{n}{m}p^{m}q^{n-m}.$$ Thus, both are equal.
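For what it's worth, here is a small numerical check of the identity (a sketch of mine, not part of the answer), with arbitrary $n$ and $p$:

```python
from math import comb

n, p = 12, 0.3
q = 1 - p
lhs = sum((m - n * p) ** 2 * comb(n, m) * p ** m * q ** (n - m) for m in range(n + 1))
print(lhs, n * p * q)   # both equal 2.52, up to floating-point error
```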
-
$(m-np)^2\binom n m=\{m(m-1)+(1-2np)m+n^2p^2\}\binom n m=n(n-1)\binom{n-2}{m-2}+(1-2np)n\binom {n-1}{m-1}+n^2p^2\binom n m$
as $m\binom n m=m\frac{n!}{m!(n-m)!}=mn\frac{(n-1)!}{m\cdot(m-1)!\{(n-1)-(m-1)\}!}=n\binom{n-1}{m-1}$ for $m\ge 1$
and $m(m-1)\binom n m=m(m-1)\frac{n!}{m!(n-m)!}$ $=m(m-1)n(n-1)\frac{(n-2)!}{m(m-1)\cdot(m-2)!\{(n-2)-(m-2)\}!}=n(n-1)\binom{n-2}{m-2}$ for $m\ge 2$

Summing the three pieces against $p^mq^{n-m}$ and using the binomial theorem then gives $n(n-1)p^2+(1-2np)np+n^2p^2=np-np^2=npq$.
-
http://mathhelpforum.com/advanced-applied-math/15831-physics-complex-numbers.html
# Thread:
1. ## Physics and Complex numbers
Hey guys,
I was looking through one of the past Advanced Physics exam papers and I came across a question which is a little confusing at the end (the answer that is):
Consider a forced, damped simple harmonic oscillator, which obeys the equation of
motion:
$m\frac{d^2x}{dt^2} = -kx-bv+Fcos(\Omega t)$
a) First consider the un-driven case ( $F=0$)
Using the substitution
$x(t) = Ae^{\lambda t}$
where A is an arbitrary constant, together with de Moivre’s theorem
$e^{i \omega t}=cos(\omega t)+isin(\omega t)$
derive the expression giving the damped SHM:
$x(t)= Ae^{\frac{-bt}{2m}}cos(\omega t)$
Answer:
And the answer is too long so I'll just post the last few lines. Basically you let x(t)= Ae^(lambda)t dx/dt = A(lambda)e^(lambda)t and
d^2x/dt^2= A(lambda)^2 e^(lambda)t
sub d^2x/dt^2 into original equation (given above).
Divide by Ae^(lambda)t to eliminate it on both sides.
Take everything over to the left hand side to get a quadratic in (lambda)^2.
Quadratic formula.
You get:
x(t) = A e ^(-bt/2m) e^ (+-i (omega) t) where omega = the discriminant of the quadratic= i((k/m - b^2/4m^2)^1/2)
I apologise for the lack of LaTex but it would have taken like 30 minutes. The last 2 lines of the proof/answer are in LaTex:
= $Ae^{\frac{-bt}{2m}}(cos(\omega t) + i sin (\omega t))$
(I know i have neglected the -iomega t case [as did the answers], but I believe this just gives $cos(\omega t) - isin(\omega t)$)
Now here is where I get confused.
The next line is:
$x(t)=Ae^{\frac{-bt}{2m}}(cos(\omega t)$
Why/how did they eliminate the sin? Is it because we are dealing with a real oscillator and an expression for oscillation cannot have an imaginary part? Does this hold with any example (that is, if you're dealing with something physical in the real world and you use complex numbers in the proof, can you just eliminate the imaginary part at the end of the proof)?
Also if we end up eliminating the imaginary part in the end, why did we use complex numbers to prove it in the first place? I am unsure why complex numbers are used so rigorously to prove mathematical equations (especially if part of it is just deleted at the end of the proof).
Much thanks for any responses.
2. Originally Posted by behemoth100
$m\frac{d^2x}{dt^2} = -kx-bv+Fcos(\Omega t)$
a) First consider the un-driven case ( $F=0$)
The solution that you give: $x(t) = Ae^{\frac{-bt}{2m}}\cos (\omega t)$ is wrong!
Do you know how to solve this differential equation?
Since $F=0$ rewrite this equation as:
$m x'' + kx = - bv$
This is a non-homogeneous differential equation.
We begin by finding the characteristic equation:
$r^2 + k =0 \Rightarrow r^2 = -k$.
Note that $-k<0$ since $k$ the spring coefficient is positive.
Thus, $r = \pm i \sqrt{k}$
But that is still not the solution to the non-homogeneous differential equation; we need to find a particular solution, which is $x = -\frac{bv}{k}$.
So the general solution is given by:
$x(t) = C_1 \sin (\sqrt{k}) + C_2 \cos (\sqrt{k}) - \frac{bv}{k}$
-----
This looks nothing like what you posted. And I cannot help you out with their derivation because mathematical physics derivations never make sense to me. They make you feel like an idiot.
3. That is true, but it can also be said that for many equations (not all but some) there is more than one way to prove it. However in the question it states that you MUST use the substitution x(t) = Ae^(lambda t)
and you MUST use de moivres theorem, and your resulting answer MUST be the answer THEY give, in terms of A, e, b, t, m, and omega, as the equation is used in later questions.
Edit: Also it's not the proof I am having trouble with really, it's the principles they employed in the end (i.e. eliminating isin(omega t)). I like to learn and understand the principles behind the maths, rather than just the method of doing a question. I'd prefer to know WHY you can get rid of isin(omega t), whether you can do it in any and every case, and why we used complex numbers (in this question and in other mathematical proofs which any student will inevitably do in a maths lecture) to prove equations, rather than just remembering or being told that you CAN eliminate isin(omega t) (with no explanation as to why).
P.s. I do know that complex numbers can be useful, I'm just wondering how they are useful in this case, and other cases involving the proof of mathematical equations that are physically applicable (e.g. period of a pendulum, or driven oscillation, Newtonian physics, etc)
4. Originally Posted by behemoth100
That is true, but it can also be said that for many equations (not all but some) there is more than one way to prove it. However in the question it states that you MUST use the substitution x(t) = Ae^(lambda t)
and you MUST use de moivres theorem, and your resulting answer MUST be the answer THEY give, in terms of A, e, b, t, m, and omega, as the equation is used in later questions.
My point is that the solution that you posted for $x(t)$ is totally wrong. Just substitute it into the equation and see for yourself. I am not saying that I have a different way of deriving this; I am saying that what you did is wrong. Unless I am missing something here because of my limited physics skills.
5. Originally Posted by ThePerfectHacker
...we need to find a particular solution, which is $x = -\frac{bv}{k}$. So the general solution is given by:
$x(t) = C_1 \sin (\sqrt{k}) + C_2 \cos (\sqrt{k}) - \frac{bv}{k}$
$v=\frac{dx}{dt}$
RonL
6. Originally Posted by ThePerfectHacker
My point is that the solution that you posted for $x(t)$ is totally wrong. Just substitute it into the equation and see for yourself.
Oh right, I see what you're saying. Actually this is not my working; this is the worked answer given for the End of Semester exam for the Physics Advanced 2003 paper.
7. And can you believe this question was only worth 2 marks? Far out
8. Originally Posted by behemoth100
...Why/how did they eliminate the sin? Is it because we are dealing with a real oscillator and an expression for oscillation cannot have an imaginary part?
The general solution of the homogeneous equation is of the form:
$x=e^{-bt/2m}\left[Ae^{i\omega t}+Be^{-i \omega t}\right]$
(that is, the indicial equation has two roots $-b/2m \pm i \omega$)
Now if we impose real initial conditions on $x$ and $x'$ we find $A=\bar{B}$ and $x$ is always
real, but what you have for the solution is:
$x=e^{-bt/2m}\left[a \cos(\omega t)+b \sin(\omega t)\right]$.
What you give for the general real solution of the homogeneous equation cannot be right, as it has only one arbitrary constant.
RonL
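A quick symbolic check of this point (my own sketch, not part of the thread), assuming the undriven equation $m x'' + b x' + k x = 0$ with positive constants: sympy's general solution indeed carries two arbitrary constants, as stated above.

```python
import sympy as sp

t = sp.symbols('t', real=True)
m, b, k = sp.symbols('m b k', positive=True)
x = sp.Function('x')

# Un-driven damped oscillator: m x'' + b x' + k x = 0
ode = sp.Eq(m * x(t).diff(t, 2) + b * x(t).diff(t) + k * x(t), 0)
sol = sp.dsolve(ode, x(t))
print(sol)   # x(t) = C1*exp(...) + C2*exp(...): two arbitrary constants C1, C2
```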
9. Originally Posted by behemoth100
And can you believe this question was only worth 2 marks? Far out
the question itself is easy enough; it's the worked solution that is crap.
RonL
10. Uhh ok, you guys are getting me a little confused.
I haven't actually heard the term homogeneous equation before (though looking it up on google, I realise what you're basically saying), but there is no mention of general forms of HE, the lectures (remember it's physics not maths) haven't mentioned it, so it's safe to say (trust me on this) that the general formulas you guys are talking about are not needed nor used.
I do greatly appreciate the speedy responses though.
If I may, I have posted the link to the worked answers for that question in the 2003 exam. It is question 9 a)
http://www.physics.usyd.edu.au/ugrad...tions_2003.pdf
11. Originally Posted by CaptainBlack
the question itself is easy enough, its the worked solution that is crap.
RonL
I agree totally, I got down to the end easily enough, just got stuck with the sudden deletion of the sin.
NOTE THAT FOR THE LINK I PROVIDED THERE IS A TYPO:
On the second line of the equation they have -b(lambda)e^(lambda t) whereas there should be an A there. However their equations do take this into account; they just skipped the A key when typing it up, as can be seen on the next line once they have divided through by Ae^(lambda t).
Also remember that this is the first semester of first year undergraduate physics .
12. Originally Posted by behemoth100
If I may, I have posted the link to the worked answers for that question in the 2003 exam. It is question 9 a)
http://www.physics.usyd.edu.au/ugrad...tions_2003.pdf
The model answer is wrong, it takes the step:
$x(t) = A e^{-bt/2m}e^{\pm i \omega t}$
$= A e^{-bt/2m}\left(\cos(\omega t) + i \sin(\omega t) \right)$
which is invalid.
In fact, even the first equality above is wrong; it should be:
$x(t) = e^{-bt/2m}\left(A e^{+ i \omega t}+B e^{- i \omega t}\right)$
RonL
13. Originally Posted by behemoth100
Also remember that this is the first semester of first year undergraduate physics .
But it still involves the solution of second order linear constant coefficient ordinary
differential equations, which are essential for an awful lot of Physics, and the model
answer should still give the correct solution.
Suppose the initial conditions are $x(0)=0,\ x'(0)=1$; the first of these forces
$A=0$, and so the second cannot be satisfied by the model solution.
RonL
14. Originally Posted by CaptainBlack
The model answer is wrong, it takes the step:
$x(t) = A e^{-bt/2m}e^{\pm i \omega t}$
....... $= A e^{-bt/2m}\left(\cos(\omega t) + i \sin(\omega t) \right)<br />$
RonL
May I ask why this is wrong?
By euler's formula:
$e^{i\theta}= cos(\theta) + isin (\theta)$
Let $\theta = \omega t$
Then:
$e^{i\omega t} = cos (\omega t) + i sin (\omega t)$
for $e^{-i\omega t}$
we rewrite $e^{i(-\omega) t}$
$e^{-i\omega t}= cos(-\omega t) + i sin (-\omega t)$
$e^{-i\omega t}= cos (\omega t) - isin (\omega t)$
as $cos(-\theta)=cos(\theta)$
and $sin(-\theta)= - sin(\theta)$
15. I should also add that the general formula for damped oscillation IS given by
$x(t)=Ae^{-bt/2m}cos(\omega t)$ in physics textbooks, so the answer should be correct.
http://physics.stackexchange.com/questions/17638/has-anyone-theorized-a-connection-between-entropy-and-quantum-uncertainty?answertab=oldest
Has anyone theorized a connection between entropy and quantum uncertainty?
I apologize if this kind of idle theorizing is frowned upon here, but I was wondering if it is possible that the Second Law of Thermodynamics is a consequence of quantum uncertainty.
I've heard the entropy of a system defined as the number of micro-states it can be in that correspond to the macro-state it has. So that definition makes it sound like entropy is simply losing information. As we know, entropy increases as time goes on. Now this seems contradictory to me; we know more as time goes on, not less.
Is it possible that, because you can gain more and more information about a system as time goes on (as you can interact with it more), some information needs to be "hidden" from you? And that this process of losing information is entropy?
P.S. I know that what I "know" about any system does not approach the limits set out by the uncertainty principle. But as System A interacts with System B, over time System A's state is more influenced by System B and in that sense System A has gained knowledge of System B.
-
Your premise "we know more as time goes on" cannot be generalized to all thermodynamic systems. Consider this experiment: You have two boxes with white and red balls which are next to each other but with a plate separating them. In the beginning all white balls are on the left, all red on the right. So you know exactly where all white and red balls are. Now pull out the plate and shake the system. The balls will mix, entropy is increased and the we know less about the positions now. Nothing is hidden, the entropy just increased over time and this is not a quantum effect. – Alexander Nov 29 '11 at 22:01
You do know more if you heard and saw the shaking. That represents additional measurements of the system. We just don't know more about the answer to particular question you asked. In fact, the more the box was shaken, the more waves hit your body. So shaking the box increases the amount of information about the box and the thing that was shaking it that is present in your body. – Joe Nov 29 '11 at 23:24
Just to clarify what I mean, quantum uncertainty is about measuring quantities, not exactly knowing things like where the red ball or blue ball is. Your example conflates quantum uncertainty with the human concept of knowing facts. Keep in mind you don't know what particles are in the red and blue balls, or which particles left or came into them. – Joe Nov 30 '11 at 2:18
To determine the position of objects is a measurement. And one of the standard examples for the Heisenberg principle as well. I am sorry that I do not see a real question here and would recommend you to think about your problem/theory again and might try to ask a more well defined question here on this site. – Alexander Nov 30 '11 at 14:13
My question is if any physicists have devised a theory that derives entropy as a result of quantum uncertainty. In your example you cite one piece of information that you don't have because of the passage of time. I am saying you gain and lose information. (For example, observing the shaking lets you know the density of the box and balls better.) What you do is remind us that entropy has occurred. A better question would be, if you shake a box and lose no information about the position of the balls, have you gained information? – Joe Nov 30 '11 at 15:08
3 Answers
This paper considers and relates uncertainty relations, and entropic relations in an information-theoretical sense (amongst other things). Maybe it is possible to extend that to entropic relations in a physical sense.
-
The third law of thermodynamics has a purely quantum-mechanical origin. However, the second law applies equally to classical and quantum-mechanical systems. For instance, it applies to an ideal gas. This tells us that the second law doesn't depend logically on quantum mechanics.
So that definition makes it sound like entropy is simply losing information. As we know, entropy increases as time goes on. Now this seems contradictory to me; we know more as time goes on, not less.
There is a difference between the information content of a system and the information that can actually be extracted from a system by measurements. This is true both classically and quantum-mechanically. In comments, Alexander gave the example of the red and white balls mixing. The system is classical, and in theory, you could observe the final state and use Newton's laws to evolve the system backward in time and find out the initial state. In practice, this fails, because the system is chaotic, so its evolution either forward or backward in time is extremely sensitive to the initial conditions. To extrapolate by a time $t$, you need measurements that have a precision that grows exponentially with $t$, and for large $t$ this rapidly becomes impossible.
-
This paper,
http://www.nature.com/ncomms/journal/v4/n4/full/ncomms2665.html
Shows how a thermodynamic cycle of a gas can generate work for free if generalized uncertainty relations are violated. It ties uncertainty principle to notions of information theory.
-
http://electronics.stackexchange.com/questions/49112/why-would-you-attach-a-diode-to-the-base-of-a-bjt
# Why would you attach a Diode to the base of a BJT?
I was looking at a DC BJT setup for sourcing current and came across this
I have never seen a diode attached to the base of BJTs before and was wondering what it might be used for? I believe it might be used for compensation due to effects in temperature, but I haven't seen much info on this or why you wouldn't bridge the voltage at the base of Q1 with a resistor instead. Does anyone have any suggestions to why you might do something like this?
-
## 4 Answers
It is there to keep the transistor's current less susceptible to temperature changes.
In the case of Q1:
Suppose that instead of having R1 and D1, Q1's base was connected directly to ground.
Emitter current would be: $$I_{e} = \frac{20V - V_{be}}{R_{2}}$$
You can see Ie is susceptible to variations in Vbe, which has a known dependency on temperature (T), so you might as well express it as: $$I_{e}(T) = \frac{20V - V_{be}(T)}{R_{2}}$$
But with the diode, if they are matched and thermally bonded: $$V_{diode}(T) = V_{be}(T)$$
So now: $$I_{e} = \frac{20V+V_{diode}(T)-V_{be}(T)}{R_{2}}$$ Which simplifies to: $$I_{e} = \frac{20V}{R_{2}}$$ Independent of Vbe, and its variations with temperature.
The diode is effectively providing the little voltage offset that would be needed to compensate for Vbe changes with T, in order to maintain a constant current.
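A rough numerical illustration of the same point (a sketch with assumed values, not from the answer): take a nominal junction drop of 0.65 V at 25 °C with a −2 mV/°C tempco for both the diode and Vbe, and compare the emitter current with the base grounded versus with the matched diode at the base, using the formulas above.

```python
V_SUPPLY = 20.0   # volts available across R2, as in the equations above
R2 = 4990.0       # ohms

def vbe(temp_c, v25=0.65, tempco=-0.002):
    """Assumed junction drop in volts as a function of temperature in Celsius."""
    return v25 + tempco * (temp_c - 25.0)

for temp in (0.0, 25.0, 50.0):
    ie_grounded = (V_SUPPLY - vbe(temp)) / R2            # base tied to ground
    ie_diode = (V_SUPPLY + vbe(temp) - vbe(temp)) / R2   # matched, thermally bonded diode
    print(f"{temp:5.1f} C  grounded base: {ie_grounded*1e3:.3f} mA   with diode: {ie_diode*1e3:.3f} mA")
```

With the diode the computed current stays at 20 V / R2 at every temperature, while the grounded-base version drifts with Vbe.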
-
Thanks for the advice and for showing some simple equations for explaining temperature compensation. Usually I am used to biasing with resistors so its interesting to see it with diodes. – user1207381 Dec 12 '12 at 18:42
It's a form of temperature compensation. As long as the diode and the transistor are at the same temperature, the variation in the diode's VF tracks the transistor's VBE, keeping the collector current more constant.
-
The diode is there to provide roughly the same voltage drop as the B-E junction of the transistor does. Often this is done with a second matched transistor in what is called the current mirror configuration:
Look at this closely and see how Q2 will source the same current on its collector as whatever is drawn by I1. This is used in ICs without the resistors. It works because two transistors next to each other that went through the same process are well matched.
-
The diode is used to create an accurate bias point which is about 0.7V above the common return voltage. This bias point is relatively immune to changes in the supply voltage. Whether the positive voltage is 9V or 20V, the top of the diode will be at 0.7V. If we replaced the diode with a resistor, the bias point would not have this property. Its voltage will vary with supply voltage. Double the supply voltage from 9V to 18V, and its voltage will double also.
Why does the circuit want to keep the bias at exactly one diode drop above ground? What that will do is put the emitter of Q1 (top of R2) at approximately ground potential, because of the diode drop across the BE junction of the transistor. Thus the emitter is a "virtual ground". It's not clear why that is important without more information about the circuit: where it is used, for what purpose, and any rationale notes from the designer.
That is, why can't the base of Q1 just be grounded, resulting in a bias point that is just 0.7V lower. Maybe there is no reason. Designers do not always do things for rational reasons, but rather for "ritualistic" reasons. It looks as if the designer wanted the voltage drop across R2 to be precisely 20V. Note how R2 is specified as 4.99K, which is ridiculously precise. A 1% tolerance 5K resistor could be anywhere between 4.95K and 5.05K. A 4.99K resistor isn't something you can actually go out and buy, so you cannot actually build this circuit as specified, unless you use a variable resistor and use your digital potentiometer to tune that resistor to 4.99K. The -20V supply has to be just as precise for such a precise value of R2 to make sense. The current through R2 (and hence the collector current of Q1) will vary with the negative supply voltage.
-
http://mathhelpforum.com/math-challenge-problems/154137-matrices-went-2x2.html
# Thread:
1. ## Matrices went in 2x2
Another challenge problem, inspired by an earlier question.
Given arbitrary 2x2 matrices $A$ and $B$, prove that
$AB+BA=\beta A+\alpha B+(\gamma-\alpha\beta)I$
where $\alpha$ is the trace of $A$, $\beta$ is the trace of $B$ and $\gamma$ is the trace of $AB$ or of $BA$. $I$ is of course the 2x2 identity matrix.
Although a brute force approach would work, a more elegant solution would be welcome.
Enjoy.
Moderator approved. CB
2. We use the Cayley-Hamilton identity for $A, \ B, \ A+B$
$A^2-\alpha A+\det A\cdot I_2=O_2$
$B^2-\beta B+\det B\cdot I_2=O_2$
$(A+B)^2-Trace(A+B)+\det (A+B)\cdot I_2=O_2$
We have $Trace(A+B)=Trace(A)+Trace(B)=\alpha+\beta$
and $\det(A+B)=\det A+\det B+\alpha\beta-\gamma$
Then
$A^2+B^2+AB+BA-\alpha A-\alpha B-\beta A-\beta B+\det A\cdot I_2+\det B\cdot I_2+(\alpha\beta-\gamma)I_2=O_2$
or $AB+BA=\beta A+\alpha B+(\gamma-\alpha\beta)I_2$
3. Originally Posted by red_dog
$(A+B)^2-Trace(A+B)+\det (A+B)\cdot I_2=O_2$
Should be
$(A+B)^2-Trace(A+B)\cdot(A+B)+\det (A+B)\cdot I_2=O_2$
but otherwise spot on! Thanks.
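A quick numerical spot-check of the identity with random matrices (my own addition, not part of the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))

alpha, beta = np.trace(A), np.trace(B)
gamma = np.trace(A @ B)        # equals trace(B @ A)

lhs = A @ B + B @ A
rhs = beta * A + alpha * B + (gamma - alpha * beta) * np.eye(2)
print(np.allclose(lhs, rhs))   # True
```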
http://math.stackexchange.com/questions/122307/gcd-of-any-two-real-numbers
# “GCD” of any two real numbers
This isn't really a GCD question, because GCD is only defined for integers. I'm interested in the the existence of a common divisor of any two non-zero real numbers. In other words can you prove or disprove the following:
Given $x,y \neq 0\in \mathbb{R}, \exists \space g \space s.t. \space x/g \in \mathbb{Z}$ and $y/g \in \mathbb{Z}$.
(I hope my math is understandable, haven't done this in awhile). It's clearly possible for many numbers, including irrational ones (e.g. for multiples of $\pi$, $g = \pi$). Is it possible for all real numbers?
-
If $x/g=m$, and $y/g=n$, then $x/y=m/n$. – Chris Eagle Mar 20 '12 at 0:17
For this to exist for all reals, it would require that the ratio of two rational numbers can be expressed as the ratio of two integers, which is a rational number. – Peter Grill Mar 20 '12 at 0:21
Yeah, what @ChrisEagle said. – Peter Grill Mar 20 '12 at 0:22
@PeterGrill In your first comment, I think you mean "the ratio of two real numbers". – Alex Becker Mar 20 '12 at 0:24
@AlexBecker: I actually meant to say "ratio of two irrational numbers" as that was the example I was thinking about, but real is even better as your answer has shown. – Peter Grill Mar 20 '12 at 0:28
## 2 Answers
The following conditions are equivalent for nonzero reals $x,y$
1. There is a real $g$ such that $x/g$ and $y/g$ are integers
2. The quotient $x/y$ is rational
Proof:
$1 \implies 2$: Since a quotient of integers is rational, your condition implies that
$(x/g) / (y/g) \in \mathbb{Q}$
after clearing $g$ in denominators
$x/y \in \mathbb{Q}$.
$2 \implies 1$:
If $x/y$ is rational, say $x/y=p/q$, then define $g = y/q$ (or $g = x/p$); then $x/g = xq/y = p$ and $y/g = q$ are integers. QED
So any pair of reals with irrational quotient is a counterexample, for example $x=1$ and $y=\sqrt{2}$.
Real numbers $x,y$ with rational quotient are known as commensurable. This is how irrationality was formulated in ancient times. It has been said that the diagonal of a square is not commensurable with its side.
The Euclidean algorithm for finding GCD was originally formulated on segments (reals) - it found a common measure ($g$) given segments of length $x$ and $y$.
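As a small illustration of that last remark (my own sketch, not part of the answer): the same remainder procedure, run on exact rational lengths, terminates and returns a common measure $g$.

```python
from fractions import Fraction

def common_measure(x, y):
    """Euclidean algorithm on segment lengths; terminates when x/y is rational."""
    while y:
        x, y = y, x % y
    return x

x, y = Fraction(7, 2), Fraction(5, 4)     # commensurable: x/y = 14/5 is rational
g = common_measure(x, y)
print(g, x / g, y / g)                    # 1/4 14 5
```

For a pair like $1$ and $\sqrt{2}$ the analogous process never terminates, which is another way of seeing the incommensurability.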
-
The statement is not true. Consider $x=1,y=\pi$. If $y/g=n\in\mathbb Z$, then $g=\pi/n$ so $x/g=n/\pi\notin \mathbb Z$.
-
http://mathhelpforum.com/differential-geometry/170037-orientation-manifolds.html
# Thread:
1. ## orientation of manifolds
Hello,
I have a question about the topic in the headline.
We have defined an oriented manifold as a manifold with an atlas s.t. the determinant of the differential of chart changes is >0, i.e. det $d(x\circ y^{-1})>0$ for all charts x, y.
I think it is true that if we have a non-orientable manifold, then the determinant above has to be 0 for some charts.
To put it another way, if the manifold is not orientable, then it can't happen that det(...)<0.
My questions are:
1) Is my conjecture correct?
2) Do you know an argument why there has to be an atlas s.t. det(..)>0 if we have an atlas s.t. det(..)<0?
I tried to put a "-" in each chart; then the "-" cancel out, since we have the composition of two charts.
Regards
2. The requirement is that if your atlas is $\{(U_\alpha,h_\alpha)\}_{\alpha\in A}$, then for every $\alpha,\beta\in A$ and every $p\in U_\alpha\cap U_\beta$, you have $\mathrm{det}(D(h_\beta\circ h_\alpha^{-1})|_{h_\alpha(p)})>0$. It sounds like you might be forgetting that the matrix (and hence its determinant) depends on which point in the chart intersection you look at it from.
Now, since determinant is continuous and we never have determinant zero (these are diffeomorphisms from $\mathbb{R}^m\to\mathbb{R}^m$), the determinant is either all positive or all negative on each connected component of $U_\alpha\cap U_\beta$. But of course there might be several components--for example, the usual construction of the Mobius bundle over $S^1$.
To answer #2 (if I understand it correctly), consider $S^n$ with the atlas given by stereographic projection. There are two charts, their intersection is $S^n-\{N,S\}$ which is connected for n>1, and the determinant can be +1 or -1 depending on how you define your projection.
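A quick symbolic check for $S^2$ (my own sketch, assuming the usual transition map $y = x/|x|^2$ between the two stereographic charts): its Jacobian determinant is negative everywhere, so flipping the sign convention of one projection makes it positive everywhere, as described above.

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
r2 = u**2 + v**2
phi = sp.Matrix([u / r2, v / r2])   # transition map between the two stereographic charts
J = phi.jacobian([u, v])
print(sp.simplify(J.det()))         # -1/(u**2 + v**2)**2, negative everywhere
```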
3. Thank you
4. Go the other way--assuming you have an oriented atlas for AxB, show you can "project" it to an oriented atlas of A. And yes, that's what the charts on the product manifold look like.
5. Hello tinyboss,
I think it is not so obvious how to "project" the atlas. What do you mean? I am trying to solve a similar problem. I mean, any coordinate map of AxB is in general of the form:
(f x g)(x,y):=(f(x,y) x g(x,y)). That is, we have functions which depend on both elements! How can I project this?
Regards
6. Can nobody help me?
Once again:
Let M,N be manifolds of dimension m,n respectively.
We know that M is not orientable, and we want to show that MxN is also not orientable.
If we assume that MxN is orientable, we have to conclude that M has to be orientable, too.
But why?
Regards
http://mathforum.org/mathimages/index.php?title=Solving_Triangles&diff=20695&oldid=20669
# Solving Triangles
### From Math Images
[[Image:Tree_in_park.jpg|center]]
'''Answer'''
{{Hide=

There are several ways to solve this problem. One way is to use [[Inverse Trigonometry]]. Another way is to use a combination of the methods described above.

First, we can use the Pythagorean Theorem to find the length of the hypotenuse of the triangle, from the tip of the shadow to the top of the tree.

::<math> a^{2}+b^{2} = c^{2}</math>

Substitute the lengths of the legs of the triangle for <math>a, b</math>

::<math> 51^{2}+68^{2} = c^{2}</math>

Simplifying gives us

::<math> 2601+4624 = c^{2}</math>

::<math> 7225 = c^{2}</math>

Take the square root of both sides for

::<math> \sqrt{7225} = c</math>

::<math>85 = c</math>

Next, we can use the law of cosines to find the measure of the angle of elevation.

::<math> a^{2}=b^{2}+c^{2} - 2bc \cos A</math>

Plugging in the appropriate values gives us

::<math> 51^{2}=68^{2}+85^{2} - 2(85)(68) \cos A</math>

Simplify for

::<math> 2601= 4624+7225 - 11560 \cos A</math>

::<math> 2601= 11849 - 11560 \cos A</math>

Subtract <math>11849</math> from both sides for

::<math> -9248= -11560 \cos A</math>

Simplify to get

::<math> .8 = \cos A</math>

Finally, take the inverse cosine of both sides: <math>A = \cos^{-1}(.8) \approx 36.9^\circ</math>, so the angle of elevation is about 37 degrees.

}}
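The arithmetic above can be checked in a few lines (a sketch added here, not part of the original answer), using the two leg lengths from the worked solution:

```python
import math

a, b = 51.0, 68.0                            # the two legs used in the answer above
c = math.hypot(a, b)                         # hypotenuse: 85.0
cos_A = (b**2 + c**2 - a**2) / (2 * b * c)   # law of cosines, angle opposite the 51 side
print(c, cos_A, round(math.degrees(math.acos(cos_A)), 1))   # 85.0 0.8 36.9
```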
|other=Trigonometry, Geometry
## Revision as of 13:49, 16 June 2011
The Shadow Problem
Field: Geometry
Created By: Orion Pictures
Website: [ http://en.wikipedia.org/wiki/File:Shadows_and_fog.jpg]
In the 1991 film Shadows and Fog, the eerie shadow of a larger-than-life figure appears against the wall as the shady figure lurks around the corner. How tall really is the ominous character? Filmmakers use the geometry of shadows and triangles to make this special effect.
The shadow problem is a standard type of problem for teaching trigonometry and the geometry of triangles. In the standard shadow problem, several elements of a triangle will be given. The process by which the rest of the elements are found is referred to as solving a triangle.
# Basic Description
A triangle has six total elements: three sides and three angles. Sides are valued by length, and angles are valued by degree or radian measure. According to postulates for Congruent triangles, given three elements, the other three elements can always be determined as long as at least one side length is given. Math problems that involve solving triangles, like shadow problems, typically provide certain information about just a few of the elements of a triangle, so that a variety of methods can be used to solve the triangle.
Shadow problems normally have a particular format. Some light source, often the sun, shines down at a given angle of elevation. The angle of elevation is the smallest--always acute-- numerical angle measure that can be measured by swinging from the horizon. Assuming that the horizon is parallel to the surface on which the light is shining, the angle of elevation is always equal to the angle of depression. The angle of depression is the angle at which the light shines down, compared to the angle of elevation which is the angle at which someone or something must look up to see the light source. Knowing the angle of elevation or depression can be helpful because trigonometry can be used to relate angle and side lengths.
In the typical shadow problem, the light shines down on an object or person of a given height. It casts a shadow on the ground below, so that the farthest tip of the shadow make a direct line with the tallest point of the person or object and the light source. The line that directly connects the tip of the shadow and the tallest point of the object that casts the shadow can be viewed as the hypotenuse of a triangle. The length from the tip of the shadow to the point on the surface where the object stands can be viewed as the first leg, or base, of the triangle, and the height of the object can be viewed as the second leg of the triangle. In the most simple shadow problems, the triangle is a right triangle because the object stands perpendicular to the ground.
In the picture below, the sun casts a shadow on the man. The length of the shadow is the base of the triangle, the height of the man is the height of the triangle, and the length from the tip of the shadow to top of the man's head is the hypotenuse. The resulting triangle is a right triangle.
In another version of the shadow problem, the light source shines from the same surface on which the object or person stands. In this case the shadow is projected onto some wall or vertical surface, which is typically perpendicular to the first surface. In this situation, the line that connects the light source, the top of the object, and the tip of the shadow on the wall is the hypotenuse. The height of the triangle is the length of the shadow on the wall, and the distance from the light source to the base of the wall can be viewed as the other leg of the triangle. The picture below diagrams this type of shadow problem, and this page's main picture is an example of one of these types of shadows.
More difficult shadow problems will often involve a surface that is not level, like a hill. The person standing on the hill does not stand perpendicular to the surface of the ground, so the resulting triangle is not a right triangle. Other shadow problems may fix the light source at a given height, like on a street lamp. This scenario creates a set of two similar triangles.
Ultimately, a shadow problem asks you to solve a triangle by providing only a few elements of the possible six total. In the case of some shadow problems, like the one that involves two similar triangles, information about one triangle may be given and the question may ask to find elements of another.
# A More Mathematical Explanation
Note: understanding of this explanation requires: *Trigonometry, Geometry
## Why Shadows?
Shadows are useful in the set-up of triangle problems because of the way light works. A shadow is cast when light cannot shine through a solid surface. Light travels in straight lines; it does not bend, and the waves travel forward in the same direction in which the light was shined. Light is not like a liquid: it does not fill the space in which it shines the way a liquid assumes the shape of its container.
In addition to travelling in straight lines, light has certain angular properties. When light shines on an object that reflects light, it reflects back at the same angle at which it arrived. Say a light shines onto a mirror. The angle between the beam of light and the wall on which the mirror hangs is the angle of approach. The angle from the wall at which the light reflects off of the mirror is the angle of departure. The angle of approach is equal to the angle of departure.
In another example, a cue ball is bounced off of the wall of a pool table at a certain angle. Just like the way that light bounces off of the mirror, the cue ball bounces off the wall at exactly the same angle at which it hits the wall. The cue ball has the same properties as the beam of light in this case: the angle of departure is the same as the angle of approach. This property will help with certain types of triangle problems, particularly those that involve mirrors.
## More Than Just Shadows
Shadow problems are just one type of problem that involves solving triangles. There are numerous other formats and set-ups for unsolved triangle problems. Most of these problems are formatted as word problems; that is, they set up the problem in terms of some real-life scenario.
There are, however, many problems that simply provide numbers that represent angles and side lengths. In this type of problem, angles are denoted with capital letters, ${A, B, C,...}$, and the sides are denoted by lower-case letters,${a,b,c,...}$, where $a$ is the side opposite the angle $A$.
Ladder Problems One other common problem in solving triangles is the ladder problem. A ladder of a given length is leaned up against a wall that stands perpendicular to the ground. The ladder can be adjusted so that the top of the ladder sits higher or lower on the wall, and the angle that the ladder makes with the ground increases or decreases accordingly. Because the ground and the wall are perpendicular to one another, the triangles that need to be solved in ladder problems always have right angles. Since the right angle is always fixed, many ladder problems require the angle between the ground and the ladder, or the angle of elevation, to be somehow associated with the fixed length of the ladder and the height of the ladder on the wall. In other words, ladder problems normally deal with the SAS scenario: they involve the length from the wall to the base of the ladder, the fixed length of the ladder itself, and the enclosed angle of elevation to determine the height at which the ladder sits on the wall.
Mirror Problems Mirror problems are a specific type of triangle problem which involves two people or objects that stand looking into the same mirror. Because of the way a mirror works, light reflects back at the same angle at which it shines in, as explained above in Why Shadows?. In a mirror problem, the angle at which one person looks into the mirror, or the angle of vision, is the same exact angle at which the second person looks into the mirror. Typically, the angle at which one person looks into the mirror is given along with some other piece of information. Once that angle is known, then one angle of the triangle is automatically known since the light reflects back off of the mirror at the same angle, making the angle of the triangle next to the mirror the supplement to twice the angle of vision.
Sight Problems Like shadow problems, sight problems include many different scenarios and several forms of triangles. Most sight problems are set up as word problems. They involve a person standing below or above some other person or object. In most of these problems, a person measures an angle with a tool called an astrolabe or a protractor. In the most standard type of problem, a person uses the astrolabe to measure the angle at which he looks up or down at something. In the example at the right, the bear stands in a tower of a given height and uses the astrolabe to measure the angle at which he looks down at the forest fire. The problem asks to find how far away the forest fire is from the base of the tower given the previous information.
## Ways to Solve Triangles
In all cases, a triangle problem will only give a few elements of a triangle and will ask to find one or more of the unknown elements. A triangle problem asks for one of the lengths or angle measures that is not given in the problem. There are numerous formulas, methods, and operations that can help to solve a triangle depending on the information given in the problem.
The first step in any triangle problem is drawing a diagram. A picture can help to show which elements of the triangle are given and which elements are adjacent or opposite one another. By knowing where the elements are in relation to one another, we can use the trigonometric functions to relate angle and side lengths.
There are numerous techniques which can be implemented in solving triangles:
• Trigonometry: The Basic Trigonometric Functions relate side lengths to angles. By substituting the appropriate values into the formulas for sine, cosine, or tangent, trigonometry can help to solve for a particular side length or angle measure. This is useful when given a side length and an angle measure.
• Pythagorean Theorem: The Pythagorean Theorem relates the squares of all three side lengths to one another in right triangles. This is useful when a triangle problem provides two side lengths and a third is needed.
$a^{2}+b^{2} = c^{2}$
• Law of Cosines: The Law of Cosines is a generalization of the Pythagorean Theorem which can be used for solving non-right triangles. The law of cosines relates the squares of the side lengths to the cosine of one of the angle measures. This is particularly useful given a SAS configuration, or when three side lengths are known but no angles are, in non-right triangles.
$c^{2} = a^{2} + b^{2} - 2ab \cos C$
• Law of Sines: The Law of Sines is a formula that relates the sine of a given angle to the length of its opposite side. The law of sines is useful in any configuration when an angle measure and the length of its opposite side are given. It is also useful given an ASA configuration, and often the ASS configuration. The ASS configuration is known as The Ambiguous Case since it does not always provide one definite solution to the triangle.
$\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C}$
When solving a triangle, one side length must always be given in the problem. Given an AAA configuration, there is no definite solution to the triangle. According to postulates for Congruent triangles, the AAA configuration proves similarity in triangles, but there is no way to find the side lengths of a triangle.
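For readers who prefer to check these formulas numerically, here is a short Python sketch (the helper names are ours, chosen only for illustration) that turns the Pythagorean Theorem, the Law of Cosines, and the Law of Sines into reusable functions. Angles are given in degrees, as in the examples below.

```python
import math

def hypotenuse(a, b):
    """Pythagorean Theorem: hypotenuse of a right triangle from its two legs."""
    return math.sqrt(a**2 + b**2)

def law_of_cosines_side(a, b, C_degrees):
    """Law of Cosines (SAS): third side from two sides and the included angle."""
    C = math.radians(C_degrees)
    return math.sqrt(a**2 + b**2 - 2 * a * b * math.cos(C))

def law_of_sines_side(a, A_degrees, B_degrees):
    """Law of Sines (AAS/ASA): side opposite angle B, given the pair a and A."""
    return a * math.sin(math.radians(B_degrees)) / math.sin(math.radians(A_degrees))

print(hypotenuse(3, 4))                # 5.0
print(law_of_cosines_side(5, 7, 60))   # about 6.24
print(law_of_sines_side(10, 30, 60))   # about 17.32
```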
## Example Triangle Problems
Example 1: Using Trigonometry
A damsel in distress awaits her rescue from the tallest tower of the castle. A brave knight is on the way. He can see the castle in the distance and starts to plan his rescue, but he needs to know the height of the tower so he can plan properly. The knight sits on his horse 500 feet away from the castle. He uses his handy protractor to find the measure of the angle at which he looks up to see the princess in the tower, which is 15°. Sitting on the horse, the knight's eye level is 8 feet above the ground. What is the height of the tower?
We can use tangent to solve this problem. For a more in depth look at tangent, see Basic Trigonometric Functions.
Use the definition of tangent.
$\tan \theta =\frac{\text{opposite}}{\text{adjacent}}$
Plug in the angle and the known side length.
$\tan 15^\circ =\frac{x ft}{500 ft}$
Clearing the fraction gives us
$\tan 15^\circ (500) =x$
Simplify for
$(.26795)(500) =x$
Round to get
$134 ft \approx x$
But this is only the height of the triangle and not the height of the tower. We need to add 8 ft to account for the height between the ground and the knight's eye-level which served as the base of the triangle.
$134 ft + 8 ft = h$
Simplifying gives us
$142 ft = h$
The tower is approximately 142 feet tall.
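The same computation can be reproduced in a few lines of Python (the variable names below are just for illustration):

```python
import math

angle = math.radians(15)   # angle of elevation measured by the knight
distance = 500             # horizontal distance to the castle, in feet
eye_level = 8              # height of the knight's eye level, in feet

tower_height = math.tan(angle) * distance + eye_level
print(round(tower_height))  # 142
```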
Example 2: Using Law of Sines
A man stands 100 feet above the sea on top of a cliff. The captain of a white-sailed ship looks up at a 45° angle to see the man, and the captain of a black-sailed ship looks up at a 30° angle to see him. How far apart are the two ships?
To solve this problem, we can use the law of sines to solve for the bases of the two triangles since we have an AAS configuration with a known right angle. To find the distance between the two ships, we can take the difference in length between the two ships.
First, we need to find the third angle for both of the triangles. Then we can use the law of sines.
For the white-sailed ship,
$180^\circ - 90^\circ - 45^\circ = 45^\circ$
Let the distance between this boat and the cliff be denoted by $a$.
By the law of sines,
$\frac{100}{\sin 45^\circ} = \frac{a}{\sin 45^\circ}$
Multiplying both sides by $\sin 45^\circ$ gives us
$(\sin 45^\circ)\frac{100}{\sin 45^\circ} = a$
Simplify for
$a = 100 ft$
For the black-sailed ship,
$180^\circ - 90^\circ - 30^\circ =60^\circ$
Let the distance between this boat and the cliff be denoted by $b$.
By the law of sines,
$\frac{100}{\sin 30^\circ} = \frac{b}{\sin 60^\circ}$
Clear the fractions to get,
$100(\sin 60^\circ) = b(\sin 30^\circ)$
Compute the sines of the angle to give us
$100\frac{\sqrt{3}}{2} = b\frac{1}{2}$
Simplify for
$100(\sqrt{3}) = b$
Multiply and round for
$b =173 ft$
The distance between the two boats, $x$, is the positive difference between the lengths of the bases of the triangle.
$b-a=x$
$173-100 = 73 ft$
The boats are about 73 feet apart from one another.
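Here is a quick Python check of both triangles, following the law-of-sines ratios used above (all values in feet):

```python
import math

cliff = 100  # height of the cliff above the sea, in feet

# Distances from each ship to the base of the cliff, from the law of sines.
white = cliff * math.sin(math.radians(45)) / math.sin(math.radians(45))
black = cliff * math.sin(math.radians(60)) / math.sin(math.radians(30))

print(round(white), round(black), round(black - white))  # 100 173 73
```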
Example 3: Using Multiple Methods
At the park one afternoon, a tree casts a shadow on the lawn. A man stands at the edge of the shadow and wants to know the angle at which the sun shines down on the tree. If the tree is 51 feet tall and if he stands 68 feet away from the tree, what is the angle of elevation?
Answer
There are several ways to solve this problem. One way is to use Inverse Trigonometry. Another way is to use a combination of the methods described above.
First, we can use Pythagorean Theorem to find the length of the hypotenuse of the triangle, from the tip of the shadow to the top of the tree.
$a^{2}+b^{2} = c^{2}$
Substitute the length of the legs of the triangle for $a, b$
$51^{2}+68^{2} = c^{2}$
Simplifying gives us
$2601+4624 = c^{2}$
$7225 = c^{2}$
Take the square root of both sides for
$\sqrt{7225} = c$
$85 = c$
Next, we can use the law of cosines to find the measure of the angle of elevation.
$a^{2}=b^{2}+c^{2} - 2bc \cos A$
Plugging in the appropriate values gives us
$51^{2}=68^{2}+85^{2} - 2(85)(68) \cos A$
Simplify for
$2601= 4624+7225 - 11560 \cos A$
$2601= 11849 - 11560 \cos A$
Subtract $11849$ from both sides for
$-9248= -11560 \cos A$
Simplify to get
$.8 = \cos A$
Finally, take the inverse cosine of both sides to find
$A = \cos^{-1}(.8) \approx 36.9^\circ$
The angle of elevation is approximately 37°.
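As a numerical check, the Python sketch below reproduces the longer route (Pythagorean Theorem plus law of cosines) and compares it with the inverse-tangent shortcut mentioned above; both give the same angle.

```python
import math

tree, shadow = 51, 68  # heights and lengths in feet

# Shortcut: invert the tangent directly.
angle_direct = math.degrees(math.atan(tree / shadow))

# Longer route: hypotenuse first, then the law of cosines for angle A.
hyp = math.sqrt(tree**2 + shadow**2)                          # 85.0
cos_A = (shadow**2 + hyp**2 - tree**2) / (2 * shadow * hyp)   # 0.8
angle_via_cosines = math.degrees(math.acos(cos_A))

print(round(angle_direct, 2), round(angle_via_cosines, 2))    # 36.87 36.87
```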
# Why It's Interesting
Shadow Problems are one of the most common types of problem used in teaching trigonometry. The paradigm set up by a shadow problem is simple, visual, and easy to remember. Though a simple device for learning trigonometry, shadow problems are widely used and highly applicable.
Shadows, while an effective paradigm in a word problem, can also be useful in real-life applications: the same triangle methods let us use shadows to calculate heights and distances that would be awkward to measure directly.
# Teaching Materials
There are currently no teaching materials for this page.
http://math.stackexchange.com/questions/85245/locally-fv-processes-are-fv-processes?answertab=oldest
# Locally FV processes are FV processes
Does anyone see why any process which is of locally finite variation has to be of finite variation?
Best regards
## 1 Answer
Hi, I think I got it; please tell me if I'm wrong.
Only one implication has to be proven, as the other is obvious. So let's prove the contrapositive of the statement "$X$ is locally FV implies $X$ is FV". That is, let's take $X$ not an FV process and show it cannot be a locally FV process.
As $X \not\in FV$, there exists an event $A$ such that:
- $P(A)>0$
- $\exists t>0$ with $\int_0^t |dX_s|(\omega)=+\infty$ for every $\omega \in A$.
Then for any sequence of stopping times $\tau^n$ increasing to $\infty$ almost surely, the stopped processes $X^{\tau^n}$ are such that the events $A_n=\{\omega \in \Omega : \int_0^t |dX^{\tau^n}_s|(\omega)=+\infty \}$ satisfy $A_n \to A$ almost surely. Indeed, for $\omega \in A$ and $n$ large enough we have $\tau^n(\omega) > t$, so the stopped path agrees with $X$ on $[0,t]$ and its variation there is still infinite.
Since $P(A)>0$, this shows that $X$ cannot be of locally finite variation.
Best regards
http://www.citizendia.org/Complex_plane
Figure: Geometric representation of z and its conjugate $\bar{z}$ in the complex plane. The distance along the light blue line from the origin to the point z is the modulus or absolute value of z. The angle φ is the argument of z.
In mathematics, the complex plane is a geometric representation of the complex numbers established by the real axis and the orthogonal imaginary axis. It can be thought of as a modified Cartesian plane, with the real part of a complex number represented by a displacement along the x-axis, and the imaginary part by a displacement along the y-axis. [1]
The complex plane is sometimes called the Argand plane because it is used in Argand diagrams. These are named after Jean-Robert Argand (1768–1822), although they were first described by the Norwegian-Danish land surveyor and mathematician Caspar Wessel (1745–1818). [2] Argand diagrams are frequently used to plot the positions of the poles and zeroes of a function in the complex plane.
The concept of the complex plane allows a geometric interpretation of complex numbers. Under addition, they add like vectors. The multiplication of two complex numbers can be expressed most easily in polar coordinates – the magnitude (or modulus) of the product is the product of the two absolute values, or moduli, and the angle (or argument) of the product is the sum of the two angles, or arguments. In particular, multiplication by a complex number of modulus 1 acts as a rotation.
## Notational conventions
In complex analysis the complex numbers are customarily represented by the symbol z, which can be separated into its real (x) and imaginary (y) parts, like this:
$z = x + iy\,$
where x and y are real numbers, and i is the imaginary unit. In this customary notation the complex number z corresponds to the point (x, y) in the Cartesian plane.
In the Cartesian plane the point (x, y) can also be represented (in polar coordinates) as
$(x, y) = (r\cos\theta, r\sin\theta)\qquad\left(r = \sqrt{x^2+y^2}; \quad \theta=\arctan\frac{y}{x}\right).\,$
In the Cartesian plane it may be assumed that the arctangent takes values from −π to π (in radians), and some care must be taken to define the real arctangent function for points (x, y) when x ≤ 0. [3] In the complex plane these polar coordinates take the form
$z = x + iy = |z|\left(\cos\theta + i\sin\theta\right) = |z|e^{i\theta}\,$
where
$|z| = \sqrt{x^2+y^2}; \quad \theta = \arg(z) = -i\log\frac{z}{|z|}.\,$[4]
Here |z| is the absolute value or modulus of the complex number z; θ, the argument of z, is usually taken on the interval 0 ≤ θ < 2π; and the last equality (to |z|e^{iθ}) is taken from Euler's formula. Notice that the argument of z is multi-valued, because the complex exponential function is periodic, with period 2πi. Thus, if θ is one value of arg(z), the other values are given by arg(z) = θ + 2nπ, where n is any integer ≠ 0. [5]
The theory of contour integration comprises a major part of complex analysis. In this context the direction of travel around a closed curve is important – reversing the direction in which the curve is traversed multiplies the value of the integral by −1. By convention the positive direction is counterclockwise. For example, the unit circle is traversed in the positive direction when we start at the point z = 1, then travel up and to the left through the point z = i, then down and to the left through −1, then down and to the right through −i, and finally up and to the right to z = 1, where we started.
Almost all of complex analysis is concerned with complex functions – that is, with functions that map some subset of the complex plane into some other (possibly overlapping, or even identical) subset of the complex plane. Here it is customary to speak of the domain of f(z) as lying in the z-plane, while referring to the range or image of f(z) as a set of points in the w-plane. In symbols we write
$z = x + iy;\qquad f(z) = w = u + iv\,$
and often think of the function f as a transformation of the z-plane (with coordinates (x, y)) into the w-plane (with coordinates (u, v)).
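As a concrete illustration of this notation, Python's built-in complex type and the cmath module can convert between the Cartesian and polar forms; this is only a quick numerical sketch, and note that cmath.phase returns the argument in (−π, π] rather than [0, 2π).

```python
import cmath

z = 3 + 4j
r, theta = abs(z), cmath.phase(z)   # modulus and argument
print(r, theta)                     # 5.0 0.927295...

# Reconstruct z from its polar form |z| e^{i*theta} (Euler's formula).
print(r * cmath.exp(1j * theta))    # approximately (3+4j)
```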
## Stereographic projections
Main article: Stereographic projection
Sometimes it's useful to think of the complex plane as if it occupied the surface of a sphere. Imagine a sphere of unit radius, and put the complex plane right through the middle of it, so the center of the sphere coincides with the origin z = 0 of the complex plane, and the equator on the sphere coincides with the unit circle in the plane.
We can establish a one-to-one correspondence between the points on the surface of the sphere and the points in the complex plane as follows. Given a point in the plane, draw a straight line connecting it with the north pole on the sphere. That line will intersect the surface of the sphere in exactly one other point. The point z = 0 will be projected onto the south pole of the sphere. Since the interior of the unit circle lies inside the sphere, that entire region (|z| < 1) will be mapped onto the southern hemisphere. The unit circle itself (|z| = 1) will be mapped onto the equator, and the exterior of the unit circle (|z| > 1) will be mapped onto the northern hemisphere. Clearly this procedure is reversible – given any point on the surface of the sphere that is not the north pole, we can draw a straight line connecting that point to the north pole and intersecting the flat plane in exactly one point.
Under this stereographic projection there's just one point – the north pole itself – that is not associated with any point in the complex plane. We perfect the one-to-one correspondence by adding one more point to the complex plane – the so-called point at infinity – and associating it with the north pole on the sphere. This topological space, the complex plane plus the point at infinity, is known as the extended complex plane. And this is why mathematicians speak of a single "point at infinity" when discussing complex analysis. There are two points at infinity (positive, and negative) on the real number line, but there is only one point at infinity (the north pole) in the extended complex plane. [6]
Imagine for a moment what will happen to the lines of latitude and longitude when they are projected from the sphere onto the flat plane. The lines of latitude are all parallel to the equator, so they will become perfect circles centered on the origin z = 0. And the lines of longitude will become straight lines passing through the origin (and also through the "point at infinity", since they pass through both the north and south poles on the sphere).
This is not the only possible stereographic projection of a sphere onto a plane. For instance, the south pole of the sphere might be placed on top of the origin z = 0 in a plane that's tangent to the sphere. The details don't really matter. Any stereographic projection of a sphere onto a plane will produce one "point at infinity", and it will map the lines of latitude and longitude on the sphere into circles and straight lines, respectively, in the plane.
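With the sphere of unit radius centred at the origin and projection lines drawn through the north pole (0, 0, 1), a standard computation gives the landing point of z = x + iy as (2x, 2y, x² + y² − 1)/(x² + y² + 1). The Python sketch below (a rough illustration; the formula itself is not spelled out in the text above) checks the special cases just described.

```python
def to_sphere(z):
    """Map a complex number to the unit sphere; the north pole plays the point at infinity."""
    x, y = z.real, z.imag
    d = x * x + y * y + 1
    return (2 * x / d, 2 * y / d, (x * x + y * y - 1) / d)

print(to_sphere(0))      # (0.0, 0.0, -1.0)  -> south pole
print(to_sphere(1))      # (1.0, 0.0, 0.0)   -> on the equator
print(to_sphere(100j))   # (0.0, 0.0199..., 0.9998...) -> near the north pole
```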
## Cutting the plane
When discussing functions of a complex variable it is often convenient to think of a cut in the complex plane. This idea arises naturally in several different contexts.
### Multi-valued relationships and branch points
Consider the simple two-valued relationship
$w = f(z) = \pm\sqrt{z} = z^{\frac{1}{2}}.\,$
Before we can treat this relationship as a single-valued function, the range of the resulting value must be restricted somehow. When dealing with the square roots of real numbers this is easily done. For instance, we can just define
$y = g(x) = \sqrt{x}\ = x^{\frac{1}{2}}\,$
to be the non-negative real number y such that y2 = x. This idea doesn't work so well in the two-dimensional complex plane. To see why, let's think about the way the value of f(z) varies as the point z moves around the unit circle. We can write
$z = e^{i\theta}\qquad\Rightarrow\qquad w=z^{\frac{1}{2}} = e^{\frac{i\theta}{2}}\qquad(0\leq\theta\leq 2\pi).\,$
Evidently, as z moves all the way around the circle, w only traces out one-half of the circle. So one continuous motion in the complex plane has transformed the positive square root e0 = 1 into the negative square root eiπ = −1.
This problem arises because the point z = 0 has just one square root, while every other complex number z ≠ 0 has exactly two square roots. On the real number line we could circumvent this problem by erecting a "barrier" at the single point x = 0. A bigger barrier is needed in the complex plane, to prevent any closed contour from completely encircling the branch point z = 0. This is commonly done by introducing a branch cut; in this case the "cut" might extend from the point z = 0 along the positive real axis to the point at infinity, so that the argument of the variable z in the cut plane is restricted to the range 0 ≤ arg(z) < 2π.
We can now give a complete description of w = z½. To do so we need two copies of the z-plane, each of them cut along the real axis. On one copy we define the square root of 1 to be e0 = 1, and on the other we define the square root of 1 to be eiπ = −1. We call these two copies of the complete cut plane sheets. By making a continuity argument we see that the (now single-valued) function w = z½ maps the first sheet into the upper half of the w-plane, where 0 ≤ arg(w) < π, while mapping the second sheet into the lower half of the w-plane (where π ≤ arg(w) < 2π). [7]
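A small numerical sketch may make the two sheets more tangible. If the angle θ is tracked continuously instead of being reduced modulo 2π, the square root r^{1/2} e^{iθ/2} changes sign after one full turn around the origin and returns to its original value only after two turns, exactly as described above (the function name below is ours):

```python
import cmath, math

def sqrt_on_sheet(r, theta):
    """Square root of z = r e^{i*theta}, with theta tracked continuously (not reduced mod 2*pi)."""
    return math.sqrt(r) * cmath.exp(1j * theta / 2)

print(sqrt_on_sheet(1, 0))             # (1+0j)            first sheet
print(sqrt_on_sheet(1, 2 * math.pi))   # approximately -1  second sheet
print(sqrt_on_sheet(1, 4 * math.pi))   # approximately +1  back to the first sheet
```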
The branch cut in this example doesn't have to lie along the real axis. It doesn't even have to be a straight line. Any continuous curve connecting the origin z = 0 with the point at infinity would work. In some cases the branch cut doesn't even have to pass through the point at infinity. For example, consider the relationship
$w = g(z) = \left(z^2 - 1\right)^{\frac{1}{2}}.\,$
Here the polynomial z2 − 1 vanishes when z = ±1, so g evidently has two branch points. We can "cut" the plane along the real axis, from −1 to 1, and obtain a sheet on which g(z) is a single-valued function. Alternatively, the cut can run from z = 1 along the positive real axis through the point at infinity, then continue "up" the negative real axis to the other branch point, z = −1.
This situation is most easily visualized by using the stereographic projection described above. On the sphere one of these cuts runs longitudinally through the southern hemisphere, connecting a point on the equator (z = −1) with another point on the equator (z = 1), and passing through the south pole (the origin, z = 0) on the way. The second version of the cut runs longitudinally through the northern hemisphere and connects the same two equatorial points by passing through the north pole (that is, the point at infinity).
### Restricting the domain of meromorphic functions
A meromorphic function is a complex function that is holomorphic and therefore analytic everywhere in its domain except at a finite, or countably infinite, number of points. [8] The points at which such a function cannot be defined are called the poles of the meromorphic function. Sometimes all these poles lie in a straight line. In that case mathematicians may say that the function is "holomorphic on the cut plane". Here's a simple example.
The gamma function, defined by
$\Gamma (z) = \frac{e^{-\gamma z}}{z} \prod_{n=1}^\infty \left[\left(1+\frac{z}{n}\right)^{-1}e^{z/n}\right]\,$
where γ is the Euler–Mascheroni constant, has simple poles at 0, −1, −2, −3, . . . because exactly one denominator in the infinite product vanishes when z is zero, or a negative integer. [9] Since all its poles lie on the negative real axis, from z = 0 to the point at infinity, this function might be described as
"holomorphic on the cut plane, the cut extending along the negative real axis, from 0 (inclusive) to the point at infinity. "
Alternatively, Γ(z) might be described as
"holomorphic in the cut plane with −π < arg(z) < π and excluding the point z = 0. "
Notice that this cut is slightly different from the branch cut we've already encountered, because it actually excludes the negative real axis from the cut plane. The branch cut left the real axis connected with the cut plane on one side (0 ≤ θ), but severed it from the cut plane along the other side (θ < 2π).
Of course, it's not actually necessary to exclude the entire line segment from z = 0 to −∞ to construct a domain in which Γ(z) is holomorphic. All we really have to do is puncture the plane at a countably infinite set of points {0, −1, −2, −3, . . . }. But a closed contour in the punctured plane might encircle one or more of the poles of Γ(z), giving a contour integral that is not necessarily zero, by the residue theorem. By cutting the complex plane we ensure not only that Γ(z) is holomorphic in this restricted domain – we also ensure that the contour integral of Γ over any closed curve lying in the cut plane is identically equal to zero. And this may be important in some mathematical arguments.
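The product formula above can be checked numerically by truncating the infinite product. The sketch below is only a rough approximation (the truncation error decays roughly like z²/(2N) for N factors), but it reproduces familiar values such as Γ(5) = 24 and Γ(1/2) = √π.

```python
import cmath

EULER_GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

def gamma_product(z, terms=100000):
    """Approximate Gamma(z) by truncating the infinite product given above."""
    result = cmath.exp(-EULER_GAMMA * z) / z
    for n in range(1, terms + 1):
        result *= cmath.exp(z / n) / (1 + z / n)
    return result

print(gamma_product(5).real)     # roughly 24.0   (= 4!)
print(gamma_product(0.5).real)   # roughly 1.7725 (= sqrt(pi))
```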
### Specifying convergence regions
Many complex functions are defined by infinite series, or by continued fractions. A fundamental consideration in the analysis of these infinitely long expressions is identifying the portion of the complex plane in which they converge to a finite value. A cut in the plane may facilitate this process, as the following examples show.
Consider the function defined by the infinite series
$f(z) = \sum_{n=1}^\infty \left(z^2 + n\right)^{-2}.\,$
Since z² = (−z)² for every complex number z, it's clear that f(z) is an even function of z, so the analysis can be restricted to one half of the complex plane. And since the series is undefined when
$z^2 + n = 0 \quad \Leftrightarrow \quad z = \pm i\sqrt{n},\,$
it makes sense to cut the plane along the entire imaginary axis and establish the convergence of this series where the real part of z is not zero before undertaking the more arduous task of examining f(z) when z is a pure imaginary number. [10]
In this example the cut is a mere convenience, because the points at which the infinite sum is undefined are isolated, and the cut plane can be replaced with a suitably punctured plane. In some contexts the cut is necessary, and not just convenient. Consider the infinite periodic continued fraction
$f(z) = 1 + \cfrac{z}{1 + \cfrac{z}{1 + \cfrac{z}{1 + \cfrac{z}{\ddots}}}}.\,$
It can be shown that f(z) converges to a finite value if and only if z is not a negative real number such that z < −¼. In other words, the convergence region for this continued fraction is the cut plane, where the cut runs along the negative real axis, from −¼ to the point at infinity. [11]
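Evaluating a truncation of this continued fraction numerically shows the behaviour: away from the cut the iterates settle down to (1 + √(1 + 4z))/2, the root of f = 1 + z/f (this closed form is standard for the periodic fraction, though it is not stated in the text above).

```python
import cmath

def periodic_cf(z, depth=60):
    """Evaluate 1 + z/(1 + z/(1 + ...)) truncated after `depth` levels."""
    f = 1
    for _ in range(depth):
        f = 1 + z / f
    return f

for z in [1, 1j, -0.2]:
    # Compare the truncated fraction with the closed-form limit.
    print(periodic_cf(z), (1 + cmath.sqrt(1 + 4 * z)) / 2)
```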
## Gluing the cut plane back together
Main article: Riemann surface
We have already seen how the relationship
$w = f(z) = \pm\sqrt{z} = z^\frac{1}{2}\,$
can be made into a single-valued function by splitting the domain of f into two disconnected sheets. It is also possible to "glue" those two sheets back together to form a single Riemann surface on which f(z) = z½ can be defined as a holomorphic function whose image is the entire w-plane (except for the point w = 0). Here's how that works.
Imagine two copies of the cut complex plane, the cuts extending along the positive real axis from z = 0 to the point at infinity. On one sheet define 0 ≤ arg(z) < 2π, so that 1½ = e0 = 1, by definition. On the second sheet define 2π ≤ arg(z) < 4π, so that 1½ = eiπ = −1, again by definition. Now flip the second sheet upside down, so the imaginary axis points in the opposite direction of the imaginary axis on the first sheet, with both real axes pointing in the same direction, and "glue" the two sheets together (so that the edge on the first sheet labeled "θ = 0" is connected to the edge labeled "θ < 4π" on the second sheet, and the edge on the second sheet labeled "θ = 2π" is connected to the edge labeled "θ < 2π" on the first sheet). The result is the Riemann surface domain on which f(z) = z½ is single-valued and holomorphic (except when z = 0). [7]
To understand why f is single-valued in this domain, imagine a circuit around the unit circle, starting with z = 1 on the first sheet. When 0 ≤ θ < 2π we are still on the first sheet. When θ = 2π we have crossed over onto the second sheet, and are obliged to make a second complete circuit around the branch point z = 0 before returning to our starting point, where θ = 4π is equivalent to θ = 0, because of the way we glued the two sheets together. In other words, as the variable z makes two complete turns around the branch point, the image of z in the w-plane traces out just one complete circle.
Formal differentiation shows that
$f(z) = z^\frac{1}{2} \quad\Rightarrow\quad f^\prime (z) = \frac{1}{2}z^{-\frac{1}{2}}\,$
from which we can conclude that the derivative of f exists and is finite everywhere on the Riemann surface, except when z = 0 (that is, f is holomorphic, except when z = 0).
How can the Riemann surface for the function
$w = g(z) = \left(z^2 - 1\right)^\frac{1}{2},\,$
also discussed above, be constructed? Once again we begin with two copies of the z-plane, but this time each one is cut along the real line segment extending from z = −1 to z = 1 – these are the two branch points of g(z). We flip one of these upside down, so the two imaginary axes point in opposite directions, and glue the corresponding edges of the two cut sheets together. We can verify that g is a single-valued function on this surface by tracing a circuit around a circle of unit radius centered at z = 1. Commencing at the point z = 2 on the first sheet we turn halfway around the circle before encountering the cut at z = 0. The cut forces us onto the second sheet, so that when z has traced out one full turn around the branch point z = 1, w has taken just one-half of a full turn, the sign of w has been reversed (since e^{iπ} = −1), and our path has taken us to the point z = 2 on the second sheet of the surface. Continuing on through another half turn we encounter the other side of the cut, where z = 0, and finally reach our starting point (z = 2 on the first sheet) after making two full turns around the branch point.
The natural way to label θ = arg(z) in this example is to set −π < θ ≤ π on the first sheet, with π < θ ≤ 3π on the second. The imaginary axes on the two sheets point in opposite directions so that the counterclockwise sense of positive rotation is preserved as a closed contour moves from one sheet to the other (remember, the second sheet is upside down). Imagine this surface embedded in a three-dimensional space, with both sheets parallel to the xy-plane. Then there appears to be a vertical hole in the surface, where the two cuts are joined together. What if the cut is made from z = −1 down the real axis to the point at infinity, and from z = 1, up the real axis until the cut meets itself? Again a Riemann surface can be constructed, but this time the "hole" is horizontal. Topologically speaking, both versions of this Riemann surface are equivalent – they are orientable two-dimensional surfaces of genus one.
## Use of the complex plane in control theory
In control theory, one use of the complex plane is known as the 's-plane'. It is used to visualise the roots of the equation describing a system's behaviour (the characteristic equation) graphically. The equation is normally expressed as a polynomial in the parameter 's' of the Laplace transform, hence the name 's-plane'.
Another related use of the complex plane is with the Nyquist stability criterion. This is a geometric principle which allows the stability of a control system to be determined by inspecting a Nyquist plot of its frequency-phase response (or transfer function) in the complex plane.
The 'z-plane' is a discrete-time version of the s-plane, where z-transforms are used instead of the Laplace transformation.
## Other meanings of "complex plane"
The preceding sections of this article deal with the complex plane as the geometric analogue of the complex numbers. Although this usage of the term "complex plane" has a long and mathematically rich history, it is by no means the only mathematical concept that can be characterized as "the complex plane". There are at least three additional possibilities.
1. 1+1-dimensional Minkowski space, also known as the split-complex plane, is a "complex plane" in the sense that the algebraic split-complex numbers can be separated into two real components that are easily associated with the point (x, y) in the Cartesian plane.
2. The set of dual numbers over the reals can also be placed into one-to-one correspondence with the points (x, y) of the Cartesian plane, and represents another example of a "complex plane".
3. The vector space C×C, the Cartesian product of the complex numbers with themselves, is also a "complex plane" in the sense that it is a two-dimensional vector space whose coordinates are complex numbers.
http://physics.stackexchange.com/questions/30552/storing-light-energy-as-potential-kinetic-to-drive-an-electric-generator
# Storing light energy as potential kinetic to drive an electric generator [closed]
I would like to lift a 10 ton mass (crate of rocks) to a height of about 80 feet using a small 24 volt electric motor powered by a 16 square foot photovoltaic panel. Then I would like to collect the energy stored by this 10 ton mass by having it turn an electric generator in a controlled descent.
One of the problems I must solve is finding a gear assembly that will allow this small 24 volt motor to raise such a large mass. I would like to get information related to the mechanics of such a gear assembly, and it would be just great if someone could direct me to a company who builds such gear assemblies.
Hi Michael, and welcome to Physics Stack Exchange! General information about gear mechanisms is more of an engineering topic, and is generally off topic here. You could ask about the physics aspects of a gear system, though; if you want to do that, feel free to edit the question accordingly and I'd be happy to reopen it. – David Zaslavsky♦ Jun 22 '12 at 2:40
## 1 Answer
A typical value for solar insolation is roughly $1 kW/m^2$ and a typical figure for efficiency is about $12\%$; this means that under optimistic conditions your $16 ft^2$ array could be expected to deliver $178 W$. In terms of the basic mechanics, this is a more relevant metric than the voltage that you give, but I will return to that.
The power you need to lift the load depends on how fast you raise it. The primary drive shaft of your motor will have some angular speed $\omega$ in terms of $rad/s$ and this will translate into a linear speed $v$ ($ft/s$) given the gearing and pulley system. Potential energy of the mass is $mgh$ and the power needed to raise it will be how fast this quantity is increased over time. Introducing the fact that $v=dh/dt$ and with my prior argument $P=d(mgh)/dt=mg\times dh/dt$, we then have $P=mgv$. For instance, lifting the mass at $1 ft/s$, you will require $27 kW$. In fact, this number will be a minimum, you will need more than this due to all the losses involved.
So how fast can you lift it with your solar panels? If we use the prior 178 W number, we find that you can lift it at a rate of no more than $0.006 ft/s$. This is slow, but it might still be acceptable: at that rate it would take roughly three and a half hours to lift it the 80 ft.
Now, the voltage isn't very helpful because it doesn't translate directly into the speed. The DC motor design will relate all these parameters. In fact, you could buy just about any speed motor at just about any rated power; voltage matters for the construction, but doesn't limit what these can be. But for the sake of argument, let's say you have a $3600 rpm$ motor, which translates to $377 rad/s$, and if the primary gear on the shaft has, say, a $5 cm$ radius, then the outer edge of the disc would be moving at about $18 m/s$ and the mechanical advantage needed from there would be about $10,000$. This mechanical advantage would be impractical to do with pulleys alone (since you would need 80,000 ft of rope), but it would not be impractical with a series of gearing.
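A short Python sketch of the arithmetic above may be handy for plugging in different numbers; it assumes 10 short tons (about 9,070 kg), which is what the 27 kW figure implies, and ignores all losses.

```python
g = 9.81                 # m/s^2
mass = 10 * 907.2        # 10 short tons in kilograms (assumption)
panel_power = 178.0      # optimistic output of the 16 ft^2 panel, in watts
height = 80 * 0.3048     # 80 ft in metres

lift_speed = panel_power / (mass * g)          # m/s, ignoring losses
lift_time_hours = height / lift_speed / 3600
stored_energy_wh = mass * g * height / 3600    # energy stored at the top, in watt-hours

print(round(lift_speed / 0.3048, 4), "ft/s")   # about 0.0066 ft/s
print(round(lift_time_hours, 1), "hours")      # about 3.4 hours
print(round(stored_energy_wh), "Wh")           # about 600 Wh stored per lift
```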
http://cms.math.ca/Events/winter12/res/eco
2012 CMS Winter Meeting
Fairmont Queen Elizabeth (Montreal), December 7 - 10, 2012
Recent Research in Econometrics
Org: Jean-Marie Dufour and Christian Genest (McGill)
[PDF]
XIN LIANG, McGill University
Necessary and sufficient conditions for identification and estimability of linear parameters [PDF]
We study the relationship between estimability and identifiability of linear parameters in partially linear models. A model is partially linear for the parameter vector $\beta$ if the conditional distribution of the data given $X$ depends on $\beta$ through $X\beta$, where $X$ is a known matrix. We focus here on situations where $X$ may not have full column rank, and $X\beta$ can be interpreted as an identifiable parameter. Besides linear regressions, partially linear models include several widely used statistical models: generalized linear models and linear mixed models, median regression, quantile regressions, various discrete choice models (such as probit and Tobit models), single index models, etc. We observe that usual conditions for parameter estimability in linear regressions -- a fortiori in partially linear models -- are not necessary for identification, so estimability is not equivalent to identifiability. In the context of a general likelihood model (which may not be partially linear), we give a necessary and sufficient condition for identification of a transformation of model parameters. The proposed partial identification condition involves a general form of (potentially nonlinear) separability. This result is then applied to characterize the identification of an arbitrary vector $Q\beta$ in a partially linear model. Several equivalent partial identifiability conditions are provided, and closed-form representations are given for the corresponding "identification sets" as linear subspaces of the parameter space. The proposed identifiability conditions include a number of easily interpretable conditions not previously supplied in the literature on estimability in linear regression.
MIRZA TROKIC, McGill University
Regulated Variance Ratio Unit Root Tests [PDF]
Regulated (bounded) integrated time series are of significant practical importance. Although regulated integrated series are characterized by asymptotic distributions which differ substantially from their unregulated counterparts, inferential exercises continue to be performed with complete disregard for this feature of time series data. This article aims to bridge this gap by proposing the variance ratio statistic of Nielsen (2009) in the case of regulated series. The article develops asymptotic distributions for the standard and OLS detrended versions of the statistic. In the unbounded case this statistic offers a means of improving the statistical power of the test by choosing the fractional integration parameter d to be as small as possible. What this paper demonstrates is that no such template exists when the series is bounded. Choices of d in the regulated case depend heavily on the length, direction, and nature of the bounding interval. In cases where the bounding interval is sufficiently wide so that the problem may be considered "unbounded", the results in Nielsen (2009) are replicated. In all other cases, the regulated variance ratio statistic suffers from very low power, which in most cases of interest decreases to zero as one moves away from the unit root null hypothesis into the stationary alternative hypothesis. Finally, this paper extends the results of Cavaliere and Xu (2011) by introducing what seems to be the first theoretical justification for the asymptotic distribution of regulated integrated time series with a linear trend.
PUREVDORJ TUVAANDORJ, McGill University
Maximum Likelihood Estimation and Inference in Possibly Unidentified Models [PDF]
The validity of the standard distributional approximation in a regular statistical model critically hinges upon the identifiability of the model. Lack of identification imposes strong limitations on the construction of estimators and test statistics with desirable properties for nonidentifiable parameters. Motivated by the observation that identification failure does not preclude the possibility of making valid inference on the identifiable part of the model, the present paper studies identification, estimation and hypothesis testing in possibly unidentified parametric models. We give necessary and sufficient conditions for local identifiability of a parametric function in terms of its Jacobian matrix with respect to the parameter of the model and the Fisher information matrix. Based on local asymptotic analysis, it is shown that despite the identification failure the score and likelihood ratio statistics for testing hypotheses on the identifiable parameter have a chi-square limiting distribution, with degrees of freedom equal to the number of restrictions, under certain regularity conditions. Moreover, stochastic dominance relations between various test statistics are provided.
http://unapologetic.wordpress.com/2010/11/02/right-representations/?like=1&source=post_flair&_wpnonce=8cb43e0c56
# The Unapologetic Mathematician
## Right Representations
In our discussions of representations so far we’ve been predominantly concerned with actions on the left. That is, we have a map $G\times V\to V$, linear in $V$, that satisfies the relation $g_1(g_2v)=(g_1g_2)v$. That is, the action of the product of two group elements is the composition of their actions.
But sometimes we’re interested in actions on the right. This is almost exactly the same, but with a map $V\times G\to V$, again linear in $V$, and this time the relation reads $(vg_1)g_2=v(g_1g_2)$. Again, the action of the product of two group elements is the composition of their actions, but now in the opposite order! Before we first acted by $g_2$ and then by $g_1$, but now we act first by $g_1$ and then by $g_2$. And so instead of a homomorphism $G\to\mathrm{End}(V)$, we have an anti-homomorphism — a map from one group to another that reverses the order of multiplication.
We can extend the notation from last time. If the space $V$ carries a right representation of a group $G$, then we hang a tag on the right: $V_G$. If we have an action by another group $H$ on the right that commutes with the action of $G$, we write $V_{GH}$. And if $H$ instead acts on the left, we write ${}_HV_G$. Again, this can be read as a pair of commuting actions, or as a left action of $H$ on the right $G$-module $V_G$, or as a right action of $G$ on the left $H$-module ${}_HV$.
Pretty much everything we’ve discussed moves over to right representations without much trouble. On the occasions we’ll really need them I’ll clarify if there’s anything tricky, but I don’t want to waste a lot of time redoing everything. One exception that I will mention right away is the right regular representation, which (predictably enough) corresponds to the left regular representation. In fact, when I introduced that representation I even mentioned the right action in passing. At the time, I said that we can turn the natural antihomomorphism into a homomorphism by right-multiplying by the inverse of the group element. But if we’re willing to think of a right action on its own terms, we no longer need that trick.
So the group algebra $\mathbb{C}[G]$ — here considered just as a vector space — carries the left regular representation. The left action of a group element $h$ on a basis vector $\mathbf{g}$ is the basis vector $\mathbf{hg}$. It also carries the right regular representation. The right action of a group element $k$ on a basis vector $\mathbf{g}$ is the basis vector $\mathbf{gk}$. And it turns out that these two actions commute! Indeed, we can check
$\displaystyle(h\mathbf{g})k=\mathbf{hg}k=\mathbf{hgk}=h\mathbf{gk}=h(\mathbf{g}k)$
This might seem a little confusing at first, but remember that when $h$ shows up plain on the left it means the group element $h$ acting on the vector to its right. When it shows up in a boldface expression, that expression describes a basis vector in $\mathbb{C}[G]$. Overall, this tells us that we can start with the basis vector $\mathbf{g}$ and act first on the left by $h$ and then on the right by $k$, or we can act first on the right by $k$ and then on the left by $h$. Either way, we end up with the basis vector $\mathbf{hgk}$, which means that these two actions commute. Using our tags, we can thus write ${}_G\mathbb{C}[G]_G$.
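To see the commutation in a computation, here is a minimal sketch (an illustration added here, not from the post) that encodes the basis vectors of $\mathbb{C}[G]$ for $G=S_3$ as arrays and checks that the left and right regular actions commute on every basis vector; the choice of $S_3$ and the encoding are arbitrary.

```python
# Left and right regular actions on the group algebra C[G] for G = S_3,
# realized as permutations of (0, 1, 2).
from itertools import permutations
import numpy as np

elements = list(permutations(range(3)))        # the 6 elements of S_3
index = {g: i for i, g in enumerate(elements)}

def compose(a, b):
    """(a*b)(x) = a(b(x)): composition of permutations."""
    return tuple(a[b[x]] for x in range(3))

def basis(g):
    """Basis vector of C[G] corresponding to the group element g."""
    v = np.zeros(len(elements), dtype=complex)
    v[index[g]] = 1.0
    return v

def left(h, v):
    """Left regular action: the basis vector for g goes to the one for h*g."""
    w = np.zeros_like(v)
    for g in elements:
        w[index[compose(h, g)]] += v[index[g]]
    return w

def right(k, v):
    """Right regular action: the basis vector for g goes to the one for g*k."""
    w = np.zeros_like(v)
    for g in elements:
        w[index[compose(g, k)]] += v[index[g]]
    return w

# (h g) k = h (g k): the two actions commute on every basis vector.
h, k = (1, 0, 2), (0, 2, 1)
for g in elements:
    assert np.allclose(left(h, right(k, basis(g))), right(k, left(h, basis(g))))
```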
http://mathhelpforum.com/trigonometry/202847-trigonometry-question-print.html
# Trigonometry Question
• September 2nd 2012, 11:40 AM
DonManolo
Trigonometry Question
Let $ABC$ be a triangle and $P$ a point inside $ABC$.
Find the coordinates of the points $Q$ and $R$ on the sides of $ABC$, as functions of the coordinates of the vertices and of $P$, in such a way that $P$ divides the line segment $QR$ in half.
(Tip: Consider the point $A$ to be at the origin of the coordinate system and the point $B$ to be on the $x$ axis.)
• September 3rd 2012, 09:30 PM
kalyanram
1 Attachment(s)
Re: Trigonometry Question
Hey Don,
Refer to the figure attached. Choose x-axis along $AB$ and y-axis along $AC$ with the origin at $A$.
$Q,R$ can be interchanged.
Case 1.
$QR$ on sides $AB$ and $AC$: writing $P=(\alpha,\beta)$, so that $P$ is the midpoint of $QR$, we have $Q=(2\alpha,0)$, $R=(0,2\beta)$, with the additional conditions $0 \le \alpha \le \frac{b}{2}$, $0 \le \beta \le \frac{c}{2}$.
Case 2.
$QR$ on sides $AC$ and $BC$: the abscissa of $Q$ is $0$, and hence the abscissa of $R$ has to be $2\alpha$. Let the ordinate of $Q$ be $\gamma$; then the ordinate of $R$ has to be $2\beta-\gamma$, and $R$ has to satisfy the line equation of $BC$, $\frac{y}{x-b} = -\frac{c}{b} \implies \gamma = 2\beta + \frac{c(2\alpha - b)}{b}$, with the additional bound constraints $0 \le \alpha \le \frac{b}{2}$ and $0 \le \gamma \le c$.
Case 3.
Can be done along lines similar to Case 2.
Now use a change of basis (a rotation when the chosen axes happen to be orthogonal) to translate back to rectangular Cartesian coordinates.
~Kalyan.
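As a quick sanity check of Case 2 (an addition for illustration, not part of the original reply), the sketch below assumes a right angle at $A$, so that the axes along $AB$ and $AC$ are orthogonal, picks arbitrary values for $b$, $c$ and $P=(\alpha,\beta)$, and verifies that $P$ is the midpoint of $QR$ and that $R$ lies on $BC$:

```python
# Numerical check of Case 2, assuming a right triangle A=(0,0), B=(b,0), C=(0,c).
b, c = 4.0, 3.0          # assumed side lengths along the two axes
alpha, beta = 1.0, 1.2   # P = (alpha, beta), the prescribed midpoint

gamma = 2*beta + c*(2*alpha - b)/b   # ordinate of Q from the line equation of BC
Q = (0.0, gamma)                     # Q on side AC
R = (2*alpha, 2*beta - gamma)        # R on side BC

midpoint = ((Q[0] + R[0]) / 2, (Q[1] + R[1]) / 2)
on_BC = abs(R[0]/b + R[1]/c - 1.0) < 1e-12   # line BC: x/b + y/c = 1
print(midpoint, on_BC)                        # expect (1.0, 1.2) and True
```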
http://mathoverflow.net/questions/7938/intersection-cohomology-of-flag-varieties-schubert-varieties
## Intersection cohomology of flag varieties/Schubert varieties
How do you compute, in characteristic $0$, the intersection cohomology of partial flag varieties (corresponding to a fixed partition $\lambda$)? I understand the answer involves Kazhdan-Lusztig polynomials; all I can find is a reference for characteristic $p$ (http://arxiv.org/PS_cache/arxiv/pdf/0709/0709.0207v2.pdf). I'm looking for the paper by Kazhdan & Lusztig, "Schubert varieties and Poincare duality", which I cannot find.
I'm specifically trying to compute the intersection cohomology of a subspace of the product of two flag varieties $(V_{i}), (W_{j})$ where the dimensions $\dim(V_{i} \cap W_{j})$ are fixed. This problem isn't known or studied, right? Is there anything to be said about the intersection cohomology of homogeneous spaces?
You're interested in the intersection cohomology of Schubert varieties, so you might want to change your title to reflect that (the flag variety is smooth, and its cohomology is well-known). It would be quite inaccurate to say this problem isn't studied. In the KL paper you cite, they show that the coefficients of the KL polynomials are related to intersection cohomology of Schubert varieties. You are correct that the coefficients of the KL polynomials aren't known in general. This is considered to be an extremely difficult problem to solve (and important due to KL conjectures). – Mike Skirvin Dec 6 2009 at 1:38
I promise a useful answer once you clarify your question... In particular, please distinguish between a flag (or a Schubert) variety and the flags that its points represent. – Alexander Woo Dec 6 2009 at 2:04
Sorry, you're talking to someone who's only heard of Schubert varieties in passing before (sorry about that, I'm still a second year undergrad, don't know the basics). So from the wikipedia definition, I understand that one example of a Schubert variety is where $\dim(V_{i} \cap W_{j}) \geq j$. Is the variety I specified also a Schubert variety? – Vinoth Dec 6 2009 at 9:47
Alexander, my question is the following: consider the subspace of the product of the two flag varieties, $\{\, ((V_{j}), (W_{k})) \mid \dim V_{j} = a_{j},\ \dim W_{k} = b_{k},\ \dim ( V_{j} \cap W_{k} ) = a_{jk} \,\}$. Compute its IC sheaves and dimensions of stalks etc. And I think I'd consider the problem roughly "solved" if we can reduce it to Kazhdan-Lusztig polynomials, so my question is not to explicitly find the KL coefficients – Vinoth Dec 6 2009 at 9:57
vinoth: Given your subvariety, we can project to the first flag variety. The fiber over a point will be a certain Schubert cell (because you're requiring equalities of intersection) which is isomorphic to an affine space. So what you're getting is a locally trivial fibration of a flag variety whose fibers are affine spaces. It's not quite a Schubert variety. – Steven Sam Dec 6 2009 at 17:34
## 1 Answer
First, let me rephrase your question in a slightly pedantic manner.
To establish some notation, for a point $p$ on the flag variety $G/B$, let $V_1(p)\subset\cdots\subset V_{n-1}(p)$ be the flag in $\mathbb{C}^n$ that it corresponds to. (Be careful. There are no flags actually in the flag variety, just points. Rather, the points in the flag variety correspond to flags. If this confuses you, you need a live person to straighten you out.)
You are asking for the intersection cohomology of the subvariety $X\subset G/B \times G/B$ consisting of points $(p,q)$ such that $\dim(V_i(p)\cap V_j(q))=a_{ij}$ (for some specified $a_{ij}$).
Now an answer:
Your variety $X$ has a projection onto the second factor, and this map is a fiber bundle whose base space is smooth (since it is the entire flag variety). Therefore, the local intersection cohomology for the whole space is determined entirely by the local intersection cohomology of the fibers.
If the conditions $a_{ij}$ are conditions that determine a Schubert variety, then the fibers are Schubert varieties, and hence the local intersection cohomology Betti numbers are precisely given by Kazhdan--Lusztig polynomials.
If the conditions $a_{ij}$ are not conditions determining a Schubert variety, then your fibers will be unions of Schubert varieties. I don't know if anyone has bothered to do this, but I would think that if you take any of the definitions of Kazhdan--Lusztig polynomials $P_{u,v}(q)$ and modify it in the obvious way (if there is one) to allow $v$ to be an arbitrary lower ideal in Bruhat order rather than a principal lower ideal you should get the right thing.
thanks, that is really helpful. One last question: could you give me a reference that proves intersection cohomology of Schubert varieties are KL polynomials? I haven't had success finding one. – Vinoth Dec 7 2009 at 6:33
The original reference seems to be Kazhdan and Lusztig, Schubert varieties and Poincare duality, in Geometry of the Laplace operator, Proc. Sympos. Pure Math. 34, AMS, Providence, RI 1980, pp. 185--203. I must confess I've never actually read that, nor the later expositions in Borel's or Kirwan's books, nor ever understood much of the proof, despite writing a couple combinatorial papers on K-L polynomials from the Schubert point of view. – Alexander Woo Dec 7 2009 at 19:52
http://math.stackexchange.com/questions/125539/power-reduction-formula/125568
# Power-reduction formula
According to the Power-reduction formula, one can interchange between $\cos(x)^n$ and $\cos(nx)$ like the following: $$\cos^n\theta = \frac{2}{2^n} \sum_{k=0}^{\frac{n-1}{2}} \binom{n}{k} \cos{((n-2k)\theta)} \tag{odd}\\$$ $$\cos^n\theta = \frac{1}{2^n} \binom{n}{\frac{n}{2}} + \frac{2}{2^n} \sum_{k=0}^{\frac{n}{2}-1} \binom{n}{k} \cos{((n-2k)\theta)} \tag{even}$$ To me this looks like a binomial transform. Is this true?
May I think of it as a change of basis of a vector space?
## 2 Answers
Yes, it's a binomial expansion:
$$\begin{eqnarray} \cos^n\theta &=& 2^{-n}\left(\mathrm e^{\mathrm i\theta}+\mathrm e^{-\mathrm i\theta}\right)^n \\ &=& 2^{-n}\sum_{k=0}^n\binom nk\mathrm e^{\mathrm ik\theta}\mathrm e^{-\mathrm i(n-k)\theta} \\ &=& 2^{-n}\sum_{k=0}^n\binom nk\mathrm e^{\mathrm i(2k-n)\theta}\;, \end{eqnarray}$$
and then combining the terms whose exponents differ only by a sign (and whose coefficients coincide) yields the formulas you give. And yes, you may think of it as a change of basis if you wish, since both $\cos^n\theta$ and $\cos n\theta$ are linearly independent sets of functions; this is known from Fourier theory for $\cos n\theta$, and your transformation, which is clearly invertible, shows that it's also true for $\cos^n\theta$.
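A quick numerical check of the two power-reduction formulas quoted in the question (this sketch is an illustration added here, not part of either answer):

```python
# Verify the odd/even power-reduction formulas against cos(theta)**n directly.
import numpy as np
from math import comb

def cos_power_via_formula(n, theta):
    if n % 2:  # odd n
        return (2 / 2**n) * sum(comb(n, k) * np.cos((n - 2*k) * theta)
                                for k in range((n - 1)//2 + 1))
    # even n
    return (comb(n, n//2) / 2**n
            + (2 / 2**n) * sum(comb(n, k) * np.cos((n - 2*k) * theta)
                               for k in range(n//2)))

theta = 0.7
for n in range(1, 8):
    assert np.isclose(np.cos(theta)**n, cos_power_via_formula(n, theta))
```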
+1 Great, thanks. (I'll vote up, as soon as I can). – draks ... Mar 28 '12 at 19:17
@draks: You're welcome!. – joriki Mar 28 '12 at 19:42
I don't know about the binomial transform, but you can get it from the binomial theorem after writing $\cos \theta = (e^{i\theta} + e^{-i\theta})/2$.
Yes, in the vector space of functions spanned by $\cos^j (\theta)$ for nonnegative integers $j$ it tells you how to transform to the basis $\cos(j \theta)$.
http://physics.stackexchange.com/questions/51584/what-is-the-current-through-the-lamp
# What is the current through the lamp?
We have the following circuit:
A neon lamp and an inductor are connected in parallel to a battery of $1.5\ V$. The inductor has 1000 loops, a length of $5.0\ cm$, an area of $12\ cm^2$ and a resistance of $3.2\ \Omega$. The lamp shines when the voltage is $\geq 80\ V$.
• When the switch is closed, $B$ in the inductor is $1.2\times 10^{-2} T$.
• The flux then is $1.4 \times 10^{-5} Wb$
(calculated myself, both approximations).
You open the switch. During $1.0 \times 10^{-4} s$ there is induction. Calculate how big the current through the lamp is.
My textbook provides me with the following answer:
$U_{ind} = 1000 \cdot 1.4 \times 10^{-5} / (1.0 \times 10^{-4}) = 1.4 \times 10^{2}\ V$.
$I = U/R_{tot} = 1.4 \times 10^{2} / (3.2+1.2) = 32\ A$
My concerns:
• How do we know that $1.4 \times 10^{-5}$ is $|\Delta \phi|$? This is the flux in the inductor while the switch is closed, but when you open it, doesn't induction increase/decrease the flux? Or will the flux just become 0 and hence give us $1.4 \times 10^{-5}$?
• Why do we have to take the $R_{tot}$? What does the resistance of the inductor have to do with the lamp?
p.s. - This question can't be asked on electronics SE, since their site doesn't allow for such a question.
@PatEugene Haha I'm quite certain we are not supposed to use a time dependent differential equation, at least not for a couple of years! – Ylyk Coitus Jan 18 at 23:08
Yeah ok so this problem, is like I said a little silly. It seems like you have to assume the current drops to zero in the given time and therefore so does the flux. This gives you the first part. – PatEugene Jan 18 at 23:40
## 3 Answers
When you close the switch the inductor "charges", gaining magnetic energy and hence an associated flux. When you open the switch, there is a potential energy associated with the inductor, and hence it will "discharge", generating a current in the circuit. So under the assumption that all the flux discharges, then $\Delta \phi$ will be $1.4 \times 10^{-5}$ $Wb$.
Now that there is a current flowing in the circuit, the current will see all the resistances in the circuit, not just the ones in front of it (since the circuit is closed and the sums of the sources and potential drops around the whole circuit must be zero.)
One could consider only the resistance of the lamp if the resistance of the inductor were zero. But since it has a finite resistance (you could think of it like the internal resistance of a battery), you will have to consider the internal resistance of the inductor in series with the resistance of the lamp.
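For concreteness, here is a minimal sketch of the series-loop calculation this answer describes (an illustration added here; the $1.2\ \Omega$ lamp resistance is taken from the textbook answer quoted in the question):

```python
# Induced EMF from the flux change, then the single loop current through the
# inductor resistance and the lamp in series.
N = 1000            # number of loops
d_phi = 1.4e-5      # change in flux, Wb (assumed to drop to zero)
dt = 1.0e-4         # duration of the induction, s
R_inductor = 3.2    # ohm
R_lamp = 1.2        # ohm (implied by the textbook's 3.2 + 1.2)

U_ind = N * d_phi / dt                 # induced EMF, ~1.4e2 V
I = U_ind / (R_inductor + R_lamp)      # same current everywhere in the loop
print(U_ind, I)                        # ~140 V, ~32 A
```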
But that means that the resistance through the loop is equal to the resistance through the lamp? If so, does this always hold for parallel components? In other words, do parallel components always have the same resistance? – Ylyk Coitus Jan 22 at 22:11
Wait, I might get it now: if the switch is open the current flows through the loop and lamp? – Ylyk Coitus Jan 22 at 22:26
I think I get the logic! – Ylyk Coitus Jan 22 at 22:26
Wait, does your final comment mean the question is wrongly stated? – Ylyk Coitus Jan 22 at 22:30
@YlykCoitus - You're confusing yourself. The resistance of the inductor and of the lamp are in series, not parallel. Therefore they should be added up, like any two series resistances. I just thought it would be easier to think of it as a perfect inductor (zero resistance) in series with a resistance, since your inductor does have a resistance. When the switch is open, current doesn't flow through the bottom part of the circuit (with the battery), so the only source of voltage is the energy that was stored in the inductor. contd in next comment. – Kitchi Jan 23 at 5:31
Yeah, OK, so this problem is, like I said, a little silly. It seems like you have to assume the current drops to zero in the given time and therefore so does the flux. This gives you the first part, the induced voltage across the inductor. For the second part, it seems we simply have to apply Kirchoff's first loop rule and Ohm's law to find the current in the loop. This all seems very odd to me, because we are assuming the current is changing in order to induce a voltage, but also a single value for the current. Really, the current should be time dependent and the induction occurs for all time, not simply a finite amount. For the sake of completing this homework problem, we are done, but in reality we have to solve a differential equation and end up with exponential behavior.
So the reason we have to take $R_{tot}$ is because 'Kirchoff's first loop rule'? I still don't quite understand why you have to take the total resistance if you want to calculate the current through the lamp – Ylyk Coitus Jan 19 at 9:01
I think the other answers cover your question about the change in flux. As to why $U/R_{tot}$ is used instead of $U/R_{lamp}$... it's not quite clear whether you're expected to find the current at a particular point on the circuit, or what point that might be. If I had to pick one current to characterize the circuit, it would be the current through a bit of wire that didn't have any parallel counterparts (or equivalently the sum of the currents through a set of parallel wires). The voltage drop has to be equal across both the lamp and the inductor, so the current is just $I_{tot}=V/R_{tot}$. If after this you're interested in the current through the lamp, you can find it easily by calculating the current through the inductor individually ($I=V/R_{ind}$), then $I_{lamp}=I_{tot}-I_{ind}$.
In your original question you don't say anything about needing to find the current through the lamp, but in one of your comments you imply that that's the quantity you're after - is that actually what the original problem statement asks for?
Yes, the current through the lamp! I am so sorry I didn't notice it, wow! – Ylyk Coitus Jan 22 at 19:41
Right - well that just adds one extra step, as I outlined above. The 32A your textbook provides as an answer is, I believe, the total current (e.g. through the switch). – Kyle Jan 22 at 20:25
Well, that's their answer to the question (the current through the lamp) – Ylyk Coitus Jan 22 at 20:31
http://physics.stackexchange.com/questions/17107/does-the-wave-function-density-state-actually-exist/17253
# Does the wave function/density state actually exist?
I have been reading with interest the debates here on whether the wave function/density state actually collapses or not, or whether it is subjective Bayesian or objective with actual complex numbered values for each component. I have been struck, though, by the implicit assumption made by all camps that something like the wave function/density state actually exists, whether subjective or objective. What if it does not? That would render all of these debates moot.
The wf/ds is a highly abstract theoretical inference of what is measured experimentally. All one measures are correlations between definite measured outcomes. Pragmatically, it has been found such correlations can pretty much only be calculated accurately within the Dirac/von Neumann two stage framework where one first postulates some abstract wf/ds evolving unitarily alternating with a collapse during measurement to the eigenspaces of some 'measured observable' according to the Born rule, whatever that really means. This makes no ontological assumptions per se; it is just what anyone needs to go through to calculate the measured correlations.
Many worlds people argue the wf/ds is objective with no collapse and all the branches co-exist. Copenhagenists argue for collapse. But what if the wf/ds doesn't exist? Then they are both wrong and missing the point. Other than the fact that the only time the wf/ds shows up is in abstract symbolic calculations, why should its existence be assumed?
Here, the situation differs from classical probability distributions. The probabilities are still linear for the density state, but not the wave function, but negative probabilities appear and that makes all the difference. In classical Bayesian updating, there is some leeway in when the updating happens because the different updates evolve independently and the original distribution always evolves as a nonnegative weighted sum of the individual independent contributions. In the quantum density state case, destructive interference due to oscillations between positive and negative 'probabilities' exist and the different outcomes can no longer be said to be in any way independent. Decoherence does not really explain it away because the suppression of interference is not exact, takes time, and is potentially reversible in principle. What if the density state does not exist?
In your last paragraph, you say "Decoherence does not really explain it away". What is "it"? – Mitchell Porter Nov 17 '11 at 9:26
Can you please ask the question in a way that doesn't mention existence? What if I don't exist. So what. What do I care whether I exist. You need to make it clear what it means for something not to exist. Also, you can measure the wavefunction for a collection of identically prepared systems (or the density matrix for that matter), and so there are circumstances where it is certainly a measurable object. – Ron Maimon Nov 18 '11 at 5:45
To answer politically: "That depends on what your definition of 'is' is" – Lagerbaer Dec 1 '11 at 23:11
## 8 Answers
You ask "Does the wave function/density state actually exist?" but this is a question that can't be answered. Quantum Mechanics is a mathematical model that gives an excellent description of the real world. QM is based upon the assumption that the wf/ds is a real object, but whether QM is a "real" description of the world is a question we need to leave to the philosophers.
In a comment to another question, Is there a mechanism for time symmetry breaking?, someone mentioned this paper. Although I'm not sure it revolutionises our understanding of the wavefunction it makes an interesting read in the context of this question.
There is only one universe with only one quantum state. Density states are statistical descriptions and do not and cannot possibly apply to only one universe. Only with an ensemble of many many copies of the same quantum state can we associate a density state according to the limiting ratios of frequencies of the measured values of observables. No ensemble, no density state and no wavefunction either.
Even if you believe this, you can believe that there is one universe with a state we don't know, and our ignorance leads to a probability distribution. If someone gave you a statistical description of cosmology using a density matrix, would you reject it out of hand? I think this is more of a philosophical statement, that there is "really" a pure state underneath, but what positivist meaning can you give to this statement, when a state which is rotated $\epsilon$ with respect to this pure state will be indistinguishable from it almost certainly, and we observe only one universe? – Ron Maimon Nov 19 '11 at 7:08
You're completely ignoring the fact that density states arise from partial traces as well as from statistical descriptions; partial traces are meaningful even if there is just one universe. – Peter Shor Dec 1 '11 at 20:51
Let me quote Zurek here. There is no information without representation. If the wave function or density state exists as some form of actual information, it has to be represented by matter in some physical way. It has to, in the colorful terminology of a very interesting character, be made of potatoes. Clearly, there is no such material representation within our universe, but it could be turtles all the way down and materially represented on some higher hypostasis.
There is a hidden assumption made here by most people including Pusey et al. The assumption is strict causality in time. Give me any particular instant in time, and realism combined with this assumption states that there is some complete set of information we can specify about the state at that time such that the probabilities for future outcomes can be determined to the best possible in principle based solely and uniquely upon the complete set of information at the given instant. This overlooks retrocausal interpretations where the actual observed outcome depends upon a transaction of alternations between forward causal influences and retrocausal influences. In such interpretations, the wavefunction need not exist.
The answer all boils down to which interpretation you adopt. Obviously, the many worlds interpretation deals with an existing wave function. The consistent histories interpretation also requires an existing real wave function because one of its requirements is consistent families and what the permissible consistent families are is very sensitive to the actual values of the components of the physical density state. The opinion of the Copenhagen interpretation is the wave function is only a computational tool for getting the probabilities of measured outcomes.
Theoretical Physics does not actually use the concept of 'existence'. That word does not appear in the usual axioms of QM and does not appear (much) in normal Physics textbooks either (I just checked Sommerfeld's Mechanics: as is typical, it only uses the word for mathematical existence, in an informal way: there exists a solution to the equation, etc.). How on earth would one define it, anyway? (Mathematical Logic doesn't use our normal intuitive concept of existence, either, as one can see by the fact that what might seem to be something like that, the existential quantifier, is in fact avoidable by using the universal quantifier and negation instead.)
Physics is, on the other hand, full of such statements as 'If the system is in the quantum state psi_o at time t=0, then....'
I think this certainly supports Mr. Rennie's point. I myself have noticed the same thing about the word 'event', I don't recall seeing it even once in any Mechanics textbook I have looked at...
Then the real difference between Bayesians and more 'classic' Quantum theorists such as Dirac and Wigner is that the latter simply say 'The set of quantum states of a system are the set of rays in a Hilbert Space' and go on to say many statements such as 'If the system is in the quantum state $v$ at time $t=0$ then ...' But Bayesians are forced to say 'The quantum state of a system encodes all our knowledge about the system' and they cannot avoid introducing subjective concepts like 'my' or 'knowledge'.
This is not a difference as to the probabilistic or statistical interpretation of the wave function: a 'classic' QM-er and a Bayesian can both say that the interpretation of the wave function is that the modulus squared of its values are the probabilities that ....etc. And a Bayesian could say the wave function or quantum state is 'real', depending on their philosophic notions of reality and existence, which, like Mr. Rennie said, are a separate issue from Physics. That is why I stated the real difference is between formally introducing subjective concepts like 'knowledge' or 'observer' or not introducing them. Dirac carefully avoids using either word: he says 'result of a measurement process' just as if no one was watching or cared.
A symptom of the difference between a Bayesian and a classic QMer is that the former expands the old idea of quantum state to include density matrices. The classic axioms make a sharp distinction: quantum state is a primitive concept and its connection with the probabilities of the results of measurement processes is given by axioms. Then the mixed states and density matrices are defined in terms of these, and the rules for calculating probabilities of results of measurements applied to mixed states are derived as theorems. For a logically careful classic QMer, all quantum states are pure and all systems are closed.
If you ask me, it is Bayesians who are trying to make the concept of quantum state palatable to our everyday intuition by dragging in subjectivism and knowledge issues. Dirac simply said one could only develop an intuition about quantum concepts by using them... To me, it seems that it is Bayesians who are trying to interpret a quantum concept and include that interpretation in the axiom or formal system, whereas Dirac wanted it to stay uninterpreted.
Your statement about mathematical logic is incorrect. The negation of a universal quantifier being an existential quantifier is essentially the equivalence of the statements "There is a crow of some color other than black" and "It is not true that all crows are black," which is perfectly valid using the standard definition of existence. – Peter Shor Dec 10 '11 at 19:26
Possibly you are being confused by the fact that in mathematical logic, if there are no crows, then all crows are black (or any color you please, for that matter). – Peter Shor Dec 10 '11 at 23:26
Existence is not a predicate. If $x$ is a proper name, mathematical logic cannot say « $x$ exists ». The fact that one can use quantifiers and we gave one of them the same name as « existence » does not prove that it covers the same concept as we have in ordinary language or physics when we say `exists'. I am not sure what you mean by « the standard definition » of existence. If existence is not one of the primitive undefined concepts, but is defined, then its meaning will vary with differing interpretations of the undefined concepts. (That is what underlies the Skolem paradox.) – joseph f. johnson Dec 15 '11 at 1:21
I think you're misinterpreting the debates a bit. The wave function isn't real. It's a bookkeeping device. QM has these things called observables. Nothing else is posited to be real. The wave function is not an observable. If you go look at the development of QM in Schwinger's book, you see that the wave function shows up purely as a mathematical intermediate to make a lot of calculations straightforward.
Copenhagen posits that there is an act, "measurement", which forces a particle into a pure state. Many worlds folks say that there is an act, "measurement", which splits the universe into multiple paths. Neither of them claim any reality for the wave function. There are interpretations, such as the Bohm-de Broglie pilot wave, which imply reality for the wave function. They're worth knowing about because they have been a useful intuitive tool for folks like John Bell.
The two things that are really worth reading about foundations of quantum mechanics at this point are van Kampen's 'Ten Theorems on Quantum Mechanical Measurements' (let me know if you can't find a copy), and Griffiths's book 'Consistent Quantum Theory' (available online).
-1: This is not true. You can measure the wavefunction on a collection of identically prepared systems, for example, a bunch of hydrogen atoms. Schwinger does not make the assumption that the wavefunction is an artifact; he just gives an unusual set of axioms on observations to reproduce quantum mechanics. – Ron Maimon Nov 18 '11 at 5:43
I would agree with that it is a bookkeeping device.It is similar to discussions on the barycenter of the solar system in weather related blogs; like asking :" is the barycenter real". It is a confusion of language levels, which is how paradoxes arise. The barycenter, (as the wavefunction), is a real mathematical point; it is calculable continuously with newtonian mechanics. That is its reality. In the reality framework of the barycenter plowing through the sun and creating effects it is nonsense. Real is "for a certain value of reality", i.e. the metalevel of the observation. – anna v Dec 10 '11 at 5:27
I want you to think computationally. Obviously, nature can and does compute quantum mechanics. It remains to be asked how nature does it. The Copenhagenists will smugly tell you it does not matter how nature does it or what happens in between. Inputs go in, and then a miracle happens and we are supposed to stay hush-hush about it and not peek inside, and outputs come out. Aren't you the least bit curious what happens in between? Do you want to listen to the Copenhagenists telling you nothing happens in between? Then, by implication, nature computes by magic and somehow gets it right.
If the Church-Turing thesis is right, nature needs to compute with some scratch space and "registers" and "RAM"s. Otherwise, nature is some hypercomputational machine. The contents of the scratch space which happen in between are what Bell termed "beables". What is the minimal information needed for a beable? Some would say the wave function. Others the density state. Others the path integral. If it is not possible to do with any less, the beable has to be real, or do you still insist upon objecting and fighting? The beable has to be real.
Have you ever heard of "nomological Bohmian mechanics"? The wavefunction is not treated as a thing, instead it is absorbed into the equation of motion of the classical configuration. That equation of motion can be reduced to local classical forces and a nonlocal quantum force whose specific form depends on the specific wavefunction that you started with. The point is that you do not need an exponentially large beable in order to get quantum dynamics. – Mitchell Porter Nov 19 '11 at 6:42
@Mitchell Porter: If you write a simulation of Bohmian mechanics, you still need a ton of RAM to store the wavefunction data, and there is no reduction in the amount of necessary data from the particle positions. The particle positions are just extra baggage. From a computation perspective, Bohm is worse than quantum mechanics, and the words you use like "nomological" don't make any difference to the computational heft of the wavefunction. – Ron Maimon Nov 19 '11 at 7:12
Ron, in nomological Bohmian mechanics you can get rid of the wavefunction entirely. You can go backwards from the Hamilton-Jacobi picture and just think of the net force which you calculate as the gradient of the phase, and then that force breaks into two parts as I described. Quantum dynamics then arises from the nonlocal part in the equations of motion for the classical beables. – Mitchell Porter Nov 21 '11 at 0:03
@Mitchell:you can repeat, you are still wrong. Just to specify the initial wavefunction, you need a function on 3N dimensions, which requires 10 to the 3N values even for only positions on a 10 cubed grid. The idea that the force only needs to be computed at the particle positions is seductive, but wrong in principle, because the wavefunction diffused around based on its own wave equation. You can calculate the wavefunction by summing over all paths, but there are exponentially many paths. – Ron Maimon Nov 21 '11 at 18:18
There is absolutely no reason to believe that the universe is being computed on some computer living in some hyperverse. In fact, if you take this view (as you seem to), you are left with the question of what is doing the computation that runs this hyperverse. Also, the absence of observations of "bugs" in the laws of physics argues strongly against this possibility. – Peter Shor Dec 1 '11 at 20:52
http://math.stackexchange.com/questions/104718/from-generating-functions-to-recurrence/104723
# From generating functions to recurrence
I am quite new to generating functions and I don't understand how one comes from the generating function equation to the recurrence:
$A = \sum_{n\geq 0} a_nx^n$
Now if $A=A(2xA'-A)+x$ holds, why is it easy to conclude that $a_n = \sum^{n-1}_{k=1} (2k-1)a_ka_{n-k}$ ?
## 1 Answer
The most important property of generating functions is that their multiplication is translated to a convolution of the corresponding sequences:
If
$$f(x)=\sum_{n=0}^\infty a_nx^n$$ $$g(x)=\sum_{n=0}^\infty b_nx^n$$
Then $h(x)=f(x)g(x)$ can be written as:
$$h(x)=\sum_{n=0}^\infty \left(\sum_{k=0}^n a_kb_{n-k}\right)x^n$$
The inner sum, $\sum_{k=0}^n a_kb_{n-k}$, is what I call the "convolution". It comes from the rules of multiplying power series (very similar to multiplying polynomials).
Now, if $A=\sum_{n=0}^\infty a_nx^n$, then by differentiating term by term we have:
$$A^\prime = \sum_{n=1}^\infty a_nnx^{n-1}$$
And so:
$$2xA^\prime = \sum_{n=1}^\infty 2na_nx^n$$
And so
$$2xA^\prime-A = \sum_{n=1}^\infty (2n-1)a_nx^n$$
And the result follows: taking the coefficient of $x^n$ on both sides of $A=A(2xA'-A)+x$ gives $a_n=\sum_{k=0}^{n}(2k-1)a_k a_{n-k}$ for $n\ge 2$ (the extra $x$ only affects $n=1$), and since the constant term of the equation forces $a_0=0$ in the combinatorially relevant solution, the $k=0$ and $k=n$ terms drop out and the sum runs over $1\le k\le n-1$.
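A small numerical sketch (an addition, not part of the original answer) that builds the sequence from the recurrence, assuming $a_0=0$ and $a_1=1$ as forced by the constant and linear terms of the functional equation, and checks it against $A=A(2xA'-A)+x$ up to order 10:

```python
# Build a_n from the recurrence, then check the coefficients of A*(2xA' - A) + x.
N = 10
a = [0.0] * (N + 1)
a[1] = 1.0
for n in range(2, N + 1):
    a[n] = sum((2*k - 1) * a[k] * a[n - k] for k in range(1, n))

b = [(2*m - 1) * a[m] for m in range(N + 1)]           # coefficients of 2xA' - A
for n in range(2, N + 1):
    conv = sum(a[k] * b[n - k] for k in range(n + 1))   # coefficient of x^n in A*(2xA'-A)
    assert abs(conv - a[n]) < 1e-9
```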
If you're new to generating functions - congratulations! This is a fascinating subject which at first glance might seem intimidating, but is actually very elegant and beautiful. I suggest Wilf's "generatingfunctionology" or Stanley's "Enumerative Combinatorics", or Flajolet and Sedgewick's "Analytic Combinatorics" (both the latter books can be quite a challenge, but are worth it).
thank you very much for the nice answer, I am currently reading generatingfunctionology. – stefan Feb 1 '12 at 21:37
http://physics.aps.org/articles/v6/19
# Viewpoint: Holey Intrinsic Photoconductivity
Guido Pupillo and Fabio Mezzacapo, IPCMS (UMR 7504) and ISIS (UMR 7006), Université de Strasbourg and CNRS, Strasbourg, France
Published February 19, 2013 | Physics 6, 19 (2013) | DOI: 10.1103/Physics.6.19
A new experiment reveals the different dynamics of particles and holes in ultracold fermionic gases in an optical lattice, opening the way to investigate condensed matter transport phenomena such as photoconductivity via quantum optical systems.
#### Intrinsic Photoconductivity of Ultracold Fermions in Optical Lattices
J. Heinze, J. S. Krauser, N. Fläschner, B. Hundt, S. Götze, A. P. Itin, L. Mathey, K. Sengstock, and C. Becker
Published February 19, 2013 | PDF (free)
When a crystal absorbs photons of suitable energy, electron-hole excitations are created in pairs leading to an increase in conductivity that is proportional to the photon flux. Such a phenomenon in which light generates electrical current, turning a nonconductor into a conducting material, is known as photoconductivity. While photoconductivity occurs, in principle, in any material, a substantial photocurrent is particularly easy to generate in semiconductors because of their small band gaps. Photoconductivity is of crucial interest for investigating electron-hole dynamics, transport properties of complex compounds, possibly displaying novel physics, as well as for its technological applications like semiconductor photodiodes and photoresistors. However, solid-state systems can be complicated because of the density of constituent atoms, so there is a desire to study physical effects from condensed matter physics in well-controlled analogous systems, such as cold atomic gases, as well as to understand how particle-hole dynamics operate in the latter case.
Writing in Physical Review Letters, Jannes Heinze of the University of Hamburg, Germany, and collaborators present results from combined experimental and theoretical work on the excitation dynamics of an ultracold fermionic gas trapped in a periodic optical potential or “optical lattice” [1]. The experiment is specifically designed to mimic the phenomenon of photoconductivity in an atomic gas. In this experiment, particles are transferred by modulations of the lattice amplitude from the lowest band of the lattice to the second excited one, leaving holes behind. An external harmonic potential induces oscillations of both particles and holes, corresponding to transport in the condensed matter system, and the ensuing dynamics is monitored via momentum-resolved absorption imaging techniques. The parallel with semiconductors is transparent and very appealing: photons in semiconductors are equivalent to lattice modulations, electrons are played by fermionic atoms, while holes remain…holes. Figure 1 illustrates the excitation process as occurring in the condensed matter system (upper panel) and in its cold-atom counterpart (lower panel). In the spirit of quantum simulations, one goal is to exploit these analogies to investigate the physics of semiconductors using quantum optical systems.
The research team uses, for the most part, an ultracold gas of spin-polarized noninteracting potassium-40 ($^{40}$K) fermionic atoms. In a few cases, a mixture of two different spin states is utilized instead, adding the possibility to include and tune interactions using a Fano-Feshbach resonance (i.e., a resonance with a molecular state that allows for the control of the sign and strength of interspecies two-body interactions). Modulation of the amplitude of the optical lattice promotes, without transferring quasimomentum (i.e., an intrinsic quantum number arising from the translational symmetry of the lattice), a few particles from the lowest to the upper band. However, in contrast to semiconductors, the two bands are curved here because of the presence of a confining harmonic potential—the optical dipole trap. The resonance frequency for excitations thus becomes quasimomentum-dependent because of the different curvatures of the bands, which allows for full control of the initial quasimomentum of the excited particles. Their dynamics can then be monitored using adiabatic band mapping and absorption imaging after a characteristic time-of-flight of $15$ milliseconds (ms). Rather than just doing transport measurements, this method allows the team to follow the periodic dynamics of the atoms and resolve their momentum completely. A surprising new feature of this experiment is that here, essentially the same is done for holes: differential absorption imaging techniques are crucial, in this case, to follow the time evolution of the hole depth (i.e., the hole momentum with respect to the Fermi one).
The dynamics of the fermions in the excited band displays extended oscillations in momentum space. The oscillation frequencies increase with that of the harmonic confinement, decreasing instead with the quasimomentum of the excitation as well as with the lattice depth. These oscillations are quite long-lived, with typical lifetimes of the order of $100$ ms. Conversely, the hole depth appears almost completely reduced after a much shorter time (i.e., approximately $2$ ms). However, and this is one of the central results, the hole depth displays a series of periodic revivals whose lifetime is longer as the depth of the optical lattice decreases. The direct momentum-resolved measurements of the hole dynamics, and especially of revivals of the hole depth, are an exciting new result whose observation is made possible by the exceptionally well-controlled experimental setup used by the Hamburg team.
For spin-polarized particles, the dynamics has a Hamiltonian description that is essentially determined by the combined effects of the confining periodic and harmonic trapping potentials. The former determines the band structure, with characteristic width $4J$ for the lowest band (with $J$ being the single-particle tunneling matrix element), while the latter fixes its curvature ($ν$ in text), that is, the energy cost paid when a particle moves from the center of the trap to its nearest neighboring lattice site. For the large ratios $(4J/ν)≫1$, realized in the Hamburg experiment, two classes of eigenmodes are present and can be classified according to their energy ($ϵ$). Low-energy modes (with $ϵ<4J$) are well described by harmonic oscillator eigenstates and, as a consequence, are delocalized around the center of the trap. When populated by fermions, these modes allow for transport and undamped oscillations, as observed in the experiment. Deviations from the harmonic-oscillator spectrum occur for $ϵ∼4J$ and are due to corrections arising from the lattice potential. These corrections are responsible for dephasing of dipole oscillations in the trap, as previously observed with bosons only [2]. High-energy modes ($ϵ>4J$), carefully avoided in the experiment, are instead close to position eigenstates, i.e., localized on either side of the harmonic potential.
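To make the two classes of eigenmodes concrete, here is a minimal numerical sketch (an illustration with assumed parameter values, not taken from Ref. [1]) that diagonalizes a 1D tight-binding band plus harmonic confinement and exhibits the crossover near the bandwidth $4J$:

```python
# Single-particle spectrum of H = -J * hopping + (nu/2) * i^2 on a 1D lattice.
# Low-lying levels are roughly equally spaced (harmonic-oscillator-like); the
# spacing shrinks near energy ~4J, above which modes localize on one side.
import numpy as np

J, nu = 1.0, 0.02        # assumed tunneling and trap curvature, so 4J/nu >> 1
L = 201                  # lattice sites, centered on the trap minimum
sites = np.arange(L) - L // 2

H = np.diag(0.5 * nu * sites**2)                 # harmonic on-site energies
H -= J * (np.eye(L, k=1) + np.eye(L, k=-1))      # nearest-neighbour hopping

energies, modes = np.linalg.eigh(H)
energies -= energies[0]                          # measure from the ground state

print(np.round(np.diff(energies[:5]), 4))        # ~sqrt(2*J*nu) = 0.2 spacing
print("bandwidth 4J =", 4 * J,
      " first level above it:", int(np.argmax(energies > 4 * J)))
```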
Such a peculiar single-particle spectrum, which has an analytic solution in the tight-binding limit [3, 4], can be used to gain a qualitative understanding of the dynamics of both particles and holes. In particular, the surprising revival of the hole depth may be explained with cycles of dephasing and rephasing of the fermionic cloud in the lowest band. The dephasing is much less pronounced for particles in the upper band because of their smaller number as well as the larger bandwidth. For a quantitative comparison to experimental results, the Hamburg team has chosen a particularly elegant theoretical approach, based on a semiclassical treatment [5, 6, 7]: when interband transitions can be neglected, the dynamics in each band is shown to map into that of a nonlinear pendulum, whose equation has swapped position and momentum. This treatment fully explains the observed dynamics of both particles and holes, here regarded as particles with negative mass, as standard in condensed matter. Specifically, the fast decay and revival dynamics is due to a larger spread of momentum in phase space for the holes, as a result of the smaller width of the lowest band.
So, having understood this beautiful physics, what now lies ahead? In their work, the authors give us a hint by presenting results for the lifetime of particles in the second band as a function of the scattering length in a mixture of atoms in two different spin states. The lifetime is shown to be strongly dependent on the interspecies interactions and is qualitatively explained in terms of interaction-induced particle-hole recombination. In contrast to traditional condensed matter systems, this interaction is here fully tunable. Combined with the long lifetime of the system demonstrated in this experiment, this capability of tuning interactions may open the way to the investigation of novel fundamental phenomena, allowing for a precise determination of the role of interactions in hole dynamics and lifetime, both in atomic and condensed-matter-type systems, relevant, e.g., to photoconductivity. Since their theoretical treatment would be hardly achievable using current techniques, this class of experiments would constitute a novel example of a “useful quantum simulation” [8]. Applications of quantum simulators are now thrilling the whole physics community, and thus, as in the best stories, it seems that the very best is yet to come.
### References
1. J. Heinze, J. Krauser, N. Fläschner, B. Hundt, S. Götze, A. P. Itin, L. Mathey, K. Sengstock, and C. Becker, “Intrinsic Photoconductivity of Ultracold Fermions in Optical Lattices,” Phys. Rev. Lett. 110, 085302 (2013).
2. C. D. Fertig et al., “Strongly Inhibited Transport of a Degenerate 1D Bose Gas in a Lattice,” Phys. Rev. Lett. 94, 120403 (2005).
3. M. Aunola, “The Discretized Harmonic Oscillator: Mathieu Functions and a New Class of Generalized Hermite Polynomials,” J. Math. Phys. 44, 1913 (2003).
4. A. M. Rey, G. Pupillo, C. W. Clark, and C. J. Williams, “Ultracold Atoms Confined in an Optical Lattice Plus Parabolic Potential: A Closed Form Approach,” Phys. Rev. A 72, 033616 (2005).
5. L. Pezzè et al., “Insulating Behavior of a Trapped Ideal Fermi Gas,” Phys. Rev. Lett. 93, 120401 (2004).
6. A. R. Kolovsky, and H. J. Korsch, “Bloch Oscillations of Cold Atoms in Optical Lattices,” Int. J. Mod. Phys. B 18, 1235 (2004).
7. C. Hooley and J. Quintanilla, “Single-Atom Density of States of an Optical Lattice,” Phys. Rev. Lett. 93, 080404 (2004).
8. J. I. Cirac and P. Zoller, “Goals and Opportunities in Quantum Simulation,” Nature Phys. 8, 264 (2012).
### About the Author: Guido Pupillo
Guido Pupillo is Professor of Physics at the Université de Strasbourg and Director of the Quantum Physics Lab at the Institutes IPCMS and ISIS in Strasbourg, France. After receiving his Ph.D. from the University of Maryland in 2005, for research conducted at NIST, Gaithersburg, he worked at the Austrian Academy of Sciences and at the University of Innsbruck, Austria, where he received his Habilitation in 2011. He is the recipient of the prestigious ERC St-grant 2012 and the French ANR Chair d’ Excellence 2012. His group carries out research on atomic, molecular, and optical physics, and nonequilibrium dynamics of quantum systems.
### About the Author: Fabio Mezzacapo
Fabio Mezzacapo is a Senior Postdoctoral Researcher at the Université de Strasbourg, France. He received his Ph.D. in physics from the University of Alberta, Canada, in 2008. Until 2012, he worked at the Max Planck Institute for Quantum Optics in Garching, Germany. His main current research interests are centered on strongly correlated 2D fermionic systems and quantum gases.
http://mathhelpforum.com/differential-geometry/152325-using-lagrange-theorem.html
# Thread:
1. ## using the Lagrange Theorem
Suppose G is a group of order 48 and H is a subgroup of order 12; how many distinct right cosets of H are there in G?
I thought it was 6. Am I right? If not, can you show me which way to go?
2. Originally Posted by tigergirl
Suppose G is a group of order 48 and H is a subgroup of order 12; how many distinct right cosets of H are there in G?
I thought it was 6. Am I right? If not, can you show me which way to go?
Lagrange's theorem: if $G$ is a finite group and $H\leq G$ then $|G|=[G:H]\cdot |H|$ .
Now you can see your mistake.
Tonio
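Spelling out the arithmetic that the theorem gives here:
$$[G:H] = \frac{|G|}{|H|} = \frac{48}{12} = 4,$$
so there are 4 distinct right cosets of $H$ in $G$ (and likewise 4 left cosets), not 6.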
http://www.physicsforums.com/showthread.php?s=c3532470767de9361c8fcded6eceaf91&p=4176990
## Proving the dot product between these vectors is always the same value
How would you show that the dot product between the normal unit vector of a plane and a position vector to any point on the plane is always the same without using this formula
$$n.(r-r_0) = 0$$
∴ $$n.r=n.r_0$$
where $$n$$ is the normal vector, $$r$$ and $$r_o$$ are two position vectors to two points on the plane.
I'm looking for an alternative geometric argument/proof that applies to all cases.
I notice that if you have a point P on a plane that is directly above the origin which is parallel to the xy-plane, then the dot product is simply the magnitude of the vector OP. Then as you move further out from the origin to some point $$P_n$$ on the plane, the position vector gets larger and the projection of the unit normal vector on the vector $$OP_n$$ gets smaller. One gets larger, the other gets smaller and somehow their product is always the same.
So again, I'm after a way to prove this for all cases geometrically.
Thanks
From the geometric interpretation of the dot product, you are trying to show that orthogonal vectors do not have any extent in each other's direction? What is wrong with: use the x-y plane ... all other planes follow on a rotation + translation. The normal is n = (0, 0, 1), and any point in the plane is (x, y, 0) from the origin. Your result follows from the definition of the dot product. All other position vectors follow from a translation.
First of all, the statement, as given, is not true. Let the plane be given by z = 1 in an xyz-coordinate system and the given point (2, 2, 1). The unit normal to the plane is of the form <0, 0, 1>. The position vector of the point is <2, 2, 1> and their dot product is 1, not 0. You appear to be assuming that the plane contains the origin. In that case the position vector of any point in the plane is in the plane itself and so is perpendicular to the normal vector.
Geometric proof. The difference between any two points in a plane is a line in the plane. The normal is perpendicular to all lines in the plane, so that dot product = 0.
@autodidude: perhaps you did not phrase the question as well as you'd hoped? Want to have another go? Tell us what you are thinking of when you say "geometric proof", and what it is you want to prove. From the context, I just guessed that "a position vector to a point in the plane" meant "a vector between any two points in the plane"... but I noticed that you seemed to have defined the thing you want to prove ... that is: the "normal vector to the plane" is, by definition, the vector whose dot product with any vector that lies in the plane is zero. In general, you'll find you can make more headway by turning natural-language statements into mathematical statements.
Well, in the textbook, he writes $$n.(r-r_0) = 0$$ ∴ $$n.r=n.r_0$$
So the dot product between the unit normal vector to the plane and the position vector to any point on the plane is always the same value, no matter what point you pick. I just wanted to see a different approach as to why this is true without simply rearranging the formula.
I've been trying to prove it myself, and the best I can come up with is this: Let P be any point on the plane such that the position vector OP is parallel to the normal unit vector n. The dot product between OP and n is |OP|. Then, to show that the dot product between the normal unit vector and a position vector to any other point Q on the plane is the same, I did the following:
$$OQ.n=|OQ||n|\cos(\theta)$$
Since $$|OQ|=\frac{|OP|}{\cos(\theta)}$$, then
$$OQ.n=\frac{|OP|}{\cos(\theta)}|n|\cos(\theta)$$
$$OQ.n=|OP||n|$$
$$OQ.n=|OP|$$
Does this look alright?
So the dot product between the unit normal vector to the plane and the position vector to any point on the plane is always the same value - no matter what point you pick.
Provided the position vector is in between two points in the plane.
##\vec{p}## is a position vector to a point P in the plane. You'd say "OP" right?
##\vec{q}## is another position vector to another point Q in the plane, so ##\vec{p}\neq\vec{q}##.
The difference between these two positions would be a displacement vector within the plane. ##\vec{d}=\vec{q}-\vec{p}## (from P to Q, final minus initial)
If ##\vec{v}## is a vector that does not lie in the plane, then ##\vec{v}\cdot(\vec{p}-\vec{q})=0## means that ##\vec{v}## is normal to the plane. The equation "he" wrote down is part of the definition of "a plane surface".
The plane is defined geometrically as the set of all points whose displacement from a given point is perpendicular to a given vector. The given vector is called the "normal" to the plane.
In general, if ##\vec{n}## is a (unit) normal vector to a plane, then its dot product with any position vector to a point in the plane will be different depending on the point vis: ##\vec{n}\cdot\vec{p}\neq\vec{n}\cdot\vec{q}\neq 0## ... it might be, but it will not always be.
Using this formalism, what you have written is:
(1) ##\vec{p}=p\vec{n}## eg OP has the same direction as the unit normal.
(2) hence: ##\vec{p}\cdot\vec{n}=p##
if Q is in the plane, and Q is not P, then:
(3) ##\vec{q}\cdot\vec{n}=q\cos(\theta)## ... where ##\theta## is the angle between them.
Since OQ forms the hypotenuse of a right-triangle OPQ - you can say that
(4) ##p=q\cos(\theta)##
So, by substitution into (3)
(5) ##\vec{q}\cdot\vec{n}=p##
... so far so good.
Notice that the vector notation is the same as your labelling by capital letters.
There is a conceptual distinction though isn't there: OP (conceptually) must start at the origin and end at point P while the vector ##\vec{p}=\overrightarrow{OP}## can also stand for any vector parallel to it that has the same length.
I have a feeling you are trying to find the link from the concrete labelled points concepts and the general vector concepts.
What do you mean if the position vector is in between two points? I thought position vectors always referred to a vector emanating from the origin? But doesn't the definition of a plane surface say that the dot products should be the same when you expand and rearrange it? I also now realise the author never said 'n' was a UNIT normal vector, I just assumed it was for some reason... shouldn't this then mean that the dot product 'equality' (which I can't yet see what is wrong with) should be true for all normal vectors, not just unit ones?
Recognitions:
Homework Help
Quote by autodidude What do you mean if the position vector is in between two points? I thought position vectors always referred to a vector emanating from the origin?
All position vectors are relative to some reference point ... unless one is stated, the defined origin is used. If no origin has been defined, then you have to use your head. Strictly speaking, the vector from one point to another would be a displacement since it is the difference between two positions ... but it could also be the position of one object with respect to another.
Recall: the original proposition was:
...the dot product between the normal unit vector of a plane and a position vector to any point on the plane is always the same...
... from post #1.
This proposition is false. (It will only work for planes that include the origin.)
Now: how can you modify that proposition to make it true, in general?
But doesn't the definition of a plane surface say that the dot products should be the same when you expand and reaarange it?
The dot product of any vector normal to the plane and any vector that lies in the plane will always be zero. This is geometrically the same as saying that the normal vector is perpendicular to the plane.
eg.
In your example, previous, P and Q are points in the plane: ##\vec{p}=\overrightarrow{OP}## was chosen to be a normal vector to the plane.
Any vector parallel to OP will also be a normal vector to the plane.
If ##|\vec{p}|\neq 0## and ##\vec{q}=\overrightarrow{OQ}\neq \vec{p}##
Then: ##\vec{p}\cdot\vec{q}\neq 0## because ##\vec{q}## is not in the plane: it is the position of a point in the plane: different things.
However: ##\vec{p}\cdot (\vec{q}-\vec{p})=0## because ##(\vec{q}-\vec{p})=\overrightarrow{PQ}## is a vector that lies in the plane.
Since the dot product is commutative:
##\vec{p}\cdot\vec{q}=\vec{p}\cdot\vec{p} \Rightarrow p = q\cos(\theta)##
This is consistent with the observation that OPQ forms a right-angled triangle with ##\vec{q}## as the hypotenuse. But notice that ##\vec{p}\cdot (\vec{q}-\vec{p})=0## is just the relation you don't want to use, with different letters?
Hmmm... oh, I think that's what HallsOfIvy was saying... I don't really see how it would only work for planes that include the origin though. I'm not trying to say that the dot product is 0; I'm saying it's always the same value for some normal vector. So in HallsOfIvy's example, the chosen normal vector is <0, 0, 1>. If you pick any other point on the plane, it'll be of the form <a, b, 1>, and so the dot product between that and <0, 0, 1> will always be 1. The normal vector to the plane z=1 will always be of the form <0, 0, k>, and hence the position vector from the origin to any point on the plane, <a, b, 1>, dotted with the normal vector will be equal to k. That is what I thought this meant: $$n.r=n.r_0$$
Yeah, I'm not saying the vector q dotted with the vector p is 0. I get that only vectors lying in the plane dotted with the normal to the plane give zero. What I'm trying to say is that the position vectors from the origin to points on the plane, when dotted with a normal vector of some chosen magnitude, always give the same value no matter which position vector you choose, so long as the point it ends at lies on the plane. ... I also just realised that I'm assuming the normal vector is specified by a position vector originating from the origin...
Recognitions:
Homework Help
What I'm trying to say is that the position vectors from the origin to points on the plane, when dotted with a normal vector of some chosen magnitude, always give the same value no matter which position vector you choose, so long as the point it ends at lies on the plane.
Oh well ... the examples you showed in post #1 had that constant as zero, but now I get you: you mean that (for P and Q etc. already discussed) ##\vec{p}\cdot\vec{q}=p^2##, which is always the same for a particular ##\vec{p}##? Well, yes. ##p^2## depends only on ##\vec{p}##, which you have chosen to be a normal to the plane.
Remember the geometric interpretation of the dot product?
What you are doing is constructing triangles whose adjacent side is shared.
Hence: any arbitrary ##\vec{q}## will have the same extent in the normal direction.
A drawing of the triangle is your geometric proof.
Ok, thanks a lot, that's really all I wanted to know!
Cool. BTW: the proof is the same as in post #2 - if you make the plane an arbitrary distance along the z axis. The "constant" you get in this process is the shortest distance from the origin to the plane (multiplied by the length of the normal vector you used).
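The constancy being described here is easy to check numerically: pick a plane, sample points on it, and dot each position vector with a fixed normal. A minimal Python sketch (the particular plane and normal are arbitrary choices, not anything from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)

n = np.array([1.0, -2.0, 2.0])   # a (not necessarily unit) normal vector
p0 = np.array([0.5, 1.0, 2.0])   # one known point on the plane n . r = n . p0

# Build random points on the plane: p0 plus combinations of two in-plane directions.
a = np.cross(n, [1.0, 0.0, 0.0])     # a vector perpendicular to n
b = np.cross(n, a)                   # a second in-plane direction
coeffs = rng.standard_normal((5, 2))
points = p0 + coeffs[:, :1] * a + coeffs[:, 1:] * b

print(points @ n)   # all entries equal n . p0
print(n @ p0)       # the common value: |n| times the plane's distance from the origin
```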
http://math.stackexchange.com/questions/53192/constructing-adjunction-from-left-adjoint-and-unit
# Constructing adjunction from left adjoint and unit
The definition of adjoint functors in terms of universal morphisms lends itself to very economical proofs in situations where one has a functor but no "direct" candidate for the left adjoint functor (only something looking like a unit and/or a suggestion for maps $\overline f$ as above).
I am now in a situation where I have something I suspect will be a left adjoint functor and something I suspect will be the unit of an adjunction. And I am now wondering if there is a similar characterisation of adjoint functors applicable in the case.
I realise that, since adjoints are unique, the functor alone determines the (possible) adjunction. I guess I am looking for a way to check whether what I suspect is a unit can be part of an adjunction, and if that is the case, what minimal additional structure I need to define the other components of the adjunction.
Concretely, in this case, the problem is related to pointfree topology. I'm working on proving the existence of a right adjoint to the functor $\Omega \; \colon \mathbf{Top} \to \mathbf{Loc}$ (sending a space to its open set lattice and a continuous function to the corresponding frame homomorphism). I know this will turn out to be a topology on the set of points $\operatorname{Pt} X$ of the locale.
What I'm trying to do is prove this by defining the quotient map $X \to \operatorname{Pt} (\Omega \, X)$ where $\operatorname{Pt} (\Omega X)$ is given the quotient topology induced by the equivalence relation defined by two points being equivalent if they correspond to the same point of the space's locale (I think this is called the soberisation of the space). Formally this is done by associating to each element $x \in X$ the map $p_x \; \colon 1 \to X$ such that $p_x(\cdot) = x$, where $1$ is some fixed one-element space. Two points $x$ and $y$ then correspond to the same point if $\Omega \, p_x = \Omega \, p_y$.
I thus have the left adjoint $\Omega$ and something looking like a unit (the quotient map), and I'm wondering if I can get away without "explicitly" defining how $\operatorname{Pt}$ acts on morphisms, instead defining some other structure. For example, but not necessarily, something like the universal morphisms in the one definition of adjunction.
I know there are other (more or less intuitive) ways to prove this by making $\operatorname{Pt}$ a functor explicitly, I just found the quotient topology construction above very natural, and I wondered if one could make a nicer proof by applying it.
-
I did a major rewrite of the question, I hope it is more clear now. – Tilo Wiklund Jul 23 '11 at 1:47
Have you looked at Freyd's adjoint functor theorem? Mac Lane [CWM, 1998, p. 125] gives the example of constructing the Stone–Čech compactification using it. – Zhen Lin Jul 25 '11 at 1:35
@Zhen: I don't see how the adjoint functor theorem should help here. We know that there is a right adjoint to $\Omega$ and that it is given on objects by $\operatorname{Pt}$ as tilo describes it above. We also know that the unit of the adjunction is given by the construction described here. The question is: Can we use this construction in proving that $\operatorname{Pt}$ is the right adjoint? The obvious way to proceed is to write down the counit and check universality at the other end of the adjunction (that's straightforward). But that's not what's asked here (as far as I can tell). – t.b. Jul 25 '11 at 4:21
Quite right, the main issue is that I would like to avoid having to make $\operatorname{Pt}$ (explicitly) a functor, letting its action on morphisms be decided by $\Omega$ and this unit. I wouldn't mind having to construct some additional structure, as long as doing so is less work (or, at least, a more natural feeling construct) than making $\operatorname {Pt}$ a functor. – Tilo Wiklund Jul 26 '11 at 0:16
http://physics.stackexchange.com/questions/14973/what-would-be-the-effects-on-theoretical-physics-if-neutrinos-go-faster-than-lig
# What would be the effects on theoretical physics if neutrinos go faster than light?
Earlier today, I saw this link on Facebook about neutrinos going faster than the speed of light, and of course, re-posted. Since then, a couple of my friends have gotten into a discussion about what this means (mostly about time-travel), but I don't really know what this really implies. This made me wonder...
What are the biggest and most immediate implications of this potential discovery?
Related: Superluminal neutrinos
-
Job security for theorists and neutrino experimenters. I am so golden in that case. – dmckee♦ Sep 22 '11 at 21:54
El'endia, I polished up your question a bit since we're getting so many duplicates of it. Hope you don't mind ;-) – David Zaslavsky♦ Sep 23 '11 at 19:10
@David: No, I don't mind. :) – El'endia Starman Sep 23 '11 at 19:14
@Joel: They repeated the experiment at the end of October with a design that was better suited to studying this issue. The original experiment was not designed for this purpose, so they had to extract the 60 ns shift from pulses that were 10,000 ns wide. The October version used 1-2 ns pulses. MINOS and T2K will also try to reproduce it. – Ben Crowell Nov 11 '11 at 22:25
## 6 Answers
Before I answer, a couple caveats:
1. As Adam said, the universe isn't going to start behaving any differently because we discovered something.
2. Right now it seems much more likely (even by admission of the experimenters) that it's just a mistake somewhere in the analysis, not an actual case of superluminal motion.
Anyway: if the discovery turns out to be real, the effect on theoretical physics will be huge, basically because it shows that special relativity is incomplete. That would have a "ripple effect" through the last century of progress in theoretical physics: almost every branch of theoretical physics for the past 70+ years uses relativity in one way or another, and many of the predictions that have emerged from those theories would have to be reexamined. (There are many other predictions based on relativity that we have directly tested, and those will continue to be perfectly valid regardless of what happens.)
To be specific, one of the key predictions that emerges out of the special theory of relativity is that "ordinary" (real-mass) particles cannot reach or exceed the speed of light. This is not just an arbitrary rule like a speed limit on a highway, either. Relativity is fundamentally based on a mathematical model of how objects move, the Lorentz group. Basically, when you go from sitting still to moving, your viewpoint on the universe changes in a way specified by a Lorentz transformation, or "boost," which basically entails mixing time and space a little bit. (Time dilation and length contraction, if you're familiar with them.) We have verified to high precision that this is actually true, i.e. that the observed consequences of changing your velocity do match what the Lorentz boost predicts. However, there is no Lorentz boost that takes an object from moving slower than light to moving faster than light. If we were to discover a particle moving faster than light, we would have a type of motion that can't be described by a Lorentz boost, which means we have to start looking for something else (other than relativity) to describe it.
Now, having said that, there are a few (more) caveats. First, even if the detection is real, we have to ask ourselves whether we've really found a real-mass particle. The alternative is that we might have a particle with an imaginary mass, a true tachyon, which is consistent with relativity. Tachyons are theoretically inconvenient, though (well, that's putting it mildly). The main objection is that if we can interact with tachyons, we could use them to send messages back in time: if a tachyon travels between point A and point B, it's not well-defined whether it started from point A and went to point B or it started from B and went to point A. The two situations can be transformed into each other by a Lorentz boost, which means that depending on how you're moving, you could see one or the other. (That's not the case for normal motion.) This idea has been investigated in the past, but I'm not sure whether anything useful came of it, and I have my doubts that this is the case, anyway.
If we haven't found a tachyon, then perhaps we just have to accept that relativity is incomplete. This is called "Lorentz violation" in the lingo. People have done some research on Lorentz-violating theories, but it's always been sort of a fringe topic; the main intention has been to show that it leads to inconsistencies, thereby "proving" that the universe has to be Lorentz-invariant. If we have discovered superluminal motion, though, people will start looking much more closely at those theories, which means there's going to be a lot of work for theoretical physicists in the years to come.
-
Or, there could be other ways to 'fix' things--like the 'true' speed of light is the neutrino speed, and there is some mechanism to slow down all of the electromagnetically interacting particles, so that they traveled at an apparent 'c'. This is all insanely premature, though. – Jerry Schirmer Sep 24 '11 at 5:15
Tachyons don't allow superluminal communication. This is not a tachyon, but a true relativity violation. – Ron Maimon Sep 24 '11 at 7:14
@Terrett: The Scharnhorst effect does not exist. It is easy to prove that it is impossible in QED. – Ron Maimon Sep 28 '11 at 1:17
While interesting, even potentially enormous for physics, you can still bet on the sun coming up tomorrow. One thing I like to point out to people who are enamored with the fact that science is constantly changing is that any new changes have to fit the old observations into them. The article even mentions this specifically.
If it turns out that neutrinos have the potential to travel faster than light, the fact remains that general relativity does a fantastic job of explaining a wide variety of phenomena and it always will.
-
Yeah, this is like Einstein's Theories replacing Newtonian gravity; the layperson won't be affected. But what are the specific (potential) effects on theories and such of this? – El'endia Starman Sep 22 '11 at 21:24
I admit that there would have to be changes to some of the theories and that my answer was a bit glib, but I think it does get to the real point. If no one else puts anything down, I'll come back and give some theory stuff. – AdamRedwine Sep 23 '11 at 1:50
I read that the measured travel time over hundreds of miles was about 60 nanoseconds. That's about a 60 foot path length error over hundred of miles. How well can they measure the path length? Combined with the horrible quality science reporting, I'm left quite unconvinced. – Colin K Sep 23 '11 at 3:11
I definitely believe the precision, it's the accuracy I doubted. But it looks like that has been addressed as well. – Colin K Sep 23 '11 at 15:15
@Colin - You need to read the press release and official paper: The science reporting is bad, as usual, but CERN and OPERA have been very cautious and wouldn't want to publish something like this if it turned out not to be true. Don't base your opinions on the reporting, go to the original sources. – Kevin Vermeer Sep 23 '11 at 19:49
There is no chance that this observation reflects neutrino physics. The neutrinos from supernova 1987a arrive 3 hrs before the light, due to blocking of the supernova light by matter. Let us double this to 6 hrs to include some dubious measurements, and assume that all the 6 hrs is due to superluminal neutrino travel. Then the time difference for 400 km vs. 168,000 light years is $2.5 \cdot 10^{-12} s$, and this is 4 orders of magnitude smaller than the measured deviation. This means that if neutrinos outrun light by this much, the neutrinos from the supernova would have come in about a year earlier than the light.
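The scaling argument can be written out explicitly; a quick Python sketch using the figures quoted above (a 6 h early-arrival bound and a 400 km baseline; the actual CERN to Gran Sasso baseline is roughly 730 km, which does not change the conclusion):

```python
# SN1987A scaling: if neutrinos outran light by at most ~6 h over 168,000 light
# years, how much could they outrun light over the OPERA baseline?
LIGHT_YEAR_M = 9.461e15          # metres per light year
early_arrival_s = 6 * 3600       # generous early-arrival bound from SN1987A (s)
sn_distance_m = 168_000 * LIGHT_YEAR_M
baseline_m = 400e3               # figure quoted above; the real baseline is ~730 km

dt_expected = early_arrival_s * baseline_m / sn_distance_m
dt_opera = 60e-9                 # the ~60 ns effect OPERA reported

print(dt_expected)               # ~5e-12 s
print(dt_opera / dt_expected)    # ~1e4: about four orders of magnitude too large
```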
The distance measurement is tricky, because the light-path is not the same as the neutrino path: the neutrinos go through the Earth. If you measure the distance by sending radar between towers, you have to deal with curvature corrections due to mountains in between, buildings, etc., which can easily add 20 m of path-length over 400 km. So I assume that they measured the distance using GPS. But then you have the issue that you are relying on U.S. government assurances that the absolute GPS positions are reliable to 20 m. Relative distances might be ok even when absolute distances are off over large distances.
I can't say more without seeing the measurement, but it is certain in the scientific sense of 5 sigma confidence that this is not a correct result, so it is probably best to classify this as an irresponsible publicity stunt.
### AFTER SKIMMING THE PAPER: No error bound on the GPS absolute position
Their estimate of distance measurements is based on the excellent relative values for displacement given the GPS coordinates. They can detect cm shifts in the Earth's crust etc. But the whole point is that you need the relative distance between the two points, and they have absolutely no independent calibration of the error in the long distance measurement, and blow smoke and mirrors with how accurate the short distance measurements are.
Here is the reference they give for their absolute distance measurement; they did none of their own; http://www.iers.org/nn_11216/IERS/EN/IERSHome/home.html , and they did no error estimate on the values they get from this. This is no good.
I don't know any way to calibrate the absolute position independently which is more accurate than the neutrino beam, so the best interpretation of the paper is that they used the neutrino beam to measure the distance between the receiving and emitting point with better accuracy than the project above gives.
### Satellite Aberration
Given that the Earth is rotating with a speed v of approximately 400 m/s, there is an aberration in the apparent angular position of satellites which is of the order v/c, and is normally negligible. The magnitude of the aberration between two instantaneous measurements 700 km apart depends on the angular position of the satellite in the sky, and for a satellite at 20,000 km gives a difference in estimated position of about 20 m, times a trigonometric factor which can reduce this by 10% to 1%.
I don't see an estimate of correction for angular aberration in the paper.
-
Yes , they used GPS. see the paper , link in top comments. My basic question which the thorough analysis in the paper does not answer outright is that light used for the GPS has the same problems as light from supernova neutrino, i.e. it goes through matter that has electromagnetic characteristics, the atmosphere. It may be that the light beam from the GPS is slower because of electromagnetic effects, whereas neutrino basically do not interact. the same argument as with the supernova. They might have programmed it in the model, though it may be another source of errors. – anna v Sep 23 '11 at 6:07
continued: it needs sharp tools to be able to use indices of refraction and RF and electric potentials existing in the atmosphere, which I lack. They may be measuring the group velocity of light through the atmosphere with the neutrinos as the upper limit. – anna v Sep 23 '11 at 6:09
It's a little bit ironic that they used a device (GPS) whose functionality is based on the theory of relativity to disprove it :P – Andyk Sep 23 '11 at 14:26
This doesn't answer the question. I'm not asking about whether it's possible/true, that's what the other question is for. I just want to know the potential implications if this turned out to be real. – El'endia Starman Sep 23 '11 at 16:37
Since Hawking already found quantum effects to get around the black-hole roadblock thrown up by unadulterated, un-quantum General Relativity, it is possible that there are similar quantum tunnelling ways to get around the notion that $c$ is a roadblock. In fact, General Relativity already requires modification of the constancy of the speed of light in vacuo anyway, and the sky did not fall for that, so even if these experimenters are correct, which seems unlikely, this need not be any more revolutionary than what we already realised: there are fundamental problems with unifying Special Relativity with Quantum Mechanics and we may not have been finished yet. Without being earthshaking, it would be great and exciting to have a measurable incompatibility between Special Relativity and Quantum Theory more accessible to experiment than the Planck scale.
-
If the results of the OPERA experiment are confirmed, that would probably mean that neutrinos are tachyons with negative mass squared. As David wrote, tachyons are consistent with special relativity (SR), i.e. SR is not violated, but we still need to do something about the potential causality violation.
But this situation is not new in physics. A thought experiment authored in 1935 by Einstein, Podolsky and Rosen (EPR) was intended to demonstrate that quantum theory does not comply with causality. It took about 50 years to establish that both causality and quantum theory are preserved in this thought experiment. By the way, it is not a thought experiment any more. The idea of EPR paradox is used in real experiments on quantum teleportation, as well as in newest information security systems.
I've found a book by Moscow State University professor Yakov Terletsky published in 1966 (unfortunately only in Russian). In this book Terletsky is making an attempt to explain existence of tachyons without violation of causality. Here is the link:
http://lib.mexmat.ru/books/8667
-
Here is a discussion of tachyons and causality: math.ucr.edu/home/baez/physics/ParticleAndNuclear/tachyons.html . There have been three main types of explanations proposed: (1) models in which Lorentz invariance is preserved, but neutrinos are tachyons, (2) models in which there is a preferred frame, neutrinos have real mass, and different particles have different limiting velocities, (3) extra dimensions. Both 1 and 2 fail: arxiv.org/abs/1110.3763 , arxiv.org/abs/1109.5682 . 3 also fails: arxiv.org/abs/1109.6312 , arxiv.org/abs/1109.5687 . – Ben Crowell Nov 11 '11 at 21:32
A friend of mine wondered out loud if this discovery -- assuming it becomes that -- would affect a Polish theorist's viable alternative to many-worlds and 'string' theories wherein he postulates an x', y' and z' such that they are quadrantalie to each other and their base (unprime) axies. Dot product x' by y' and cross-product the result by z' to yield time and VOILA the resulting system has gravity BUILT IN, not grafted-on as with "Standard Model". He goes on to say that it is verifiable by experiment (unlike "string theory") and we will see it in a lower-than-expected Higgs boson mass (roughly 86%) from "Standard Model". I tried to get more information from him but he's off-line today.
-
Without links to relevant references (i.e. published papers or at least preprints), this sounds extremely suspect. – David Zaslavsky♦ Sep 29 '11 at 4:45
http://math.stackexchange.com/questions/185018/sequence-that-maps-to-arbitrary-positive-real-number?answertab=active
# Sequence that maps to arbitrary positive real number
How do you construct a sequence of functions $f_n(x)$ such that every value $s > 0$ is attained as
$$s = \limsup_{n\rightarrow \infty} \sqrt[n]{f_n(x)}$$
for some choice of $x$?
I know it's possible to do this with a different sequence
$$s = \limsup_{n\rightarrow \infty} (1 + \frac{x}{n})^n$$
where $x = \log(s)$.
The motivation is from proofs on radius of convergence which rely on the definition of the radius
$$r = \frac{1}{\limsup_{n\rightarrow \infty} \sqrt[n]{f_n(x)}}$$
and out of curiosity I tried to construct a function similar to that for $e^x$ that could map to any $s = 1/r$ but couldn't.
-
## 1 Answer
If $f_n(x)=x^n$, then $\sqrt[n]{f_n(x)}=x$ and you are there.
-
Oh wow, I missed the obvious by about a mile. Thanks! – JasonMond Aug 21 '12 at 13:56
http://www.physicsforums.com/showthread.php?p=3888186
## If there were only one charge in the universe.
There was this question that I saw in a book, and it also had an answer.
(this is not a homework question)
The Question was:
If there were only one type of charge in the universe, then :
• $\phi = \oint_s\boldsymbol{E}.\partial\boldsymbol{ A}\neq 0$ on any surface
• $\phi = \oint_s\boldsymbol{E}.\partial\boldsymbol{ A}=0$ if the charge is outside a surface
• $\phi = \oint_s\boldsymbol{E}.\partial\boldsymbol{ A}$ is not defined
• $\phi = \oint_s\boldsymbol{E}.\partial\boldsymbol{ A}=\frac{q}{\epsilon_o}$ if the charge is outside a surface
the answers were given as being the second and the last option.
I believe the answer is incorrect, for these reasons:
1. The answer assumes that an electric field will exist.
2. But this is not the case, until and unless there is a bipolarity there cannot be an electric field (in the case of isolated charged objects, the field exists because the bipolarity is separated by a distance ∞).
3. This integral will result in a constant 0, as the electric field will be zero (in all cases).
I want to know if my reasons are correct or not; if not, why is the given answer correct or wrong, and what should be the correct answer?
But this is not the case, until and unless there is a bipolarity there cannot be an electric field
Why? What would be wrong with a universe with only electrons inside?
If you plan to remove the option to have opposite charges from the whole theory, you might have to reformulate quantum electrodynamics. But that is not part of the question here, I think.
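mfb's point can also be checked numerically: Gauss's law only cares about whether the (single-sign) charge is enclosed, and a brute-force surface integral of a lone point charge's field shows exactly that. A minimal sketch in Python (the charge value, cube size, and positions are arbitrary choices for illustration):

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def point_charge_E(q, r_charge, r):
    """Electric field of a point charge q at r_charge, evaluated at points r (..., 3)."""
    d = r - r_charge
    dist = np.linalg.norm(d, axis=-1, keepdims=True)
    return q * d / (4 * np.pi * EPS0 * dist**3)

def flux_through_cube(q, r_charge, half=0.5, n=400):
    """Integrate E . dA over the six faces of the cube [-half, half]^3 (midpoint rule)."""
    s = (np.arange(n) + 0.5) / n * 2 * half - half   # midpoints of an n x n face grid
    U, V = np.meshgrid(s, s, indexing="ij")
    dA = (2 * half / n) ** 2
    total = 0.0
    for axis in range(3):            # face normal along x, y or z
        for sign in (-1.0, 1.0):     # the two opposite faces
            pts = np.zeros((n, n, 3))
            pts[..., axis] = sign * half
            pts[..., (axis + 1) % 3] = U
            pts[..., (axis + 2) % 3] = V
            E = point_charge_E(q, r_charge, pts)
            total += np.sum(E[..., axis] * sign) * dA   # outward normal component
    return total

q = 1e-9  # 1 nC, a single positive charge and nothing else
print(flux_through_cube(q, np.array([0.0, 0.1, -0.2])) * EPS0 / q)  # ~1: charge inside
print(flux_through_cube(q, np.array([2.0, 0.0, 0.0])) * EPS0 / q)   # ~0: charge outside
```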
http://math.stackexchange.com/questions/35195/is-there-a-mathematical-name-given-to-the-count-of-negative-and-positive-numbers/49697
# Is there a mathematical name given to the count of negative and positive numbers in a set
If I have a set of numbers {-1, 2, 3, 4, -8, 2, 0, 44}
and I make the statement that there are: 2 negative numbers, 5 positive numbers, and one signless number,
Is there a mathematical concept used to name this count?
Thanks
-
In the special case that your numbers are the eigenvalues of a matrix describing a quadratic form, you can say that the form has signature $(5, 1, 2)$. But in general I don't think so. – Qiaochu Yuan Apr 26 '11 at 12:56
What Qiaochu called the "signature" would also be termed the inertia in some references. – J. M. Apr 26 '11 at 13:01
## 2 Answers
Cardinality is the term you are looking for, e.g.:
Cardinality of the subset of positive numbers for your set is 5,
Cardinality of the subset of negative numbers for your set is 2,
Cardinality of your set is 8,
Cardinality of the subset of numbers equal to 0 in your set is 1.
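In code, these counts are just the sizes of the corresponding sub-collections; a small Python sketch with the numbers from the question:

```python
numbers = [-1, 2, 3, 4, -8, 2, 0, 44]   # the repeated 2 means this is really a list (multiset), not a set

negatives = [x for x in numbers if x < 0]
positives = [x for x in numbers if x > 0]
zeros     = [x for x in numbers if x == 0]

print(len(negatives), len(positives), len(zeros))   # 2 5 1
print(len(numbers))                                 # 8
```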
-
I have no idea about the answer to your question, but I would like to make a general observation which is too long for a comment.
Mathematics thrives on brevity, accurate description and minimal sets of definitions. Its beauty is in the fact that with so little we can create so much, and prove further and further.
One can define pretty much anything, the question is why would you define something like that?
In most cases I have seen (and seeing now as I take my first dip into original research) definitions arise naturally from the work being done, or from some problem being dealt with.
For example, we see that if a function between two groups preserves the group's multiplication then it can be useful for us. We name it "homomorphism", further along the way we may see that it was more useful than we'd expect.
I can say that in set theory many times you find yourself baffled at notions of large cardinals, trying to understand why and how anyone thought about them. These notions all came to be when someone was trying to prove something and needed "these sorts of properties" - so he gave them a name. Sometimes you figure out that what you define is somewhat comparable to the things that were known before, and your definition represents a stronger, weaker or equivalent object.
In a way you found way to describe some idea, known or unknown. From time to time you actually come up with something new. These new ideas are hardly ever "out of the blue", as I said; they tend to be natural from some research.
Now you might find some idea useful when you solve a question, or two, or fifteen. However you need to consider what sort of problem this notion that you have defined helps you with, it might either help you define a problem or it could be a stepping stone in the solution. If it is indeed helpful, then it is useful.
There can of course be a need for shorter notation for something that you use a lot, but giving everything a short notation raises two problems:
The first is that not everyone will be familiar with your notations, and you would have to introduce and explain them often enough at the beginning - and so unless this notation is extremely useful it is likely that it will not catch on.
The second problem is that if you add more and more notations at a certain point it will overtake your work. It would be extremely hard to follow the work that you did, which is not a good thing usually.
To conclude, my point is that definitions and notations usually arise naturally when facing a problem we want to try to solve and/or define better; we may think that some mathematical object will help us in the solution and once this object becomes useful it tends to be named.
One last thing is that it is not customary to name things after yourself.
-
http://en.wikipedia.org/wiki/Operator_(mathematics)
# Operator (mathematics)
In basic mathematics, an operator is a symbol or function representing a mathematical operation.
In terms of vector spaces, an operator is a mapping from one vector space or module to another. Operators are of critical importance to both linear algebra and functional analysis, and they find application in many other fields of pure and applied mathematics. For example, in classical mechanics, the derivative is used ubiquitously, and in quantum mechanics, observables are represented by linear operators. Important properties that various operators may exhibit include linearity, continuity, and boundedness.
## Definitions
Let U, V be two vector spaces. Any mapping from U to V is called an operator. Let V be a vector space over the field K. We can define the structure of a vector space on the set of all operators from U to V:
$(A + B)\mathbf{x} := A\mathbf{x} + B\mathbf{x},$
$(\alpha A)\mathbf{x} := \alpha A \mathbf{x}$
for all A, B: U → V, for all x in U and for all α in K.
Additionally, operators from any vector space to itself form a unital associative algebra:
$(AB)\mathbf{x} := A(B\mathbf{x})$
with the identity mapping (usually denoted E, I or id) being the unit.
### Bounded operators and operator norm
Main articles: bounded operator, operator norm, and Banach algebra
Let U and V be two vector spaces over the same ordered field (for example, $\mathbf{R}$), and they are equipped with norms. Then a linear operator from U to V is called bounded if there exists C > 0 such that
$||A\mathbf{x}||_V \leq C||\mathbf{x}||_U$
for all x in U.
Bounded operators form a vector space. On this vector space we can introduce a norm that is compatible with the norms of U and V:
$||A|| = \inf\{C \ge 0 : ||A\mathbf{x}||_V \leq C||\mathbf{x}||_U \text{ for all } \mathbf{x} \in U\}$.
In case of operators from U to itself it can be shown that
$||AB|| \leq ||A||\cdot||B||$.
Any unital normed algebra with this property is called a Banach algebra. It is possible to generalize spectral theory to such algebras. C*-algebras, which are Banach algebras with some additional structure, play an important role in quantum mechanics.
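For operators between finite-dimensional spaces with the Euclidean norms, the operator norm is the largest singular value, and the submultiplicative inequality above is easy to check numerically. A small sketch in Python (the random matrices are arbitrary stand-ins for operators):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# With Euclidean norms, the operator norm of a matrix is its largest singular
# value, available as the matrix 2-norm.
norm = lambda M: np.linalg.norm(M, 2)

print(norm(A @ B) <= norm(A) * norm(B) + 1e-12)  # True: ||AB|| <= ||A|| ||B||
print(norm(A @ B), norm(A) * norm(B))
```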
## Special cases
### Functionals
Main article: Functional (mathematics)
A functional is an operator that maps a vector space to its underlying field. Important applications of functionals are the theories of generalized functions and calculus of variations. Both are of great importance to theoretical physics.
### Linear operators
Main article: Linear operator
The most common operators encountered are linear operators. Let U and V be vector spaces over a field K. Operator A: U → V is called linear if
$A(\alpha \mathbf{x} + \beta \mathbf{y}) = \alpha A \mathbf{x} + \beta A \mathbf{y}$
for all x, y in U and for all α, β in K.
The importance of linear operators is partially because they are morphisms between vector spaces.
In the finite-dimensional case linear operators can be represented by matrices in the following way. Let $K$ be a field, and $U$ and $V$ be finite-dimensional vector spaces over $K$. Let us select a basis $\mathbf{u}_1, \ldots, \mathbf{u}_n$ in $U$ and $\mathbf{v}_1, \ldots, \mathbf{v}_m$ in $V$. Then let $\mathbf{x} = x^i \mathbf{u}_i$ be an arbitrary vector in $U$ (assuming Einstein convention), and $A: U \to V$ be a linear operator. Then
$A\mathbf{x} = x^i A\mathbf{u}_i = x^i (A\mathbf{u}_i)^j \mathbf{v}_j$.
Then $a_i^j := (A\mathbf{u}_i)^j \in K$ is the matrix of the operator $A$ in fixed bases. $a_i^j$ does not depend on the choice of $x$, and $A\mathbf{x} = \mathbf{y}$ iff $a_i^j x^i = y^j$. Thus in fixed bases n-by-m matrices are in bijective correspondence to linear operators from $U$ to $V$.
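A concrete instance of this correspondence, as a sketch: take differentiation on polynomials of degree at most two, with the monomial basis $1, x, x^2$ in both source and target (the column-versus-row convention below is a choice and differs superficially from the index notation above):

```python
import numpy as np

# d/dx on polynomials of degree <= 2, basis (1, x, x^2) in source and target.
# Column j holds the coordinates of the image of the j-th basis vector.
D = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0],
              [0.0, 0.0, 0.0]])

# p(x) = 3 + 5x + 7x^2  ->  p'(x) = 5 + 14x
p = np.array([3.0, 5.0, 7.0])
print(D @ p)  # [ 5. 14.  0.]
```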
The important concepts directly related to operators between finite-dimensional vector spaces are the ones of rank, determinant, inverse operator, and eigenspace.
Linear operators also play a great role in the infinite-dimensional case. The concepts of rank and determinant cannot be extended to infinite-dimensional matrices. This is why very different techniques are employed when studying linear operators (and operators in general) in the infinite-dimensional case. The study of linear operators in the infinite-dimensional case is known as functional analysis (so called because various classes of functions form interesting examples of infinite-dimensional vector spaces).
The space of sequences of real numbers, or more generally sequences of vectors in any vector space, is itself an infinite-dimensional vector space. The most important cases are sequences of real or complex numbers, and these spaces, together with linear subspaces, are known as sequence spaces. Operators on these spaces are known as sequence transformations.
Bounded linear operators over Banach space form a Banach algebra in respect to the standard operator norm. The theory of Banach algebras develops a very general concept of spectra that elegantly generalizes the theory of eigenspaces.
## Examples
### Geometry
Main articles: general linear group and isometry
In geometry, additional structures on vector spaces are sometimes studied. Operators that map such vector spaces to themselves bijectively are very useful in these studies; they naturally form groups under composition.
For example, bijective operators preserving the structure of a vector space are precisely the invertible linear operators. They form the general linear group under composition. They do not form a vector space under the addition of operators, e.g. both id and -id are invertible (bijective), but their sum, 0, is not.
Operators preserving the Euclidean metric on such a space form the isometry group, and those that fix the origin form a subgroup known as the orthogonal group. Operators in the orthogonal group that also preserve the orientation of vector tuples form the special orthogonal group, or the group of rotations.
### Probability theory
Main article: Probability theory
Operators are also involved in probability theory, such as expectation, variance, covariance, factorials, etc.
### Calculus
Main articles: differential operator and integral operator
From the point of view of functional analysis, calculus is the study of two linear operators: the differential operator $\frac{\mathrm{d}}{\mathrm{d}t}$, and the indefinite integral operator $\int_0^t$.
#### Fourier series and Fourier transform
Main articles: Fourier series and Fourier transform
The Fourier transform is useful in applied mathematics, particularly physics and signal processing. It is another integral operator; it is useful mainly because it converts a function on one (temporal) domain to a function on another (frequency) domain, in a way effectively invertible. Nothing significant is lost, because there is an inverse transform operator. In the simple case of periodic functions, this result is based on the theorem that any continuous periodic function can be represented as the sum of a series of sine waves and cosine waves:
$f(t) = {a_0 \over 2} + \sum_{n=1}^{\infty}{ a_n \cos ( \omega n t ) + b_n \sin ( \omega n t ) }$
Coefficients (a0, a1, b1, a2, b2, ...) are in fact an element of an infinite-dimensional vector space ℓ2, and thus Fourier series is a linear operator.
When dealing with a general function R → C, the transform takes on an integral form:
$f(t) = {1 \over \sqrt{2 \pi}} \int_{- \infty}^{+ \infty}{g( \omega )e^{ i \omega t } \,d\omega }.$
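A discrete analogue of this invertibility can be seen with the fast Fourier transform; a minimal numpy sketch (the random signal stands in for a sampled function):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(256)

X = np.fft.fft(x)          # forward transform (discrete analogue)
x_back = np.fft.ifft(X)    # inverse transform

print(np.allclose(x, x_back))  # True: nothing significant is lost, the operator is invertible
```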
#### Laplace transform
Main article: Laplace transform
The Laplace transform is another integral operator and is involved in simplifying the process of solving differential equations.
Given f = f(s), it is defined by:
$F(s) = (\mathcal{L}f)(s) =\int_0^\infty e^{-st} f(t)\,dt.$
### Fundamental operators on scalar and vector fields
Main articles: vector calculus, vector field, scalar field, gradient, divergence, and curl
Three operators are key to vector calculus:
• Grad (gradient), (with operator symbol ∇) assigns a vector at every point in a scalar field that points in the direction of greatest rate of change of that field and whose norm measures the absolute value of that greatest rate of change.
• Div (divergence) is a vector operator that measures a vector field's divergence from or convergence towards a given point.
• Curl is a vector operator that measures a vector field's curling (winding around, rotating around) trend about a given point.
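The three operators above can be evaluated symbolically; a short sketch using sympy's vector module (the particular fields are arbitrary examples, and the printed forms are approximate):

```python
from sympy.vector import CoordSys3D, gradient, divergence, curl

R = CoordSys3D('R')
f = R.x**2 * R.y + R.z            # a scalar field
F = R.x*R.i + R.y*R.j + R.z*R.k   # a vector field

print(gradient(f))        # 2*R.x*R.y*R.i + R.x**2*R.j + R.k
print(divergence(F))      # 3
print(curl(F))            # 0: a purely radial field does not curl
print(curl(gradient(f)))  # 0: gradient fields are irrotational
```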
http://en.wikipedia.org/wiki/Instrumental_variable
# Instrumental variable
In statistics, econometrics, epidemiology and related disciplines, the method of instrumental variables (IV) is used to estimate causal relationships[citation needed] when controlled experiments are not feasible.
Instrumental variable methods allow consistent estimation when the explanatory variables (covariates) are correlated with the error terms of a regression relationship. Such correlation may occur when the dependent variable causes at least one of the covariates ("reverse" causation), when there are relevant explanatory variables which are omitted from the model, or when the covariates are subject to measurement error. In this situation, ordinary linear regression generally produces biased and inconsistent estimates.[1] However, if an instrument is available, consistent estimates may still be obtained. An instrument is a variable that does not itself belong in the explanatory equation and is correlated with the endogenous explanatory variables, conditional on the other covariates. In linear models, there are two main requirements for using an IV:
• The instrument must be correlated with the endogenous explanatory variables, conditional on the other covariates.
• The instrument cannot be correlated with the error term in the explanatory equation, that is, the instrument cannot suffer from the same problem as the original predicting variable.
## Definitions
Formal definitions of instrumental variables, using counterfactuals and graphical criteria, are given by Pearl (2000).[2] Notions of causality in econometrics, and their relationship with instrumental variables and other methods, are discussed by Heckman (2008).[3]
The theory of instrumental variables was first derived by Philip G. Wright in his 1928 book The Tariff on Animal and Vegetable Oils[4].
## Example
Informally, in attempting to estimate the causal effect of some variable x on another y, an instrument is a third variable z which affects y only through its effect on x. For example, suppose a researcher wishes to estimate the causal effect of smoking on general health (as in Leigh and Schembri 2004[5]). Correlation between health and smoking does not imply that smoking causes poor health because other variables may affect both health and smoking, or because health may affect smoking in addition to smoking causing health problems. It is at best difficult and expensive to conduct controlled experiments on smoking status in the general population. The researcher may proceed to attempt to estimate the causal effect of smoking on health from observational data by using the tax rate on tobacco products as an instrument for smoking in a causal analysis. If tobacco taxes affect health only because they affect smoking (holding other variables in the model fixed), correlation between tobacco taxes and health is evidence that smoking causes changes in health. An estimate of the effect of smoking on health can be made by also making use of the correlation between taxes and smoking patterns.
## Applications
IV methods are commonly used to estimate causal effects in contexts in which controlled experiments are not available. Credibility of the estimates hinges on the selection of suitable instruments. Good instruments are often created by policy changes. For example, the cancellation of a federal student-aid scholarship program may reveal the effects of aid on some students' outcomes. Other natural and quasi-natural experiments of various types are commonly exploited, for example, Miguel, Satyanath, and Sergenti (2004)[6] use weather shocks to identify the effect of changes in economic growth (i.e., declines) on civil conflict. Angrist and Krueger (2001)[7] presents a survey of the history and uses of instrumental variable techniques.
## Estimation
Suppose the data are generated by a process of the form
$y_i = \beta x_i + \varepsilon_i,$
where
• i indexes observations,
• $y_i$ is the dependent variable,
• $x_i$ is an independent variable,
• $\varepsilon_i$ is an unobserved error term representing all causes of $y_i$ other than $x_i$, and
• $\beta$ is an unobserved scalar parameter.
The parameter $\beta$ is the causal effect on $y_i$ of a one unit change in $x_i$, holding all other causes of $y_i$ constant. The econometric goal is to estimate $\beta$. For simplicity's sake assume the draws of $\varepsilon$ are uncorrelated and that they are drawn from distributions with the same variance, that is, that the errors are serially uncorrelated and homoskedastic.
Suppose also that a regression model of nominally the same form is proposed. Given a random sample of T observations from this process, the ordinary least squares estimator is
$\widehat{\beta}_\mathrm{OLS} = \frac{ x^\mathrm{T} y }{ x^\mathrm{T}x} = \frac{ x^\mathrm{T}(x\beta + \varepsilon )}{ x^\mathrm{T}x} = \beta + \frac{x^\mathrm{T} \varepsilon}{ x^\mathrm{T}x}.$
where x, y and $\varepsilon$ denote column vectors of length T. When x and $\varepsilon$ are uncorrelated, under certain regularity conditions the second term has an expected value conditional on x of zero and converges to zero in the limit, so the estimator is unbiased and consistent. When x and the other unmeasured, causal variables collapsed into the $\varepsilon$ term are correlated, however, the OLS estimator is generally biased and inconsistent for β. In this case, it is valid to use the estimates to predict values of y given values of x, but the estimate does not recover the causal effect of x on y.
An instrumental variable z is one that is correlated with the independent variable but not with the error term. Using the method of moments, take expectations conditional on z to find
$E [ y | z ] = \beta E [ x | z ] + E [ \varepsilon | z ]. \,$
The second term on the right-hand side is zero by assumption. Solve for $\beta$ and write the resulting expression in terms of sample moments,
$\widehat{\beta}_\mathrm{IV} = \frac{z^\mathrm{T} y}{ z^\mathrm{T} x } = \beta + \frac{z^\mathrm{T} \varepsilon}{z^\mathrm{T} x}. \,$
When z and $\varepsilon$ are uncorrelated, the final term, under certain regularity conditions, approaches zero in the limit, providing a consistent estimator. Put another way, the causal effect of x on y can be consistently estimated from these data even though x is not randomly assigned through experimental methods.
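To see the contrast concretely, here is a minimal NumPy sketch (not from the article; all parameter values are made up) that simulates the single-regressor model above with an endogenous x and a valid instrument z, and compares $\widehat{\beta}_\mathrm{OLS}$ with $\widehat{\beta}_\mathrm{IV}$:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100_000
beta = 2.0                                  # true causal effect (made-up value)

z = rng.normal(size=T)                      # instrument
u = rng.normal(size=T)                      # unobserved confounder
x = 0.8 * z + u + rng.normal(size=T)        # x is correlated with the error below
eps = 1.5 * u + rng.normal(size=T)          # error term: correlated with x, not with z
y = beta * x + eps

beta_ols = (x @ y) / (x @ x)                # biased upward here, since E[x'eps] > 0
beta_iv = (z @ y) / (z @ x)                 # consistent, since E[z'eps] = 0
print(beta_ols, beta_iv)                    # noticeably above 2 vs. close to 2
```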
The approach generalizes to a model with multiple explanatory variables. Suppose X is the T × K matrix of explanatory variables resulting from T observations on K variables. Let Z be a T × K matrix of instruments. Then it can be shown that the estimator
$\widehat{\beta}_\mathrm{IV} = (Z^\mathrm{T} X)^{-1}Z^\mathrm{T} Y \,$
is consistent under a multivariate generalization of the conditions discussed above. If there are more instruments than there are covariates in the equation of interest so that Z is a T × M matrix with M > K, the generalized method of moments can be used and the resulting IV estimator is
$\widehat{\beta}_\mathrm{IV} = (X^\mathrm{T} P_Z X)^{-1}X^\mathrm{T} P_Z y,$
where $P_Z=Z(Z^\mathrm{T} Z)^{-1}Z^\mathrm{T}$. The second expression collapses to the first when the number of instruments is equal to the number of covariates in the equation of interest.
## Interpretation as two-stage least squares
One computational method which can be used to calculate IV estimates is two-stage least-squares (2SLS or TSLS). In the first stage, each explanatory variable that is an endogenous covariate in the equation of interest is regressed on all of the exogenous variables in the model, including both exogenous covariates in the equation of interest and the excluded instruments. The predicted values from these regressions are obtained.
Stage 1: Regress each column of X on Z, ($X = Z \delta + \text{errors}$)
$\widehat{\delta}=(Z^\mathrm{T} Z)^{-1}Z^\mathrm{T}X, \,$
and save the predicted values:
$\widehat{X}= Z\widehat{\delta} = Z(Z^\mathrm{T} Z)^{-1}Z^\mathrm{T}X = P_Z X.\,$
In the second stage, the regression of interest is estimated as usual, except that in this stage each endogenous covariate is replaced with the predicted values from the first stage.
Stage 2: Regress Y on the predicted values from the first stage:
$Y = \widehat X \beta + \mathrm{noise}.\,$
The resulting estimator of $\beta$ is numerically identical to the expression displayed above. A small correction must be made to the sum-of-squared residuals in the second-stage fitted model in order that the covariance matrix of $\beta$ is calculated correctly.
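As a sanity check on that identity, the following sketch (simulated data, not from the article) builds $\widehat{X} = P_Z X$ explicitly and confirms that the two-stage recipe reproduces the GMM-form estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
T, K, M = 5_000, 2, 3                        # observations, endogenous regressors, instruments

Z = rng.normal(size=(T, M))
u = rng.normal(size=T)
X = Z[:, :K] + u[:, None] + rng.normal(size=(T, K))   # endogenous covariates
beta = np.array([1.0, -0.5])
y = X @ beta + 2.0 * u + rng.normal(size=T)

# Stage 1: delta_hat = (Z'Z)^{-1} Z'X, and X_hat = Z delta_hat = P_Z X
delta_hat = np.linalg.solve(Z.T @ Z, Z.T @ X)
X_hat = Z @ delta_hat

# Stage 2 versus the (X' P_Z X)^{-1} X' P_Z y formula; P_Z is symmetric and idempotent,
# so X_hat'X = X_hat'X_hat = X'P_Z X and X_hat'y = X'P_Z y.
beta_2sls = np.linalg.solve(X_hat.T @ X_hat, X_hat.T @ y)
beta_gmm = np.linalg.solve(X_hat.T @ X, X_hat.T @ y)
print(beta_2sls, beta_gmm)                   # numerically identical, both near (1.0, -0.5)
```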
## Identification
In instrumental variable regression, if we have multiple endogenous regressors $x_1 \dots x_k$ and multiple instruments $z_1 \dots z_m$, the coefficients on the endogenous regressors $\beta_1 \dots \beta_k$ are said to be:
Exactly identified if m = k.
Overidentified if m > k.
Underidentified if m < k.
The parameters are underidentified (equivalently, not identified) if there are fewer instruments than there are covariates or, equivalently, if there are fewer excluded instruments than there are endogenous covariates in the equation of interest.
## Non-parametric analysis
When the form of the structural equations is unknown, an instrumental variable $Z$ can still be defined through the equations:
$x = g(z,u)$
$y = f(x,u)$
where $f$ and $g$ are two arbitrary functions and $Z$ is independent of $U$. Unlike linear models, however, measurements of $Z, X$ and $Y$ do not allow for the identification of the average causal effect of $X$ on $Y$, denoted ACE
$\mbox{ACE} = \mbox{Pr}(y|\mbox{do}(x)) = \mbox{E}_u[f(x,u)].$
Balke and Pearl [1997][8] derived tight bounds on ACE and showed that these can provide valuable information on the sign and size of ACE.
In linear analysis, there is no test to falsify the assumption that $Z$ is instrumental relative to the pair $(X,Y)$. This is not the case when $X$ is discrete. Pearl (2000)[2] has shown that, for all $f$ and $g$, the following constraint, called the "Instrumental Inequality", must hold whenever $Z$ satisfies the two equations above:
$\max_x \sum_y [\max_z \Pr(y,x|z)]\leq 1.$
## On the interpretation of IV estimates
The exposition above assumes that the causal effect of interest does not vary across observations, that is, that $\beta$ is a constant. Generally, different subjects will respond differently to changes in the "treatment" x. When this possibility is recognized, the average effect in the population of a change in x on y may differ from the effect in a given subpopulation. For example, the average effect of a job training program may substantially differ across the group of people who actually receive the training and the group which chooses not to receive training. For these reasons, IV methods invoke implicit assumptions on behavioral response, or more generally assumptions over the correlation between the response to treatment and propensity to receive treatment.[9]
The standard IV estimator can recover local average treatment effects (LATE) rather than average treatment effects (ATE).[10] Imbens and Angrist (1994) demonstrate that the linear IV estimate can be interpreted under weak conditions as a weighted average of local average treatment effects, where the weights depend on the elasticity of the endogenous regressor to changes in the instrumental variables. Roughly, that means that the effect of a variable is only revealed for the subpopulations affected by the observed changes in the instruments, and that subpopulations which respond most to changes in the instruments will have the largest effects on the magnitude of the IV estimate.
For example, if a researcher uses presence of a land-grant college as an instrument for college education in an earnings regression, she identifies the effect of college on earnings in the subpopulation which would obtain a college degree if a college is present but which would not obtain a degree if a college is not present. This empirical approach does not, without further assumptions, tell the researcher anything about the effect of college among people who would either always or never get a college degree regardless of whether a local college exists.
## Potential problems
Instrumental variables estimates are generally inconsistent if the instruments are correlated with the error term in the equation of interest. Another problem is caused by the selection of "weak" instruments, instruments that are poor predictors of the endogenous question predictor in the first-stage equation. In this case, the prediction of the question predictor by the instrument will be poor and the predicted values will have very little variation. Consequently, they are unlikely to have much success in predicting the ultimate outcome when they are used to replace the question predictor in the second-stage equation.
In the context of the smoking and health example discussed above, tobacco taxes are weak instruments for smoking if smoking status is largely unresponsive to changes in taxes. If higher taxes do not induce people to quit smoking (or not start smoking), then variation in tax rates tells us nothing about the effect of smoking on health. If taxes affect health through channels other than through their effect on smoking, then the instruments are invalid and the instrumental variables approach may yield misleading results. For example, places and times with relatively health-conscious populations may both implement high tobacco taxes and exhibit better health even holding smoking rates constant, so we would observe a correlation between health and tobacco taxes even if it were the case that smoking has no effect on health. In this case, we would be mistaken to infer a causal effect of smoking on health from the observed correlation between tobacco taxes and health.
## Sampling properties and hypothesis testing
When the covariates are exogenous, the small-sample properties of the OLS estimator can be derived in a straightforward manner by calculating moments of the estimator conditional on X. When some of the covariates are endogenous so that instrumental variables estimation is implemented, simple expressions for the moments of the estimator cannot be so obtained. Generally, instrumental variables estimators only have desirable asymptotic, not finite sample, properties, and inference is based on asymptotic approximations to the sampling distribution of the estimator. Even when the instruments are uncorrelated with the error in the equation of interest and when the instruments are not weak, the finite sample properties of the instrumental variables estimator may be poor. For example, exactly identified models produce finite sample estimators with no moments, so the estimator can be said to be neither biased nor unbiased, the nominal size of test statistics may be substantially distorted, and the estimates may commonly be far away from the true value of the parameter (Nelson and Startz 1990).[11]
## Testing instrument strength and overidentifying restrictions
The strength of the instruments can be directly assessed because both the endogenous covariates and the instruments are observable (Stock, Wright, and Yogo 2002).[12] A common rule of thumb for models with one endogenous regressor is: the F-statistic against the null that the excluded instruments are irrelevant in the first-stage regression should be larger than 10.
The assumption that the instruments are not correlated with the error term in the equation of interest is not testable in exactly identified models. If the model is overidentified, there is information available which may be used to test this assumption. The most common test of these overidentifying restrictions, called the Sargan test, is based on the observation that the residuals should be uncorrelated with the set of exogenous variables if the instruments are truly exogenous. The Sargan test statistic can be calculated as $TR^2$ (the number of observations multiplied by the coefficient of determination) from the OLS regression of the residuals onto the set of exogenous variables. This statistic will be asymptotically chi-squared with m − k degrees of freedom under the null that the error term is uncorrelated with the instruments.
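A minimal sketch of the Sargan statistic on simulated data (my own illustration; the instruments are valid by construction, and the uncentered $R^2$ is used because the toy model has no constant):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
T, K, M = 5_000, 1, 3                         # one endogenous regressor, three instruments

Z = rng.normal(size=(T, M))
u = rng.normal(size=T)
x = Z @ np.array([1.0, 0.5, 0.2]) + u + rng.normal(size=T)
y = 2.0 * x + 1.5 * u + rng.normal(size=T)    # instruments are valid in this simulation

X = x[:, None]
X_hat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)          # first-stage fitted values
beta_iv = np.linalg.solve(X_hat.T @ X, X_hat.T @ y)
resid = y - X @ beta_iv

fitted = Z @ np.linalg.solve(Z.T @ Z, Z.T @ resid)     # regress IV residuals on instruments
sargan = T * (resid @ fitted) / (resid @ resid)        # T times the (uncentered) R^2
p_value = stats.chi2.sf(sargan, df=M - K)              # should not be small for valid instruments
print(sargan, p_value)
```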
## References
1. Bullock, J. G., Green, D. P., and Ha, S. E. (2010). Yes, But What’s the Mechanism? (Don’t Expect an Easy Answer). , 98, 550-58.
2. ^ a b Pearl, J. Causality: Models, Reasoning, and Inference, Cambridge University Press, 2000.
3. Heckman, J. (2008) Econometric causality. National Bureau of Economic Research working paper #13934.
4. Stock, James H.; Trebbi, Francesco (2003). "Retrospectives: Who Invented Instrumental Variable Regression?". Journal of Economic Perspectives (AEA) 17 (3): 177–194. doi:10.1257/089533003769204416. Retrieved 2011-07-07.
5. Leigh, J.P. and M. Schembri (2004) Instrumental variables technique: cigarette price provided better estimate of effects of smoking on SF-12, Journal of Clinical Epidemiology 57(3), 284–293.
6. Miguel, E., Satyanath, S. and Sergenti, E. (2004) Economic shocks and civil conflict: An instrumental variable approach. , 725–753.
7. Angrist, J. and A. Krueger (2001) Instrumental variables and the search for identification: From supply and demand to natural experiments, (4), 69–85.
8. Balke, A. and Pearl, J. "Bounds on treatment effects from studies with imperfect compliance," , 92(439):1172–1176, 1997.
9. Heckman, J. (1997) Instrumental variables: A study of implicit behavioral assumptions used in making program evaluations, 32(3), 441–462.
10. Imbens, G. and J. Angrist (1994) Identification and estimation of local average treatment effects, , 467–476.
11. Nelson, C.R., and R. Startz (1990) Some further results on the small sample properties of the instrumental variable estimator. (4), 967–976.
12. Stock, J., J. Wright, and M. Yogo (2002) A Survey of weak instruments and weak identification in Generalized Method of Moments, (4), 518–29.
http://divisbyzero.com/2011/01/12/beautiful-theorems-about-dynamical-systems-on-the-plane/
# Division by Zero
A blog about math, puzzles, teaching, and academic technology
Posted by: Dave Richeson | January 12, 2011
## Beautiful theorems about dynamical systems on the plane
I was reading through some papers written by my Ph.D. advisor (John Franks) from the early 1990′s and was reminded of a few beautiful results about the dynamics of planar homeomorphisms. So I thought I’d share them here.
For those of you who are not familiar with the terminology, a planar homeomorphism is a bijective function ${f:\mathbb{R}^{2}\rightarrow \mathbb{R}^{2}}$ for which ${f}$ and ${f^{-1}}$ are continuous. A simple example of a planar homeomorphism is a translation, such as ${f(x,y)=(x+1,y)}$.
We will look at these homeomorphisms as discrete dynamical systems. That is, we are interested in orbits of points: ${x,f(x),f(f(x)),\ldots}$ For simplicity we write ${f^{k}(x)}$ for ${f(f(f(\cdots(x))))}$ (${k}$ compositions of ${f}$). Intuitively you can think of the point ${x}$ hopping around the plane as we repeatedly apply the function ${f}$.
A point ${x}$ is a fixed point of ${f}$ if ${x=f(x)=f^{2}(x)=\cdots}$ and ${x}$ is a periodic point if ${x=f^{p}(x)=f^{2p}(x)=\cdots}$
From here onward I will assume (without saying it explicitly) that the homeomorphisms are orientation-preserving. This means that the image of a circle oriented clockwise is a closed curve oriented clockwise. A translation is always orientation preserving, but the reflection ${f(x,y)=(x,-y)}$ is orientation reversing.
To warm up, let’s give a theorem of Brouwer’s.
Theorem [Brouwer]. If ${f}$ has a periodic point, then it has a fixed point.
In fact, this theorem can be strengthened considerably. Roughly speaking, if ${f}$ has just about any type of recurrent behavior, then it must have a fixed point.
Here are two examples:
Theorem [Barge, Franks (1993)]. If there are disjoint arcs (or disjoint disks) ${A_{1},\ldots,A_{n}}$ such that ${f(A_{i})\cap A_{i}=\emptyset}$ for all ${i}$, and some iterate of ${A_{1}}$ intersects ${A_{2}}$, some iterate of ${A_{2}}$ intersects ${A_{3}}$, etc., and some iterate of ${A_{n}}$ intersects ${A_{1}}$, then ${f}$ has a fixed point.
In 2002 Jim Wiseman and I gave a short proof of the following result:
Theorem. If the orbit of every point intersects the unit disk ${x^{2}+y^{2}\le1}$, then there is a fixed point in the disk. (Actually, the hypotheses of this theorem are so strong that it holds when $f$ is not invertible and when the space is ${\mathbb{R}^{n}}$.)
The meta-contrapositive of this collection of theorems is that if there is no fixed point, then there is no recurrent behavior. In fact, as Brouwer discovered, if ${f}$ has no fixed point then ${f}$ behaves like a translation.
An open connected set ${L}$ is a domain of translation for ${f}$ if its boundary is ${B\cup f(B)}$, where ${B}$ is a proper embedding of ${\mathbb{R}}$ that separates ${L}$ and ${f^{-1}(L)}$ (as in the image below).
Theorem [Brouwer's plane translation theorem]. If ${f}$ has no fixed points, then every point is contained in some domain of translation.
See Franks (1992) for a short proof of the theorem. Apparently Brouwer wrote several papers on the plane translation theorem (1909-1919), and since 1920 others have had to go back and clean up the statement and the proof of his theorem. As Brouwer discovered, one has to be very careful with the topology of the plane. For instance, one pathological example that sent Brouwer back to the drawing board was the Lakes of Wada (isn't that a great name?). In 1917 Takeo Wada discovered that it is possible to find three disjoint connected open sets in the plane that all have the same boundary! Here's a picture of three such sets.
Now, consider the iterates of a set ${S\subset \mathbb{R}^{2}}$, ${f(S),f^{2}(S),\ldots}$, and keep track of which sets ${f^{k}(S)}$ are disjoint from ${S}$. Call this collection of integers ${k>0}$, ${E(S)}$; that is,
${E(S)=\{k>0:f^{k}(S)\cap S=\emptyset\}}$.
Theorem [Barge, Franks]. If ${f}$ has no fixed points and ${S}$ is an open or closed connected set, then ${E(S)}$ is closed under addition.
For example, if ${f^{2}(S)\cap S=\emptyset}$ and ${f^{5}(S)\cap S=\emptyset}$, then ${f^{k}(S)\cap S=\emptyset}$ for ${k=2+2=4}$, ${k=2+4=6}$, ${k=2+5=7}$, etc.
An immediate consequence of this theorem is that if ${S}$ and ${f(S)}$ are disjoint (i.e., ${1\in E(S)}$), then so are ${S}$ and ${f^{k}(S)}$ for all ${k>0}$ (i.e., ${E(S)=\mathbb{Z}^{+}}$). In particular it follows that:
Corollary. If ${S}$ and ${f(S)}$ are disjoint, then ${f^{k}(S)}$ and ${f^{j}(S)}$ are disjoint for all ${k\ne j}$.
In particular, if we added ${f^{2}(S), f^{3}(S), f^{4}(S), \ldots}$ to the image below, then they would all be disjoint.
Franks and Barge also prove a converse to this theorem.
Theorem [Barge, Franks]. Suppose ${E}$ is a set of positive integers that is closed under addition. Then there is a translation ${T:\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}}$ and an open topological disk ${D}$ such that ${E(D)=E}$.
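To get a concrete feel for these two statements, here is a small numerical toy (my own illustration, with made-up coordinates): take the translation ${f(x,y)=(x+1,y)}$ and idealize ${S}$ as a few small blobs at chosen heights, with the thin arcs that would join them into a connected set assumed to run at heights where they never meet their own translates. The resulting ${E(S)}$ misses 1 and 3 but is still closed under addition:

```python
# Blobs (disks of radius 0.1) standing in for a connected set S under f(x, y) = (x + 1, y).
blobs = [(0.0, 0.0), (1.0, 0.0),   # same height, 1 apart -> S meets f(S)
         (0.0, 1.0), (3.0, 1.0)]   # same height, 3 apart -> S meets f^3(S)
radius = 0.1

def disjoint_after_shift(k):
    """True if f^k(S) and S share no blob overlap."""
    return all(abs((x1 + k) - x2) > 2 * radius or y1 != y2
               for (x1, y1) in blobs for (x2, y2) in blobs)

E = [k for k in range(1, 30) if disjoint_after_shift(k)]
print(E)                                      # [2, 4, 5, 6, ..., 29]: 1 and 3 are missing
assert all(a + b in E for a in E for b in E if a + b < 30)   # closed under addition
```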
Two-dimensional dynamical systems is a fascinating area of mathematics. These are only a few of the many beautiful theorems.
### Like this:
Posted in Math | Tags: Brouwer, dynamical systems, John Franks, Lakes of Wada, Marcy Barge, plane, surface, topology
## Responses
1. Isn’t it “such that T(D)=E” instead of “such that E(D)=D” in the last theorem.
By: Kévina on January 13, 2011
at 7:49 am
• No, that’s correct. I could have been more precise by writing $E(D,T)$ rather than $E(D)$.
By: Dave Richeson on January 13, 2011
at 9:52 am
2. [...] blogs: MathBlogging.org Dynamical systems on a plane Exotic spheres ESP and statistics Sums of [...]
By: Weekend miscellany — The Endeavour on January 15, 2011
at 9:58 am
3. So, the open disk D in the last theorem should be unbounded in general, something like an infinite centipede, isn’t it?
For closed disks the theorem is failing (at least for translations), but do you know some results about the set E(D) in this case, refining the fact that it is closed under adition? For example, in the case of translation the complement of E(D) should be finite, as D is bounded.
si-top
By: simba-top on March 29, 2011
at 1:11 am
http://www.physicsforums.com/showthread.php?s=7d7a0c55416f784e98393b538c85eef2&p=4203739
Physics Forums
## Induced electric field
Can an electric field be induced at a point near a time-varying uniform magnetic field? "Near" means not in the place where the magnetic field exists, but at a point outside the field's presence.
You can induce electric fields everywhere. Why do you expect that it would not be possible somewhere?
Quote by dev70 Can electric field be induced at a point near a time varying uniform magnetic field? "Near" means not the in the place where magnetic field exist. But at a point outside the field's presence.
You probably meant 'by a magnetic field, but not in the place where the magnetic field exists'.
A time varying magnetic field will have time varying vector potential
$$\frac{\partial{\bf A}}{\partial t}$$ that can exist beyond the field, and induce an E field. This is like the 'Aharonov-Bohm' effect.
Yes. Say, for example, there's a long solenoid with a time-varying current I(t) running through it. The resulting magnetic field is nonzero only inside the solenoid. However, (assuming ∂B/∂t isn't zero) the electric field induced will also be nonzero outside of the solenoid.
Quote by Meir Achuz A time varying magnetic field will have time varying vector potential $$\frac{\partial{\bf A}}{\partial t}$$ that can exist beyond the field, and induce an E field.
Only in areas where there is a changing magnetic field.
Quote by elfmotat However, (assuming ∂B/∂t isn't zero) the electric field induced will also be nonzero outside of the solenoid.
∂B/∂t ≠ 0 implies that there is a magnetic field (apart from some specific points in time maybe).
Take a circular loop beyond the region of changing magnetic field, but such that it encloses the area where the field changes; then $E \cdot 2\pi R = -\pi r^2 \, \partial B/\partial t$, so E is induced in the region beyond where B changes.
Quote by mfb Only in areas where there is a changing magnetic field.
B= curl A. Apply Stokes' theorem for a B field in a solenoid.
This gives an A outside the solenoid, where there is no B.
I don't see how your quote and your post are related. You can get a non-zero A everywhere if you like - even in a perfect vacuum, as you have gauge freedom. But you do not get an electric field without a changing magnetic field or some charge distribution.
Quote by mfb ∂B/∂t ≠ 0 implies that there is a magnetic field (apart from some specific points in time maybe).
Yes, but only inside the solenoid. The electric field it produces also "exists" (is nonzero) outside the solenoid where B=0.
Quote by elfmotat The electric field it produces also "exists" (is nonzero) outside the solenoid where B=0.
Sorry, but what you want just violates the laws of physics.
$$curl(B)=\frac{1}{c}\frac{\partial E}{\partial t} + \frac{4\pi}{c} j$$
You do not want currents and no magnetic field? => electric field is time-invariant. You cannot switch it on or off.
This means that a time-independent charge distribution (which might consist of moving charges) is the only relevant option for a source of an electric field.
Quote by mfb Sorry, but what you want just violates the laws of physics. $$curl(B)=\frac{1}{c}\frac{\partial E}{\partial t} + \frac{4\pi}{c} j$$ You do not want currents and no magnetic field? => electric field is time-invariant. You cannot switch it on or off. This means that a time-independent charge distribution (which might consist of moving charges) is the only relevant option for a source of an electric field.
No, it certainly doesn't. If there's a long solenoid of radius a and turn density n with a current I(t) running through it, it will induce a magnetic field $B(t)=\mu_0 n I(t)$ inside the solenoid. Outside of the solenoid B=0 everywhere.
Evaluating the integral $$\oint \mathbf{E}\cdot d\mathbf{s} = -\frac{\partial}{\partial t}\int \mathbf{B}\cdot d\mathbf{A} \;\Leftrightarrow\; E = -\frac{\mu_0 n a^2 I'(t)}{2r}$$
Even though B=0 outside the solenoid, it still produces a nonzero E outside the solenoid.
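A quick numeric sketch of that formula (my own toy numbers, SI units): the induced field outside is nonzero even though B vanishes there.

```python
import numpy as np

mu0 = 4e-7 * np.pi          # vacuum permeability, T·m/A
n = 1000.0                  # turns per metre (made-up value)
a = 0.02                    # solenoid radius in metres (made-up value)
dI_dt = 50.0                # rate of change of the current, A/s (made-up value)

r = np.array([0.03, 0.05, 0.10])          # field points outside the solenoid (r > a)
E = mu0 * n * a**2 * dI_dt / (2 * r)      # magnitude of the induced azimuthal E field
print(E)                                  # nonzero, falling off as 1/r, even though B = 0 there
```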
Quote by mfb Sorry, but what you want just violates the laws of physics.
Transformers violate laws of physics? You learn something new every day!
Sorry, I shouldn't be mean about it. It is a bit counter-intuitive. But yeah, if you take an infinitely-long solenoid, the magnetic field is ONLY present inside the solenoid. Yet you can wrap another solenoid around it, and induce a current on it by time-varying the current on the inner-solenoid. The B-field outside remains zero, but E-field is non-zero.
This all has to do with curl of the electric field being governed by ∂B/∂t. Outside of the solenoid, both curl and divergence of E is zero, but it doesn't mean that the field itself is zero. Feel free to verify that circular E field with 1/R intensity satisfies conditions of both curl and divergence being zero. (In other words for $E = E_0\frac{\hat{\phi}}{r}$, $\nabla \cdot E = 0$ and $\nabla \times E = 0$ everywhere except r=0.)
I have shown in post no. 6 that even outside a solenoid, if one takes a circular area that encloses the region of changing magnetic field, then an electric field will be induced at far distances also.
Ah ok, you are right. So we need a coil of infinite length, where B(t) changes linearly in time. This gives a constant (in time), circular E(t) and no magnetic field outside.
then..how will a time varying electric field induce magnetic field and where?
http://math.stackexchange.com/questions/63928/a-question-about-elements-of-permutation-groups
# a question about elements of permutation groups
Let $n \in \mathbb{N}$ and let $S_n$ denote the permutation group on $n$ letters. I'm trying to figure out what kind of elements $\sigma, \delta \in S_n$ satisfy the following relations: $\sigma^2=e$, $\delta^4=e$, and $(\sigma\delta)^4=e$, where $e$ is the identity permutation. Can anyone provide a classification statement about elements of this form? Thanks!
-
$\sigma$ is a product of disjoint transpositions. $\delta$ and $\sigma\delta$ are a product of disjoint transpositions and 4-cycles with at least one 4-cycle. – lhf Sep 12 '11 at 16:37
@Jack, thanks, I've deleted that comment and added another. – lhf Sep 12 '11 at 16:38
@lhf: Not necessarily: $\delta^4=e$ means the order of $\delta$ is either $1$, $2$, or $4$, so you could have $\delta$ be a product of transpositions. $\sigma=(1,2)$ and $\delta=(3,4)$ satisfy all three conditions. – Arturo Magidin Sep 12 '11 at 16:41
What is the motivation for this question? – Arturo Magidin Sep 12 '11 at 16:46
@dan, I think looking at the cycle decomposition is the right idea, but I am nervous as to how much this can actually tell you, as there appears to be quite some variety in the group generated by sigma and delta. Do you care about that group structure, or just how the cycles of delta and sigma relate? – Jack Schmidt Sep 12 '11 at 16:46
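Not an answer to the classification question, but a brute-force way to look at small cases; this sketch enumerates all pairs in $S_4$ satisfying the three relations:

```python
from itertools import permutations

n = 4
elems = list(permutations(range(n)))             # S_4 as tuples
e = tuple(range(n))

def compose(p, q):                               # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(n))

def power(p, k):
    r = e
    for _ in range(k):
        r = compose(r, p)
    return r

pairs = [(s, d) for s in elems for d in elems
         if power(s, 2) == e and power(d, 4) == e and power(compose(s, d), 4) == e]
print(len(pairs))                                # number of pairs satisfying all three relations
print(next(p for p in pairs if p[0] != e and p[1] != e))   # one non-trivial example
```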
http://math.stackexchange.com/questions/254491/anti-derivatives-applications-of-the-fundamental-theorem
anti-derivatives & applications of the fundamental theorem
Find the f(c) guaranteed by the Mean Value Theorem for Integration on the function f(x)=ln(x)/x on the interval [1, 100].
-
What have you tried Lias? – Nameless Dec 9 '12 at 11:44
2
I suggest you write out what the Mean Value Theorem says. It will have an integral in it. I suggest you evaluate that integral. I bet if you do all that you will be able to find $f(c)$. – Gerry Myerson Dec 9 '12 at 11:46
i tried the theorem: [f(b)- f(a)]/[b-a] ... that's as far as i understood @Nameless – lias Dec 9 '12 at 12:00
@lias That's the Mean Value Theorem for derivatives not the Mean Value Theorem for Integration. – Nameless Dec 9 '12 at 12:03
ahhh ok, thanku – lias Dec 9 '12 at 12:57
1 Answer
One has $$\int_1^{100}{\log x\over x}\ dx={1\over2}\bigl(\log x\bigr)^2\Biggr|_1^{100}={1\over2}\bigl(\log 100\bigr)^2\ .$$ In order to "find the $f(c)$ whose existence is guaranteed by the mean value theorem" we therefore have to solve the equation $$(100-1) f(c)={1\over2}\bigl(\log 100\bigr)^2$$ for $f(c)$. The result is $$f(c)={1\over198}\bigl(\log 100\bigr)^2\doteq 0.107\ .$$ Actually we have not used the MVT at all. We just have computed the average value of $f$ on the interval $[1,100]$. The essence of the MVT is that MVT guarantees the existence of a $c\in[1,100]$ such that $f(c)$ is equal to this average. As $f(1)=0$ and $f(10)\doteq0.23$, by the intermediate value theorem there has to be such a $c\in[1,10]$, even.
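A quick numerical check of this (a sketch, using SciPy):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

f = lambda x: np.log(x) / x

avg = quad(f, 1, 100)[0] / (100 - 1)          # average value of f on [1, 100]
print(avg)                                    # about 0.107, i.e. (log 100)^2 / 198

c = brentq(lambda x: f(x) - avg, 1, np.e)     # f increases on [1, e], so one root lies there
print(c, f(c))                                # an admissible c (another exists beyond e)
```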
-
http://physics.stackexchange.com/questions/26326/how-dense-are-nebulae
How dense are nebulae?
How functionally dense are nebulae? Are they so sparse they are only visible from an interstellar or intergalactic perspective, or would you be unable to see your hand in one?
Do they vary widely in density, between nebulae or even within a single one?
What would it look like from the inside of one?
-
1 Answer
They are very sparse. Typical densities are in the range of 100 to 10,000 particles per $\textrm{cm}^3$.
This is much more dense than the general interstellar medium (1 particle per $\textrm{cm}^3$), but much, much less dense than anything you are used to - air is around $10^{19}$ particles per $\textrm{cm}^3$. You would very easily see your own hand in a nebula.
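Some back-of-the-envelope arithmetic with those numbers (just a sketch comparing a litre of dense nebula with a litre of air):

```python
n_nebula = 1e4        # particles per cm^3, dense end of the range above
n_air = 1e19          # particles per cm^3, rough order of magnitude for air
litre = 1000.0        # cm^3

print(f"dense nebula: {n_nebula * litre:.0e} particles per litre")
print(f"air:          {n_air * litre:.0e} particles per litre")
print(f"ratio:        {n_air / n_nebula:.0e}")   # air is ~10^15 times denser
```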
Density variations can be quite sharp within the nebula; in star-forming regions, the variations are strong and the density variations appear to be organized like a fractal, produced by turbulence within the cloud.
However, most nebulae are basically the same, and there aren't huge differences between the densities of different starforming regions. Planetary nebulae and supernova remnants, of course, can have very different densities depending on their ages, since they are expanding balls of gas rather than broad molecular clouds loosely bound by gravity.
If you were within a nebula, it is hard to say what it would look like. But nebulae are so large that the optical depth of the cloud would actually probably be quite high, and I would guess that it would look like you were surrounded by glowing green and red gas in the far distance - instead of space looking black and dark, it would be colored all over. But this would only be an effect caused by the fact that you are looking through so much gas - even if your spaceship were a thousand kilometers away, it probably wouldn't look much different if you were inside a nebula versus outside of it.
-
2
– Andrew Jul 4 '11 at 21:25
5800km as the mean free path for what? Depending on whether its for gas particles, photons of a particular wavelength (absorption line?) or visible photons as a whole wound make a pretty big difference to visibility. – Kyle Jan 24 at 16:41
http://en.wikipedia.org/wiki/Talk:Euler's_laws_of_motion
# Talk:Euler's laws of motion
WikiProject Physics / History (Rated Start-class, Mid-importance)
This article is within the scope of WikiProject Physics, a collaborative effort to improve the coverage of Physics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Start This article has been rated as Start-Class on the project's quality scale.
Mid This article has been rated as Mid-importance on the project's importance scale.
## Symbol convention used
An editor just changed the symbol for angular momentum from L to H in the initial statement of Euler's Second Law. That avoids the conflict with the use of L for angular momentum in the statement of the first law and might have been fine, but L is also used throught the rest of the article for angular momentum. I propose that we go with the most reliable source already used in the article and that is available online anyone to check: Dynamics of particles and rigid bodies: a systematic approach by Anil Vithala Rao. He uses G for linear momentum and H for angular momentum. Comments? -AndrewDressel (talk) 17:57, 8 November 2010 (UTC)
Since L for angular & p for linear momentum is pretty much the standard in physics, I don't see why we can't just use these, it would make it easier to follow & would by no means be original research. JIMp talk·cont 09:15, 31 January 2011 (UTC)
First, thanks for the reply. I had forgotten all about this issue. Next, this article was originally written and referenced from the point of view of Engineering Mechanics, not physics. Though both are perfectly valid, as with flavors of English, I believe the original language should be maintained, unless there is a compelling reason to change it. Finally, the physics textbooks I have seen also use P for power. I like the convention used by Ruina and Pratap: L for linear momentum and H for angular momentum. They reserve G for center of mass and also use P for power. -AndrewDressel (talk) 14:55, 31 January 2011 (UTC)
On the other hand, if we look at the categories that the article has been put into, it seems more like a physics article. Sticking to the original flavour is generally best. L for linear momentum would be particularly confusing since that's used for angular momentum in physics. JIMp talk·cont 21:14, 31 January 2011 (UTC)
Hello, there's several equations in this article in which the terms (I,T,S, and other greek symbols) are not defined. i'm a begining student trying to figure out the text. can someone make some revisions please? — Preceding unsigned comment added by Mikewax (talk • contribs) 18:46, 18 March 2012 (UTC)
Will do. Agreed about the symbol convention mentioned above in the article. It would be much simpler to use p = linear momentum, L = angular momentum, F for force, $\tau$ = torque, etc... aka all the standard symbols. Is the funny convention of G, = linear momentum, H = angular momentum etc in the sources? Will check soon... Maschen (talk) 20:50, 20 August 2012 (UTC)
WTF was I thinking??... As indicated by others above G and H are used in the sources... I think its best to use the common notation just stated for common usage with other articles like Euler's equations (rigid body dynamics), and include a couple of notes saying "G is also used for linear momentum" and "H is also used for angular momentum" as and when etc... Maschen (talk) 21:38, 20 August 2012 (UTC)
Ok - done, except I kept M for torque/moment (synonyms), easier to use instead of $\boldsymbol{\tau}, \boldsymbol{\Gamma}$ etc. If anyone notices any more unexplained notation just say so. Maschen (talk) 21:59, 20 August 2012 (UTC)
## Incorrect formula for "force density"
There is the incorrect claim that this:
$\mathbf F_B=\int_V\mathbf b\,dm=\int_V \rho\mathbf b\,dV$
is the force density acting on a rigid body. This has the dimensions of [force]·[volume]⁻¹·[mass] = [mass]·[volume]⁻¹·[force]·[volume]⁻¹·[volume] = [mass]·[force]·[volume]⁻¹ ≠ [force]·[volume]⁻¹. If b is the force density, then this is integrated over the volume of the body:
$\mathbf F_B=\int_V\mathbf b\,dV = \int_V\mathbf b\,\frac{dm}{\rho}$
It's just simple dimensional analysis. All the factors of ρ for mass density are wrong. Will fix... Maschen (talk) 21:26, 20 August 2012 (UTC)
You may need to read the referenced book "Plasticity Theory", pg 27-28, by Jacob Lubliner. The content in those pages defines the variables used in this equation:
The total force $\mathbf{F}$ on a body B is thus the vector sum of all the forces exerted on it by all the other bodies in the universe. In reality these forces are of two kinds: long-range and short-range. If B is modeled as a continuum occupying a region R, then the effect of the long-range forces is felt throughout R, while the short-range forces act as contact forces on the boundary surface $\partial$R. Any volume element dV experiences a long-range force $\rho\mathbf{b}\,dV$, where $\rho$ is the density (mass per unit volume) and $\mathbf{b}$ is a vector field (with dimensions of force per unit mass) called the body force. Any oriented surface element $d\mathbf{S}$ = $\mathbf{n}dS$ experiences a contact force $\mathbf{t}(\mathbf{n})dS$, where $\mathbf{t}(\mathbf{n})$ is called the surface traction; it is not a vector field because it depends not only on position but also on the local orientation of the surface element as defined by the local value (direction) of $\mathbf{n}$.
--LaoChen (talk) 01:09, 21 August 2012 (UTC)
So you’re saying b is force per unit mass, which is acceleration? It was not clear from the initial article. It seems correct that your t(n) is a pseudovector field if it has the form t × n (which takes into account the orientation as you said), but this was not in the article: in the force expression T·dS usually T refers to the stress-energy tensor...
I'm not sure of the relevance of "long-range forces", aren’t they negligible (force by any distant Galaxy on any Earth bound object negligible in comparison to the weight acting on that object due to the Earth)? I thought the forces acting on the rigid body were taken in this article to be throughout the volume V (volume integral) and on the surface S (surface integral). Nothing to do with "long/short range forces"... Maschen (talk) 06:52, 21 August 2012 (UTC)
I fixed it up according to the source.Maschen (talk) 07:13, 21 August 2012 (UTC)
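For what it's worth, a tiny dimension-bookkeeping sketch (my own check, with dimensions written as (mass, length, time) exponents) of the three readings that came up above:

```python
def mul(*dims):
    """Multiply quantities by adding their (mass, length, time) exponents."""
    return tuple(sum(d[i] for d in dims) for i in range(3))

force        = (1, 1, -2)
density      = (1, -3, 0)        # rho: mass per unit volume
volume       = (0, 3, 0)         # dV
b_per_mass   = (0, 1, -2)        # body force per unit mass (an acceleration), as in Lubliner
b_per_volume = (1, -2, -2)       # body force per unit volume (the earlier misreading)

print(mul(density, b_per_mass, volume) == force)    # True:  rho * b * dV is a force
print(mul(b_per_volume, volume) == force)           # True:  b * dV is a force if b is force/volume
print(mul(density, b_per_volume, volume) == force)  # False: the combination questioned above
```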
These more advanced equations are used in the context of plasticity theory, which may be a bit challenging for the high school students trying to learn the laws of classical mechanics.--LaoChen (talk)23:19, 21 August 2012 (UTC)
...which the article does already because the equations are stated upfront, with the plasticity equations after Euler's laws for sake of application. What's your point? Maschen (talk) 18:34, 17 September 2012 (UTC)
http://mathhelpforum.com/advanced-algebra/136910-rank-matrix-print.html
# Rank of a matrix
Printable View
• April 1st 2010, 05:17 PM
mybrohshi5
Rank of a matrix
Find the rank of the matrix
A = $\begin{bmatrix}0&6\\0&-2\\0&5\end{bmatrix}$
I know that rank is just the number of pivots a matrix has when in reduced row echelon form, but this one is confusing to me.
RREF A = $\begin{bmatrix}0&1\\0&0\\0&0\end{bmatrix}$
I thought this would have a rank of 0 because there are no pivots but i was wrong and it has a rank of 1.
Why is this?
Thanks for any help :)
• April 1st 2010, 05:32 PM
harish21
Quote:
Originally Posted by mybrohshi5
Find the rank of the matrix
A = $\begin{bmatrix}0&6\\0&-2\\0&5\end{bmatrix}$
I know that rank is just the number of pivots a matrix has when in reduced row echelon form, but this one is confusing to me.
RREF A = $\begin{bmatrix}0&1\\0&0\\0&0\end{bmatrix}$
I thought this would have a rank of 0 because there are no pivots but i was wrong and it has a rank of 1.
Why is this?
Thanks for any help :)
The matrix RREF A has one "non-zero" row. that means the matrix A has one independent row vector. So its rank is 1.
• April 1st 2010, 05:37 PM
mybrohshi5
So finding rank is just the number of rows that has at least one non-zero entry in it?
• April 1st 2010, 05:48 PM
harish21
Quote:
Originally Posted by mybrohshi5
So finding rank is just the number of rows that has at least one non-zero entry in it?
Yes, this is the way that I learnt to find the rank of a matrix. I would also suggest referring to linear independence
Actually, the NUMBER of rows (in the RREF matrix) that are NON-ZERO is the RANK of the matrix. Here your matrix has one non-zero row, so your rank is 1.
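For anyone who wants to check this sort of thing numerically, a short sketch (NumPy and SymPy):

```python
import numpy as np
import sympy as sp

A = np.array([[0, 6], [0, -2], [0, 5]])
print(np.linalg.matrix_rank(A))       # 1

M = sp.Matrix(A)
rref, pivots = M.rref()
print(rref)                           # Matrix([[0, 1], [0, 0], [0, 0]])
print(len(pivots))                    # 1 pivot = 1 non-zero row = rank 1
```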
• April 1st 2010, 05:50 PM
mybrohshi5
Thanks that clears things up for me :)
http://math.stackexchange.com/questions/167579/a-product-puzzle
# A product puzzle
This is from a math contest. I have solved it, but I'm posting it on here because I think that it would be a good challange problem for precalculus courses. Also, it's kind of fun.
Write the polynomial $\prod_{n=1}^{1996}(1+nx^{3^n}) = \sum_{n=0}^m a_nx^{k_n}$, where the $k_n$ are in increasing order, and the $a_n$ are nonzero. Find the coefficient $a_{1996}$.
-
Did you mean $k_n$? – copper.hat Jul 6 '12 at 18:40
Never mind. I mean $a_n$ – Chris Dugale Jul 6 '12 at 18:46
@ChrisDugale $k_n$ instead of $k_i$ in the exponent. Also, you should specify that $a_n \ne 0$. I would say the question is more satisfying if you ask for both $k_{1996}$ and $a_{1996}$, else parts of the LHS go unused. – Erick Wong Jul 6 '12 at 18:51
You're right, I should have specified nonzero $a_n$. – Chris Dugale Jul 6 '12 at 18:59
1
Hint: If you write $k_n$ in base $3$, then it has the same digits as if you wrote $n$ is base $2$. – Thomas Andrews Jul 6 '12 at 19:09
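A brute-force check of the hint (my own sketch). The first $2^{11}=2048$ terms of the full product are unaffected by the factors with index $\ge 12$, since those only create exponents larger than $3^1+\cdots+3^{11}$, so $a_{1996}$ can be read off a truncated product:

```python
N = 11
coeffs = {0: 1}
for i in range(1, N + 1):
    coeffs.update({k + 3**i: a * i for k, a in list(coeffs.items())})   # exponents never collide

exponents = sorted(coeffs)            # k_0 < k_1 < k_2 < ...

def base_digits(n, b):
    out = []
    while n:
        out.append(n % b)
        n //= b
    return out[::-1] or [0]

# the hint, up to a trailing zero: k_m written in base 3 is m written in base 2, shifted once
for m in range(1, 16):
    assert base_digits(exponents[m], 3) == base_digits(m, 2) + [0]

print(coeffs[exponents[1996]])        # the requested coefficient a_1996
```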
http://mathhelpforum.com/trigonometry/68108-trig-question-intervals.html
Thread:
1. Trig question: with intervals
Solve the equation for a in [0, 2π]
A.) cos²(a) + 3cos(a) = -cos(a)
B.)sin(2a) = cos(a)
2. Originally Posted by VkL
Solve the equation for a in [0, 2π]
A.) cos²(a) + 3cos(a) = -cos(a)
Add $\cos a$ to both sides to get $\cos^2 a + 4\cos a = 0$ and now $\cos a ( \cos a + 4) = 0$. Therefore, $\cos a = 0$ or $\cos a + 4 = 0$. The second condition, $\cos a + 4 = 0 \implies \cos a = -4$ is impossible because $|\cos a| \leq 1$. The first condition is $\cos a = 0$. That happens when $a = \tfrac{\pi}{2},\tfrac{3\pi}{2}$
B.)sin(2a) = cos(a)
Using the double angle identity we get $2\sin a \cos a = \cos a \implies 2\sin a \cos a - \cos a = 0$. Now factor, $\cos a ( 2\sin a - 1) = 0$. Thus, we get two possibilities, $\cos a = 0$ or $2\sin a - 1 = 0$. The first condition, $\cos a = 0$, gives $a = \tfrac{\pi}{2}, \tfrac{3\pi}{2}$. The second condition, $2\sin a - 1 = 0$ is equivalent to saying $\sin a = \tfrac{1}{2}$, thus, $a = \tfrac{\pi}{6}, \tfrac{5\pi}{6}$.
http://mathhelpforum.com/number-theory/25104-extension-fields-splitting-fields-proof.html
# Thread:
1. ## Extension fields / splitting fields proof...
Hello, put this in the homework help but think its probably best off here, Im trying to prove the following by induction....
Given a polynomial $f(X) \in F[X]$ of degree $n$, there exists an extension field $K \supseteq F$ such that $f(X)$ has $n$ roots in $K$.
This is what I did first time round...
First assume that $f(X)$ is irreducible and argue by induction on $n$. Suppose the result holds for irreducible polynomials of degree at most $n-1$. Set $E = F[x]/(f(x))$.
Now, in $E$, $f(x)$ has at least one root, so we can write $f(x) = (x - \alpha_{1})(x - \alpha_{2})\cdots(x-\alpha_{r})g(x)$ with $\alpha_{1}, \alpha_{2}, \ldots, \alpha_{r} \in E$ and $g(x) \in E[x]$ irreducible. Since $\deg g < n$, by the inductive assumption, the result also applies to $g$.
But I've been told this is wrong because there could be more than one irreducible factor of degree > 1; also I shouldn't be trying to take irreducibility through the proof.
If someone knows how to prove this it would be much appreciated; it's part of a huge project and needs to be done by tomorrow!
Cheers!
ElGamal.
2. Originally Posted by ElGamal
Hello, put this in the homework help but think its probably best off here, Im trying to prove the following by induction....
Given a polynomial $f(X) \in F[X]$ of degree $n$, there exists an extension field $K \supseteq F$ such that $f(X)$ has $n$ roots in $K$.
You need to use something called Kronecker's theorem. It is not hard, so we will prove it. Let $F$ be a field and $f(x) \in F[x]$ a non-constant polynomial. Then there exists an extension field $K$ of $F$ and $\alpha \in K$ such that $f(\alpha) = 0$. By unique factorization we can write $f(x)$ as a product of irreducible polynomials, so it is necessary and sufficient that one of those factors $p(x)$ has a zero $\alpha \in K$. Since $p(x)$ is a non-constant polynomial which is irreducible over $F$, the quotient $F[x] / \left< p(x) \right>$ is a field. Let us call this field $K$. We can identify $K$ as containing $F$ (up to isomorphism) via $\phi (a) = [a]_{p(x)}$ (where $[a]_{p(x)}$ is the equivalence class mod $p(x)$) for $a\in F$. Next we note that $[x]_{p(x)}$ is a zero of $p(x)= a_nx^n+\cdots+a_1x+a_0$, since $p([x]_{p(x)}) = [a_nx^n+\cdots+a_1x+a_0]_{p(x)} = [0]_{p(x)}$. Thus $K$ is the extension field we desired. Now, to set up the induction, note that if $f(x)$ is an arbitrary non-constant polynomial then we factorize it as $p_1(x)\cdots p_k(x)$ where all $p_i(x)$ are irreducible, and apply the argument to each irreducible factor, $k$ times in all. Thus there exists an extension field $K$ such that $f(x)$ splits (meaning it factors into linear factors).
You can make this result a little stronger: given a field $F$ and a non-constant polynomial $f(x)$, there is an extension field $K$ such that $[K:F] \le n!$, where $n = \deg f(x)$, and $f(x)$ splits over $K$.
http://mathematica.stackexchange.com/questions/19460/symbolic-derivative-of-n-term-product?answertab=oldest
# Symbolic derivative of $n$-term product
I want to determine the relationship that must exist between the $x_i$ and $y_i$ such that
$$\frac{\partial}{\partial\theta} \prod_{i=1}^n \frac{f(x_i,\theta)}{f(y_i,\theta)} = 0,$$
where
$$f(x,\theta) = \frac{e^{-(\theta - x)}}{(1+e^{-(\theta - x)})^2}, \;\; \forall x \in {\mathbb R}, \theta \in {\mathbb R}$$
Clarification: what I'm trying to find is a condition on the $x_i$ and $y_i$ such that the derivative above (viewed as a function of $\theta$) is zero for all $\theta$ only if this condition holds. Clearly, this derivative is zero for all $\theta$ if, $\forall i\in\{1,\dots,n\}$, the condition $x_i = y_i$ holds, since then the product in the derivative expression above is identically 1. But this condition is not necessary: the derivative will be identically zero also if there is an $n$-permutation $\sigma$ such that $\forall i,\,x_i = y_{\sigma(i)}$. My problem is to prove that the derivative is identically zero (i.e. it is zero for all $\theta$) only if such a $\sigma$ exists, for given $x_i$ and $y_i$.
So, hoping to have a look at the derivative above, I input this into Mathematica:
````Block[{f, θ, x, y, i, n},
f[x_][θ_] := E^(-x + θ)/(1 + E^(-x + θ))^2;
D[Product[f[x[i]][θ]/f[y[i]][θ], {i, n}], θ]
]
````
...but Mathematica basically spits back the last formula (after replacing the various expressions in `f`):
````D[Product[(E^(-x[i] + y[i])*(1 + E^(θ - y[i]))^2)/(1 + E^(θ - x[i]))^2, {i, n}], θ]
````
If instead of using a symbolic product (with an unspecified number of terms) I attempt the same thing with a product of three terms, namely
````(f[x1][θ]/f[y1][θ]) (f[x2][θ]/f[y2][θ]) (f[x3][θ]/f[y3][θ])
````
...Mathematica does compute the derivative (though the resulting expression is hairy, and I can't extract any insight from it). So my first question is
How can I get Mathematica to produce the expression for the derivative for the general case?
(After all, the derivative of an $n$-term product has a form that Mathematica should be able to express relatively easily.)
In any case, the results I got for a three-term product were not encouraging. Of course, I really don't care for the derivative per se, but rather, what I'm after are the conditions on the $x_i$ and $y_i$ that make this derivative vanish.
Is there a way that Mathematica can show me the relationship between the $x_i$ and $y_i$ when this derivative is 0?
-
The product must be constant in theta. I believe this will force the set of x's to be the same as the set of y's (that is, same lists up to ordering). – Daniel Lichtblau Feb 12 at 0:39
@DanielLichtblau: Yes, that's my guess too. – kjo Feb 12 at 0:40
Okay, so I guess you were hoping for a proof. I don't think Mathematica will be able to help here (I'll be happy if somebody shows I am wrong about this). The thetas in the numerator of f will all cancel. So I think some fiddling with the denominator might show that those do not go away unless x's equal y's as sets. I'll give it some more thought. – Daniel Lichtblau Feb 12 at 0:43
@DanielLichtblau Actually, this condition is easy to see from the expression for the derivative when $n=1$, so maybe I can set up an induction.. – kjo Feb 12 at 0:55
## 3 Answers
Teach Mathematica the rules. The fundamental one is
$$\frac{d}{dx} \prod_i f_i(x) = \prod_i f_i(x) \sum_i \frac{f_i'(x)}{f_i(x)},$$
at least where none of the $f_i(x)=0$. Iterate this to obtain higher-order derivatives:
````Unprotect[D];
D[Product[f_, i___], x_Except[List]] := Product[f, i] Sum[D[f, x]/f, i];
D[Product[f_, i___], {x_, n_Integer}] := Nest[D[#, x] &, Product[f, i], n];
D[Product[f_, i___], x_Except[List], y__] := D[Product[f, i] Sum[D[f, x]/f, i], y];
Protect[D]
````
Example from the question:
````f[x_][\[Theta]_] := E^(-x + \[Theta])/(1 + E^(-x + \[Theta]))^2;
D[Product[f[x[i]][\[Theta]]/f[y[i]][\[Theta]], {i, n}], \[Theta]]
````
$$\left(\prod _i^n \frac{e^{y(i)-x(i)} \left(e^{\theta -y(i)}+1\right)^2}{\left(e^{\theta -x(i)}+1\right)^2}\right) \sum _i^n \frac{\left(e^{\theta -x(i)}+1\right)^2 e^{x(i)-y(i)} \left(\frac{2 e^{\theta -x(i)} \left(e^{\theta -y(i)}+1\right)}{\left(e^{\theta -x(i)}+1\right)^2}-\frac{2 \left(e^{\theta -y(i)}+1\right)^2 e^{\theta -2 x(i)+y(i)}}{\left(e^{\theta -x(i)}+1\right)^3}\right)}{\left(e^{\theta -y(i)}+1\right)^2}$$
Examples (variants of the help page examples):
````D[Product[x^i, {i, 1, n}], x]
````
$\frac{1}{2} n (n+1) x^{\frac{1}{2} n (n+1)-1}$
````D[Product[Sin[x], {n}], {x, 4}]
````
$(n-1)^2 \sin ^{n-1}(x)+2 (n-2) (n-1) \sin ^{n-1}(x)+(n-4) (n-3) (n-2) (n-1) \cos ^4(x) \sin ^{n-5}(x)-(n-2) (n-1)^2 \cos ^2(x) \sin ^{n-3}(x)-2 (n-2)^2 (n-1) \cos ^2(x) \sin ^{n-3}(x)-3 (n-3) (n-2) (n-1) \cos ^2(x) \sin ^{n-3}(x)$
````D[Product[Sin[x y]^i, {i, 1, n}], x, y]
````
$-\frac{1}{2} n (n+1) x y \sin ^{\frac{1}{2} n (n+1)}(x y)+\frac{1}{2} n (n+1) \left(\frac{1}{2} n (n+1)-1\right) x y \cos ^2(x y) \sin ^{\frac{1}{2} n (n+1)-2}(x y)+\frac{1}{2} n (n+1) \cos (x y) \sin ^{\frac{1}{2} n (n+1)-1}(x y)$
````D[Product[Subscript[f, i][x], {i, 1, 3}], x]
````
$f_2(x) f_3(x) f_1'(x)+f_1(x) f_3(x) f_2'(x)+f_1(x) f_2(x) f_3'(x)$
````D[Product[x Sin[y]^i, {i, 0, n}], {{x, y}, 2}]
````
$\left( \begin{array}{cc} n (n+1) x^{n-1} \sin ^{\frac{1}{2} n (n+1)}(y) & \frac{1}{2} n (n+1)^2 x^n \cos (y) \sin ^{\frac{1}{2} n (n+1)-1}(y) \\ \frac{1}{2} n (n+1)^2 x^n \cos (y) \sin ^{\frac{1}{2} n (n+1)-1}(y) & \frac{1}{2} n (n+1) \left(\frac{1}{2} n (n+1)-1\right) x^{n+1} \cos ^2(y) \sin ^{\frac{1}{2} n (n+1)-2}(y)-\frac{1}{2} n (n+1) x^{n+1} \sin ^{\frac{1}{2} n (n+1)}(y) \end{array} \right)$
The answer to the second question is generic: you are imposing one (differentiable) relationship among $2n$ variables, which therefore describes a $2n-1$ dimensional manifold. For instance, with $n=2$ applying `Solve` gives
$$y(2)\to \log \left(\frac{e^{2 \theta +x(1)}+e^{2 \theta +x(2)}+2 e^{\theta +x(1)+x(2)}+e^{x(1)+x(2)+y(1)}-e^{2 \theta +y(1)}}{e^{2 \theta }+e^{x(1)+y(1)}+e^{x(2)+y(1)}-e^{x(1)+x(2)}+2 e^{\theta +y(1)}}\right)$$
To see that this is nondegenerate, you can explore the solution space dynamically if you like:
````Manipulate[
ContourPlot[
Log[(E^(2 t + x1) + E^(2 t + x2) + 2 E^(t + x1 + x2) - E^(2 t + y1) + E^(x1 + x2 + y1))/(
E^(2 t) - E^(x1 + x2) + 2 E^(t + y1) + E^(x1 + y1) + E^(x2 + y1))], {x1, -1, 1}, {x2, -1, 1}],
{{y1, 0}, -1, 1}, {t, -1, 1}]
````
-
In response to a now-deleted comment: these new rules for differentiating an arbitrary finite product are not precisely the familiar "product rule" for derivatives; they are a generalization thereof. Because they apply to a general symbolic limit $n$, they do not follow by means of any finite sequence of operations from the product rule itself ($(fg)' = f'g + fg'$), but rather require an induction. The distinction perhaps is subtle but--as seen in this case--it's real. – whuber Feb 12 at 4:20
Thanks! I picked your answer reluctantly, because I really would have preferred to pick all three: each shows something different and valuable. I figured that it would be better to choose (however inaccurately) than not to, so I finally went with yours on the grounds that it contains most Mathematica-specific stuff... – kjo Feb 12 at 10:08
@whuber `x_Except[List]` doesn't really make sense to me - and, according to my tests, it doesn't seem to work - perhaps you mean `x : Except[List[___]]`? – VF1 Feb 13 at 0:05
@VF1 It works fine for me--MMA 8.0. In what fashion do your tests fail? – whuber Feb 13 at 0:33
Congrats on your 10k! Your answers are always a treat :) – rm -rf♦ Feb 26 at 3:02
You can get the result you want as follows. This does not really use Mathematica except for purposes of exposition.
I'll truncate in powers of Exp[theta-x] and rewrite that as e[theta-x] to suppress evaluation and make obvious the low-order terms of the series. We will throw away the numerator terms because they clearly contribute a multiplicative constant with respect to theta. I will separately handle the case where we have the (1 + e[(-x + th)])^2 in the numerator (for y_j's) vs. denominator (for x_j's). For the former we have the function `g[x,th]` below.
````f[x_, th_] := (1 - e[(-x + th)] + e[(-x + th)]^2)^2
g[x_, th_] := (1 + e[(-x + th)])^2
````
Let's see what it looks like for the case `n=1`. I'll just call the variables `x` and `y`.
````Expand[f[x, th]*g[y, th]]
(* Out[16]= 1 - 2 e[th - x] + 3 e[th - x]^2 - 2 e[th - x]^3 +
e[th - x]^4 + 2 e[th - y] - 4 e[th - x] e[th - y] +
6 e[th - x]^2 e[th - y] - 4 e[th - x]^3 e[th - y] +
2 e[th - x]^4 e[th - y] + e[th - y]^2 - 2 e[th - x] e[th - y]^2 +
3 e[th - x]^2 e[th - y]^2 - 2 e[th - x]^3 e[th - y]^2 +
e[th - x]^4 e[th - y]^2 *)
````
Notice that the first order terms (in powers of e[theta+something], that is) do not vanish unless `x==y` (the first order term is -2 e[th - x] + 2 e[th - y]). Now suppose we have `n>1`. What will happen is we'll have, at the lowest order, products of factors of the form
````(1 -2 e[th - x[j]] + 2 e[th - y[j]])
````
It is easy to see that the first order terms will then be, in Mathematica notation,
````Sum[-2 e[th - x[j]] + 2 e[th - y[j]], {j,n}]
````
Now let `e[arg_] = Exp[arg]`. As we have a power series in powers of Exp[theta-x[j]] and Exp[theta-y[j]], and as it must vanish in theta, in particular the first order terms must vanish in theta. It is straightforward to show that this can only happen if these terms pairwise cancel. (If nothing else, expand as power series in theta at the origin. You can use some polynomial algebra reasoning from there.)
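As a quick numerical sanity check of this claim (my own addition, not from the original thread, and in plain Python rather than Mathematica): evaluate the $\theta$-derivative of the product at several values of $\theta$ and observe that it vanishes at all of them only when the $y$'s are a permutation of the $x$'s.

```python
import numpy as np

def f(x, theta):
    # f(x, theta) = exp(-(theta - x)) / (1 + exp(-(theta - x)))^2
    e = np.exp(-(theta - x))
    return e / (1 + e) ** 2

def dprod_dtheta(xs, ys, theta, h=1e-6):
    # central-difference derivative in theta of prod_i f(x_i, theta) / f(y_i, theta)
    def prod(t):
        return np.prod([f(x, t) / f(y, t) for x, y in zip(xs, ys)])
    return (prod(theta + h) - prod(theta - h)) / (2 * h)

thetas = np.linspace(-2.0, 2.0, 9)
# ys is a permutation of xs: the product is identically 1, so the derivative is ~0 everywhere
print([round(dprod_dtheta([0.3, 1.1], [1.1, 0.3], t), 8) for t in thetas])
# ys is not a permutation of xs: the derivative is visibly nonzero
print([round(dprod_dtheta([0.3, 1.1], [0.5, 0.9], t), 8) for t in thetas])
```

The first list is numerically all zeros while the second is not, consistent with the argument above.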
-
To be clear: it appears your interpretation of the second question is that it asks for the relationship among the $x_i$ and $y_i$ when the derivative is the zero function; not when the derivative merely vanishes! – whuber Feb 12 at 3:57
@DanielLichtblau: Thanks! This is probably the road to the proof I'm looking for, even though, as you say, it does not use Mathematica much. (See my comment to whuber's answer.) – kjo Feb 12 at 10:10
@whuber: I've added a clarification to my post. – kjo Feb 12 at 10:44
@whuber Yes, my interpretation was vanishing as a function rather than finding critical points. One reason is that the latter would give a relation involving the xs, ys, and theta rather than just the xs and ys. I guess there was a later clarification that states the vanishing-function interpretation. – Daniel Lichtblau Feb 12 at 14:33
One trick that hasn't been pointed out yet is this:
Extremizing `f` is equivalent to extremizing any monotonic function of `f`. Since your `f` is positive the product is too. Therefore, we can in particular choose the `Log` of the product as a convenient monotonic function to extremize. Then the equivalent problem is
````f[x_, θ_] := E^(-x + θ)/(1 + E^(-x + θ))^2;
D[Sum[Log@f[x[i], θ] - Log@f[y[i], θ], {i, n}], θ]
````
$\sum _i^n \left(e^{x(i)-\theta } \left(e^{\theta -x(i)}+1\right)^2 \left(\frac{e^{\theta -x(i)}}{\left(e^{\theta -x(i)}+1\right)^2}-\frac{2 e^{2 \theta -2 x(i)}}{\left(e^{\theta -x(i)}+1\right)^3}\right)-e^{y(i)-\theta } \left(e^{\theta -y(i)}+1\right)^2 \left(\frac{e^{\theta -y(i)}}{\left(e^{\theta -y(i)}+1\right)^2}-\frac{2 e^{2 \theta -2 y(i)}}{\left(e^{\theta -y(i)}+1\right)^3}\right)\right)$
As you see, the derivative has been done without any problems. Setting this to zero is equivalent to the original problem, and the rest just depends on what relationship between the variables you want to extract (i.e., following whuber's or Daniel Lichtblau's interpretation). I'll leave that open.
-
+1 Because $d(\log(f(x)))/dx = f'(x)/f(x)$, this is identical to the first formula in my answer (but without the initial factor of the product itself): that's how the formula is usually derived. – whuber Feb 12 at 4:11
That's right - I simply worked with the OP's statement "I really don't care about the derivative per se," where the prefactor becomes irrelevant to the extremization. – Jens Feb 12 at 4:19
By ignoring the "prefactor," you could be overlooking some critical points. For instance, consider $\prod_{i=1}^2 x = x^2$ defined on the entire real line. Your formula would set $\sum_{i=1}^2 1/x = 2/x=0$, having no solutions, whereas $x=0$ is the (unique) critical point. Although I said my formula is derived with logs, which is true, in the end--because it does not use logarithms--it is fully general (because the apparent singularities are seen to be removable). – whuber Feb 12 at 4:24
I think I covered that in my intro. It doesn't apply here. – Jens Feb 12 at 5:26
http://www.physicsforums.com/showthread.php?p=4271236
Physics Forums
## On The Solution of Matrix Ricatti Equation ODE
I've become a little confused about why no one cares to actually explicitly solve the Matrix Riccati Differential Equation (RDE) of the form:
$$-{\dot{P}} = Q + PA^T + A^TP + PBB^TP$$ where $BB^T, Q, P$ are positive-definite matrices, and $A, BB^T, Q, P \in \mathbb{R}^{n \times n}$
This equation pops up all the time in controls. A solution to this ode also appears to be avoided for some reason, even though deriving it (in my opinion) is nontrivial. Neither professors nor textbooks seem to address solving this other than to comment away the need to discuss a solution by saying "Often you need to solve this numerically". I asked about this and still felt curious but rather than asking more questions and just looking stupid, I wanted to fill what seems to be a void with my solution. It was also a good exercise for me.
Note: In the following derivation, $P,R,X,J$ are time-varying matrices, yet I will only explicitly show a time argument when this needs to be emphasized.
Step 1: Introduce change of coordinates.
Define
$$R(t) = P(t) - R_o$$
where $R_o$ is constant and must be positive-definite (PD). Solving for $P$ and substituting this into the RDE and collecting terms results in
$$-{\dot{R}} = (Q + R_oA + A^TR_o - R_oBB^TR_o) + R(A-BB^TR_o)+(A-BB^TR_o)^TR - RBB^TR$$
and select $R_o$ to satisfy
$$0=Q + R_oA + A^TR_o - R_oBB^TR_o$$
to get rid of the constant, resulting in a new RDE below
$$-{\dot{R}} = R(A-BB^TR_o)+(A-BB^TR_o)^TR - RBB^TR$$
Notice that the constraint resulting in the equation above is actually the Algebraic RE (no derivative term); its solution will be discussed later, and remember now that $R_o$ is no longer arbitrary. In fact, for my application (I believe) I may not need to solve for this matrix due to something really convenient, but this post should solve the RDE given the solution to this algebraic equation.
Step 2: Decompose $R(t)$
Define matrices $X(t)$ and $J(t)$, both $N \times N$, such that they satisfy
$$R(t) = X^T(t)J(t)X(t)$$
and note that while $J(t)$ is positive-definite (PD), $X(t)$ is not. Plugging this into the RDE results in (this will get messy)
$$-{\dot{R}} = -{\dot{X}}^TJX - X^T{\dot{J}}X - X^TJ{\dot{X}} = X^TJX(A-BB^TR_o)+(A-BB^TR_o)^TX^TJX - X^TJXBB^TX^TJX$$
Step 3: Undetermined Coefficients
At this point you do "pattern matching" or "undetermined coefficients". You get two equations from this (the two $X$ terms give identical equations for $X(t)$). The first equation is:
$${\dot{X}} = X(BB^TR_o-A)$$
and the second is
$${\dot{J}} = JXBB^TX^TJ$$
Step 4: A Solution.
The first equation in step 3 represents the matrix ODE for linear time-invariant systems.
$${\dot{X}} = X(BB^TR_o-A)$$
Inverting the system and integrating yields
$${\dot{X}}(t)X^{-1}(t) = (BB^TR_o-A)$$
$$\int_{t}^{T}{\dot{X}}(s)X^{-1}(s)ds = \int_{t}^{T}(BB^TR_o-A)ds$$
$$log(X(T-t)) = (BB^TR_o-A)(T-t)$$
and exponentiating...
$$X(T-t) = e^{(BB^TR_o-A)(T-t)}$$
The second equation in step 3 is a bit more complex and requires some more manipulation and thought, as well as the previously solved $X(t)$. Multiply both sides by $J^{-1}$ on the left and on the right to get
$$J^{-1}{\dot{J}}J^{-1} = XBB^TX^T$$
Integrating (the odd looking limits again come from my particular application, optimal control).
$$\int_{t}^{T}J(s)^{-1}{\dot{J(s)}}J(s)^{-1}ds = \int_{t}^{T}X(s)BB^TX^T(s)ds$$
This integral looks nasty, but the left-hand side is actually set up as the derivative of the inverse of a PD matrix, while the right-hand side is a controllability Gramian for an LTI system with system matrix $BB^TR_o-A$. But keep in mind that controllability Gramians have to be full rank (i.e., invertible) for all time in order for the related system to be controllable. To be sure that it plays the role of the controllability Gramian as it does in linear systems, try to solve for $J(t)$:
$$J^{-1}(T-t) = \int_{t}^{T}X(s)BB^TX^T(s)ds$$
Inverting gives
$$J(T-t) = \left( \int_{t}^{T}X(s)BB^TX^T(s)ds \right) ^{-1}$$
and THAT is why the controllability Gramian needs to be invertible for all time. If the controllability Gramian is not full rank, we cannot control the system, and thus we cannot invert this matrix to create our solution to the system.
Combining $X(T-t), J(T-t)$ we have
$$R(t) = X^T(T-t)J(T-t)X(T-t) = e^{(BB^TR_o-A)^Tt}\left( \int_{t}^{T}X(s)BB^TX^T(s)ds \right) ^{-1}e^{(BB^TR_o-A)t}$$
And yet we still need to add the constant term to find the solution in the original coordinate system
$$P(t) = R(t) + R_o = X^T(T-t)J(T-t)X(T-t) + R_o = e^{(BB^TR_o-A)^T(T-t)}\left( \int_{t}^{T}X(s)BB^TX^T(s)ds \right) ^{-1}e^{(BB^TR_o-A)(T-t)} + R_o$$
(Conclusion)
I've reduced the problem of finding $P(t)$ for
$$-{\dot{P}} = Q + PA + A^TP - PBB^TP$$
to finding $R_o$ that solves
$$0=Q + R_oA + A^TR_o - R_oBB^TR_o$$
I left out some details I think, partly because I'm still learning them for myself, and some things may only make sense here if you're using this for optimal control like me. Regardless, I think I've left the meat and potatoes here. Let me know what you think!
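Not part of the original post, but here is a quick numerical cross-check of the reduction in plain Python, using the conclusion's form $-\dot{P} = Q + PA + A^TP - PBB^TP$ and made-up matrices $A$, $B$, $Q$: integrate the RDE in "time to go" and compare its long-horizon limit with the algebraic Riccati solution $R_o$ that the problem has been reduced to.

```python
# Sketch only: assumed example matrices, standard-form RDE with unit weighting.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)

def rde(tau, p_flat):
    # dP/dtau = Q + P A + A^T P - P B B^T P, where tau = T - t is the time to go
    P = p_flat.reshape(2, 2)
    dP = Q + P @ A + A.T @ P - P @ B @ B.T @ P
    return dP.ravel()

sol = solve_ivp(rde, [0.0, 50.0], np.zeros(4), rtol=1e-9, atol=1e-12)
P_limit = sol.y[:, -1].reshape(2, 2)

# R_o solves 0 = Q + R_o A + A^T R_o - R_o B B^T R_o (the constraint in the post)
R_o = solve_continuous_are(A, B, Q, np.eye(1))
print(np.allclose(P_limit, R_o, atol=1e-6))  # expect True: the RDE settles onto the ARE solution
```

This only checks the standard convergence of the finite-horizon RDE to the ARE solution; it does not verify the closed-form expression derived above.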
I put some effort into this and if you want to print this out for w/e reason or save this, I've made a pdf file same post here https://dl.dropbox.com/u/25124237/Bl...n%20140213.pdf
Well, in your first step it should be $+R_0 BB^T R_0$ and not with a minus sign; also the rest of the RHS should have plus signs, because you plug in $P=R+R_0$. I've noticed some more misprints, such as a forgotten transpose on some of the A's. A piece of advice: don't hurry, be precise and slow.
## On The Solution of Matrix Ricatti Equation ODE
Actually it looks like I wrote the original equation wrong. It should be
$$-{\dot{P}} = Q + PA^T + A^TP - PBB^TP$$
Grrr.
OK, I spotted another mistake (or so I think). The integral $$\int_{t}^{T}{\dot{X}}(s)X^{-1}(s)ds$$ should be $$\log(X(T))-\log(X(t))=\log (X(T)/X(t))$$ and not $$\log X(T-t)$$
Yes I believe you're right. Aside from these notation mistakes, do you believe the final answer to be correct?
I'll be reading it tomorrow again, by the weekend I hope I'll have time to respond to you. I just spotted these mistakes first, btw I think it's best to read your own stuff once more and be sure that every step is justified, and write your justification down for every equality. I myself have exams the next two days and next monday, not sure why I am being so altruistic... :-)
I understand. I have related issues.
I haven't studied your derivation, but I believe writing the differential Riccati system as an algebraic Riccati system is called the Chandrasekhar decomposition. The problem has now moved to solving the algebraic system, which is still difficult to solve in general. There is also the Bernoulli substitution method, which transforms the Riccati ODE system into a (larger) system of linear ODEs (Hamiltonian ODEs). A large number of books have been written on the subject of solving matrix Riccati equations; they might give you some more ideas.
Thanks for the ideas! I'll see if I can get my hands on some of those books.
Tags
matrix, nonlinear, optimal control, ricatti
http://en.wikipedia.org/wiki/Pointed_topological_space
# Pointed space
(Redirected from Pointed topological space)
In mathematics, a pointed space is a topological space X with a distinguished basepoint x0 in X. Maps of pointed spaces (based maps) are continuous maps preserving basepoints, i.e. a continuous map f : X → Y such that f(x0) = y0. This is usually denoted
f : (X, x0) → (Y, y0).
Pointed spaces are important in algebraic topology, particularly in homotopy theory, where many constructions, such as the fundamental group, depend on a choice of basepoint.
The related concept of a pointed set is less important; it can be viewed as the special case of a pointed discrete space.
## Category of pointed spaces
The class of all pointed spaces forms a category Top• with basepoint preserving continuous maps as morphisms. Another way to think about this category is as the comma category, ({•} ↓ Top) where {•} is any one point space and Top is the category of topological spaces. (This is also called a coslice category denoted {•}/Top.) Objects in this category are continuous maps {•} → X. Such morphisms can be thought of as picking out a basepoint in X. Morphisms in ({•} ↓ Top) are morphisms in Top for which the following diagram commutes:
It is easy to see that commutativity of the diagram is equivalent to the condition that f preserves basepoints.
As a pointed space {•} is a zero object in Top• while it is only a terminal object in Top.
There is a forgetful functor Top• → Top which "forgets" which point is the basepoint. This functor has a left adjoint which assigns to each topological space X the disjoint union of X and a one point space {•} whose single element is taken to be the basepoint.
## Operations on pointed spaces
• A subspace of a pointed space X is a topological subspace A ⊆ X which shares its basepoint with X so that the inclusion map is basepoint preserving.
• One can form the quotient of a pointed space X under any equivalence relation. The basepoint of the quotient is the image of the basepoint in X under the quotient map.
• One can form the product of two pointed spaces (X, x0), (Y, y0) as the topological product X × Y with (x0, y0) serving as the basepoint.
• The coproduct in the category of pointed spaces is the wedge sum, which can be thought of as the one-point union of spaces.
• The smash product of two pointed spaces is essentially the quotient of the direct product and the wedge sum. The smash product turns the category of pointed spaces into a symmetric monoidal category with the pointed 0-sphere as the unit object.
• The reduced suspension ΣX of a pointed space X is (up to a homeomorphism) the smash product of X and the pointed circle S1.
• The reduced suspension is a functor from the category of pointed spaces to itself. This functor is a left adjoint to the functor $\Omega$ taking a based space $X$ to its loop space $\Omega X$.
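As a concrete example of the last two constructions (a standard computation, added here for illustration): the smash product of two pointed circles is $S^1 \wedge S^1 \cong S^2$, and more generally the reduced suspension of a sphere satisfies $\Sigma S^n \cong S^{n+1}$.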
## References
• Gamelin, Theodore W.; Greene, Robert Everist (1999) [1983]. Introduction to Topology (second ed.). Dover Publications. ISBN 0-486-40680-6.
• Mac Lane, Saunders (September 1998). Categories for the Working Mathematician (second ed.). Springer. ISBN 0-387-98403-8.
http://mathhelpforum.com/calculus/206906-roots-arbitrary-polynomial-x-n-a0-a1x-a2x-2-n-1-x-n-1-a.html
# Thread:
1. ## Roots of an arbitrary polynomial: x^n=a0+a1x+a2x^2+...+a(n-1)x^(n-1)
I could use some input on this one: Let n be an integer greater than 1.
Which of the following conditions guarantee that the equation $x^n=\sum _{i=0}^{n-1}a_ix^i$ has at least one root in the interval (0,1)?
I. $a_0>0\ \& \sum _{i=0}^{n-1}a_i<1$
II. $a_0>0\ \& \sum _{i=0}^{n-1}a_i>1$
III. $a_0<0\ \& \sum _{i=0}^{n-1}a_i>1$
2. ## Re: Roots of an arbitrary polynomial: x^n=a0+a1x+a2x^2+...+a(n-1)x^(n-1)
Hey Dark Sun.
In the [0,1] range x^n <= x^m if m < n, and only equal if m = n or x = 0 or 1. This rules out II.
For III., the condition is definitely satisfied as the RHS will overtake the LHS and result in a root (i.e. LHS = RHS).
For I. I don't know whether you can say for sure and you would have to look at the specifics.
If you want to look at proofs consider the result above with the m's and n's.
3. ## Re: Roots of an arbitrary polynomial: x^n=a0+a1x+a2x^2+...+a(n-1)x^(n-1)
I totally see what you said about II and III. I am looking at various graphs of I now, and they seem to indicate that I. is also valid. I still want to develop a little more for I. before I move on. Will report back if I find anything.
4. ## Re: Roots of an arbitrary polynomial: x^n=a0+a1x+a2x^2+...+a(n-1)x^(n-1)
consider the polynomial:
$p(x) = x^n - \sum_{i = 0}^{n-1} a_ix^i$.
note that: $p(0) = -a_0$ and $p(1) = 1 - \sum_{i = 0}^{n-1}a_i$.
if $a_0 < 0$ and $\sum_{i = 0}^{n-1} a_i > 1$, we see that p(0) is positive, and p(1) is negative, and by the continuity of polynomials, p must cross the x-axis somewhere on (0,1).
this is answer III.
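(A remark added here, not in the original reply: the same intermediate value argument also settles condition I — there $p(0) = -a_0 < 0$ and $p(1) = 1 - \sum_{i=0}^{n-1}a_i > 0$, so again p must cross the x-axis on (0,1). Condition II is the one that gives no guarantee: for example $x^2 = 3 - x$ satisfies II but has no root in (0,1).)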
5. ## Re: Roots of an arbitrary polynomial: x^n=a0+a1x+a2x^2+...+a(n-1)x^(n-1)
Thanks so much, this is an excellent answer. Thank you ^_^
http://math.stackexchange.com/questions/235255/applying-greens-theorem
# Applying Green's Theorem
So I'm practicing a few problems and I can't get this one -
$$P(x,y) = e^x \sin(y) \\ Q(x,y) = e^x \cos(y)$$
$C$ is the right hand loop of the graph of the polar equation $r^2 = 4\cos(\theta)$
I want to evaluate:
$$\int_{C}{P(x,y)\:dx+Q(x,y)\:dy}$$
Now I tried the right hand side of Green's theorem, but it's difficult because $\frac{\partial P}{\partial y}$ in polar has a $\cos(r\sin(\theta))$ term in it.
If I just try parametrizing with $x = a\sin(t)$ and $y = a\cos(t)$, where $-\pi/2 \leq t \leq \pi/2$, then I get another ludicrous integral with $e^{a\cos(t)}\sin(a\sin(t))a\cos(t)\:dt$ as $P \:dx$, which seems insane to solve.
So I think I may be missing some trick in doing this problem. What am I missing and how should I do this?
-
## 1 Answer
You apply this theorem with $\partial_{x}Q=e^{x}\cos[y],\partial_{y}P=e^{x}\cos[y]$, and the difference is trivially 0. I do not see any trouble in the computation.
Note $$r^{2}=\cos[\theta]$$ can be simplified as $x^{2}+y^{2}=\cos[\arctan[y/x]]=\frac{1}{\sqrt{\frac{y^{2}}{x^{2}}+1}}=\frac{x}{\sqrt{x^{2}+y^{2}}}\leftrightarrow (x^{2}+y^{2})^{3/2}=x$. A graph can be found in here:
http://www.wolframalpha.com/input/?i=plot+r^{2}%3D\cos[\theta]
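(An added observation, not in the original answer: since $P\,dx + Q\,dy = d\left(e^{x}\sin y\right)$ is exact, its line integral around the closed right-hand loop of $r^2 = 4\cos\theta$ is zero with no parametrization needed.)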
-
oh wow- that was stupid - thanks! – praks5432 Nov 12 '12 at 0:51
http://physics.stackexchange.com/questions/6026/how-to-calculate-the-upper-limit-on-the-number-of-days-weather-can-be-forecast-r
# How to calculate the upper limit on the number of days weather can be forecast reliably?
To put it bluntly, weather is described by the Navier-Stokes equation, which in turn exhibits turbulence, so eventually predictions will become unreliable.
I am interested in a derivation of the time-scale where weather predictions become unreliable. Let us call this the critical time-scale for weather on Earth.
We could estimate this time-scale if we knew some critical length and velocity scales. Since weather basically lives on an $S^2$ with radius of the Earth we seem to have a natural candidate for the critical length scale.
So I assume that the relevant length scale is the radius of the Earth (about 6400km) and the relevant velocity scale some typical speed of wind (say, 25m/s, but frankly, I am taking this number out of, well, thin air). Then I get a typical time-scale of $2.6\cdot 10^5$s, which is roughly three days.
The result three days seems not completely unreasonable, but I would like to see an actual derivation.
Does anyone know how to obtain a more accurate and reliable estimate of the critical time-scale for weather on Earth?
-
– Johannes Feb 27 '11 at 16:17
## 3 Answers
I am not sure how useful this "back of the envelope" calculation of reliability of Numerical Weather Prediction is going to be. Several of the assumptions in the question are not correct, and there are other factors to consider.
Here are some correcting points:
1. The Weather is 3 dimensional and resides on the surface of the planet up to a height of at least 10km. Furthermore the density decreases exponentially upwards. Many atmospheric phenomena involve the third dimension such as rising and falling air circulation effects; jet streams (7-16km).
2. The equations are fluid dynamics plus thermodynamics. The Navier-Stokes equations are not only too complex to solve, but in a sense inappropriate as well for the larger scales. One problem is that they might introduce "high frequency" effects (akin to every individual gust of wind or lapping of waves), which should be ignored. The earliest weather prediction models were seriously wrong because the high frequency fluctuations of pressure needed to be averaged rather than directly extrapolated. Here is a possible equation for one point of the atmosphere:
Tchange/time = solar + IR(input) + IR(output) + conduction + convection + evaporation + condensation + advection
The regionality of the model is important too. In a global model there will be larger grid sizes and sources of error from initial conditions and surface and atmosphere top boundary conditions. In a mesoscopic prediction there will be smaller grid sizes but sources of error from the input edges as well. The smallest scale predictions of airflow around buildings and so on might be a true CFD problem using the Navier-Stokes equations however.
I don't know that any calculation is done to predict the inaccuracies, although the different types, including the numerical analysis (chaos) error sources, can be studied separately. Models are tested against historical data for accuracy overall, with predictions made 6-10 days out.
To assume that the atmosphere "goes turbulent" after 3 days seems to conflate several issues together.
-
There has been some work to try to quantify the accuracy of a given days predictions. Currently the meteorologists compare the results from several different models, and look at the consistency from run to run of a given model to get a feeling for how likely the predictions are. There has been some work to that suggests that running a suite of models -usually the same model, with multiple perturbations that represent the error envelope of the observations might be able to improve this significantly. The degree of confidence for say 5days out depends upon the dynamics. – Omega Centauri Feb 27 '11 at 22:20
I agree with your last statement. But then, what is a good upper bound and how to obtain it? – Daniel Grumiller Feb 27 '11 at 23:47
@Daniel Grumiller : The ensemble method mentioned by @Omega Centauri is another practical method for obtaining and improving accuracy. However the sources of error are multiple, and the biggest might not even be from fluid dynamics, but from Chaos, which is not this question. – Roy Simpson Feb 28 '11 at 11:49
I don't think that such a computation of a theoretical limit of accuracy is possible. There are several sources of uncertainty in weather models:
• initial and boundary data,
• parameterizations,
• numerical instability, rounding and approximation errors of the numerical scheme employed to solve the Navier-Stokes equations for the atmosphere.
The term "parameterization" refers to the approximation of all subgrid processes, these are all processes/influences that happen at a scale that is smaller than the length of a grid cell. This includes effects from the topography, or the local albedo. More sophisticated approximations to subgrid processes can actually lead to less predictivity of a model, because the needed more detailed initial and boundary data are not available.
The Navier-Stokes equations themselves are usually approximated up to a minimum length scale that is way larger than the length scales that would be necessary to resolve turbulent flows, these kinds of approximations are called large eddy simulations.
The accuracy of this truncation depends critically on the kind of flow and turbulence.
While I don't think that it is possible to derive a theoretical limit, what people do instead is to perform ensemble runs, in which the results of a weather model calculated with slightly perturbed initial and boundary data are compared.
An example of such a "twin" experiment can be found here:
• David Strauss, Dan Paolino: Intermediate time error growth and predictability: tropics versus mid-latitudes
(The result is that the error becomes significant after ca. 15 days of simulated time.)
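As a rough illustration of why numbers of this order come up (my own back-of-the-envelope sketch, not taken from the paper): if the initial analysis error grows roughly exponentially with some doubling time until it saturates, the useful forecast horizon depends only logarithmically on the initial error.

```python
# Sketch with assumed numbers: the error-doubling time and initial error are made up.
import math

def horizon_days(eps0=0.01, t_double=2.0, useful_fraction=0.5):
    # days until eps0 * 2**(t / t_double) reaches useful_fraction of saturation
    return t_double * math.log2(useful_fraction / eps0)

print(horizon_days())             # ~11.3 days with these assumed numbers
print(horizon_days(eps0=0.001))   # ~17.9 days: 10x better initial data buys only ~6-7 extra days
```

The logarithm is why quoted limits tend to cluster around one to two weeks rather than scaling directly with the quality of the initial data.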
-
15 days seems a lot. I suppose there is no simple way to see how this time-scale comes about, is there? – Daniel Grumiller Feb 27 '11 at 23:46
First, you would need to define a "reliable" forecast, e.g. x% confidence in temperature, wind, rain, etc. within some limits.
Second, your question presumes that weather is not more or less deterministic. At some point, human behavior produces effects that influence the chaotic model. However, the essential problem with chaotic models is the sensitivity to the initial conditions. This results in limitations due to data collection as well as computational issues, e.g. rounding errors. It is not clear what the inherent limits to model improvement are.
-
http://physics.stackexchange.com/questions/tagged/hilbert-space+quantum-information
# Tagged Questions
### Entangled or unentangled?
I got a little puzzled when thinking about two entangled fermions. Say that we have a Hilbert space in which we have two fermionic orbitals $a$ and $b$. Then the Hilbert space $H$'s dimension is just ...
### Why must quantum logic gates be linear operators?
Why must quantum logic gates be linear operators? I mean, is it just a consequence of quantum mechanics postulates?
### Shape of the state space under different tensor products
I am currently studying generalized probabilistic theories. Let me roughly recall how such a theory looks like (you can skip this and go to "My question" if you are familiar with this). Recall: In a ...
### What Shannon channel capacity bound is associated to two coupled spins?
The question asked is: What is the Shannon channel capacity $C$ that is naturally associated to the two-spin quantum Hamiltonian $H = \boldsymbol{L\cdot S}$? This question arises with a view ...
http://nrich.maths.org/6554/note
# Coded Hundred Square
### Why do this problem?
This problem challenges learners to think about the construction of the familiar hundred square and about the first hundred numbers in our counting system. It consolidates understanding of place value and promotes useful discussion between pairs of learners working together as they will have to conjecture, explain and justify their ideas.
### Possible approach
You could introduce this problem first by asking the group to picture a hundred square in their mind's eye. Challenge them to answer questions orally such as:
• What is immediately below $10$? [$20$]
• What is two squares to the left of $99$? [$97$]
• I start on $34$ and move three rows down and three places to the right. What do I land on? [$67$]
Each time, invite children to explain how they came to a solution. You may like to ask some learners to post their own challenge for the rest of the group. It may be that some children will want to refer to a paper copy of a hundred square to check their responses, but don't actively encourage this!
You can then present the problem itself, ideally on the interactive whiteboard, and ask pupils to work in pairs so that they are able to talk through their ideas with a partner. They could either use the interactivity on a computer or cut out the pieces from these two printed sheets .
At the end the group could discuss how they discovered the clues needed to put the whole together and what they learnt about the construction of a hundred square. It is interesting to see the number of different ways adopted - each one just as valid as the others. The important point is being able to justify why one piece goes in a particular place. You may decide to highlight the value of talking with someone else while working on this task. How did it help them?
### Key questions
Where could we start?
What might the first numbers look like?
What might the last number look like?
What do you know about the multiples of $11$?
What will be the same in each column?
What will be the same for the first nine numbers in each row?
### Possible extension
Learners could either try Alien Counting which introduces different bases or Which Scripts? which looks at numbers in different languages.
### Possible support
Some children may benefit from having an ordinary hundred square to refer to as they work on this problem. It might help to try this Hundred Square Jigsaw first, but be aware that this one goes from zero to ninety-nine, not one to a hundred.
http://mathoverflow.net/revisions/118687/list
## Return to Question
2 edited body
Hi,
During my research I found an interesting fact, and I'd like to know if it's interesting for others as well. Find a function $g(x,t):[0,T]\times[0,T]\rightarrow[0,T]$ such that for any twice differentiable $f(x):[0,T]\rightarrow[0,T]$ such that $f(0)=f'(0)=0$, the equality $$f(x)=\intop_0^Tf''(t)g(x,t)dt$$ holds. Note that $g$ is independent of $f$.
I found such a $g$, and I'll post it as an answer soon. I'd like to know if this is simple/known/interesting.
1
# RFC for definite integral connection to second derivative
Hi,
During my research I found an interesting fact, and I'd like to know if it's interesting for others as well. Find a function $g(x,t):[0,T]\times[0,T]\rightarrow[0,T]$ such that for any twice differentiable $f(x):[0,T]\rightarrow[0,T]$ such that $f(0)=f'(T)=0$, the equality $$f(x)=\intop_0^Tf''(t)g(x,t)dt$$ holds. Note that $g$ is independent of $f$.
I found such a $g$, and I'll post it as an answer soon. I'd like to know if this is simple/known/interesting.
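For what it's worth, a remark added here (this is not the answer the poster promised): for the revised boundary conditions $f(0)=f'(0)=0$, Taylor's theorem with integral remainder gives $f(x)=\int_0^x (x-t)f''(t)\,dt$, so $g(x,t)=(x-t)$ for $t\le x$ and $g(x,t)=0$ for $t>x$ works; for the earlier conditions $f(0)=f'(T)=0$, an integration by parts shows $g(x,t)=-\min(x,t)$ works. In both cases $g$ is a Green's function for $d^2/dx^2$ with the stated boundary conditions.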
http://mathhelpforum.com/calculus/211539-sketch-find-area-region-bounded-given-curves-choose-variab.html
# Thread:
1. ## Sketch and find the area of the region bounded by the given curves. Choose the variab
Sketch and find the area of the region bounded by the given curves. Choose the variable of integration so that the area is written as a single integral.
x = y^2
x = 4
I am not sure where to start on this one.....If I could get some advice on how to graph this, that would be a great start. I have attached my attempt at graphing this.
Thanks in advance!
2. ## Re: Sketch and find the area of the region bounded by the given curves. Choose the va
Your graph is missing the bottom half of the parabola. When you draw that, you should have a region to integrate. I think the variable of integration can be either x or y; you get a single integral either way.
- Hollywood
3. ## Re: Sketch and find the area of the region bounded by the given curves. Choose the va
Thanks Hollywood....
Would this graph work?
4. ## Re: Sketch and find the area of the region bounded by the given curves. Choose the va
You want something like:
5. ## Re: Sketch and find the area of the region bounded by the given curves. Choose the va
Here is my worked out solution. Look correct?
6. ## Re: Sketch and find the area of the region bounded by the given curves. Choose the va
Originally Posted by JDS
Here is my worked out solution. Look correct?
Yes it does.
But I did not do the actual calculations. Here they are.
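(The attached calculations are not reproduced here; for reference — my own computation, not the attachment — integrating with respect to $y$ gives the area $\int_{-2}^{2}\left(4-y^{2}\right)dy = \left[4y - \tfrac{y^{3}}{3}\right]_{-2}^{2} = \tfrac{32}{3}$.)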
7. ## Re: Sketch and find the area of the region bounded by the given curves. Choose the va
Originally Posted by MarkFL2
You want something like:
Or perhaps the OP wants the region bounded by his/her functions and the x-axis...
8. ## Re: Sketch and find the area of the region bounded by the given curves. Choose the va
Originally Posted by Prove It
Or perhaps the OP wants the region bounded by his/her functions and the x-axis...
Perhaps. I actually thought of that. But the actual post said, "Sketch and find the area of the region bounded by the given curves ..., $x = y^2$, $x = 4$", so I went with that.
- Hollywood
http://physics.stackexchange.com/questions/1830/fourier-analysis-for-waves
# Fourier analysis for waves [closed]
If we have 1D wave equation:
$$\frac{\partial^2 \psi}{\partial x^2}=\frac{1}{c^2}\frac{\partial^2 \psi}{\partial t^2}$$
we say that it's always possible to decompose the generic solution $f(x+ct)+g(x-ct)$ using Fourier Transforms.
But the Fourier transform representation
$$\frac{1}{2\pi} \int_{\mathbb{R}} f(\omega)e^{i\omega t}d\omega$$
involves a single variable, while here we have two variables, $x$ and $t$. How can we use the FT in the form
$$\int_{\mathbb{R}} f(\omega)e^{i(k(\omega)x\mp\omega t)}d\omega \; ?$$
We have two different types of evolution, $+ct$ and $-ct$... I have seen situations with no dispersion law, where we consider only the $x+ct$ wave and use the point $x=0$ to find the spectrum of waves $f(\omega)$, but can someone explain the general method to me?
In 2D or 3D I haven't any idea about how I can do all this. Someone can explain me this too?
Thank you very much!
-
This question seems rather incomplete still, please edit. – Noldorin Dec 11 '10 at 22:04
I concur. Also, I have a feeling this really belongs to math.SE as it is a standard mathematical topic of PDE and Fourier analysis. No actual physics in this. – Marek Dec 11 '10 at 22:22
## closed as off topic by David Zaslavsky♦, Marek, NoldorinDec 12 '10 at 1:24
Questions on Physics Stack Exchange are expected to relate to physics within the scope defined in the FAQ. Consider editing the question or leaving comments for improvement if you believe the question can be reworded to fit within the scope. Read more about closed questions here.
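For reference (added here; the question was closed without receiving an answer, and this is only a sketch of the standard approach): carry one spectral amplitude for each direction of propagation and fix both from the initial data,
$$\psi(x,t)=\int_{\mathbb{R}}\Big[A(k)\,e^{i(kx-\omega(k)t)}+B(k)\,e^{i(kx+\omega(k)t)}\Big]\,dk,\qquad \omega(k)=c|k|.$$
Setting $t=0$ shows that $A(k)+B(k)$ is the Fourier transform of the initial profile $\psi(x,0)$, while $-i\omega(k)\,[A(k)-B(k)]$ is the Fourier transform of the initial velocity $\partial_t\psi(x,0)$, so the two families of waves are fixed by the initial data. In 2D or 3D the same construction applies with $k$ replaced by a wave vector $\mathbf{k}$ and $\omega = c|\mathbf{k}|$.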
http://mathhelpforum.com/advanced-statistics/173403-help-understanding-filtered-probability-spaces.html
# Thread:
1. ## Help understanding filtered probability spaces
Hi
I am currently studying a course in financial economics for a professional actuarial qualification and I'm having trouble with some of the probability theory.
I'm having trouble understanding probability spaces and filtrations. Can anyone help? I figure that this level of mathematical theory won't appear on the exam but I'd feel more comfortable if I understood it.
From what I've read, a probability space is a triple (S, F, P)
• S is the space of all possible outcomes
• F is a collection of subsets of S, a sigma-algebra (i.e. closed under complement in S, and under countable (possibly infinite) unions and hence under intersection)
• P is a measure of the elements of F such that P:F -> [0,1] on the reals
For a discrete case, each s in S can be thought of as an event, a single outcome of running through an experiment or observing a share price move. Each element in F is a subset of S, a collection of events (possibly satisfying some condition, like every outcome in which the price increases by a certain amount). The probability measure P assigns a value between 0 and 1 to each element in F.
S and the null-set are elements of any sigma-algebra over S and have probabilities P(S) = 1 and P(null-set) = 0. Intuitively, the probability of anything at all happening is 1 and the probability of nothing happening is 0
I'm comfortable with everything above (though maybe I just think I understand it). My trouble is with filtrations.
A filtration {F_t}t>=0 is a collection of ordered sub-sigma algebras such that F_s is a subset of (or equal to) F_t if s <= t
Question:
• Does this mean that each F_t is also a subset of F? Hence that each F_t is also a sigma-algebra on S?
If t is thought of as the time, then each F_t is the history of the process up to t... This I don't get at all.
I'll use an example of a three-step binomial tree to illustrate my problem (this is exactly equivalent to tossing a coin three times or a one dimensional random walk).
• At each step, a value can randomly move up (u) or down (d).
• Thus, the state space S = {uuu, uud, udu, udd, duu, dud, ddu, ddd} - all possible outcomes of three steps.
• F could then be a collection of subsets of S. I think that F in a discrete state space is taken to be the power set of S: the set of all subsets of S, but I'm not sure.
How can the filtration {F_t} be understood as the "history" of the process? My idea of what this means is outlined below, but even as I type it I don't think it makes sense.
Is F_2 a sigma algebra over a different state space, say S_2 = {uu, ud, du, dd}? In this case the state space after three steps would have to be reconstructed to include the elements of S_2 (and by extension) S_1 in order to allow F_2 to be a sub-sigma algebra of F (defined over S).
Thus, rewrite S = {u, d, uu, ud, du, dd, uuu, uud, udu, udd, duu, dud, ddu, ddd}
and construct F = pow(S).
Say the first step is up, and the second step is down. So do we construct F_2 as the smallest sigma-algebra over S which contains the subset {ud}? This doesn't seem to make sense, as such a collection would be (S, null, {ud}, S/{ud}), where S/{ud} is the complement.
I think that I'm rambling now, so I'll stop. Can anybody explain this to me, or tell me if I'm heading in completely the wrong direction?
Any help would be appreciated.
Many thanks
Barry
2. Originally Posted by tensorproduct
A filtration {F_t}t>=0 is a collection of ordered sub-sigma algebras such that F_s is a subset of (or equal to) F_t if s <= t
Question:
• Does this mean that each F_t is also a subset of F? Hence that each F_t is also a sigma-algebra on S?
Yes, $\mathcal{F}_t$ are increasing subsets of $\mathcal{F}$ such that each of them is a sigma algebra.
If t is thought of as the time, then each F_t is the history of the process up to t... This I don't get at all.
This is not true in general. This is the case when you take $\mathcal{F}_t:=\sigma(X_s:0\leq s\leq t)$ which is exactly the information of the paths up to time t (in case the notion is not clear, it is the smallest sigma algebra generated by $X_s^{-1}(B)$ where B is a Borel set).
I'll use an example of a three-step binomial tree to illustrate my problem (this is exactly equivalent to tossing a coin three times or a one dimensional random walk).
• At each step, a value can randomly move up (u) or down (d).
• Thus, the state space S = {uuu, uud, udu, udd, duu, dud, ddu, ddd} - all possible outcomes of three steps.
• F could then be a collection of subsets of S. I think that F in a discrete state space is taken to be the power set of S: the set of all subsets of S, but I'm not sure.
How can the filtration {F_t} be understood as the "history" of the process? My idea of what this means is outlined below, but even as I type it I don't think it makes sense.
Is F_2 a sigma algebra over a different state space, say S_2 = {uu, ud, du, dd}? In this case the state space after three steps would have to be reconstructed to include the elements of S_2 (and by extension) S_1 in order to allow F_2 to be a sub-sigma algebra of F (defined over S).
Thus, rewrite S = {u, d, uu, ud, du, dd, uuu, uud, udu, udd, duu, dud, ddu, ddd}
and construct F = pow(S).
Say the first step is up, and the second step is down. So do we construct F_2 as the smallest sigma-algebra over S which contains the subset {ud}? This doesn't seem to make sense, as such a collection would be (S, null, {ud}, S/{ud}), where S/{ud} is the complement.
I think that I'm rambling now, so I'll stop. Can anybody explain this to me, or tell me if I'm heading in completely the wrong direction?
Any help would be appreciated.
Many thanks
Barry
You are getting confused between the value of the process and the state space. To define the process you want, you need to consider $S=\{f: f:\{1,2,3\}\rightarrow \{u,d\}\}$.
I recommend reading Rogers & Williams if you need a reference for this stuff.
3. Hi Focus, thanks for the quick response.
Originally Posted by Focus
You are getting confused between the value of the process and the state space. To define this process you want you need to consider $S=\{f: f:\{1,2,3\}\rightarrow \{u,d\}\}$.
You are definitely right in saying that I'm confused. I don't really know how to interpret that expression.
Is $f:\{1,2,3\}\rightarrow \{u,d\}$ a defined function?
Is there any way of listing explicitly the elements contained in $S$? How does it differ from a set of all possible outcomes?
Originally Posted by Focus
I recommend reading Rogers & Williams if you need a reference for this stuff.
Is that this Rogers and Williams?
4. Originally Posted by tensorproduct
You are definitely right in saying that I'm confused. I don't really know how to interpret that expression.
Is $f:\{1,2,3\}\rightarrow \{u,d\}$ a defined function?
Is there any way of listing explicitly the elements contained in $S$? How does it differ from a set of all possible outcomes?
I mean the set of functions that map 1,2,3 to u,d.
This is essentially all the things your process could be. This set will be the same as your set S, with one added bonus: you can define the process X_n to be $X_n(f)=f(n)$.
A better example would be a simple random walk. Think about the space of functions $f:\{0,1,2\}\rightarrow \mathbb{Z}$ and X_n defined as before. What is F_1? Well, X_1 is either 1 or -1, so $\mathcal{F}_1=\{\emptyset,\{f:f(0)=0, f(1)=1\},\{f: f(0)=0, f(1)=-1\},\{f:f(0)=0, f(1)= \pm 1\}\}$.
Is that this Rogers and Williams?
Yes, except you need volume 1 (not 2).
5. Originally Posted by Focus
I mean the set of functions that map 1,2,3 to u,d.
This is essentially all the things your process could be. This set will be the same as your set S, with one added bonus: you can define the process X_n to be $X_n(f)=f(n)$.
Okay, but the elements of $S$ will be functions analogous to the elements I listed before: $\{uuu, uud, \ldots\}$
Where, for example, $uuu$ corresponds to a set of functions $\{f:f(1)=u,f(2)=u,f(3)=u\}$
Right? With $\mathcal{F}$ an algebra defined over these.
So, $\mathcal{F}_1$ would be the subset of $\mathcal{F}$ for which the outcome of the first move is known:
$\mathcal{F}_1=\{\{f:f(1)=u\},\{f:f(1)=d\},\{f:f(1) = u\text{ or }d\}\}$
Alternatively, in my previous notation:
$\mathcal{F}_1=\{\{uuu,uud,udu,udd\},\{duu,dud,ddu,ddd\},\emptyset,S\}$
(With the null-set there in order to make this an algebra.)
and
$\mathcal{F}_2=\{\{f:f(1)=u,f(2)=u\},\{f:f(1)=u,f(2)=d\},...\}$
I'm not sure how $\mathcal{F}_2$ has "more" information than $\mathcal{F}_1$...
Yes, except you need volume 1 (not 2).
Well, I've tracked that down. I may need to refresh a lot of the basic analysis in my head before i get to the meat of it.
6. Originally Posted by tensorproduct
I'm not sure how $\mathcal{F}_2$ has "more" information than $\mathcal{F}_1$...
The sigma algebra F_2 strictly contains F_1. This is why you have more information. The bigger the sigma algebra, the more questions you can ask. Think of a sigma algebra as the set of questions you can ask.
For example let $\Omega=\{1,2,3,4,5,6\}$ (roll of a die), let F be the discrete sigma algebra, and let $\mathcal{G}:=\{\emptyset,\{1,3,5\},\{2,4,6\},\Omega\}$. Now which sigma algebra gives you more information? With G, you can only know if you rolled an even or an odd number, whereas with F you know exactly which number you rolled.
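To make the dice example concrete, here is a small Python sketch (an illustration added here, not part of the original thread). It builds the sigma-algebra generated by a partition of $\Omega$ and checks which yes/no questions each sigma-algebra can answer.

```python
# "More sets = more questions you can answer": build the sigma-algebra generated by a
# partition of Omega = {1,...,6} and test which events are measurable in it.
from itertools import combinations

omega = frozenset({1, 2, 3, 4, 5, 6})

def sigma_from_partition(blocks):
    """All unions of blocks, plus the empty set: the sigma-algebra generated by the partition."""
    blocks = [frozenset(b) for b in blocks]
    sets = {frozenset()}
    for r in range(1, len(blocks) + 1):
        for combo in combinations(blocks, r):
            sets.add(frozenset().union(*combo))
    return sets

F = sigma_from_partition([{k} for k in omega])    # discrete sigma-algebra: every outcome known
G = sigma_from_partition([{1, 3, 5}, {2, 4, 6}])  # only "odd or even" is known

event = frozenset({6})            # the question "did I roll a 6?"
print(event in F)                 # True  -- F can answer it
print(event in G)                 # False -- G cannot
print(frozenset({2, 4, 6}) in G)  # True  -- G can answer "was the roll even?"
```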
7. Originally Posted by Focus
The sigma algebra F_2 strictly contains F_1. This is why you have more information. The bigger the sigma algebra, the more questions you can ask. Think of a sigma algebra as the set of questions you can ask.
For example let $\Omega=\{1,2,3,4,5,6\}$ (roll of a die), let F be the discrete sigma algebra, and let $\mathcal{G}:=\{\emptyset,\{1,3,5\},\{2,4,6\},\Omega\}$. Now which sigma algebra gives you more information? With G, you can only know if you rolled an even or an odd number, whereas with F you know exactly which number you rolled.
Aha, now I get it - or at least I think I do. I'm still a long way from understanding the fully continuous case, but this is a good start.
Thanks, Focus, you've been a great help.
8. Thanks Focus! I, too, have been struggling with sigma-algebras; you just confirmed my intuitive understanding.
Interestingly, these concepts remind me of 'information sets' in game theory (extensive games with imperfect information), where the player sometimes knows and sometimes does not know his exact position in the game. The more 'partitioned' his information sets are, the better he is informed of his place in the game (and the consequences of his future moves). I wonder if you know what I am talking about, and if there are similarities with the above sigma algebra concepts.
http://mathoverflow.net/questions/39479?sort=votes
## Kalman filtering: 1D case
What will the Kalman filtering model look like in the case where I just receive some data and want to filter it from noise? The data is actually the acceleration of some object. So the system must be like this:
$$x_t = A_tx_{t-1} + B_tu_t + \epsilon_t$$ $$z_t = C_tx_t + \delta_t$$ where $\epsilon_t$ and $\delta_t$ are white noise and $x_t$ is the state variable. The problem is that I can't figure out what the system will look like in my case, when I receive acceleration measurements (observations $z_t$) at each time period $\Delta t$. I think I don't need the control vector $u_t$ in my case, so the system will be: $$x_t = A_tx_{t-1} + \epsilon_t$$ $$z_t = C_tx_t + \delta_t$$ I suppose, but am not sure, that the filter system is simply: $$x_t = x_{t-1} + \epsilon_t$$ $$z_t = x_t + \delta_t$$ But it seems too simple. How do I make the first iteration in the Kalman filtering procedure?
EDIT 1: here is the Kalman filtering algorithm, taken from the book Probabilistic Robotics.
Kalman_filter($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$) $$\bar{\mu}_t = A_t\mu_{t-1} + B_tu_t$$ $$\bar{\Sigma}_t = A_t\Sigma_{t-1}A_t^T + R_t$$ $$K_t = \bar{\Sigma}_tC_t^T\left(C_t\bar{\Sigma}_tC_t^T + Q_t\right)^{-1}$$ $$\mu_t = \bar{\mu}_t + K_t\left(z_t - C_t\bar{\mu}_t\right)$$ $$\Sigma_t = \left(I - K_tC_t\right)\bar{\Sigma}_t$$ return $\mu_t$, $\Sigma_t$
The thing that I do not understand here is: the data that is unknown is $\Sigma_0$ and $\mu_0$. I suppose that I can choose some values for these myself. But one more piece of data that is unknown to me is $R_t$. It comes from the state transition probability, which is given by $p(x_t|u_t,x_{t-1})$. And we got: $$x_t = A_t\mu_{t-1} + B_tu_t+\epsilon_t$$ as one of the equations of the Kalman filter.
We also know the normal distribution: $$p(x) = \det\left(2\pi\Sigma\right)^{-1/2}\exp\left(-\tfrac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right)$$ (I've already asked a question from which you can see where it comes from: question)
So we have:
$\mu_t = A_tx_{t-1} + B_tu_t$ (discussed in the question; the link is above). Also, $R_t$ is the covariance of the posterior state. Here is the whole formula:
$$p(x_t|u_t, x_{t-1}) = det\left(2\pi R_t\right)^{-1/2}exp\left(-1/2(x_t-A_tx_{t-1}-B_tu_t)^TR_t^{-1}(x_t-A_tx_{t-1}-B_tu_t)\right)$$
So, how should the $R_t$ value be estimated? It depends on $t$. If I set some values for $\Sigma_0$ and $\mu_0$ myself, then what should be done with the $R_t$ which appears in the Kalman filter algorithm listed above, in this step: $$\bar{\Sigma}_t = A_t\Sigma_{t-1}A_t^T + R_t$$ ?
Correct me please if I am wrong: $$R_t = cov\left(x_t|x_{t-1}, u_t\right) = E\left[x_t^2|x_{t-1}, u_t \right] - \left(E\left[x_t|x_{t-1},u_t\right]\right)^2$$ $$R_t = E\left[x_t^2|x_{t-1}, u_t \right] - \left(A_tx_{t-1}+B_tu_t\right)^2$$ So how is $R_t$ to be calculated? Should it also be set by the user? Actually $R_t$ is the covariance of the noise $\epsilon_t$ in the equation $$x_t = A_tx_{t-1} + B_tu_t + \epsilon_t$$ and it depends on $t$. The same goes for the covariance of the noise $\delta_t$ in the case of this equation of the Kalman filter: $$z_t = C_tx_t + \delta_t$$
EDIT 2: So, as I understand it, four parameters should be selected (tuned) by the user; they are:
$Q_t$, $R_t$, $\mu_0$ and $\Sigma_0$
Am I right?
-
I'm not sure what you're asking. Everything works fine in that case. That's actually a good case to use to try to understand the general case. – arsmath Sep 21 2010 at 15:12
Actually, to begin this filtering process I need to know the initial $\mu_t$ and $\sigma_t$ where $t=0$, but I have no idea how to calculate them. In the general case, at the prediction step we have: $$\bar{\mu}_t = A_t\mu_t + B_tu_t + \epsilon_t$$ So we need to know the initial $\mu_0$. The same thing, and even more complicated, for the initial covariance $\Sigma_0$. I do not remember the formula right now for calculating $\bar{\Sigma}_t$. I can assume that the $\mu_0$ of $x_0$ is $0$ ($\mu_0 = 0$), but that is just how I think it should be, and I can't put anything for $\Sigma_0$ – maximus Sep 21 2010 at 18:39
There is a mistake, it should be: $$\bar{\mu}_t = A_t\mu_{t-1}+B_tu_t+\epsilon_t$$ I have no idea how to calculate the initial covariance for $x_0$ in order to start the process. – maximus Sep 21 2010 at 18:42
Read the link I wrote earlier: citeseerx.ist.psu.edu/viewdoc/… (Part 3 A Kalman Filter in Action: Estimating a Random Constant) They've worked out your problem right there. – Gilead Sep 22 2010 at 3:45
Very useful! Thank you! – maximus Sep 22 2010 at 5:58
## 2 Answers
A few remarks on your problem:
• You have to assume something for your initial variance (not covariance in this case, since it's univariate). The same applies in the multivariate case -- you have to know something about $P_{0|0}$. You do not calculate the initial variance.
• If you really have no idea what to choose for your initial variance, choose a large number. This is equivalent to saying "I don't know what's going on in the system, so I'm going to be conservative and assume the worst." As the Kalman filter iterates, it will generally converge and the variance will tend to decrease.
• Given a measurement $z_{0}$, you can do the rest (Kalman gain, prediction etc.). In fact in the linear case, it is proven that the Kalman gain can be calculated off-line (see "Separation Principle" http://en.wikipedia.org/wiki/Separation_principle).
• If your filter is having trouble converging (very unlikely in this simple case), you can use something called a Re-iterative Kalman Filter (http://tinyurl.com/2fokknm). This Kalman filter iterates $n$ steps and uses the information collected to correct $x_{0}$. At $n+1$, it uses the corrected $x_{0}$ and recursively calculates $x_{n+1}$; thereafter the Kalman filter will usually converge rapidly.
Peter D. Joseph (a pioneer in the use of Kalman Filters in the 1960s) wrote a simple tutorial on the subject in which he gives the reader an intuitive understanding of what these filters do -- in it he motivates the subject through the derivation of a 1-D example. Unfortunately the webpage no longer exists; however I managed to find the original document in text format: http://www.humintel.com/hajek/kalman.txt
If you're willing to reformat it into $\LaTeX$, I think you'll find the document helpful.
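For concreteness, here is a minimal 1-D sketch of the filter being discussed (an illustration added here, not taken from Joseph's tutorial or the answer above). It assumes $A_t = C_t = 1$ with no control input, and the values of $R$, $Q$, $\mu_0$ and $\Sigma_0$ are user-chosen, with a deliberately large initial variance as suggested in the first bullet.

```python
# Minimal 1-D Kalman filter: random-walk state x_t = x_{t-1} + eps_t, observation z_t = x_t + delta_t.
# R = process-noise variance, Q = measurement-noise variance; both are tuned, not computed.
import random

def kalman_1d(measurements, mu0=0.0, sigma0=1e6, R=1e-4, Q=0.1):
    mu, sigma = mu0, sigma0          # huge sigma0 = "I don't trust my initial guess"
    estimates = []
    for z in measurements:
        # predict: the mean is unchanged, the variance grows by R
        mu_bar, sigma_bar = mu, sigma + R
        # update with the measurement z
        K = sigma_bar / (sigma_bar + Q)          # Kalman gain
        mu = mu_bar + K * (z - mu_bar)
        sigma = (1.0 - K) * sigma_bar
        estimates.append(mu)
    return estimates

# noisy measurements of a constant acceleration of 2.0 (made-up test data)
random.seed(0)
zs = [2.0 + random.gauss(0.0, 0.3) for _ in range(50)]
print(kalman_1d(zs)[-1])   # should end up close to 2.0
```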
-
@Gilead Thank you for your answer, it is very useful, I've updated the question. And I will try to read that book but text format is a little bit not convenient to read. May be I will make a PDF using LATEX, when I am free. Thanks – maximus Sep 22 2010 at 2:50
Here's another good reference: citeseerx.ist.psu.edu/viewdoc/… (see section on Filter Parameters and Tuning - it speaks to your problem). Yes, $R_{t}$ (normally assumed constant, so $R$) has to be measured, or otherwise assumed. – Gilead Sep 22 2010 at 3:11
If you have historical data from your process, you can use it to identify the noise/disturbance structure using System Identification techniques -- that's the way to get accurate starting covariance matrices. If you have no data, then you must assume something. – Gilead Sep 22 2010 at 3:17
Thank you very much! Very useful information! – maximus Sep 22 2010 at 3:19
The original uses of the filter were in navigation, although Kalman was arguing from an electrical engineering perspective. The first huge success was on the Apollo missions. So there are texts at a variety of levels. I photocopied two of them; it was a lot of effort, but these engineering-related books are amazingly expensive, even by mathematics standards. They are:
Global Positioning Systems, Inertial Navigation, and Integration (Second Edition, 2007, Wiley) Mohinder S. Grewal, Lawrence R. Weill, Angus P. Andrews
Kalman Filtering: Theory and Practice using MATLAB (second edition, 2001, Wiley) Mohinder S. Grewal, Angus P. Andrews
A year or two ago I was tutoring a CS major and the filter was included. The presentation (no course textbook, the lecturers wrote it as they went along) was hopeless. I encourage you to branch out to extra books. Given the nature of your questions, borrowing these books and others in some interlibrary loan would help you a good deal. It is nice that arsmath is available to answer some questions, but MO is hardly going to serve as an effective tutor for a subject that is so very intricate in practice.
-
Thank you very much! Useful information about the books! I am currently reading the probabilistic robotics, it is very good in my opinion, however some question arise while reading it.. – maximus Sep 22 2010 at 2:52
http://www.physicsforums.com/showthread.php?p=4007003
Physics Forums
## Vector in an EM Wave
Hey all. I don't really understand how the fields of an EM wave have a vector. I think I understand the vector of a static EM field, but I'm having trouble understanding it when it comes to an EM wave.
Could someone help me out a bit? Thanks. (I'm sure it's something simple that I just don't get at the moment. Self teaching is frustrating!)
Check out the animation here: http://mutuslab.cs.uwindsor.ca/schur...ave/emwave.htm As you are watching the animation, notice: if you freeze time (set T = 100 in the animation), then both E and H field vectors are sinusoidal functions of distance from the origin. If you run time (say T = 3) but freeze your position, both E and H field vectors are sinusoidal functions of time.
Is it simply that when the wave passes a charge, that charge will be accelerated in a particular direction depending on the phase of the wave at the time of the interaction? And in the opposite direction when the phase is 180 degrees later?
Yes, that's it.
You can also notice looking at the animation that when both E and B vectors have zero magnitude (where they cross x-axis), they both have maximum partial derivative with respect to time, and maximum curl. When they have maximum magnitude (at their peaks) they both have zero partial with respect to time and zero curl. These reflect Maxwell's eqns.
$$\vec{\nabla} \times \vec{E}=-\partial_t \vec{B}$$
$$\vec{\nabla} \times \vec{B}=\mu\epsilon\partial_t \vec{E}$$
http://citizendia.org/Signal-to-noise_ratio
Signal-to-noise ratio (often abbreviated SNR or S/N) is an electrical engineering concept, also used in other fields (such as scientific measurements, biological cell signaling), defined as the ratio of a signal power to the noise power corrupting the signal.
In less technical terms, signal-to-noise ratio compares the level of a desired signal (such as music) to the level of background noise. The higher the ratio, the less obtrusive the background noise is.
## Technical sense
In engineering, signal-to-noise ratio is a term for the power ratio between a signal (meaningful information) and the background noise:
$\mathrm{SNR} = {P_\mathrm{signal} \over P_\mathrm{noise}} = \left ( {A_\mathrm{signal} \over A_\mathrm{noise} } \right )^2$
where P is average power and A is RMS amplitude. Both signal and noise power (or amplitude) must be measured at the same or equivalent points in a system, and within the same system bandwidth.
Because many signals have a very wide dynamic range, SNRs are usually expressed in terms of the logarithmic decibel scale. In decibels, the SNR is, by definition, 10 times the logarithm of the power ratio. If the signal and the noise are measured across the same impedance, then the SNR can be obtained by calculating 20 times the base-10 logarithm of the amplitude ratio:
$\mathrm{SNR (dB)} = 10 \log_{10} \left ( {P_\mathrm{signal} \over P_\mathrm{noise}} \right ) = 20 \log_{10} \left ( {A_\mathrm{signal} \over A_\mathrm{noise}} \right )$
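As an illustration (an addition, not part of the original article), the following short Python snippet computes both forms of the expression above on made-up test data; the two results agree because power is the square of RMS amplitude.

```python
# SNR in dB from average powers, and the equivalent 20*log10 form from RMS amplitudes.
import math, random

random.seed(1)
n = 10000
signal = [math.sin(2 * math.pi * 50 * k / n) for k in range(n)]   # reference tone
noise  = [random.gauss(0.0, 0.05) for _ in range(n)]              # additive noise

p_signal = sum(s * s for s in signal) / n     # average power
p_noise  = sum(v * v for v in noise) / n
a_signal = math.sqrt(p_signal)                # RMS amplitude
a_noise  = math.sqrt(p_noise)

print(10 * math.log10(p_signal / p_noise))    # SNR in dB from powers
print(20 * math.log10(a_signal / a_noise))    # same value from amplitudes
```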
### Electrical SNR and acoustics
Often the signals being compared are electromagnetic in nature, though it is also possible to apply the term to sound stimuli. Due to the definition of decibel, the SNR gives the same result independent of the type of signal which is evaluated (such as power, current, or voltage).
Signal-to-noise ratio is closely related to the concept of dynamic range, where dynamic range measures the ratio between noise and the greatest un-distorted signal on a channel. SNR measures the ratio between noise and an arbitrary signal on the channel, not necessarily the most powerful signal possible. Because of this, measuring signal-to-noise ratios requires the selection of a representative or reference signal. In audio engineering, this reference signal is usually a sine wave, sounding a tone, at a recognized and standardized nominal level or alignment level, such as 1 kHz at +4 dBu (1.228 V RMS).
SNR is usually taken to indicate an average signal-to-noise ratio, as it is possible that (near) instantaneous signal-to-noise ratios will be considerably different. The concept can be understood as normalizing the noise level to 1 (0 dB) and measuring how far the signal 'stands out'. In general, higher signal to noise is better; the signal is 'cleaner'.
### Image processing and interferometry
In image processing, the SNR of an image is usually defined as the ratio of the mean pixel value to the standard deviation of the pixel values. Related measures are the "contrast ratio" and the "contrast-to-noise ratio".
The connection between optical power and voltage in an imaging system is linear. This usually means that the SNR of the electrical signal is calculated by the 10 log rule. With an interferometric system, however, where interest lies in the signal from one arm only, the field of the electromagnetic wave is proportional to the voltage (assuming that the intensity in the second, reference arm is constant). Therefore the optical power of the measurement arm is directly proportional to the electrical power, and electrical signals from optical interferometry follow the 20 log rule.
The Rose criterion (named after Albert Rose) states that an SNR of at least 5 is needed to be able to distinguish image features at 100% certainty. An SNR less than 5 means less than 100% certainty in identifying image details. [1]
### For measurement devices in general
Recording of the noise of a thermogravimetric analysis device that is poorly isolated from a mechanical point of view; the middle of the curve shows a lower noise, due to lesser surrounding human activity at night.
Any measurement device is disturbed by parasitic phenomena. This includes the electronic noise described above, but also any external event that affects the measured phenomenon: wind, vibrations, the gravitational attraction of the moon, variations of temperature, variations of humidity, etc., depending on what is measured and on the sensitivity of the device.
It is often possible to reduce the noise by controlling the environment. Otherwise, when the characteristics of the noise are known and are different from the signal's, it is possible to filter it or to process the signal.
When the noise is a random perturbation and the signal is a constant value, it is possible to enhance the SNR by increasing the measurement time.
## Digital signals
When using digital storage the number of bits of each value determines the maximum signal-to-noise ratio. In this case the noise is the error signal caused by the quantization of the signal, taking place in the analog-to-digital conversion. The noise level is non-linear and signal-dependent; different calculations exist for different signal models. The noise is modeled as an analog error signal being summed with the signal before quantization ("additive noise").
The modulation error ratio (MER) is a measure of the SNR in a digitally modulated signal. Like SNR, MER can be expressed in dB.
### Fixed point
For n-bit integers with equal distance between quantization levels (uniform quantization) the dynamic range (DR) is also determined.
Assuming a uniform distribution of input signal values, the quantization noise is a uniformly-distributed random signal with a peak-to-peak amplitude of one quantization level, making the amplitude ratio 2n/1. The formula is then:
$\mathrm{DR (dB)} = \mathrm{SNR (dB)} = 20 \log_{10}(2^n) \approx 6.02 \cdot n$
This relationship is the origin of statements like "16-bit audio has a dynamic range of 96 dB". Each extra quantization bit increases the dynamic range by roughly 6 dB.
Assuming a full-scale sine wave signal (that is, the quantizer is designed such that it has the same minimum and maximum values as the input signal), the quantization noise approximates a sawtooth wave with peak-to-peak amplitude of one quantization level[2] and uniform distribution. In this case, the SNR is approximately
$\mathrm{SNR (dB)} \approx 20 \log_{10} (2^n \sqrt {3/2}) \approx 6.02 \cdot n + 1.761$
### Floating point
Floating-point numbers provide a way to trade off signal-to-noise ratio for an increase in dynamic range. For n bit floating-point numbers, with n-m bits in the mantissa and m bits in the exponent:
$\mathrm{DR (dB)} = 6.02 \cdot 2^m$
$\mathrm{SNR (dB)} = 6.02 \cdot (n-m)$
Note that the dynamic range is much larger than fixed-point, but at the cost of a worse signal-to-noise ratio. This makes floating-point preferable in situations where the dynamic range is large or unpredictable. Fixed-point's simpler implementations can be used with no signal quality disadvantage in systems where dynamic range is less than 6.02m. The very large dynamic range of floating-point can be a disadvantage, since it requires more forethought in designing algorithms. [3]
### Notes
• Analog-to-digital converters have other sources of noise that decrease the SNR compared to the theoretical maximum from the idealized quantization noise.
• Often special filters are used to weight the noise: DIN-A, DIN-B, DIN-C, DIN-D, CCIR-601; for video, special filters such as comb filters may be used.
• The maximum possible full scale signal can be expressed as peak-to-peak or as RMS. Audio uses RMS, video uses P-P, which gives +9 dB more SNR for video.
• It is more common to express SNR in digital systems using Eb/N0 - the energy per bit per noise power spectral density.
Further information: Quantization noise, Bit resolution
## Informal use
Informally, "signal-to-noise ratio" refers to the ratio of useful information to false or irrelevant data. The difference between the actual analog value and quantized digital value due is called quantization error.
In online discussion forums such as Usenet, off-topic posts and spam are regarded as "noise" that interferes with the "signal" of appropriate discussion. Another example is Bugzilla, where "please fix this" comments clutter up the discussion without helping to solve the bug. [1] A system of moderation may improve the SNR by filtering out irrelevant posts.
The wiki collaboration model addresses the same problem in a different way, by permitting users to "moderate" content, ideally adding signal while removing noise.
## See also
• Audio system measurements
• Video quality
• Subjective video quality
• Near-far problem
• Peak signal-to-noise ratio
• SINAD (ratio of signal-including-noise-and-distortion to noise-and-distortion only)
• ENOB
• Eb/N0
• Es/N0
• Carrier to Noise Ratio (CNR or C/N)
http://physics.stackexchange.com/questions/45588/cosmology-questions-from-a-novice?answertab=active
Cosmology questions from a novice
These ideas/questions probably represent a lack of understanding on my part, but here they are:
1) Cosmologists talk about the increasing speed of expansion of the universe and of dark energy as the cause. But I keep thinking that the farther out we observe, the farther back in time we are looking, and the light we observe is from a time billions of years ago. We don't know what those objects look like or how fast they are moving now. Who's to say they haven't slowed down? How can we know now? So, how can we say that the universe is increasing in its rate of expansion? All we can say for sure is that it was expanding at that rate.
Another thought- on black holes:
2) If black holes were not composed of highly dense matter, why would there be different "sizes" of black holes? If all black holes were collapsed to a "singularity", there should be no difference in "size" (the diameter of the light-free area). Therefore, I have trouble with the singularity concept and think they are just another form of dense matter that happens to have enough gravity to hold back even photons.
-
The difference in "size" of a black hole is not a difference in the singularity but in the event horizon where general relativistic physics is still well behaved. – dmckee♦ Dec 1 '12 at 3:19
2 Answers
The answer to question 1 is that astronomers assume the universe is spatially homogeneous and isotropic, meaning that it is roughly everywhere the same density, and the density only depends on the time since the big bang. This model is either true, or we are living at the only point in the universe where it appears to be true (since we can verify that it is true from our vantage point for concentric spheres around us, and it would require a conspiracy for these spheres to look like cross sections of a homogeneous universe if it weren't truly homogeneous).
This assumption is justified theoretically today by inflationary cosmology, which predicts a homogeneous expansion with small corrections, which are predicted and matched. So it is both theoretically and observationally verified, and is certain in the scientific sense.
For the second question, you must remember that nothing can move faster than light, so if outgoing light is pulled inwards, matter, which must move slower, must be pulled inwards even more. This is why black holes can't be stabilized matter which is compressed to a high density. But it isn't true that the matter is compressed to a single spatial point either; the interior is complicated, and the matter in a spherically symmetric collapse is compressed to a single point, but this point is a time, not a spatial position, because r and t switch roles inside a black hole. This is not easy to visualize outside of GR, using a Newtonian model, so all the usual popular pictures are misleading.
-
The universe is accelerating in the sense that you fit the data (e.g. from very far away supernovae of a certain type) better by taking $\ddot{a}>0$ than otherwise. $a(t)$ is the scale factor in the FLRW metric, e.g. $ds^2=a^2(t)(-dt^2+d\vec{x}^2)$. From the Friedmann equation you get that $\rho+3p<0$, which requires a very special type of matter, generically dubbed dark energy. A cosmological constant is one important example, with $\rho=-p$.
As for your second question, the size of the black hole horizon is set by its mass. I don't see why this is logically problematic as you are implicitly suggesting.
-
http://mathoverflow.net/questions/74444/applications-for-knowing-the-singularities-parametrized-by-the-boundary-of-a-modu/74518
## Applications for knowing the singularities parametrized by the boundary of a moduli space
Given a moduli space $M$ of some smooth algebraic-geometric object such as curves, surfaces, etc., let $\overline{M}$ be a compactification of $M$. Then $\overline{M}\setminus M$ introduces singular objects into our moduli. The question is: what is the use of knowing the singularities parametrized by the boundary of the compactified moduli space, i.e. $\overline{M} \setminus M$? Usually $M$ has different compactifications, and so different "limit singular objects". Does this difference mean something?
For example: the smooth genus $g=3$ curves have the $\overline{M_3}$ compactification with only stable curves in the boundary, but it is possible to find another compactification by the GIT analysis of degree four plane curves. The singular curves present in the boundaries are quite different. What is the use of having an explicit description of them?
-
The output of de Jong's work on alterations of singularities (and subsequent work by various authors, such as Temkin and Gabber) is unrelated to moduli spaces. However, the method itself crucially relies on the ability to compactify the moduli space of smooth curves by adding (the very slightly singular) stable curves. – anon Jan 12 at 9:39
## 2 Answers
Here's a try: suppose you have a quartic surface in P^3 and ask whether the isomorphism type of plane sections varies or not. If the planes pass through a general common line, the general singularity of the curve section is an ordinary double point. If these curve sections are also irreducible, it seems that the conclusion is that the holomorphic type of the sections varies. I.e. the fact that the compactified moduli space of smooth genus 3 curves contains all irreducible nodal curves of genus 3, and is Hausdorff, implies that the plane sections of a quartic surface are not all isomorphic.
-
Here's an example: Suppose you'd like to know about the divisors on $X = \overline{M_{g,n}}$. Say for instance that you have a divisor $D$ and you'd like to know whether $D$ is ample or nef, that is, if for all curves $C$ on $X$, we have $D\cdot C > 0$ or $\ge 0$.
There's a conjecture out there called the $F$ conjecture which says that if we want to show $D$ is nef, it suffices to check $D\cdot C \ge 0$ for a smaller set of curves in the boundary strata, called the $F$-curves. Because that's a definition for which pictures help, I refer you over to http://www-irm.mathematik.hu-berlin.de/~larsen/talkM2Goettingen.pdf
Of course that's a conjecture to be proven, but it's related to a big circle of ideas surrounding the minimal model program, including the question of Hu and Keel on whether or not $\overline{M_{0,n}}$ is a Mori Dream Space ( http://arxiv.org/PS_cache/math/pdf/0004/0004017v1.pdf )
-
http://mathematica.stackexchange.com/questions/tagged/linear-algebra?page=3&sort=active&pagesize=30
# Tagged Questions
Questions on the linear algebra functionality of Mathematica.
1answer
329 views
### What is the fastest way to find an integer-valued row echelon form for a matrix with integer entries?
Let me begin by saying that this is my first post on StackExchange. I apologize in advance if I unwittingly break any of its unwritten rules of etiquette. Recently, I've been trying to understand an ...
0answers
228 views
### Inverse of a large sparse Hermitian block matrix
I am looking for a method (if it exists) for the inverse of a large sparse Hermitian block matrix. The off diagonal sparse matrices, named δ are 4x4, and they have ...
3answers
1k views
### How to symbolically do matrix “Block Inversion”?
Consider a block (partitioned) matrix matrix = ArrayFlatten[{{a, b}, {c, d}}] where, a, ...
1answer
240 views
### On the parallelization of matrix multiplications in Mathematica 8
I have installed Mathematica 8, but I think the commands for parallelizations do not work! Even when I try to test the example in the Help of Mathematica, I face with ParallelDo::nopar: No ...
1answer
339 views
### Higher order SVD
Does anyone know how to do a higher order SVD in Mathematica ? A good reference seems to be here http://csmr.ca.sandia.gov/~tgkolda/pubs/bibtgkfiles/TensorReview.pdf but I don't understand their ...
2answers
346 views
### Solving a linear equation in Mathematica
This should be easy but I can't seem to find the right way to do it. I have an equation of the form $a x + b x + c y + a z + d z = 0$, and I'd like to solve for relations between the parameters ...
3answers
572 views
### Orthonormalization of non-hermitian matrix eigenvectors
When using Orthogonalize[] one can specify which definition of "inner product" is to be used. For example, ...
2answers
371 views
### Entering block matrices for an arbitrary matrix size
Background: How to enter matrices in block matrix format? and the following: I want to create $$f(A,t) = \left [ \begin{matrix} A & t \\ 0 & 1 \end{matrix} \right ]$$ where $A$ ...
3answers
463 views
### Trying to simplify Root expressions from the output of Eigenvalues
I am trying to calculate eigenvalues of a sparse matrix with only two distinct non-zero elements, here Alpha and Beta, which are both negative reals. Mathematica returns some complex expressions with ...
0answers
240 views
### Matrix multiplication involving MatrixForm [duplicate]
Possible Duplicate: Why does MatrixForm affect calculations? I am doing a matrix multiplication, but not getting the desired output. I am doing the matrix multiplication of $A^{-1}B$ from ...
2answers
258 views
### ordering of functional eigenvalues
Is there any order to the symbolic eigenvalues of a matrix returned by the command Eigenvalues[...]? While numerical eigenvalues are listed in descending order ...
3answers
279 views
### Composition of TransformationFunctions
I have a number of rotations computed by rot = RotationTransform[theta, point], and I would like to compose them to produce one function that is the composition of ...
1answer
176 views
### Obtaining a thin/compact SVD
I'm using SingularValueDecomposition for a least-squares regression, the instruction that works fine for what I need is ...
1answer
789 views
### Simpler way of performing Gaussian Elimination?
Is there a simpler way of performing Gaussian Elimination other than using RowReduce? Such as a single built in function? Edit: Look at the example from our simulation class. Not too difficult, but ...
4answers
1k views
### Computing eigenvectors and eigenvalues
I have a (non-sparse) $9 \times 9$ matrix and I wish to obtain its eigenvalues and eigenvectors. Of course, the eigenvalues can be quite a pain as we will probably not be able to find the zeros of its ...
3answers
431 views
### Can Eigenvalues[] and Eigenvectors[] be assumed to return the same ordering?
If I do back to back calls of Eigenvalues[] and Eigenvectors[] can these be assumed to order the values and vectors the same, or ...
1answer
512 views
### Obtaining the square-root of a general positive definite matrix
I have a matrix which I know to be positive definite. The entries of the matrix might be complicated but they are all real. To find an expression for the square root of this matrix (i.e., ...
1answer
229 views
### How to fix errors in Gram-Schmidt process when using random vectors?
I first make a function to get a random vector on unit sphere in a swath around the equator. That is what the parameter $\gamma$ controls; if $\gamma = 1/2$, the vectors can be chosen anywhere on the ...
1answer
233 views
### Why is MainEvaluate being used when LinearSolve can be compiled?
According to this question LinearSolve can be Compiled. However, CompilePrint shows a MainEvaluate but no-warning is generated. It appears that LinearSolve is not ...
0answers
155 views
### NullSpace[_, Method->“OneStepRowReduction”] is sometimes wrong; how can I work out when this happens?
(This is on MMA 7.0.1.0 on OS X) I've just found a large matrix m for which NullSpace[m] and ...
2answers
972 views
### How to enter matrices in block matrix format?
Example: I have a matrix $R = \left( \begin{array}{cc} A & \mathbf{t} \\ 0 & 1 \end{array} \right)$ where $A$ is 3-by-3 and $\mathbf{t}$ is 3 by 1. Or in Mathematica ...
http://en.wikipedia.org/wiki/Centroid
# Centroid
Centroid of a triangle
In geometry and physics, the centroid or geometric center of a two-dimensional region is, informally, the point at which a cardboard cut-out of the region could be perfectly balanced on the tip of a pencil (assuming uniform density and a uniform gravitational field). Formally, the centroid of a plane figure or two-dimensional shape is the arithmetic mean ("average") position of all the points in the shape. The definition extends to any object in n-dimensional space: its centroid is the mean position of all the points in all of the coordinate directions.
While in geometry the term barycenter is a synonym for "centroid", in physics "barycenter" may also mean the physical center of mass or the center of gravity, depending on the context. The center of mass (and center of gravity in a uniform gravitational field) is the arithmetic mean of all points weighted by the local density or specific weight. If a physical object has uniform density, then its center of mass is the same as the centroid of its shape.
In geography, the centroid of a radial projection of a region of the Earth's surface to sea level is known as the region's geographical center.
## Properties
The geometric centroid of a convex object always lies in the object. A non-convex object might have a centroid that is outside the figure itself. The centroid of a ring or a bowl, for example, lies in the object's central void.
If the centroid is defined, it is a fixed point of all isometries in its symmetry group. In particular, the geometric centroid of an object lies in the intersection of all its hyperplanes of symmetry. The centroid of many figures (regular polygon, regular polyhedron, cylinder, rectangle, rhombus, circle, sphere, ellipse, ellipsoid, superellipse, superellipsoid, etc.) can be determined by this principle alone.
In particular, the centroid of a parallelogram is the meeting point of its two diagonals. This is not true for other quadrilaterals.
For the same reason, the centroid of an object with translational symmetry is undefined (or lies outside the enclosing space), because a translation has no fixed point.
## Locating the centroid
### Plumb line method
The centroid of a uniform planar lamina, such as (a) below, may be determined, experimentally, by using a plumbline and a pin to find the center of mass of a thin body of uniform density having the same shape. The body is held by the pin inserted at a point near the body's perimeter, in such a way that it can freely rotate around the pin; and the plumb line is dropped from the pin (b). The position of the plumbline is traced on the body. The experiment is repeated with the pin inserted at a different point of the object. The intersection of the two lines is the centroid of the figure (c).
(a), (b), (c): stages of the plumb line method described above.
This method can be extended (in theory) to concave shapes where the centroid lies outside the shape, and to solids (of uniform density), but the positions of the plumb lines need to be recorded by means other than drawing.
### Balancing method
For convex two-dimensional shapes, the centroid can be found by balancing the shape on a smaller shape, such as the top of a narrow cylinder. The centroid occurs somewhere within the range of contact between the two shapes. In principle, progressively narrower cylinders can be used to find the centroid to arbitrary accuracy. In practice air currents make this unfeasible. However, by marking the overlap range from multiple balances, one can achieve a considerable level of accuracy.
### Of a finite set of points
The centroid of a finite set of $k$ points $\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_k$ in $\mathbb{R}^n$ is
$\mathbf{C} = \frac{\mathbf{x}_1+\mathbf{x}_2+\cdots+\mathbf{x}_k}{k}$
This point minimizes the sum of squared Euclidean distances between itself and each point in the set.
### By geometric decomposition
The centroid of a plane figure $X$ can be computed by dividing it into a finite number of simpler figures $X_1,X_2,\dots,X_n$, computing the centroid $C_i$ and area $A_i$ of each part, and then computing
$C_x = \frac{\sum C_{i_x} A_i}{\sum A_i} , C_y = \frac{\sum C_{i_y} A_i}{\sum A_i}$
Holes in the figure $X$, overlaps between the parts, or parts that extend outside the figure can all be handled using negative areas $A_i$. Namely, the measures $A_i$ should be taken with positive and negative signs in such a way that the sum of the signs of $A_i$ for all parts that enclose a given point $p$ is 1 if $p$ belongs to $X$, and 0 otherwise.
For example, the figure below (a) is easily divided into a square and a triangle, both with positive area; and a circular hole, with negative area (b).
(a) 2D Object
(b) Object described using simpler elements
(c) Centroids of elements of the object
The centroid of each part can be found in any list of centroids of simple shapes (c). Then the centroid of the figure is the weighted average of the three points. The horizontal position of the centroid, from the left edge of the figure is
$x = \frac{5 \times 10^2 + 13.33 \times \frac{1}{2}10^2 - 3 \times \pi2.5^2}{10^2 + \frac{1}{2}10^2 -\pi2.5^2} \approx 8.5 \mbox{ units}.$
The vertical position of the centroid is found in the same way.
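A quick numerical check of the worked example (added here for illustration; the part dimensions and the hole position at x = 3 are assumptions read off the formula, not stated in the text):

```python
# Composite centroid: a 10x10 square, a right triangle of base and height 10,
# and a circular hole of radius 2.5; the hole enters with negative area.
import math

parts = [
    # (area, x-coordinate of part centroid)
    (10 * 10,            5.0),
    (0.5 * 10 * 10,      13.33),
    (-math.pi * 2.5**2,  3.0),
]

x = sum(A * cx for A, cx in parts) / sum(A for A, _ in parts)
print(round(x, 2))   # ~8.5 units, matching the text
```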
The same formula holds for any three-dimensional objects, except that each $A_i$ should be the volume of $X_i$, rather than its area. It also holds for any subset of $\R^d$, for any dimension $d$, with the areas replaced by the $d$-dimensional measures of the parts.
### By integral formula
The centroid of a subset X of $\R^n$ can also be computed by the integral
$C = \frac{\int x g(x) \; dx}{\int g(x) \; dx}$
where the integrals are taken over the whole space $\R^n$, and g is the characteristic function of the subset, which is 1 inside X and 0 outside it. Note that the denominator is simply the measure of the set X. (However, this formula cannot be applied if the set X has zero measure, or if either integral diverges.)
Another formula for the centroid is
$C_k = \frac{\int z S_k(z) \; dz}{\int S_k(z) \; dz}$
where Ck is the kth coordinate of C, and Sk(z) is the measure of the intersection of X with the hyperplane defined by the equation xk = z. Again, the denominator is simply the measure of X.
For a plane figure, in particular, the barycenter coordinates are
$C_{\mathrm x} = \frac{\int x S_{\mathrm y}(x) \; dx}{A}$
$C_{\mathrm y} = \frac{\int y S_{\mathrm x}(y) \; dy}{A}$
where A is the area of the figure X; Sy(x) is the length of the intersection of X with the vertical line at abscissa x; and Sx(y) is the analogous quantity for the swapped axes.
#### Bounded region
The centroid $(\bar{x},\;\bar{y})$ of a region bounded by the graphs of the continuous functions $f$ and $g$ such that $f(x) \geq g(x)$ on the interval $[a, b]$, $a \leq x \leq b$, is given by
$\bar{x}=\frac{1}{A}\int_a^b x[f(x) - g(x)]\;dx$
$\bar{y}=\frac{1}{A}\int_a^b \left[\frac{f(x) + g(x)}{2}\right][f(x) - g(x)]\;dx,$
where $A$ is the area of the region (given by $\int_a^b [f(x) - g(x)]\;dx$).[1]
##### Example
Semicircle with a red dot showing the centroid
Consider the semicircle bounded by $f(x)=\sqrt{1-x^2}$ and $g(x)=0$. Its area is $A=\frac{\pi r^2}{2}=\frac{\pi}{2}$.
$\bar{x}=\frac{1}{A}\int_a^b x[f(x) - g(x)]\;dx=\frac{2}{\pi}\int_{-1}^1 x\sqrt{1-x^2}\;dx=0$
$\bar{y}=\frac{1}{A}\int_a^b \left[\frac{f(x) + g(x)}{2}\right][f(x) - g(x)]\;dx=\frac{2}{\pi}\int_{-1}^1 \left[\frac{\left(\sqrt{1-x^2}\right)^2}{2}\right]\;dx=\frac{4}{3\pi}$
The centroid is located at $(0,\;\frac{4}{3\pi})$.
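As a sanity check (an addition, not part of the original article), the integrals above can be approximated numerically with a simple midpoint rule:

```python
# Numerical check of the semicircle centroid: area ~ pi/2, x-bar ~ 0, y-bar ~ 4/(3*pi).
import math

f = lambda x: math.sqrt(1 - x * x)
N = 20000
dx = 2.0 / N
xs = [-1 + (i + 0.5) * dx for i in range(N)]

A     = sum(f(x) for x in xs) * dx
x_bar = sum(x * f(x) for x in xs) * dx / A
y_bar = sum(0.5 * f(x) ** 2 for x in xs) * dx / A

print(A, x_bar, y_bar)   # ~1.5708, ~0, ~0.4244
```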
### Of an L-shaped object
This is a method of determining the centroid of an L-shaped object.
1. Divide the shape into two rectangles, as shown in fig 2. Find the centroids of these two rectangles by drawing the diagonals. Draw a line joining the centroids. The centroid of the shape must lie on this line AB.
2. Divide the shape into two other rectangles, as shown in fig 3. Find the centroids of these two rectangles by drawing the diagonals. Draw a line joining the centroids. The centroid of the L-shape must lie on this line CD.
3. As the centroid of the shape must lie along AB and also along CD, it is obvious that it is at the intersection of these two lines, at O. The point O might not lie inside the L-shaped object.
### Of triangle and tetrahedron
The centroid of a triangle is the point of intersection of its medians (the lines joining each vertex with the midpoint of the opposite side). The centroid divides each of the medians in the ratio 2:1, which is to say it is located ⅓ of the perpendicular distance between each side and the opposite vertex. Its Cartesian coordinates are the means of the coordinates of the three vertices. That is, if the three vertices are $a = (x_a, y_a)$, $b = (x_b, y_b)$, and $c = (x_c, y_c)$, then the centroid is
$C = \frac13(a+b+c) = \left(\frac13 (x_a+x_b+x_c),\;\; \frac13(y_a+y_b+y_c)\right).$
The centroid is therefore at $\left(\frac13,\frac13,\frac13\right)$ in barycentric coordinates.
The centroid is also the physical center of mass if the triangle is made from a uniform sheet of material; or if all the mass is concentrated at the three vertices, and evenly divided among them. On the other hand, if the mass is distributed along the triangle's perimeter, with uniform linear density, then the center of mass lies at the Spieker center (the incentre of the medial triangle), which does not (in general) coincide with the geometric centroid of the full triangle.
The area of the triangle is 1.5 times the length of any side times the perpendicular distance from the side to the centroid.[2]
A triangle's centroid lies on its Euler line between its orthocenter and its circumcenter, exactly twice as close to the latter as to the former.
Similar results hold for a tetrahedron: its centroid is the intersection of all line segments that connect each vertex to the centroid of the opposite face. These line segments are divided by the centroid in the ratio 3:1. The result generalizes to any n-dimensional simplex in the obvious way. If the set of vertices of a simplex is $\{v_0,\ldots,v_n\}$, then considering the vertices as vectors, the centroid is
$C = \frac{1}{n+1}\sum_{i=0}^n v_i.$
The geometric centroid coincides with the center of mass if the mass is uniformly distributed over the whole simplex, or concentrated at the vertices as n + 1 equal masses.
The isogonal conjugate of a triangle's centroid is its symmedian point.
### Centroid of polygon
The centroid of a non-self-intersecting closed polygon defined by n vertices $(x_0,y_0), (x_1,y_1), \ldots, (x_{n-1},y_{n-1})$ is the point $(C_x, C_y)$, where [3]
$C_{\mathrm x} = \frac{1}{6A}\sum_{i=0}^{n-1}(x_i+x_{i+1})(x_i\ y_{i+1} - x_{i+1}\ y_i)$
$C_{\mathrm y} = \frac{1}{6A}\sum_{i=0}^{n-1}(y_i+y_{i+1})(x_i\ y_{i+1} - x_{i+1}\ y_i)$
and where A is the polygon's signed area,
$A = \frac{1}{2}\sum_{i=0}^{n-1} (x_i\ y_{i+1} - x_{i+1}\ y_i)\;$
In these formulas, the vertices are assumed to be numbered in order of their occurrence along the polygon's perimeter, and the vertex $(x_n, y_n)$ is assumed to be the same as $(x_0, y_0)$. Note that if the points are numbered in clockwise order the area A, computed as above, will have a negative sign; but the centroid coordinates will be correct even in this case.
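These sums translate directly into code. Here is a short Python sketch of formula [3] (added for illustration; the L-shaped test polygon is an example chosen here, and its centroid, (0.75, 1.25), can be checked by decomposition into two rectangles).

```python
def polygon_centroid(vertices):
    """Centroid of a non-self-intersecting closed polygon.

    `vertices` is a list of (x, y) pairs in order along the perimeter;
    the closing vertex (x_n, y_n) = (x_0, y_0) is supplied implicitly.
    """
    n = len(vertices)
    a = cx = cy = 0.0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        cross = x0 * y1 - x1 * y0          # x_i y_{i+1} - x_{i+1} y_i
        a  += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5                               # signed area
    return cx / (6 * a), cy / (6 * a)

# L-shaped polygon, listed counter-clockwise:
print(polygon_centroid([(0, 0), (2, 0), (2, 1), (1, 1), (1, 3), (0, 3)]))  # (0.75, 1.25)
```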
### Centroid of cone or pyramid
The centroid of a cone or pyramid is located on the line segment that connects the apex to the centroid of the base. For a solid cone or pyramid, the centroid is 1/4 the distance from the base to the apex. For a cone or pyramid that is just a shell (hollow) with no base, the centroid is 1/3 the distance from the base plane to the apex.
## References
1. Larson, Roland E.; Hostetler, Robert P.; Edwards, Bruce H. (1998). Calculus of a Single Variable (Sixth ed.). Houghton Mifflin Company. pp. 458–460.
2. Johnson, Roger A., Advanced Euclidean Geometry, Dover, 2007 (orig. 1929): p. 173, corollary to #272.
http://mathdl.maa.org/mathDL/23/?pa=content&sa=viewDocument&nodeId=3287&bodyId=3543

# Loci
# Visualizing Lie Subalgebras using Root and Weight Diagrams
by Aaron Wangberg (Winona State Univ.) and Tevian Dray (Oregon State Univ.)
## 4. Applications to Algebras of Dimension Greater than 3
We now show how these two techniques can be applied to find subalgebras of rank $$l$$ algebras, for $$l \ge 4$$. We begin by using slices and projections to find subalgebras of the exceptional Lie algebra $$F_4$$. We then show how to apply these techniques to algebras of higher rank.
### 4.1 Subalgebras of $$F_4$$ using Slices
We apply the slice and projection techniques to the 52-dimensional exceptional Lie algebra $$F_4$$, whose Dynkin diagram is shown in Figure 20. We number the nodes 1 through 4, from left to right, and use this numbering to label the simple roots $$r^1, \cdots, r^4$$. Thus, the magnitude of $$r^1$$ and $$r^2$$ is greater than the magnitude of $$r^3$$ and $$r^4$$. We color these simple roots magenta ($$r^1$$), red ($$r^2$$), blue ($$r^3$$), and green ($$r^4$$).
Figure 20. $$F_4$$ Dynkin Diagram
We consider the slicing of $$F_4$$ defined using roots $$r^2$$, $$r^3$$, and $$r^4$$. Laying the slices along the $$x$$ axis, the large number of grey struts in the resulting diagram, Figure 21, makes it difficult to observe the underlying structure of each slice, and so they are removed from the diagram in Figure 22. This diagram clearly contains three nontrivial rank $3$ root or weight diagrams. Comparing this diagram to the root diagrams in Figure 9, we identify the middle diagram, containing 18 non-zero vertices, as the root diagram of $$C_3=sp(2\cdot 3)$$. The other two slices are identical non-minimal weight diagrams of $$C_3$$. Because there are $46$ non-zero vertices visible in Figure 22, it is clear that two single vertices are missing from this representation of the root diagram of $$F_4$$, which has dimension 52.
Figure 21. Slicing of $$F_4$$ using roots $$r^2$$ (red), $$r^3$$ (blue), and $$r^4$$ (green). Grey colored struts connect vertices from different slices. Here is an interactive version.
Figure 22. Slicing of $$F_4$$ using roots $$r^2$$, $$r^3$$, and $$r^4$$. Eliminating the struts shows $$C_3=sp(2\cdot 3) \subset F_4$$. Here is an interactive version.
Figure 23 is the result of slicing the root diagram of $$F_4$$ using the simple roots $$r^1$$, $$r^2$$, and $$r^3$$. The center diagram again contains 18 non-zero weights, which we identify as $$B_3=so(7)$$ using Figure 9. Hence, $$B_3 \subset F_4$$. Furthermore, as all 48 non-zero vertices are present and there are 5 nontrivial slices in the root diagram, we compare this sliced root diagram of $$F_4$$ with that of $$B_4=so(9)$$, which is shown in Figure 17, and see that $$B_3 \subset B_4 \subset F_4$$. An additional slicing of $$B_4$$ shows $$D_4 =so(8) \subset B_4 \subset F_4$$.
Figure 23. Slice of $$F_4$$ showing $$B_3=so(7) \subset B_4 =so(9) \subset F_4$$. Here is an interactive version.
### 4.2 Subalgebras of $$F_4$$ using Projections
Given the 4-dimensional root diagram of $$F_4$$, we can observe its $$3$$-dimensional shadow when projected along any one direction. However, as a single projection eliminates the information contained in one direction, it is not possible to understand the root diagram of $$F_4$$ using a single projection. We work around this problem by creating an animation of projections, in which the direction of the projection changes slightly from one frame to the next.
The Dynkin diagram of $$F_4$$ reduces to the Dynkin diagram of $$C_3=sp(2\cdot 3)$$ or $$B_3=so(7)$$ by eliminating either the first or fourth node. The simple roots $$r^1$$ and $$r^4$$ define a plane in $$\mathbb{R}^4$$, and we choose a projection vector $$p_{\theta} = \cos \theta r^1 + \sin \theta r^4$$ to vary discretely in steps of size $$\frac{\pi}{18}$$ from $$\theta = 0$$ to $$\theta = \frac{\pi}{2}$$ in this plane. Each value of $$\theta$$ produces a frame of the animation sequence using the projection procedures of section 3.2. The resulting animation is displayed in Figure 24.
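The projection itself is elementary linear algebra; only the list of roots is specific to $$F_4$$. The following Python sketch (an illustration added here, not the authors' Maple code) generates the 48 roots of $$F_4$$ in $$\mathbb{R}^4$$ and projects them onto the hyperplane orthogonal to $$p_{\theta}$$; the explicit simple roots $$r^1$$ and $$r^4$$ follow one standard convention and are an assumption of this sketch.

```python
import itertools
import numpy as np

# the 48 roots of F4: all +-e_i +- e_j (i < j), all +-e_i, and all (+-1, +-1, +-1, +-1)/2
roots = []
for i, j in itertools.combinations(range(4), 2):
    for si, sj in itertools.product((1, -1), repeat=2):
        v = np.zeros(4)
        v[i], v[j] = si, sj
        roots.append(v)
for i in range(4):
    for s in (1, -1):
        v = np.zeros(4)
        v[i] = s
        roots.append(v)
for signs in itertools.product((1, -1), repeat=4):
    roots.append(0.5 * np.array(signs))
roots = np.array(roots)                        # shape (48, 4)

# simple roots in one standard convention: r1, r2 long, r3, r4 short
r1 = np.array([0.0, 1.0, -1.0, 0.0])
r4 = np.array([0.5, -0.5, -0.5, -0.5])

def project(vectors, theta):
    """Project onto the 3-space orthogonal to p_theta = cos(theta) r1 + sin(theta) r4."""
    p = np.cos(theta) * r1 + np.sin(theta) * r4
    p = p / np.linalg.norm(p)
    return vectors - np.outer(vectors @ p, p)  # components along p removed

for theta in (0.0, np.pi / 4, np.pi / 2):
    shadow = np.round(project(roots, theta), 6)
    print(theta, "distinct projected roots:", len({tuple(row) for row in shadow}))
```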
The result of each projection of $$F_4$$ is a diagram in three dimensions. We create the animation using Maple, and the software package Javaview is used to make a live, interactive applet of the animation. The Javaview applet allows the animation to be rotated in $$\mathbb{R}^3$$ as it plays. In particular, when $$\theta = 0$$, we can rotate the diagram to show a weight diagram of $$C_3=sp(2\cdot 3)$$, which our eyes project down to the root diagram of $$B_2 = C_2$$. Without rotating the diagram, the animation continuously changes the projected diagram as $$\theta$$ increases. When $$\theta = \frac{\pi}{2}$$, our eyes project the root diagram of $$B_3 = so(7)$$ down to the root diagram of $$G_2$$. However, it is also possible to rotate the animation to see the root diagram of $$G_2$$ at various other values of $$\theta$$. The interactive animation makes it easier to explore the structure of $$F_4$$.
This interactive animation can also illustrate an obvious fact about planes in $$\mathbb{R}^4$$. As $$p_{\theta}$$ is confined to a plane, there is a plane $$P^{\perp}$$ which is orthogonal to each of the projection directions. Thus, the projection does not affect $$P^{\perp}$$, and it is possible to see this plane in $$\mathbb{R}^3$$ by rotating the animation to the view shown in the sixth diagram in Figure 24. In this configuration, the roots and vertices in this diagram do not change as the animation varies from $$\theta = 0$$ to $$\theta = \frac{\pi}{2}$$. While this is obvious from the standpoint of Euclidean space, it is still surprising this plane can be seen in $$\mathbb{R}^3$$ even as the projected diagram is continually changing.
Figure 24. Animation of $$F_4$$, projecting along the direction $$p_{\theta} = \cos(\theta)r^1 + \sin(\theta)r^4$$, which is confined to a plane in $$\mathbb{R}^4$$ containing $$r^1$$ and $$r^4$$. Frames are shown at $$\theta = 0$$, $$\theta = \frac{\pi}{8}$$, and $$\theta = \frac{\pi}{4}$$, together with an orthogonal view in $$\mathbb{R}^4$$. Here is the interactive animation in a separate window.
### 4.3 Modifications of methods for $$E_6$$
Of particular interest is the exceptional Lie algebra $$E_6$$, which preserves the determinant of elements of the Cayley plane. As explained in Section 2.1, this allows us to write $$E_6 = sl(3,\mathbb{O})$$. It therefore naturally contains the subalgebras $$sl(2,\mathbb{O})$$ and $$su(2, \mathbb{O})$$, which are identified as real forms of $$D_5$$ and $$B_4$$, respectively [13].
The projection technique can also be used to identify subalgebras of $$E_6$$. In one version, we project the root diagram of $$E_6$$ along one direction, thereby creating a diagram that possibly corresponds to a rank 5 algebra $$g$$. We then apply the same pair of projections to our projected $$E_6$$ diagram and to the candidate root diagram of $$g$$. If these two projections preserve the number of vertices in the $5$-dimensional diagrams, it is possible to compare the resulting diagrams in $$\mathbb{R}^3$$. If we have identified the correct subalgebra of $$E_6$$, the resulting two diagrams should match for every pair of projections applied to the 5-dimensional diagrams.
Projections of rank 5 and 6 algebras can also be simulated using slicings of their root diagrams. This is done using the slice and collapse technique, which collapses all the slices onto one another in a particular direction. When using this technique, we draw the grey struts, as we are now interested in the root diagram's structure after the projection. This technique provides clearer pictures compared to the pure projection method.
### 4.4 Subalgebras of $$E_6$$
We list in Figure 25 the subalgebras of $$E_6$$ found using the slicing and projection techniques applied to an algebra's root diagram. As mentioned in Section 2.1, we list certain real representations of the subalgebras of the $$sl(3,\mathbb{O})$$ representation of $$E_6$$ in the diagram. The particular real representations are listed below each algebra.
We use different notations to indicate the particular method that was used to identify subalgebras. The notation indicates the slicing method was used to identify $$A$$ as a subalgebra of $$B$$. The notation indicates that $$A$$ was identified as a subalgebra of $$B$$ using the normal projection technique, while we indicate projections done by the slice and collapse method as . If both dotted and solid arrows are present, then $$A$$ can be found as a subalgebra of $$B$$ using both slicing and projection methods. If $$A$$ and $$B$$ have the same rank, only the slicing method allows us to identify the root diagram of $$A$$ as a subdiagram of $$B$$. This case is indicated in the diagram using the notation . Each of the subalgebra inclusions below can be verified online [14].
Figure 25. Subalgebras of $$E_6$$ together with some important real representations
Lists of subalgebra inclusions are found in [12], which applies subalgebras to particle physics, and in [15], which recreates the subalgebra lists of [16]. However, the list in [15] mistakenly has $$C_4$$ and $$B_3$$ as subalgebras of $$F_4$$, instead of $$C_3$$ and $$B_4$$. Further, the list omits the inclusions $$G_2 \subset B_3$$, $$C_4 \subset E_6$$, $$F_4 \subset E_6$$, and $$D_5 \subset E_6$$. The correct inclusions of $$C_3 \subset F_4$$ and $$B_4 \subset F_4$$ are listed in Section 8 of [16], but the $$B_n$$ and $$C_n$$ chains are mislabeled in the final table, which was used by Gilmore in [15]. Although van der Waerden uses root systems to determine subalgebra inclusions, he mistakenly claims that $$D_n \subset C_n$$ as a subalgebra in Section 21, which is not true since their root diagrams are based upon inequivalent highest weights.
In [9], Dynkin classified subalgebras depending upon the root structure. If the root system of a subalgebra can be a subset of the root system of the full algebra, the subalgebra is called a regular subalgebra. Otherwise, the subalgebra is special. A complete list of regular and special subalgebras is given in [12]. All of the regular embeddings of an algebra in a subalgebra of $$E_6$$ can be found using the slicing method. In many cases, the projection technique also identifies these regular embeddings of subalgebras, but there are regular embeddings which are not recognized as the result of projections. The special embeddings of an algebra in a subalgebra of $$E_6$$ can only be found using the projection technique.
Wangberg, Aaron and Tevian Dray, "Visualizing Lie Subalgebras using Root and Weight Diagrams," Loci (February 2009), DOI: 10.4169/loci003287
http://unapologetic.wordpress.com/2008/02/28/the-riemann-stieltjes-integral-i/?like=1&source=post_flair&_wpnonce=d0199d491e
# The Unapologetic Mathematician
## The Riemann-Stieltjes Integral I
Today I want to give a modification of the Riemann integral which helps give insight into the change of variables formula.
So, we defined the Riemann integral
$\displaystyle\int\limits_a^bf(x)dx$
to be the limit as we refined the tagged partition $x=((x_0,...,x_n),(t_1,...,t_n))$ of the Riemann sum
$\displaystyle f_x=\sum\limits_{i=1}^nf(t_i)(x_i-x_{i-1})$
But why did we multiply by $(x_i-x_{i-1})$? Well, that was the width of a rectangular strip we were using to approximate part of the area under the graph of $f$. But why should we automatically use that difference as the “width”?
Let’s imagine we’re walking past a fence. Sometimes we walk faster, and sometimes we walk slower, but at time $t$ we can measure the height of the fence right next to us: $f(t)$. So what’s the area of the fence? If we just integrated $f(t)$ we’d get the wrong answer. The samples we made when walking fast made fat rectangles, while the samples we made when we were walking slowly got paired with skinny rectangles, but we gave them the same weight if they took the same time to get through that segment of the partition. We need to reweight our sums to compensate for how fast we’re walking!
Okay, so how wide should we make the rectangles? Let’s say that at time $t$ we’re at position $\alpha(t)$ along the fence. Then in the segment of the partition between times $x_{i-1}$ and $x_i$ we move from position $\alpha(x_{i-1})$ to position $\alpha(x_i)$, so we should make the width come out to $(\alpha(x_i)-\alpha(x_{i-1}))$. We’ll put this into our formalism from before and get the “Riemann-Stieltjes sum”:
$\displaystyle f_{\alpha,x}=\sum\limits_{i=1}^nf(t_i)(\alpha(x_i)-\alpha(x_{i-1}))$
And now we can take the limit over tagged partitions as before to get the “Riemann-Stieltjes integral”:
$\displaystyle\int\limits_{\left[a,b\right]}f(x)d\alpha(x)=\int\limits_a^bf(x)d\alpha(x)$
if this limit exists.
Here we call the function $f$ the “integrand”, and the function $\alpha$ the “integrator”. Clearly, the old Riemann integral is the special case when $\alpha(x)=x$.
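A quick numerical illustration (an example added here, not from the original post): with integrand $f(x)=x$ and integrator $\alpha(x)=x^2$ on $\left[0,1\right]$, the Riemann-Stieltjes integral should come out to $\int_0^1 x\,d(x^2)=\int_0^1 2x^2\,dx=\frac{2}{3}$, and Riemann-Stieltjes sums over finer and finer partitions do approach that value.

```python
def riemann_stieltjes_sum(f, alpha, a, b, n):
    """Riemann-Stieltjes sum over a uniform partition with left-endpoint tags."""
    h = (b - a) / n
    total = 0.0
    for i in range(1, n + 1):
        x_prev, x_i = a + (i - 1) * h, a + i * h
        t_i = x_prev                      # tag chosen in [x_{i-1}, x_i]
        total += f(t_i) * (alpha(x_i) - alpha(x_prev))
    return total

f = lambda x: x           # integrand
alpha = lambda x: x ** 2  # integrator

for n in (10, 100, 1000):
    print(n, riemann_stieltjes_sum(f, alpha, 0.0, 1.0, n))   # tends to 2/3 = 0.6667...
```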
Immediately from the definition we can see the same “additivity” (using signed intervals) in the region of integration that the Riemann integral had:
$\displaystyle\int\limits_{\left[x_1,x_3\right]+\left[x_3,x_2\right]}f(x)d\alpha(x)=\int\limits_{\left[x_1,x_3\right]}f(x)d\alpha(x)+\int\limits_{\left[x_3,x_2\right]}f(x)d\alpha(x)$
and the same linearity in the integrand:
$\displaystyle\int\limits_{\left[x_1,x_2\right]}af(x)+bg(x)d\alpha(x)=a\int\limits_{\left[x_1,x_2\right]}f(x)d\alpha(x)+b\int\limits_{\left[x_1,x_2\right]}g(x)d\alpha(x)$
and also a new linearity in the integrator:
$\displaystyle\int\limits_{\left[x_1,x_2\right]}f(x)d(a\alpha(x)+b\beta(x))=a\int\limits_{\left[x_1,x_2\right]}f(x)d\alpha(x)+b\int\limits_{\left[x_1,x_2\right]}f(x)d\beta(x)$
Neat!
Posted by John Armstrong | Analysis, Calculus
## 23 Comments »
1. [...] follow on yesterday’s discussion of the Riemann-Stieltjes integral by looking at a restricted sort of integrator. We’ll assume here that is continuously [...]
Pingback by | February 29, 2008 | Reply
2. [...] Riemann-Stieltjes Integral IV Let’s do one more easy application of the Riemann-Stieltjes integral. We know from last Friday that when our integrator is continuously differentiable, we can reduce to [...]
Pingback by | March 4, 2008 | Reply
3. [...] of Bounded Variation I In our coverage of the Riemann-Stieltjes integral, we have to talk about Riemann-Stieltjes sums, which are of the [...]
Pingback by | March 5, 2008 | Reply
4. Hi,
I just wanted to say thanks for the Riemann-Stieltjes pieces. I am taking second semester analysis which heavily emphasizes this integral – along with doing every proof in Rudin in class (with a take no prisoner’s attitude to the homework – i.e. NO partial credit). I also would like to say that the Professor is Dan Oberlin (FSU math) who not only is the greatest math teacher I have ever had but is a man who has so much respect for mathematics that he refuses to let it be watered down into hand-waving. His is a great teacher and even injects some humor into every class (I know this sound weird in an analysis class) – for example – we wanted to sup over a bunch of terms so we came up with “Zupping” – umlaut over the ‘u’ – which we might submit to Colbert. For all of you out there that think you are ‘scooting’ – get hold of a teacher like Dan – it will blow your socks off in terms of how much math you can learn and do.
Comment by Carlie Saunders | March 8, 2008 | Reply
• I don’t believe Colbert would be able to use a joke that depends on math any higher than arithmetic. It has to do with the audience he gathers by violating logic, honesty, and common sense regularly.
Comment by Joe Bob | September 21, 2009 | Reply
5. [...] If we want our Riemann-Stieltjes sums to converge to some value, we’d better have our upper and lower sums converge to that value [...]
Pingback by | March 14, 2008 | Reply
6. [...] of Bounded Variation Today I want to start considering Riemann-Stieltjes integrals where the integrator is a function of bounded [...]
Pingback by | March 17, 2008 | Reply
7. [...] about to do, I’m going to need a couple results about increasing integrators, and how Riemann-Stieltjes integrals with respect to them play nicely with order properties of the real [...]
Pingback by | March 18, 2008 | Reply
8. [...] of integrable functions From the linearity of the Riemann-Stieltjes integral in the integrand, we know that the collection of functions that are integrable with respect to a [...]
Pingback by | March 20, 2008 | Reply
9. [...] Let’s consider some conditions under which we’ll know that a given Riemann-Stieltjes integral will exist. First off, we have a straightforward adaptation of our old result that continuous [...]
Pingback by | March 24, 2008 | Reply
10. [...] Function Integrators Now that we know how a Riemann-Stieltjes integral behaves where the integrand has a jump, we can put jumps together into more complicated functions. [...]
Pingback by | March 27, 2008 | Reply
11. [...] Integrals I We’ve dealt with Riemann integrals and their extensions to Riemann-Stieltjes integrals. But these are both defined to integrate a function over a finite interval. What if we want to [...]
Pingback by | April 18, 2008 | Reply
12. Dear Professor Armstrong,
I do have found your series of posts covering the Riemann-Stieltjes integral VERY useful to me. By using the “PDF Creator” I have already converted them into pdf files for my private use only and further study, because I see them as a reliable source and in a level appropriate to me.
I dare ask you if you intend to post a list of symbols and notation you use here. Although your notation is the standard one (for american mathematicians) as far as I have seen, it might help us in the topics we are not familiar with. To me, for instance, Abstract Algebra.
Américo Tavares
(retired engineer interested in Mathematics)
Comment by | April 19, 2008 | Reply
13. I’m not sure what I would put on such a list. I try to explain any symbols or notation as I introduce them. Are there any in particular you’re having difficulty with?
Comment by | April 19, 2008 | Reply
I think of it as a list that would start with relative few symbols and that would be extended to show new ones, when you find you have posted anything not yet there.
In principle that would require a separate page or post regularly updated.
Behind the symbols what really is important are the definitions. But that would help, anyway, I think.
For the purpose of giving you a particular example I went to one of your posts (http://unapologetic.wordpress.com/2007/02/15/cosets-and-quotients/) categorized as “Subgroups and Quotients Groups” by clicking this sub-category. There I found, for instance, among several algebraic symbols (e.g. the subgroup {e,(12)}) the “coset” concept that I still do not know what is it. This means only that I should studied the subject.
I searched for this word and found the posts where it appears.
I do realize that one cannot learn the new symbols without the associated definitions.
After being constructed this list would be very similar with a formal list of symbols in a textbook, the main difference would be that it would cover the symbols of your blog instead of the textbook.
Comment by | April 19, 2008 | Reply
15. [...] Subtracting off the integral of we get our result. (Technically to do this, we need to extend the linearity properties of Riemann-Stieltjes integrals to improper integrals, but this is [...]
Pingback by | April 22, 2008 | Reply
16. [...] say we take an integrator of bounded variation on an interval and a function that’s Riemann-Stieltjes integrable with respect to over that interval. Then we know that is also integrable with respect to over [...]
Pingback by | March 14, 2009 | Reply
17. [...] Across a Jump In the discussion of necessary conditions for Riemann-Stieltjes integrability we saw that when the integrand and integrator are discontinuous from the same side of the same [...]
Pingback by | March 14, 2009 | Reply
18. Aloha, What about a definition for a higher S-I? You use the first differences $\nabla\,x=x_i-x_{i-1}$ in the definition above, but might you comment on the higher differences, such as $\nabla^2\,x=x_i-2x_{i-1}+x_{i-2}$? Do these work? Mahalo
Comment by dan | October 14, 2009 | Reply
19. I’m not really sure, since I haven’t considered it. It’s not just iterating integration, since the Riemann-Stieltjes integral gives back a number and not another function. What are you thinking you might gain from using these second-differences?
Comment by | October 14, 2009 | Reply
20. [...] of Partial Integrals There are some remaining topics to clean up in the theory of the Riemann-Stieltjes integral. First up is a question that seems natural from the perspective of iterated integrals: [...]
Pingback by | January 12, 2010 | Reply
21. I’ld really appreciate any references to Riemann-Stieltjes integration in n-dimensions. I’ve seen probability books evidently making use of Riemann-Stieltjes integration in n-dimensions, but nowhere — as far as I have found — are such integrals defined. I realize that one could make sense of Riemann-Stieltjes integration in n-dimensions in terms of Lebesgue-Stieltjes integration w.r.t an induced measure, but that is *NOT* what I’m looking for…
I’ld like to know of any results relating to computing the Lebesgue-Stieltjes integral (of, say, a continuous integrand) w.r.t. the measure induced by a joint distribution function dF(x_1, …, x_n) in terms of iterated Riemann-Stieltjes integrals involving (something like) various marginals… For example — in the 2 dimensional case — perhaps something like
$\displaystyle\int \int f(x_1,x_2)\, dF(x_1,x_2) = \int \left( \int f(x_1,x_2)\, d G_{x_1}(x_2) \right) dF(x_1)$
where $G_{t}(x)$ has argument $x$ but is parametrized by $t$, and has value $G_{x_1}(x_2) = F(x_1,x_2)$.
Any pointers would be appreciated!
Comment by michael | March 18, 2011 | Reply
22. Unfortunately, I’m not really much of an analyst or a statistician. My best guess for the definition would parallel this one-dimensional case, but using n-dimensional “intervals” — basically rectangular parallelepipeds with edges that line up with the coordinate axes.
I think that when I defined the n-dimensional integral I used these “intervals” to define tagged partitions, and so on as usual for the Riemann integral. The difference between the Riemann and Riemann-Stieltjes integrals is using $\alpha(x)$ on the tag point rather than $x$ itself when setting up the Riemann sums. That should generalize pretty directly.
Comment by | March 18, 2011 | Reply
http://www.physicsforums.com/showthread.php?t=612916
Physics Forums
## 2D motion
1. The problem statement, all variables and given/known data
A rugby ball of mass 400 grams is to be kicked from ground level to clear
a crossbar 3m high. The goal line is 12m from the ball, and the junior
rugby player can impart a maximum speed of $4\sqrt{g}$ m s$^{-1}$ to the ball.
(a) Modelling the ball as a projectile moving under gravity alone, what is
the minimum launch angle that will succeed? What is the range of
the ball when launched at this angle and speed?
3. The attempt at a solution
Not quite sure how to approach this one at all. The ball would need to go high enough, from the initial vertical speed given to it, $4\sqrt{g}\sin\theta$, so that by the time it reaches the crossbar (a time we can calculate from its horizontal component) it is exactly 3 m above the ground. I get the feeling I am missing something though.
I would find the time it takes for the ball to reach the crossbar as a function of angle. Then, you should be able to find an expression for the height of the ball at that time and see what constraints there are on the angle.
Thanks for the reply Muphrid. I made some progress thanks to your suggestion. I have an expression for the position vector. The first problem is I don't know if they want a numerical or algebraic answer: $\mathbf{r} = 4t\sqrt{g}\cos(\alpha)\,\mathbf{i} + \left(4t\sqrt{g}\sin(\alpha) - \tfrac{1}{2}gt^2\right)\mathbf{j} + \mathbf{C}$. I need to find the integration constant first; the only values I have are the initial values of (0,0) at time = 0, giving C as zero, but I don't know if this is correct. As I need to find the minimum angle, I assume that the position vector to the top of the wall, $(12\mathbf{i}, 3\mathbf{j})$, is key to this. Now I am stuck again.
You might want to check the problem statement or something; you said that the player can impart a velocity of "$4 \sqrt g$" in meters per second, yet if $g$ is Earth's gravitational acceleration, that makes no sense in terms of units. Until that is cleared up, nothing else about the problem will make sense.
Otherwise, yes, you've reasoned out the integration constant correctly. The easiest thing to do is to solve for $\cos \alpha$ and then use inverse trig to get $\alpha$ from that.
Quote by Muphrid You might want to check the problem statement or something; you said that the player can impart a velocity of "$4 \sqrt g$" in meters per second, yet if $g$ is Earth's gravitational acceleration, that makes no sense in terms of units. Until that is cleared up, nothing else about the problem will make sense.
I would assume that it just means $4\sqrt{9.8}\frac{m}{s^{2}}$.
Two constraints at time $t$:
1. $x = 12$ m
2. $y \geq 3$ m, i.e. $y = (3+z)$ m where $z \geq 0$.
From the horizontal motion, $t = x/U_x$; from the vertical motion, $3 + z = U_y t - \tfrac{1}{2}gt^2$. The minimum value $z = 0$ implies the minimum angle:
$3 = U_y(x/U_x) - \tfrac{1}{2}g(x/U_x)^2$
Azizlwl, thanks for the reply, but can you explain it a little more in depth please, i dont understand what your symbols mean, especially the z. Is U initial velocity?
Quote by Kawakaze Azizlwl, thanks for the reply, but can you explain it a little more in depth please, i dont understand what your symbols mean, especially the z. Is U initial velocity?
Yes U is the intial velocity.
z is a distance above the crossbar.
Ux initial horizontal velocity
Uy initial vertical velocity
Thanks! One last question if I may, what is the significance of the integration constant that appears when integrating the velocity vector to get the position vector, and where does it appear in your solution. Cheers!
Quote by Kawakaze Thanks! One last question if I may, what is the significance of the integration constant that appears when integrating the velocity vector to get the position vector, and where does it appear in your solution. Cheers!
It is $y_0$, the initial position:
$y = y_0 + ut + \tfrac{1}{2}at^2$
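For anyone who wants to check the numbers, here is a small Python sketch (added for illustration, not from the thread) that solves azizlwl's constraint for the launch angle, taking $U = 4\sqrt{g}$, $x = 12$ m and a bar height of 3 m; note that $g$ cancels, since $U^2 = 16g$.

```python
import numpy as np

g = 9.8                      # m/s^2 (it cancels below, since U^2 = 16 g)
U = 4 * np.sqrt(g)           # launch speed, m/s
x, h = 12.0, 3.0             # horizontal distance to the bar and bar height, m

# h = x tan(a) - (g x^2 / (2 U^2)) (1 + tan^2 a), a quadratic in t = tan(a):
k = g * x ** 2 / (2 * U ** 2)
t_roots = np.roots([k, -x, h + k])          # k t^2 - x t + (h + k) = 0
angles = np.degrees(np.arctan(t_roots))
print("angles that just graze the bar:", np.sort(angles))   # 45 and about 59 degrees
theta_min = angles.min()                                     # minimum launch angle

# flat-ground range at that angle: R = U^2 sin(2 theta) / g
theta = np.radians(theta_min)
print("minimum angle:", theta_min, "degrees; range:", U ** 2 * np.sin(2 * theta) / g, "m")
```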
http://mathhelpforum.com/trigonometry/157085-proving-sin-2-x-sin-x.html
# Thread:
1. ## proving sin^2(x)<=|sin(x)|
For all x, prove that $sin^{2}(x)\leq|sin(x)|$ given that x is in the set of real numbers.
I think I need to break this into a couple of different cases, specifically (0, $\pi$), ($\pi$, $2\pi$), but am not sure. Any help would be appreciated.
2. Originally Posted by snaes
For all x, prove that $sin^{2}(x)\leq|sin(x)|$ given that x is in the set of real numbers.
Don’t make it difficult. Recall that $0\leqslant a \leqslant 1\, \Rightarrow \,a^2 \leqslant a$.
We know that $0\le |\sin(x)|\le1$ so $\sin^2(x)=|\sin(x)|^2\le |\sin(x)|$.
3. Thanks, I was definitely over-thinking that.
http://physics.stackexchange.com/questions/4959/can-noethers-theorem-be-understood-intuitively
## Can Noether's theorem be understood intuitively?
Noether's theorem is one of those surprisingly clear results of mathematical calculations, for which I am inclined to think that some kind of intuitive understanding should or must be possible. However I don't know of any, do you?
Independence of time <=> energy conservation.
Independence of position <=> momentum conservation.
Independence of direction <=> angular momentum conservation.
I know that the mathematics leads in the direction of Lie-algebra and such but I would like to discuss whether this theorem can be understood from a non-mathematical point of view also.
-
It is interesting that you are asking about a non-mathematical point of view for a mathematical theorem! – MBN Feb 10 '11 at 20:51
Gerard, is my answer so bad, in your opinion? – Vladimir Kalitvianski Feb 10 '11 at 23:00
@vladimir the Landau page was not too bad, however I was looking for enlightenment from a non-mathematical domain – Gerard Feb 11 '11 at 22:08
Gerard, I added very simple examples to the end of my answer. – Vladimir Kalitvianski Feb 11 '11 at 22:31
1
@MBN especially because these results look so crystal clear, its almost a farce not to be able to explain them from a more common angle. – Gerard Feb 11 '11 at 22:36
6 Answers
It's intuitively clear that the energy most accurately describes how much the state of the system is changing with time. So if the laws of physics don't depend on time, then the amount how much the state of the system changes with time has to be conserved because it's still changing in the same way.
In the same way, and perhaps even more intuitively, if the laws don't depend on position, you may hit the objects, and hit them a little bit more, and so on. The momentum measures how much the objects depend on space, so if the laws themselves don't depend on the position on space, the momentum has to be conserved.
The angular momentum with respect to an axis is determining how much the state changes if you rotate it around the axis - how much it depends on the angle (therefore "angular" in the name). So the symmetry is linked to the conservation law once again.
If your intuition doesn't find the comments intuitive enough, maybe you should train your intuition because your current intuition apparently misses the most important properties of time, space, angles, energy, momentum, and angular momentum. ;-)
-
2
Lubosh, tell me please, what is wrong in my explanation? Whatever I write, I get downvotes. Very strange! – Vladimir Kalitvianski Feb 10 '11 at 20:54
1
So, if the laws of physics don't care about the angle - the phase of the charged fields - which is what it means for them to be symmetric, then it means that you may first change the phase, and then time-evolve, or first time-evolve, and then change the phase by the gauge transformation. It means that the initial and final states carry the same charge - change under the rotation: the charge is conserved because of the symmetry. Similarly, you may discuss the conservation of the SU(2) and SU(3) generators. – Luboš Motl Feb 10 '11 at 21:09
1
You may also think about the discrete counterpart of Noether's theorem. Take parity: it is the operator $P$ such that $P^2=+1$. Well, it may also be $-1$ but let me ignore those subtleties now. If the laws of physics are symmetric relatively to $\vec x \to -\vec x$, then it doesn't matter whether you first flip the orientation (mirror) and then time-evolve, or vice versa. This is equivalent to conserving parity as the quantum number because parity eigenstates are either even or odd under the reflection, and this even-ness or odd-ness - the parity :-) - is conserved in time evolution: tautology – Luboš Motl Feb 10 '11 at 21:11
1
Symmetry with respect to some transformations is not the same thing as time-independence (conservation). The latter needs essentially the equations of motion. – Vladimir Kalitvianski Feb 11 '11 at 9:43
5
This answer comes closest to what I was looking for, however the answer looks suspicious w.r.t. some circular arguments e.g.: "it's intuitively clear that energy most accurately describes.." or "momentum measures..". These assumptions I would like to see clarified. The sentence "if the laws of physics don't depend on time, then the amount how much the state of the system changes with time has to be conserved" makes sense to me, but it is assumed that this amount is called 'energy', why? Same for momentum; note that when you hit objects, momentum is only conserved when you include the hitter. – Gerard Feb 11 '11 at 22:24
The intuitive argument for Noether's theorem, which is also the best completely precise argument for Noether's theorem, appears in Feynman's popular book "The Character of Physical Law". I will reproduce the argument, but not the diagram. The diagram is two parallel squiggles with a line connecting them at the top and at the bottom. These represent a particle path and a displaced particle path.
The action is stationary on the particle path, so the square squiggle which translates over, goes up parallel, and comes back has the same action as the original path. The original path, however, has the same action as just the squiggle part of the other path, therefore the two horizontal lines at top and bottom have equal action.
You can use this argument to find the exact form of the Noether current by replacing Feynmans horizontal lines with quick kicks by the momentum over a time $\epsilon$. His argument is an honest to goodness proof, it is by far the best proof, and it is the only case in all the history of publishing where a result is best presented in a popular book.
If you make the kicks continuous in time, so that they come here and there, you can still see that the kicks integrate by parts. This argument appears in the introduction to one of Hawking's 1970s papers, and is essentially equivalent to Feynman's "Character of Physical Law" argument, except it appears more than ten years later.
-
Well, I don't know about any intuitive explanation besides intuition gained by understanding the underlying math (mainly differential geometry, Hamiltonian mechanics and group theory). So with the risk of not giving you quite what you want, I'll try to approach the problem mathematically.
If you know Hamiltonian mechanics then the statement of the theorem is exceedingly simple. Assume we have a Hamiltonian $H$. To this there is associated a unique Hamiltonian flow (i.e. a one-parameter family of symplectomorphisms -- which is just a fancy name for diffeomorphisms preserving the symplectic structure) $\Phi_H(t)$ on the manifold. From the point of view of Lie theory, the flow is a group action and there exists its generator (which is a vector field) $V_H$ (this can also be obtained from $\omega(\cdot, V_H) = dH$ with $\omega$ being the symplectic form). Now, the completely same stuff can be written for some other function $A$, with generator $V_A$ and flow $\Phi_A(s)$. Think of this $A$ as some conserved quantity and of $\Phi_A(s)$ as a continuous family of symmetries.
Now, starting from Hamiltonian equation ${{\rm d} A \over {\rm d} t} = \left\{A,H\right\}$ we see that if $A$ Poisson-commutes with $H$ it is conserved. Now, this is not the end of the story. From the second paragraph it should be clear that $A$ and $H$ don't differ that much. Actually, what if we swapped them? Then we'd get ${{\rm d} H \over {\rm d} s} = \left\{H,A\right\}$. So we see that $A$ is constant along Hamiltonian flow (i.e. conserved) if and only if $H$ is constant along the symmetry flow (i.e. the physical laws are symmetric).
So much for why the stuff works. Now, how do we get from symmetries to conserved quantities? This actually isn't hard at all but requires some knowledge of differential geometry. Let's start with most simple example.
Translation
This is a symmetry such that $x \to x^\prime = x + a$. You can imagine that we move our coordinates along the $x$ direction. With $a$ being a parameter, this is a symmetry flow. If we differentiate with respect to this parameter, we'll get a vector field. Here it'll be $\partial_x$ (i.e. constant vector field aiming in the direction $x$). Now, what function on the symplectic manifold does it correspond to? Easy, it must be $p$ because by differentiating this we'll get a constant 1-form field $dp$ and then we have to use $\omega$ to get a vector field $\partial_x$.
Another way to see that it must be $p$: suppose you have a wave $\exp(ipx)$. Then $\partial_x \exp(ipx) = ip \exp(ipx)$, so momentum and partial derivatives are morally the same thing. Here we're of course exploiting the similarity between the Fourier transform (which connects the $x$ and $p$ images) and the symplectic structure (which combines $x$ and $p$).
Rotation
Now onto something a bit harder. Suppose we have a flow $$\pmatrix{x \cr y} \to \pmatrix{x' \cr y'}= \pmatrix{\cos(\phi) & \sin(\phi) \cr - \sin(\phi) & \cos(\phi)} \pmatrix {x \cr y}$$ This is of course a rotational flow. Here we'll get a field $y {\rm d}x - x {\rm d} y$ and the conserved quantity of the form $y p_x - x p_y$ which can in three dimensions be thought of as a third component of angular momentum $L_z$.
Note that the above was done mainly for illustrative purposes as we could have worked in polar coordinates and then it would be actually the same problem as the first one because we'd get the field $\partial_{\phi}$ and conserved quantity $p_{\phi}$ (which is angular momentum).
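A symbolic check of the rotation example (a sketch added here with sympy, assuming a Hamiltonian that depends on position only through $x^2+y^2$): the Poisson bracket $\left\{L, H\right\}$ vanishes identically, so $L$ is conserved, and by the same computation $H$ is constant along the rotation flow generated by $L$.

```python
import sympy as sp

x, y, px, py = sp.symbols('x y p_x p_y')
V = sp.Function('V')                        # arbitrary potential of x^2 + y^2

H = (px**2 + py**2) / 2 + V(x**2 + y**2)    # rotationally symmetric Hamiltonian
L = x * py - y * px                         # generator of rotations about the origin

def poisson(A, B):
    """Poisson bracket {A, B} for one particle in the plane."""
    return (sp.diff(A, x) * sp.diff(B, px) - sp.diff(A, px) * sp.diff(B, x)
          + sp.diff(A, y) * sp.diff(B, py) - sp.diff(A, py) * sp.diff(B, y))

print(sp.simplify(poisson(L, H)))   # 0: dL/dt = {L, H} = 0, so L is conserved
print(sp.simplify(poisson(H, L)))   # 0: H is invariant under the rotation flow
```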
-
Marek wrote: "if A Poisson-commutes with H it is conserved". This is what I wrote in my comment: without equations of motion it is impossible to derive conservation laws. Independence of something with respect to rotations is not the same as independence of something else with respect to time! – Vladimir Kalitvianski Feb 11 '11 at 10:11
@Vladimir: huh? Independence of Hamiltonian w.r.t. rotations is exactly the same as independence of angular momentum w.r.t. to evolution in time. In Hamiltonian formalism one can see this equivalence most clearly: there is no difference at all between $A$ and $H$ (or between their vector fields or flows). There is only the difference of semantics because we interpret one of those flows as time evolution. But that is put in by hand, it's not in the formalism itself. – Marek Feb 11 '11 at 11:03
To Marek: so you cannot do without equations of motion (Hamiltonian, Lagrangian), can you? It is easy to understand: if at some moment $t=t_1$ the system has some symmetry (for example, particles aligned along some axis), there is no guarantee that this symmetry will remain the same at other moments $t > t_1$: particles can fly away in 3D according to their equations and initial conditions. – Vladimir Kalitvianski Feb 11 '11 at 11:13
@Vladimir: aren't you confusing the symmetry of initial conditions with the symmetry of physical laws? Symmetry of physical laws is expressed via invariance of Hamiltonian w.r.t. to the said symmetry and this can't change in time. Besides this, there can also be symmetry in initial condition. But there is no theorem that would imply that the symmetry of the initial conditions has to be conserved. And it indeed doesn't have to be. – Marek Feb 11 '11 at 11:26
6
By the way, why the down-votes? I am quite confident this answer is correct, so I suppose it's because it seems too mathematical and off-topic? If you think it is off-topic, please up-vote this comment and I'll delete this answer if this comment gets enough up-votes. – Marek Feb 11 '11 at 11:57
Here are my two cents. Read the proof; it will help you understand and build intuition because it is constructive. It explicitly shows you what the conserved quantity is, given the group of symmetries. If it is too hard to follow and you can't see the forest because of the trees, try a few examples; it should help. Also here is a link that may help a bit.
http://math.ucr.edu/home/baez/noether.html
-
Since it is a mathematical theorem whose physical content you know already, it is difficult to discuss it without mathematics. But still I will try to present it in a simple way. It may help if we understand how it is derived.
Generally we look for an invariance of the action under a symmetry transformation with a time-independent parameter. This is then a trivial mathematical identity. Now it is observed that if the dynamical variables obey the equations of motion, then the action becomes stationary even if the parameter is time dependent. We observe that the variation of the action, which must be zero since the action is stationary, can only depend on the integral of the time derivatives of the parameter. Now integrate by parts to take all the time derivatives off it and keep the rest in the integrand. Since the parameter is arbitrary, its coefficient in the integrand must be zero. But this coefficient is the time derivative of something, so that something has a vanishing time derivative; therefore this "something" is constant, or conserved, in time.
-
I can only tell that those conserved quantities you have listed above are additive in particles: $P = \sum p_i(t)$, for example. But there are those that are not additive! They do not have special names.
For N differential equations there are roughly as many integrals of motion as there are initial conditions. Some of them can sometimes be cast in an additive form, but generally (when there are no symmetries) the total number of integrals of motion remains the same. They all are simply non-additive (more messy, if you like). So I would answer that symmetries help combine some integrals of motion as conserved quantities additive in all particles.
EDIT 1: Maybe Noether's theorem shows explicitly what the conserved quantities are whereas from equations it may be not so evident to derive?
EDIT 2: I got -4. Is my reasoning really that bad?
EDIT 3: a page from Landau:
EDIT 4: An example of integrals of motion:
-
2
Donwvoters, please give your disagreement statements. – Vladimir Kalitvianski Feb 10 '11 at 20:19
8
– anna v Feb 11 '11 at 7:52
2
Marek, you make an impression that a tree body problem has no solutions, that is is useless to solve the equations numerically because of unpredictable chaos, etc., etc. It is not the case. Integrals of motion (= solutions) exist without symmetries and of course they are expressed via independent initial conditions. Read Landau textbook on Classical mechanics, I learned this material from it. – Vladimir Kalitvianski Feb 11 '11 at 15:34
3
+1. And here's why. Not because I have any desire to be contrarian for the fun of it. But I think that @Vladimir, despite the extremely negative reception some of his questions and answers have received, is fundamentally sincere at some level. I also think that English is not @Vladimir's native language. This should have been apparent from the get-go but it can take time for such distinctions to filter in, especially in a non-verbal setting. Admittedly he has not helped his situation by coming on a bit too "strong". But perhaps such courage is an admirable trait rather than a deficit (contd) – user346 Feb 11 '11 at 17:23
3
@Kostya I don't think snide comments and name-calling are helpful to proving your side of the story. Anger doesn't work. Not even against creationists ;) – user346 Feb 11 '11 at 18:01
http://mathoverflow.net/questions/6827?sort=newest
## Basis of quantum SU(n)
As is well known, the set
$\{a^ib^jc^k | i,j,k \in \mathbb{Z}_{\geq 0},k>0\} \cup \{b^lc^md^n | l,m,n \in \mathbb{Z}_{\geq 0}\}$
forms a basis for quantum $SU(2)$. Does anyone know of a basis for quantum $SU(n)$?
My guess would be that a similar result holds. Namely, that the set made up of all products of powers of the matrix entries, ordered with respect to the canonical ordering, such that the first entry in the q-det(n) does not appear, would form a basis. How to prove this, however, I do not know.
-
I tried to fix the LaTeX, but I think there are still some mathematical typos which I don't want to guess how to correct. – Reid Barton Nov 25 2009 at 20:10
The latex compiles fine now on my computer. – Abtan Massini Nov 25 2009 at 20:21
Please add a backslash before each underscore. The first compiler, which italicizes, etc., will remove each backslash but leave the underscore, and then JSMath will do its thing. – Theo Johnson-Freyd Nov 25 2009 at 20:32
1
The LaTeX is fine, but the mathematical formula is still poorly quantified. Why is there a k in the second set, any what is the difference between nonnegative elements of Z and nonnegative elements of N? – S. Carnahan♦ Nov 25 2009 at 22:12
The accepted answer by David is, while true for formal deformations, not correct for specializations: the diamond lemma proof stalls if done the same way as for $O_q(Mat_n)$. Klimyk-Schmuedgen certainly do not prove that result for $SL_q(n)$. In fact I talked to Schmuedgen about it in about 2000 at MSRI. Greg's answer however works as I learned from Brown around 2002 via email. – Zoran Škoda Jul 25 2011 at 15:08
## 3 Answers
[edit: Following John's helpful comments below, I made this answer much more complete.]
Yes, this is the statement that $O_q(G)$ is a flat deformation of $O(G)$ for any semi-simple group G. See the book by Klimyk and Schmuedgen, "Quantum Groups and Their Representations" for a proof of this: on page 311 they state the relevant theorem for $Mat_q(n)$ (although the proof is just a reference to the original source). In the following section, they prove that det_q is central, which allows us to identify $O_q(SL_N)$ with $Mat_q(n)/(det_q-1)$. The OP asked about $SU(N)$, but in the context of algebraic groups one studies SL_N, which has a compact real form $SU(N)$, and morally the same representation theory.
In general we have to be careful when either inverting or specializing to a scalar any element in a noncommutative algebra, because this can in general drastically change the size of the algebra relative to what you'd expect from the commutative situation (it is bigger in the former case and smaller in the latter than expected). For inverting, you need the element to lie in a "denominator set", which assures that you don't have to add too many more things to invert it (imagine inverting $y$ in the free algebra $k((x,y))$ on two generators x and y: it would be a lot bigger than the vector space $k((x,y))[1/y]$). [edit: I can't get carets or braces to work, hence the awkward symbol for free algebra; I hope it's clear.] For specializing, your element should honestly lie in the center of the noncommutative algebra, since its image in the quotient will be a scalar (thus central). For instance, if you take $A^2_{q}=k((x,y))/(yx=qxy)$, this has the same basis as $A^2=k[x,y]$. However, quotienting A_q by y-1 forces x=0, which doesn't happen in A.
So far as I remember, the standard proof of the PBW theorem in this example (and many examples) relies on a technical lemma called the diamond lemma, Lemma 4.8 from KS, which gives an ordering on the monomials of O_q(G) compatible with the defining relations, allowing one to prove the existence of PBW basis.
-
Where exactly in Klimyk and Schmuedgen is the proof? I'm having trouble finding it. – John McCarthy Feb 20 2010 at 18:03
Hi John, on page 311, they prove the claim for O_q(Mat_n) (well I should have been more careful; this book rarely proves things, but rather points to precise locations in the original literature. Their proof in this case consists of 3 citations). From that claim about O_q(Mat_N), together with the proof in the section following that the quantum determinant is central, one can conclude the PBW property for O_q(SL_N). I also find on page 103 a brief overview of the diamond lemma, also with references. – David Jordan Feb 21 2010 at 18:31
I also recently found sbseminar.wordpress.com/2009/11/20/… while trying to understand the diamond lemma more completely. It has a really nice exposition, and references to a good original source. – David Jordan Feb 21 2010 at 18:32
Sorry, but I don't understand what you mean by "which allows us to identify O_q(SL_N) with Mat_q(n)/." – John McCarthy Feb 22 2010 at 21:53
sorry, tex error =[. The js tex compiler doesn't like underscores or slashes. I'll fix it now. – David Jordan Feb 22 2010 at 22:17
The question is slightly misstated: from the example for $n=2$ it is seen that the question is about $SL_q(n)$ and not $SU_q(n)$; the answer is of course those standard normally ordered monomials
$(t^1_1)^{a_{11}}(t^1_2)^{a_{12}}\cdots(t^n_n)^{a_{nn}}$
satisfying the condition that at least one of the diagonal exponents $a_{ii}$ is zero. Unlike in $O_q(M_n)$, a literal application of Bergman's diamond lemma does not produce the algorithm, because the diagonal entries are not next to one another, so if one wants to exclude the extra diagonal occurrences one needs to go against the semigroup law. This is possible to do with great effort; I checked this in 1999 with lots of algorithmic combinatorics, namely the set of reductions used is infinite and given algorithmically rather than by explicit formulas. Unlike the general rule advised by Bergman, it is not wise in the straight diamond-lemma approach to exclude the nested ambiguities. Some other Gröbner arguments not relying on the standard diamond lemma can give an easy answer, though.
For generic $q$ it is of course enough to use the classical commutative case and deformation arguments (as alluded to in David's answer).
It is not true, as stated above in the accepted answer, that the simple technique for $O_q(M_n)$ via the diamond lemma, with the relations taken as reductions, works when setting $\det_q = 1$. Imagine you have the expression $(x^1_1)^2 (x^2_2)^2 (x^3_3)^2$ in $SL_q(3)$. How will you use the centrality of the quantum determinant to translate this into something that does not contain all three diagonal generators? You first need to rearrange things in order to complete to a quantum determinant and so exclude a bad diagonal generator, but this is not very compatible with the ordering. It can be done systematically, but it is by no means trivial or implied by the Klimyk-Schmuedgen book.
-
This answer is not really all that different from David Jordan's answer, but is a somewhat different take. The coordinate ring $O_q(M_n)$, for all matrices, is an example of a multivariate "skew polynomial ring", which means a graded or filtered ring with certain axioms implying that the monomials indexed by vectors of exponents form a basis. (I am getting this from "Noncommutative Gröbner bases and filtered-graded transfer", by Huishi Li.) In any ring like this, there is a theory of Gröbner bases for any ideal, in particular for the determinant ideal. If you eliminate the monomials that contain the leading term of a Gröbner generator, then the remaining ones are a basis for the quotient ring. It is an equivalent answer, but more general, because it boils down to the same diamond lemma.
-
I'm very glad you added this reference, which I didn't know about. The one I provided gives a brief explanation in this special case, but I didn't know there was a whole book full of tricks. Thanks! – David Jordan Nov 26 2009 at 17:21
http://physics.stackexchange.com/questions/tagged/higgs-mechanism?sort=active&pagesize=15
# Tagged Questions
2answers
231 views
### Your Mass is NOT from Higgs Boson
Your Mass is NOT from Higgs Boson? http://www.youtube.com/watch?v=Ztc6QPNUqls This guy can't be correct, right? He argues that because mostly of a nucleus' mass is made out of the space between ...
0answers
127 views
### Field content and symmetry groups of Minimal Composite Higgs Models
I'm trying to teach myself the Composite Higgs Model, both its theory and its LHC phenomenology (particularly the 4DCHM). Unfortunately, I'm struggling; the literature is contradictory and/or omits ...
2answers
81 views
### What is the process that gives mass to free relativitic particles?
When a free particle move in space with a known momentum and energy then what is the physical process that gives mass to that free (relativistic) particle? What is role does the Higgs field in that ...
1answer
73 views
### How to find the Higgs coupling with a mixing matrix?
It is known that the couplings to the Higgs are proportional to the mass for fermions; $$g_{hff}=\frac{M_f}{v}$$ where $v$ is the VEV of the Higgs field. I'm trying to figure out why this is true ...
0answers
55 views
### Higgs boson/field symmetries and local symmetries
In the SM with gauge group U(1)xSU(2)xSU(3), those factors are associated to the gauge bosons associated with a local symmetry and the Higgs field provides masses to the elementary fermions AND the ...
1answer
165 views
### What breaks the symmetry between the electromagnetic and weak nuclear force?
I know the electromagnetic force is mediated by a photon and the weak nuclear force is mediated by two massive bosons. Are there any other insights into why the masses are so different?
1answer
116 views
### Does the Higgs mechanism address the spin statistics problem?
Since the Higgs mechanism is so intimately tied to binding together massless chiral fermions, does it happen to have anything to say about the spin statistics issue? I'm actually assuming the answer ...
2answers
3k views
### How does the Higgs mechanism work?
I'm not a particle physicist, but I did manage to get through the Feynman lectures without getting too lost. Is there a way to explain how the Higgs field works, in a way that people like me might ...
1answer
141 views
### Why do some particles have a greater mass than others?
The property of mass that almost every particle possesses comes from the Higgs Field. It is this field, which permeates all of space, that particles interact with and hence obtain mass. But why do ...
1answer
76 views
### Pauli-Villars (PV) regularisation breaks supersymmetry. How to see that?
Does the PV regulator breaks SUSY? Take for instance the 1-loop (top/stop loops) correction to the Higgs squared-mass parameter in the MSSM, and you'll get something like, \delta m^2_{h_u} = - ...
4answers
174 views
### The building blocks of energy
I have a couple of related questions that have been bothering me for a while. They might sound unscientific, but here is goes: What are the building blocks of energy? What does energy consist of? Is ...
0answers
113 views
### Do all the particles acquire mass in the Standard Model due to the Higgs mechanism only?
I know that a mass term for an intermediate boson is not compatible with the gauge symmetry. But in principle a mass term for the electron field does not violate a gauge symmetry. However to build an ...
1answer
57 views
### N=2 SSM without a Higgs
In arXiv:1012.5099, section III, the authors describe a supersymmetric extension to the standard model in which there is no Higgs sector at all, in the conventional sense. The up-type Higgs is a ...
1answer
187 views
### Technical naturalness of Yukawa couplings
Naturalness in the sense of 't Hooft tell us that a small parameter is a signal of a symmetry such that the parameter will be zero when the symmetry is exact. I am puzzled about how this principle is ...
1answer
245 views
### Spontaneous symmetry breaking in SU(5) GUT?
At the end of this video lecture about grand unified theories, Prof. Susskind explains that there should be some kind of an additional Higgs mechanism at work, to break the symmetry between the ...
1answer
156 views
### why are two higgs doublets required in SUSY?
I can't really understand why two higgs doublets are required in SUSY. From the literature, I have found opaque explanations that say something along the lines of: the superpotential W must be a ...
0answers
54 views
### In a composite Higgs, should the Z0 decay at the same rate that the neutral pion?
I am sorry to ask a question that obviously is a sum of separate questions about a process: Should the decay rate of Z0 be related to the decay of the "higgs" composite field it is eating? And, should ...
1answer
85 views
### In MSSM, are the higgs fields eaten by Z,W scalars or pseudoscalars?
I am very puzzled by the quintet of Higgs bosons in the MSSM: two charged, two scalars and a pseudoscalar. I wonder if they could be understood better if they were considered jointly with the three ...
1answer
330 views
### Does the Higgs Mechanism contradict Entropic Gravity?
Does the Higgs Mechanism contradict Entropic Gravity? It seems like it probably does. But then again, one is a microscopic theory and the other is macroscopic. Can they live together in harmony? or ...
1answer
276 views
### Why is mass renormalization insufficient to explain electron mass?
In the Standard Model, I understand that the mass of the electron is assume to arise from two effects: A bare mass given by Yukawa interaction with the Higgs field, and A mass correction from mass ...
1answer
93 views
### Origin of Higgs ghosts
In M. Veltman's Diagrammatica, appendix E, one can find the full Standard Model lagrangian. Some sectors (e.g fermion-Higgs and weak sectors) contain so-called Higgs ghosts $\phi^+,\phi^-$ and ...
1answer
371 views
### Why is the lightest Higgs not a free parameter in SUSY?
In the Standard Model, the Higgs mass doesn't really have any theoretical constraints. It could have basically any value and nothing 'breaks'. However, in MSSM models, we often see the tree level ...
4answers
911 views
### What is the need for the Higgs mechanism and electroweak unification?
The Higgs mechanism allows massless fields to acquire mass through their coupling to a scalar field. But if the masses cannot be predicted because the couplings have to be fixed, what really is the ...
4answers
813 views
### Is there an accepted analogy/conceptual aid for the Higgs field?
Is there an accepted analogy / conceptual aid for the Higgs field? In Physics there are many accepted conceptual aids such as * Schrödinger's cat * Maxwell's Demon * I'm sure I'm missing ...
1answer
129 views
### Is this a good explanation of the Higgs mechanism? [duplicate]
Possible Duplicate: Is there an accepted analogy/conceptual aid for the Higgs field? A video I watched explains the Higgs mechanism as follows. Take massless particles. These can only ever ...
0answers
66 views
### Could one theoretically build the Higgs equivalent of a Faraday cage?
My understanding is, within quantum mechanics, in a pure vacuum, all known fields have a lowest energy state of zero. The Higgs field is the only exception -- it's lowest energy state is not zero. ...
2answers
147 views
### do Higgs Bosons happen in nature all the time? Rarely? Or do they only happen when the Higgs field is excited in a particle accelerator?
I'm trying to reconcile an apparent contradiction between explanations given by Dr. Cox in 2009 and 2012, and those given by a panel of Berkeley professors. I'm not a physicist, and so I realize this ...
0answers
147 views
### Describing the Higgs mechanism to non-particle physicists
I'm sure I'm not the only person with this problem at the moment. I have been asked to give a public (not quite public, scientists, just not physicists) about 'this Higgs boson thing'. I am trying to ...
1answer
260 views
### What is the relationship between the Higgs field and quarks?
I have some difficulty considering the relative size of each and the meaning behind the shape of Higgs boson. I ask relating to the structures of both the Higgs field and quarks. How is it that the ...
1answer
180 views
### What gives matter Gravitational Mass? [duplicate]
Possible Duplicate: Does the equivalence between inertial and gravitational mass imply anything about the Higgs mechanism? In Higgs mechanism, Higgs field, which likes syrup, slows down ...
3answers
249 views
### How come a photon acts like it has mass in a superconducting field?
I've heard the Higgs mechanism explained as analogous to the reason that a photon acts like it has mass in a superconducting field. However, that's not too helpful if I don't understand the latter. ...
3answers
222 views
### Higgs Boson: The Big Picture
First, please pardon the ignorance behind this question. I know a fair amount of math but almost no physics. I'm hoping someone can give me a brief "big picture" explanation of how physicists were ...
3answers
147 views
### Charge Analog of the Higgs Boson?
Since mass can be given to particles via the interaction with the Higgs Field could there be a "Charger Field" that supplies particles with charge? Possibly this would require two different "charger ...
2answers
1k views
### Why do we need Higgs field to re-explain mass, but not charge?
We already had definition of mass based on gravitational interactions since before Higgs. It's similar to charge which is defined based on electromagnetic interactions of particles. Why did Higgs ...
1answer
216 views
### Higgs potential
The potential for the Higgs field is standard a quartic one (Mexican hat). Is this done for simplicity or are there fundamental reasons for this choice? I can imagine further contributions to this ...
2answers
127 views
### Do particles gain mass only at energy levels found during the big bang?
I am trying to make sure my understanding is correct. At energies and temperatures found during the big bang (or at CERN recently), the Higgs mechanism comes into effect. When it does, there is a ...
1answer
116 views
### Lepton masses in the Standard Model
Some simple questions regarding leptonic masses in the Standard Model (SM): Why there is not an explicit mass term in addition to the effective mass term that arises from the Yukawa terms after ...
0answers
109 views
### Relation among anomaly, unitarity bound and renormalizability
There is something I'm not sure about that has come up in a comment to other question: Why do we not have spin greater than 2? It's a good question--- the violation of renormalizability is linked ...
2answers
411 views
### What sort of “mass” is explained by the Higgs mechanism?
When I asked this question (probably in a less neutral form) to physicists, their answer was something along the lines that it's not gravity (i.e. unrelated to gravitons) but inertial mass. (So I ...
1answer
167 views
### How can coupling with the higgs field slow a particle down?
Quantum Diaries has an interesting introduction to the higgs. It makes it seem like the way that the higgs field gives mass to particles is via all of the interactions with virtual higgs particles. ...
1answer
318 views
### Higgs field requires a large cosmological constant — does the Zero Point Field balance it?
I just read Wolfram's blog post on the Higgs discovery. Still, there’s another problem. To get the observed particle masses, the background Higgs field that exists throughout the universe has to ...
1answer
152 views
### How to calculate Rest Mass practically with Standard Model?
With relativistic physics, we can apply force to see resistance against acceleration. It'd give us relativistic mass and we have well established formula to get to the Rest Mass as long as we know the ...
3answers
256 views
### Does Standard Model confirm that mass assigned by Higgs Mechanism creates gravitational field?
I am not comparing passive gravitational mass with rest inertial mass. Is there an evidence in Standard Model which says that active gravitational mass is essentially mass assigned by Higgs mechanism. ...
http://math.stackexchange.com/questions/92916/vertex-arrangement-on-the-unit-sphere/93575
# Vertex arrangement on the unit sphere
The problem is: how can I solve the following in polynomial time? There is a graph $G$ with $n$ vertices, and the goal is to find an arrangement of its vertices on an $n$-dimensional unit sphere so as to maximize the sum of the angles made by the edges. Angles should always be in the range $[0, \pi]$. The trouble is that I can't find any similar well-known problem that can be solved in polynomial time. I will appreciate any help. Thanks!
-
## 1 Answer
I wouldn't expect this to have a closed-form solution; if it doesn't, the question of solving it in polynomial time doesn't arise.
If the number of vertices were large compared to the number of dimensions, I'd expect it to be a difficult global optimization problem with lots of local maxima. However, since you have as many vertices as dimensions, there's room for the vertices to get out of each other's way, and you may be able to find the global maximum, or at least a satisfactory local maximum, by starting out with the $n$ vertices far apart, for instance at the vertices of a regular $(n+1)$-simplex (with $2$ vertices of the simplex left unoccupied, assuming that by "$n$-dimensional unit sphere" you mean the unit $n$-sphere $S^n$ in $n+1$ dimensions), and then applying your favourite global optimization algorithm. If I'm right, then you should be able to get by with a simple gradient search, similar to the one I described at Gradient Descent with constraints, but simpler since you don't have orthogonality constraints.
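For what it's worth, here is a rough numpy sketch of the kind of gradient search described above. The toy graph, step size, and iteration count are arbitrary choices of mine, and the constraint is handled naively by renormalising each row back onto the sphere after every step rather than working in the tangent space.

```python
# Naive projected gradient ascent for max sum of edge angles on the unit sphere.
import numpy as np

def sum_of_angles(V, edges):
    return sum(np.arccos(np.clip(V[i] @ V[j], -1.0, 1.0)) for i, j in edges)

def maximize_angles(n, edges, dim=None, steps=2000, lr=0.01, seed=0):
    dim = dim or n + 1                       # the unit n-sphere S^n sits in R^(n+1)
    rng = np.random.default_rng(seed)
    V = rng.normal(size=(n, dim))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    for _ in range(steps):
        grad = np.zeros_like(V)
        for i, j in edges:
            c = np.clip(V[i] @ V[j], -1.0 + 1e-9, 1.0 - 1e-9)
            g = -1.0 / np.sqrt(1.0 - c * c)  # derivative of arccos at c
            grad[i] += g * V[j]
            grad[j] += g * V[i]
        V += lr * grad / (np.linalg.norm(grad) + 1e-12)   # normalised ascent step
        V /= np.linalg.norm(V, axis=1, keepdims=True)     # project back onto the sphere
    return V, sum_of_angles(V, edges)

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]     # a made-up graph on 4 vertices
V, total = maximize_angles(4, edges)
print(total)
```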
-
http://physics.stackexchange.com/questions/38490/ordering-ambiguity-in-quantum-hamiltonian/38700
# Ordering Ambiguity in Quantum Hamiltonian
While dealing with General Sigma models (See e.g. Ref. 1)
$$\tag{10.67} S ~=~ \frac{1}{2}\int \! dt ~g_{ij}(X)\, \dot{X}^i \dot{X}^j,$$
where the Riemann metric can be expanded as,
$$\tag{10.68} g_{ij}(X) ~=~ \delta_{ij} + C_{ijkl}X^kX^l+ \ldots$$ The Hamiltonian is given by,
$$H ~=~ \frac{1}{2} g^{ij}(X) P_{i}P_{j}.$$
The authors say that in quantum theory the above expression is ambiguous, because $X$ and $P$ don't commute. Hence there are many nonequivalent quantum choices for $H$ that reduce to the same classical object. I am not able to figure this out.
Also, this Hamiltonian is said to be related to the Laplacian, which I am not able to understand. Why? This Hamiltonian can be related to the Laplacian if $g^{ij}$ is the usual $\eta^{ij}$. Do the authors want to say that in some atlas we can always find local coordinates in which $g^{ij}$ reduces to $\eta^{ij}$, or is there a more general definition of the Laplacian which I am unaware of?
References:
1. K. Hori, S. Katz, A. Klemm, R. Pandharipande, R. Thomas, C. Vafa, R. Vakil, and E. Zaslow, Mirror Symmetry, 2003, chapter 10, eqs. 10.67-10.68. The pdf file is available here or here.
-
The operator is clearly "equal" to a Laplacian only if the metric $g$ is flat and positively definite. Otherwise it's just similar, that's why they say it's "related". Also, there are ordering ambiguities because $g^{ij}$ are functions of $X$ which don't commute with $P$. Does it answer all your questions? – Luboš Motl Sep 27 '12 at 15:33
@LubošMotl : Ok I get the "related" part now. Thanks for that interpretation, but is there an example of showing that two different definitions of Hamiltonian leads to same classical Hamiltonian. May be considering different definitions of momenta leading to same classical Hamiltonian. Earlier in the text, conjugate momentum was defined as $P_i = \dfrac{\delta S}{\delta \dot{X}^i} = g_{ij} \partial_t X^j$ – Jaswin Sep 27 '12 at 17:17
Dear Jaswin, differently ordered products of operators (those that exist classically) always differ by terms proportional to $\hbar$ or its positive powers, so in the classical $\hbar\to 0$ limit, they're the same. I would have to go to higher, 5th order polynomials for a good example. – Luboš Motl Sep 28 '12 at 4:28
@LubošMotl : Yes, now I could figure it out, probably he meant that $\dfrac{1}{2} g^{ij} P_iP_j \neq \dfrac{1}{2} P_i P_j g^{ij}$ quantum mechanically, but classically it is true. – Jaswin Sep 28 '12 at 5:23
– Jaswin Sep 28 '12 at 5:24
## 2 Answers
Well, the metric on the target space (not to be confused with the spacetime metric) $g_{ij}$ looks like $$g_{ij}\sim \delta_{ij}+(C_{ijkl}X^{k}X^{l})+\mathcal{O}(X^{4}).$$ We can invert this, obtaining ("for small $X$") $$g^{ij}\sim \delta^{ij}-{D^{ij}}_{kl}X^{k}X^{l}+\mathcal{O}(X^{4})$$ where ${D^{ij}}_{kl}$ are "some coefficients" we could figure out if forced to.
Really, to prove operator ordering ambiguity in the Hamiltonian, you just have to show that $$H\approx g^{ij}P_{i}P_{j} = \delta^{ij}P_{i}P_{j}-{D^{ij}}_{kl}X^{k}X^{l}P_{i}P_{j}+\mathcal{O}(X^{4}P^{2})$$ has ambiguities when quantized.
How? Well, consider the simpler case of a one-dimensional particle. Using the Leibniz properties of the Poisson bracket, $$\{fg,h\}=f\{g,h\}+g\{f,h\},\quad\mbox{and}\quad\{f,gh\}=g\{f,h\}+h\{f,g\},$$ a little algebra with $\{x^{3},p^{3}\}$ and $\{x^{2}p,xp^{2}\}$ gives the classical identities $$\tag{1}x^{2}p^{2}=(xp)^{2}=\tfrac{1}{9}\{x^{3},p^{3}\}=\tfrac{1}{3}\{x^{2}p,xp^{2}\}.$$ But when (1) is quantized, these equalities fail badly. It's unclear (or ambiguous) which expression is the important one to quantize, and how to do it.
In other words, if we have quantization as a map $$Q:\mathrm{classical}\to\mathrm{quantum}$$ satisfying:
1. quantization "puts hats" on position and momentum: $Q(x)=\widehat{x}$ and $Q(p)=\widehat{p}$, and are "represented irreducibly" (this is a technical condition, don't worry too much about it!);
2. $Q$ is linear, so $Q(c_{1}f+c_{2}g)=c_{1}Q(f)+c_{2}Q(g)$ where $f,g$ are functions of momentum and position;
3. Poisson brackets become $\displaystyle Q(\{f,g\})=\frac{1}{\mathrm{i}\hbar}[Q(f),Q(g)]$;
4. The number 1 is mapped to the identity operator $Q(1)=\mathrm{id}$.
We have problems trying to evaluate $Q(x^{2}p^{2})$. Do we have $$Q(x^{2}p^{2})\stackrel{??}{=}Q(x)^{2}Q(p)^{2}\stackrel{??}{=}Q(xp)^{2}?$$ What happens to equation (1)? It's ambiguous :(
For more on operator ordering ambiguities, see S. Twareque Ali, Miroslav Engliš "Quantization Methods: A Guide for Physicists and Analysts" arXiv:math-ph/0405065.
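If it helps to see the ambiguity numerically, here is a small illustration of my own (not from the references above): represent $x$ and $p$ by truncated harmonic-oscillator matrices, so that $[x,p]=i\hbar$ holds away from the truncation edge, and compare a few orderings of the classical quantity $x^2p^2$. They differ by terms of order $\hbar$, exactly as in the $\hbar\to 0$ discussion in the comments on the question.

```python
# Truncated harmonic-oscillator matrices for x and p, used to compare
# different operator orderings of the classical quantity x^2 p^2.
import numpy as np

hbar, N = 1.0, 40
a = np.diag(np.sqrt(np.arange(1, N)), k=1)          # annihilation operator
x = np.sqrt(hbar / 2) * (a + a.conj().T)
p = 1j * np.sqrt(hbar / 2) * (a.conj().T - a)

A = x @ x @ p @ p                                   # "Q(x)^2 Q(p)^2"
B = (x @ p) @ (x @ p)                               # "Q(xp)^2"
C = 0.5 * (x @ x @ p @ p + p @ p @ x @ x)           # a symmetrized choice

k = N // 2                                          # stay away from the truncation edge
print(abs(A[:k, :k] - B[:k, :k]).max())             # nonzero, of order hbar
print(abs(A[:k, :k] - C[:k, :k]).max())             # nonzero, of order hbar
print(abs((x @ p - p @ x)[:k, :k] - 1j * hbar * np.eye(k)).max())  # checks [x,p] = i hbar
```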
Also this Hamiltonian is related to Laplacian, which I am not able to understand, why ?
When we work with a linear sigma model, we have $g_{ij}=\delta_{ij}$ and we recover the usual Hamiltonian as the Laplacian (up to some constant).
This can be seen from the formula, and noting in this particular case $g^{ij}=\delta^{ij}$ so we find $$H = \frac{1}{2}\delta^{ij}P_{i}P_{j} = \frac{1}{2}P^{i}P_{i}$$ Again, up to some constant. (See equation (10.70) of the book you're reading, and you find $P_{i}=\mathrm{i}\partial/\partial X^{i}$)
And again do not confuse the "target space metric" $g_{ij}$ with the "spacetime metric" which I think you denote by $\eta_{ij}$ (later on in the book, I think the authors use $h_{ij}$ for the "spacetime metric").
-
Thanks a lot Nelson – Jaswin Oct 1 '12 at 14:25
The authors say that in quantum theory the above expression is ambiguous, because X and P don't commute. Hence there are many nonequivalent quantum choices for H reduces to the same classical object. I am not able to figure this out.
If you review the text, you'll see the authors have provided the commutation relation in equation 10.70 as:
$$[X^i,P_j] = i\delta_j^i$$
which tells you that $X$ and $P$ do not commute and that their commutator is a purely imaginary multiple of the identity (for a fixed index $i$ this reads $X^iP_i-P_iX^i = i$, with no sum over $i$).
The equation you reference is actually written as:
$$H = \dfrac{1}{2}g^{ij}(X)P_iP_j$$
The position variable $X$ is important since momentum is classically understood as $\text{mass} \times \text{velocity}$, so $p^2 = m^2v^2$ and kinetic energy is $\dfrac{1}{2}mv^2$. So $g^{ij}(X)$ is taking the place of an inverse mass term in order to stay consistent with the classically defined energy equation.
The problem arises if someone is trying to solve for an unknown value in the equation. Imagine that you know $H$ and $X$ and want to solve for $P$: you might get some answer for $P$ and assume everything is hunky dory. However, if you then take the value of $P$ you just calculated, together with the value of $H$, and try to solve for $X$, you run into problems: because $X$ and $P$ are non-commutative, you will not get the same value of $X$ you started out with, and as defined you have to include an imaginary component.
This is what is meant by ambiguity: knowing two components of the equation will not give you a definite value for the third, only a range of values.
As far as the relationship to the Laplacian, if one looks at the Schrodinger equation one can see that the kinetic energy operator term is:
$$-\dfrac{\hbar^2}{2m}\nabla^2$$
so if one compares this to the equation you reference, it should be clear that the momentum symbols (P) are taking the place of the Laplacian operator.
-
http://stats.stackexchange.com/questions/23753/use-of-autoregressive-metric-for-arima-clustering-and-analysis
# Use of autoregressive metric for ARIMA clustering and analysis
I wonder if anyone has put to use the autoregressive metric for ARIMA clustering proposed by Corduas and Piccolo (2008). The authors define the autoregressive distance between two processes $X, Y$ as:
$$\text{d}(X,Y)=\left[\sum_{j=1}^m \left(\pi_{j,x}-\pi_{j,y}\right)^2\right]^{1/2}\qquad (1)$$
So, in standard ARIMA notation, the squared distance between the two processes $X,Y$ is:
$$\text{d}(X,Y)^2=\frac{(\phi_x-\theta_x)^2}{1-\theta_x^2}+\frac{(\phi_y-\theta_y)^2}{1-\theta_y^2}-\frac{2(\phi_x-\theta_x)(\phi_y-\theta_y)}{1-\theta_x\theta_y}\qquad (2)$$
So I have a few questions.
1. Does $j$ in (1) index the first $m$ ACF and PACF values taken over $N$ lags? If that's the case, why do we have to fit a particular ARIMA model to the time series beforehand?
2. For an ARIMA(0,0,1) model, is the $\phi$ parameter in (2) equal to 1?
3. For an ARIMA(1,0,0) model, is the $\theta$ parameter in (2) equal to 0?
4. In the case of a very large value of $\theta_x$ or $\theta_y$ (close to 1), we get a "ridiculously" large value for the squared distance (as described by (2)). Does this have a qualitative meaning?
I'm writing the necessary routines in R (for the distance-matrix calculation). I'm pretty confident nothing is implemented yet, but you never know for sure. However, if anyone knows more or is interested in this area, I'll gladly share my experience.
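For concreteness, here is a sketch of the distance-matrix computation (in Python rather than the R I am actually using) for the ARMA(1,1) case: the AR($\infty$) weights are $\pi_j=(\phi-\theta)\theta^{j-1}$, which is exactly where the closed form in (2) comes from. The parameter values below are made up, and I take a square root at the end so that the result is a metric; drop it if you want the squared distance of (2). The resulting matrix can then be handed to whatever hierarchical clustering routine you prefer.

```python
# AR-metric distance matrix for a list of fitted ARMA(1,1) models,
# using the truncated AR(infinity) pi-weights pi_j = (phi - theta) * theta**(j-1).
import numpy as np

def pi_weights(phi, theta, m=50):
    j = np.arange(1, m + 1)
    return (phi - theta) * theta ** (j - 1)

def ar_distance_matrix(params, m=50):
    """params: list of (phi, theta) pairs, one per fitted ARMA(1,1) model."""
    P = np.array([pi_weights(phi, theta, m) for phi, theta in params])
    n = len(params)
    D = np.zeros((n, n))
    for i in range(n):
        for k in range(i + 1, n):
            D[i, k] = D[k, i] = np.sqrt(np.sum((P[i] - P[k]) ** 2))
    return D

models = [(0.7, 0.2), (0.5, 0.1), (-0.3, 0.4)]   # made-up (phi, theta) pairs
print(ar_distance_matrix(models))
```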
-
http://mathoverflow.net/revisions/79420/list
## Return to Question
4 Addenda (following Yemon Choi's comments)
Question (edited on 10/29/2011). What's known about comprehensive generalisations of Gelfand's spectral theory for unital [associative] normed algebras [over the real or complex field] (*)?
Here, a generalisation should be understood as a framework, say, with the following distinctive features (among others):
• It should be founded on somehow different bases than the classical theory - especially to the extent that the notion itself of spectrum isn't any longer defined in terms of, and cannot be reduced to, the existence of any inverse in some unital algebra.
• It should recover (at least basic) notions and results from the classical theory for unital Banach algebras in some appropriate incarnation (more details on this point are given below), for which the "generalised spectrum" does reproduce the classical one.
• It should be insensitive to completeness [under suitable mild hypotheses] in any setting where completeness is a well-defined notion (**), so yielding as a particular outcome that an element of a unital normed algebra, $\mathfrak{A}$, shares the same spectrum as its image in the Banach completion of $\mathfrak{A}$.
• (*) If useful to know, my absolute reference here is the (let me say) wonderful book by Charles E. Rickart: General Theory of Banach Algebras (Academic Press, 1970).
(**) At least in principle, the kind of generalisation that I've in mind is tailored on the properties of topological vector spaces, though I've worked it out only in the restricted case of normed spaces.
In the end, my motivation for this long post is that I've seemingly developed (the basics of) something resembling a spectral theory for linear (possibly unbounded) operators between different normed spaces, indeed linear (possibly discontinuous) operators between different topological vector spaces. To me, this stuff looks like a sharpening of the classical theory in that it removes some of its "defects" (including the one addressed above); and also as an abstraction since, on the one hand, it puts standard notions from the operator setting (such as the ones of eigenvalue, continuous spectrum, and approximate spectrum) on a somehow different ground (so possibly foreshadowing further generalisations) and, on the other, it recovers familiar results (such as the closeness, the boundedness, and the compactness of the spectrum as well as the fact that all the points in the boundary are approximate eigenvalues) as a special case (while revealing some (unexpected?) dependencies).
3 More minor corrections
Now, taking in mind (some parts of) another thread on this board about "wrong" definitions in mathematics, we are likely to agree that the worth of a notion is also measured by its sharpness (let me be vague on this point for the moment). And the classical notion of spectrum is, in fact, so successful because it is sharp in an appropriate sense, to the extent that it reveals deep underlying connections, say, between the algebraic and topological structures of a complicated object such as a Banach algebra (which is definitely magic, at least in my view). On another hand, what struck my curiosity is the consideration that the same conclusion doesn't hold (not at least with the same consistency) if Banach algebras are replaced by arbitrary (i.e. possibly incomplete) normed algebras, where the spectrum of a given element, $\mathfrak{a}$, can be scattered through the whole complex plane (in the complete case, as it is well-known, the spectrum is bounded by the norm of $\mathfrak{a}$, and indeed compact). So the question is: Why does this happen? And my answer is: essentially because the classical notion of spectrum is too algebraic, though completeness can actually conceal its true nature and make us even forgetful of it, or at least convinced that it must not be really so algebraic (despite of its own definition!) if it can dialogue so well with the topological structure. Yes, any normed algebra can be isometrically embedded (as a dense subalgebra) into a Banach one, but I don't think this makes a difference in what I'm trying to say, and it does not seriously explain anything. Clearly enough, the problem stems from the general failure in the convergence of the Neumann series $\sum_{n=0}^\infty (k^{-1}\mathfrak{a})^n$ for $k$ an arbitrary scalar with modulus greater than the norm of $\mathfrak{a}$. And why this? Because the convergence of such a Neumann series follows from the cauchyness of its partial sums, which is not a sufficient condition to convergence as far as the algebra is incomplete. According to my humble opinion, this is something like a "bug" in the classical vision, but above all an opportunity for getting a better understanding of some facts.
In the end, my motivation for this long post is that I've seemingly developed (the basics of) something like a spectral theory for linear (possibly unbounded) operators between different normed spaces, indeed linear (possibly discontinuous) operators between different topological vector spaces. To me, this stuff looks like a sharpening of the classical theory in that it removes some of its "defects" (including the one addressed above); and also as an abstraction since, on the one hand, it puts standard notions from the operator setting (such as the ones of eigenvalue, continuous spectrum, and approximate spectrum) on a somehow different ground (so possibly foreshadowing further generalisations) and, on the other, it recovers familiar results (such as the closeness, the boundedness, and the compactness of the spectrum as well as the fact that all the points in the boundary are approximate eigenvalues) as a special case (while revealing some (unexpected?) dependencies).
2 Minor corrections
Now, it is undoubtable that Gelfand's work has deeply influenced the subsequent developments of spectral theory (and, accordingly, functional analysis). Yet, as far as I can understand in my own small way, something is still missing in this (wonderful) picture. I mean, something which may still be done, on the one hand, to clean up the notion itself of spectrum (as given in the classical framework of normed algebras) of some inherent "defects" (or better fragilities) of the classical theory and, on the other, to make it more abstract and, then, portable to different contexts.
1
# Generalising Gelfand's spectral theory
This is primarily a request for references and advices.
Historical background.
As acknowledged by Jean Dieudonné in his History of Functional Analysis, the notion of spectrum (along with the foundation of modern spectral theory) was first introduced by David Hilbert in a series of articles inspired by Fredholm's celebrated work on integral equations (the word spectrum seems to have been lent by Hilbert from an 1897 article by Wilhelm Wirtinger) in the effort of lifting properties and notions from matrix theory to the broader (and more abstract) framework of linear operators. Especially, this led Hilbert to the discovery of complete inner product spaces (what we call, today, Hilbert spaces just in his honour). In 1906, Hilbert himself extended his previous analysis and discovered the continuous spectrum (already present but not fully recognised in earlier work by George William Hill in connection with his own study of periodic Sturm-Liouville equations).
A few years later, Frigyes Riesz introduced the concept of an algebra of operators in a series of articles culminating in a 1913 book, where Riesz studied, among the other things, the algebra of bounded operators on the separable Hilbert space. In 1916 Riesz himself created the theory of what we call nowadays compact operators. Riesz's spectral theorem was the basis for the definitive discovery of the spectral theorem of self-adjoint (and more generally normal) operators, which was simultaneously accomplished by Marshall Stone and John von Neumann in 1929-1932.
The year 1932 is another important date in this story, as it saw the publication of the very first monography on operator theory, by Stefan Banach. The systematic work of Banach gave new impulse to the development of the field and almost surely influenced the later work of von Neumann on the theory of operator algebras (developed, partly with Francis Joseph Murray, in a series of articles starting from 1935). Then came the seminal work of Israil Gelfand (partly in collaboration with Georgi E. Shilov and Mark Naimark), who introduced Banach algebras (under the naming of normed rings) and elaborated the corresponding notion of spectrum starting with a 1941 article in Matematicheskii Sbornik. Now, it is undoubtable that Gelfand's work has deeply influenced the subsequent developments of spectral theory (and, accordingly, functional analysis).
Yet, as far as I can understand in my own small way, something is still missing in this (wonderful) picture, something which may still be done, on the one hand, to clean up the notion itself of spectrum (as given in the classical framework of normed algebras) of some inherent "defects" (or better fragilities) and, on the other, to make it more abstract and, then, portable to different contexts.
Naïve stuff.
As I learned from an anonymous user on MO (here), the term spectrum, in operator theory as well as in the context of normed algebras, is seemingly derived from the Latin verb spècere ("to see"), from which the root spec- of the Latin word spectrum ("something that appears, that manifests itself, vision"). Furthermore, the suffix -trum in spec-trum may come from the Latin verb instruo (like in the English word "instrument", which follows in turn the Latin noun instrumentum). So, the classical (or, herein, Gelfand) spectrum may be really considered, even from an etymological perspective, as a tool to inspect (or get improved knowledge of) some properties. I like to think of it as a sort of magnifying glass; we can move it through the algebra, zoom in and out on its elements, and get local information about them and/or global information about the whole structure.
Now, taking in mind (some parts of) another thread on this board about "wrong" definitions in mathematics, we are likely to agree that the worth of a notion is also measured by its sharpness (let me be vague on this point for the moment). And the classical notion of spectrum is, in fact, so successful because it is sharp in an appropriate sense, to the extent that it reveals deep underlying connections, say, between the algebraic and topological structures of a complicated object such as a Banach algebra (which is definitely magic, at least in my view). On another hand, what struck my curiosity is the consideration that the same conclusion doesn't hold (not at least with the same consistency) if Banach algebras are replaced by arbitrary (i.e. possibly incomplete) normed algebras, where the spectrum of a given element, $\mathfrak{a}$, can be scattered through the whole complex plane (in the complete case, as it is well-known, the spectrum is bounded by the norm of $a$, and indeed compact). So the question is: Why does this happen? And my answer is: essentially because the classical notion of spectrum is too algebraic, though completeness can actually conceal its true nature and make us even forgetful of it, or at least convinced that it must not be really so algebraic (despite of its own definition!) if it can dialogue so well with the topological structure. Yes, any normed algebra can be isometrically embedded (as a dense subalgebra) into a Banach one, but I don't think this makes a difference in what I'm trying to say, and it does not seriously explain anything. Clearly enough, the problem stems from the general failure in the convergence of the Neumann series $\sum_{n=0}^\infty (k^{-1}a)^n$ for $k$ an arbitrary scalar with modulus greater than the norm of $a$. And why this? Because the convergence of such a Neumann series follows from the cauchyness of its partial sums, which is not a sufficient condition to convergence as far as the algebra is incomplete. According to my humble opinion, this is something like a "bug" in the classical vision, but above all an opportunity for getting a better understanding of some facts.
Motivations.
In the end, my motivation for this long post is that I've seemingly developed (the basics of) something like a spectral theory for linear (possibly unbounded) operators between different normed spaces, indeed linear (possibly discontinuous) operators between different topological vector spaces. To me, this stuff looks like a sharpening of the classical theory in that it removes some of its "defects" (including the one addressed above); and also as an abstraction since, on the one hand, it puts standard notions from the operator setting (such as the ones of eigenvalue, continuous spectrum, and approximate spectrum) on a somehow different ground (so foreshadowing possible further generalisations) and, on the other, it recovers familiar results (such as the closeness, the boundedness, and the compactness of the spectrum as well as the fact that all the points in the boundary are approximate eigenvalues) as a special case (while revealing some (unexpected?) dependencies).
Then, I'd really like to know what has been already done in these respects before putting everything in an appropriate form, submitting the results to any reasonable journal, and being answered, possibly after several months, that I've just reinvented the wheel. It would be really frustrating... Yes, of course, I've already asked here around (in Paris), but I've got nothing more concrete than contrasting (i.e. negative and positive) feelings. Also, I was suggested to contact a few people, and I've done it with one of them some weeks ago (sending him something like a ten page summary after checking his availability by an earlier email), but I've got no reply so far and indeed he seems to have disappeared... Then, I resolved to come here and consult the "oracle of MO" (as I enjoy calling this astounding place). :-)
Thank you in advance for any help.
http://mathoverflow.net/revisions/39180/list
## Return to Answer
Nicholas Higham gives an algorithm for estimating the Hölder $p$-norm of a matrix, with the estimate guaranteed to be within a factor of $n^{1-1/p}$ of $\|\mathbf{A}\|_p$; maybe you can somehow adapt this approach to your needs?
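For reference, here is a rough numpy sketch of the kind of power-type iteration such estimators are based on. This is my own simplified version, not Higham's algorithm verbatim; it only produces a lower bound on $\|\mathbf{A}\|_p$, and the exponent $p$, iteration count, and test matrix below are arbitrary choices.

```python
# Simplified power-type iteration for estimating the Holder p-norm of a matrix.
# Returns a lower bound on ||A||_p (a local maximum of ||Ax||_p over ||x||_p = 1).
import numpy as np

def dual_map(v, p):
    """Unit p-norm vector maximizing <x, v>, via the conjugate exponent q."""
    q = p / (p - 1.0)
    w = np.sign(v) * np.abs(v) ** (q - 1.0)
    return w / np.linalg.norm(w, p)

def pnorm_estimate(A, p=3.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=A.shape[1])
    x /= np.linalg.norm(x, p)
    for _ in range(iters):
        y = A @ x
        z = A.T @ (np.sign(y) * np.abs(y) ** (p - 1.0))  # gradient direction of ||Ax||_p^p
        x = dual_map(z, p)                               # best unit-p-norm ascent direction
    return np.linalg.norm(A @ x, p)

A = np.array([[1.0, 2.0], [3.0, -4.0]])
print(pnorm_estimate(A, p=3.0))
```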
http://mathoverflow.net/questions/41057/categories-first-or-categories-last-in-basic-algebra/41075
## Categories First Or Categories Last In Basic Algebra?
Recently, I was reminded in Melvyn Nathanson's first-year graduate algebra course of a debate I've been having both within myself and externally for some time. For better or worse, the course in which most students first learn and use extensive category theory and arrow chasing is an advanced algebra course, either an honors undergraduate abstract algebra course or a first-year graduate algebra course.
(Ok, that's not entirely true; you can also first learn about it in topology. But it's really in algebra where it has the biggest impact. Topology can be done entirely without it, whereas algebra without it becomes rather cumbersome beyond the basics. Also, homological methods become pretty much impossible.)
I've never really been comfortable with category theory. It's always seemed to me that giving up elements and dealing with objects that are knowable only up to isomorphism was a huge leap of faith that modern mathematics should be beyond. But I've tried to be a good mathematician and learn it for my own good. The fact that I'm deeply interested in algebra makes this more of a priority.
My question is whether or not category theory really should be introduced from the start in a serious algebra course. Professor Nathanson remarked in lecture that he recently saw his old friend Hyman Bass, and they discussed the teaching of algebra with and without category theory. Both had learned algebra in their student days from van der Waerden (which, incidentally, is the main reference for the course and still his favorite algebra book despite being hopelessly outdated). Melvyn gave a categorical construction of the Fundamental Isomorphism Theorem of Abelian Groups after Bass gave a classical statement of the result. Bass said, "It's the same result expressed in 2 different languages. It really doesn't matter if we use the high-tech approach or not." Would algebraists of later generations agree with Professor Bass?
A number of my fellow graduate students think set theory should be abandoned altogether and thrown in the same bin with Newtonian infinitesimals (nonstandard constructions notwithstanding), and that all students should learn category theory before learning anything else. Personally, I think category theory would be utterly mysterious to students without a considerable stock of examples to draw from. Categories and universal properties are vast generalizations not only of huge numbers of concrete examples, but of certain theorems as well. As such, I believe it's much better learned after gaining a considerable facility with mathematics, after at the very least undergraduate courses in topology and algebra.
Paolo Aluffi's wonderful book Algebra: Chapter 0 is usually used by the opposition as a counterexample, as it uses category theory heavily from the beginning. However, I point out that Aluffi himself clearly states that it is intended as a course for advanced students, and he strongly advises some background in algebra first. I like the book immensely, but I agree.
What does the board think of this question? Categories early or categories late in student training?
-
Your claim about topology makes me curious: How would a topologist prove the nonexistence of a deformation retract from a Möbius band to its boundary, without using a functor? – S. Carnahan♦ Oct 4 2010 at 20:49
Scott, here is a proof: step 1 if there were a retraction there would be a smooth one by standard relative smooth approximation theorems. step 2: take a regular value of a smooth retraction, it's pre-image is a smooth proper compact 1-dimensional submanifold of the Moebius band, and by design its boundary consists of a single point. This contradicts the classification of compact 1-manifolds, that they have an even number of boundary points. – Ryan Budney Oct 4 2010 at 21:02
@Ryan, that is a very nice argument, thanks. – S. Carnahan♦ Oct 4 2010 at 21:28
Despite some nice answers, I feel this question is subjective and argumentative. – Daniel Moskovich Oct 6 2010 at 12:20
@Andrew- nothing here is personal. I'm criticizing this question, not the person who asked it; and moderators would or would not back the opinion that this question is subjective and argumentative, not me. – Daniel Moskovich Oct 7 2010 at 12:30
## 11 Answers
There's a big difference between teaching category theory and merely paying attention to the things that category theory clarifies (like the difference between direct products and direct sums). In my opinion, the latter should be done early (and late, and at all other times); there's no reason for intentional sloppiness. On the other hand, teaching category theory is better done after the students have been exposed to some of the relevant examples.
Many years ago, I taught a course on category theory, and in my opinion it was a failure. Many of the students had not previously seen the examples I wanted to use. One of the beauties of category theory is that it unifies many different-looking concepts; for example, left adjoints of forgetful functors include free groups, universal enveloping algebras, Stone-Cech compactifications, abelianizations of groups, and many more. But the beauty is hard to convey when, in addition to explaining the notion of adjoint, one must also explain each (or at least several) of these special cases. So I think category theory should be taught at the stage where students have already seen enough special cases of its concepts to appreciate their unification. Without the examples, category theory can look terribly unmotivated and unintuitive.
-
I agree. One can also sneak in a little bit of categorical notation into other courses without having to develop any abstract category theory (e.g. note that homomorphisms can also be called group morphisms, linear transformations can be called linear morphisms, etc.; we already use "isomorphism" in this manner in non-category-theoretic contexts, so why not "morphism"?). One can certainly plant the idea that there is something in common to group theory, topology, linear algebra, algebraic geometry, etc. well before one gets to see the abstract formalism that encodes this commonality. – Terry Tao Oct 4 2010 at 23:47
I agree very much with the comments above. I just want add that I think it is nothing about categories in particular but applies equally well at every level of abstraction. The question is how much of the general theory or language of all objects of type $X$ should be introduced when students have seen only a small number of examples. The discussion above is when $X$='category' and the students have only seen vector spaces and groups, say. The situation when $X$='ring' occurs much earlier in ones education. It is important to draw attention to facts like (continued) – James Borger Oct 5 2010 at 13:00
...the commutativity of multiplication of numbers, that the product of two negatives is a positive, that zero times anything is zero, and so on. And it is also probably good to give them names to attach to the concepts, even if you never really use them in a serious way. But it usually the collection of all objects of type $X$ is a much bigger leap in abstraction and will usually be lost on the students. (continued) – James Borger Oct 5 2010 at 13:07
On the other hand, I think it is good for the teacher to be comfortable at one higher level of abstraction. So they should be experts on the collection of all objects of type $X$, but they should also be comfortable with examples of whatever such a collection is an example of. So if you're teaching groups, say, you should understand the category of all groups of course, but you should also be comfortable with categories. This makes it much clearer which concepts are need to be emphasized, which is essentially what Andreas Blass said above. – James Borger Oct 5 2010 at 13:14
Note that by this principle, you should not teach a class on category theory unless you're comfortable with 2-categories and know many natural examples. – James Borger Oct 5 2010 at 13:15
In my opinion, category theory is to mathematics what garlic is to cooking. It is a widespread ingredient that adds a very important flavor. But, usually, it should be minced and mixed in, and used with restraint.
(So, my answer to your question of early vs late is, a little of both.)
-
I have been known to eat raw garlic, and this has not won me many friends. – Steven Gubkin Oct 4 2010 at 22:10
(I mean this literally) – Steven Gubkin Oct 4 2010 at 22:10
Gauss called number theory the queen of mathematics. Calling category theory the garlic of mathematics somehow doesn't have the same level of glamour... – Terry Tao Oct 4 2010 at 23:38
@All Why this fixation on category theory as if it is a Hob Goblin lurking to pounce on the unwary algebraist or topologicst? (I know many of you do not feel that for yourself but there is a reluctance to admit it.:-) with notable exceptions of course.) Why not use categorical arguments lightly when they make the proofs clearer in your opinion and avoid them when they require too much machinery. When teaching a course on knots and surfaces, so partially in topology but with some combinatorial group theory there as well, I talked about the product topology using the idea of a product. – Tim Porter Oct 5 2010 at 6:46
@Harry : Is what I'm seeing actually true? Is Harry Gindi complaining that some part of mathematics is too abstract and boring? First BCnrd made an unnecessary technical assumption in another thread, and now this. Have I been magically teleported to Htrae or something? – Andy Putman Oct 5 2010 at 18:33
I wasn't going to weigh in on this as I think that this is very definitely "subjective and argumentative" (particularly the latter), and when I previously spoke up in favour of category theory in undergraduate education, it sparked a few comments and I was reminded of why I like the fact that discussion is suppressed on MO. But given that one side of the argument is already here, and the other is not so well represented, I'm going to answer.
Let me start by declaring: "I am not a category theorist". I am a differential topologist. Foundational questions leave me cold, size issues just don't bother me. I'll accept any axiomatic framework if someone wants me to (I'm a fully-paid-up member of the "Axiom of Choice" party). To enter Greg's culinary world for a moment, such things are a bit like Norwegian cheese. I can see that to the right person, it's delicious. But I'm not that person.
To continue the analogy, category theory isn't an ingredient that can be added for Extra Flavour, but which not everyone likes. Category theory is like cooking with freshly harvested, organic ingredients as opposed to dull, insipid, shrink-wrapped stuff from the vast conglomerate supermarket. Just making one ingredient organic doesn't have much effect on the flavour of the whole dish, but changing the whole lot does.
But to the matter in hand: undergraduates and category theory. I believe that category theory is an excellent way to understand and express mathematical concepts. I find in my own work that, time and time again, when I express my ideas using categorical language then it makes them clearer both to me and to others. Believing this, as I do, why on earth would I want to deprive my students of the same benefits?
So I teach my students category theory. I don't necessarily tell them that I'm teaching them category theory, any more than I tell them that I'm teaching them logic, or how to write proofs, or even the basics of English grammar! But I use the insights and expressions of category theory because I think it makes it easier for the students to learn "other" mathematics.
In particular, in my current course, I am trying to teach my students the following things:
1. To focus on processes rather than things. Call them "morphisms" and "objects" and that's category theory! I don't tell them to do this because that's what category theory tells us to do, I tell them to do this because that's what the Real World(TM) tells us to do: mathematics (I tell them) is about modelling the real world, and the basic thing that one wants to model is a process.
2. To transfer knowledge from a known space to an unknown space. Here we have the extension of the mathematical idea of "function over form". That is, a thing is not defined by what it is (object) but what it does (what category it is in). But we can take this one step further and say that it's not just what it itself does that matters, but how it relates to the things around it (what morphisms are there from it to other objects in the category?). In particular, if I have an unknown vector space $V$ (unknown in the sense that I don't know much about it rather than I don't know how to define it), I gain a lot of knowledge if I can find an isomorphism $V \cong \mathbb{R}^n$ because I already know a lot about $\mathbb{R}^n$.
In a recent colloquium, I made this point (rather strongly) by saying that category theory is ubuntu mathematics: "I am what I am because of who we all are."
3. To be able to change one's point of view to suit the problem at hand. Say, "to look for what is preserved under isomorphism" and you've got one of the central tenets of category theory: that isomorphic objects should not be distinguished. This is a natural extension of the above. Once we know that an isomorphism $V \cong \mathbb{R}^n$ is a Good Thing, the next question is whether or not there's a best isomorphism (for the problem at hand).
To sum up, category theory isn't a "bit on the side" of mathematics to be taught as an optional extra at the higher levels, alongside homological algebra, Lie theory, and whatever-it-is-those-statisticians-down-the-hall-do. It can (and should) pervade all of our teaching because it makes the learning easier. Teaching it as a separate subject in itself isn't necessarily a bad thing, but it is if that is the only way in which it is taught, and by itself it can seem very dry, abstract, and disconnected. But then teaching it by itself is a bit like teaching logic without ever once mentioning Raymond Smullyan. Indeed, the comparison with logic is apt: we expect our students to pick up the basics of logic as they go along. Not many students actually study logic as a subject by itself, but if someone asked "Should we use logic when teaching undergraduates?" it would be closed instantly as "Not a real question.".
Thanks for a fresh and flavorful answer, Andrew! And one that helps restore dignity to those who choose category theory as a focus of research. – Todd Trimble Oct 7 2010 at 11:10
=( I don't like the implicit approval of so-called "organic" food. Not to be pedantic, but all food is organic in the technical sense, so that rubs me the wrong way immediately. Apart from that, genetically modified food has saved millions of lives from starvation and is a triumph of modern science. The fact that one can patent genetically modified plants and animals (in the US) is a travesty, but that doesn't make the food itself any worse. – Harry Gindi Oct 18 2010 at 4:54
In my defence, I wasn't the one who introduced the cooking analogy! – Andrew Stacey Oct 18 2010 at 7:37
+1 (and also +1 to Todd Trimble): your observation that category theory lowers the cognitive load is spot on. But what I liked most was what you have not mentioned: all the sloppy talk of category theory as a "language" or "toolbox" which at best is a tautology and at worst is nothing but a prejudice that denies category theory its relative autonomy and makes it the servant maid of whatever one ranks best in mathematics (independently of how objective such value judgements can be). And I better stop before I go off on a rant. – G. Rodrigues Oct 19 2010 at 11:52
Categorical ideas should be certainly introduced early, as they are quite useful. On the other hand, as Andreas and Terry say, studying category theory at the beginning of your mathematical education is a waste of time, and could be a turnoff, like all unmotivated formalism.
On the other hand, the formal language of category theory should be learned, and used, at some point. I have seen several interesting papers written by very good mathematicians, containing theorems with statements like "It is the same to give a regular thingamabob over $X$, and a von Neuman whatchamacallit with a seminormal connection over $X'$". What these statements usually mean is that there is an equivalence of the category of regular thingamabobs over $X$ and that of von Neuman whatchamacallits with a seminormal connection over $X'$; but they could also simply mean that there is a bijection of isomorphism classes, and to know which is true you have to study the proof. This means, I suppose, that the authors, who must have seen the language of category theory at a certain point, have not internalized it, and don't have a feeling for when its use is appropriate.
In my opinion, the concept of equivalence of categories is a real turning point. Up to that point, one can probably get away without it (for example, universal properties, like that for the tensor product, are easily explained without the formal language); this is harder to do with equivalences. On the other hand, you don't see many examples of equivalences in the beginning of your mathematical studies. Maybe the first one is that between coverings of a space, with appropriate hypothesis, and sets on which its fundamental group acts. Stating the connection between these two classes of objects as an equivalence of categories clarifies things enormously; I wish someone had explained it to me when I was a student, instead of just telling me that there is a bijection between isomorphism classes of connected covers and conjugacy classes of subgroups, and other statements in this spirit, all descending very immediately from the "real" theorem, which is the existence of the equivalence.
I think that there are certain notions that are needed to "set the stage" for category theory. I don't think students are going to understand category theory unless they've already seen some of the following examples:
• Galois Theory
• Covering spaces and pi_1
• The universal property of tensor product
• The difference between direct sum and direct product
If people have a very strong undergraduate background, then it seems to me that in graduate algebra they're ready to start seeing some category theory. On the other hand, I feel (contrary to what AndrewL said) that category theory really finds its home in topology more than algebra, so I think an algebra course should introduce category-theoretic language but that it should not be the main emphasis of the course (since people who haven't taken algebraic topology probably won't really grok the category theory anyway).
Introductory algebra courses tend to systematically confuse products with coproducts, and more generally, confuse targets with domains. This causes persistent confusion in students (what is the difference between the two kinds of infinite product? why are there two kinds? how do I decide which to use when?).
Even if no category theory is going to be introduced, this terrible confusion should be eliminated.
In a related note, I regard it as an extremely important point, that should be celebrated, that for abelian groups, or vector spaces, etc., sums and products agree. In my experience, it is glossed over: "the sum and the product are the same, so don't worry about it; and I'll use the two notations interchangeably."
And another thing: the free product of two groups -- really?
"... for abelian groups, or vector spaces, etc., sums and products agree" - but only with a finite number of summands/factors, which I think you were alluding to in your first paragraph. – Mark Meckes Oct 5 2010 at 13:48
Well, what can you expect in a rant? You're right. – Jeff Strom Oct 8 2010 at 1:52
All through undergrad math there is insufficient emphasis on morphisms. A fundamental point is that we describe things in two ways: as an image, by parameterization, or as an inverse image, by giving conditions that must be satisfied. The former is good at producing elements, the latter at telling whether an ostensible candidate is really in the object. Much of basic linear algebra is made clearer by emphasizing this. Products and coproducts fit neatly here, being characterized by maps in or maps out. So I agree that we should introduce categorical thinking early, language later. – Robert Bruner Dec 2 2010 at 14:53
The answers of Greg Kuperberg and Andreas Blass are great. Let me just add something:
Most of my mathematical ideas are formulated in terms of category theory. However, this does not mean that the ideas do not exist without category theory. Rather category theory is a universal language and toolbox to gather and transport these ideas.
I'm a little upset when in all these basic algebra lectures category theory is presented as something exotic and very complicated (and is postponed to later chapters). In particular when students then think that the compatibility of, say, localizations with localizations has to be checked with horrible double fractions. Perhaps this is the best moment to show them that writing down the hom functor and using the Yoneda lemma simplifies the proof a lot. Then they probably appreciate this method also in other situations and start to think in morphisms rather than elements. Also, they can separate the trivial assertions from the interesting ones ;). For example it should be obvious after such a course that localizations commute with direct sums, but not necessarily with direct products (wrong arrow direction!).
But this step into category theory can only be done when there is a specific motivation. For example it is a good idea to introduce functors in a course on algebraic topology after having checked that singular homology is, what is then called, a generalized homology theory. In basic algebra, categories should be introduced only when category-theoretic theorems can be established without much effort and applied to concrete problems. For example you should not use Freyd's representability theorem in order to prove the existence of tensor products, when you just want to gather some basic facts about them. But it may be a good idea to show that the equation $Hom(M \otimes N,P) = Hom(M,Hom(N,P))$ formally implies that $M \otimes -$ is right exact, and then to introduce adjoint functors and other examples. Quite a few algebra texts prove the right exactness by a tedious calculation which is only a special instance of a composition of Yoneda and adjunction.
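To spell out the argument alluded to here (a standard sketch, stated for modules over a commutative ring and an arbitrary test module $P$): a sequence $A \to B \to C \to 0$ is exact if and only if

$$0 \to Hom(C,P) \to Hom(B,P) \to Hom(A,P)$$

is exact for every $P$. Now apply this criterion to $M \otimes A \to M \otimes B \to M \otimes C \to 0$: by the adjunction $Hom(M \otimes X, P) \cong Hom(X, Hom(M,P))$, the sequence to be checked becomes

$$0 \to Hom(C,Hom(M,P)) \to Hom(B,Hom(M,P)) \to Hom(A,Hom(M,P)),$$

which is exact because $A \to B \to C \to 0$ is. Hence $M \otimes -$ is right exact.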
Although I don't like repetition in basic algebra courses, it makes it possible to develop some meta-theorems by analogy for yourself (which may be made precise when you study advanced category theory later). For example, when you have understood the construction of free abelian groups, free algebras, and other free constructions, you might also guess that there must be free groups, including how they are constructed and which universal property characterizes them.
This is an extension to part of my comment in another answer. I learnt group theory and enjoyed the initial parts, but then we had Sylow theory and it looked mysterious and somewhat frightening, as no real motivation in terms of earlier material was given. If students do not see the need for a piece of mathematics (internally within the subject or for 'applications'), it becomes mysterious. Category theory is not that different from group theory in this, so don't make a fuss about it. When the material in an algebra course is simplified by doing it categorically, use a bit of categorical language; don't make a fuss about it (I agree with the other answers on this).
In the linked courses on Knots and Surfaces and on combinatorial group theory, we used categorical properties of products as motivation for the product topology, used van Kampen theorem situations as motivation for 'free products with amalgamation', and pointed out the similarity of the observed pushout property with that of union; so the vKT is naturally about the preservation of some kind of mathematical structure, namely a pushout, by some kind of construction. Oh dear, the concept of functor is just asking to be introduced, and so on.
That was at undergraduate year 3 and MMath year 4 level (in UK terms). Then at Masters level, more examples led to a need to formalise things so as to make it clearer what was going on. I think it worked and was enjoyed and understood by students.
In answer to Andrew's question, I think it really depends on the student. I began learning category theory in my late teens because of the sorts of questions I was asking myself which, I discovered, could be answered through category theory. It just really "clicked" for me, and provided me with tools that I use every day in my mathematical life.
Sometimes I would find an application of category theory to an area I didn't know too much about, but because the application seemed pretty cool, I would be motivated to learn more about the area. My own feeling is that the category theory helped me learn mathematics more quickly than I otherwise might, in part because it helped provide broad conceptual frameworks in which to fit newly acquired knowledge. So in that respect, I am happy that I began learning category theory early on.
But category theory doesn't come naturally to a lot of people (some of the people who have answered or commented above, including some very distinguished mathematicians, don't strike me as having a whole lot of feeling for the subject). That's fine. If category theory does not come naturally to you, then simply learn category theory on a need-to-know basis, and try not to make up your mind what the subject is about (e.g., "doing away with elements") in advance. My advice is: don't force yourself to learn it unless you have a need to know (and my guess is you probably will, in tandem with other subjects).
Over time, while studying something that you've really latched on to, you may find some categorical reasoning coming into play, and marvel at how clean and efficient it is, and how it clears away conceptual clutter. Then you may be in a proper frame of mind to make a deeper study of what makes some aspect of category theory "tick", with some heightened appreciation of what category theory is good for, or how it can serve your ends.
My friend Joey Hirsch, a very talented PhD student at the CUNY Graduate Center, would wholeheartedly agree with you, Todd. But most of us aren't as talented or insightful as beginners as you and Joey were. Most of us think from collections of concrete examples and build to the "Aha!" moment of generalization. That's why it doesn't come naturally to most of us, I think. But if that works for you guys, by all means! – Andrew L Mar 13 2011 at 0:08
Yes. I think what I was trying to say is that I don't advocate starting out with category theory for the young (i.e., those at the beginning of a career in mathematics), unless their taste or personal needs drive them to it. I don't see this as having to do with talent so much as personal taste; there are some very fine mathematicians who never seem to feel a great need for category theory (Erdos comes to mind). At the same time, I counsel keeping an open mind about it, and don't get psyched out by its reputation for being super-abstract. Ultimately, the goal is to simplify [cont'd] – Todd Trimble Mar 13 2011 at 1:11
mathematics, along the lines of Peter Freyd's dictum "Perhaps the purpose of categorical algebra is to show that which is trivial is trivially trivial." (For more discussion about the meaning of this quip, see for example ncatlab.org/nlab/show/nPOV#RoleOfnPOV – Todd Trimble Mar 13 2011 at 1:12
Everyone will agree with me that there are many levels of abstraction at which category theory can be introduced. It makes no sense to start undergraduate math courses with a formal approach to category theory; I don't think anyone would argue the opposite. It makes very little sense either to postpone it to higher algebra classes in late undergrad at best or, as happens in many places, to graduate studies.
Category theory is above all a formalism, a way to frame our understanding. It has become a more and more prevalent facet of my thinking that good notation does half the work of solving a problem, just as formulating a question properly does. Why then not start hinting towards such formulations early? While teaching low-level courses, I always make a point of ensuring that most of my class knows what a function is. While doing so I draw little blobs representing sets and big arrows representing a function. Then as I talk I keep presenting functions as processes, or relations. Together with a fun example (I usually use a "friends and beer" variation) it helps them structure the knowledge they are presented with. It makes it easier to have them understand that one cannot just "add" functions by writing a plus sign in between, since functions are (visually) not the same entity as numbers. It is, I believe, our duty to frame things as early as possible in a way that structures knowledge in the student's mind. To make another reference to food, it is better to have widespread, malleable foundations of rudimentary cooking than an elite of highly qualified cooks (of course it's best to have both).
Moreover I would like to point out that this formalism is urgently needed in other areas of science. As a physicist by training I cannot overstate the importance of category theory in areas of science other than mathematics. And even after an MS in theoretical and mathematical physics, "functors" and "categories" were frightening words reserved for Jedi Masters. I am but saddened by that state of things. Almost everything in physics deals with processes and change, and yet there seems to be very little push to spread the categorical lingua. Relativity screams category theory (equivalent views of the world in different frames, yet not identical), and the standard model's soul is categorical (groups, tensor structures of representations, etc.). Why should we wait so long to plant these seeds? Why not let them germinate throughout the student's curriculum?
In conclusion, while it is dysfunctional to force-feed students categories (why teach an intensive Japanese course to someone who just wants to make sushi?), it is criminal to keep it, at its core, our little secret. I believe we need to join forces to move very basic categorical formalism into bigger circles, sans tambours ni trompettes (without fanfare), and without bells and whistles.
I tend to agree. And I think Awodey's Category Theory will be just what the doctor ordered for undergraduate physics students, Dany. And it will make the language a whole lot less scary for them without dumbing it down one bit. – Andrew L Oct 18 2010 at 10:39
A little preliminary: I'm an undergraduate student and I started to teach myself category theory at the beginning of my second year of university, mostly because of my interest in logic and foundations. Since then I have benefited from this, because knowing some category theory helped me understand many of the concepts I've learned more quickly than I would have without it; it also moved me to study branches of maths like algebraic topology and algebraic geometry. Now I would distinguish between "category theory" and "the language and instruments of category theory": while the first is an abstract and rather specialized branch of math, and so not adequate for undergraduate courses, the second is a very useful conceptual tool that should be taught to undergraduate students as well. What I mean here is that (the language of) category theory shouldn't be taught in a dedicated course but rather during the regular courses.
I believe that some basic concepts like those of category and functor could be taught from the first courses in algebra, because these concepts are no more abstract than those of group and group homomorphism, ring and ring homomorphism, or vector space and linear map, which are taught in first-year courses. Categories and functors can easily be shown to a young audience as, respectively, graphs with structure (i.e. operations) and graph morphisms preserving that structure. Many examples of these concepts can be given that undergraduates can understand: the category of a graph's points and paths, the category of sets and functions, the category of groups and group homomorphisms, vector spaces and linear maps, but also monoids, groups and posets viewed as categories. It is particularly useful to give these last examples in first courses because they help in familiarizing with abstraction before the mind is corrupted by the concrete (I remember that after having done some basic algebra I had a lot of difficulty understanding why monoids should be categories with one object). Another good set of examples of categories that are quite easy to understand (and, in my personal opinion, cool) are those of objects (which can be molecules, automaton states, dynamical system states, ...) and processes transforming one object into another. These examples are pretty cool because they open the way to applications of category theory to other sciences, besides giving really concrete examples of categories.
Obviously categorical concepts should be introduced in a very gradual way: for instance, it is useless to teach natural transformations before having seen homotopies and group representations (or equivalently group actions), and the same applies to other more complex concepts: everything needs to be introduced at the right time.
Many would object that concepts should probably be presented only at the moment they are needed. To those people I would say that they are probably right; on the other hand, no one ever introduced abstract concepts like those of groups and rings to me with any motivation, and the same applies to topological spaces: the motivation for introducing these objects came late, when results were introduced that gave us a more abstract framework in which certain kinds of problems tend to simplify and generalize.
A last motivation for teaching category theory early is that many times seeing things from an abstract point of view helps when we want to transfer constructions from the categories where they arise naturally to other categories (the example of homotopies of complexes in homological algebra comes to mind), and it also shows the deep unity of lots of mathematical objects that may at first seem unrelated.
Before ending I would also like to add some motivation for not waiting until advanced courses to teach category theory: if you do wait, it usually happens that these categorical concepts are presented so quickly that it becomes difficult to get familiar with them, and that does not allow one to deeply understand the meaning and usefulness of categorical results.
One last comment: I don't know why, but every time I think of those people who consider category theory too abstract and useless, they remind me of what Kronecker said about Cantor, and this makes me smile.
You're making a lot of good points here, Giorgio. I'm a believer in teaching at least some of the underlying ideas of category theory in undergraduate courses. Not formally, and not even by scary names like functor, but just some of the ideas as they come up in their simplest and natural forms. I myself first learned about categories through the introduction of Spanier's Algebraic Topology (of all places!), and very quickly discovered conceptual answers to a lot of questions I had about things like "quotient", "product", and "dual". It wasn't long before I was reading CWM from cover to cover! – Todd Trimble Nov 8 2011 at 15:45
Thanks for having found the time to read my answer; I'd like to comment about formalism. I don't think math students would be so frightened by the formal definitions of category theory as long as one explains where these formal definitions come from: for this we can consider my representation of a category as just a graph with structure. When students have a natural representation of an object it's easy for them to work with that object and understand it; the only problem is giving good representations. – Giorgio Mossa Nov 8 2011 at 16:24
http://stats.stackexchange.com/questions/34238/questions-about-specifying-linear-mixed-models-in-r-for-repeated-measures-data-w
# Questions about specifying linear mixed models in R for repeated measures data with additional nesting structure
## Data Structure
```r
> str(data)
'data.frame': 6138 obs. of 10 variables:
$ RT : int 484 391 422 516 563 531 406 500 516 578 ...
$ ASCORE : num 5.1 4 3.8 2.6 2.7 6.5 4.9 2.9 2.6 7.2 ...
$ HSCORE : num 6 2.1 7.9 1 6.9 8.9 8.2 3.6 1.7 8.6 ...
$ MVMNT : Factor w/ 2 levels "_Withd","Appr": 2 2 1 1 2 1 2 1 1 2 ...
$ STIM : Factor w/ 123 levels " arti"," cele",..: 16 23 82 42 105 4 93 9 34 25 ...
$ DRUG : Factor w/ 2 levels "Inactive","Pharm": 1 1 1 1 1 1 1 1 1 1 ...
$ FULLNSS: Factor w/ 2 levels "Fasted","Fed": 2 2 2 2 2 2 2 2 2 2 ...
$ PATIENT: Factor w/ 25 levels "Subj01","Subj02",..: 1 1 1 1 1 1 1 1 1 1 ...
$ SESSION: Factor w/ 4 levels "Sess1","Sess2",..: 1 1 1 1 1 1 1 1 1 1 ...
$ TRIAL : Factor w/ 6138 levels "T0001","T0002",..: 1 2 3 4 5 6 7 8 9 10 ...
```
## Full Model Candidate
```r
model.loaded.fit <- lmer(RT ~ ASCORE*HSCORE*MVMNT*DRUG*FULLNSS
+ (1|PATIENT) + (1|SESSION), data, REML = TRUE)
```
• Reaction times from trials are clustered within sessions, which in turn are clustered within patients
• Each trial can be characterized by two continuous covariates of ASCORE and HSCORE (ranging between 1-9) and by a movement response (withdraw or approach)
• Sessions are characterized by drug intake (placebo or active pharmacon) and by fullness (fasted or pre-fed)
## Modeling and R Syntax?
I'm trying to specify an appropriate full model with a loaded mean structure that can be used as a starting point in a top-down model selection strategy.
Specific issues:
• Is the syntax correctly specifying the clustering and random effects?
• Beyond syntax, is this model appropriate for the above within-subject design?
• Should the full model specify all interactions of fixed effects, or only the ones that I am really interested in?
• I have not included the STIM factor in the model, which characterizes the specific stimulus type used in a trial, but which I am not interested to estimate in any way - should I specify that as a random factor given it has 123 levels and very few data points per stimulus type?
If I cannot find advice here I really don't know whom I could ask. Maybe you know of any dedicated mixed-models forums, or even an expert willing to consult for a little money? – Cel Aug 14 '12 at 8:24
Hi @Cel, it looks like you've got ALL interactions in the model, including the 5-way, 4-way and 3-way interactions. I'm not sure about this case, but that will typically wildly overfit the data, which will make your results less generalizable. Backward selection (if you must use it) does not need to start with a completely saturated model - it should start with the largest model you find plausible. Can you reduce that at all? – Macro Aug 14 '12 at 14:05
@Macro Great to know, I'll include only the interactions that seem plausible then. Do you have suggestions regarding the other issues? If you do, maybe put it as an answer so I can accept it. – Cel Aug 14 '12 at 14:12
## 1 Answer
I will answer each of your queries in turn.
Is the syntax correctly specifying the clustering and random effects?
The model you've fit here is, in mathematical terms, the model
$$Y_{ijk} = {\bf X}_{ijk} {\boldsymbol \beta} + \eta_{i} + \theta_{ij} + \varepsilon_{ijk}$$
where
• $Y_{ijk}$ is the reaction time for observation $k$ during session $j$ on individual $i$.
• ${\bf X}_{ijk}$ is the predictor vector for observation $k$ during session $j$ on individual $i$ (in the model you've written up, this is comprised of all main effects and all interactions).
• $\eta_i$ is the person $i$ random effect that induces correlation between observations made on the same person. $\theta_{ij}$ is the random effect for individual $i$'s session $j$ and $\varepsilon_{ijk}$ is the leftover error term.
• ${\boldsymbol \beta}$ is the regression coefficient vector.
As noted on page 14-15 here this model is correct for specifying that sessions are nested within individuals, which is the case from your description.
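If the SESSION labels are reused across patients (the four shared levels Sess1-Sess4 suggest they are), it can be safer to make the nesting explicit in the formula rather than leave it to the labels. A minimal sketch with lme4 (the model name is just a placeholder):

```r
library(lme4)

# Explicit nesting: one random intercept per patient and one per
# patient-session combination, independent of how SESSION is labelled.
model.nested.fit <- lmer(
  RT ~ ASCORE * HSCORE * MVMNT * DRUG * FULLNSS +
    (1 | PATIENT) + (1 | PATIENT:SESSION),
  data = data, REML = TRUE
)

# The shorthand (1 | PATIENT/SESSION) expands to the same two terms.
```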
Beyond syntax, is this model appropriate for the above within-subject design?
I think this model is reasonable, as it does respect the nesting structure in the data and I do think that individual and session are reasonably envisioned as random effects, as this model asserts. You should look at the relationships between the predictors and the response with scatterplots, etc. to ensure that the linear predictor (${\bf X}_{ijk} {\boldsymbol \beta}$) is correctly specified. The other standard regression diagnostics should possibly be examined as well.
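For instance, a few standard checks with the fitted object from the question (illustrative only; which diagnostics matter most depends on the application):

```r
# Residuals vs fitted values: look for non-linearity and heteroscedasticity
plot(fitted(model.loaded.fit), resid(model.loaded.fit),
     xlab = "Fitted values", ylab = "Residuals")

# Normality of the residuals
qqnorm(resid(model.loaded.fit)); qqline(resid(model.loaded.fit))

# Rough check of the estimated random intercepts for patients
qqnorm(unlist(ranef(model.loaded.fit)$PATIENT))
```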
Should the full model specify all interactions of fixed effects, or only the ones that I am really interested in?
I think starting with such a heavily saturated model may not be a great idea, unless it makes sense substantively. As I said in a comment, this will tend to overfit your particular data set and may make your results less generalizable. Regarding model selection, if you do start with the completely saturated model and do backwards selection (which some people on this site, with good reason, object to) then you have to make sure to respect the hierarchy in the model. That is, if you eliminate a lower level interaction from the model, then you should also delete all higher level interactions involving that variable. For more discussion on that, see the linked thread.
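To make that concrete, a hypothetical reduced starting point might keep all the main effects but only the interactions with a substantive rationale (which interactions those are is your call, not something the data structure dictates):

```r
# Illustrative only: main effects plus one plausible two-way interaction,
# instead of the fully saturated five-way model.
model.reduced.fit <- lmer(
  RT ~ ASCORE + HSCORE + MVMNT + DRUG * FULLNSS +
    (1 | PATIENT) + (1 | SESSION),
  data = data, REML = TRUE
)
```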
I have not included the STIM factor in the model, which characterizes the specific stimulus type used in a trial, but which I am not interested to estimate in any way - should I specify that as a random factor given it has 123 levels and very few data points per stimulus type?
Admittedly not knowing anything about the application (so take this with a grain of salt), that sounds like a fixed effect, not a random effect. That is, the stimulus type sounds like a variable that would correspond to a fixed shift in the mean response, not something that would induce correlation between subjects who had the same stimulus type. But, the fact that it's a 123-level factor makes it cumbersome to enter into the model. I suppose I'd want to know how large of an effect you'd expect this to have. Regardless of the size of the effect, it will not induce bias in your slope estimates since this is a linear model, but leaving it out may make your standard errors larger than they would otherwise be.
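To see the two options side by side (purely illustrative; the abbreviated fixed-effects part is a stand-in for whatever mean structure you settle on):

```r
# STIM as a fixed effect: 122 additional coefficients
fit.stim.fixed  <- lmer(RT ~ ASCORE + DRUG * FULLNSS + STIM +
                          (1 | PATIENT) + (1 | SESSION),
                        data = data, REML = TRUE)

# STIM as a crossed random intercept: a single additional variance component
fit.stim.random <- lmer(RT ~ ASCORE + DRUG * FULLNSS +
                          (1 | PATIENT) + (1 | SESSION) + (1 | STIM),
                        data = data, REML = TRUE)
```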
Wow, thank you Macro, I wish I could give more points. – Cel Aug 14 '12 at 20:52
http://physics.aps.org/synopsis-for/print/10.1103/PhysRevB.82.014421
# Synopsis: Extreme sensitivity
#### Extreme sensitivity of a frustrated quantum magnet: $\mathrm{Cs_2CuCl_4}$
Oleg A. Starykh, Hosho Katsura, and Leon Balents
Published July 20, 2010
The challenge with modeling a magnetic material is deciding which interactions are most essential. This is the case with $\mathrm{Cs_2CuCl_4}$, a frustrated quantum magnet consisting of layers of triangularly arranged $S=1/2$ spins. The strongest exchange coupling is between spins along chains in a layer, while there is a weaker, frustrated bond between the chains. The coupling between layers is an order of magnitude smaller.
Most measurements, including inelastic neutron scattering measurements, are consistent with treating $\mathrm{Cs_2CuCl_4}$ as a system of weakly coupled chains. However, the low-temperature phase diagram of $\mathrm{Cs_2CuCl_4}$ is highly sensitive to the direction in which an external magnetic field is applied, meaning there are additional, though very small, anisotropic terms in the Hamiltonian.
Now, writing in Physical Review B, Oleg Starykh from the University of Utah and Hosho Katsura and Leon Balents from the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara present the hierarchy of emergent low-energy scales that control the complex ordering behavior of $\mathrm{Cs_2CuCl_4}$ in a wide range of magnetic fields oriented in different directions. Using bosonization, renormalization group, and chain mean field theory, they identify a zoo of phases that match well with experiment. In particular, their calculations show that when the magnetic field is lined up with the crystal's magnetic layers, spins are more correlated with their companions in neighboring triangular layers than with those within the layers, invalidating the naive picture of the system as a layered material. The key message here is that frustration and the quasi-1D character of the system dramatically amplify the effect of tiny terms in the Hamiltonian to control the ground state. – Sarma Kancharla
http://math.stackexchange.com/questions/180760/when-to-read-of-the-degree-of-a-variety-from-its-defining-polynomials
# When to read off the degree of a variety from its defining polynomials
The question concerns algebraic varieties.
I just read the question The degree of an algebraic curve in higher dimensions and the great answer by user M P. One of the things he says is that if a curve in $\mathbb{P}^n$ is given by $n-1$ equations (which is often not the case of course), then its degree is in fact the product of the degrees of the polynomials defining it.
I expect this not to hold if there are more than $n-1$ polynomials necessary to define the curve. Could someone tell me if this is true, and why?
I do expect it to hold for general varieties. If a variety in $\mathbb{P}^n$ is of codimension $r$, and also given by exactly $r$ polynomials, is its degree in fact the product of the degrees of the polynomials defining it? Could you tell me if this is the case, and more importantly: why?
By the way, degree as in the generic number of intersection points with a linear subspace of complementary dimension.
Thanks a lot in advance, Joachim
P.S. Georges E., I owe you one for your effort on my question on étale morphisms. I know this is not the place to say such things, but I'm doing it anyway.
## 2 Answers
Andrew's answer is correct, but here is a more geometric way to think about the same answer.
If a codimension $r$ variety is cut out by $r$ equations, then each equation genuinely cuts out a new dimension, i.e. $V(f_1,\ldots,f_{i+1})$ is a divisor in $V(f_1,\ldots,f_i)$ cut out by $f_{i+1} = 0$. Thus the intersection of $V(f_1,\ldots,f_i)$ and $V(f_{i+1})$ is a proper intersection, and so the degrees multiply.
We can assume that the degree of $V(f_1,\ldots,f_i)$ is equal to $d,$ the product of the degrees of $f_1, \ldots,f_i$, by induction. Geometrically, this means that a generic linear subspace $L$ of dimension $i$ meets $V(f_1,\ldots,f_i)$ in $d$ points.
To compute the degree of $V(f_1,\ldots,f_{i+1})$ we intersect with a generic linear subspace $L'$ of dimension $i+1$.
Suppose that $d'$ is the degree of $f_{i+1}$. Then you can deform the equation $f_{i+1} = 0$ to an equation of the form $l_1\ldots l_{d'}$, where each $l_j$ is a generically chosen linear equation. Thus $V(f_{i+1})$ can be deformed to the union of the hyperplanes $V(l_1)\cup \cdots \cup V(l_{d'})$, and so
$$V(f_1,\ldots,f_{i+1}) \cap L' = V(f_1,\ldots,f_i) \cap V(f_{i+1}) \cap L',$$ which can be deformed to $$V(f_1,\ldots,f_i) \cap \bigl(V(l_1)\cup \cdots \cup V(l_{d'})\bigr) \cap L' = \bigl(V(f_1,\ldots , f_i) \cap V(l_1) \cap L'\bigr) \cup \cdots \cup \bigl(V(f_1,\ldots,f_i)\cap V(l_{d'})\cap L'\bigr).$$ Now each $V(l_j)\cap L'$ is the intersection of a generic hyperplane and the generic $(i+1)$-dimensional linear subspace $L'$, and so is a generic linear subspace of dimension $i$. Thus each intersection $V(f_1,\ldots,f_i)\cap V(l_j)\cap L'$ consists of $d$ points, and so their union consists of $dd'$ points. QED
If $V$ is an $r$-codimensional variety cut out by more than $r$ equations, then we can write it in the form $V(f_1,\ldots,f_r, f_{r+1},\ldots,f_s)$ for some $s > r$, where $V(f_1,\ldots,f_r)$ is already of codimension $r$ (but not irreducible). The closed subscheme $V(f_1,\ldots,f_r)$ of $\mathbb P^n$ is then of degree equal to the product of the degrees of the $f_1,\ldots,f_r$, but (by assumption) the additional equations $f_{r+1}, \ldots, f_s$ do not cut down the dimension any further. Instead, they cut out some particular irreducible component of $V(f_1,\ldots,f_r)$. In particular, the intersections $V(f_1,\ldots,f_r) \cap V(f_j)$ for $j > r$ are not proper, and the degree of a non-proper intersection is not given as the product of the degrees.
So the basic phenomenon here is as follows: if $V$ is a reducible algebraic set of (equi)codimension $r$, a union of components $V_1,\ldots,V_m$, then $\deg V = \sum_i \deg V_i$, but there is no immediate relationship between the degrees of the additional polynomials needed to cut out the various $V_i$ and the degrees of those $V_i$.
A good example to think about is the case of a twisted cubic curve in $\mathbb P^3$ (with hom. coords. $X,Y,Z,W$). It is obtained by first intersecting the quadrics $V(X^2 - YW)$ and $V(XZ - Y^2)$. This intersection is a degree $4$ curve which is reducible; it is the union of a line $L$ (the line cut out by $X = Y = 0$) and the twisted cubic curve $C$. To cut out $C$ we have to impose the additional equation $X^3 - ZW^2 = 0$.
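As a quick sanity check on the degrees in this example (using the standard parametrization of the twisted cubic): $C$ is the image of $$[s:t] \longmapsto [X:Y:Z:W] = [s^2t : st^2 : t^3 : s^3],$$ and indeed $X^2 - YW = s^4t^2 - s^4t^2 = 0$ and $XZ - Y^2 = s^2t^4 - s^2t^4 = 0$, while on the line $L = V(X,Y)$ the cubic $X^3 - ZW^2 = -ZW^2$ does not vanish identically. So $V(X^2-YW,\,XZ-Y^2)$ has degree $2 \cdot 2 = 4$ and decomposes as $C \cup L$, consistent with $4 = 3 + 1$.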
Thanks! Both answers are great, I'd accept them both if possible. Two small questions, if that's ok. I quote: "Then you can deform the equation $f_{i+1}=0$ to an equation of the form $l_1\ldots l_{d′}$"; could you make this mathematically precise? Or is this just an intuitive argument? Second question: could you confirm that the same reasoning would give us that if $V,W$ are varieties of degree $d,e$, and suppose their intersection is in fact irreducible, then their intersection has degree $de$? You basically applied induction to the case where $W$ is a hypersurface, right? Thanks in advance! – Joachim Aug 9 '12 at 21:59
I just read that you only get a notification if I refer to you, so @Matt E please read my comment before this one. =) – Joachim Aug 10 '12 at 1:35
@Joachim: Dear Joachim, First, a small point: since this is my answer, I am notified about any comment here, whether or not it has "@Matt E". Second: regarding deformation, one can consider the one-parameter family $g_t:= t l_1 \cdots l_{d'} + (1-t) f_{i+1}$; for all but possibly finitely many values of $t$, the intersection of $V(g_t)$ with $V(f_1,\ldots,f_i)$ will be proper, and the degree will be constant in this family. (The technical concept I am using is linear equivalence, or algebraic equivalence if you prefer.) Also, yes, if $V$ and $W$ are of degree $d$ and $e$ and meet properly – Matt E Aug 10 '12 at 3:38
... i.e. their intersection is of the expected dimension, then its degree is equal to $d e$. This is a special part of general intersection theory. Regards, – Matt E Aug 10 '12 at 3:39
That's clear, thanks. As you noticed I know very little intersection theory; however, I am reading a book that uses its concepts every now and then. Reading your answer I realized it might be helpful to get to know more about the subject. Could you recommend a good reference that treats basic intersection theory of varieties? – Joachim Aug 10 '12 at 9:35
A variety $X$ in $\mathbb P^n_k$ of codimension $r$ generated by $r$ forms is called a complete intersection. Equivalently, $X$ is defined by a regular sequence $f_1,\ldots,f_r$ of (homogeneous) elements of $k[x_0,\ldots,x_n].$ It is well-known that the Koszul complex associated to a regular sequence $(f_1,\ldots,f_r)$ is a free resolution of the coordinate ring $k[x_0,\ldots,x_n]/(f_1,\ldots,f_r)$ and so enables us to compute the Hilbert polynomial of $X.$ Moreover, it can be shown that the Hilbert polynomial depends only on the degrees of $f_1,\ldots,f_r$, and its leading coefficient is $d_1\cdots d_r/(n-r)!,$ where $d_i$ is the degree of $f_i,$ so in this case the degree of $X$ is $d_1\cdots d_r.$
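To make this explicit (a standard computation from the Koszul resolution): the graded resolution yields the Hilbert series

$$H(t) = \frac{\prod_{i=1}^{r}(1-t^{d_i})}{(1-t)^{n+1}} = \frac{\prod_{i=1}^{r}\bigl(1+t+\cdots+t^{d_i-1}\bigr)}{(1-t)^{n-r+1}},$$

so the Hilbert polynomial has degree $n-r$ and leading coefficient $d_1\cdots d_r/(n-r)!$ (evaluate the numerator at $t=1$), recovering $\deg X = d_1\cdots d_r.$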
If $X$ is not a complete intersection, then the Koszul complex of defining equations will not be exact, and thus does not determine the Hilbert polynomial, though I'm not sure how badly behaved the degree can be in this case.
http://math.stackexchange.com/questions/228302/subgroups-of-mathbbz-2-times-mathbbz-12-of-order-6?answertab=oldest
# Subgroups of $\mathbb{Z}_2 \times \mathbb{Z}_{12}$ of order $6$
What are the subgroups of $\mathbb{Z}_2 \times \mathbb{Z}_{12}$ of order $6$? I know that there are three such subgroups, and two subgroups are clear to me, namely the subgroup isomorphic to $\mathbb{Z}_6$ and the subgroup isomorphic to $\mathbb{Z}_2\times \mathbb{Z}_3$. But I can't see the other one. Please help!
## 3 Answers
You can decompose $\mathbb{Z}_2\times \mathbb{Z}_{12}$ into $\mathbb{Z}_2\times \mathbb{Z}_4 \times \mathbb{Z}_3$. Clearly any group of order $6$ will contain $\mathbb{Z}_3$. Where else can you get your $2$ from?
Hint: I am guessing you've thought of getting the $2$ from $(1,0,0)$ and $(0,2,0)$. Why not both?
Alternatively, any abelian group of order $6$ is cyclic, so you need to find all the elements of your group of order $6$ and then just realize that two of them generate the same subgroup if and only if they are the same or additive inverses.
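To spell that out (a quick count, along the lines of the comments below): the order of $(a,b)$ in $\mathbb{Z}_2\times\mathbb{Z}_{12}$ is the $\operatorname{lcm}$ of the orders of $a$ and $b$, so the elements of order $6$ are exactly $(0,2), (0,10), (1,2), (1,10), (1,4), (1,8)$. They pair off into inverses, $(0,2)\leftrightarrow(0,10)$, $(1,2)\leftrightarrow(1,10)$, $(1,4)\leftrightarrow(1,8)$, so there are exactly three cyclic subgroups of order $6$: $\langle(0,2)\rangle$, $\langle(1,2)\rangle$, and $\langle(1,4)\rangle$.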
I think your "and then realize" needs more explanation if you want it not to be a rabbit from a hat. – MJD Nov 3 '12 at 19:13
The generators in an abelian group of order 6 come in pairs (other elements are identity or have order 2 or 3) - so the number of abelian subgroups of order 6 is half the number of elements of order 6. – Mark Bennet Nov 3 '12 at 21:05
@MJD The cyclic group of order 6 has only two generators, and they are additive inverses. I was trying not to spell things out, however. – Thomas Andrews Nov 4 '12 at 11:59
Not sure what you mean by "isomorphic to $\mathbb Z_6$". All abelian groups of order $6$ are isomorphic to $\mathbb Z_6$, and every subgroup here is abelian.
Here, the three subgroups of order $6$ are those generated by $(0,2)$, $(1,2)$, and $(1,4)$. Namely, $$A_1=\{(0,0),(0,2),(0,4),(0,6),(0,8),(0,10)\},$$ $$A_2=\{(0,0),(1,2),(0,4),(1,6),(0,8),(1,10)\}$$ $$A_3=\{(0,0),(1,4),(0,8),(1,0),(0,4),(1,8)\}$$
http://unapologetic.wordpress.com/2010/11/13/projecting-onto-invariants/?like=1&source=post_flair&_wpnonce=a082172354
# The Unapologetic Mathematician
## Projecting Onto Invariants
Given a $G$-module $V$, we can find the $G$-submodule $V^G$ of $G$-invariant vectors. It’s not just a submodule, but it’s a direct summand. Thus not only does it come with an inclusion mapping $V^G\to V$, but there must be a projection $V\to V^G$. That is, there’s a linear map that takes a vector and returns a $G$-invariant vector, and further if the vector is already $G$-invariant it is left alone.
Well, we know that it exists, but it turns out that we can describe it rather explicitly. The projection from vectors to $G$-invariant vectors is exactly the “averaging” procedure we ran into (with a slight variation) when proving Maschke’s theorem. We’ll describe it in general, and then come back to see how it applies in that case.
Given a vector $v\in V$, we define
$\displaystyle\bar{v}=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}gv$
This is clearly a linear operation. I say that $\bar{v}$ is invariant under the action of $G$. Indeed, given $g'\in G$ we calculate
$\displaystyle\begin{aligned}g'\bar{v}&=g'\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}gv\\&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}(g'g)v\\&=\bar{v}\end{aligned}$
since as $g$ ranges over $G$, so does $g'g$, albeit in a different order. Further, if $v$ is already $G$-invariant, then we find
$\displaystyle\begin{aligned}\bar{v}&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}gv\\&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}v\\&=v\end{aligned}$
so this is indeed the projection we’re looking for.
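For a tiny concrete example, let $G=\mathbb{Z}/2$ act on $V=\mathbb{C}^2$ by swapping the two coordinates. Then

$\displaystyle\bar{v}=\frac{1}{2}\left((v_1,v_2)+(v_2,v_1)\right)=\left(\frac{v_1+v_2}{2},\frac{v_1+v_2}{2}\right)$

which lies in the invariant subspace $V^G$ of vectors with equal coordinates, and which equals $v$ whenever $v$ already has equal coordinates, exactly as promised.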
Now, how does this apply to Maschke’s theorem? Well, given a $G$-module $V$, the collection of sesquilinear forms on the underlying space $V$ forms a vector space itself. Indeed, such forms correspond to Hermitian matrices, which form a vector space. Anyway, rather than write the usual angle-brackets, we will write one of these forms as a function $B:V\times V\to\mathbb{C}$.
Now I say that the space of forms carries an action from the right by $G$. Indeed, we can define
$\displaystyle\left[Bg\right](v_1,v_2)=B(gv_1,gv_2)$
It’s straightforward to verify that this is a right action by $G$. So, how do we “average” the form to get a $G$-invariant form? We define
$\displaystyle\bar{B}(v,w)=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}B(gv,gw)$
which — other than the factor of $\frac{1}{\lvert G\rvert}$ — is exactly how we came up with a $G$-invariant form in the proof of Maschke’s theorem!
http://gravityandlevity.wordpress.com/2010/12/07/feynmans-ratchet-and-the-perpetual-motion-gambling-scheme/
## A blog about the big ideas in physics, plus a few other things
Can you spot a perpetual motion machine when you see one?
In physics, that question is equivalent to “can you spot a scam when you see one?”. That’s because a perpetual motion machine is, by definition, a fraud. It is a device that claims to generate useful work in a way that violates one of the most basic laws of physics: the laws of thermodynamics. The laws of thermodynamics are extremely fundamental to physics; they belong to a set of five or so ideas that can really be called “laws”, upon which the rest of physics is built.
So if you (portrayed below by Lisa Simpson) submit an idea or invention to the physics community (portrayed by Homer Simpson) that violates one of the laws of thermodynamics, you’re opening yourself up to a world of ridicule.
If someone tells you “what you’re proposing is a perpetual motion machine” (they’ll say perpetuum mobile if they’re trying to sound snooty), they might as well be saying “you couldn’t tell a Lagrangian from a lawnmower”. It’s a pretty strong rebuke.
In my experience, though, most physics students have a false sense of confidence in their own ability to spot a perpetual motion machine. They think that such a whimsical contraption will have an obvious, glaring flaw that’s easy to notice because it will violate energy conservation. “Oh, you forgot to take into account friction,” they’ll say, and then they’ll give you a short lecture on the First Law of thermodynamics. “Energy is neither created nor destroyed,” they’ll say.
The truth, however, is that most perpetual motion machines that you are likely to encounter do not violate energy conservation. Rather, the tricky and persistent scientific “scams” violate the much more nebulous Second Law of Thermodynamics, which says (in one of its formulations):
It is impossible for a device to receive heat from a single reservoir and do a net amount of work.
It is much easier to be fooled by proposals which violate this Second Law, which ultimately has its roots in probability rather than in the deterministic notions of energy conservation. In my life I have been fooled on two noteworthy occasions by seemingly good ideas that violate the Second Law of Thermodynamics. One idea was for a hypothetical machine to generate energy from thin air (molecules). The other was a sure-fire gambling method. In this post I’ll discuss both of these fraudulent schemes and why they fail, and I’ll try to explain why the Second Law of Thermodynamics can be stated like this:
It is impossible to profit, in the long run, from a truly random process.
The remainder of this post is organized thusly: First, I’ll introduce you to Feynman’s ratchet, a fairly popular thought experiment that seemingly yields a perpetual motion machine. I won’t tell you why it fails, though, until later. In the second section I’ll introduce you to an idea that I once thought could make me a rich gambler and I’ll explain why it doesn’t work. Finally I’ll come back to Feynman’s ratchet and explain why it also must fail for a very similar reason.
Feynman’s Ratchet
Imagine that you manage to construct the following device. You take a very small, very light-weight metal rod and attach some thin, paddle-like fins to one end. Let’s say that the rod is held in place by some low-friction bearing which allows it to rotate on its axis. If the rod/fins are sufficiently light-weight, then when they are exposed to randomly-moving air molecules some of these molecules can hit the fins and cause the rod to rotate in one direction or the other. You, the inventor, are hoping to harness some of this rotation in a useful way, but you need the rod to rotate consistently in one direction before you can do anything with it. So you attach the other end of the rod to a ratchet mechanism: a saw-toothed gear that interlocks with a spring-loaded lever (called a pawl). Like this:
The ratchet, according to your design, will allow the rod to rotate easily in one direction (counterclockwise) but will not let it rotate in the other direction (clockwise).
So there you have it. A simple perpetual motion machine. As long as the surrounding air molecules continue to move randomly, the ratchet should continue to spin (perhaps sporadically) in the counterclockwise direction, driven by occasional collisions with high-energy air molecules. You can even get useful work out of the ratchet if you want, for example by winding up a rope that lifts a small mass or by using the rod to drive a tiny electrical generator.
This clever thought experiment is generally known as “Feynman’s Ratchet”. It was popularized by Richard Feynman in his Lectures on Physics, although the original explanation belongs to Smoluchowski (of diffusion law fame) in 1912. I first heard of it as a riddle passed around by undergraduate students.
It’s not immediately obvious that such a machine should be impossible. It certainly doesn’t violate energy conservation, nor does it rely on any “zero friction” assumptions. Feynman’s ratchet gradually uses up the energy of the randomly-moving air molecules around it (cooling the air as it gains energy through collisions), but so long as the earth is heated by the sun it should continue to rotate and, seemingly, provide useful work. It seemed to me, as an undergraduate, that this was a clever little device for converting solar energy to useful work.
But, by decree of thermodynamics, Feynman’s ratchet cannot work as a heat engine. It plainly violates the Second Law, which says that useful work can only be obtained by the flow of energy from high to low temperature. This device purports to get energy from a single temperature reservoir: that of the air around it.
Where does it go wrong?
If you’re encountering this riddle for the first time, you can try and figure it out for yourself before I tell you the answer below. But it may help you to first consider another bogus scheme, which I stumbled upon as a high school student and thought for sure could make someone a fortune.
The perpetual motion gambling scheme
It was during high school that my nerdy friends and I first discovered the joys of computer programming. It seemed to me then (and still seems now) a remarkable form of instant gratification: if you want to see what happens in a particular hypothetical situation, you just ask the computer to work it out for you and you get to avoid a lot of tedious and questionable theorizing. Of course, the marvelousness of the computer can quickly lead to the programmer developing an over-reliance on its powers, and from there it’s easy to fall into a kind of intellectual laziness that gets you into all kinds of (scientific) trouble. It’s probably this computer-born laziness that first allowed me to be fooled by the “perpetual motion gambling scheme”.
Back in 11th grade, the programming platform of choice for my friends and me was the TI-83 graphing calculator. Our setting of choice was the back of physics class. On one particular day, I was playing a simple blackjack program that my friend had made when I discovered that I could make money every single time I played. What's more, I could make an arbitrarily large amount of money, apparently only by judiciously deciding how much to bet at each hand. I only learned much later in life that I had stumbled across a system called the "martingale strategy". And only very recently did I realize that hoping to profit from the martingale strategy amounts to a perpetual motion machine, and is in violation of the Second Law.
If you’re unfamiliar with the martingale strategy, it goes as follows. Consider the simplest possible gambling game (you can easily generalize to other games, like blackjack): you place a bet and then flip a coin. If the coin comes up tails, then you lose all the money you bet. If the coin comes up heads, then the money you bet is doubled and given back to you. It’s a completely fair game which, on average, should give you zero net profit. The martingale strategy is to place an initial bet (say, \$1), and then double your bet each time you lose. In this way a victory at any given coin toss will completely compensate for all previous losses and give you a net profit of \$1. In flowchart form, it looks like this:
Notice that there’s no exit to this flow chart except at “Congratulations”. You can’t lose!
Of course, it’s possible that you, the bettor, only have a finite amount of money to bet, which would imply another ignominious exit to this flow chart corresponding to “you have completely run out of money”. (This was impossible in my friend’s TI-83 blackjack program, which allowed you to go into arbitrarily large amounts of debt). But the finiteness of a person’s funds didn’t seem like an insurmountable problem to me.
Here’s how the strategy played out in my high school student imagination. Come to the gambling table with some unthinkably huge amount of money: say, $2^{10} = 1,024$ dollars. Now follow the martingale system until you reach a profit. The only way the system could fail is in the extremely unlikely event that the coin comes up tails ten consecutive times. The probability of that happening is only $(1/2)^{10} = 0.097 \%$, so, I reasoned, it can be ignored. Once you’ve followed the chart and won your \$1, start over by resetting your bet to \$1. Repeat the system ad nauseum until you’ve made all the money you want. Go home rich and happy.
And, of course, the strategy is very flexible. If you’re richer than my “unthinkable” thousandaire and you’re not content with a 1-in-1000 chance of losing, then you can start by coming to the table with $2^{15} = 32,768$ dollars, which would imply a tiny $0.003 \%$ chance of failure. Or if you want to make money faster (with slightly higher risk), then at each coin toss you could bet (total amount of money lost) + \$10 instead of + \$1. What could go wrong?
What could go wrong, of course, is the Second Law of thermodynamics. It says (in my formulation) “you cannot profit from a random process.” Long-time readers of this blog (thanks!) may notice that the martingale system sounds suspiciously similar to Matt Ridley‘s strategy for biasing the gender distribution: keep having children until you have a boy, and then stop. It didn’t work there for the same reason that it doesn’t work here: a truly random process cannot be used for directed motion.
And, actually, the martingale system isn’t too hard to pick apart once you stop being analytically lazy (as I was in high school) and actually weigh the different outcomes. Take the example where I come to the gaming table with $2^{10}$ dollars and follow the strategy from the flowchart above. Then 1023 out of every 1024 games my strategy will succeed, and I’ll receive as my prize \$1. However, once in every 1024 games the strategy will fail, and when it fails it will fail spectacularly: I’ll lose \$1023. So if I keep playing the game long enough, on the whole I will make zero profit.
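In symbols (my notation, not the post's):
$$\langle \text{profit per completed round}\rangle \;=\; \frac{1023}{1024}\,(+1) \;+\; \frac{1}{1024}\,(-1023) \;=\; 0 .$$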
Just to make the point visually, here is a simulated string of “martingale” rounds, showing one possible evolution of the gambler’s net profit over time.
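Here is a minimal Python sketch of the kind of simulation that produces such a profit-versus-time curve (my own reconstruction: the function name, the 10-loss cap mirroring the $2^{10}$ bankroll above, and the number of tosses are choices of mine, not the author's original code):

```python
import random

def martingale_profit(n_tosses, base_bet=1, max_doublings=10, seed=None):
    """Simulate the martingale strategy on a fair coin.

    The bet doubles after every loss; a streak of `max_doublings`
    consecutive losses (which costs 2**max_doublings - 1 in total,
    essentially the whole bankroll from the post) is simply absorbed
    and the bet resets to `base_bet`.  Returns the running net profit
    after every toss.
    """
    rng = random.Random(seed)
    profit, bet, streak = 0, base_bet, 0
    history = []
    for _ in range(n_tosses):
        if rng.random() < 0.5:                 # heads: win the current bet
            profit += bet
            bet, streak = base_bet, 0          # reset after a win
        else:                                  # tails: lose the current bet
            profit -= bet
            streak += 1
            if streak < max_doublings:
                bet *= 2                       # double up and try again
            else:
                bet, streak = base_bet, 0      # busted this round: start over
        history.append(profit)
    return history

if __name__ == "__main__":
    series = martingale_profit(20_000, seed=1)
    print("net profit after 20,000 tosses:", series[-1])
    print("worst drawdown along the way:  ", min(series))
```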
Note that at a given round, your profit is almost certainly increasing (positive slope), which is why the martingale strategy is so alluring. If you start from zero, then you will most likely earn some money in the short term. But given enough time, those big drops will hit you and you will find the strategy unprofitable.
Let me say this once more, explicitly, as a hint to those still thinking about Feynman's ratchet. You cannot get directed motion out of a random process. You can set up a system that makes a step in one direction (profit) more likely than a step in the other direction (loss), but it will always be accompanied by a change in the size of those steps so that on the whole you go nowhere.
Got it?
Feynman’s ratchet is explained after the jump
The downfall of Feynman’s ratchet
The problem with Feynman’s ratchet, as you’ve probably figured out by now, is that there is no such thing as a perfect ratchet mechanism. What I drew above was a spring-loaded lever that is supposed to prevent the gear from rotating backward. But in a thermal environment, where energy can be absorbed from randomly-moving air molecules, nothing is impossible. Things only become improbable due to the high energy they require.
So it must be possible for the gear to rotate backwards (clockwise). In this case, it requires a strong collision from some air molecules against the lever, so that the lever gets pushed up and past the tooth of the gear and the gear can slip backward. There is a corresponding small rate at which the gear skips backward by one tooth (so that the lever snaps into place in a new location).
Of course, this backwards rotation is much less probable than a small forward rotation. But consider that for the gear to rotate forward by one tooth, a whole bunch of small rotations must be chained together consecutively. The net rate of all of those small rotations coming together is also fairly small.
And, in fact, the Second Law guarantees that the rates of a forward rotation and a backward rotation are the same. It seems surprising that this should be the case, no matter how carefully the ratchet is designed and no matter what size/shape the various pieces are. But it is. In the Lectures on Physics, Feynman estimates the rates of these two processes and shows that they are, in fact, equal (Chapter 46).
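Schematically (my own back-of-the-envelope version of that estimate, not Feynman's full calculation): if lifting the pawl over one tooth costs an energy $\epsilon$, and $T_1$, $T_2$ are the temperatures on the fin side and on the pawl side respectively, then
$$\text{rate}_{\rm forward} \;\propto\; e^{-\epsilon/k_B T_1}, \qquad \text{rate}_{\rm backward} \;\propto\; e^{-\epsilon/k_B T_2},$$
so at a single temperature $T_1 = T_2$ (and with no load on the axle) the two rates coincide and the gear just rattles in place.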
Of course, if you really wanted to make the machine work you could cool down the air on the ratchet side or heat up the air on the fin side, like this:
But in this case, you’ve only managed to generate work in the same way as a common steam engine: by creating a temperature difference and then using some of the heat that flows from hot to cold. (Here you’ll need a heat pump to prevent the temperature $T_1$ from equilibrating with $T_2$ by conduction along the metal rod).
What did we learn?
And now, like a good episode of G. I. Joe, this post concludes with a recap of the morals to be taken from it. The first moral is the Second Law itself: it is impossible to extract directed motion from a random process (a single heat reservoir). Anyone who claims they can do so is either mistaken or a charlatan.
A perhaps equally important lesson, though, is that it is easy to be fooled when it comes to the laws of thermodynamics. In the last decade or two, for example, there was much controversy over the mechanism by which muscle fibers contracted, before someone realized that one of the leading proposals amounted to a perpetual motion machine.
So be aware. Because knowing is half the battle.
29 Comments
1. Erin Keenan
December 7, 2010 10:24 pm
Awesome post! I frickin’ love your blog
• gravityandlevity *
December 7, 2010 11:12 pm
Thanks, Erin. I wish the posts came more often, but they generally take me about 8 – 10 hours each to write so I tend to put them off. My “things to blog about” list is growing pretty long!
2. shane
December 8, 2010 8:59 pm
Fantastic stuff, again! A university and its students are going to get very lucky when you graduate (or after a postdoc, as necessary).
• gravityandlevity *
December 8, 2010 11:24 pm
Thanks shane. I don’t think I have the ego to make a fan page for myself, but your other suggestions are probably a good idea. I think I just successfully added a twitter/facebook “share” link. Let me know if it doesn’t work.
• shane
December 9, 2010 12:24 pm
Yep, it works. Nice. I look forward to your next installment, as always.
3. Albert Einstein
December 9, 2010 12:52 am
How does a windmill work then? Does it work under the same principle as the ratchet? The windmill only pumps water upward out of the ground, not down, and relies upon air movement. Do windmills only work where the wind mostly blows in one direction and not randomly? What if a fluid flows randomly in opposite directions? Can energy be extracted to perform useful work, such as pumping water upward? Your post needs further explanation.
• gravityandlevity *
December 9, 2010 9:35 am
A windmill works, ultimately, by exploiting temperature differences that produce gradients in air pressure. Hot temperatures create higher air pressure while cold temperatures produce low air pressure. Wind is the process of moving the thermal energy of air molecules from high to low temperature/pressure. A windmill exploits this motion (which is not random, as you suggested, but has an overall drift) in a way that is allowed by the Second Law.
4. antripathy
December 9, 2010 8:04 am
Well, the law of entropy says that temperatures and pressures, after doing work, decay and increase the entropy of the system and tend to make them uniform or equal at every point. Consider the pressures in the atmosphere and the oceans, which depend only on gravity, which will never allow the pressures to become equal at every point on a vertical line. How to explain?
• gravityandlevity *
December 9, 2010 10:42 am
Hi antripathy.
You have to be a little careful with how you're stating the second law. The second law says that the entropy of a closed system will increase over time until the system reaches equilibrium (i.e. the pressure and density will be equal everywhere). A box of gas molecules, for example, when isolated from the rest of the world will eventually equilibrate completely so that its pressure is uniform.
If you have a column of water interacting with a gravitational field, however, that doesn’t constitute a closed system. There is still some large mass outside the system pulling on the water molecules and causing them to arrange in a way that prefers higher pressure at lower altitudes.
A more correct principle for this system is that the chemical potential is uniform. This is the average free energy per water molecule: energy minus temperature times entropy.
5. Curt F.
December 9, 2010 12:19 pm
It is impossible to profit, in the long run, from a truly random process.
I’m not sure this is the best way to summarize your observations. Say I invent a game where we flip a fair coin, and every time it comes up heads, I give you \$5. It seems like you profit, in the long run, from a truly random process.
Also, what about this article: (http://www.telegraph.co.uk/finance/personalfinance/consumertips/8185280/Is-this-a-bet-you-cant-lose.html). It says that some people are finding surefire ways to profit from betting on horses because sometimes bookies give away free bets. Are we to conclude from this that horse racing isn’t random (over the long term)?
• gravityandlevity *
December 9, 2010 12:34 pm
Good point. I guess a better statement would be "It is impossible to profit, in the long run, from an unbiased random process." Here, "unbiased" means "zero expectation value." Your hypothetical game has an expectation value of \$2.50, so of course you will profit in the long run.
Coincidentally, this is why it is possible to make a profit from the stock market. There is a net upward drift in the total stock market value due to the world becoming more efficient at producing goods and services. Of course, there is the fact that that stock market isn’t “truly random”: you can make use of knowledge about the company to weigh the probabilities.
As for your article, it seems to me that the crux of the “sure fire” method is in taking advantage of incentives offered to first-time bettors. It looks like certain bookies have found it profitable to unbalance the odds for first-time users (so that users win, on average) in order to get people hooked on using their gambling service. I don’t know if the scheme reported in the article works, but if it does then it’s based on quickly jumping from one service to another and taking advantage of their “one time” offers. In that sense it’s a little bit like signing up for one of those “get 12 cd’s for 1 cent!” subscriptions and then canceling immediately. Again, the Second Law remains intact. : )
6. Albert Einstein
December 9, 2010 9:41 pm
Your restatement of the second law of thermodynamics as "impossible to profit, in the long run, from a truly random process" is incorrect. Blackjack is random and it is possible to profit in the long run by card-counting.
• Hugh
December 10, 2010 9:37 am
By counting cards you’re removing the randomness.
• Albert Einstein
December 13, 2010 12:01 am
You’re not removing the randomness. The cards are no more ordered than they were before simply because you made some predictions about the order in which they would turn up. Does making weather forecasts that turn out to be correct more often than not remove randomness from the weather?
I don’t buy his reply about windmills either. Windmills I have seen are made so that they pivot vertically to always face toward the wind. Thus, windmills are able to generate work from wind that blows from random directions.
• Albert Einstein
December 13, 2010 12:09 am
What if am in a sail boat and I want to cross a lake where the wind blows randomly? When the wind is blowing in the right direction I put up my sail. When the wind isn’t favorable, I reef my sail and anchor. Eventually, I will be able to cross the lake by exploiting the wind even though it blows in random directions.
• gravityandlevity *
December 13, 2010 10:35 am
I like your sailboat question a lot. It’s a tricky one!
My guess at a solution is this:
First let me imagine that the sailboat and the lake are in complete isolation (i.e. not sitting in the middle of a big, externally-imposed air temperature/pressure gradient), so that the wind truly is “blowing randomly”. It does seem possible that you could cross the lake by (very quickly) raising and lowering the sail at the appropriate moments.
But then the problem is exactly like the problem of Maxwell’s demon ( http://en.wikipedia.org/wiki/Maxwell%27s_demon ), which says (in this case) that you might be able to move your sailboat from one side to the other, but you are going to expend a lot of metabolic energy raising and lowering the sail. So by the time you’ve managed to cross the lake, you’ve spent enough energy that you’ve essentially moved the boat by the process of transferring energy from a high-temperature source (yourself) to a low temperature source (the surrounding air). It’s the same as if you had just paddled the boat, which doesn’t violate any laws of thermodynamics. While you used the random air molecules to get across the lake, they’re not ultimately the driving force behind your motion.
You might think that you could just concoct some automated (ratchet) system to raise and lower the sail by itself. But I can guarantee, by the second law, that this system will either require a fuel source or it will fail. The same way that your swiveling windmill example will fail unless it has some energy input (e.g. a gas-powered engine) or exploits external temperature gradients.
• Albert Einstein
December 13, 2010 7:33 pm
In your Martingale example, what about the other party to the bets, the casino or the person with the deep pockets that accepts every bet? They make money from the random process. Maybe your restatement should be "[i]t is impossible to profit, in the long run, from a truly random process UNLESS YOU HAVE FIGURED OUT A WAY TO PROFIT FROM THE RANDOM PROCESS."
• gravityandlevity *
December 13, 2010 8:24 pm
The casino actually doesn’t make money in this example. If the game is fair (zero expectation value), then everyone gains nothing on average.
7. Albert Einstein
December 14, 2010 3:31 am
Wrong. The casino wins because it has a much larger bank roll. Review the part about “you only have a finite amount of money to bet”.
• Josh
December 21, 2010 5:16 pm
I think that, in a “real” game in a casino, the casino actually makes money not so much because it has a bigger bank roll but because the game is biased in its favor. I’m afraid I’m not that much of a gambling expert but I am pretty sure that this is the case, in terms of odds versus payouts, for roulette, craps, etc. It is notably *not* the case for blackjack, because as you pointed out above, it is possible to skillfully count cards to make money versus the house when playing blackjack. In *that* case, I believe that the advantage you have is not that you have outwitted a random process, but that the house is required to play by a set of fixed rules (hit on 16, stay on 17, or whatever the actual requirement is), and is not allowed to *itself* count cards to follow an optimum strategy.
8. Steve
December 16, 2010 7:07 pm
"It is impossible to profit, in the long run, from a truly random process." Seems applicable to explain the faulty premises that underlay the trading practices of the "quant jocks" which led to the financial meltdown of 2008? They seemed to have not considered that 1 out of 1024 times their assumptions wouldn't work! Perhaps they were with you in the back of that physics class playing blackjack on that old TI-83? Curious if you think there's any merit to my conjecture? Love the blog!
9. January 18, 2011 8:56 am
Thing is — I can’t help thinking as the statistician that I am. Presumably all of this depends on the distribution of the random air movements. The reasoning seems to rely on a fat tail — i.e. a non-trivial probability of a large swing. Yet it seems equally possible that large swings are EXTREMELY unlikely. I find it hard to believe that there is no hidden assumption there.
• gravityandlevity *
January 18, 2011 2:45 pm
The kinetic energies of the air molecules follow the Boltzmann distribution: they are distributed according to $e^{-E/k_BT}$. So the “fatness” of the distribution depends only on the temperature, and the argument above works for all temperatures.
Really, though, all you need to know is that the rate at which a given thing happens depends only on how much energy is required to make it happen. In this case, the ratchet swings backward and forward at equal rates because it takes the same amount of energy to move over one of the teeth forward as to move over it backward (you have to lift the lever one tooth-height either way). The only difference is that the gear can rotate a little bit forward (which requires a small amount of energy) without clearing the edge of one tooth, and then the spring-loaded lever will push the gear back until the lever is at the bottom of the tooth again. On the other hand, if the lever happens to jump up and the gear rotates by only a tiny amount, then the lever will push the gear backwards until the lever rests at the bottom of the preceding tooth.
You should also remember that uniform temperature implies that everything in the environment follows the same Boltzmann distribution. So the air molecules and the atoms that make up the ratchet itself are all randomly kicking around. In this way all possible motions by all objects are being explored simultaneously, and any given motion occurs at a rate dependent only on how much energy it requires.
10. Filippo Inzaghi
February 8, 2011 3:50 am
Thanks for the impressive post. It clarified many issues and, as it is probably supposed to, suggests new ones…:-)
Here's my thought experiment that apparently violates the second law. Not a very original one, but still I cannot see where it breaks down.
Imagine you have a very thin and tiny whisker (what in scanning probe microscopy is usually called a cantilever). It is clamped at one end and free to move at the other. The system is at room temperature.
The whisker will vibrate at its resonance frequency around the equilibrium position just because it is at a finite temperature. Above the whisker you have some kind of transducing system: to simplify things, just imagine that the vibrating tip hits something and then transfers to it some of its mechanical energy.
This "hitting something" will damp the vibration a little, but the thermal bath is constantly providing thermal energy, so we will still be in a steady state. Maybe it's a different steady state than if the whisker were vibrating freely, but it's still a steady state.
Am I not getting mechanical energy out of one single thermal bath or, if you prefer, out of a truly random process?
11. January 4, 2013 11:54 pm
What if the saw-toothed gear and the ratchet are in a vacuum?
• aurelius
March 22, 2013 10:30 am
The vacuum wouldn't do so much, since the tiny pawl will anyhow bounce up and down due to thermal motion — even if it's not hit by particles. (So, the system will even rotate in the opposite direction if the temperature T2 is higher than T1…)
But how about this one: consider the ratchet being built high above the fins, so that because of gravity T2 becomes lower than T1. It should work then, wouldn't it? Anyhow, these temperature differences might be far too small to have a noteworthy effect…
Very nice blog btw, thanks very much!
http://www.eecs.mit.edu/news-events/calendar/events/toward-optimal-method-convex-optimization-using-inexact-first-order
# Toward an optimal method for convex optimization using an inexact first-order oracle
## Event Speaker:
François Glineur (Université Catholique de Louvain)
## Event Location:
32D-677 (Stata Center, LIDS seminar room)
## Event Date/Time:
Wednesday, November 28, 2012 - 4:00pm
URL: http://perso.uclouvain.be/francois.glineur/
Title: Toward an optimal method for convex optimization using an inexact first-order oracle
Abstract: Standard analysis of first-order methods assumes availability of exact first-order information. Namely, the oracle must provide at each given point the exact values of the function and its gradient. However, in many convex problems, including those obtained by smoothing techniques, the objective function and its gradient are computed by solving another auxiliary optimization problem. In practice, we are often only able to solve these subproblems approximately. Hence, in that context, numerical methods solving the outer problem are provided with inexact first-order information.
We present in the first part of this talk a specific class of inexact first-order oracle, namely the $(\delta,L)$-oracle, which can be viewed as a common extension of the classical notions of epsilon-subgradient and Lipschitz-continuous gradient. We show that such an oracle is naturally available in several situations involving inexact computations, including many standard techniques where an auxiliary problem is solved approximately, such as convex-concave saddle point problems, augmented Lagrangians, and Moreau-Yosida regularization.
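(For readers who have not met this notion, the inexactness condition is roughly of the following form (my paraphrase from the related literature, not a quotation from the talk): at a query point $y$ the oracle returns a pair $(f_{\delta,L}(y), g_{\delta,L}(y))$ such that, for all $x$,
$$0 \;\le\; f(x) - f_{\delta,L}(y) - \langle g_{\delta,L}(y),\, x-y\rangle \;\le\; \frac{L}{2}\|x-y\|^{2} + \delta ,$$
which reduces to the usual lower and upper bounds for an $L$-smooth convex function when $\delta = 0$.)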
We then study the behavior of classical first-order methods for smooth convex optimization when such an inexact oracle is used instead of the exact gradient. In particular, we show that the convergence of the classical gradient method is mostly unchanged: it is guaranteed to converge to a solution whose accuracy is comparable to that of the oracle. In contrast, the behaviour of the fast gradient method seriously deteriorates: it suffers from error accumulation and is no longer guaranteed to converge. Moreover, the best accuracy reachable by a fast method is much larger than that of the oracle: if we want a better accuracy, we have to use a (much slower) classical gradient method.
In the second part of this talk, we propose a way to remedy this unsatisfactory situation. We introduce a new method that runs the fast gradient method for the first $\theta$ steps, followed by a modified dual gradient method for the remaining steps. We show that, given an oracle accuracy $\delta$ and a target accuracy $\epsilon$ unattainable by the fast gradient method alone, this hybrid method requires a number of steps that is (much) smaller than the number required by the classical gradient method. We also show that the optimal switching point $\theta$ has a simple expression: it may be chosen as ${\delta}/{\epsilon}-2$, independently of the problem data.
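A self-contained way to experiment with the behaviour described above (a generic sketch I added, using a plain quadratic objective and a norm-bounded gradient perturbation rather than the talk's $(\delta,L)$ model; it is not the speaker's method or code):

```python
import numpy as np

def inexact_grad(A, x, delta, rng):
    """Exact gradient of f(x) = 0.5 * x^T A x plus a perturbation of norm delta."""
    noise = rng.standard_normal(x.shape)
    noise *= delta / np.linalg.norm(noise)
    return A @ x + noise

def gradient_method(A, x0, L, delta, n_iter, seed=0):
    """Classical gradient method with step 1/L and an inexact gradient."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(n_iter):
        x -= inexact_grad(A, x, delta, rng) / L
    return 0.5 * x @ A @ x          # objective value; the true minimum is 0

def fast_gradient_method(A, x0, L, delta, n_iter, seed=0):
    """Nesterov/FISTA-style accelerated method with the same inexact gradient."""
    rng = np.random.default_rng(seed)
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        x_new = y - inexact_grad(A, y, delta, rng) / L
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return 0.5 * x @ A @ x

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    n = 50
    M = rng.standard_normal((n, n))
    A = M @ M.T / n + np.eye(n)          # symmetric positive definite
    L = np.linalg.eigvalsh(A).max()      # Lipschitz constant of the gradient
    x0 = rng.standard_normal(n)
    for k in (100, 1000, 10000):
        print(k, gradient_method(A, x0, L, 1e-3, k),
                 fast_gradient_method(A, x0, L, 1e-3, k))
```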
http://math.stackexchange.com/questions/180575/how-many-small-cancellation-groups-are-there
# How many small cancellation groups are there?
It is known that there are uncountably many groups with two generators. But what about the restriction to small cancellation groups?
Are there countably or uncountably many small cancellation groups?
-
What's a small cancellation group? – Rasmus Aug 9 '12 at 9:38
– t.b. Aug 9 '12 at 10:12
Of course, there are only countably many finitely presented groups, though... – Steve D Aug 9 '12 at 10:16
@SteveD: Can all small cancellation groups necessarily be made finitely presented though? They aren't all hyperbolic (for example $C(4)-T(4)$ and $C(3)-T(6)$ are both flat), so finitely presented isn't a given... – user1729 Aug 9 '12 at 10:37
@SteveD: Lyndon and Schupp talk about "recursive" presentations in their book (every theorem is about a recursively presented group with $C^{\prime}(1/6)$ or whatever). I haven't looked why in detail, and is perhaps just because the small cancellation presentation might not be finite. – user1729 Aug 9 '12 at 17:00
## 2 Answers
Your question can be solved without appealing to any fancy result. But it is ambiguous, so I give 2 answers:
if you require the small cancelation presentation to be finite, then there are countably many such groups, just because there are only countably many finite group presentations. (And there are infinitely many small cancelation groups on 2 generators and 1 relator, see Lyndon-Schupp.)
if you allow infinite presentations, then start from a single small cancelation infinite presentation $\langle x,y|(R_n)\rangle$. For any subset $I$ of the integers, you get a group $G_I=\langle x,y|(R_i)_{i\in I}\rangle$. Thus you get continuum many small cancelation presentations.
Actually it also gives continuum many non-isomorphic groups: the argument is as follows: because of small cancelation, the $G_I$ are pairwise non-isomorphic as marked groups (a marked isomorphism is by definition required to map $(x,y)$ to $(x,y)$). And a given f.g. group has at most countably many pairs of generators. So each class of the equivalence relation "being isomorphic" among the $G_I$ is at most countable, and thus the $G_I$ include continuum many non-isomorphic groups.
-
Nice answer, thank you! – Seirios Aug 21 '12 at 9:59
I don't follow your indexing argument in the second-last paragraph - how does the indexing change the group? Also, you begin by assuming that you have an infinitely-presented small-cancellation presentation. Could you perhaps give an example of one? (This was discussed in the comments beneath the question, but never really resolved...) – user1729 Aug 22 '12 at 8:36
To obtain $G_I$, you extract a sequence of relations from $\langle x,y | (R_n)_{n \in \mathbb{N}} \rangle$, that is $G_I= \langle x,y | (R_n)_{n \in I} \rangle$ if $I \subset \mathbb{N}$. – Seirios Aug 22 '12 at 14:45
For $i \geq 1$, let $w_i$ denote the word $ab^{2^{i-1}}ab^{2^{i-1}+1}...ab^{2^i}$. For all $i,j \geq 1$, $w_i$ and $w_j^{-1}$ don't have any common subword; the largest common piece between two cyclic permutations of $w_i$ is $b^{2^i-2}ab^{2^i-1}$; the largest common piece between two cyclic permutations of $w_i$ and $w_j$ ($j >i$) is also $b^{2^i-1}ab^{2^i-1}$. On the other hand, $\ell g(w_i)= (2^{i-1}+1)(1+3.2^{i-1})$. So the family $\{w_i,i \geq 3\}$ satisfies the condition $C'(1/6)$. – Seirios Aug 22 '12 at 15:43
@Seirios If a group satisfies $C^{\prime}(1/6)$ it is hyperbolic (unless there is some subtlety I am missing...), and so is finitely presented. – user1729 Aug 23 '12 at 8:59
There is a famous result of Ol'shanskii which says that "almost all" finitely presented groups are small cancellation (indeed, are $C^{\prime}(1/6)$). However, I cannot access the paper I believe this result is in ("Almost every group is hyperbolic", Internat. J. Algebra Comput. $\mathbf{2}$ (1992), 1-17).
Instead, I will mention a paper I have on the desk in front of me. It is a paper of Ilya Kapovich and Paul Schupp, entitled "Genericity, the Arzhantseva-Ol'shanskii method and the isomorphism problem for one-relator groups". Before mentioning their result, I should give the notion of genericity that they use:
Let $N(m, n, t)$ be the number of all possible presentations of the form $\langle a_1, \ldots, a_m; r_1, \ldots, r_n\rangle$ where the $r_i$ are cyclically reduced non-trivial words from $F(a_1, \ldots, a_m)$ and where $|r_i|\leq t$ for $i=1, \ldots, n$. Let $N_P(m, n, t)$ be the number of presentations with these restrictions which define a group with property $P$. Then the property $P$ is $(m, n)$-generic if $$\displaystyle\lim_{t\rightarrow\infty}\frac{N_P(m, n, t)}{N(m, n, t)}=1.$$ If, moreover, there is $0\leq c=c(m, n)<1$ such that for all sufficiently large $t$ we have $$1-\frac{N_P(m, n, t)}{N(m, n, t)}\leq c^t$$ we say that $P$ is exponentially $(m, n)$-generic.
Their main theorem is as follows,
Theorem: Let $m>1$ and $n>0$ be integers. There exists an exponentially $(m, n)$-generic class $P_{m, n}$ of $m$-generator $n$-relator presentations $$\langle a_1, \ldots, a_m; r_1, \ldots, r_n\rangle$$ with the following properties:
• Every group defined by a presentation from $P_{m, n}$ is torsion-free, one-ended and word-hyperbolic (They actually prove $C^{\prime}(1/6)$). Moreover, every subgroup of $G$ generated by at most $m-1$ elements is free.
• There is an algorithm which, given an arbitrary $n$-tuple of cyclically reduced words $r_1, \ldots, r_n \in F(a_1, \ldots, a_m)$, decides in at most exponential time (in the sum of the lengths of the $r_i$) whether or not the presentation $\langle a_1, \ldots, a_m; r_1, \ldots, r_n\rangle$ belongs to $P_{m, n}$.
• For any presentation $\langle a_1, \ldots, a_m; r_1, \ldots, r_n\rangle$ from $P_{m, n}$, for the group $G=\langle a_1, \ldots, a_m; r_1, \ldots, r_n\rangle$ any $m$-tuple generating a non-free subgroup of $G$ is Nielsen-equivalent in $G$ to the $m$-tuple $(a_1, \ldots, a_m)$.
Their title talks about the isomorphism problem for one-relator groups because if you have a class of one-relator presentations, $S=\{\langle x_1, \ldots, x_n; R\rangle\}$, and every presentation from $S$ has a single Nielsen equivalence class, then two presentations $\langle a_1, \ldots, a_p; R_1\rangle$ and $\langle a_1, \ldots, a_q; R_2\rangle$ define isomorphic groups if and only if $p=q$ and there exists a Nielsen transformation of the $a_i$, $\phi$ say, which maps $R_1$ to $R_2$: $R_1(a_1\phi)=R_2$. You can look up the paper for the definition of Nielsen equivalence class though! I think I have typed enough now...
EDIT: Primer on small cancellation. (This was originally a comment, but it got a bit long...)
A small cancellation group is a group given in terms of generators and relators such that only a small amount of cancellation happens between the relators (so really we should talk about small cancellation presentations). If $R=\langle X; \mathbf{r}\rangle$ then let $\mathbf{r}^{\ast}$ consist of the set of all cyclic shifts and inverses and inverses of all cyclic shifts (etc!) of elements from $\mathbf{r}$. Note that if $\mathbf{r}$ is finite then so is $\mathbf{r}^{\ast}$. Then, if $R$ and $S$ are distinct elements of $\mathbf{r}^{\ast}$ such that $R=\hat{R}p$ and $S=\hat{S}p$ then $p$ is called a piece. A presentation has $C^{\prime}(1/\lambda)$ if every piece of every relator has length less than $1/\lambda$ times the length of that relator, and it has $C(m)$ if every relator is a product of no fewer than $m$ pieces. There is also the $T(m)$-condition, which I will explain in the next paragraph...
If you have ever heard of van Kampen diagrams, these are very much related. Draw a circle for each of your relators, and then partition the circle into the pieces of the relator. Essentially, small cancellation forces the tilings of these van Kampen diagrams to be "nice". For example, the $C(m)$ condition implies that every region of the diagram is at least an $m$-gon, and the $T(m)$-condition implies that every interior vertex is the join of at least $m$ regions. So, for example, if you have $C(4)-T(4)$ or $C(3)-T(6)$ then your diagram is flat. This yields, with a lot of work, a solution to the word and conjugacy problems for such groups.
However, the $C(4)-T(4)$ groups are kinda uninteresting - the $C^{\prime}(\lambda)$ one is the biggie. This is because if you have $C^{\prime}(1/6)$ then your diagrams are negatively curved! and so your group is hyperbolic. The result which gives you this is Greendlinger's Lemma. The standard reference for all this is the last chapter of Lyndon and Schupp's fine text "Combinatorial Group Theory" (which is not to be confused with Magnus, Karrass and Solitar's book of the same name, which is differently excellent!).
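A standard concrete example (my addition, not taken from the sources above): the genus-$2$ surface group presentation
$$\langle a,b,c,d \mid aba^{-1}b^{-1}cdc^{-1}d^{-1}\rangle$$
has a relator of length $8$, and every piece has length $1$ (no two-letter subword occurs twice among the cyclic shifts of the relator and of its inverse), so $1 < \tfrac{1}{6}\cdot 8$ and the presentation satisfies $C^{\prime}(1/6)$; Greendlinger's Lemma then gives a Dehn algorithm solving its word problem.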
Small cancellation theory has wound its way throughout combinatorial and geometric group theory. For example, Rips construction uses small cancellation theory, and a variation on this construction was used by Dani Wise and Inna Bumagin to construct finitely generated groups with a given outer automorphism group (as in, you give them a group $G$ and they can construct a finitely generated group $H$ such that $\operatorname{Out}(H)\cong G$).
Small cancellation can also be applied to other things, such as graphs (which is what the paper of Kapovich and Schupp does) as well as things called "cubical complexes". The cubical complex stuff is very powerful, and was engineered by Dani Wise. It has come to a head in the last couple of years, leading to a proof of the virtually Haken conjecture. Which is massive. Ian Agol then improved on this result. Again, this is massive. (the cubical complex stuff also proves that every one-relator group with torsion is residually finite, which has been open since 1967. So that is pretty big too!)
-
I haven't checked, but I believe Champetier's work on the Chabauty topology also includes some results on small cancellation groups. E.g. in . – t.b. Aug 9 '12 at 10:14
Also: Yann Ollivier has a few results on Small cancelation and random groups. e.g. here – t.b. Aug 9 '12 at 10:21
– Steve D Aug 9 '12 at 16:26
I spent a week at a workshop he was doing recently. Fascinating, but only locally understandable. However, that link should be very helpful - thanks! – user1729 Aug 9 '12 at 16:57
http://mathhelpforum.com/pre-calculus/152967-vector-problem-finding-angles-sides-triangle-3d-well-area.html
# Thread:
1. ## Vector Problem. Finding the angles and sides of a triangle in 3d as well as the area.
Problem:
Find all angles and the lengths of all the sides in the triangle with the corners P(1,2,3) Q(0,2,1) R(-2,4,-2). Hence compute its area.
Solution so far:
Vectors (u = u2 - u1)
PQ = (-1,0,-2), PR = (-3,2,-5)
QP = (1,0,2), QR = (-2,2,-3)
RQ = (2,-2,3), RP = (3,-2,5)
Lengths of vectors |u| = sqrt(u1^2 + u2^2 + u3^2)
hence,
|PQ| = sqrt(5)
|PR| = sqrt(38)
|QP| = sqrt(5)
|QR| = sqrt(17)
|RQ| = sqrt(17)
|RP| = sqrt(38)
Therefore the sides of the triangle are sqrt(5), sqrt(38) and sqrt(17)
Area of parallelogram = the magnitude of the cross product of PQ and PR
PQ cross PR = (4, 1 ,-2)
|PQ cross PR| = sqrt(21)
The area of the triangle is therefore 1/2 sqrt(21)
Angles between vectors
PQ and PR => cosine(a) = (PQ.PR)/(|PQ||PR|)
therefore a = cos^-1 (PQ.PR)/(|PQ||PR|)
Now if I calculate the angle between all the vectors then the sum of those angles should be 180 degrees, right...
Are there any problems with my workings?
Basically, am I doing anything wrong?
Feedback would be greatly appreciated.
2. You've done about twice as much work as what's needed.
Your calculation of the lengths look correct (though you didn't need to do it twice).
I would simply apply Heron's formula to find the area of the triangle, and use the Cosine rule to find all the angles...
3. But would I not require all those vector lengths in order to apply the law of cosines to calculate the angles?
That is angles between PQ and PR, QP and QR, RP and RQ
4. What I am saying is, the length of PR is the same as RP, the length of PQ is the same as QP, etc... Like I said, your calculations of the lengths look correct, but you only needed to do it three times, not six.
So you really only need to work out the three lengths of the triangle. That is enough to be able to apply Heron's formula to find the area, and the cosine rule three times to find the angles. But your logic is correct, as using dot products is another valid method (in fact, the dot product is proven using the cosine rule). And yes, the angles should add to $180^{\circ}$.
5. Cool, thankyou very much.
http://physics.stackexchange.com/questions/29868/why-does-magnetic-field-lines-go-from-plus-to-minus
# Why does magnetic field lines go from plus to minus?
My question is why the magnetic field lines go from plus to minus, if there are two charges. Is it true or isn't it true?
-
Hi user1098185, and welcome to Physics Stack Exchange! Could you expand on your question? What reason do you have to believe this is true or not true? Where else have you looked to try to find the answer, and what didn't you understand about what you found? etc. – David Zaslavsky♦ Jun 10 '12 at 15:45
– user1098185 Jun 10 '12 at 16:17
What are "plus" and "minus"? – Ignacio Vazquez-Abrams Jun 10 '12 at 16:26
Dipole. The magnetic charges. Don't know what to call it, but if you look at the picture you will see it. The round circle with + and - – user1098185 Jun 10 '12 at 16:48
I think if you look carefully at the picture you mentioned, it's showing the electric field, not magnetic. The $E$ label and arrow are the big hit ($E$ is for electric field). The $+$ and $-$ would then be electric charges. Also, $+$ and $-$ are not normally used for magnetic fields, which are always loops. The $N$ and $S$ designations, indicating loops flowing "out" or "in" from bunched-up bundles of such loops, are used instead. – Terry Bollinger Jun 10 '12 at 16:51
## 1 Answer
I think if you look carefully at the picture you mentioned, it's showing the electric field, not magnetic. The $E$ label and arrow are the big hit ($E$ is for electric field). The $+$ and $−$ would then be electric charges. Also, $+$ and $−$ are not normally used for magnetic fields, which are always loops. The $N$ and $S$ designations, indicating loops flowing "out" or "in" from bunched-up bundles of such loops, are used instead.
-
http://physics.stackexchange.com/questions/8359/if-energy-is-only-defined-up-to-a-constant-can-we-really-claim-that-ground-stat/8360
# If energy is only defined up to a constant, can we really claim that ground state energy has an absolute value?
Sorry if this is really naive, but we learned in Newtonian physics that the total energy of a system is only defined up to an additive constant, since you can always add a constant to the potential energy function without changing the equation of motion (since force is negative the gradient of the potential energy).
Then in Quantum Mechanics we showed how the ground state of a system with potential energy $V(x) = \frac{1}{2} m \omega^{2} x^{2}$ has an energy $E_{0}=\frac{1}{2} \hbar \omega$.
But if we add a constant to $V(x)$ won't that just shift the ground state energy by the same constant? So in what sense can we actually say that the ground state energy has an absolute value (as opposed to just a relative value)? Is there some way to measure it?
I ask this in part because I have heard that Dark Energy might be the ground state energy of quantum fields, but if this energy is only defined up to a constant, how can we say what it's value is?
-
– Jerry Schirmer Apr 10 '11 at 8:19
## 2 Answers
In non-relativistic and non-gravitational physics (both conditions have to be satisfied simultaneously for the following proposition to hold), energy is only defined up to an arbitrary additive shift. In this restricted context, the choice of the additive shift is an unphysical, unobservable convention.
Special relativity
However, in special relativity, energy is the time component of a 4-vector and it matters a great deal whether it is zero or nonzero. In particular, the energy of the empty Minkowski space has to be exactly zero because if it were nonzero, the state wouldn't be Lorentz-invariant: Lorentz transformations would transform the nonzero energy (time component of a vector) to a nonzero momentum (spatial components).
General relativity
In general relativity, the additive shifts to energy also matter because energy is a source of spacetime curvature. A uniform shift of energy density in the Universe is known as the cosmological constant, and it will curve the vacuum. So it's important to know what it is - and it is not just a convention. Also, in general relativity, the argument from the previous paragraph may be circumvented: dark energy, regardless of its value, preserves the Lorentz (or de Sitter or anti de Sitter, which are equally large) symmetry because the stress energy tensor is proportional to the metric tensor (because $p=-\rho$). However, as long as there is gravity, the additive shift matters.
In practice, we don't measure the zero-point energy by its gravitational effects, and the value of the cosmological constant remains largely mysterious. So I surely have a different, more observationally relevant answer.
Casimir energy, comparison of situations
The additive shifts to the energy are also important when one can compare the energy in two different situations. In particular, the Casimir effect may be measured. The Casimir force arises because in between two metallic plates, the electromagnetic field has to be organized to standing waves - because of the different boundary conditions. By summing the $\hbar\omega/2$ zero-point energies of these standing waves (each wavelength produces a harmonic oscillator), and by subtracting a similar "continuous" calculation in the absence of the metallic plates, one may discover that the total zero-point energy depends on the distance of the metallic plates if they're present, and experiments have verified that the corresponding force $dE/dr$ exists and numerically agrees with the prediction.
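For scale, the standard textbook result for two ideal parallel conducting plates of area $A$ at separation $d$ (quoted here for reference, not derived in this answer) is
$$\frac{E(d)}{A} \;=\; -\frac{\pi^{2}\hbar c}{720\, d^{3}}, \qquad \left|\frac{F}{A}\right| \;=\; \frac{\pi^{2}\hbar c}{240\, d^{4}},$$
an attractive pressure that grows rapidly as the plates approach.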
There are many other contexts in which the zero-point energy may be de facto measured. For example, there exist metastable states that behave like the harmonic oscillator for several low-lying states. The energy of these metastable states may be compared with the energy of the free particle at infinity, and the result is $V_{\rm local\,minimum}-V_{\infty}+\hbar\omega/2$. This is somewhat analogous to calculating the energies of the bound state in a Hydrogen atom - which may be measured (think about the ionization energy).
So yes, whenever one adds either special relativity or gravity or comparisons of configurations where the structure and frequencies of the harmonic oscillators differ, the additive shift becomes physical and measurable.
-
Thanks for this! I still have some nagging uncertainties, probably because I got so used to Newtonian mechanics, but hopefully those will clear up when I read your answer (and Marek's below) a few more times. – user3035 Apr 10 '11 at 20:58
It's quite correct that you can additively shift energy, even in quantum mechanics, and one can always make the ground state carry zero energy. Nevertheless, you can still measure some other energy even in the ground state: the kinetic energy. Because $T = {p^2 \over 2m}$ the expectation of kinetic energy in a given energy state is essentially its uncertainty in momentum (because the average value of momentum is zero). So even in the ground state of the oscillator there is some intrinsic movement present (of course only in this sense, the state is still stationary w.r.t. evolution), notwithstanding that it has zero energy.
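Concretely, for the harmonic-oscillator ground state one has $\langle p^2\rangle_0 = m\hbar\omega/2$, so (a standard computation, added here for illustration)
$$\langle T\rangle_0 \;=\; \frac{\langle p^2\rangle_0}{2m} \;=\; \frac{\hbar\omega}{4},$$
which is exactly half of the $\hbar\omega/2$ zero-point energy (the other half is $\langle V\rangle_0$, measured from the bottom of the well) and is completely unaffected by adding a constant to $V(x)$.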
From another point of view, consider your potential $V(x) = {1\over 2} m\omega^2 x^2 - E_0$: it will intersect the $x$-axis. But the ground state energy lies at $E=0$. So it's not found at the bottom of your potential (as one would expect for a ground state in classical physics). This relative position of $E_0$ and $V(x)$ is independent of any shifts in energy.
-
Right, @Marek, $E_0-V_{\rm min}$ is independent of conventions. However, one may always imagine that $V(x)$ was different by $\hbar\omega/2$ than we thought and we will produce the same energy levels. Of course, then we must ask whether $E_0$ and $V_{\rm min}$ may be measured independently. It depends what tools we have to measure them. You have to assume that we can - $V_{\rm min}$ may be measured by localizing the electron, except that then it has a huge kinetic energy. – Luboš Motl Apr 10 '11 at 8:39
Note that if you calculate the energy as $T(p)+V(x)$ out of measured values of $x$ and $p$, the uncertainty principle makes the error of the energy exceed $\hbar\omega/2$ or so, anyway. In this sense, $V_{\rm min}$ cannot be measured separately from $E_0$. – Luboš Motl Apr 10 '11 at 8:40
@Luboš: well, measuring these values is certainly a problem. Nevertheless, QM tells us that $E > V_{\rm min}$ for any bound state localized around $V_{\rm min}$, right? It's no problem that it's not verifiable. Theory can (and must) surely produce lots of results we can never measure. – Marek Apr 10 '11 at 9:02
Dear @Marek, right, physics contains many important non-measurable concepts. But one must distinguish whether a quantity is unmeasurable just "directly" - but it has physical consquences - from the case when it's unmeasurable in principle. In the latter case, it's literally unphysical. In non-relativistic non-gravitating quantum mechanics with a fixed potential etc., the additive energy shift is unmeasurable even in principle because it may be incorporated into a redefinition of $V$. This is not the case in SR; GR; or when we may change $V$ or $H$ and compare the energies. – Luboš Motl Apr 10 '11 at 10:15
The question whether "it's verifiable" was really the original question of the OP. If it were not verifiable even in principle - and in non-relativistic non-gravitating QM with a fixed potential, it's not - then the OP would be right that we can't really claim that there is a physical zero-point energy because it depends on the way how we write it. – Luboš Motl Apr 10 '11 at 10:17
http://mathoverflow.net/revisions/94269/list
# level 2,3 characters of affine su(2)
Does anyone know where I can find an explicit formula to compute the level 2 or level 3 characters of affine $su(2)$? I have found several sources that give a formula to compute the level 1 characters in terms of theta functions, but I cannot find anywhere in the literature nice formulas for level 2 or level 3. Thanks
http://math.stackexchange.com/questions/316708/prove-it-is-a-circle
# Prove it is a circle
So I have this question:
Let $Q = (4, 8)$, $R = (6, 8)$ and $P = (a, b)$. Let $\lambda\in\mathbb R$ with $0 < \lambda < 1$.
Consider $C =\{P: |QP| = \lambda|RP|\}$
Give an equation for $C$ and prove it is a circle.
I'm trying to figure out how to interpret the $\lambda$ symbol to come up with an expression for $C$, which I have to prove is a circle.
I did work out the distances $PQ$ and $PR$; the $\lambda$ symbol is just puzzling me.
I tried to fix $\lambda$ and divide the two distance equations, but it leads me nowhere.
Can anyone give me some directions?
Have you got expressions for the distances $QP$ and $RP$? What happens when you set the first to $\lambda$ times the second, and simplify? (Note that $\lambda$ is just a name for a constant strictly between $0$ and $1$. Fix it at some particular value while you do some working out, if you like) – AakashM Feb 28 at 9:04
Since I will have to show radius, center, etc should I not have to keep it as a symbol? – nightcoder Feb 28 at 9:39
Sure, once you're happy with the process, you can do it again with a non-fixed $\lambda$. It might be easier to work through the algebraic manipulation the first time with a fixed value, that's all. – AakashM Feb 28 at 9:41
@AakashM I will try to work it out that way and come back to post results. Thanks once again! – nightcoder Feb 28 at 9:58
By using a number for lambda (0.5), things turn out pretty; once I put the lambda back, I am stuck again. – nightcoder Feb 28 at 12:13
## 3 Answers
Maybe the problem is that you don't know what sorts of equations represent circles, so, as you are doing the algebraic manipulations, you have no target/destination in mind.
Any equation of the form
$$a(x^2 + y^2) + bx + cy +d = 0$$
is a circle (provided $a \neq 0$ and the equation has real solutions). If you don't know why, please ask. The key characteristics are that the $x^2$ and $y^2$ terms have the same coefficient, and there is no $xy$ term.
So, take the equation in Brian Scott's answer, and see if you can massage it into this form. If you can do that, then you will know you have a circle.
After you have the equation in the form above, it's easy to show that the center of the circle is at the point $(-\frac{b}{2a}, -\frac{c}{2a} )$. You can see this by "completing the squares" as Macavity said.
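If it helps to see the algebra done mechanically, here is a small symbolic sketch (assuming sympy is available; the points $Q=(4,8)$, $R=(6,8)$ are the ones from the question). It expands $|QP|^2 = \lambda^2|RP|^2$, confirms the $x^2$ and $y^2$ coefficients match with no $xy$ term, and reads off the centre.

```python
# Symbolic sketch (assumes sympy): expand |QP|^2 = lambda^2 |RP|^2 for Q=(4,8), R=(6,8)
# and read off the a(x^2 + y^2) + b x + c y + d = 0 form and its centre.
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
Q, R = (4, 8), (6, 8)

eq = sp.expand(((x - Q[0])**2 + (y - Q[1])**2)
               - lam**2 * ((x - R[0])**2 + (y - R[1])**2))

a = eq.coeff(x, 2)                           # coefficient of x^2
print(sp.simplify(a - eq.coeff(y, 2)))       # 0: x^2 and y^2 share the coefficient 1 - lam^2
b = eq.coeff(x, 1)
c = eq.coeff(y, 1)
centre = (sp.simplify(-b / (2 * a)), sp.simplify(-c / (2 * a)))
print(centre)                                # centre as a function of lam, valid for 0 < lam < 1
```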
Here it is: $(1-\lambda)(x^2+y^2) + x(2+4\lambda) + y(2+6\lambda) - 13\lambda + 2$. I grouped by your "$x^2$ and $y^2$ terms with same coefficient", which was very helpful. – nightcoder Feb 28 at 13:09
I think I'm getting there, I will update here. – nightcoder Feb 28 at 13:11
I am surprised you don't have $\lambda^2$ terms in your equation - please double check. Assuming what you have got is correct, all that remains is to divide throughout by the coefficient of $x^2$ (or $y^2$), group the terms involving $x, x^2$ together (and $y, y^2$ together), and complete the squares. The constant term outside the squares should be positive, and you have a circle. – Macavity Feb 28 at 13:29
sorry, I mistyped it. You are right, there are lots of them, since when I remove the square roots it turns the lambda into lambda squared. I got it Macavity. The radius of this beauty becomes a monster of an expression. You opened my eye with the "x^2 and y^2 with same coefficients". Thanks a lot for that. – nightcoder Feb 28 at 14:29
HINT: $\lambda$ is just some constant between $0$ and $1$. Consider the point $P=\langle x,y\rangle$: $$|QP|=\sqrt{(x-4)^2+(y-8)^2}\;,$$ and $$|RP|=\sqrt{(x-6)^2+(y-8)^2}\;,$$ so $C$ is the set of all points $P=\langle x,y\rangle$ such that
$$\sqrt{(x-4)^2+(y-8)^2}=\lambda\sqrt{(x-6)^2+(y-8)^2}\;.$$
Try manipulating this equation algebraically into a form that makes it clear that $C$ is a circle.
If you're comfortable with it you can first shift coordinates by $(4,8)$ ($Q'=(0,0),R'=(2,0),P=(x',y')$) to simplify calculations. Then you just have to solve $\sqrt{x'^2+y'^2}=\lambda\sqrt{(x'-2)^2+y'^2}$. – Michalis Feb 28 at 9:14
@Brian, thanks for your hint. I actually did just that, the problem is to make the part inside the square root look like a circle algebra. Is it what the problem is all about? – nightcoder Feb 28 at 9:38
@Michalis: Thanks a lot. I'm working with that format now to make it easier. ;) – nightcoder Feb 28 at 9:40
Using vectors may simplify the algebra. For instance, with $P, Q, R$ as position vectors, we have the locus of points in set $C$ to be:
$|P- Q|^2 = \lambda^2 |P-R|^2$.
Using dot products, expanding and simplifying, one gets:
$\big|P - \dfrac{Q - \mu R}{1-\mu}\big|^2 = \mu \dfrac {|Q-R|^2}{(1-\mu)^2}$
where $\mu = \lambda^2$
from which it is easy to recognise the circle form.
http://psychology.wikia.com/wiki/Number_theory?oldid=12022
Number theory
Number theory is the branch of pure mathematics concerned with the properties of numbers in general, and integers in particular, as well as the wider classes of problems that arise from their study. Number theory may be subdivided into several fields, according to the methods used and the type of questions investigated. (See the list of number theory topics).
The term "arithmetic" is also used to refer to number theory. This is a somewhat older term, which is no longer as popular as it once was. Number theory used to be called the higher arithmetic, but this too is dropping out of use. Nevertheless, it still shows up in the names of mathematical fields (arithmetic functions, arithmetic of elliptic curves, fundamental theorem of arithmetic). This sense of the term arithmetic should not be confused either with elementary arithmetic, or with the branch of logic which studies Peano arithmetic as a formal system. Mathematicians working in the field of number theory are called number theorists.
Fields
Elementary number theory
In elementary number theory, integers are studied without use of techniques from other mathematical fields. Questions of divisibility, use of the Euclidean algorithm to compute greatest common divisors, factorization of integers into prime numbers, investigation of perfect numbers and congruences belong here. Several important discoveries of this field are Fermat's little theorem, Euler's theorem, the Chinese remainder theorem and the law of quadratic reciprocity. The properties of multiplicative functions such as the Möbius function, Euler's φ function, integer sequences, factorials and Fibonacci numbers all also fall into this area.
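A few of the objects just mentioned can be computed directly; the short Python sketch below is illustrative only, with arbitrarily chosen small numbers.

```python
# Quick illustrations of the objects mentioned above (plain Python, arbitrary small numbers).
from math import gcd

print(gcd(252, 198))                 # Euclidean algorithm: prints 18

p, a = 97, 5
print(pow(a, p - 1, p))              # Fermat's little theorem: a^(p-1) mod p == 1 for prime p

def phi(n):
    """Euler's totient function via trial-division factorization (fine for small n)."""
    result, m, d = n, n, 2
    while d * d <= m:
        if m % d == 0:
            while m % d == 0:
                m //= d
            result -= result // d
        d += 1
    if m > 1:
        result -= result // m
    return result

print(phi(36))                       # 12 = 36 * (1 - 1/2) * (1 - 1/3)
```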
Many questions in number theory can be stated in elementary number theoretic terms, but they may require very deep consideration and new approaches outside the realm of elementary number theory. Examples include:
• The Goldbach conjecture concerning the expression of even numbers as sums of two primes.
• Catalan's conjecture (now Mihăilescu's theorem) regarding successive integer powers.
• The twin prime conjecture about the infinitude of prime pairs.
• The Collatz conjecture concerning a simple iteration.
• Fermat's last theorem (stated in 1637, but not proved until 1994) concerning the impossibility of finding nonzero integers x, y, z such that $x^n + y^n = z^n$ for some integer n greater than 2.
The theory of Diophantine equations has even been shown to be undecidable (see Hilbert's tenth problem).
Analytic number theory
Analytic number theory employs the machinery of calculus and complex analysis to tackle questions about integers. The prime number theorem and the related Riemann hypothesis are examples. Waring's problem (representing a given integer as a sum of squares, cubes etc.), the Twin Prime Conjecture (finding infinitely many prime pairs with difference 2) and Goldbach's conjecture (writing even integers as sums of two primes) are being attacked with analytical methods as well. Proofs of the transcendence of mathematical constants, such as π or e, are also classified as analytic number theory. While statements about transcendental numbers may seem to be removed from the study of integers, they really study the possible values of polynomials with integer coefficients evaluated at, say, e; they are also closely linked to the field of Diophantine approximation, where one investigates "how well" a given real number may be approximated by a rational one.
Algebraic number theory
In algebraic number theory, the concept of a number is expanded to the algebraic numbers, which are roots of polynomials with rational coefficients. These domains contain elements analogous to the integers, the so-called algebraic integers. In this setting, the familiar features of the integers (e.g. unique factorization) need not hold. The virtue of the machinery employed (Galois theory, group cohomology, class field theory, group representations and L-functions) is that it allows one to partly recover that order for this new class of numbers.
Many number theoretic questions are best attacked by studying them modulo p for all primes p (see finite fields). This is called localization and it leads to the construction of the p-adic numbers; this field of study is called local analysis and it arises from algebraic number theory.
Geometric number theory
Geometric number theory (traditionally called the geometry of numbers) applies geometric ideas to number-theoretic problems. It starts with Minkowski's theorem about lattice points in convex sets and investigations of sphere packings.
Combinatorial number theory
Combinatorial number theory deals with number theoretic problems which involve combinatorial ideas in their formulations or solutions. Paul Erdős is the main founder of this branch of number theory. Typical topics include covering system, zero-sum problems, various restricted sumsets, and arithmetic progressions in a set of integers. Algebraic or analytic methods are powerful in this field.
Computational number theory
Computational number theory studies algorithms relevant in number theory. Fast algorithms for prime testing and integer factorization have important applications in cryptography.
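As a small illustration of the kind of algorithm this area studies, here is a sketch of the Miller-Rabin primality test. It is probabilistic in general; the fixed small bases used here make it a toy rather than a production-grade test.

```python
# Sketch of the Miller-Rabin primality test with a handful of fixed bases
# (illustrative rather than production-grade).
def miller_rabin(n, bases=(2, 3, 5, 7, 11, 13)):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False             # base a witnesses that n is composite
    return True

print([n for n in range(2, 60) if miller_rabin(n)])   # the primes below 60
```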
History
Vedic number theory
Mathematicians in India have been interested in finding integral solutions of Diophantine equations since the Vedic era. The earliest geometric use of Diophantine equations can be traced back to the Sulba Sutras, which were written between the 8th and 6th centuries BC. Baudhayana (c. 800 BC) found two sets of positive integral solutions to a set of simultaneous Diophantine equations, and also used simultaneous Diophantine equations with up to four unknowns. Apastamba (c. 600 BC) used simultaneous Diophantine equations with up to five unknowns.
Jaina number theory
In India, Jaina mathematicians developed the earliest systematic theory of numbers from the 4th century BC to the 2nd century CE. The Jaina text Surya Prajinapti (c. 400 BC) classifies all numbers into three sets: enumerable, innumerable and infinite. Each of these was further subdivided into three orders:
• Enumerable: lowest, intermediate and highest.
• Innumerable: nearly innumerable, truly innumerable and innumerably innumerable.
• Infinite: nearly infinite, truly infinite, infinitely infinite.
The Jains were the first to discard the idea that all infinites were the same or equal. They recognized five different types of infinity: infinite in one and two directions (one dimension), infinite in area (two dimensions), infinite everywhere (three dimensions), and infinite perpetually (infinite number of dimensions).
The highest enumerable number N of the Jains corresponds to the modern concept of aleph-null $\aleph_0$ (the cardinal number of the infinite set of integers 1, 2, ...), the smallest cardinal transfinite number. The Jains also defined a whole system of transfinite cardinal numbers, of which $\aleph_0$ is the smallest.
In the Jaina work on the theory of sets, two basic types of transfinite numbers are distinguished. On both physical and ontological grounds, a distinction was made between asamkhyata and ananta, between rigidly bounded and loosely bounded infinities.
Hellenistic number theory
Number theory was a favorite study among the Hellenistic mathematicians of Alexandria, Egypt from the 3rd century CE, who were aware of the Diophantine equation concept in numerous special cases. The first Hellenistic mathematician to study these equations was Diophantus.
Diophantus also looked for a method of finding integer solutions to linear indeterminate equations, equations that lack sufficient information to produce a single discrete set of answers. The equation x + y = 5 is such an equation. Diophantus discovered that many indeterminate equations can be reduced to a form where a certain category of answers is known even though a specific answer is not.
Classical Indian number theory
Diophantine equations were extensively studied by mathematicians in medieval India, who were the first to systematically investigate methods for the determination of integral solutions of Diophantine equations. Aryabhata (499) gave the first explicit description of the general integral solution of the linear Diophantine equation ay + bx = c, which occurs in his text Aryabhatiya. This kuttaka algorithm, which finds solutions to Diophantine equations by means of continued fractions, is considered to be one of the most significant contributions of Aryabhata to pure mathematics. Aryabhata applied the technique to give integral solutions of simultaneous linear Diophantine equations, a problem with important applications in astronomy. He also found the general solution to the indeterminate linear equation using this method.
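In modern terms, the kuttaka problem of solving a linear equation like $ax + by = c$ in integers reduces to the extended Euclidean algorithm. The sketch below is a modern rendering of that reduction, not Aryabhata's original procedure; the coefficients 137, 60, 10 are arbitrary illustrative values.

```python
# Modern rendering of the kuttaka problem a*x + b*y = c via the extended Euclidean
# algorithm (not Aryabhata's original formulation; 137, 60, 10 are arbitrary values).
def extended_gcd(a, b):
    """Return (g, s, t) with a*s + b*t = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, s, t = extended_gcd(b, a % b)
    return g, t, s - (a // b) * t

def solve_linear_diophantine(a, b, c):
    """One integer solution (x, y) of a*x + b*y = c, or None if no solution exists."""
    g, s, t = extended_gcd(a, b)
    if c % g != 0:
        return None
    return s * (c // g), t * (c // g)

x, y = solve_linear_diophantine(137, 60, 10)
print(x, y, 137 * x + 60 * y)        # last value is 10, confirming the solution
```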
Brahmagupta in 628 handled more difficult Diophantine equations. He used the chakravala method to solve quadratic Diophantine equations, including forms of Pell's equation such as $61x^2 + 1 = y^2$. His Brahma Sphuta Siddhanta was translated into Arabic in 773 and was subsequently translated into Latin in 1126. The equation $61x^2 + 1 = y^2$ was later posed as a problem in 1657 by the French mathematician Pierre de Fermat. The general solution to this particular form of Pell's equation was found over 70 years later by Euler, while the general solution to Pell's equation was found over 100 years later by Lagrange in 1767. However, the general solution to Pell's equation had already been recorded by Bhaskara II in 1150, using a modified version of Brahmagupta's chakravala method, which he also used to find the general solution to other indeterminate quadratic equations and quadratic Diophantine equations. Bhaskara's chakravala method for finding the general solution to Pell's equation was much simpler than the method used by Lagrange over 600 years later. Bhaskara also found solutions to other indeterminate quadratic, cubic, quartic and higher-order polynomial equations. Narayana Pandit further improved on the chakravala method and found more general solutions to other indeterminate quadratic and higher-order polynomial equations.
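For comparison, here is a sketch of the continued-fraction route to Pell's equation later used by Euler and Lagrange (the chakravala method proceeds differently): expand $\sqrt{N}$ as a continued fraction and test successive convergents $h/k$ until $h^2 - Nk^2 = 1$. For $N = 61$ this recovers the famously large fundamental solution.

```python
# Continued-fraction sketch for Pell's equation (the Euler/Lagrange route; the
# chakravala method is different). Expand sqrt(N) as a continued fraction and
# test convergents h/k until h^2 - N*k^2 = 1.
import math

def pell_fundamental(N):
    """Smallest positive (h, k) with h^2 - N*k^2 = 1, for non-square N."""
    a0 = math.isqrt(N)
    m, d, a = 0, 1, a0
    h_prev, h = 1, a0                # convergent numerators
    k_prev, k = 0, 1                 # convergent denominators
    while h * h - N * k * k != 1:
        m = d * a - m
        d = (N - m * m) // d
        a = (a0 + m) // d
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
    return h, k

h, k = pell_fundamental(61)
print(h, k)                          # 1766319049 226153980
print(h * h - 61 * k * k)            # 1, i.e. x = k, y = h solve 61*x^2 + 1 = y^2 in the text's notation
```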
Islamic number theory
From the 9th century, Islamic mathematicians had a keen interest in number theory. The first of these mathematicians was the Arab mathematician Thabit ibn Qurra, who discovered a theorem which allowed pairs of amicable numbers to be found, that is two numbers such that each is the sum of the proper divisors of the other. In the 10th century, Al-Baghdadi looked at a slight variant of Thabit ibn Qurra's theorem.
In the 10th century, al-Haytham seems to have been the first to attempt to classify all even perfect numbers (numbers equal to the sum of their proper divisors) as those of the form $2^{k-1}(2^k - 1)$ where $2^k - 1$ is prime. Al-Haytham is also the first person to state Wilson's theorem, namely that if $p$ is prime then $1+(p-1)!$ is divisible by $p$. It is unclear whether he knew how to prove this result. It is called Wilson's theorem because of a comment made by Edward Waring in 1770 that John Wilson had noticed the result. There is no evidence that John Wilson knew how to prove it, and Waring most certainly did not. Lagrange gave the first proof in 1771, and it was more than 750 years after al-Haytham before number theory surpassed this achievement of Islamic mathematics.
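Both statements are easy to spot-check numerically; the following Python snippet (illustrative only) verifies the first few even perfect numbers of the stated form and tests Wilson's criterion on a few small values.

```python
# Spot checks of the two statements above (illustrative only).
from math import factorial

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Even perfect numbers of the form 2^(k-1) * (2^k - 1) with 2^k - 1 prime
for k in range(2, 8):
    if is_prime(2 ** k - 1):
        n = 2 ** (k - 1) * (2 ** k - 1)
        print(n, sum(d for d in range(1, n) if n % d == 0) == n)   # 6, 28, 496, 8128 -> True

# Wilson's theorem: for prime p, 1 + (p-1)! is divisible by p (fails for composite 12)
for p in (5, 7, 11, 12):
    print(p, (factorial(p - 1) + 1) % p == 0)
```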
Amicable numbers played a large role in Islamic mathematics. In the 13th century, the Persian mathematician al-Farisi gave a new proof of Thabit ibn Qurra's theorem, introducing important new ideas concerning factorisation and combinatorial methods. He also gave the pair of amicable numbers 17296 and 18416, which has been attributed to Euler, but which we know was known before al-Farisi, perhaps even to Thabit ibn Qurra himself. In the 17th century, Muhammad Baqir Yazdi gave the pair of amicable numbers 9,363,584 and 9,437,056, still many years before Euler's contribution.
Early European number theory
Number theory began in Europe in the 16th and 17th centuries, with François Viète, Bachet de Meziriac, and especially Fermat, whose method of infinite descent provided the first general technique for proving results about Diophantine questions. Fermat's last theorem was posed as a problem in 1637; a proof was not found until 1994. Fermat also posed the equation $61x^2 + 1 = y^2$ as a problem in 1657.
In the eighteenth century, Euler and Lagrange made important contributions to number theory. Euler did some work on analytic number theory, and found a general solution to the equation $61x^2 + 1 = y^2$, which Fermat posed as a problem. Lagrange found a solution to the more general Pell's equation. Euler and Lagrange solved these Pell equations by means of continued fractions, though this was more difficult than the Indian chakravala method.
Beginnings of modern number theory
Around the beginning of the nineteenth century, books by Legendre (1798) and Gauss put together the first systematic theories in Europe. Gauss's Disquisitiones Arithmeticae (1801) may be said to begin the modern theory of numbers.
The formulation of the theory of congruences starts with Gauss's Disquisitiones. He introduced the symbolism
$a \equiv b \pmod c,$
and explored most of the field. Chebyshev published in 1847 a work in Russian on the subject, and in France Serret popularised it.
Besides summarizing previous work, Legendre stated the law of quadratic reciprocity. This law, discovered by induction and enunciated by Euler, was first proved by Legendre in his Théorie des Nombres (1798) for special cases. Independently of Euler and Legendre, Gauss discovered the law about 1795, and was the first to give a general proof. Others who have contributed to the subject include Cauchy; Dirichlet, whose Vorlesungen über Zahlentheorie is a classic; Jacobi, who introduced the Jacobi symbol; Liouville, Zeller(?), Eisenstein, Kummer, and Kronecker. The theory extends to include cubic and biquadratic reciprocity (Gauss; Jacobi, who first proved the law of cubic reciprocity; and Kummer).
To Gauss is also due the representation of numbers by binary quadratic forms.
Prime number theory
A recurring and productive theme in number theory is the study of the distribution of prime numbers. As a teenager, Carl Friedrich Gauss conjectured the asymptotic form of the number of primes not exceeding a given bound (the prime number theorem).
Chebyshev (1850) gave useful bounds for the number of primes between two given limits. Riemann introduced complex analysis into the subject via the Riemann zeta function. This led to a relation between the zeros of the zeta function and the distribution of primes, eventually leading to a proof of the prime number theorem, obtained independently by Hadamard and de la Vallée Poussin in 1896. An elementary proof was given later by Paul Erdős and Atle Selberg in 1949. Here elementary means that it does not use techniques of complex analysis; the proof is nonetheless very ingenious and difficult. The Riemann hypothesis, which would give much more accurate information, is still an open question.
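The content of the prime number theorem can be glimpsed numerically; the sketch below (illustrative only, using a simple sieve) compares $\pi(x)$ with the crude approximation $x/\ln x$ and shows the ratio drifting toward 1.

```python
# Numerical glimpse of the prime number theorem: compare pi(x) with x / ln(x).
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, flag in enumerate(sieve) if flag]

for x in (10**3, 10**4, 10**5, 10**6):
    pi_x = len(primes_up_to(x))
    approx = x / math.log(x)
    print(x, pi_x, round(approx), round(pi_x / approx, 3))   # ratio slowly approaches 1
```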
Nineteenth-century developments
Cauchy, Poinsot (1845), Lebesgue(?) (1859, 1868), and notably Hermite have added to the subject. In the theory of ternary forms Eisenstein has been a leader, and to him and H. J. S. Smith is also due a noteworthy advance in the theory of forms in general. Smith gave a complete classification of ternary quadratic forms, and extended Gauss's researches concerning real quadratic forms to complex forms. The investigations concerning the representation of numbers by the sum of 4, 5, 6, 7, 8 squares were advanced by Eisenstein and the theory was completed by Smith.
Dirichlet was the first to lecture upon the subject in a German university. Among his contributions is the extension of Fermat's last theorem:
$x^n+y^n \neq z^n, (x,y,z \neq 0, n > 2)$
which Euler and Legendre had proved for $n = 3, 4$ (and therefore, by implication, for all multiples of 3 and 4); Dirichlet showed that $x^5+y^5 \neq az^5$. Among the later French writers are Borel; Poincaré, whose memoirs are numerous and valuable; Tannery; and Stieltjes. Among the leading contributors in Germany were Kronecker, Kummer, Schering, Bachmann, and Dedekind. In Austria, Stolz's Vorlesungen über allgemeine Arithmetik (1885-86), and in England, Mathews' Theory of Numbers (Part I, 1892), were scholarly general works. Genocchi, Sylvester, and J. W. L. Glaisher have also added to the theory.
Twentieth-century developments
Major figures in twentieth-century number theory include Paul Erdős, Gerd Faltings, G. H. Hardy, Edmund Landau, John Edensor Littlewood, Srinivasa Ramanujan and André Weil.
Milestones in twentieth-century number theory include the proof of Fermat's Last Theorem by Andrew Wiles in 1994 and the proof of the related Taniyama–Shimura theorem in 1999.
Quotations
• Mathematics is the queen of the sciences and number theory is the queen of mathematics. — Gauss
• God invented the integers; all else is the work of man. — Kronecker
• I know numbers are beautiful. If they aren't beautiful, nothing is. — Erdős
References
• Apostol, T. M. (1986). Introduction to Analytic Number Theory, Springer-Verlag. ISBN 0387901639.
• Dedekind, Richard (1963). Essays on the Theory of Numbers, Cambridge University Press. ISBN 0-486-21010-3.
• Davenport, Harold (1999). The Higher Arithmetic: An Introduction to the Theory of Numbers (7th ed.), Cambridge University Press. ISBN 0521634466.
• Guy, Richard K. (1981). Unsolved Problems in Number Theory, Springer-Verlag. ISBN 0-387-90593-6.
• Hardy, G. H. and Wright, E. M. (1980). An Introduction to the Theory of Numbers (5th ed.), Oxford University Press. ISBN 0198531710.
• Niven, Ivan; Herbert S. Zuckerman and Hugh L. Montgomery (1991). An Introduction to the Theory of Numbers (5th ed.), Wiley Text Books. ISBN 0471625469.
• Ore, Oystein (1948). Number Theory and Its History, Dover Publications, Inc.. ISBN 0-486-65620-9.
• Smith, David. History of Modern Mathematics (1906) (adapted public domain text)
• Dutta, Amartya Kumar (2002). 'Diophantine equations: The Kuttaka', Resonance - Journal of Science Education.
• O'Connor, John J. and Robertson, Edmund F. (2004). 'Arabic/Islamic mathematics', MacTutor History of Mathematics archive.
• O'Connor, John J. and Robertson, Edmund F. (2004). 'Index of Ancient Indian mathematics', MacTutor History of Mathematics archive.
• O'Connor, John J. and Robertson, Edmund F. (2004). 'Numbers and Number Theory Index', MacTutor History of Mathematics archive.
• Important publications in number theory
http://infostructuralist.wordpress.com/category/control/
# The Information Structuralist
## Stochastic kernels vs. conditional probability distributions
Posted in Control, Feedback, Information Theory, Probability by mraginsky on March 17, 2013
Larry Wasserman‘s recent post about misinterpretation of p-values is a good reminder about a fundamental distinction anyone working in information theory, control or machine learning should be aware of — namely, the distinction between stochastic kernels and conditional probability distributions.
(more…)
## Updates! Get your updates here!
Posted in Conference Blogging, Control, Feedback, Information Theory, Models of Complex Stochastic Systems, Narcissism, Papers and Preprints by mraginsky on October 5, 2011
Just a couple of short items, while I catch my breath.
1. First of all, starting January 1, 2012 I will find myself amidst the lovely cornfields of Central Illinois, where I will be an assistant professor in the Department of Electrical and Computer Engineering at UIUC. This will be a homecoming of sorts, since I have spent three years there as a Beckman Fellow. My new home will be in the Coordinated Science Laboratory, where I will continue doing (and blogging about) the same things I do (and blog about).
2. Speaking of Central Illinois, last week I was at the Allerton Conference, where I had tried my best to preach Uncle Judea's gospel to anyone willing to listen: information theorists and their fellow travelers. The paper, entitled “Directed information and Pearl’s causal calculus,” is now up on arxiv, and here is the abstract:
Probabilistic graphical models are a fundamental tool in statistics, machine learning, signal processing, and control. When such a model is defined on a directed acyclic graph (DAG), one can assign a partial ordering to the events occurring in the corresponding stochastic system. Based on the work of Judea Pearl and others, these DAG-based “causal factorizations” of joint probability measures have been used for characterization and inference of functional dependencies (causal links). This mostly expository paper focuses on several connections between Pearl’s formalism (and in particular his notion of “intervention”) and information-theoretic notions of causality and feedback (such as causal conditioning, directed stochastic kernels, and directed information). As an application, we show how conditional directed information can be used to develop an information-theoretic version of Pearl’s “back-door” criterion for identifiability of causal effects from passive observations. This suggests that the back-door criterion can be thought of as a causal analog of statistical sufficiency.
If you had seen my posts on stochastic kernels, directed information, and causal interventions, you will, more or less, know what to expect.
Incidentally, due to my forthcoming move to UIUC, this will be my last Allerton paper!
## Missing all the action
Posted in Control, Feedback, Games and Decisions, Information Theory, Optimization by mraginsky on July 25, 2011
Update: I fixed a couple of broken links.
I want to write down some thoughts inspired by Chernoff’s memo on backward induction that may be relevant to feedback information theory and networked control. Some of these points were brought up in discussions with Serdar Yüksel two years ago.
(more…)
## The lost art of writing
Posted in Academic Snark, Control, Games and Decisions by mraginsky on July 19, 2011
From the opening paragraph of Herman Chernoff‘s unpublished 1963 memo “Backward induction in dynamic programming” (thanks to Armand Makowski for a scanned copy):
The solution of finite sequence dynamic programming problems involve a backward induction argument, the foundations of which are generally understood hazily. The purpose of this memo is to add some clarification which may be slightly redundant and whose urgency may be something less than vital.
Alas, nobody writes like that anymore.
## ECE 299: regression with quadratic loss; stochastic simulation via Rademacher bootstrap
Posted in Control, Corrupting the Young, Statistical Learning and Inference by mraginsky on April 20, 2011
I gave the last lecture earlier today, wrapping up the semester. Here are the notes from the last two weeks:
• Regression with quadratic loss, mostly in reproducing kernel Hilbert spaces, with and without regularization.
• Case study: stochastic simulation via Rademacher bootstrap, where I discuss the work of Vladimir Koltchinskii et al. on efficient stopping algorithms for Monte Carlo stochastic simulation. The idea is to keep sampling until the empirical Rademacher average falls below a given threshold. Once that happens, you stop and compute a minimizer of the empirical risk. The work of Koltchinskii et al. was in turn inspired by the ideas of Mathukumalli Vidyasagar on the use of statistical learning theory in randomized algorithms for robust controller synthesis.
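For readers who want something concrete, here is a toy numpy sketch of the stopping idea in that case study (my own illustration with an arbitrary three-element function class, not the algorithm from the cited papers): keep drawing samples until a Monte Carlo estimate of the empirical Rademacher average falls below a threshold, then return an empirical risk minimizer over the class.

```python
# Toy sketch of the stopping idea (my own illustration, not the algorithm from the
# cited papers): keep sampling until a Monte Carlo estimate of the empirical
# Rademacher average of a small finite function class drops below a threshold,
# then return an empirical risk minimizer over that class.
import numpy as np

rng = np.random.default_rng(0)
function_class = [np.sin, np.cos, lambda t: t / (1.0 + np.abs(t))]   # arbitrary toy class

def empirical_rademacher(samples, n_sign_draws=200):
    """Monte Carlo estimate of E_sigma sup_f (1/n) sum_i sigma_i f(X_i)."""
    values = np.stack([f(samples) for f in function_class])           # shape (|F|, n)
    sigma = rng.choice([-1.0, 1.0], size=(n_sign_draws, len(samples)))
    correlations = sigma @ values.T / len(samples)                    # shape (draws, |F|)
    return correlations.max(axis=1).mean()

threshold, batch = 0.05, 50
samples = rng.normal(size=batch)
while empirical_rademacher(samples) > threshold:                      # stopping rule
    samples = np.concatenate([samples, rng.normal(size=batch)])

# After stopping: empirical risk minimization, here squared loss against a fixed target.
risks = [np.mean((f(samples) - np.tanh(samples)) ** 2) for f in function_class]
print(len(samples), int(np.argmin(risks)))
```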
Monday’s lecture was on stochastic gradient descent as an alternative to batch empirical risk minimization. I will post the notes soon.
Tagged with: ECE 299
## Linkage
Posted in Control, Information Theory, Papers and Preprints by mraginsky on December 26, 2010
In lieu of serious posting, which will resume in the new year, a few links:
• Several videos of David Blackwell, over at The Inherent Uncertainty
• The (in)famous Witsenhausen counterexample in decentralized control theory now has its own Wikipedia entry (and I think I know who is behind it).
• Emmanuel Abbe, guest-blogging at Combinatorics and More, presents his perspective on Erdal Arikan’s polar codes. Much of what he says makes me think of Terence Tao‘s work on structure and randomness in “large” combinatorial objects.
• Markov decision processes make a surprising appearance in a paper on subexponential lower bounds for certain randomized pivot rules for the simplex algorithm.
• Computation and Control: a new blog by Jerome Le Ny.
• And, last but not least, exorcising Laplace’s demon.
## Value of information, Bayes risks, and rate-distortion theory
Posted in Control, Games and Decisions, Information Theory by mraginsky on December 1, 2010
In the previous post we have seen that access to additional information is not always helpful in decision-making. On the other hand, extra information can never hurt, assuming one is precise about the quantitative meaning of “extra information.” In this post, I will show how Shannon’s information theory can be used to speak meaningfully about the value of information for decision-making. This particular approach was developed in the 1960s and 1970s by Ruslan Stratonovich (of the Stratonovich integral fame, among other things) and described in his book on information theory, which was published in Russian in 1975. As far as I know, it was never translated into English, which is a shame, since Stratonovich was an extremely original thinker, and the book contains a deep treatment of the three fundamental problems of information theory (lossless source coding, noisy channel coding, lossy source coding) from the viewpoint of statistical physics.
(more…)
## Deadly ninja weapons: Blackwell’s principle of irrelevant information
Posted in Control, Feedback, Games and Decisions, Models of Complex Stochastic Systems, Optimization by mraginsky on November 8, 2010
Having more information when making decisions should always help, it seems. However, there are situations in which this is not the case. Suppose that you observe two pieces of information, ${x}$ and ${y}$, which you can use to choose an action ${u}$. Suppose also that, upon choosing ${u}$, you incur a cost ${c(x,u)}$. For simplicity let us assume that ${x}$, ${y}$, and ${u}$ take values in finite sets ${{\mathsf X}}$, ${{\mathsf Y}}$, and ${{\mathsf U}}$, respectively. Then it is obvious that, no matter which “strategy” for choosing ${u}$ you follow, you cannot do better than ${u^*(x) = \displaystyle{\rm arg\,min}_{u \in {\mathsf U}} c(x,u)}$. More formally, for any strategy ${\gamma : {\mathsf X} \times {\mathsf Y} \rightarrow {\mathsf U}}$ we have
$\displaystyle c(x,u^*(x)) = \min_{u \in {\mathsf U}} c(x,u) \le c(x,\gamma(x,y)).$
Thus, the extra information ${y}$ is irrelevant. Why? Because the cost you incur does not depend on ${y}$ directly, though it may do so through ${u}$.
Interestingly, as David Blackwell has shown in 1964 in a three-page paper, this seemingly innocuous argument does not go through when ${{\mathsf X}}$, ${{\mathsf Y}}$, and ${{\mathsf U}}$ are Borel subsets of Euclidean spaces, the cost function ${c}$ is bounded and Borel-measurable, and the strategies ${\gamma}$ are required to be measurable as well. However, if ${x}$ and ${y}$ are random variables with a known joint distribution ${P}$, then ${y}$ is indeed irrelevant for the purpose of minimizing expected cost.
Warning: lots of measure-theoretic noodling below the fold; if that is not your cup of tea, you can just assume that all sets are finite and go with the poor man’s version stated in the first paragraph. Then all the results below will hold.
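For the poor man's (finite) version, the claim is easy to check by brute force; the sketch below (my own toy example with random costs on $3 \times 3 \times 3$ finite sets) enumerates every deterministic strategy $\gamma(x,y)$ and confirms that none of them ever beats $u^*(x)$ pointwise.

```python
# Brute-force check of the finite ("poor man's") version: with a cost c(x, u) that
# does not depend on y, no deterministic strategy gamma(x, y) beats u*(x) pointwise.
# (Toy example; the cost table is random.)
import itertools
import numpy as np

rng = np.random.default_rng(1)
nX, nY, nU = 3, 3, 3
c = rng.uniform(size=(nX, nU))                       # cost depends on (x, u) only

u_star = c.argmin(axis=1)                            # u*(x) = argmin_u c(x, u)
best = c[np.arange(nX), u_star]                      # c(x, u*(x))

beaten = False
for table in itertools.product(range(nU), repeat=nX * nY):
    gamma = np.array(table).reshape(nX, nY)          # a strategy (x, y) -> u
    costs = c[np.arange(nX)[:, None], gamma]         # c(x, gamma(x, y)) for all (x, y)
    if (costs < best[:, None] - 1e-12).any():
        beaten = True
        break
print(beaten)                                        # False: the extra observation y never helps
```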
(more…)
## Bell Systems Technical Journal: now online
Posted in Control, Echoes of Cybernetics, Feedback, Games and Decisions, Information Theory, Open Access by mraginsky on November 1, 2010
The Bell Systems Technical Journal is now online. Mmmm, seminal articles … . Shannon, Wyner, Slepian, Witsenhausen — they’re all here!
(h/t Anand Sarwate)
## Sincerely, your biggest Fano
Posted in Control, Feedback, Information Theory, Narcissism, Optimization, Papers and Preprints by mraginsky on October 13, 2010
It’s time to fire up the Shameless Self-Promotion Engine again, for I am about to announce a preprint and a paper to be published. Both deal with more or less the same problem — i.e., fundamental limits of certain sequential procedures — and both rely on the same set of techniques: metric entropy, Fano’s inequality, and bounds on the mutual information through divergence with auxiliary probability measures.
So, without further ado, I give you: (more…)