http://math.stackexchange.com/questions/323288/statistics-probability-normal-distribution
# statistics: probability, normal distribution
The time that customers take to complete their transaction at a money machine is a random variable with mean $\mu$ = $2$ minutes and standard deviation $\sigma$ = $0.6$ minutes.
About 30% of customers take more than 3 minutes to complete their transaction. Take a random sample of size $50$.
Find the probability that the selected sample takes on average between 1.8 minutes and 2.25 minutes.
Here is what I tried:
When I first read the question I thought I needed to use the central limit theorem, so I set up
$$n = 50\\ \sigma = 0.6 \\ \mu = 2$$
$\mathrm{P}(1.8 < X < 2.25)$
then applied CLT:
$\displaystyle\mathrm{P}\left(\frac{1.8 - \mu}{\sigma/n^{1/2}} < X < \frac{2.25 - \mu}{ \sigma/n^{1/2}} \right)$
and I was going to just plug in the given values..
"About 30% of customers take more than 3 minutes to complete their transaction.. "
How should I apply this with the CLT? Does "about 30%" mean 30% of the 50, so that 15 customers take more than 3 minutes? Should I then use 15 instead of 50?
It is a warning that the distribution of the sample means may not, for smallish sample sizes, be well-approximated by the normal. But a sample size of $50$ is not all that small, so I would cross my fingers and use the CLT. – André Nicolas Mar 7 at 5:50
## 1 Answer
The $z$-score with upper-tail probability $0.3$ is about $0.52$. Consider that
$$0.52 \times 0.6 = 0.31$$ and
$$2 + 0.31 = 2.31 \neq 3.$$
So if the population were normal, the value exceeded by $30\%$ of customers would be about $2.31$ minutes, not $3$ minutes. The population distribution is therefore not normal; in fact, it is right-skewed.
Still, since the sample size is large, I would use the CLT: the distribution of the sample mean is approximately normal regardless of the distribution of the population.
To use the CLT, $n=50$, $\mu = 2$, $\sigma = 0.6$. One thing I learned constantly in my math class is to define the probability distribution. So in this case, let $\bar X$ be the mean time taken by the sample of $50$ customers to complete their transactions. Then $$P(1.8<\bar X<2.25) = P\left(\frac{1.8-2}{0.6/\sqrt{50}}\leq Z \leq \frac{2.25-2}{0.6/\sqrt{50}}\right)$$
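For a quick numerical check of that probability, here is a short script (an added sketch using SciPy, not part of the original answer):

```python
# Check the CLT calculation above numerically (illustration only).
from math import sqrt
from scipy.stats import norm

mu, sigma, n = 2.0, 0.6, 50
se = sigma / sqrt(n)                     # standard error of the sample mean
z_lo = (1.8 - mu) / se                   # about -2.36
z_hi = (2.25 - mu) / se                  # about  2.95
print(norm.cdf(z_hi) - norm.cdf(z_lo))   # about 0.989
```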
Thank you bryansis. Then I just have to use the CLT on P(1.8 < X < 2.25)? But I'm still confused about how to handle the 30% of customers – hibc Mar 7 at 7:06
http://rjlipton.wordpress.com/2011/11/29/a-breakthrough-on-matrix-product/
## a personal view of the theory of computation
Beating Coppersmith-Winograd
Virginia Vassilevska Williams is a theoretical computer scientist who has worked on a number of problems, including a very neat question about cheating in the setup of tournaments, and a bunch of novel papers exemplified by this one on graph and matrix problems with runtime exponents of ${3}$ that have long been begging to be improved.
Today Ken and I want to discuss her latest breakthrough in improving our understanding of the matrix multiplication problem.
Of course Volker Strassen first showed in his famous paper in 1969 that the obvious cubic algorithm was suboptimal. Ever since then progress has been measured with one parameter ${\omega}$: if your algorithm runs in time ${O(n^{\omega})}$, then you are known by this one number. Strassen got ${\omega = 2.808\dots}$, and the race was off. A long series of improvements started to happen, which for a while seemed to be stuck above ${2.5}$. Then, Don Coppersmith and Shmuel Winograd (CW) got ${\omega < 2.496}$ and everything changed. After a contribution by Strassen himself, CW finally obtained ${\omega < 2.3755}$ in 1987, with full details here. This has been the best known for decades.
This has all changed now. Virginia has proved that she can beat the “barrier” of CW and get a new lower value for ${\omega}$. Currently her paper gives ${\omega < 2.3727}$, an improvement of “only” ${0.0028}$, but there is promise of more. This is also another case of proofs coming in twos, as a theorem ${\omega < 2.3737}$ in PhD thesis work by Andrew Stothers was circulated to some in June 2010 but not very widely. All this is extremely exciting, and is one of the best results proved in years in all of theory. While these algorithms are unlikely to be usable in practice, they help shed light on one of the basic questions of complexity theory: how fast can we multiply matrices? What could be more fundamental than that?
## The Basic Idea
Matrix multiplication is bi-linear: the formula for the ${i,j}$ entry of ${XY}$ is ${\sum_k x_{i,k} y_{k,j}}$. The first step in simplifying the problem is to make it more complicated: Let us have indicator variables ${z_{i,j}}$ and compute instead the tri-linear form
$\displaystyle T(x,y,z) = \sum_{i,j,k} x_{i,k} y_{k,j} z_{i,j}$
This is a special case of a general tri-linear form
$\displaystyle F = \sum_{i,j,k = 1}^{N} C_{i,j,k} x_i y_j z_k$
where ${N = n^2}$ and we have re-mapped the indices. It looks like we have made order-of ${N^3 = n^6}$ work for ourselves. The key, however, is to try to fit a representation of the form:
$\displaystyle \begin{array}{rcl} F &=& \sum_{\ell = 1}^r (\sum_i a_{\ell,i} x_i)(\sum_j b_{\ell,j} y_j)(\sum_k c_{\ell,k} z_k)\\ &=& \sum_{\ell = 1}^r P_\ell (\sum_k c_{\ell,k} z_k), \end{array}$
where ${P_\ell = (\sum_i a_{\ell,i} x_i)(\sum_j b_{\ell,j} y_j)}$. The point is, suppose we can compute these products ${P_\ell}$ in total time ${T}$. Then we can compute (the coefficients for) all the desired entries
$\displaystyle z_k = \sum_{\ell = 1}^r c_{\ell,k} P_\ell$
in ${Nr}$ steps. Thus what we have are separate handles on the time ${T}$ for the products and the time for the ${z_k}$. The way to manage and balance these times involves recursion.
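To make the role of the products ${P_\ell}$ concrete, here is the classical Strassen scheme for ${2 \times 2}$ matrices, which fits the form above with ${r = 7}$ products instead of the obvious ${8}$ (an illustrative sketch in Python of the general idea only, not the CW or Vassilevska Williams construction):

```python
# Strassen's 2x2 scheme: 7 bilinear products, each output entry a linear
# combination of them (illustration only).
import numpy as np

def strassen_2x2(X, Y):
    a, b, c, d = X[0, 0], X[0, 1], X[1, 0], X[1, 1]
    e, f, g, h = Y[0, 0], Y[0, 1], Y[1, 0], Y[1, 1]
    p1 = (a + d) * (e + h)
    p2 = (c + d) * e
    p3 = a * (f - h)
    p4 = d * (g - e)
    p5 = (a + b) * h
    p6 = (c - a) * (e + f)
    p7 = (b - d) * (g + h)
    return np.array([[p1 + p4 - p5 + p7, p3 + p5],
                     [p2 + p4,           p1 - p2 + p3 + p6]])

X, Y = np.random.rand(2, 2), np.random.rand(2, 2)
assert np.allclose(strassen_2x2(X, Y), X @ Y)
```

Recursing on ${2 \times 2}$ blocks with this scheme is what yields Strassen's exponent ${\log_2 7}$.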
## The Basis Idea
The recursion idea is nice to picture for matrices, though its implementation for the way we have unrolled matrices into vectors is not so nice. Picture ${X}$ and ${Y}$ as each being ${4 \times 4}$ matrices. We can regard ${X}$ instead as a ${2 \times 2}$ matrix of four ${2 \times 2}$ matrices ${X_1,X_2,X_3,X_4}$, and do the same for ${Y}$. Then the product ${XY}$ can be written via ${2 \times 2}$ products ${X_i Y_j}$, and we can picture ourselves recursing on these products.
The reason why the vector case does not look so nice is that the tri-linear form ${F}$ is so general—indeed we cannot expect to fit a general tri-linear form into a small number of products ${P_\ell}$. What CW did, building on work by Arnold Schönhage, is relax the tri-linear form by introducing more than ${N}$-many ${z_k}$ variables, supplying appropriate coefficients to set up the recursion, and most of all framing a strategy for setting variables to zero so that three goals are met: the recursion is furthered, the values of “${r}$” at each level stay relatively small, and the matrix product can be extracted from the variables left over. This involved a hashing scheme which used subsets of integers that are free of arithmetical progressions.
The final step by CW was to choose a starting algorithm ${{\cal A}}$ for the basis case of the recursion. They devised one and got ${\omega < 2.39}$. Then they noticed that if they bumped up the base case by manually expanding their algorithm to an ${{\cal A}^2}$ handling the next-higher case, they got a better analysis and their famous result ${\omega < 2.376}$. By their way of thinking, bumping the basis up once more to ${{\cal A}^3}$ was the way to do better, but they left analyzing this as a problem. Others attempted the analysis and…found it gave worse not better results. So ${2.376}$, actually ${2.375477}$, stood.
The insight for breaking through was to make a bigger jump in the basis. Vassilevska Williams was actually anticipated in this without her knowledge by Andrew Stothers, in his 2010 PhD thesis at the University of Edinburgh. Stothers used ${{\cal A}^4}$ and showed this method capable of achieving ${\omega < 2.3737}$, though there has been some doubt about whether all details were worked out. Vassilevska Williams, however, used ${{\cal A}^8}$ and brought some powerful computing software to bear on a more-extensive framework for the plan. It is not clear whether there is anything necessary about jumping ${{\cal A}}$ by a power of ${2}$—in any event her program and framework work for any exponent.
## The Proof
We cannot yet really give a good summary of the proof—further details are in her paper. One quick observation about her work is in order. She used the CW method, but extended it into a general schema that can be used to find good matrix product algorithms, perhaps even better than the one in the paper. The algorithms themselves can be generated and examined, but as usual the task of analyzing them is very hard. Her brilliant insight is that this task can be laid out automatically by a certain computer program. This allows her to do the analysis where others previously failed.
For example here is a sample of the overview of her main program, in pseudo-code:
The details are not as important as the fact that this program allows one to work on much larger schemas than anyone could previously.
## What Does the Bound Mean?
Note that she has improved the bound of Stothers by ${0.001}$. For what threshold value of ${n}$ does an additive improvement of ${a}$ in the exponent halve the running time? The answer is ${n = 2^{1/a}}$, which in this case is ${n = 2^{1000}}$. This value is far above the Bekenstein Bound for the number of particles that could be fit into a volume the size of the observable universe without collapsing it into a black hole. In this sense the algorithm itself is even beyond galactic.
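For the record, that threshold is just the solution of ${n^a = 2}$; a two-line check (an added sketch, not from the original post):

```python
# Size at which an exponent improvement of a halves the running time (illustration only).
a = 0.001                      # improvement over Stothers' exponent
print(f"{2 ** (1 / a):.3g}")   # ~1.07e+301, i.e. 2**1000
```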
The meaning instead comes from this question: Is there a fundamental reason why ${\omega}$ could settle at a value strictly greater than ${2}$? Note that ${\omega = 2}$ is not taken to mean the existence of a quadratic-time algorithm, but rather that for all ${\epsilon > 0}$ there are algorithms that achieve time ${O(n^{2+\epsilon})}$. There was some reason to think ${2.5}$ could be a natural barrier, but it was breached. Perhaps ${\sqrt{5} = 2.236\dots}$, since this is connected to the golden ratio? Her paper notes a recent draft by Noga Alon, Amir Shpilka, and Christopher Umans that speaks somewhat against the optimism shared by many that ${\omega = 2}$.
## Open Problems
Can the current bounds be improved by more computer computations? Are we about to see the solution to this classic question? Or will it be struggling over increments of ${0.001}$?
In any event congratulate Virginia—and Andrew—for their brilliant work.
Update: Markus Bläser, who externally reviewed Stothers’ thesis, has contributed an important comment on the blog of Scott Aaronson here. It evaluates the significance of the work in-context, and also removes the doubt going back to 2010 that was expressed here.
from → History, News, Open Problems, Proofs
45 Comments
1. Sid
November 29, 2011 1:37 pm
Thanks for the very understandable summary. One thing I found interesting about this algorithm is that it uses nonlinear optimization for obtaining the bounds. It is not often one sees algorithms being directly used to prove a theorem.
• Sid
November 29, 2011 1:46 pm
2. Javaid Aslam
November 29, 2011 3:18 pm
I am not quite a theoretical CS person, but I fail to understand the significance or the implications of any improvement over even O(n^3) bound when the NP=?P issue has not made any progress.
• Serge
November 29, 2011 7:55 pm
In a previous post, Gödel said:
“I thought k*n means you understand the solution, k*n^2 means you can solve it but only partly understand it.”
Thus under the standards of Gödel himself, matrix multiplication is still not well understood. However if the new algorithm can be made practical, it will certainly give rise to new larger-scale implementations of matrix multiplication.
• November 29, 2011 8:29 pm
The data size is N = n^2, so in terms of N we’re talking N^{1.1863…}. Of course!
Dick and I mused that the most “salient” constant c that could form a natural barrier for \omega, with 2 strictly < c < 2.3727…, may be 2 + 1/e = 2.367879441… This as opposed to c = sqrt(5) = 2.236… as I wrote and Dave Bacon independently offered on Scott's blog. As I said there, any takers or breakers?
• Serge
November 29, 2011 8:40 pm
Thank you, Ken, for the explanation.
So now we do have a practical bound for matrix multiplication. I really hope the new algorithm will soon be made feasible.
• November 29, 2011 9:09 pm
I should add actually that there is a good point between us and “Gödel” here. The “cubic” algorithms mentioned in this post's first sentence are really N^{1.5} by the same token, and N^2 means quartic. For graph and matrix problems one can indeed argue pretty sweepingly that n^4-time algorithms are probably improvable, and n^5 and higher ones are probably short of understanding.
• November 30, 2011 9:29 am
Re salient constants, I know of proofs that naturally give rise to bounds of the form $n^\alpha$ for some irrational $\alpha$ (usually the root of a quadratic). Typically they work by feeding in an existing bound to get a slightly better one, then iterating that process and showing that the limiting exponent satisfies some polynomial equation. I can’t think of any that naturally give rise to a transcendental bound — does anyone have a good example? Maybe something that depends on a continued-fraction expansion?
• November 30, 2011 10:27 am
Exponent bounds for matrix multiplication generally arise from an iterative process that leads to a transcendental equation (since exponentiation naturally comes into the equation). In simple cases, like Strassen’s algorithm, you get a nice enough equation (namely, 2^x = 7) that you can solve it explicitly and get a transcendental number. Coppersmith and Winograd and the new improvements are a lot more complicated. You can characterize the resulting exponents as solutions of big systems of somewhat complicated simultaneous equations. The solutions are surely transcendental numbers, but I don’t know whether this can be proved.
• December 1, 2011 4:29 am
Oops, I realize now (after reading your response) that I was of course aware of proofs that lead to exponents like $\log 2/\log 3$ — pretty well any proof that starts with a small example of something and tensors it up. So I suppose I'd better make my question more specific and ask whether there is a general type of argument that could conceivably lead to the bound $2+1/e$. Are there natural proof techniques that lead to exponents resembling that one?
• rjlipton *
December 1, 2011 9:06 am
gowers,
I know of no ideas, even, for lower bounds that could come close to that. The best I know is $2n^2$, which seems pretty weak, but still needs a proof. Perhaps the best idea could be something like: if the matrix product exponent were 2, then this means matrix product is really doable in linear time, which then shows $\dots$ Perhaps this approach could be made to work.
• Javaid Aslam
December 1, 2011 1:59 pm
Serge – Do these new algorithms really make our understanding of matrix multiplication any better?
With the increased [process] complexity needed to lower the exponent by only a small fraction, aren't we going in the opposite direction?
• Serge
December 1, 2011 5:41 pm
Of course it does. Gödel was right to emphasize that faster algorithms come from a better understanding of the problem to solve.
Maybe it’s no progress towards solving PvsNP, but since it took 25 years to speed up this algorithm by such a small fraction I find it hard to believe that we’ll know someday whether SAT or integer factorization admit polynomial algorithms…
• December 1, 2011 6:41 pm
My question isn’t about the size of the exponent, but rather whether an exponent made in a simple way out of e could arise naturally from a proof. I can’t conceive of an argument that would lead to such a bound (upper or lower), but that may be just because there’s a technique I’m not familiar with or I’m forgetting something. To answer my question in the affirmative it would be sufficient to exhibit any sensible argument to a sensible combinatorial problem that yielded a bound of the form $n^{f(e)}$ where $f(x)$ was a simple function such as (for instance) a non-constant polynomial in $x$ and $x^{-1}$ with integer coefficients.
3. November 29, 2011 4:54 pm
To better appreciate Strassen-type theorems, I would very much enjoy reading natural statements of these conjectures in contexts other than purely algebraic. Here is an attempt from a geometrically natural point-of-view:
Let ${R}$ be a smooth Riemannian (or Kählerian) manifold that is equipped with a complete set of real (or holomorphic) coordinate functions and a metric (or complex) structure ${g}$. Regard ${g}$ as an oracle that computes at zero computational cost the inner product ${\langle X,Y\rangle_{g}|_p}$ at a given point ${p}$ for arbitrary vector fields ${X}$ and ${Y}$. Then given ${s}$ and ${t}$ as smooth real (or holomorphic) functions, and assigning zero computational cost to the evaluation of functions ${ds(Z)|_p}$ and ${dt(Z)|_p}$ for an arbitrary vector field ${Z}$, the arithmetic cost of computing ${\langle ds,dt\rangle_{g^{-1}}|_p}$ is ${\mathcal{O}[(\dim R)^\omega]}$, with ${\omega \le 2.3727}$.
Is there a more geometrically natural statement of Strassen-type theorems than this? For what other branches of mathematics can Strassen-type theorems be stated naturally? Comments and pointers to the literature — both fundamental and applied — would be very welcome.
• November 30, 2011 3:21 pm
As a followup, here’s another (tentative) geometric description of Strassen-type theorems. From a matrix algebra point of view, it is very remarkable that the matrix-matrix product ${C=AB}$, for ${A}$ and ${B}$ square matrices, requires — asymptotically and assuming ${\omega =2}$ — no more computational effort than the matrix-vector product ${y=Ax}$ for ${x}$ a general vector. Equivalently in the language of differential geometry, the computational effort required to sharp (${\sharp}$) or flat (${\flat}$) a full-rank bilinear form is no greater than the effort to ${\sharp}$ a vector or ${\flat}$ a covector (viewing vectors and covectors as rank-1 bilinear forms).
These reflections lead us to a (dim on my part) geometric appreciation that somehow, to “learn” that a bilinear form is rank-1 requires computational effort, or equivalently in algebraic terms, matrix-vector multiplication is surprisingly just as costly as matrix-matrix multiplication.
It would be great if someone smarter than me could explain these mysteries better and more naturally!
4. Anonymous
November 29, 2011 8:56 pm
Please, could people kindly refrain from fussing over this so called breakthrough and get on with whatever they are supposed to be doing
In particular Scott Aaronson’s blog reads like if someone just proved the Riemann hypothesis.
Such self promotion enterprise is simply embarrassing, both for the people involved and for the TCS community at large.
• rjlipton *
November 29, 2011 9:26 pm
Anonymous,
Hi. It is not that we will use the algorithm it is galactic. Strassen’s original one is borderline. But the idea that progress is made on a problem that stood for almost 25 years seems pretty cool to me.
dick
• November 29, 2011 9:48 pm
“the idea that progress is made on a problem that stood for almost 25 years seems pretty cool to me.”
You are right, but I guess you'd agree there is a difference between “pretty cool” and “one of the best results proved in years in all of theory”. Seriously, is that much hype justified, given that we currently have no clue whether this algorithm will lead to any significant further improvements, or will be a dead end, a mere curiosity, 20 years from now? Regardless of the importance (or lack of it) of the result, it would be much more reasonable to show some restraint, especially as this blog is surely one of the “flagships” of the TCS community when it comes to wider outreach.
• ano
December 1, 2011 9:32 am
Well said Michal, I wonder if there is a correlation between this way of thinking about break-through results in TCS and the relevance of TCS in Computer Science as well as how hard it is for students in the field to get a job.
• Anonymous
December 1, 2011 10:30 am
The significance of the result is already explained by Markus Bläser, this is a marginal improvement based on existing technique, period.
PS I would rather be enlightened if you would like to share your view on the recent update in the blog post by Aaronson, in which he advised people to “go to hell”?
5. December 1, 2011 4:36 am
That last talk about golden ratio sounds like numerology.
• Serge
December 1, 2011 9:48 am
Don’t let yourself be influenced by names! As David Hilbert used to say: “It must be possible to replace in all geometric statements the words point, line, plane by table, chair, beer mug.”
7. December 1, 2011 10:00 am
Hmmm … let’s switch to an alternative markup strategy for Townes’ essay:
————————————–
It seems to me that the ongoing discussion of the new Stothers-Vassilevska-Williams algorithms and also Lance Fortnow’s recent essay The Death of Complexity Classes? echo perennial themes that Charles Townes articulated in a 1984 IEEE article titled “Ideas and Stumbling Blocks in Quantum Electronics”;
The following excerpts from Townes’ 1984 essay are refocused to address contemporary issues in complexity theory:
Ideas and Stumbling Blocks in Quantum Electronics Computational Complexity
{Quantum electronics} [Theoretical computer science], including in particular {the maser} [the study of algorithmic efficiency] and {the laser} [complexity classes], represents a marriage between {quantum physics} [mathematics] and {electrical} [software] engineering which was probably longer delayed than it might have been because the two were not sufficiently acquainted.
It is sometimes said that there is no single component idea involved in the {construction of masers or lasers} [surpassing of the Coppersmith-Winograd bound] which had not been known for at least 20 years before the advent of these {devices} [algorithms].
The beauty of {the device} [this class of algorithms] may have been more attractive to most {scientists} [computer scientists] than its potential applications. A favorite quip which many will remember was “{the laser is} [Strassen-style algorithms and complexity classes are] a solution looking for a problem.”
In looking back over why the field of {quantum electronics} [applied complexity theory] took as long as it did in getting started and why even then the buildup was initially not more rapid, I necessarily mention some of the stumbling blocks, misconceptions, and fumbles. The development of any science by humans has its similar mistakes and illogicalities. Recalling these can keep us humble and make us aware there may be other exciting events not yet visible around the corner.
It is wonderful to reflect that even today, more than 60 years after the invention of masers and lasers, we are still finding beautiful new mathematics, physics, engineering, and enterprises associated to these devices. On the other hand, it is sobering to reflect that Townes’ 1984 essay was written ten years after peak enrollment in north american physics programs; and yet few or no physicists of Townes’ generation anticipated this two-generation (and still-persisting) educational stagnation. In failing to recognize the onset of this stagnation, no effective steps were taken to forestall it.
Anxiety that computer science — and even the global STEM enterprise — may be stagnating largely accounts (it seems to me) for both the hyperbole and the rancor that have attended the discussion of these topics. Townes’ essay reminds us that neither hyperbole nor rancor are warranted, rather it’s prudent (for students especially) to take Townes’ advice that we “stay humble and be aware there may be other exciting events around the corner.”
8. Anonymous
December 2, 2011 9:59 am
Wouldn't it be helpful to have an online catalogue of matrices, like oeis.org, with known lower and upper runtime bounds for matrix-vector and matrix-matrix product algorithms for concrete matrices?
9. December 2, 2011 1:45 pm
Let me add my cranky comment.
In some places people say that a Fourier transform of something may lead to major improvement. A Fourier transform usually means translation invariance. Here that can mean that the partial multiplications over the blobs for fixed n, M_{k,m} = L(a_k, a_{k+1}, …, a_{k+n}) L(b_m, …, b_{m+n}) for all k, m, lead to the desired sums (L being a linear operator). Then one can use the Fourier transform.
Good luck.
10. Serge
December 2, 2011 6:14 pm
In the course of history we become able to design faster and faster algorithms, so that problems become easier and easier to solve. Therefore it’s not legitimate to credit each problem with an inherent difficulty, for it’s only with time that they become easier. This is why the PvsNP problem is ill-posed, being based on the wrong idea that history can be condensed onto a single point.
11. Surya
December 3, 2011 3:17 pm
This is not very relevant to this particular post…but I am curious about some recent possible developements at LHC. There will supposedly be a public announcement on Dec 13….
I am sorry to churn the rumor mill… Apparently the Higgs boson mass is most likely to be around 125 GeV.
http://blog.vixra.org/2011/12/02/higgs-rumour-anaylsis-points-to-125-gev/
Curiously this is very close to the one predicted in this paper…
http://www.citebase.org/abstract?id=oai%3AarXiv.org%3A0912.5189
Apparently this involves the use of the four color theorem… Is this completely bogus, or is there more to this?
• Surya
December 3, 2011 7:35 pm
OK, the author of the second paper seems to have made several claims, including a simple proof of “4CT”, which is quite bogus. Case closed.
http://physics.stackexchange.com/questions/tagged/surface-tension?sort=frequent&pagesize=15
Tagged Questions
Surface tension occurs due to the tendency of liquid molecules to favor their own kind. Surface tension is important in fluid multiphase systems typically at small length and velocity
1answer
290 views
Change in appearance of liquid drop due to gravity
A liquid drop is spherical in shape due to surface tension. But why does it appear as a vertical line under the free-fall due to gravity? (E.g. During a rain - falling raindrop) Is there a specified ...
1answer
129 views
Dropping condition
Imagine opening a water tap in order to have a smooth and cylindrical outflow and then slowly decrease the flow by adjusting the knob. At a certain moment, the side profile of the flow will become ...
1answer
149 views
Is this formula for the energy of a configuration of 3 fluids physically reasonable?
I have studied for a couple of months now a mathematical model of the energy of a configuration of immiscible fluids situated in a fixed container such that the fluids fill the container. In other ...
4answers
417 views
How far can water rise above the edge of a glass?
When you fill a glass with water, water forms a concave meniscus with constant contact angle $\theta$ (typically $\theta=20^\circ$ for tap water): Once you reach the top of the glass, the water-air ...
2answers
181 views
At what size will self-gravitation contribute more to stability than surface tension?
The governments of Earth have embarked on an experiment to place a massive ball of water in orbit. (umm... special water that doesn't freeze) Imagine this to be a fluid with a given density, $\rho$ ...
1answer
83 views
Beer bottle leftovers pour quickly only after waiting?
Why is it that after pouring a delicious beer from a bottle, I can hold it upside down for several seconds without reward, but if I wait a bit, the remainder presumably settles at the bottom and ...
2answers
91 views
Amount of material required for a pressure tank
I read the answer for the question Why is a hot air balloon “stiff”? and thought something sounded ridiculous. My engineering requirement is that the walls be strong enough. Here $T$ will be the ...
1answer
242 views
Causes of surface tension between two fluids
Suppose that we have two fluids $A$ and $B$ in a container $\Omega$, and we notice that $A,B$ do not mix. Can you pleas explain to me what is the cause of this property? What properties of the two ...
1answer
177 views
Calculational method for determining surface tensions from photograph of menisci?
How can I get from a photograph of a liquid surface to a value for the surface tension.
2answers
752 views
Physics behind Water drops during falling from a tap
what is physics behind Water drops during falling from a tap. water drop animation A drop or droplet is a small column of liquid, bounded completely or almost completely by free surfaces. Why Water ...
http://mathhelpforum.com/calculus/117903-1-quick-integration-problem.html
# Thread:
1. ## 1 Quick integration problem
Integral of (1/sqrt(x)) * cos((pi*sqrt(x))/2) in the range of 1 to 4. I'm supposed to integrate by substitution using u = pi*sqrt(x) all divided by 2
Thus,
dy = (pi*x^3/2)/3
And I get
integral of:
3/pi * cos(u) du from the range of 1 to 4, but this does not take care of the problem with the 1/sqrt(x) in the original and the new x^(3/2) in the new one.
2. Originally Posted by Lord Darkin
Integral of (1/sqrt(x)) * cos((pi*sqrt(x))/2) in the range of 1 to 4. I'm supposed to integrate by substitution using u = pi*sqrt(x) all divided by 2
Thus,
dy = (pi*x^3/2)/3 what is this ?
And I get
integral of:
3/pi * cos(u) du from the range of 1 to 4, but this does not take care of the problem with the 1/sqrt(x) in the original and the new x^(3/2) in the new one.
$\int_1^4 \frac{1}{\sqrt{x}} \cdot \cos\left(\frac{\pi \sqrt{x}}{2}\right) \, dx$
$u = \frac{\pi \sqrt{x}}{2}$
$du = \frac{\pi}{4\sqrt{x}} \, dx$
$\frac{4}{\pi} \int_1^4 \frac{\pi}{4\sqrt{x}} \cdot \cos\left(\frac{\pi \sqrt{x}}{2}\right) \, dx$
substitute and reset the limits of integration ...
$\frac{4}{\pi} \int_{\frac{\pi}{2}}^{\pi} \cos{u} \, du$
finish.
3. Originally Posted by skeeter
$\int_1^4 \frac{1}{\sqrt{x}} \cdot \cos\left(\frac{\pi \sqrt{x}}{2}\right) \, dx$
$u = \frac{\pi \sqrt{x}}{2}$
$du = \frac{\pi}{4\sqrt{x}} \, dx$
$\frac{4}{\pi} \int_1^4 \frac{\pi}{4\sqrt{x}} \cdot \cos\left(\frac{\pi \sqrt{x}}{2}\right) \, dx$
substitute and reset the limits of integration ...
$\frac{4}{\pi} \int_{\frac{\pi}{2}}^{\pi} \cos{u} \, du$
finish.
Ohhh, my du was wrong. You got the right answer according to my answer key. Thanks. I forgot that I had to take the derivative of u, not its integral.
Last question - would this still work if I reset the limits to be 0 and (-1/2)? Cos(pi) = -1/2 and cos(pi/2) = 0
4. Originally Posted by Lord Darkin
Ohhh, my du was wrong. You got the right answer according to my answer key. Thanks. I forgot that I had to take the derivative of u, not its integral.
Last question - would this still work if I reset the limits to be 0 and (-1/2)? Cos(pi) = -1/2 and cos(pi/2) = 0
no, the limits of integration are related to the variable of integration which is $u$ in this case.
the antiderivative of $\cos{u}$ is $\sin{u}$ , evaluated from $\frac{\pi}{2}$ to $\pi$ using the Fundamental Theorem of Calculus.
$\sin(\pi) - \sin\left(\frac{\pi}{2}\right) = 0 - 1 = -1$
therefore, the value of the original definite integral is $-\frac{4}{\pi}$
you need to review/relearn the process of finding the values of definite integrals by substitution.
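As a quick numerical sanity check of that value, here is a short snippet (an added sketch using SciPy, not part of the original thread):

```python
# Verify numerically that the definite integral equals -4/pi.
from math import pi, sqrt, cos
from scipy.integrate import quad

val, err = quad(lambda x: cos(pi * sqrt(x) / 2) / sqrt(x), 1, 4)
print(val, -4 / pi)   # both are about -1.2732
```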
http://math.stackexchange.com/questions/59606/upper-and-lower-sums-in-riemann-integral?answertab=oldest
# upper and lower sums in Riemann integral
I want to prove this:
If $P^{*}$ is a finer partition than $P$, then show that $L(f,P, \alpha) \leq L(f,P^{*}, \alpha)$ and $U(f,P^{*}, \alpha) \leq U(f,P, \alpha)$.
If you have a set $S = \{1,2,3\}$ then adding an element can change the infimum. If $S' = \{\frac{1}{2},1,2,3\}$ then $\inf S' \leq \inf S$. I don't get how the above holds then.
The smaller the interval, the bigger the infimum and the smaller the supremum. Now apply this fact to your example. – timhortons Aug 25 '11 at 4:24
What's $\alpha$? (Edit: Probably a Stieltjes integral) – Dylan Moreland Aug 25 '11 at 5:34
When we talk about partitions we are usually fixing an inverval $[a, b]$, and our partitions then start at $a$ and end at $b$, so usually it isn't correct to introduce points outside of that interval and call the result a refinement. If this were allowed, then the statement of your problem wouldn't be true. – Dylan Moreland Aug 25 '11 at 5:59
## 2 Answers
Remember that you don't take the infimum over the set of division points; you take the infimum of the values of the function on each of the intervals of the division. Now, a finer division induces smaller intervals, hence you take the infimum over smaller sets and it can only rise.
Think of it this way: the finer the partition, the more intervals there are, and so each interval is smaller, and so the potential error is smaller (you need more extremal points to cause large errors than you needed before).
For this, I suppose that $f:[a,b]\to \mathbb{R}$, and $$P=\{a=x_0\lt x_1\lt\ldots\lt x_m=b\}$$ is a partition of $[a,b]$. Take into account the considerations about partitions given in Dylan's comment. Let $$S_m=\{1,2,\ldots, m\}.$$ Note that any partition $P^*$ finer than $P$ can be obtained from $P$ by adjoining some points to it, say $n$ of them. So, we proceed by induction on $n$.
Suppose that $P^*$ is obtained by adding a point $x$ to $P$. Then $x\in (x_{r-1},x_r)$, for some $r\in S_m$. Let $$m_i=\inf f([x_{i-1},x_i]) \text{ for } i\in S_m$$ $$m_r'=\inf f([x_{r-1},x]) \text{ and } m_r''=\inf f([x,x_r])$$ Then $$\begin{align*} L(f,P^*)-L(f,P) &= m_r'(x-x_{r-1}) + m_r''(x_r-x)-m_r(x_r-x_{r-1})\\ &= (m_r'-m_r)(x-x_{r-1}) + (m_r''-m_r)(x_r-x)\\ &\geq 0. \end{align*}$$ You can verify that the factors that involve infs are nonnegative, and this proves the claim. I'll leave the inductive step to you.
For upper sums it is similar. For the Riemann-Stieltjes integral the lower and upper sums are usually studied for increasing integrators $\alpha$, and then the proof is the same.
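For intuition, here is a small numerical illustration of the claim for the ordinary Riemann case $\alpha(x)=x$ (an added sketch, not part of the original answer):

```python
# Refining a partition can only increase the lower sum.  Example: f(x) = x^2 on [0, 1].
import numpy as np

def lower_sum(f, partition):
    # sum over subintervals of (approximate) inf of f times the interval length
    total = 0.0
    for a, b in zip(partition[:-1], partition[1:]):
        xs = np.linspace(a, b, 1001)   # dense sample to approximate the infimum
        total += np.min(f(xs)) * (b - a)
    return total

f = lambda x: x ** 2
P      = [0.0, 0.5, 1.0]
P_star = [0.0, 0.25, 0.5, 0.75, 1.0]   # a refinement of P
print(lower_sum(f, P), "<=", lower_sum(f, P_star))   # 0.125 <= 0.21875
```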
@integratethis: Is not this a correct answer to your question? – leo Aug 28 '11 at 18:53
http://unapologetic.wordpress.com/2007/06/06/yonedas-lemma/?like=1&source=post_flair&_wpnonce=d91e0e6d9a
# The Unapologetic Mathematician
## Yoneda’s Lemma
Okay, time to roll up our sleeves and get into one of the bits that makes category theory so immensely tweaky: Yoneda’s Lemma.
First, let’s lay out a bit of notation. Given a category $\mathcal{C}$ and an object $A\in\mathcal{C}$ we’ll use $h_A$ to denote the covariant functor represented by $A$ and $h'_A$ to denote the contravariant one. That is, $h_A(B)=\hom_\mathcal{C}(A,B)$ and $h'_A(B)=\hom_\mathcal{C}(B,A)$.
Now given a covariant functor $F:\mathcal{C}\rightarrow\mathbf{Set}$, we’re interested in the set of natural transformations $\mathrm{Nat}(h_A,F)$. Note that not all of these are natural isomorphisms. Indeed, $F$ may not be representable at all. Still, there can be natural transformations going in one direction.
The Yoneda Lemma is this: there is a bijection between $\mathrm{Nat}(h_A,F)$ and $F(A)$.
Seriously.
And the proof is actually pretty clear too. One direction leaps out when you think of it a bit. A natural transformation $\Phi:h_A\rightarrow F$ has components $\Phi_B:\hom_\mathcal{C}(A,B)\rightarrow F(B)$, sending morphisms of $\mathcal{C}$ to elements of certain sets. Now, given any category and any object, what morphism do we know to exist for certain? The identity on $A$! So take $1_A$ and stick it into $\Phi_A$ and we get an element $\Phi_A(1_A)=x_\Phi\in F(A)$. The really amazing bit is that this element completely determines the natural transformation!
So let’s start with a category $\mathcal{C}$, an object $A$, a functor $F:\mathcal{C}\rightarrow\mathbf{Set}$, and an element $x\in F(A)$. From this data we’re going to build the unique natural transformation $\Phi^x:h_A\rightarrow F$ so that $\Phi^x_A(1_A)=x$. We must specify functions $\Phi^x_B:\hom_\mathcal{C}(A,B)\rightarrow F(B)$ so that for every arrow $f:B\rightarrow C$ in $\mathcal{C}$ we satisfy the naturality condition. For now, let’s focus on the naturality squares for morphisms $f:A\rightarrow B$, and we’ll show that the other ones follow. This square is:
$\begin{matrix}\hom_\mathcal{C}(A,A)&\rightarrow&\hom_\mathcal{C}(A,B)\\\downarrow{\Phi^x_A}&&\downarrow{\Phi^x_B}\\F(A)&\rightarrow&F(B)\end{matrix}$
where the horizontal arrows are $\hom_\mathcal{C}(A,f)$ and $F(f)$, respectively. Now this square must commute no matter what we start with in the upper-left corner, but let’s see what happens when we start with $1_A$. Around the upper right this gets sent to $f\in\hom_\mathcal{C}(A,B)$, and then down to $\Phi^x_B(f)$. Around the lower left we first send $1_A$ to $\Phi^x_A(1_A)=x\in F(A)$, which then gets sent to $\left[F(f)\right](x)$. So, in order for these chosen naturality squares to commute for this specific starting value we must define $\Phi^x_B:\hom_\mathcal{C}(A,B)\rightarrow F(B)$ so that $\Phi^x_B(f)=\left[F(f)\right](x)$.
Now I say that these definitions serve to make all the naturality squares commute. Let $f:B\rightarrow C$ be any arrow in $\mathcal{C}$ and write out the square:
$\begin{matrix}\hom_\mathcal{C}(A,B)&\rightarrow&\hom_\mathcal{C}(A,C)\\\downarrow{\Phi^x_B}&&\downarrow{\Phi^x_C}\\F(B)&\rightarrow&F(C)\end{matrix}$
Now, starting with $g\in\hom_\mathcal{C}(A,B)$ we send it right to $f\circ g\in\hom_\mathcal{C}(A,C)$, and then down to $\left[F(f\circ g)\right](x)\in F(C)$. On the other side we send $g$ down to $\left[F(g)\right](x)\in F(B)$, and then right to $\left[F(f)\right](\left[F(g)\right](x))=\left[F(f)\circ F(g)\right](x)=\left[F(f\circ g)\right](x)$. And thus the square commutes.
So, for every natural transformation $\Phi:h_A\rightarrow F$ we have an element $x_\Phi\in F(A)$, for every element $x\in F(A)$ we have a natural transformation $\Phi^x:h_A\rightarrow F$, and these two functions are clearly inverses of each other.
Almost identically, there’s a contravariant Yoneda Lemma, saying that $\mathrm{Nat}(h'_A,F)\cong F(A)$ for every contravariant functor $F$. You can verify that you’ve understood the proof I’ve given above by adapting it to the proof of the contravariant version.
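As a quick concrete instance (an added illustration, not in the original post), take $F=h_B$ for some object $B$. The lemma then gives $\mathrm{Nat}(h_A,h_B)\cong h_B(A)=\hom_\mathcal{C}(B,A)$, so natural transformations between the represented functors $h_A$ and $h_B$ correspond exactly to morphisms $B\rightarrow A$. This is precisely the statement that the Yoneda embedding is fully faithful.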
There’s a lot here, and although it’s very elegant it may not be clear why it’s so interesting. I’ll come back tomorrow to try explaining what the Yoneda Lemma means.
Posted by John Armstrong | Category theory
## 7 Comments »
1. [...] does Yoneda’s Lemma mean? As I promised, today I’ll try to explain what Yoneda’s Lemma really does for [...]
Pingback by | June 7, 2007 | Reply
2. [...] language and it looks pretty. So what? Hold on tight, because now I’m going to hit it with Yoneda’s Lemma (and its [...]
Pingback by | June 8, 2007 | Reply
3. [...] Weak Yoneda Lemma The Yoneda Lemma is so intimately tied in with such fundamental concepts as representability, universality, limits, [...]
Pingback by | September 3, 2007 | Reply
4. A question about Yoneda’s Lemma:
You gave good motivation of the importance of Yoneda's lemma by using it to deduce that the Yoneda map of a category C into Set^C is fully faithful. However, that result only seems to use a special case of Yoneda's lemma, where both contravariant functors from C to Set are representable. Why is the general version of Yoneda's lemma important?
dan
Comment by Dan | October 8, 2007 | Reply
5. (after moving a comment over here.. there’s gotta be a better way of doing that)
Well, that’s just one application. In general, Yoneda tells us a lot about representable functors, both covariant and contravariant. One thing it tells us is that any two representations of a functor are given by isomorphic objects.
There’s really a lot more to it, the more you look. I tried to construct some sort of category analogue of bimodules over rings a while back, and all the associativities I needed turned out to be applications of Yoneda’s Lemma. It never really went anywhere, and I don’t remember where I put the notes, but I might dig it out for a post sometime.
Comment by | October 8, 2007 | Reply
Thank you. Can you give me an example of Yoneda's Lemma?
Comment by heshamtorkmanee | April 24, 2008 | Reply
7. [...] http://unapologetic.wordpress.com/2007/06/06/yonedas-lemma/ [...]
Pingback by | November 29, 2010 | Reply
http://meta.math.stackexchange.com/questions/8291/are-questions-of-the-following-form-real-questions
# Are questions of the following form “real questions”?
Of late, I've been finding myself increasingly interested in questions of the form: "Suppose we change this definition in the following way. Does this change break anything?" For instance, suppose we change the definition of a metric space so that $(X,d)$ can be a metric space even when $d$ is defined for values outside $X^2$. So we can say... "Suppose $X \subseteq Y$ and $(Y,d)$ is a metric space. Then $(X,d)$ is a metric space." Would this break anything?
My question is, are questions of the form "would this break anything?" considered to be "real questions"? Is there perhaps a better website for this sort of thing?
I am not sure about this particular question, because any introduction to metric spaces states the fact that any subset is also a metric space in precisely this sense. // In general, it would be nice if a question indicated in which direction your thoughts are going. Remember the basic criterion: if someone reads your question and another user's reply to it, they should be able to tell whether the reply actually answers the question. – user53153 Jan 25 at 5:53
Technically you have to restrict $d$ to $X$, forming a new metric $d|_X$. So $(X,d|_X)$ is a metric space, but $(X,d)$ maybe isn't. – user18921 Jan 25 at 5:56
I think the issue here is with the formalism. The notation $(Y,d)$ typically indicates that $d$ is defined on $Y\times Y$, so if this is the convention, there is abuse of notation in saying that $(X,d)$ is a metric space. Of course, some authors define things so, as long as the domain of $d$ contains $X\times X$, the notation $(X,d)$ is fine. But the point here is that whatever formalism is chosen, it is a convention, while the "real idea" is something else. – Andres Caicedo Jan 25 at 5:58
What is a query that produces examples of these kinds of questions? I've seen many before and asked some myself but I'm having trouble finding more with a search. I think some examples could create a common ground for discussion. – Dan Brumleve Jan 25 at 6:06
Jarrell: I thought you said to stay on the path! Old Man: Yes, but you must know when to break the rules! – draks ... Jan 25 at 21:21
## 1 Answer
Not only are these real questions, but these are precisely the sort of questions you will find yourself asking more and more as your understanding of a subject deepens.
Perhaps talking of "breaking" anything is not quite how I would phrase it myself, but the point remains: To understand a subject, you need to understand how its components fit together, whether their connections are tenuous or withstand some modifications.
In particular, in research this is actually done all the time: you attempt to modify an argument, see where problems may appear, how they can be fixed, and how far this process can go.
I'm in total agreement. Is the soft-question tag appropriate in this sort of case? – Dan Brumleve Jan 25 at 5:54
@Dan, I think that tag is for questions with no significant mathematical content (I may be wrong about that), whereas the kind of question we are talking about can have lots of math content. – Gerry Myerson Jan 25 at 5:56
@Gerry the tag is described as being "for questions which admit no definitive answer" so perhaps its definition is too broad. – Dan Brumleve Jan 25 at 5:57
@Dan: "Would this break anything" admits definitive answers, in principle. The answer is either "Yes, it breaks some things" or "No, it doesn't break anything." – Willie Wong♦ Jan 25 at 9:00
@Willie, I think breakage is subjective. – Dan Brumleve Jan 25 at 9:15
@Dan, perhaps what the tag says $\ne$ how we actually use it. Have you looked at many questions with that tag, to get some sense of m.se practice? – Gerry Myerson Jan 25 at 11:40
@DanBrumleve At least it presents the question "does there exist something satisfying my new definition that does not satisfy the old definition?" – Alexander Gruber Feb 1 at 14:00
http://www.physicsforums.com/showthread.php?t=169966
Physics Forums
## number theory Q
1. The problem statement, all variables and given/known data
Let $\zeta$ be a primitive 6-th root of unity. Set $\omega = \zeta i$ where $i^2 = -1$.
Find a square-free integer $m$ such that $Q [\sqrt{m}] = Q[ \zeta ]$
2. Relevant equations
The minimal polynomial of $\zeta$ is $x^2 - x + 1$
3. The attempt at a solution
I was intending to use the theorem that:
Take $p$ to be a prime and $\zeta$ to be a $p$-th root of unity. if
$$S = \sum_{a =1}^{p-1} \big( \frac{a}{p} \big) \zeta^a$$
then
$$S^2 = \Big( \frac{-1}{p} \Big) p$$.
This would make $S^2$ an integer. However, 6 is not prime. I'm really stumped about what to do next. Any help would be greatly appreciated.
Oh, and by $\Big( \frac{-1}{p} \Big)$, I mean the Legendre symbol.
What are the roots of x^2-x+1?
The roots of that polynomial are the primitive roots $\zeta$ and $\zeta^5$. How would I now use this information?
No. What are the roots of that polynomial? You're making it too complicated. If I gave you that polynomial in your freshman calc course, or whatever, you'd be able to write out the roots without thinking. What are the roots? Or better yet, don't write out the roots using THE QUADRATIC FORMULA, just write down a sixth root of unity using elementary complex numbers. HINT: If I asked for a primitive 4th root of unity, would i be acceptable? Or -i? You know that $\exp(2\pi i/n)$ is a primitive n'th root of unity, and that the others are $\exp(2\pi i m/n)$ for m prime to n.
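For readers who want to check where this hint leads numerically, here is a small SymPy sketch (an added illustration, not part of the original thread); it points to $m=-3$:

```python
# zeta = e^{i*pi/3} = 1/2 + sqrt(3)*i/2 is a primitive 6th root of unity.
import sympy as sp

x = sp.Symbol('x')
zeta = sp.Rational(1, 2) + sp.sqrt(3) * sp.I / 2
print(sp.expand(zeta ** 6))                    # 1, so zeta is a 6th root of unity
print(sp.minimal_polynomial(zeta, x))          # x**2 - x + 1
# 2*zeta - 1 = sqrt(-3), so Q(zeta) = Q(sqrt(-3)) and m = -3 is square-free.
print(sp.minimal_polynomial(2 * zeta - 1, x))  # x**2 + 3
```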
http://en.wikisource.org/wiki/Uniform_Rotation_of_Rigid_Bodies_and_the_Theory_of_Relativity
# Uniform Rotation of Rigid Bodies and the Theory of Relativity
From Wikisource
Uniform Rotation of Rigid Bodies and the Theory of Relativity (1909) by Paul Ehrenfest, translated by Wikisource
In German: Gleichförmige Rotation starrer Körper und Relativitätstheorie, Physikalische Zeitschrift, 10: 918
Uniform Rotation of Rigid Bodies and the Theory of Relativity Paul Ehrenfest Wikisource 1909
Uniform rotation of rigid bodies and the theory of relativity.
By P. Ehrenfest
In the attempt to generalize the kinematics of relative-rigid bodies from uniform, straight-line translation to any form of motion, we come, on the basis of Minkowski's ideas, to the following approach:
A body is relative-rigid, that is: it is deformed continuously at any movement, so that (for a stationary observer) each of its infinitesimal elements at any moment has just that Lorentz contraction (compared to the state of rest) which corresponds to the instantaneous velocity of the element's center.
When (some time ago) I wanted to work out for myself the consequences of this approach, I came to conclusions which seem to show that the above approach already leads to contradictions for some very basic types of motion.
Now, Born in a recent paper[1] gave a definition of relative-rigidity, which includes all possible motions at all. Born has based this definition – in accordance to the basic idea of relativity theory – not on the system of measurement of a stationary observer, but on the (Minkowskian) measure-determinations of, say, a continuum of infinitesimal observers who travel along with the points of the non-uniformly moving body: for each of them in their measure the infinitesimal neighborhood should appear permanently undeformed.
However, both definitions of relative-rigidity – if I understood correctly – are equivalent. It is permissible, therefore, to point in short to the simplest type of motion, for which the first definition already leads to contradictions: the uniform rotation about a fixed axis.
In fact: let a relative-rigid cylinder of radius $R$ and height $H$ be given. Let it gradually be set into rotation about its axis, the rotation finally becoming constant. Let $R'$ be its radius during this motion for a stationary observer. Then $R'$ must satisfy two contradictory conditions:
a) The periphery of the cylinder has to show a contraction compared to its state of rest:
$2\pi R'<2\pi R,$
because each element of the periphery is moving in its own direction with instantaneous velocity $R'\omega$.
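[Editorial note, not in Ehrenfest's text: with the Lorentz factor written out, condition (a) reads $2\pi R' = 2\pi R\,\sqrt{1 - R'^2\omega^2/c^2} < 2\pi R$.]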
b) Taking any element of a radius, then its instantaneous velocity is normal to its extension; thus the elements of a radius cannot show a contraction compared to the state of rest. It should be:
$R'=R.$
Note: If we want the deformation to depend not only on the instantaneous velocity of the element's center, but also on the instantaneous rotation velocity of the element, then the deformation function must contain a universal dimensionless constant besides the speed of light, or the accelerations of the element's center must be included as well.
St. Petersburg, Sept. 1909.
1. M. Born, Die Theorie des starren Elektrons in der Kinematik des Relativitäts-Prinzipes. Ann. d. Phys. 30, 1, 1909. See also in this journal, 10, 814, 1909.
This is a translation and has a separate copyright status from the original text. The license for the translation applies to this edition only.
Original:
This work is in the public domain in the United States because it was published before January 1, 1923. The author died in 1933, so this work is also in the public domain in countries and areas where the copyright term is the author's life plus 75 years or less. This work may also be in the public domain in countries and areas with longer native copyright terms that apply the rule of the shorter term to foreign works.
Translation:
This work is released under the Creative Commons Attribution-ShareAlike 3.0 Unported license, which allows free use, distribution, and creation of derivatives, so long as the license is unchanged and clearly noted, and the original author is attributed.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9048588275909424, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/30480/particle-physics-and-representations-of-groups/30490
|
## Particle Physics and Representations of Groups
This question is asked from a point of complete ignorance of physics and the standard model.
Every so often I hear that particles correspond to representations of certain Lie groups. For a person completely ignorant of anything physics, this seems very odd! How did this come about? Is there a "reason" for thinking this would be the case? Or have observations in particle physics just miraculously corresponded to representation theory? Or has representation theory of Lie groups grown out of observations in particle physics?
In short: what is the chronology of the development of representation theory and particle physics (with relation to one another), and how can one make sense of this relation in any other way than a freakish coincidence?
-
en.wikipedia.org/wiki/… – Qiaochu Yuan Jul 4 2010 at 4:02
Thanks Qiaochu, I was not aware that there was a wiki article about this. – Makhalan Duff Jul 4 2010 at 4:04
Complementing the wiki article: mathoverflow.net/questions/12248/… – Steve Huntsman Jul 4 2010 at 4:25
You might want to read the preface to Shlomo Sternberg's Group Theory and Physics for some history and in particular for how group theory was not really welcome initially and referred to as the Gruppenpest. – José Figueroa-O'Farrill Jul 4 2010 at 13:52
Does the tag "intuition" automatically imply that the most vague and philosophical answer will be selected, over concrete, precise, and mathematical ones? – Victor Protsak Jul 5 2010 at 23:29
## 6 Answers
The "chronology" isn't clear to me, and having looked through the literature it seems much more convoluted than it should be. Although it seems like this is basically how things were done since the beginning of quantum mechanics (at least, by the big-names) in some form or another, and was 'partly' formalized in the '30s-'40s with the beginnings of QED, but not really completely carefully formalized until the '60s-'70s with the development of the standard model, and not really mathematically formalized until the more careful development of things in terms of bundles in the '70s-'80s. (These dates are guesses--someone who was a practicing physicist during those periods is more than welcome to correct my timeline!)
Generally speaking, from a 'physics' point of view, the reason particles are labeled according to representations is not too different than how, in normal quantum mechanics, states are labeled by eigenvalues (the wiki article linked to mentions this, but it's not as clear as it could be).
In normal QM, we can have a Hilbert space ('space of states') $\mathcal{H}$, which contains our 'physical states' (by definition). To a physicist, 'states' are really more vaguely defined as 'the things that we get the stuff that we measure from,' and the Hilbert space exists because we want to talk about measurements. The measurements correspond to eigenvalues of operators (why things are 'obviously' like this is a longer historical story...).
So we have a generic state $| \psi \rangle \in \mathcal{H}$, and an operator that corresponds to an observable $\mathcal{O}$. The measured values are
$\mathcal{O} |\psi\rangle = o_i | \psi \rangle$.
Because the $o_i$ are observable quantities, it's useful to label systems in terms of them.
We can have a list of observables, $\mathcal{O}_j$, (which we usually take to be commuting so we can simultaneously diagonalize), and then we have states $|\psi\rangle$,
$\mathcal{O}_j | \psi \rangle = o_{ij} | \psi \rangle$.
So, what we say, is that we can uniquely define our normal QM states by a set of eigenvalues $o_{ij}$.
In other words, the $o_{ij}$ define states, from the physics point of view. Really, this defines a basis where our operators are diagonal. We can--and do!--get states that do not have observables which can be simultaneously diagonalized, this happens in things like neutrino oscillation, and is why they can turn into different types of neutrinos! The emitted neutrinos are emitted in states with eigenvalues which are not diagonal in the operator that's equivalent to the 'particle species' operator. (Note, we could just as well define the 'species' to be what's emitted, and then neutrinos would not oscillate in this basis, but would in others!)
This has to do with representations, because when we talk about particles with spin, for example, we're talking about operators which correspond to 'angular momentum.' We have an operator:
$L_z = -i \frac{\partial}{\partial\phi}$
and label states by half-integer eigenvalues which physically correspond to spin. Group theoretically, $L_z$ comes from the Lie algebra of the rotation group, because we're talking about angular momentum (or spin) which has associated rotational symmetries.
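To make the "states labelled by eigenvalues" point concrete, here is a minimal numerical sketch (an editorial addition, not part of the original answer); it only assumes numpy and the standard Pauli matrix $\sigma_z$:

```python
import numpy as np

# Spin-1/2: with hbar = 1, the z-component of spin is represented by Sz = sigma_z / 2.
sigma_z = np.array([[1.0, 0.0],
                    [0.0, -1.0]])
Sz = sigma_z / 2

# The eigenvalues +1/2 and -1/2 are exactly the labels ("spin up" / "spin down")
# that physicists attach to the states of this representation of su(2).
eigenvalues, eigenvectors = np.linalg.eigh(Sz)
print(eigenvalues)     # [-0.5  0.5]
print(eigenvectors)    # columns are the corresponding eigenstates
```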
Upgrading from here to quantum field theory (and specializing that to the standard model) is technically complicated, but is basically the same as what's going on here. The big difference is, we want to talk there about 'quantum fields' instead of states, and have to worry about crazy things like apparently infinite values and infinite dimensional integrals, that confuse the moral of the story.
But the idea is simply, we want to identify things by observables, which correspond to eigenvalues, which correspond to operators, which correspond to lie algebra elements, which have an associated lie group.
So we define states corresponding to things which transform under physically convenient groups as 'particles.'
If you want a more mathematically careful description that still has some physical intuition in it, you can check out Göckeler and Schücker's "Differential Geometry, Gauge Theories, and Gravity," which does things from the bundle point of view, which is slightly different than I described (because it describes classical field theories) but the moral is similar. At first it might seem surprising that the classical structure here is the same, when it seemed to rely on operators and states in Hilbert spaces, but it only technically relied on it; morally, what's important is actions under symmetry groups. And that is in the classical theory as well. But it's not as physically clear from the beginning from that point of view.
-
-1: Long stream of (un)consciousness, but absolutely no substance. – Victor Protsak Jul 5 2010 at 23:25
No, there's a reason we have names like "isospin" for these groups! They are, in fact, very much like the gauged symmetries! That's how the gauged symmetries were discovered!! If you don't believe me, check out any of the early literature on the topic, or any of the technical histories of physics books out there, or any intro nuclear physics text (they rely heavily on this!). The gauged case is obviously more complicated, but spin in quantum mechanics is where these ideas came from, and not realizing that is missing the quantum mechanical point of view of gauge theories entirely! – jeremy Jul 14 2010 at 8:53
From the practicing physics POV, a gauge symmetry is just a normal (global) symmetry that's got a different value at each point. Many physicists aren't even aware of the gauge geometry... And conserved charges don't directly have anything to do with representations. A conserved charge comes from a symmetry of a PDE, there are no representations that need to be involved here. The representations are what show up very clearly in the quantum mechanical picture of how states transform. For an explicit example of this, see any intro nuclear physics text, which all rely heavily on this. – jeremy Jul 14 2010 at 15:17
Let me add a little bit to what has already been written above.
The current framework for quantum mechanics motivates the study of projective (anti)unitary representations of the symmetry groups of the physical system. In the context of four-dimensional relativistic quantum field theory (with some mild assumptions), it follows from a celebrated theorem of Coleman and Mandula that the symmetry group is a direct product of (the universal cover of) the Poincaré group and a compact Lie group. (One can get around this theorem by considering not Lie groups but Lie supergroups, but that's another story.) Let me focus on the Poincaré group. The determination of the (physically relevant) unitary irreducible representations of the Poincaré group is due to Wigner, generalising (though perhaps not consciously) Frobenius's method of induced representations. It was Mackey who extended Wigner's method and placed it firmly in the "right" mathematical context.
The point I would like to make is that approaching the representation theory of the Poincaré group (however one motivates this study) in this fashion naturally makes contact with particle physics.
Induced representations
Let us start with finite groups. Let $G$ be a finite group and $H$ be a subgroup and let $\delta: H \to \mathrm{U}(W)$ be a unitary representation on a finite-dimensional hermitian vector space $W$. Consider the vector space $V$ of functions $f:G \to W$ subject to the equivariance condition $f(gh) = \delta(h^{-1}) f(g)$ for all $g \in G$ and $h \in H$. The homomorphism $\rho:G \to \mathrm{GL}(V)$ defined by $$(\rho(g) f)(g') = f(g^{-1}g')$$ makes $V$ into a representation of $G$. (One has to check that $\rho(g) f \in V$ again.)
Moreover, it is possible to define on $V$ a hermitian structure relative to which $\rho$ is a unitary representation. This is best seen by viewing $V$ in a different light. Let $X= G/H$ be the space of left $H$-cosets in $G$. Then $V$ is isomorphic to the vector space of functions $\psi: X \to W$, but not canonically. The isomorphism depends on a choice of coset representative $\sigma: X \to G$, a section through the surjection $\pi: G \to X$. Then given $f \in V$ we define $\psi(x)= f(\sigma(x))$. Conversely, given $\psi:X \to W$ we define $f \in V$ by writing $g = \sigma(\pi(g))h(g)$ for some $h(g) \in H$ and declaring $f(g) = \delta(h(g)^{-1}) \psi(\pi(g))$. Then we define the inner product of $f_i \in V$ to be $$\langle f_1,f_2\rangle_V = \sum_{x\in X} \langle f_1(\sigma(x)), f_2(\sigma(x)) \rangle_W.$$ One can show that this is independent of the coset representative precisely because $W$ is a unitary representation of $H$.
The representation $V$ of $G$ is said to be induced from the representation $W$ of $H$.
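As an editorial illustration of the finite-group construction above (not part of the original answer), here is a small Python sketch: it induces the sign character of the order-2 subgroup $H=\langle(12)\rangle$ of $S_3$ up to $S_3$, building the matrices in a basis indexed by coset representatives, and spot-checks the homomorphism property. The matrix convention (which of $\rho(g)$ or $\rho(g^{-1})$ a given matrix represents) is a choice, so treat this as a sketch rather than a literal transcription of the formulas above.

```python
from itertools import permutations
import numpy as np

def compose(p, q):              # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def sign(p):                    # sign of a permutation, counting transpositions
    p, s = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

G = list(permutations(range(3)))        # S_3
H = [(0, 1, 2), (1, 0, 2)]              # subgroup generated by the transposition (12)
delta = {h: sign(h) for h in H}         # 1-dimensional unitary rep of H: the sign character

# choose coset representatives sigma : X = G/H -> G
reps, seen = [], set()
for g in G:
    coset = frozenset(compose(g, h) for h in H)
    if coset not in seen:
        seen.add(coset)
        reps.append(g)

def induced(g):
    n = len(reps)
    M = np.zeros((n, n))
    for j, s in enumerate(reps):
        for i, t in enumerate(reps):
            h = compose(inverse(t), compose(g, s))
            if h in H:                  # nonzero entry exactly when t^{-1} g s lies in H
                M[i, j] = delta[h]
    return M

g1, g2 = G[1], G[4]
assert np.allclose(induced(compose(g1, g2)), induced(g1) @ induced(g2))
print(induced(g1))
```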
Wigner's method
Wigner's method is formally very similar: $G$ is the Poincaré group; that is, the semidirect product $L \ltimes T$, where $L = \mathrm{Spin}(3,1)$ is the spin cover of the Lorentz group and $T$ is the translation ideal. Wigner starts by choosing a character $p$ of $T$, which physically is interpreted as a momentum. A version of Schur's Lemma says that on an irreducible representation of $G$, all the characters of $T$ which appear share the same Minkowskian norm $p^2$. Physically relevant representations have $p^2 = - m^2$, where $m\geq 0$ is the mass.
Let $H < G$ denote the stabilizer of $p$. Wigner induces a unitary representation of the Poincaré group from a finite-dimensional unitary representation of $H$. Now $H$ is non-compact, so such representations are necessarily not faithful. They factor through faithful representations of a group known as Wigner's little group. It is isomorphic to $\mathrm{Spin}(3)$ for $m>0$ and $\mathrm{Spin}(2)$ for $m=0$. Irreducible finite-dimensional representations of the little groups are labelled by half-integers: the spin (a non-negative half-integer) for $m>0$, and the helicity for $m=0$. So Wigner tells us that to a unitary irreducible representation of the Poincaré group one can associate a mass and a spin/helicity, which are the basic data specifying relativistic particles. But there's more.
The space $G/H$ is the hyperboloid $p^2 = - m^2$ (for a fixed mass $m\geq 0$) in the dual to the Lie algebra of $T$. The vector space carrying the induced representation of $G$ consists of (square-integrable) sections of homogeneous vector bundles over $G/H$ associated to the representation of $H$ from which we induce. Thus this naturally gives a representation on geometric objects defined on the space $G/H$ of momenta. Fourier transforming to Minkowski spacetime we arrive at sections of homogeneous bundles over Minkowski spacetime satisfying (linear) partial differential equations coming from Fourier transforming the condition $p^2 = -m^2$ and the other irreducibility conditions. And the nice surprise is that these partial differential equations are precisely the linearised free field equations for the corresponding particles: the Klein-Gordon, Dirac, Weyl, Maxwell,... equations!
-
Just a small nit, but really quantum mechanics studies projective representations. For simple Lie groups, it's fine just to work with the universal cover, but I like to emphasize this because that's where the central extension arises in the affine case. – Aaron Bergman Jul 5 2010 at 21:56
Actually, non-trivial (and non-linear) central extension shows up already in the finite-dimensional situation of the symplectic group of linear symmetries of the flat phase space ($\mathbb{R}^{2n}$ with the standard symplectic structure $dp\wedge dq$). This is reflected in various quantization conditions (Bohr-Sommerfeld, Maslov index, Lere, etc). – Victor Protsak Jul 5 2010 at 23:19
You're both totally right, of course. In fact, in an earlier draft I did mention Bergmann's and Wigner's theorems on projective and anti-unitary reps, but decided in the end to cut the post in size. I'll edit, though, for the sake of precision. Thanks for keeping me honest. – José Figueroa-O'Farrill Jul 5 2010 at 23:26
The precursor to Wigners construction of representations of a semidirect product of Lie groups is Clifford theory for finite groups. This is a different Clifford to Clifford algebras. – Bruce Westbury Jul 6 2010 at 3:26
Bruce, could you please provide a reference? I had never heard of this other Clifford. Thanks! – José Figueroa-O'Farrill Jul 6 2010 at 11:22
My understanding is that this started with the Dirac wave equation. This was a relativistic equation for an electron. However it happened to also introduce the idea that a point particle could have an internal state space. This was a successful theory and was taken up and imitated when it came to probing the structure of the nucleus.
For a description of the standard model a good place to start is: http://arxiv.org/abs/0904.1556
-
Yeah, that's what I was thinking in my answer with the beginning of QED, which is what the Dirac equation is the first step to, but I do not believe it was formalized in that language at the time. But the ideas were around before that, too, and date back to the initial foundational ideas of symmetries, and of linear spaces of states in quantum mechanics. – jeremy Jul 4 2010 at 8:40
There is an excellent introduction by John Baez and John Huerta to the Standard Model and Lie groups theory in the Bulletin of the AMS, vol 47, no. 3, July 2010. In particular, it gives plenty of references and historical notes.
I just noticed that this is the same article as the one suggested by Bruce, sorry!
-
I didn't know it had been published. – Bruce Westbury Jul 5 2010 at 0:34
As a physicist, I may be able to give a different perspective on this question. In particular, many of the responses so far have been about quantum mechanics and quantum field theory (which involve Lie groups), but if the question is, "Why is the particle content of physics theories derived from Lie groups?" then the answer is not specifically about the theories' quantumness. It's about their geometry, which can be discussed separately from quantum effects.
In 1926, Kaluza and Klein attempted to unify electromagnetism with gravity by proposing a theory of General Relativity with 5 dimensions (4 spatial). Since we don't macroscopically experience this extra degree of freedom, they proposed that it is topologically like a cylinder with a small radius, so small that the extra degree of freedom can't be probed as a direction. This degree of freedom does, however, allow us to encode classical electromagnetism as part of the geometry of space-time. We'll see in a moment that while this formulation isn't exactly right, it does show how the differential geometry concepts of General Relativity can be used in particle physics theories, leading to a unification of all four forces at a classical level. (It's the quantization of gravity that's the hard part.)
The Lagrangian of quantum electrodynamics (late 1940's) is just the Lagrangian of the Dirac equation with an additional requirement: that the spinor field $\psi$ has a local $\mathcal{U}(1)$ symmetry. I'll use the same notation as the Wikipedia article, except that I'll use $c = \hbar = 1$. The Dirac Lagrangian
$\mathcal{L_D} = m\bar{\psi}\psi - \frac{i}{2}\left(\bar{\psi} \gamma^\mu (\partial_\mu\psi) - (\partial_\mu\bar{\psi}) \gamma^\mu \psi \right)$ (1)
has a global $\mathcal{U}(1)$ symmetry in that the complex phases of components of $\psi$ cancel in the $\bar{\psi}\psi$ terms: multiplying all instances of $\psi$ by $e^{i\alpha}$ for some constant $\alpha$ would not change the value of $\mathcal{L}$. The Dirac equation does not have a local $\mathcal{U}(1)$ symmetry, that is, invariance under
$\psi(x,t) \to e^{i\theta(x,t)} \psi(x,t)$ where everything is a function of 4-D space-time points $(x,t)$ (2).
If we want to create a new Lagrangian which does have a local $\mathcal{U}(1)$ symmetry, we find that we would need to replace the derivative operators $\partial_\mu$ with
$D_\mu = \partial_\mu - iqA_\mu$ (3)
where $A$ is a new field with the transformation property
$A_\mu(x,t) \to A_\mu(x,t) + \frac{1}{q}\partial_\mu \theta(x,t)$ (4).
The new theory has a Lagrangian
$\mathcal{L_{QED}} = m\bar{\psi}\psi - \frac{i}{2}\left(\bar{\psi} \gamma^\mu (D_\mu\psi) - (D_\mu\bar{\psi}) \gamma^\mu \psi \right) + \frac{1}{4}\left((\partial_\mu A_\nu - \partial_\nu A_\mu)(\partial^\mu A^\nu - \partial^\nu A^\mu)\right)$ (5)
where the last term is required to preserve symmetry under Lorentz boosts (conservation of energy in the new $A$ field). Just following the consequences of a local $\mathcal{U}(1)$ symmetry, we have turned freely-streaming Dirac Lagrangian into the interacting electromagnetic Lagrangian, where we can interpret $\psi$ as charged particle (e.g. electron) waves and $A$ as the vector potential of electromagnetism, which is to say, the photon waves. The transformation of Eqns (2) and (4) is the gauge transformation of electromagnetism: we've learned that the electromagnetic gauge symmetry is fundamentally a local $\mathcal{U}(1)$ symmetry.
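As an editorial aside (not part of the original answer), the covariance of $D_\mu\psi$ under the transformations in Eqns (2) and (4) can be checked symbolically; the following sympy sketch does it in one dimension, with $\psi$, $A$, $\theta$ as arbitrary functions of $x$:

```python
import sympy as sp

x, q = sp.symbols('x q', real=True, nonzero=True)
theta = sp.Function('theta')(x)
psi = sp.Function('psi')(x)
A = sp.Function('A')(x)

# covariant derivative D = d/dx - i q A, as in Eqn (3)
D_psi = sp.diff(psi, x) - sp.I * q * A * psi

# gauge-transformed fields, as in Eqns (2) and (4)
psi_t = sp.exp(sp.I * theta) * psi
A_t = A + sp.diff(theta, x) / q
D_psi_t = sp.diff(psi_t, x) - sp.I * q * A_t * psi_t

# D psi picks up the same phase as psi itself: D'psi' = e^{i theta} D psi
print(sp.simplify(D_psi_t - sp.exp(sp.I * theta) * D_psi))   # expected output: 0
```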
Getting back to Kaluza and Klein's theory, a 5th compactified dimension is a little like having a $\mathcal{U}(1)$ invariance at every point in 4-D space, since it's hard to see where we are in the loop of the 5th dimension. It's not exactly the same thing: with an extra dimension, we should in principle be able to perform rotations in which spatial dimensions and the extra dimension mix, while that would not be possible in a 4-D space plus $\mathcal{U}(1)$ fiber bundle. (This difference is perhaps related to the reason Kaluza and Klein's original theory didn't work...?) If we generalize our notion of space-time to include the $\mathcal{U}(1)$ fibers, we can think about electromagnetism and General Relativity in the same terms. For instance, the photon field $A$ plays the same role in the $\mathcal{U}(1)$ symmetry as the connection/covariant derivative in the local Lorentz symmetry of the space-time metric. That is, the classical photon field is the "curvature" of the fiber bundle in the same sense that gravitation is the curvature of space-time.
Moreover, this picture unifying the geometry of electromagnetism with the geometry of gravity also works for all the other known forces. In 1954, Yang and Mills generalized the "local $\mathcal{U}(1)$-to-electromagnetism" idea to work for any Lie group, including non-Abelian ones. The Yang-Mills idea wasn't popular at first because it didn't seem to describe the nuclear strong force (but that was based on a wrong assumption that the nuclear force is a Yukawa interaction). By the late 1960's, Weinberg derived a unified electro-weak theory from local $\mathcal{SU}(2)\times\mathcal{U}(1)$, and Han and Nambu derived a theory of nuclear strong force from $\mathcal{SU}(3)$. (I'm skipping over many important contributions for brevity.) By the mid-1970's or early 1980's, depending on who I ask, this became known as the Standard Model of particle physics because of its experimental success.
We can think about the Standard Model geometrically as an $\mathcal{SU}(3)\times\mathcal{SU}(2)\times\mathcal{U}(1)$ at every point in 4-D space-time, with the gluon, W and Z bosons, and photon being connections through groups at neighboring points of space-time, constantly arranging themselves to hide information about the components of matter fields in all of these "internal" degrees of freedom. The structures of the groups are directly responsible for the charges and interactions of the matter fields (quarks and leptons), but the matter fields themselves are not derived from the groups (supersymmetry might change that part of the picture). There is a direct analogy between these group connections (the gluon, W, Z, and photon) and the space-time connection in General Relativity (which we could call a graviton field, if you wish). I have said nothing at this point about the quantization of all of these fields, which further complicates the picture, especially in the case of gravity!
By the way, I would love to know more about the curvature of fiber bundles, in order to understand the above at a deeper mathematical level. If you have any suggested reading, I'm interested. Thanks!
-
Kaluza-Klein is on a cylinder, not a torus. Also, the breaking of Diff(MxS^1) -> Diff(M) x U(1) is a symmetry-breaking kind of breaking. You can find discussions of this in many easy-to-find articles by Weinberg, Witten, and a bunch of others and should be in most reviews of KK theory. The reason the "extra" S^1 coordinate is not normally noticeable in the 'macroscopic' theory is because it is a low-energy theory and its dependence has been integrated out. The remnant of this dynamics is electromagnetism. But this isn't really quite the same picture as in yang-mills theory. – jeremy Jul 6 2010 at 8:26
In yang-mills we want to look at the already-broken phase ('macroscopic') of the would-be microscopic theory. But the procedure of: 'go from local gauge symmetry to a space that locally is MxG and then do gravity' is not well-defined. In particular, it fails if G=SU(3)xSU(2)xU(1) and we insist on chiral fermions (as was shown by Witten). The details of what's going on here are somewhat ugly, and I do not know of a nice exposition of them. Naively, it should have worked because the manifold MxG exists, but the correct fields do not, which is why I didn't talk about this in my answer above. – jeremy Jul 6 2010 at 8:32
But the structure of everything in the classical case is still in terms of bundles with structure groups that are the gauge group. And it is the case that the classical structure of any yang-mills theory is basically the same as general relativity as you mention. But the Kaluza-Klein case isn't the right picture here. Also, the book I mentioned at the end of my answer above provides a reasonably detailed intro to the curvature of fiber bundles, from the physics POV, as well as several good references to the math POV. – jeremy Jul 6 2010 at 8:36
Thanks for the comments: I've fixed "torus" -> "cylinder" ("torus" would imply that two dimensions are curled up, not just the one). I should have emphasized more that the Kaluza-Klein picture is a historical precursor--- the right way to think about it is as a fiber bundle. The Göckeler and Schücker book looks perfect: thanks! I'll try to find one in my library or buy it online. – Jim Pivarski Jul 6 2010 at 16:19
On my way to work, I thought of a glib, two-word answer to the original question: "Noether's theorem." In the physics context, Noether's theorem states that symmetries in the Lagrangian of a theory correspond to conserved currents. Quantized conserved currents are particles, so it should be no surprise that Lie groups representing the gauge symmetries correspond to gauge particles (gluon, W, Z, photon). This doesn't say anything about the matter fields (quarks and leptons), though. – Jim Pivarski Jul 6 2010 at 16:21
I don't know how accurate this is, but this is what I have managed to glean from my physics friends. The impression that I got was that gluons mediate the forces in the standard model; they "act" on the various particles that "feel" the forces. This is where the gauge groups and their representations come in, I think.
I would really appreciate any feedback I get on this answer. The only reason I am posting it as an answer is because I want to hear about how I am wrong with this. I have been trying to get an explanation from the physics crew here but they don't ever seem to give me the types of answers I am looking for.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 115, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9467817544937134, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/117487/polynomial-rings/117530
|
## Polynomial Rings
Let $R$ and $S$ be non-zero rings with identity. Is it possible to have $R[x] \cong S[[x]]$ ?
-
What about $R=\mathbb{Z}[[x]]$, $S=\mathbb{Z}[x]$? – Piotr Achinger Dec 29 at 9:17
@Piotr: that doesn't work, or at least the natural map isn't an isomorphism. $\mathbb{Z}[[x]] [y]$ is the subring of $\mathbb{Z}[y][[x]]$ in which the coefficients of the $x^n$, as a polynomial in $y$, have uniformly bounded degrees. – Qiaochu Yuan Dec 29 at 9:28
@Qiaochu: Wow, thanks, I never realized that! – Piotr Achinger Dec 29 at 10:10
$S[[x]]$ can be given the structure of a complete metric space, by defining $d(a,b)=2^{-v_x(a-b))}$, with $v_x$ the $x$-adic valuation. On the other hand $R[x]$ is naturally a countable union of quite small subsets (polynomials of degree at most $n$ for $n=0,1,2,3,\ldots$. Can one now try to argue topologically using some kind of Baire Category Theorem argument? Not that I can get it to work... – wccanard Dec 29 at 11:47
I made a similar comment on your other question, but please amend your title style to include complete questions. For example, "Can $R[x] \cong S[[x]]$?" is a better title than the current one. Actually, the entire body of your question would fit in the title — titles on MO may be longer than tweets. – Theo Johnson-Freyd Jan 1 at 4:03
## 3 Answers
Here's a proof that no such commutative rings $R$, $S$ exist. (See the edit for an extension to noncommutative rings.)
Suppose we have an isomorphism $\phi: R[x] \to S[[x]]$; let $a = \phi^{-1}(x)$. First we claim that for all $b \in R[x]$, the element $1 + ab$ is invertible in $R[x]$. Indeed, the element $\phi(1 + ab) = 1 + (\phi(b))x$ has an inverse given by a formal geometric series, so $\phi^{-1}$ applied to this element must also be invertible.
In particular, $1 + ax$ must be invertible in $R[x]$. But it is well-known (in the commutative case) that any invertible polynomial $a_0 + a_1 x + \ldots + a_n x^n$ has $a_0$ invertible and the other $a_i$ nilpotent. It follows quickly that $a$ must be nilpotent. But then $\phi(a) = x$ is nilpotent in $S[[x]]$. We have reached an absurdity.
Edit: Aided by Martin's excellent suggestion in his comment, we may easily extend the argument to noncommutative rings. Indeed, since $x$ is central in $S[[x]]$, we have that $a$ is central in $R[x]$. In particular, $a$ commutes with scalars; writing $a$ as a polynomial, it follows that each coefficient of $a$ is central in $R$. This is true also of the polynomial $1 + ax$, and now the proof that all but the unit coefficient of $1 + ax$ is nilpotent goes through as in the commutative case (see for example the nice argument given here). Thus $a$ is nilpotent.
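[Editorial illustration, not part of the original answer: the two unit groups behave very differently. In $(\mathbb{Z}/4\mathbb{Z})[x]$ the polynomial $1+2x$ is a unit, since $(1+2x)(1-2x) = 1 - 4x^2 = 1$, in line with the description above (constant term a unit, higher coefficients nilpotent); whereas in $S[[x]]$ the element $1+x$ is a unit with inverse $1 - x + x^2 - \cdots$ even though $x$ is not nilpotent. The proof exploits exactly this mismatch.]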
-
In order to reduce to the commutative case, one may try to compute the centers of $R[x]$ and $S[[x]]$. – Martin Brandenburg Dec 29 at 16:18
Thanks very much for that suggestion, Martin. I have edited my answer to take it into account. – Todd Trimble Dec 29 at 16:48
@Todd: Nice solution. Can you generalize this to skew polynomials or Ore extensions !? – chatish Dec 29 at 17:32
@shatich: perhaps someone else can see such a generalization, but to be honest I had to google Ore extension, so I'd have to sit down and think it over. It might be worth opening another question if you're interested. – Todd Trimble Dec 29 at 18:07
Another suggestion: In the end of the argument, just use the commutative case with $Z(R)[x]$. – Martin Brandenburg Dec 30 at 12:11
Thanks to Martin Brandenburg's suggestion, if $R[x] \cong S[[x]]$ then their centers are isomorphic too. So without loss of generality we can assume that $R$ and $S$ are commutative. In the commutative case we know $J(R[x]) = Nil(R[x])$. This means that the elements in the Jacobson radical of $R[x]$ are all nilpotent. On the other hand $x \in J(S[[x]])$ and $x$ is not nilpotent. This shows that $R[x]$ and $S[[x]]$ cannot be isomorphic.
-
In your argument you seem to be assuming that $R[x]$ is Jacobson. There is no reason for this to be true if $R$ is not Jacobson. – Qiaochu Yuan Dec 29 at 21:39
@Qiaochu Yuan: I can't see that, could you please tell exactly where he/she used that assumption? – chatish Dec 29 at 22:51
I think Qiaochu's objection arises from $J(R[x])=\mathrm{Nil}(R[x])$. But this, in fact, holds for every commutative ring $R$, and uses the description of the units $R[x]^* = R^* + \mathrm{Nil}(R) (x)$. @tvector: Your proof uses $Z(R[x])=Z(R)[x]$ and $Z(S[[x]])=Z(S)[[x]]$, right? – Martin Brandenburg Dec 30 at 12:17
@Martin: Exactly, yes. – tvector Dec 30 at 16:45
Honestly, I think my solution is much better. Neat, Clean and short ! – tvector Dec 30 at 16:52
Let me write $R[x]=S[[y]]$ to avoid confusion.
I. Note that 1+y is a unit. Therefore (thinking of y as an element of $R[x]$), we have $y\in R$.
II. Now mod out $y$ on both sides: $\overline{R}[x]=S$.
III. This gives $R[x]=\overline{R}[x][[y]]$
IV. The ring on the right contains the element $\sum(xy)^n$. Thus so does the ring on the left (thought of as a subring of the appropriate completion). It follows that $y$ is nilpotent in the ring on the left. But it's clearly not nilpotent in the ring on the right. Contradiction.
-
I don't think that step I is correct. See also Todd's post for the units of $R[x]$. – Martin Brandenburg Dec 29 at 16:22
Martin: Of course you're right. – Steven Landsburg Dec 29 at 17:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 73, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9356014728546143, "perplexity_flag": "head"}
|
http://quant.stackexchange.com/questions/4660/how-to-improve-the-black-scholes-framework/4706
|
# How to improve the Black-Scholes framework?
Since the distribution of daily returns is obviously not lognormal, my bottom-line question is: has BS been reworked for a better-fitting distribution?
The best distribution I've ever fit is a double-sided exponential, but I'd easily settle for a regular exponential distribution for simplicity's sake.
If there aren't any papers showing what the net result could be, can the cumulative distribution function of the standard normal distribution simply be replaced with the cdf of the exponential distribution? If so, do $d_1$ and $d_2$ have to be reworked?
-
Prices, not returns, are assumed to be lognormal. – Jase Dec 2 '12 at 12:42
I agree that $\frac{S(T)}{S(t)} = e^{(r-\sigma^2/2)(T-t)+\sigma(W(T)-W(t))}$ is lognormal, but I thought that the most common meaning of "stock returns" in finance is $\ln \frac{S(T)}{S(t)}$. – Jase Dec 2 '12 at 16:33
Just to clarify, BS assumes normal returns and hence lognormal prices. – SRKX♦ Dec 2 '12 at 21:20
Do you have a source for this definition of returns? (where they actually use to word "returns") – Jase Dec 3 '12 at 2:43
@SRKX: Thank you. This is one of the most annoying (and surprisingly common) misnomer you see in a lot of papers/ discussion boards/ blogs but is never pointed out. – emsfeld Dec 3 '12 at 2:57
## 5 Answers
You're not gonna find much off Google, since nobody's gonna go public with anything they develop to make money. Power Law distributions are a much better fit for financial returns than the normal; also, if you apply variance instead, it'd explain the OTM option values in a more practical manner.
-
– Rock Dec 16 '12 at 8:32
I have to partly concur with Rock here. I am not commenting on his suggestions about BS improvements but I agree that you can toss most of the improvements in the academic literature into the garbage without fearing you missed out. Improvements to volatility modeling and pricing algorithms for options are a well-guarded secret of most vol traders. There is (almost) no way such an insight suddenly makes it into the public domain. I worked as a junior with a guy who really beat the crap out of the competition in long-dated options and he brought the code inside a dll ... – Freddy Feb 14 at 18:32
... And a tight IP agreement, and the dll even called an external server for permissioning. Those were his conditions upon joining. He never shared his approach in detail with anyone. He was an index vol trader but later on traded a CB book as well, where he tossed out everything but the optionalities. I learned a lot from him but not how he priced those >1 year expiries. – Freddy Feb 14 at 18:35
Yeah, but do you think he actually owned an awesome alternative to BSM or simply had a good model for the relative dynamics of the implied vols? There seem to be plenty of high-science alternatives to BS out there, you just can't really apply them to the real world. – Strange Feb 16 at 3:05
Check out these resources:
• The book Levy Processes in finance.
• This paper basically enabling you to use any distribution for asset prices: Option Valuation Using the Fast Fourier Transform
-
Right, it's a really nice method. Besides enabling you to consider any distribution for the underlying price, I think you can use a transform to analyze historical data and fit an appropriate characteristic function to the data. However, I am still confused about how you can change from the actual measure to a risk-neutral one using this method. – Amir Yousefi Dec 2 '12 at 3:20
I would like to provide an answer with a bit more embedded details.
The weaknesses of the Black-Scholes framework you refer come from the fact that it assumes that stock prices are following a Geometric Brownian Motion (GBM). This model assumes that stock prices evolve as follows:
$$dS_t = \mu S_t dt + \sigma S_t dW_t$$
You can solve this differential equation and get that, given $S_t$:
$$S_T = S_t e^{(\mu - \frac{\sigma^2}{2})(T-t) + \sigma (W_T-W_t)}$$
This means that stock prices are log-normally distributed, and that returns are normally distributed.
First, if you simply look at historical data, you can clearly see that returns do not seem to be normal. So it seems like GBM is an over-simplistic model for stock prices. Indeed, it fails to model (and this list is not exhaustive):
• Skewness
• Excess kurtosis (i.e. it underestimates the probability of rare events)
• Heteroskedasticity (the fact that, unlike in the GBM framework, it seems like $\sigma$ is not constant)
If you want to find improvements to the BS model, you could google for derivative pricing methods which assume models including the features listed above. For example, you could look at a Monte-Carlo approach using a GARCH model.
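For concreteness, here is a hedged sketch (an editorial addition, not from the original answer) of the plain-GBM Monte-Carlo benchmark one would start from before swapping in a GARCH or jump-diffusion simulator for the terminal price; the parameter values are made up for illustration:

```python
import numpy as np
from scipy.stats import norm

# European call under plain GBM: Monte Carlo vs the Black-Scholes closed form.
S0, K, r, sigma, T = 100.0, 105.0, 0.02, 0.25, 1.0
n_paths = 200_000
rng = np.random.default_rng(0)

# simulate terminal prices under the risk-neutral GBM dynamics
Z = rng.standard_normal(n_paths)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
mc_price = np.exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))

# Black-Scholes closed form for comparison
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
bs_price = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print(mc_price, bs_price)   # the two should agree to within Monte-Carlo noise
```

Replacing the one-step GBM draw for `ST` with a path simulated under a GARCH or jump model is where the distributional "improvements" discussed above enter.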
-
Stochastic vol models with jumps are an updated version of the Black-Scholes model. Because of volatility clustering and jumps in equity prices, stochastic vol models with jumps make sense (however, indices do seem to follow a diffusion process with just stochastic vol, as they do not have jumps, especially if you look at it from the point of view of trade time).
-
Non-Gaussian Merton-Black-Scholes Theory would be a possible source of information on this type of model.
Note: I have glanced through this book, but have not read it thoroughly. However I can say that if you want to read this book you should be very comfortable with partial differential equations (especially the theory of pseudodifferential operators).
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9482832551002502, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/calculus/179327-convergence-continuous-function.html
|
# Thread:
1. ## Convergence of continuous function
My professor hinted that this problem would be on the final, so any help would be greatly appreciated.
Find:
$$\lim_{n\to\infty}\ \lim_{k\to\infty}\ \bigl(\cos(n!\,\pi x)\bigr)^{2k}$$
I know since it's continuous, I should be able to move the limits inward, but I don't know if that's the right way to go about this problem. I'll take any suggestions please.
2. First we 'fix' $n$ and define
$$f_n(x) = \lim_{k\to\infty}\bigl(\cos(n!\,\pi x)\bigr)^{2k} = 1 \quad \text{for all } x \text{ with } n!\,x \in \mathbb{Z} \qquad (1)$$
... leaving $f_n$ undefined for all other values of $x$. Now what is the limit when $n$ tends to infinity of $f_n(x)$?...
Kind regards
$\chi$ $\sigma$
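(Editorial addendum, not part of the original thread: a purely numerical illustration of the two limits. It evaluates $(\cos(n!\,\pi x))^{2k}$ for a large fixed $k$; the test points and the cutoff $k=200$ are arbitrary choices.)

```python
import math

def inner_limit(n, x, k=200):
    # approximate lim_{k -> infinity} (cos(n! * pi * x))^(2k) with a large finite k
    return math.cos(math.factorial(n) * math.pi * x) ** (2 * k)

for x in [0.5, 2.0 / 3.0, math.sqrt(2.0)]:
    print(x, [round(inner_limit(n, x), 6) for n in range(1, 7)])
# for the rational points the values become 1 once n! clears the denominator;
# for sqrt(2) they stay (numerically) at 0
```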
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9219505786895752, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/6635/does-such-a-subgroup-exist/6682
|
## Does such a subgroup exist?
I am looking for a certain masa in a $II_1$ factor which is singular and has nontrivial Takesaki invariant. For this I am looking for an example of an inclusion of groups $H\subset G$ such that:
• $G$ is a countable icc (infinite conjugacy class) group
• $H$ is abelian
• $\forall g\in G-H,\{ hgh^{-1} |h\in H \}$ is infinite
• $|H\backslash G/H| \geq 3$
• there exists $g\in G-H$ and $h_1\neq h_2\in H$ with $h_1 g=gh_2$.
Does such an example exist?
-
## 4 Answers
[generalization of Agol's answer]
Take $H$ a group and let $K$ act on $H$ by automorphisms (write the action as $\sigma$) and consider $G=H\rtimes_\sigma K$. Then
• condition 2 is satisfied if $H$ is abelian
• condition 4 is satisfied if $K$ contains at least 3 elements
• condition 5 is satisfied if $K$ acts non-trivially
• condition 3 is satisfied if $\{h^{-1}\sigma_k(h) : h\in H\}$ is infinite for all $k \in K$
• condition 1 is satisfied if $K$ acts with infinite orbits on $H$ and condition 3 is satisfied.
-
In fact, both examples (once Yemon's is modified) are of this type. – Richard Kent Nov 24 2009 at 13:16
I think a lattice in the rank 3 solvable Lie group Sol works. For any 2x2 matrix A ∈ SL₂(ℤ) with tr(A) > 2, take the extension G of H = ℤ² by ℤ, where 1 acts by A on ℤ². We may write elements of G as (k, h), k ∈ ℤ, h ∈ ℤ². The subgroups (k,0) and (0,h) are additive in the coordinates, and (k,0)(0,h)=(k,h). We have the relation (0,h)(1,0) =(1,0)(0, A(h)) (so the 5th condition holds). For example, the matrix
$$\begin{pmatrix} 2 & 1 \\ 1& 1\end{pmatrix}$$
gives rise to the fundamental group of 0-framed surgery on the figure 8 knot complement. G is countable icc, and H = ℤ² is a normal subgroup, G/H = ℤ, so the 2nd and 4th conditions are satisfied. The 3rd condition is satisfied, since for h ∈ H = ℤ², (0,h)(k,g)(0,-h) = (k, g + A^k(h) - h), which one can see is infinite as one varies h (for k ∈ ℤ \ {0}).
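As an editorial numerical check (not part of the original answer) that condition 3 really holds for this monodromy: for every k ≠ 0 the determinant of A^k - I is nonzero, so h ↦ A^k(h) - h is injective on ℤ² and the conjugacy class above is infinite.

```python
import numpy as np

A = np.array([[2, 1], [1, 1]])
A_inv = np.array([[1, -1], [-1, 2]])      # det A = 1, so the inverse is integral

for k in range(1, 6):
    d_pos = np.linalg.det(np.linalg.matrix_power(A, k) - np.eye(2))
    d_neg = np.linalg.det(np.linalg.matrix_power(A_inv, k) - np.eye(2))
    print(k, int(round(d_pos)), int(round(d_neg)))   # all values are nonzero
```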
-
This is off the cuff, and I'm very much a dilettante when it comes to group theory, so I hope there isn't an error in what follows. Corrections welcome, of course.
[EDIT: it has been pointed out below that the group given below doesn't quite work. I'm leaving the bulk of this "answer" here, in case it suggests a correct solution or warns people off the same mistake I made.]
I think the group $G$ with presentation $\langle g, h | hg =gh^n \rangle$, where $n\geq 2$, will do the job, with $H$ being the group generated by $h$. [Conditions 2,5]
Elements of this group have a normal form with all the $g$s on the left and all the $h$s on the right. [EDIT: this is not quite right, one has to take care over negative powers of $g$.] Multiplying on the left or on the right by an element of $H$ should, once we bring it to normal form, not change the index of $g$ in the normal form, and so there are infinitely many double $H$-cosets, taking care of Condition 4.
Also, given an element of the form $g^ah^b$ where $a\neq 0$, then some back-of-the-envelope scribbling indicates that repeated conjugation by $h$ ought to increase the absolute value of the index of $h$ in the resulting normal form, so that conjugation by $h$ cannot be an operation of finite order. That would take care of Condition 3. [EDIT: this is incorrect/insufficient, see comments below.]
Finally, I think Condition 1 should follow from some further case-by-case analysis (given a non-identity element in $H$, conjugate repeatedly by $g$; and all the elements in $G-H$ are taken care of by condition 3).
(The group $G$ is an example of a Baumslag-Solitar group, and these beasts have been quite well studied over the years, I'm told. I don't know if you can do similar games with other B-S groups.)
-
Those Baumslag-Solitar groups are almost good, icc with a lot of cosets. But the problem is that condition 3 is not verified (look at the element $ghg^{-1}$). Thanks anyway, we are not so far from an example I hope. – Arnaud Brot Nov 24 2009 at 1:50
There is a map $G \to Z$ given by killing $h$. The kernel of this map is abelian. Can you use the same $G$ and let $H$ be this kernel? Anything not in the kernel will have nonzero exponent sum in $g$, which might help you with condition 3. – Richard Kent Nov 24 2009 at 2:14
It seems to work with the kernel of your map. The elements of G have the form $g^k h^l g^{-s}$ and those in the kernel are the $g^k h^l g^{-k}$. Then we have condition 3. And we still have an infinity of cosets. Thank you a lot for your help. – Arnaud Brot Nov 24 2009 at 4:02
In case it isn't obvious to the reader, all credit belongs to Richard; I just made some false statements, which is easy as any fule kno. Richard, do you want to write this up as an answer rather than a comment, so we can vote that up and let my error sink? – Yemon Choi Nov 24 2009 at 4:26
Oh, I don't deserve much credit, I just knew that bigger subgroup was abelian, Yemon had the really good idea of using G and its normal form. – Richard Kent Nov 24 2009 at 13:01
A more singular example:
Take an infinite index inclusion of abelian groups $K\subset H$. Let a non-trivial group $L$ act on $K$ by automorphisms. Then the amalgamated free product $G=H\underset{K}{\ast} (K\rtimes L)$ satisfies the conditions. Moreover, $L(H)\subset L(G)$ is a singular masa. One can use the results of Ioana, Peterson and Popa for this, but maybe there are more elementary ways to see this.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 55, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9442793130874634, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/30071/efficient-approximation-of-a-matrix-and-its-inverse
|
## Efficient approximation of a matrix and its inverse
Assume that $A$ is a real $n\times n$ matrix whose rows constitute an orthonormal basis of $\mathbb R^n$.
Informal statement of question: Assume we want to approximate $A$ by a rational matrix, such that each entry can be written efficiently (that is, has a small binary encoding), but we require also the inverse of the approximate matrix to have small representation. Is this possible?
Formal statement of question: Let $p(n)$ be some polynomial in $n$. For a real number $r$, we say that $a/b$ is a polynomial approximation of $r$, if $a/b$ is a rational number (that is, $a,b$ are integers) and both $a$ and $b$ are of size at most $p(n)$ (e.g., their binary representation is of logarithmic size in $n$), such that $|r-a/b|\le 1/p(n)$.
Question: Does there exist a rational matrix $B$, such that $B$ polynomially approximates $A$ (that is, the entry $B_{ij}$ in $B$, is a polynomial approximation of the entry $A_{ij}$ in $A$, for all $1\le i,j\le n$), and such that $B^{-1}$ is a rational matrix whose entries are all polynomially-bounded (that is, for any $1\le i,j\le n$, $B^{-1}_{ij}=a/b$, where $a,b$ are integers of size at most $p(n)$) ?
-
## 2 Answers
In $\mathbb{R}^3$, Milenkovic and Milenkovic give an algorithm for efficiently approximating an orthogonal matrix by a rational orthogonal matrix. As lhf points out, the inverse of an orthogonal matrix is its transpose, so the inverse will also have short entries in this setting.
Regarding $n>3$, here is a tentative thought, and a reference. I haven't put much effort into either :).
Let $v=(v_1, v_2, \ldots, v_n)$ be a nonzero vector. Define a linear operator $$s_v(u) := u - 2 \frac{\langle v,u \rangle}{\langle v,v \rangle} v.$$ This is the orthogonal reflection that negates $v$. Note that, if $v \in \mathbb{Q}^n$, then the entries of the matrix $s_v$ are rational. This is true even if $v$ does not have norm $1$.
Now, any rotation matrix can be written as a product of $\leq n$ reflections: $R=\prod_{i=1}^h s_{v_i}$ for some sequence of vectors $v_i$ in $\mathbb{R}^n$. A potential algorithm, then, is to find such a factorization and then approximate each $v_i$ by a rational vector $w_i$ which is roughly parallel to it. (There are plenty of standard algorithms for rational approximation of a vector.) Then take $\prod s_{w_i}$ as the approximation to $R$.
I got this strategy from a paper of Eric Schmutz. Schmutz follows this strategy, but he forces his approximating vectors $w_i$ to lie on the unit sphere. As far as I can see, this is a waste of effort, since $s_v$ is orthogonal with rational entries even if $v$ is not on the unit sphere. However, Schmutz has exact bounds, which you may find useful.
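A small editorial sketch (not part of the original answer) of the key point that $s_v$ is exactly orthogonal with rational entries whenever $v$ is rational, using exact fraction arithmetic; the vector below is an arbitrary choice:

```python
from fractions import Fraction

def reflection_matrix(v):
    """Householder reflection s_v = I - 2 v v^T / <v, v>, exact over the rationals."""
    n = len(v)
    vv = sum(c * c for c in v)
    return [[(Fraction(1) if i == j else Fraction(0)) - 2 * v[i] * v[j] / vv
             for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

v = [Fraction(3), Fraction(4), Fraction(12)]   # any rational vector; it need not be unit length
S = reflection_matrix(v)
St = [list(row) for row in zip(*S)]            # transpose

# S is exactly orthogonal: S * S^T equals the identity in exact rational arithmetic
I = [[Fraction(int(i == j)) for j in range(3)] for i in range(3)]
assert matmul(S, St) == I
print(S)   # rational entries with small denominators (here dividing 169)
```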
-
Thanks, I'll have a look. Maybe they (or future citations of this paper) have some references for the general case of $n$. – Iddo Tzameret Jun 30 2010 at 18:09
I haven't followed through on the links, so I don't know which method do they use, but my first reaction to "rational orthogonal matrix" is "Cayley transform". – Victor Protsak Jul 1 2010 at 5:45
If A is orthogonal then its inverse is the transpose and so you only need to approximate A.
-
But your approximation may not be orthogonal, so its inverse may require a lot of bits to store. – David Speyer Jun 30 2010 at 17:45
Yes, David is right. (lhf would be right too, if the orthogonalization algorithm (of e.g., Gram-Schmidt) would end up with a matrix in which the entries are polynomially-bounded by the entries in the original matrix. I can't see why this should be true though.) – Iddo Tzameret Jun 30 2010 at 18:01
http://www.chegg.com/homework-help/questions-and-answers/below-is-a-set-of-current-t-0-prices-on-a-set-of-zero-coupon-bonds-the-face-value-on-all-o-q3455332
## Finance 2
Below is a set of current (t = 0) prices on a set of zero-coupon bonds. The face value on all of these bonds is $1000. The prices below are quoted per $100 in face value. In answering the following questions, assume that you can buy fractions of a bond.
| Bond | Price |
| --- | --- |
| 1-year zero | 95 |
| 2-year zero | 90 |
| 3-year zero | 86 |
| 4-year zero | 79 |
Also assume, for simplicity, that each of these bonds matures in exactly a multiple of a year from now (now = t = 0).
A. If you invested in a two-year bond and rolled over into a one-year bond at t = 2, what would the one-year yield at t = 2 have to be in order for you to be indifferent between that strategy and investing in the three-year bond?
You have $20,000 and want to make an investment that will allow you to have a down payment on a house in exactly two years. Consider the following two alternative strategies:
a.You can invest in the default-free 3-year zero-coupon bond and sell it after two years (at t = 2)
b. You can invest in the default-free 2-year zero-coupon bond.
A. Consider the first strategy (i.e., strategy a.) above.
a. If the forward rates implied by the bond price data above turn out to be the future rates that actually occur, what price (per $100 in face value) will you get at t = 2 for the 3-year bonds you are buying at t = 0?
b. If the forward rates implied by the bond price data above turn out to be the future rates that actually occur, what will be your holding period yield on the 3-year bond you are buying at t = 0? That is, what is the internal rate of return on the first strategy (i.e., strategy a.)?
c. If the forward rates implied by the bond price data above turn out to be the future rates that actually occur, how much money will you have at t = 2 under this first strategy (i.e., strategy a.)?
d. If it turns out that the one-year yield at t = 2 is 1 percentage point higher than the forward rate implied by the bond price date now, how much money will you have at t = 2 to use as a down payment?
e. If it turns out that the one-year yield at t = 2 is 1 percentage point lower than the forward rate implied by the bond price data now, how much money will you have at t = 2 to use as a down payment?
B. Now consider the second strategy (simply invest in the two-year bond). How much money will you have at t = 2 to use as a down payment? Does your answer depend upon what the one-year yields turn out to be at t = 2? Why or why not.
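(An illustrative sketch, not part of the original posting or its answer: one way to read the implied forward rates off the quoted prices in Python, assuming annual compounding; the variable names are my own.)

```python
# zero-coupon prices per $100 of face value, maturities 1..4 years (from the table above)
prices = {1: 95.0, 2: 90.0, 3: 86.0, 4: 79.0}

# spot yields with annual compounding: price = 100 / (1 + y)**t
spot = {t: (100.0 / p) ** (1.0 / t) - 1.0 for t, p in prices.items()}

# implied one-year forward rate from t to t+1: 1 + f(t) = price(t) / price(t+1)
forward = {t: prices[t] / prices[t + 1] - 1.0 for t in (1, 2, 3)}

print(forward[2])   # ~0.0465: the break-even one-year yield at t = 2 asked about in part A
```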
http://math.stackexchange.com/questions/221289/properties-of-adjoints-of-linear-maps/222138
# Properties of adjoints of linear maps
I am studying linear algebra by myself, and I came across the need to prove the following, for adjoints of linear maps:
• $(f^\dagger)^\dagger=f$
• $(f\circ g)^\dagger=g^\dagger\circ f^\dagger$
• $\langle v|fw\rangle=\langle f^\dagger v|w\rangle$
Thank you very much!
Do you know the definition of the adjoint? I strongly recommend that you do these yourself, because they're very easy. If you still can't do them, tell us what you have tried. – wj32 Oct 26 '12 at 12:59
Thank you. Of course, the first two are obvious if I accept the third. But if I don't, I don't know where to start. – dagger Oct 26 '12 at 15:42
1
The third is usually the definition of the adjoint. What is your definition? – wj32 Oct 26 '12 at 20:53
## 1 Answer
Meanwhile this was asked on MO and has an accepted answer: http://mathoverflow.net/questions/110739/
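(A side note, not from the linked MO answer: in the concrete case where the adjoint is the conjugate transpose with respect to the standard inner product, all three identities can be sanity-checked numerically. The matrices below are arbitrary examples.)

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
g = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w = rng.standard_normal(3) + 1j * rng.standard_normal(3)

dag = lambda a: a.conj().T                        # adjoint = conjugate transpose here

assert np.allclose(dag(dag(f)), f)                # (f^dagger)^dagger = f
assert np.allclose(dag(f @ g), dag(g) @ dag(f))   # (f o g)^dagger = g^dagger o f^dagger
assert np.allclose(np.vdot(v, f @ w), np.vdot(dag(f) @ v, w))   # <v|fw> = <f^dagger v|w>
```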
-
http://physics.stackexchange.com/questions/tagged/rotational-dynamics+mass
# Tagged Questions
3answers
70 views
### Mass equals Moment of inertia when constant density?
I have found equation for moment of inertia $(J)$. I'm calculating $J$ for hemisphere, with rotational axis $Z$. $$J = \iiint\limits_V r^2 \cdot \rho \cdot dV$$ But if $\rho$ is constant ...
1answer
115 views
### How do the energy eigenvalues of rotational degrees of freedom in statistical mechanics come about?
I want to understand the hierarchy different degrees of freedom of a mechanical system. Specifically, I want to understand which subsystems equibrilate faster and why. This question comes up: Why ...
3answers
286 views
### Does a toy top weigh less when it is spinning?
I am under the understanding that a toy top will weigh less when it is spinning. The Russians made a spinning type transport back in the 70s to lessen its payload over the tundra. Is this an effective ...
http://math.stackexchange.com/questions/114175/definition-of-convergence-in-distribution
# Definition of convergence in distribution
My question is convergence in distribution seems to be defined differently in Wikipedia and in Kai Lai Chung's book. My view is that the one by Wikipedia is a standard definition of convergence in distribution, and the one by Chung is actually vague convergence not convergence in distribution. I wonder if the two definitions are equivalent, and why?
Following are relevant quotes from the two sources:
1. From Wikipedia
A sequence $\{X_1, X_2, …\}$ of random variables is said to converge in distribution to a random variable $X$ if $$\lim_{n\to \infty} F_n(x) = F(x)$$ for every number $x ∈ \mathbb{R}$ at which $F$ is continuous. Here $F_n$ and $F$ are the cumulative distribution functions of random variables $X_n$ and $X$ correspondingly.
2. From Kai Lai Chung's A course in probability theory, consider (sub)probability measures (s.p.m.'s or p.m.'s) on $\mathbb{R}$.
definition of convergence "in distribution" (in dist.)- A sequence of r.v.'s $\{X_n\}$ is said to converge in distribution to d.f. $F$ iff the sequence $\{F_n\}$ of corresponding d.f.'s converges vaguely to the d.f. $F$.
My rephrase of vague convergence of a sequence of distribution functions (d.f.'s) based on the same book is
We say that $F_n$ converges vaguely to $F$, if their s.p.m.'s are $\mu_n$ and $\mu$, and $\mu_n$ converges to $\mu$ vaguely.
On p85 of Chung's book, vague convergence of a sequence of s.p.m.'s is defined as
a sequence of subprobability measures (s.p.m.'s) $\{ \mu_n, n\geq 1 \}$ is said vaguely converge to another subprobablity measure $\mu$ on $\mathbb{R}$, if there exists a dense subset $D$ of the real line $\mathbb{R}$ so that $\forall a \text{ and } b \in D \text{ with } a <b, \mu_n((a,b]) \rightarrow \mu((a,b])$.
Thanks and regards!
-
Every countably infinite subset of the real line, and only those, can be discontinuity sets for weakly increasing functions, and in particular for CDFs. Next question: which dense subsets of the line can occur in the role of what you call $D$? – Michael Hardy Feb 27 '12 at 23:13
@MichaelHardy: Thanks! (1) are your both questions addressing the definition of vague convergence of a sequence of s.p.m.'s in the last quote? (2) What is your first point trying to say? (3) To your second question, the definition in the last quote says existence of a dense subset D in R, which depends on the sequence $\{\mu_n\}$ and $\mu$. That is my understanding. – Tim Feb 28 '12 at 1:13
Tim, these definitions are equivalent. The only difference is the usual difference between convergence in distribution and vague convergence, which is that vague convergence can be to an arbitrary measure (i.e. it need not have mass 1) while convergence in distribution requires that the limiting measure assign mass one to the probability space. It takes a little analysis to show that convergence on a dense set of points is sufficient for convergence at all points of continuity, but if I recall a full proof is in Durrett. – Chris Janjigian Feb 28 '12 at 3:05
@Chris: Thanks! (1) Distribution functions can be defined for general measures on $\mathbb{R}$, not just probability measures. So why can't convergence in distribution be defined for general measures? (2) I appreciate if you can point out where in Durrett's which book the equivalence of convergence in distribution and vague convergence appears. I hope to see if the equivalence is for general measures, not just for probability measures. – Tim Feb 28 '12 at 3:14
Convergence in distribution is not equivalent to vague convergence; it is equivalent to vague convergence with the added assumption of tightness, which is what prevents mass from escaping (it is a conservation requirement). You can certainly define a notion of convergence of the distribution functions pointwise for an arbitrary measure but that is actually vague convergence if you do not have a tightness requirement. That theory is quite well developed as it is just the usual weak convergence of measures. – Chris Janjigian Feb 28 '12 at 3:25
http://mathhelpforum.com/discrete-math/106349-arranging-objects-restrictions.html
# Thread:
1. ## Arranging objects with restrictions
There are 6 men and 9 women and they need to be seated in a row of 15 chairs. How many ways are there to arrange them such that the 6 men sit next to each other?
This is what I have done so far. There are 15! ways of arranging the entire group. Arranging the 6 men is 6!. In addition, there is the question of where you place the group of 6 men. It could be
6 men and then 9 women
a woman, 6 men and then 8 women
2 women, 6 men and then 7 women
...
etc
I do not know how to count these possibilities. I know what the answer is (from answer key) but I just can't arrive at it.
Thanks for taking the time to read my question!
2. Originally Posted by eeyore
There are 6 men and 9 women and they need to be seated in a row of 15 chairs. How many ways are there to arrange them such that the 6 men sit next to each other?
This is what I have done so far. There are 15! ways of arranging the entire group. Arranging the 6 men is 6!. In addition, there is the question of where you place the group of 6 men. It could be
6 men and then 9 women
a woman, 6 men and then 8 women
2 women, 6 men and then 7 women
...
etc
I do not know how to count these possibilities. I know what the answer is (from answer key) but I just can't arrive at it.
Thanks for taking the time to read my question!
If we view the chairs as being numbered 1 to 15 from left to right, then the leftmost man of the group of 6 could be in chair 1, 2, 3, ..., 10. So there are 10 ways to place the group of men without regard to the order of the men within the group. Once we have figured out where to place the group of men, there are 6! ways to arrange the men and 9! ways to arrange the women.
So there are $10 \times 6! \times 9!$ ways in all.
3. That's what I thought at first but the answer says it's 6! 10! / 15!. Are you sure the way the women are arranged shouldn't be considered as well? If you decide to place the men in the first 6 chairs, then the 9 women can be arranged in 9! ways and each of those arrangements is consistent with the requirement that the men are next to each other.
4. Hello, eeyore!
There are 6 men and 9 women to be seated in a row of 15 chairs.
How many ways are there to arrange them such that the 6 men sit next to each other?
Duct-tape the six men together.
. . There are $6!$ possible orders.
Then we have: . $\boxed{MMMMMM}, W, W, W, W, W, W, W, W, W$
With 10 "people" to arrange, there are $10!$ possible orders.
Therefore, there are: . $(6!)(10!) \:=\:2,\!612,\!736,\!000$ ways.
5. Very neat way of thinking of it. Thank you!
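(A small numeric check, added for illustration and not part of the original thread: both counting arguments give the same number, and dividing by 15! turns the count into the probability that a random seating keeps the men together.)

```python
from math import factorial

block_positions = 10 * factorial(6) * factorial(9)   # 10 places for the block of six men
duct_tape = factorial(6) * factorial(10)             # treat the six men as one "person"
print(block_positions, duct_tape)                    # both are 2,612,736,000
print(duct_tape / factorial(15))                     # ~0.002: probability under a random seating
```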
http://physics.stackexchange.com/questions/3554/whats-a-good-reference-for-this-classical-picture-feynmans-talking-about?answertab=oldest
# What's a good reference for this classical picture Feynman's talking about?
I have a mathematics background but am trying to educate myself a little about physics. At the beginning of Feynman's QED book (not the popular one) is the following:
Suppose all of the atoms in the universe are in a box. Classically the box may be treated as having natural modes describable in terms of a distribution of harmonic oscillators with coupling between the oscillators and matter.
I guess this is something that physicists learn, but I have never heard of it. What is Feynman talking about and where can I learn more about it? The Wikipedia article on harmonic oscillators gives no indication that physicists do this.
-
To me it sounds as if he is talking about classical Fourier analysis and its applications to solving PDEs. You separate the variables, get many equations of harmonic oscillators and so on. That is what you do in quantum field theory plus the extra step of quantizing them. – MBN Jan 22 '11 at 5:01
## 2 Answers
This is a way of giving systematic meaning to the radiation continuum in the context of a set of discrete states.
You assume some set of boundary conditions on the EM fields where they hit the box {1}, derive a set of allowed modes in terms of the geometry of the box {2}, then allow the box to expand without limit. Thus you arrive at a continuum of allowed modes.
{1} Say $E = M = 0$ at the boundary as if the box were a very good conductor.
{2} If the fields go to zero at the sides of the box then a half-integer number of wavelengths must fit, so only some wavelengths are allowed.
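(An illustrative sketch of footnote {2}, not part of the original answer: for a one-dimensional cavity of length L with the field vanishing at the walls, a half-integer number of wavelengths must fit, so the allowed modes are discrete. The numerical values below are arbitrary.)

```python
import numpy as np

L = 1.0                      # cavity length in metres (arbitrary)
c = 3.0e8                    # speed of light in m/s
n = np.arange(1, 6)          # mode index
wavelengths = 2 * L / n      # n half-wavelengths fit in the box: n * (lambda / 2) = L
frequencies = c / wavelengths
print(frequencies)           # mode spacing is c / (2 L); it shrinks as the box grows
```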
-
Cool. So you're talking about the electromagnetic wave equation, right? And the eigenvalues of the Laplacian are the "modes," and the eigenvalue equation is the harmonic oscillator associated to that mode? Are we not worrying about the rest of Maxwell's equations? – Qiaochu Yuan Jan 21 '11 at 22:29
3
@Qiaochu: Yep, pretty much. Technically a harmonic oscillator is a system with a quadratic potential $U(x) \propto x^2$, but the quantum harmonic oscillator has an evenly spaced set of eigenvalues, and there is a tendency to talk about any other system with that property as if it were a harmonic oscillator. – David Zaslavsky♦ Jan 21 '11 at 22:34
1
@David: thanks very much for clearing that up. Another question: what exactly does Feynman mean by "coupling"? – Qiaochu Yuan Jan 21 '11 at 22:36
3
@Qiaochu: "coupling" generally refers to some sort of interaction. I'd guess that here Feynman is talking about the fact that the atoms can exchange energy with the electromagnetic field. If you model the EM field as an infinite set of harmonic oscillators, an atom interacts with an oscillator of natural frequency $\omega$ when the atom's energy decreases by $\hbar\omega$ and the oscillator's energy increases by the same amount (i.e. it jumps up by a mode), or vice versa. – David Zaslavsky♦ Jan 21 '11 at 23:18
There is also a fantastic British comedy called "Coupling". Of course its more about coupling between humans than elementary particles ;) – user346 Jan 22 '11 at 3:49
I just purchased Feynman's Thesis, which provides some insight on how Feynman saw the world, and provides some context here. One of the key issues Feynman was trying to reconcile in his Lagrangian approach was how to describe quantum mechanics without relying on a field defined by harmonic oscillators; from page 5:
"In particular, the problem of the equivalence in quantum mechanics of direct interaction and interaction through the agency of an intermediate harmonic oscillator will be discussed in detail. The solution of this problem is essential if one is going to be able to compare a theory which considers field oscillators as real mechanical and quantized systems, with a theory which considers the field as just a mathematical construction of classical electrodynamics required to simplify the discussion of the interaction between particles."
So we have to understand that Feynman viewed matter as something different from the harmonic oscillator. Since mass is a parameter of a simple harmonic oscillator, we can see that in his discussion Feynman didn't necessarily view matter as being the same thing as mass. I suspect matter would be viewed as the tangible reality that we are familiar with, and quantum harmonic oscillators are the abstract entities that we use to describe behavior, so it is in some way necessary to map, or couple, the real to the abstract.
-
Yes, I recognize that there must be subtleties, but I'm not even all that familiar with the slightly misleading ideas that you seem to be trying to correct. I am unfortunately much more ignorant about physics than that. – Qiaochu Yuan Jan 22 '11 at 14:42
I hope I didn't offend; I didn't mean to imply or correct any misleading ideas. I just think that one often learns more by understanding the context and viewpoint of the author than from the mechanics of the math. Understanding why Feynman might phrase something in a particular way is at least as important as the physical picture he is referring to... but I digress. – Humble Jan 22 '11 at 14:47
http://mathhelpforum.com/calculus/139495-continuity-limits.html
# Thread:
1. ## Continuity with limits
I am having some trouble with the concept of continuity and limits when it applies to questions. Here's a simple question I am not sure how to approach:
If $x^4 \leq f(x) \leq x^2$ for $-1 \leq x \leq 1$ and $x^2 \leq f(x) \leq x^4$ for $x<-1$ or $x>1$, at what points $c$ do you automatically know $\lim_{x\to c} f(x)$?
What are the values of the limits at these points?
What is the correct approach to this question?
2. ## Use sandwich theorem.
In x = 0, -1, 1 . You can tell what the limits are.
In x = 0, the limit is zero.
In x = 1, -1 the limit is one.
It is a consequence of the sandwich theorem:
$\lim_{x \to 0} x^{4} = \lim_{x \to 0} f(x) = \lim_{x\to 0} x^{2}$
the same can be said in x = -1,1 though in these cases you have to consider lateral limits.
3. Originally Posted by Diego
In x = 0, -1, 1 . You can tell what the limits are.
In x = 0, the limit is zero.
In x = 1, -1 the limit is one.
It is a consequence of the sandwich theorem:
$\lim_{x \to 0} x^{4} = \lim_{x \to 0} f(x) = \lim_{x\to 0} x^{2}$
the same can be said in x = -1,1 though in these cases you have to consider lateral limits.
Thanks.. though i am not sure I know about Lateral Limits...
4. I believe he means "one-sided" (from the left and from the right) limits.
5. ## Lateral limits
Oh, sorry, I meant one-sided limits (in Spanish they are called lateral). For instance at x=1, if you approach the limit with numbers bigger than one (where $x^{2} < f(x) < x^{4}$):
$\lim_{x \to 1^{+}} x^{2} = \lim_{x \to 1^{+}} f(x) = \lim_{x\to 1^{+}} x^{4}=1$
and if smaller than one (where $x^{4} < f(x) < x^{2}$, if x is not too far from 1):
$\lim_{x \to 1^{-}} x^{4} = \lim_{x \to 1^{-}} f(x) = \lim_{x\to 1^{-}} x^{2} =1$
thus:
$\lim_{x \to 1} f(x) = 1$
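(Not part of the original thread: a quick numerical look at how the two bounds pinch together near x = 1, which is all the sandwich theorem needs.)

```python
for x in [0.9, 0.99, 0.999, 1.001, 1.01, 1.1]:
    print(x, x**4, x**2)   # both bounds tend to 1 as x -> 1, so f(x) is forced to 1 as well
```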
http://math.stackexchange.com/questions/tagged/computational-mathematics?sort=active&pagesize=15
# Tagged Questions
Computational mathematics involves mathematical research in areas of science where computing plays a central and essential role, emphasizing algorithms, numerical methods, and symbolic methods.
0answers
33 views
+50
### Derivative of Associated Legendre polynomials at $x = \pm 1$
I'm creating meshes for spherical harmonics, and I need a normal at a given point. Whenever I'm at the poles, $\cos{\theta} = \pm 1$, and I do not know how to find the derivative there. All the ...
3answers
325 views
### Calculate $\pi$ in an arbitrary base, to arbitrary precision
I need to calculate $\pi$ -- in base: 4, 12, 32, and 128 -- to an arbitrary number of digits. (It's for an artist friend). I remember Taylor series and I've found miscellaneous "BBP" formulas, but so ...
1answer
39 views
### About parallel time computation
I am studying a paper where it is mentioned that Newton iteration may be used to compute the inverse of $n \times n$, well- conditioned matrix in parallel time $o(\log^2n)$ and that this computation ...
1answer
60 views
### Need little hint to prove a theorem from a paper
I have an iterative method \begin{eqnarray} X_{k+1}=(1+\beta)X_k-\beta X_k A X_k~~~~~~~~~~~~~~~~~ k = 0,1,\ldots \end{eqnarray} with initial approximation $X_0 = \beta A^*$ ($\beta$ is scalar ...
1answer
62 views
### Minimizing the norm related with iteration method
I am working on a iteration method to compute the generalized inverse of a matrix $A$ of rank $r$ Iteration method is $X_{k+1} = X_{k} + \beta X_{k} (I - A X_{k})$ where notations are as follows ...
1answer
133 views
### Exponential Probability Monte Carlo simulation
I need to write a Matlab program to estimate the quantity $\theta = \mathrm{Pr}(X < 1)$, where $X$ is an exponential random variable with mean $1$. I am doing this for multiple monte carlo ...
1answer
141 views
### Given two sets of vectors, how do I find a change of basis that will convert one set to another?
Given two sets of dimension $n$ vectors $\lbrace v_1 , v_2 , \ldots , v_m \rbrace$, $\lbrace u_1, u_2, \ldots , u_m \rbrace$, where $m > n$, is there a computational method (in particular, using ...
2answers
53 views
### What free software can I use to solve a system of linear equations containing an unknown?
Question: What free software can I use to solve a system of linear equations $M\mathbf{x}=\mathbf{y}$ where the entries of $\mathbf{y}$ vary with an unknown quantity $n$? Presumably I could do ...
0answers
43 views
### Simpson's rule characteristics
I just wanted to ask a quick question in regards to simpson's rule for integration. I have been reading up on the trapezoidal rule, and have found the notations and have an understanding such that: ...
2answers
97 views
### Efficient computation of $\sum_{k=1}^n \lfloor \frac{n}{k}\rfloor$
I realize there is probably not a closed form, but is there an efficient way to calculate the following expression? $$\sum_{k=1}^n \left\lfloor \frac{n}{k}\right\rfloor$$ I've noticed \sum_{k=1}^n ...
0answers
96 views
### C++ Polynomial Multiplication [closed]
\begin{eqnarray} \text{~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~} \end{eqnarray}\begin{eqnarray} \text{IF YOU HAVE A QUESTION, THEN ASK AND I CAN ...
0answers
42 views
### How to find an expression whose value is 190
Given a set of numbers (in this case): 3, 7, 7, 100, 50 Either: prove it is impossible to form the number k = 190 using ( ) + - * / operators between sub set of the these numbers ex: 1000 = ((3 + ...
1answer
204 views
### Krylov-like method for solving systems of polynomials?
To iteratively solve large linear systems, many current state-of-the-art methods work by finding approximate solutions in successively larger (Krylov) subspaces. Are there similar iterative methods ...
1answer
47 views
### how can I find the following problem using laplace transform?
For example here is the problem: $(t^2 \cos{\omega t})u(t)$ I have to find it using laplace transform; here is what I think it is, I have $t^2$$(\cos{\omega t})u(t)$ which I think I can solve them ...
5answers
410 views
### What is the value of $2^{3000}$
What is the value of $2^{3000}$? How to calculate it using a programming language like C#?
0answers
46 views
### Evaluation of a slow continued fraction
Puzzle question... I know how to solve it, and will post my solution if needed; but those who wish may participate in the spirit of coming up with elegant solutions rather than trying to teach me how ...
1answer
42 views
### Matrix completion: supplementary questions
Continuation of the question here, what is going to happen if we change the some of the conditions. I write it as a quote from here and change the appropriate places which are underlined: I need ...
1answer
44 views
### How are 10-20 digit multiperfect and hemiperfect numbers efficiently computed?
This numericana item on multiperfect and hemiperfect numbers contains some impressively enormous numbers. How were these actually computed ? The associated OEIS pages (A007691 & A159907) just ...
1answer
60 views
### What is the fastest computational graph theory package?
What is the fastest computational graph theory package with respect to executing algorithms and computing graph theoretic data? I am aware of this related question, which requests graph theory ...
1answer
47 views
### How do I determine if two of my software's representation of algebraic numbers are equal?
I have software which stores information about algebraic numbers with absolute precision. If you build it up by creating instances of a Python representation of an integer, float, Decimal, or string, ...
2answers
137 views
### Sequences of a computable function
Is there any computable function $f(n)$, which given any integer $n$ has been proven to return either $0$ or $1$ in finite time, and for which the statement "$f(1), f(2), f(3),\ldots$ contains ...
2answers
37 views
### Does rationalizing the denominator lead to more or less round-off error?
I evaluated $\frac{1}{\sqrt{2}}$ and $\frac{\sqrt{2}}{2}$ in Matlab, and got a slight difference: $0.707106781186547$ and $0.707106781186548$, respectively. Which is more accurate, the one with the ...
3answers
85 views
### Drawing graphs (vertices and edges) with or without technology
Given a collection of vertices $V$ and a collection of edges $E \subseteq V\times V$, is there an algorithm or program that will allow you to draw a nice graph? The placing of the vertices is very ...
1answer
73 views
### Best graphing program for Mac or PC?
I just bought the highest end iMac, with a student discount, of course, and was wondering what is the best graphing program out there. A program that can graph any equation that I throw at it AND one ...
1answer
91 views
### Matrix completion
I need to find an algorithm (if exists) of the following matrix completion problem. I need to construct $n^2$ positive semi-definite matrices, say $\{P_i\}_{i=1}^n$. Entries of these matrices are ...
1answer
76 views
### Minimizing mean squared error
I want to find a $d$ that minimizes the value of the expression below. I think the first step is to find the derivative w.r.t. $d$ (is that correct? If not, what is the first step?). If so, I'm having ...
1answer
48 views
### Where does the input x in Turing Machine subroutines come from in solving reductions to undecidable problems?
I'm taking an introduction to computation theory class and we went over the chapter on undecidable problems and proving undecidability through reductions. I can't seem to grasp some of the simplest ...
2answers
84 views
### How to get the minimum angle between two crossing lines?
I'm not a student, I'm just a programmer trying to solve a problem ... I just need the practical way to calculate the smallest angle between two lines that intersect. The value, of course, must always ...
0answers
58 views
### Drawing finite state machines based on regular expressions
I have problems drawing the following finite state machines which use the {0,1} alphabet: Are these DFAs or NFAs? How to determine the latter? Tyvm for your time and help!
3answers
115 views
### Mathematical Limitations of Computer Experiments
One problem that has always bothered me is the limitations of computers in studying math. With a chaotic dynamical system, for example, we know mathematically that they possess trajectories that never ...
0answers
47 views
### Solving an overdetermined system of inequalities using null-space arguments
The solutions to a linear system of equations: $$A\cdot x = b$$ (where $x$ is a $(n\times 1)$ column vector, $b$ is a $(m\times 1)$ column vector and $A$ is $(m\times n)$ matrix) can all be ...
1answer
35 views
### Discrete numerical derivative with respect to d/d(n*x)
How can I generate a stencil for a d/d(n*x) operator? I am writing a program that needs a method to calculate line derivatives in an image. If we want to calculate the simplest forward derivative ...
0answers
35 views
### Qubit state finding
Suppose we have two qubits in the state $x|00\rangle+y|11\rangle$. What is the resulting state of the second qubit in that case? Use and to denote and respectively.
2answers
57 views
### Can product of all pairwise sums be computed faster than the naive method?
Let $S$ be a set of integers. $|S|=n$. Can we find the product $\prod_{a,b\in S} a+b$ faster than naively add all pairs then multiply them one by one? By faster, I mean use less than $O(n^2)$ ...
1answer
47 views
### How to find a close form expression in terms generating functions for the triple summation
Given $$\sum\limits_{i=0}^\infty a_i z^i=A(z)$$ and $$\sum\limits_{i=1}^\infty b_i z^i=B(z)$$ and $$\sum\limits_{i=0}^\infty c_i z^i=C(z)$$ Find \$\sum\limits_{i=1}^\infty\sum\limits_{j=0}^\infty a_j ...
6answers
533 views
### Fastest Square Root Algorithm
What is the fastest algorithm for finding the square root of a number? I created one that can find the square root of "987654321" to 16 decimal places in just 20 iterations (I'm not ready to release ...
1answer
33 views
### Amicable numbers
Def: a pair natural numbers $a$, $b$, $a\ne b$ are an Amicable pair if $\sum_{d|a,a\ne d}d = b$ and $\sum_{d|b, b\ne d}d = a$. Ok. So I'm trying to optimize a calculation for finding the number of ...
3answers
117 views
### Understanding recursive definitions of a language.
I am having difficulty understanding the recursive definition of a language. The problem asked how to write this non recursively. But I want to understand just how a recursive definition of a ...
0answers
36 views
### Function design and Super-Symmetry Test Requirements
I request everyone concerned to give a serious thought and respond. If you are asked to design two independent functions say S() and P() and should satisfy the following conditions: There will be ...
1answer
39 views
### Probability : Dividing a list into 2 classes
I have a list of integer numbers ($n$). I am dividing it into two parts $n_1$ (smaller) and $n_2$ (bigger) such that the length of $n_1 \ge a*n$; $a$ is positive and $a \lt 0.5$. What is the ...
1answer
99 views
### Fractional part of exp(x)
I have a real number $x$ (for concreteness, say $10^4<x<10^6$) and would like to find $e^x-\lfloor e^x\rfloor$ to reasonable precision (10-20 decimal places). What is the most efficient method? ...
1answer
158 views
### A search for integers which can be written as a sum of two squares in multiple ways
As part of a number theory hobby project, I'm looking for a computational way to enumerate all integers $n$ which can be written as a sum of two integer squares in three or more ways. The range of ...
2answers
475 views
### How to handle big powers on big numbers e.g. $n^{915937897123891}$
I'm struggling with the way to calculate an expression like $n^{915937897123891}$ where $n$ could be really any number between 1 and the power itself. I'm trying to program (C#) this and therefor ...
0answers
75 views
### How to convert a hologram into an image?
Suppose one knows in full detail the phase and intensity of monochromatic light in a plane. This is basically what a hologram records, at least for some section of a plane. By using this as the ...
0answers
201 views
### Simple Lanczos algorithm code to obtain eigenvalues and eigenvectors of a symmetric matrix
I would like to write a simple program (in C) using Lanczos algorithm. I came across a Matlab example which helped me to understand a bit further the algorithm, however from this piece of code I can't ...
0answers
67 views
### How to make normalized cross correlation robust to small changes in uniform regions
the problem is described below: Given 2 sets of data: A= { 91 87 85 85 84 90 85 83 86 86 90 86 84 89 93 87 89 91 95 97 91 92 97 101 101 }, B = {133 130 129 131 133 136 131 131 135 135 133 133 133 ...
7answers
744 views
### Rapid approximation of $\tanh(x)$
This is kind of a signal processing/programming/mathematics crossover question. At the moment it seems more math-related to me, but if the moderators feel it belongs elsewhere please feel free to ...
2answers
92 views
### Number of ways to move 1 or more elements from one list to the previous list until one list remains
Given N elements, divided into at most N groups, which are then labeled 1 thru N, move all of the elements into the group labeled 1. By moving 1 to all of the elements, in group i to i-1. This means ...
1answer
144 views
### Using sympy im Python to substitute values into a 10x10 matrix
'm using the symbolic package sympy to store a 10X10 antisymmetric matrix in terms of 10 variables. and then at every iteration step, i substitute numerical values into the entries of the matrix. ...
1answer
216 views
### Show two finite state machines are equivalent
Suppose $M_1 = \langle Q_1,S,R,f_1,g_1\rangle$ and $M_2 = \langle Q_2,S,R,f_2,g_2\rangle$ are two strongly connected machines. I need to show that $M_1 \equiv M_2$ iff there exist a state $p \in Q_1$ ...
http://mathoverflow.net/revisions/52742/list
## Return to Answer
Here's a pretty abstract answer. Bill Lawvere, working on his ideas about "axiomatic cohesion", often talks about a Nullstellensatz that at first sight has nothing whatsoever to do with zeros of polynomials.
Axiomatic cohesion is about pinning down the properties that a category of "spaces" should have. Here the word "space" is up for negotiation; the word "cohesion" is to indicate that a space should somehow cohere to itself. The typical situation when you have a category Sp of spaces (in whatever sense) is that there's a string of adjoint functors $$\pi_0 \dashv D \dashv U \dashv I$$ between Sp and Set, where
• $\pi_0$ gives the set of connected components of a space
• $D$ gives the discrete space on a set
• $U$ gives the set of points of a space
• $I$ gives the indiscrete or codiscrete space on a set.
A couple of axioms are imposed on these adjunctions. Under those axioms, there are canonical natural transformations $U \to \pi_0$ and $D \to I$, and the former is an epimorphism iff the latter is a monomorphism. Lawvere calls this property (that $U \to \pi_0$ is an epimorphism) the Nullstellensatz.
In concrete terms, this says something like: for a space $X$, the quotient map from $X$ to its set of connected-components is surjective. Why call that the Nullstellensatz? I have no idea. Here's the reference (top of p.44).
In similar usage, Colin McLarty says (bottom of p.125) that a topos satisfies the Nullstellensatz if for every nonempty object $X$ there is at least one map $1 \to X$. Again I don't understand the usage. Maybe someone else will wander along and help out.
Update: Peter Johnstone has just published a paper all about this. He prefers the term "punctual local connectedness" to "Nullstellensatz". Here it is: http://tac.mta.ca/tac/volumes/25/3/25-03abs.html
http://mathoverflow.net/questions/56892?sort=votes
## Entropy of nested compact invariant sets
Let $f$ be a homeomorphism on a compact metric space $X$. $K_1\supset K_2\supset\cdots \supset K$ are compact subsets of $X$ such that $f(K_n)=K_n$ and $K=\bigcap K_n$. If $h(f, K_1)<\infty$, do we always have $h(f,K)=\lim h(f,K_n)$?
I can show that this is true for $C^\infty$ diffeomorphisms where the entropy map of invariant measures is upper semi-continuous.
OK. I see the point. This is definitely false for the most general case. For example, take the union of countably many hyperbolic toral automorphisms of shrinking size and add one point with the identity map on it.
However, what if $f$ is a diffeomorphism on a compact manifold? I still expect negative answer.
-
## 2 Answers
The answer is negative: consider the map $\varphi:z\mapsto z^2$ acting on the unit disc $D(1)$ of $\mathbb{C}$. We know that this map has entropy $\log 2$ in restriction to every circle centered at $0$, and in fact since the dynamics is trivial transversally to this circle we have $h(\varphi,D(1))=\log 2$. Now, consider $K_n=D(1/n)$: we still have $h(\varphi,K_n)=\log 2$ since $\varphi_{|K_n}$ is conjugated to $\varphi$. But then $K=\{0\}$ and $h(\varphi,\{0\})=0$!
Ok, I cheated: $\varphi$ is not a homeomorphism. But there is an obvious way to extend the above example to a homeomorphism. First take $\psi:Y\to Y$ a positive-entropy one, then construct its cone $c\psi : cY\to cY$ as follows. First, $cY$ is the topological cone over $Y$, that is, $cY=(Y\times[0,1])/((x,0)\sim (y,0))$. Then $c\psi$ is the map induced on this quotient by $(x,t)\mapsto (\psi(x),t)$. Then $c\psi$ is a homeomorphism, and has the same entropy as $\psi$. But taking $K_n$ to be the trace on the quotient of $Y\times[0,1/n]$, you get a sequence of compacts on which the dynamics is the same as for the full map, but whose intersection is reduced to a point.
-
The strategy of Benoît Kloeckner fails for differentiable maps. Indeed, if $K$ is a single point and the $K_n$ are invariant balls around $K$, this implies that $K$ is an attracting fixed point. Therefore the log of the differential of the diffeo $f$ should be close to zero near $K$, and so should the entropy.
However, counter-examples in any finite smoothness ($C^r$ maps with $1\leq r<+\infty$) were given by Misiurewicz in the early seventies:
Diffeomorphism without any measure with maximal entropy, Bull. Acad. Pol. Sci., Ser. sci. math., astr. et phys. 21 (1973), 903--910
-
http://blog.hvidtfeldts.net/index.php/category/fractals/page/2/
Distance Estimated 3D Fractals (VII): Dual Numbers
Posted on December 13, 2011 by
Previous parts: part I, part II, part III, part IV, part V, and part VI.
This was supposed to be the last blog post on distance estimated 3D fractals, but then I stumbled upon the dual number formulation, and decided it would blend in nicely with the previous post. So this blog post will be about dual numbers, and the next (and probably final) post will be about hybrid systems, heightmap rendering, interior rendering, and links to other resources.
Dual Numbers
Many of the distance estimators covered in the previous posts used a running derivative. This concept can be traced back to the original formula for the distance estimator for the Mandelbrot set, where the derivative is described iteratively in terms of the previous values:
$$f'_n(c) = 2f_{n-1}(c)f'_{n-1}(c)+1$$
In the previous post, we saw how the Mandelbox could be described by a running Jacobian matrix, and how this matrix could be replaced by a single running scalar derivative, since the Jacobians for the conformal transformations all have a particularly simple form (and thanks to Knighty the argument was extended to non-Julia Mandelboxes).
Now, some months ago I stumbled upon automatic differentiation and dual numbers, and after having done some tests, I think this is a very nice framework to complete the discussion of distance estimators.
So what are these dual numbers? The name might sound intimidating, but the concept is very simple: we extend the real numbers with another component – much like the complex numbers:
$$x = (x_r, x_d) = x_r + x_d \epsilon$$
where $$\epsilon$$ is the dual unit, similar to the imaginary unit i for the complex numbers. The square of a dual unit is defined as: $$\epsilon * \epsilon = 0$$.
Now for any function which has a Taylor series, we have:
$$f(x+dx) = f(x) + f'(x)dx + (f''(x)/2)dx^2 + \ldots$$
If we let $$dx = \epsilon$$, it follows:
$$f(x+\epsilon) = f(x) + f'(x)\epsilon$$
because the higher order terms vanish. This means that if we evaluate our function with a dual number $$d = x + \epsilon = (x,1)$$, we get a dual number back, (f(x), f'(x)), where the dual component contains the derivative of the function.
Compare this with the finite difference scheme for obtaining a derivative. Take a quadratic function as an example and evaluate its derivative, using a step size ‘h’:
$$f(x) = x*x$$
This gives us the approximate derivative:
$$f'(x) \approx \frac {f(x+h)-f(x)}{h} = \frac { x^2 + 2*x*h + h^2 - x^2 } {h} = 2*x+h$$
The finite difference scheme introduces an error, here equal to h. The error always gets smaller as h gets smaller (as it converges towards to the true derivative), but numerical differentiation introduces inaccuracies.
Compare this with the dual number approach. For dual numbers, we have:
$$x*x = (x_r+x_d\epsilon)*(x_r+x_d\epsilon) = x_r^2 + (2 * x_r * x_d )\epsilon$$.
Thus,
$$f(x_r + \epsilon) = x_r^2 + (2 * x_r)*\epsilon$$
Since the dual component is the derivative, we have f'(x) = 2*x, which is the exact answer.
But the real beauty of dual numbers is that they make it possible to keep track of the derivative during the actual calculation, using forward accumulation. Simply by replacing all numbers in our calculations with dual numbers, we will end up with the answer together with the derivative. Wikipedia has a very nice article that explains this in more detail: Automatic Differentiation. The article also lists several arithmetic rules for dual numbers.
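To see how little machinery this needs, here is a tiny dual-number class in Python (my own illustration; it is not the code used in any of the renderers discussed here, and only addition and multiplication are implemented):

```python
class Dual:
    """x_r + x_d * eps with eps*eps = 0; the dual part carries the derivative."""
    def __init__(self, real, dual=0.0):
        self.real, self.dual = real, dual
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.real + o.real, self.dual + o.dual)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.real * o.real,
                    self.real * o.dual + self.dual * o.real)
    __rmul__ = __mul__

def f(x):
    return x * x * x + 2 * x + 1        # f(x) = x^3 + 2x + 1, so f'(x) = 3x^2 + 2

y = f(Dual(2.0, 1.0))                   # seed the dual part with 1 to get d/dx
print(y.real, y.dual)                   # 13.0 14.0
```

Seeding the dual part with 1 and running the ordinary arithmetic forward is exactly the forward accumulation described above; the renderer code below does the same thing with three dual vectors at once.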
For the Mandelbox, we have a defining function R(p), which returns the length of p, after having been through a fixed number of iterations of the Mandelbox formula: scale*spherefold(boxfold(z))+p. The DE is then DE = R/DR, where DR is the length of the gradient of R.
R is a scalar-valued vector function. To find the gradient we need to find the derivative along the x,y, and z direction. We can do this using dual vectors and evaluate the three directions, e.g. for the x-direction, evaluate $$R(p_r + \epsilon (1,0,0))$$. In practice, it is more convenient to keep track of all three dual vectors during the calculation, since we can reuse part of the calculations. So we have to use a 3×3 matrix to track our derivatives during the calculation.
Here is some example code for the Mandelbox:
```// simply scale the dual vectors
void sphereFold(inout vec3 z, inout mat3 dz) {
float r2 = dot(z,z);
if (r2 < minRadius2) {
float temp = (fixedRadius2/minRadius2);
z*= temp; dz*=temp;
} else if (r2<fixedRadius2) {
float temp =(fixedRadius2/r2);
dz[0] =temp*(dz[0]-z*2.0*dot(z,dz[0])/r2);
dz[1] =temp*(dz[1]-z*2.0*dot(z,dz[1])/r2);
dz[2] =temp*(dz[2]-z*2.0*dot(z,dz[2])/r2);
z*=temp; dz*=temp;
}
}
// reverse signs for dual vectors when folding
void boxFold(inout vec3 z, inout mat3 dz) {
if (abs(z.x)>foldingLimit) { dz[0].x*=-1; dz[1].x*=-1; dz[2].x*=-1; }
if (abs(z.y)>foldingLimit) { dz[0].y*=-1; dz[1].y*=-1; dz[2].y*=-1; }
if (abs(z.z)>foldingLimit) { dz[0].z*=-1; dz[1].z*=-1; dz[2].z*=-1; }
z = clamp(z, -foldingLimit, foldingLimit) * 2.0 - z;
}
float DE(vec3 z)
{
// dz contains our three dual vectors,
// initialized to x,y,z directions.
mat3 dz = mat3(1.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0,1.0);
vec3 c = z;
mat3 dc = dz;
for (int n = 0; n < Iterations; n++) {
boxFold(z,dz);
sphereFold(z,dz);
z*=Scale;
dz=mat3(dz[0]*Scale,dz[1]*Scale,dz[2]*Scale);
z += c*Offset;
dz +=matrixCompMult(mat3(Offset,Offset,Offset),dc);
if (length(z)>1000.0) break;
}
return dot(z,z)/length(z*dz);
}
```
The 3×3 matrix dz contains our three dual vectors (they are stored as columns in the matrix, dz[0], dz[1], dz[2]).
In order to calculate the dual numbers, we need to know how to calculate the length of z, and how to divide by the length squared (for sphere folds).
Using the definition of the product for dual numbers, we have:
$$|z|^2 = z \cdot z = z_r^2 + (2 z_r \cdot z_d)*\epsilon$$
For the length, we can use the power rule, as defined on Wikipedia:
$$|z_r + z_d \epsilon| = \sqrt{z_r^2 + (2 z_r \cdot z_d)*\epsilon} = |z_r| + \frac{(z_r \cdot z_d)}{|z_r|}*\epsilon$$
Using the rule for division, we can derive:
$$z/|z|^2=(z_r+z_d \epsilon)/( z_r^2 + 2 z_r \cdot z_d \epsilon)$$
$$= z_r/z_r^2 + \epsilon (z_d*z_r^2-2z_r(z_r \cdot z_d))/z_r^4$$
Given these rules, it is relatively simple to update the dual vectors: For the sphereFold, we either multiply by a real number or use the division rule above. For the boxFold, there is both multiplication (sign change), and a translation by a real number, which is ignored for the dual numbers. The (real) scaling factor is also trivially applied to both real and dual vectors. Then there is the addition of the original vector, where we must remember to also add the original dual vector.
Finally, using the length as derived above, we find the length of the full gradient as:
$$DR = \sqrt{ (z_r \cdot z_x)^2 + (z_r \cdot z_y)^2 + (z_r \cdot z_z)^2 } / |z_r|$$
In the code example, the vectors are stored in a matrix, which makes a more compact notation possible: DR = length(z*dz)/length(z), leading to the final DE = R/DR = dot(z,z)/length(z*dz)
There are some advantages to using the dual numbers approach:
• Compared to the four-point Makin/Buddhi finite difference approach, the arbitrary epsilon (step distance) is avoided, which should give better numerical accuracy. It is also slightly faster computationally.
• Very general – e.g. works for non-conformal cases, where running scalar derivatives fail. The images here are from a Mandelbox where a different scaling factor was applied to each direction (making them non-conformal). This is not possible to capture in a running scalar derivative.
On the other hand, the method is slower than using running scalar estimators. And it does require code changes. It should be mentioned that libraries exist for languages supporting operator overloading, such as C++.
Since we find the gradient directly in this method, we can also use it as a surface normal – this is also an advantage compared to the scalar derivatives, which normally use a finite difference scheme for the normals. Using the code example, the normal is:
```// (Unnormalized) normal
vec3 normal = vec3(dot(z,dz[0]),dot(z,dz[1]),dot(z,dz[2]));
```
It should be noted that in my experiments, I found the finite difference method produced better normals than the above definition. Perhaps because it smooths them? The problem was somehow solved by backstepping a little before calculating the normal, but this again introduces an arbitrary distance step.
Now, I said the scalar method was faster – and for a fixed number of ray steps it is – but let us take a closer look at the distance estimator function:
The above image shows a sliced Mandelbox.
The graph in the lower right corner shows a plot of the DE function along a line (two dimensions held fixed): the blue curve is the DE function, and the red line shows the derivative of the DE function. The function is plotted for the dual-number derived DE function. We can see that our DE is well-behaved here: for a consistent DE the slope can never be higher than 1, and when we move away from the side of the Mandelbox in a perpendicular direction the derivative of the DE should be plus or minus one.
Now compare this to the scalar estimated DE:
Here we see that the DE is less optimal – the slope is ~0.5 for this particular line graph. Actually, the slope would be close to one if we omitted the '+1' term for the scalar estimator, but then it overshoots slightly in some places inside the Mandelbox.
We can also see that there are holes in our Mandelbox – this is because for this fixed number of ray steps, we do not get close enough to the fractal surface to hit it. So even though the scalar estimator is faster, we need to crank up the number of ray steps to achieve the same quality.
Final Remarks
The whole idea of introducing dual derivatives of the three unit vectors seems to be very similar to keeping a running Jacobian matrix estimator – and I believe the methods are essentially identical. After all, we are trying to achieve the same thing: keeping a running record of how the R(p) function changes when we vary the input along the axes.
But I think the dual numbers offer a nice theoretical framework for calculating the DE, and I believe they could be more accurate and faster than finite difference four-point gradient methods. However, more experiments are needed before this can be asserted.
Scalar estimators will always be the fastest, but they are probably only optimal for conformal systems – for non-conformal systems, it seems necessary to introduce terms that make them too conservative, as demonstrated by the Mandelbox example.
The final part contains all the stuff that didn’t fit in the previous posts, including references and links.
Posted in Distance Estimation, Fractals, Fragmentarium |
Distance Estimated 3D Fractals (VI): The Mandelbox
Posted on November 11, 2011 by
Previous parts: part I, part II, part III, part IV and part V.
After the Mandelbulb, several new types of 3D fractals appeared at Fractal Forums. Perhaps one of the most impressive and unique is the Mandelbox. It was first described in this thread, where it was introduced by Tom Lowe (Tglad). Similar to the original Mandelbrot set, an iterative function is applied to points in 3D space, and points which do not diverge are considered part of the set.
Tom Lowe has a great site, where he discusses the history of the Mandelbox, and highlights several of its properties, so in this post I’ll focus on the distance estimator, and try to make some more or less convincing arguments about why a scalar derivative works in this case.
The Mandelbulb and Mandelbrot systems use a simple polynomial formula to generate the escape-time sequence:
$$z_{n+1} = z_n^\alpha + c$$
The Mandelbox uses a slightly more complex transformation:
$$z_{n+1} = scale*spherefold(boxfold(z_n)) + c$$
I have mentioned folds before. These are simply conditional reflections.
A box fold is a similar construction: if the point, p, is outside a box with a given side length, reflect the point in the box side. Or as code:
```if (p.x>L) { p.x = 2.0*L-p.x; } else if (p.x<-L) { p.x = -2.0*L-p.x; }
```
(this must be done for each dimension. Notice, that in GLSL this can be expressed elegantly in one single operation for all dimensions: p = clamp(p,-L,L)*2.0-p)
The sphere fold is a conditional sphere inversion. If a point, p, is inside a sphere with a fixed radius, R, we will reflect the point in the sphere, e.g:
```float r = length(p);
if (r<R) p=p*R*R/(r*r);
```
(Actually, the sphere fold used in most Mandelbox implementations is slightly more complex and adds an inner radius, where the length of the point is scaled linearly).
Now, how can we create a DE for the Mandelbox?
Again, it turns out that it is possible to create a scalar running derivative based distance estimator. I think the first scalar formula was suggested by Buddhi in this thread at fractal forums. Here is the code:
```float DE(vec3 z)
{
vec3 offset = z;
float dr = 1.0;
for (int n = 0; n < Iterations; n++) {
boxFold(z,dr); // Reflect
sphereFold(z,dr); // Sphere Inversion
z=Scale*z + offset; // Scale & Translate
dr = dr*abs(Scale)+1.0;
}
float r = length(z);
return r/abs(dr);
}
```
where the sphereFold and boxFold may be defined as:
```void sphereFold(inout vec3 z, inout float dz) {
float r2 = dot(z,z);
if (r2<minRadius2) {
// linear inner scaling
float temp = (fixedRadius2/minRadius2);
z *= temp;
dz*= temp;
} else if (r2<fixedRadius2) {
// this is the actual sphere inversion
float temp =(fixedRadius2/r2);
z *= temp;
dz*= temp;
}
}
void boxFold(inout vec3 z, inout float dz) {
z = clamp(z, -foldingLimit, foldingLimit) * 2.0 - z;
}
```
It is possible to simplify this even further by storing the scalar derivative as the fourth component of a 4-vector. See Rrrola's post for an example.
However, one thing that is missing, is an explanation of why this distance estimator works. And even though I do not completely understand the mechanism, I'll try to justify this formula. It is not a strict derivation, but I think it offers some understanding of why the scalar distance estimator works.
A Running Scalar Derivative
Let us say, that for a given starting point, p, we obtain a length, R, after having applied a fixed number of iterations. If the length is less than $$R_{min}$$, we consider the orbit to be bounded and thus part of the fractal; otherwise it is outside the fractal set. We want to obtain a distance estimate for this point p. Now, the distance estimate must tell us how far we can go in any direction, before the final radius falls below the minimum radius, $$R_{min}$$, and we hit the fractal surface. One distance estimate approximation would be to find the direction where R decreases fastest, and do a linear extrapolation to estimate when R becomes less than $$R_{min}$$:
$$DE=(R-R_{min})/DR$$
where DR is the magnitude of the derivative along this steepest descent (this is essentially Newton root finding).
In the previous post, we argued that the linear approximation to a vector-function is best described using the Jacobian matrix:
\(
F(p+dp) \approx F(p) + J(p)dp
\)
The fastest decrease is thus given by the induced matrix norm of J, since the matrix norm is the maximum of $$|J v|$$ for all unit vectors v.
So, if we could calculate the (induced) matrix norm of the Jacobian, we would arrive at a linear distance estimate:
\(
DE=(R-R_{min})/\|J\|
\)
Calculating the Jacobian matrix norm sounds tricky, but let us take a look at the different transformations involved in the iteration loop: Reflections (R), Sphere Inversions (SI), Scalings (S), and Translations (T). It is also common to add a rotation (ROT) inside the iteration loop.
Now, for a given point, we will end applying an iterated sequence of operations to see if the point escapes:
\(
Mandelbox(p) = (T\circ S\circ SI\circ R\circ \ldots\circ T\circ S\circ SI\circ R)(p)
\)
In the previous part, we argued that the most obvious derivative for an R^3 to R^3 function is a Jacobian. According to the chain rule for Jacobians, the Jacobian for a function such as this Mandelbox(z) will be of the form:
\(
J_{Mandelbox} = J_T * J_S * J_{SI} * J_R .... J_T * J_S * J_{SI} * J_R
\)
In general, all of these matrices will be functions on R^3, which should be evaluated at different positions. Now, let us take a look at the individual Jacobian matrices for the Mandelbox transformations.
Translations
A translation by a constant will simply have an identity matrix as Jacobian matrix, as can be seen from the definitions.
Reflections
Consider a simple reflection in one of the coordinate system planes. The transformation matrix for this is:
\(
T_{R} = \begin{bmatrix}
1 & & \\
& 1 & \\
& & -1
\end{bmatrix}
\)
Now, the Jacobian of a transformation defined by multiplying with a constant matrix is simply the constant matrix itself. So the Jacobian is also simply a reflection matrix.
Rotations
A rotation (for a fixed angle and rotation vector) is also a constant matrix, so the Jacobian is again simply a rotation matrix.
Scalings
The Jacobian for a uniform scaling operation is:
\(
J_S = scale*\begin{bmatrix}
1 & & \\
& 1 & \\
& & 1
\end{bmatrix}
\)
Sphere Inversions
Below can be seen how the sphere fold (the conditional sphere inversion) transforms a uniform 2D grid. As can be seen, the sphere inversion is an anti-conformal transformation - the angles are still 90 degrees at the intersections, except for the boundary where the sphere inversion stops.
The Jacobian for sphere inversions is the most tricky. But a derivation leads to:
\(
J_{SI} = (r^2/R^2) \begin{bmatrix}
1-2x^2/R^2 & -2xy/R^2 & -2xz/R^2 \\
-2yx/R^2 & 1-2y^2/R^2 & -2yz/R^2 \\
-2zx/R^2 & -2zy/R^2 & 1-2z^2/R^2
\end{bmatrix}
\)
Here R is the length of p, and r is radius of the inversion sphere. I have extracted the scalar front factor, so that the remaining part is an orthogonal matrix (as is also demonstrated in the derivation link).
Notice that the reflection, translation, and rotation Jacobian matrices will not change the length of a vector when multiplied with it. The Jacobian for the scaling, however, will multiply the length with the scale factor, and the Jacobian for the sphere inversion will multiply the length by a factor of (r^2/R^2) (notice that the length of the point must be evaluated at the correct point in the sequence).
Now, if we want to calculate the matrix norm of the Jacobian:
\(
|| J_{Mandelbox} || = || J_T * J_S * J_{SI} * J_R .... J_T * J_S * J_{SI} * J_R ||
\)
we can easily do so, since we only need to keep track of the scalar factor whenever we encounter a Scaling Jacobian or a Sphere Inversion Jacobian. All the other matrices simply do not change the length of a given vector and may be ignored. Also notice that only the sphere inversion depends on the point where the Jacobian is evaluated - if this operation was not present, we could simply count the number of scalings performed, n, and multiply the escape length by $$scale^{-n}$$.
This means that the matrix norm of the Jacobian can be calculated using only a single scalar variable, which is scaled whenever we apply the scaling or sphere inversion operation.
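Written out as a worked equation (this is my own summary of the argument above): each Jacobian factor is either an orthogonal matrix or a scalar times an orthogonal matrix, so the scalars can be pulled out, and the remaining product of orthogonal matrices has norm one:
\(
\|J_{Mandelbox}\| = \|(s_1 O_1)(s_2 O_2)\cdots(s_m O_m)\| = |s_1 s_2 \cdots s_m|\,\|O_1 O_2 \cdots O_m\| = |s_1 s_2 \cdots s_m|
\)
where $$s_k$$ is 1 for translations, reflections, and rotations, equal to the scale factor for scalings, and equal to $$r^2/R^2$$ for sphere inversions.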
This seems to hold for all conformal transformations (strictly speaking sphere inversions and reflections are not conformal, but anti-conformal, since orientations are reversed). Wikipedia also mentions that any function with a Jacobian equal to a scalar times a rotation matrix must be conformal, and it seems the converse is also true: any conformal or anti-conformal transformation in 3D has a Jacobian equal to a scalar times an orthogonal matrix.
Final Remarks
There are some reasons why I'm not completely satisfied with the above derivation: first, the translational part of the Mandelbox transformation is not really a constant. It would be, if we were considering a Julia-type Mandelbox, where you add a fixed vector at each iteration, but here we add the starting point, and I'm not sure how to express the Jacobian of this transformation. Still, it is possible to do Julia-type Mandelbox fractals (they are quite similar), and here the derivation should be more sound. The transformations used in the Mandelbox are also conditional, and not simple reflections and sphere inversions, but I don't think that matters with regard to the Jacobian, as long as the same conditions are used when calculating it.
Update: As Knighty pointed out in the comments below, it is possible to see why the scalar approximation works in the Mandelbrot case too:
Let us go back to the original formula:
$$f(z) = scale*spherefold(boxfold(z)) + c$$
and take a look at its Jacobian:
$$J_f = J_{scale}*J_{spherefold}*J_{boxfold} + I$$
Now by using the triangle inequality for matrix norms, we get:
$$||J_f|| = ||J_{scale}*J_{spherefold}*J_{boxfold} + I||$$
$$\leq ||J_{scale}*J_{spherefold}*J_{boxfold}|| + ||I||$$
$$= S_{scale}*S_{spherefold}*S_{boxfold} + 1$$
where the S's are the scalars for the given transformations. This argument can also be applied to repeated applications of the Mandelbox transformation. This means that if we add one to the running derivative at each iteration (like in the Mandelbulb case), we get an upper bound of the true derivative. And since our distance estimate is calculated by dividing with the running derivative, this approximation yields a smaller distance estimate than the true one (which is good).
Another point is that it is striking that we end up with the same scalar estimator as for the tetrahedron in part 3 (except that it has no sphere inversion). But for the tetrahedron, the scalar estimator was based on straightforward arguments, so perhaps it is possible to come up with a much simpler argument for the running scalar derivative for the Mandelbox as well.
There must also be some kind of link between the gradient and the Jacobian norm. It seems that the norm of the Jacobian should equal the length of the gradient of the magnitude of the Mandelbox(p) function: ||J|| = |grad |MB(p)||, since they both describe how fast the length varies along the steepest descent path. This would also make the link to the gradient based numerical methods (discussed in part 5) clearer.
And finally, if we reused our argumentation for using a linear zero-point approximation of the escape length to the Mandelbulb, it just doesn't work. Here it is necessary to introduce a log-term ($$DE= 0.5*r*log(r)/dr$$). Of course, the Mandelbulb is not composed of conformal transformations, so the "Jacobian to scalar running derivative" argument is not valid anymore, but we already have an expression for the scalar running derivative for the Mandelbulb, and this expression does not seem to work well with the $$DE=(r-r_{min})/dr$$ approximation. So it is not clear under what conditions this approximation is valid. Update: Again, Knighty makes some good arguments below in the comments for why the linear approximation holds here.
The next part is about dual numbers and distance estimation.
Posted in Distance Estimation, Fractals, Fragmentarium |
Distance Estimated 3D Fractals (V): The Mandelbulb & Different DE Approximations
Posted on September 20, 2011 by
Previous posts: part I, part II, part III and part IV.
The last post discussed the distance estimator for the complex 2D Mandelbrot:
(1) $$DE=0.5*ln(r)*r/dr$$,
with ‘dr’ being the length of the running (complex) derivative:
(2) $$f’_n(c) = 2f_{n-1}(c)f’_{n-1}(c)+1$$
In John Hart’s paper, he used the exact same form to render a Quaternion system (using four-component Quaternions to keep track of the running derivative). In the paper, Hart never justified why the complex Mandelbrot formula also should be valid for Quaternions. A proof of this was later given by Dang, Kaufmann, and Sandin in the book Hypercomplex Iterations: Distance Estimation and Higher Dimensional Fractals (2002).
I used the same distance estimator formula when drawing the 3D hypercomplex images in the last post – it seems to be quite generic and applicable to most polynomial escape time fractals. In this post we will take a closer look at how this formula arises.
The Mandelbulb
But first, let us briefly return to the 2D Mandelbrot equation: $$z_{n+1} = z_{n}^2+c$$. Now, squaring complex numbers has a simple geometric interpretation: if the complex number is represented in polar coordinates, squaring the number corresponds to squaring the length, and doubling the angle (to the real axis).
This is probably what motivated Daniel White (Twinbee) to investigate what happens when turning to spherical 3D coordinates and squaring the length and doubling the two angles here. This makes it possible to get something like the following object:
In the image above, I made some cuts to emphasize the embedded 2D Mandelbrot.
Now, this object is not much more interesting than the triplex and Quaternion Mandelbrot from the last post. But Paul Nylander suggested that the same approach should be used for a power-8 formula instead: $$z_{n+1} = z_{n}^8+c$$, something which resulted in what is now known as the Mandelbulb fractal:
The power of eight is somewhat arbitrary here. A power seven or nine object does not look much different, but unexpectedly these higher power objects display a much more interesting structure than their power two counterpart.
Here is some example Mandelbulb code:
```float DE(vec3 pos) {
vec3 z = pos;
float dr = 1.0;
float r = 0.0;
for (int i = 0; i < Iterations ; i++) {
r = length(z);
if (r>Bailout) break;
// convert to polar coordinates
float theta = acos(z.z/r);
float phi = atan(z.y,z.x);
dr = pow( r, Power-1.0)*Power*dr + 1.0;
// scale and rotate the point
float zr = pow( r,Power);
theta = theta*Power;
phi = phi*Power;
// convert back to cartesian coordinates
z = zr*vec3(sin(theta)*cos(phi), sin(phi)*sin(theta), cos(theta));
z+=pos;
}
return 0.5*log(r)*r/dr;
}
```
It should be noted that several versions of the geometric formulas exist. The one above is based on doubling angles for spherical coordinates as they are defined on Wikipedia and is the same version as Quilez has on his site. However, in several places this form appears:
```float theta = asin( z.z/r );
float phi = atan( z.y,z.x );
...
z = zr*vec3( cos(theta)*cos(phi), cos(theta)*sin(phi), sin(theta) );
```
which results in a Mandelbulb object where the poles are similar, and where the power-2 version has the nice 2D Mandelbrot look depicted above.
I’ll not say more about the Mandelbulb and its history, because all this is very well documented on Daniel White’s site, but instead continue to discuss various distance estimators for it.
So, how did we arrive at the distance estimator in the code example above?
Following the same approach as for the 4D Quaternion Julia set, we start with our iterative function:
$$f_n(c) = f_{n-1}^8(c) + c, f_0(c) = 0$$
Deriving this function (formally) with respect to c, gives
(3) $$f’_n(c) = 8f_{n-1}^7(c)f’_{n-1}(c)+1$$
where the functions above are ‘triplex’ (3 component) valued. But we haven’t defined how to multiply two spherical triplex numbers. We only know how to square them! And how do we even derive a vector valued function with respect to a vector?
The Jacobian Distance Estimator
Since we have three different function components, which we can derive with three different number components, we end up with nine possible scalar derivatives. These may be arranged in a Jacobian matrix:
\(
J = \begin{bmatrix}
\frac {\partial f_x}{\partial x} & \frac {\partial f_x}{\partial y} & \frac {\partial f_x}{\partial z} \\
\frac {\partial f_y}{\partial x} & \frac {\partial f_y}{\partial y} & \frac {\partial f_y}{\partial z} \\
\frac {\partial f_z}{\partial x} & \frac {\partial f_z}{\partial y} & \frac {\partial f_z}{\partial z}
\end{bmatrix}
\)
The Jacobian behaves similarly to the lower-dimensional derivatives, in the sense that it provides the best linear approximation to F in a neighborhood of p:
(4) \(
F(p+dp) \approx F(p) + J(p)dp
\)
In formula (3) above this means, we have to keep track of a running matrix derivative, and use some kind of norm for this matrix in the final distance estimate (formula 1).
But calculating the Jacobian matrix above analytically is tricky (read the comments below from Knighty and check out his running matrix derivative example in the Quadray thread). Luckily, other solutions exist.
Let us start by considering the complex case once again. Here we also have a two-component function derived with respect to a two-component number. So why isn't the derivative of a complex function a 2×2 Jacobian matrix?
It turns out that for a complex function to be complex differentiable in every point (holomorphic), it must satisfy the Cauchy Riemann equations. And these equations reduce the four quantities in the 2×2 Jacobian to just two numbers! Notice, that the Cauchy Riemann equations are a consequence of the definition of the complex derivative in a point p: we require that the derivative (the limit of the difference) is the same, no matter from which direction we approach p (see here). Very interestingly, the holomorphic functions are exactly the functions that are conformal (angle preserving) – something which I briefly mentioned (see last part of part III) is considered a key property of fractal transformations.
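For reference, writing f(x+iy) = u + iv (with a and b defined below), the Cauchy-Riemann equations force the Jacobian to be a scalar times a rotation matrix:
\(
\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \quad
\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}
\;\Rightarrow\;
J = \begin{bmatrix} a & -b \\ b & a \end{bmatrix}
= \sqrt{a^2+b^2}\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}
\)
where $$a = \partial u/\partial x$$ and $$b = \partial v/\partial x$$ – which is exactly the conformal (angle preserving) structure mentioned above.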
What if we only considered conformal 3D transformations? This would probably imply that the Jacobian matrix of the transformation would be a scalar times a rotation matrix (see here, but notice they only claim the reverse is true). But since the rotational part of the matrix does not influence the matrix norm, this means we would only need to keep track of the scalar part – a single-component running derivative. Now, the Mandelbulb power operation is not a conformal transformation. But even though I cannot explain why, it is still possible to define a scalar derivative.
The Scalar Distance Estimator
It turns out the following running scalar derivative actually works:
(5) $$dr_n = 8|f_{n-1}(c)|^7dr_{n-1}+1$$
where 'dr' is a scalar function. I'm not sure who first came up with the idea of using a scalar derivative (but it might be Enforcer, in this thread) – but it is interesting that it works so well (it also works in many other cases, including for Quaternion Julia systems). Even though I don't understand why the scalar approach works, there is something comforting about it: remember that the original Mandelbulb was completely defined in terms of the square and addition operators. But in order to use the 3-component running derivative, we need to be able to multiply two arbitrary 'triplex' numbers! This bothered me, since it is possible to draw the Mandelbulb using e.g. a 3D voxel approach without knowing how to multiply arbitrary numbers, so I believe it should be possible to formulate a DE approach that doesn't use this extra information. And the scalar approach does exactly this.
The escape length gradient approximation
Let us return to formula (1) above:
(1) $$DE=0.5*ln(r)*r/dr$$,
The most interesting part is the running derivative ‘dr’. For the fractals encountered so far, we have been able to find analytical running derivatives (both vector and scalar valued), but as we shall see (when we get to the more complex fractals, such as the hybrid systems) it is not always possible to find an analytical formula.
Remember that 'dr' is the length of f'(z) (for complex and Quaternion numbers). In analogy with the complex and quaternion case, the function must be derived with regard to the 3-component number c. Deriving a vector-valued function with regard to a vector quantity suggests the use of a Jacobian matrix. Another approach is to take the gradient of the escape length: $$dr=|\nabla |z_{n}||$$ – while it is not clear to me why this is valid, it works in many cases as we will see:
David Makin and Buddhi suggested (in this thread) that instead of trying to calculate a running, analytical derivative, we could use a numerical approximation, and calculate the above mentioned gradient using the finite difference method we also used when calculating a surface normal in post II.
The only slightly tricky point is, that the escape length must be evaluated for the same iteration count, otherwise you get artifacts. Here is some example code:
```int last = 0;
float escapeLength(in vec3 pos)
{
vec3 z = pos;
for( int i=1; i<Iterations; i++ )
{
z = BulbPower(z, Power) + pos;
float r2 = dot(z,z);
if ((r2 > Bailout && last==0) || (i==last))
{
last = i;
return length(z);
}
}
return length(z);
}
float DE(vec3 p) {
last = 0;
float r = escapeLength(p);
if (r*r<Bailout) return 0.0;
gradient = (vec3(escapeLength(p+xDir*EPS), escapeLength(p+yDir*EPS), escapeLength(p+zDir*EPS))-r)/EPS;
return 0.5*r*log(r)/length(gradient);
}
```
Notice the use of the 'last' variable to ensure that all escapeLengths are evaluated at the same iteration count. Also notice that 'gradient' is a global variable – this is because we can reuse the normalized gradient as an approximation for our surface normal and save some calculations.
The approach above is used in both Mandelbulber and Mandelbulb 3D for the cases where no analytical solution is known. On Fractal Forums it is usually referred to as the Makin/Buddhi 4-point Delta-DE formula.
The potential gradient approximation
Now we need to step back and take a closer look at the origin of the Mandelbrot distance estimation formula. There is a lot of confusion about this formula, and unfortunately I cannot claim to completely understand all of this myself. But I’m slowly getting to understand bits of it, and want to share what I found out so far:
Let us start by the original Hart paper, which introduced the distance estimation technique for 3D fractals. Hart does not derive the distance estimation formula himself, but notes that:
Now, I haven't talked about this potential function, G(z), that Hart mentions above, but it is possible to define a potential with the properties that G(z)=0 inside the Mandelbrot set, and positive outside. This is the first thing that puzzled me: since G(z) tends toward zero near the border, the "log G(z)" term, and hence the entire expression, will become negative! As it turns out, the "log" term in the Hart paper is wrong. (And also notice that his formula (8) is wrong too – he must take the norm of the complex function f(z) inside the log function – otherwise the distance will end up being complex too.)
In The Science of Fractal Images (which Hart refers to above) the authors arrive at the following formula, which I believe is correct:
Similarly, in Hypercomplex Iterations the authors arrive at the same formula:
But notice that formula (3.17) is wrong here! I strongly believe it misses a factor two (in their derivation they have $$sinh(z) \approx \frac{z}{2}$$ for small z – but this is not correct: $$sinh(z) \approx z$$ for small z).
The approximation going from (3.16) to (3.17) is only valid for points close to the boundary (where G(z) approaches zero). This is no big problem, since for points far away we can restrict the maximum DE step, or put the object inside a bounding box, which we intersect before ray marching.
It can be shown that $$|Z_n|^{1/2^n} \to 1$$ for $$n \to \infty$$. By using this we end up with our well-known formula for the lower bound from above (in a slightly different notation):
(1) $$DE=0.5*ln(r)*r/dr$$,
Instead of using the above formula, we can work directly with the potential G(z). For $$n \to \infty$$, G(z) may be approximated as $$G(z)=log(|z_n|)/power^n$$, where ‘power’ is the polynomial power (8 for Mandelbulb). (This result can be found in e.g. Hypercomplex Iterations p. 37 for quadratic functions)
We will approximate the length of G’(z) as a numerical gradient again. This can be done using the following code:
```float potential(in vec3 pos)
{
vec3 z = pos;
for(int i=1; i<Iterations; i++ )
{
z = BulbPower(z, Power) + pos;
if (dot(z,z) > Bailout) return log(length(z))/pow(Power,float(i));
}
return 0.0;
}
float DE(vec3 p) {
float pot = potential(p);
if (pot==0.0) return 0.0;
gradient = (vec3(potential(p+xDir*EPS), potential(p+yDir*EPS), potential(p+zDir*EPS))-pot)/EPS;
return (0.5/exp(pot))*sinh(pot)/length(gradient);
}
```
Notice, that this time we do not have to evaluate the potential for the same number of iterations. And again we can store the gradient and reuse it as a surface normal (when normalized).
A variant using Subblue’s radiolari tweak
Quilez’ Approximation
I arrived at the formula above after reading Iñigo Quilez’ post about the Mandelbulb. There are many good tips in this post, including a fast trigonometric version, but for me the most interesting part was his DE approach: Quilez used a potential based DE, defined as:
$$DE(z) = \frac{G(z)}{|G’(z)|}$$
This puzzled me, since I couldn't understand its origin. Quilez offers an explanation in this blog post, where he arrives at the formula by using a linear approximation of G(z) to calculate the distance to its zero-region. I'm not quite sure why this approximation is justified, but it looks a bit like an example of Newton's method for root finding. Also, as Quilez himself notes, he is missing a factor 1/2.
But if we start out from formula (3.17) above, and note that $$sinh(G(z)) \approx G(z)$$ for small G(z) (near the fractal boundary), and that $$|Z_n|^{1/2^n} \to 1$$ for $$n \to \infty$$, we arrive at:
$$DE(z) = 0.5*\frac{G(z)}{|G’(z)|}$$
(And notice that the same two approximations are used when arriving at our well-known formula (1) at the top of the page).
Quilez’ method can be implemented using the previous code example and replacing the DE return value simply by:
```return 0.5*pot/length(gradient);
```
If you wonder how these different methods compare, here are some informal timings of the various approaches (parameters were adjusted to give roughly identical appearances):
• Sinh Potential Gradient (my approach): 1.0x
• Potential Gradient (Quilez): 1.1x
• Escape Length Gradient (Makin/Buddhi): 1.1x
• Analytical: 4.1x
The three first methods all use a four-point numerical approximation of the gradient. Since this requires four calls to the iterative function (which is where most of the computational time is spent), they are around four times slower than the analytical solution, that only uses one evaluation.
My approach is slightly slower than the other numerical approaches, but is also less approximated than the others. The numerical approximations do not behave in the same way: the Makin/Buddhi approach seems more sensitive to the choice of EPS size in the numerical approximation of the gradient.
As to which function is best, this requires some more testing on various systems. My guess is, that they will provide somewhat similar results, but this must be investigated further.
The Mandelbulb can also be drawn as a Julia fractal.
Some final notes about Distance Estimators
Mathematical justification: first note, that the formulas above were derived for complex mathematics and quadratic systems (and extended to Quaternions and some higher-dimensional structures in Hypercomplex Iterations). These formulas were never proved for exotic stuff like the Mandelbulb triplex algebra or similar constructs. The derivations above were included to give a hint to the origin and construction of these DE approximations. To truly understand these formula, I think the original papers by Hubbard and Douady, and the works by John Willard Milnor should be consulted – unfortunately I couldn’t find these online. Anyway, I believe a rigorous approach would require the attention of someone with a mathematical background.
Using a lower bound as a distance estimator. The formula (3.17) above defines lower and upper bounds for the distance from a given point to the boundary of the Mandelbrot set. Throughout this entire discussion, we have simply used the lower bound as a distance estimate. But an arbitrary lower bound is not good enough as a distance estimate. This can easily be realized since 0 is always a lower bound of the true distance. In order for our sphere tracing / ray marching approach to work, the lower bound must converge towards the true distance, as it approaches zero! In our case, we are safe, because we also have an upper bound which is four times the lower bound (in the limit where the exp(G(z)) term disappears). Since the true distance must be between the lower and upper bound, the true distance converges towards the lower bound, as the lower bound gets smaller.
DE's are approximations. All our DE formulas above are only approximations – valid in the limit $$n \to \infty$$, and some also only for points close to the fractal boundary. This becomes very apparent when you start rendering these structures – you will often encounter noise and artifacts. Multiplying the DE estimates by a number smaller than 1 may be used to reduce noise (this is the Fudge Factor in Fragmentarium). Another common approach is to oversample – or render images at large sizes and downscale.
Future directions. There is much more to explore and understand about Distance Estimators. For instance, the methods above use four-point numerical gradient estimation, but perhaps the primary camera ray marching could be done using directional derivatives (two-point delta estimation), and thus save the four-point sampling for the non-directional stuff (AO, soft shadows, normal estimation). Automatic differentiation with dual numbers (as noted in post II) may also be used to avoid the finite difference gradient estimation. It would be nice to have a better understanding of why the scalar gradients work.
The next blog post discusses the Mandelbox fractal.
Distance Estimated 3D Fractals (IV): The Holy Grail
Posted on September 9, 2011 by
Previous posts: part I, part II, and part III.
Despite its young age, the Mandelbulb is probably the most famous 3D fractal in existence. This post will examine how we can create a Distance Estimator for it. But before we get to the Mandelbulb, we will have to step back and review a bit of the history behind it.
The Search for the Holy Grail
The original Mandelbrot fractal is a two dimensional fractal based on the convergence properties of a series of complex numbers. The formula is very simple: for any complex number c, check whether the sequence iteratively defined by $$z_{n+1} = z_{n}^2+c$$ (starting from $$z_0 = 0$$) diverges or not. The Mandelbrot set is defined as the set of points which do not diverge, that is, the points with a series that stays bounded within a given radius. The results can be depicted in the complex plane.
The question is how to extend this to three dimensions. The Mandelbrot set fits two dimensions, because complex numbers have two components. Can we find a similar number system for three dimensions?
The Mandelbrot formula involves two operations: adding numbers, and squaring them. Creating an n-component number system where addition is possible is easy. This is what mathematicians refer to as a vector space. Component-wise addition will do the trick, and seems like the logical choice.
But the Mandelbrot formula also involves squaring a number, which requires a multiplication operator (a vector product) on the vector space. A vector space with a (bilinear) vector product is called an algebra over a field. The numbers in these kind of vector spaces are often called hypercomplex numbers.
To see why a three dimensional number system might be problematic, let us try creating one. We could do this by starting out with the complex numbers and introduce a third component, j. We will try to keep as many as possible of the characteristic properties of the complex and real numbers, such as distributivity, $$a*(b+c)=(a*b)+(a*c)$$, and commutativity, $$a*b=b*a$$. If we assume distributivity, we only need to specify how the units of the three components multiply. This can be illustrated in a multiplication table. Since we also assumed commutativity, such a table must be symmetric:
\(
\begin{bmatrix}
& \boldsymbol{1} & \boldsymbol{i} & \boldsymbol{j} \\
\boldsymbol{1} & 1 & i & j \\
\boldsymbol{i} & i & -1 & ?\\
\boldsymbol{j} & j & ? & ?
\end{bmatrix}
\)
For a well-behaved number system, anything multiplied by 1 is unchanged, and if we now require the real and imaginary components to behave as for the complex numbers, we only have three entries left – the question marks in the matrix. I've rendered out a few of the systems I encountered while trying arbitrary choices of the missing numbers in the matrix:
(Many people have explored various 3D component multiplication tables – see for instance Paul Nylander’s Hypercomplex systems for more examples).
Unfortunately, our toy system above fails to be associative (i.e. it is not always true that $$a*(b*c) = (a*b)*c$$), as can be seen by looking at the equation $$i*(i*j) = (i*i)*j \Rightarrow i*x = -j$$ (with x denoting the table entry for i*j), which cannot be satisfied no matter how we choose x.
It turns out that it is difficult to create a consistent number system in three dimensions. There simply is no natural choice. In fact, if we required that our number system allowed for a division operator, there is a theorem stating that only four such mathematical spaces are possible: the real numbers (1D), the complex numbers (2D), the quaternions (4D) and the octonions (8D). But no 3D systems.
But what about the 4D Quaternions? Back in 1982, Alan Norton published a paper showing a Quaternion Julia set made by displaying a 3D “slice” of the 4D space. Here is an example of a Quaternion Julia fractal:
Of course, in order to visualize a 4D object, you have to make some kind of dimensional reduction. The most common approach is to make a 3D cross-section, by simply keeping one of the four components at a fixed value.
If you wonder why you never see a Quaternion Mandelbrot image, the reason is simple. It is not very interesting because of its axial symmetry:
If you, however, make a rotation inside the iteration loop, you can get something more like a 3D Mandelbrot.
The Quaternion system (and the 3D hypercomplex systems above) are defined exactly as the 2D system – by checking if $$z_{n+1} = z_{n}^2+c$$ converges or not.
But how do we draw a 3D image of these fractals? In contrast to the 2D case, where it is possible to build a 2D grid, and check inside each cell, building a 3D grid and checking each cell would be far too memory and time consuming for images in any decent resolution.
A distance estimator for quadratic systems.
While Alan Norton used a different rendering approach, a very elegant solution to this was found by John Hart et al. in a 1989 paper: distance estimated rendering. As discussed in the previous posts, distance estimated rendering requires that we are able to calculate a lower bound to the distance from every point in space to our fractal surface! At first, this might seem impossible. But it turns out such a formula was already known for the 2D Mandelbrot set. A distance estimate can be found as:
(1) $$DE=0.5*ln(r)*r/dr$$
Where ‘r’ is the escape time length, and ‘dr’ is the length of the running derivative. (The approximation is only exact in the limit where the number of iterations goes to infinity)
In order to define what we mean by the running derivative, we need a few extra definitions. For Mandelbrot sets, we study the sequence $$z_{n+1} = z_{n}^2+c$$ for each point c. Let us introduce the function $$f_n(c)$$, defined as the n’th entry for the sequence for the point c. By this definition, we have the following defining formula for the Mandelbrot set:
$$f_n(c) = f_{n-1}^2(c) + c, f_0(c) = 0$$
Deriving this function with respect to c, gives
(2) $$f’_n(c) = 2f_{n-1}(c)f’_{n-1}(c)+1$$ (for Mandelbrot formula)
Similar, the Julia set is defined by choosing a fixed constant, d, in the quadratic formula, using c only as the first entry in our sequence:
$$f_n(c) = f_{n-1}^2(c) + d, f_0(c) = c$$
Deriving this function with respect to c, gives
(3) $$f’_n(c) = 2f_{n-1}(c)f’_{n-1}(c)$$ (for Julia set formula)
which is almost the same result as for the Mandelbrot set, except for the unit term. And now we can define the length of $$f_n$$, and the running derivative $$f’_n$$:
$$r = |f_n(c)|$$ and $$dr = |f’_n(c)|$$
used in the formula (1) above. This formula was found by Douady and Hubbard in a 1982 paper (more info).
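To make this concrete, here is a small sketch of a 2D Mandelbrot distance estimator using the running complex derivative from the formulas above. It is my own illustration – 'Iterations' and 'Bailout' are assumed parameters.
```float DE(vec2 c) {
    vec2 z = vec2(0.0);
    vec2 dz = vec2(0.0);
    float r2 = 0.0;
    for (int i = 0; i < Iterations; i++) {
        // dz = 2*z*dz + 1 (complex product), using z from the previous iteration
        dz = 2.0*vec2(z.x*dz.x - z.y*dz.y, z.x*dz.y + z.y*dz.x) + vec2(1.0, 0.0);
        // z = z^2 + c (complex square)
        z = vec2(z.x*z.x - z.y*z.y, 2.0*z.x*z.y) + c;
        r2 = dot(z, z);
        if (r2 > Bailout) break;
    }
    float r = sqrt(r2);
    // points that never escape give a small or negative value and can be treated as inside
    return 0.5*log(r)*r/length(dz);
}
```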
2D Julia set rendered using a distance estimator approach. This makes it possible to emphasize details, without having to use extensive oversampling.
Due to a constraint in WordPress, this post has reached its maximum length. The next post continues the discussion, and shows how the formula above can be used for other types of fractals than the 2D Mandelbrot.
Posted in Folding, Fractals, Fragmentarium, Mandelbulb |
Distance Estimated 3D Fractals (III): Folding Space
Posted on August 13, 2011 by
The previous posts (part I, part II) introduced the basics of rendering DE (Distance Estimated) systems, but left out one important question: how do we create the distance estimator function?
Drawing spheres
Remember that a distance estimator is nothing more than a function, that for all points in space returns a length smaller than (or equal to) the distance to the closest object. This means we are safe to march at least this step length without hitting anything – and we use this information to speed up the ray marching.
It is fairly easy to come up with distance estimators for most simple geometric shapes. For instance, let us start by a sphere. Here are three different ways to calculate the distance from a point in space, p, to a sphere with radius R:
```(1) DE(p) = max(0.0, length(p)-R) // solid sphere, zero interior
(2) DE(p) = length(p)-R // solid sphere, negative interior
(3) DE(p) = abs(length(p)-R) // hollow sphere shell
```
From the outside all of these look similar. But (3) is hollow – we would be able to position the camera inside it, and it would look different if intersected with other objects.
What about the first two? There is actually a subtle difference: the common way to find the surface normal, is to sample the DE function close to the camera ray/surface intersection. But if the intersection point is located very close to the surface (for instance exactly on it), we might sample the DE inside the sphere. And this will lead to artifacts in the normal vector calculation for (1) and (3). So, if possible use signed distance functions. Another way to avoid this, is to backstep along the camera ray a bit before calculating the surface normal (or to add a ray step multiplier less than 1.0).
From left to right: Sphere (1), with normal artifacts because the normal was not backstepped. Sphere (2) with perfect normals. Sphere (3) drawn with normal backstepping, and thus perfect normals. The last row shows how the spheres look when cut open.
Notice that distance estimation only tells the distance from a point to an object. This is in contrast to classic ray tracing, which always is about finding the distance from a point to a given object along a line. The formulas for ray-object intersection in classic ray tracing are thus more complex, for instance the ray-sphere intersection involves solving a quadratic equation. The drawback of distance estimators is that multiple ray steps are needed, even for simple objects like spheres.
Combining objects
Distance fields have some nice properties. For instance, it is possible to combine two distance fields using a simple minimum(a,b) operator. As an example we could draw the union of two spheres the following way:
```DE(p) = min( length(p)-1.0 , length(p-vec3(2.0,0.0,0.0))-1.0 );
```
This would give us two spheres with unit radius, one centered at origo, and another at (2,0,0). The same way it is possible to calculate the intersection of two objects, by taking the maximum value of the fields. Finally, if you are using signed distance functions, it is possible to subtract one shape from another by inverting one of the fields, and calculating the intersection (i.e. taking max(A, -B)).
So now we have a way to combine objects. And it is also possible to apply local transformations, to get interesting effects:
This image was created by combining the DE’s of a ground plane and two tori while applying a twisting deformation to the tori.
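A rough sketch of how such a scene could be put together (this is not the actual code behind the image; the torus formula is the standard one, and 'TwistAmount' is an assumed parameter):
```float torusDE(vec3 p, vec2 t) {
    // torus lying in the xz-plane with major radius t.x and tube radius t.y
    vec2 q = vec2(length(p.xz) - t.x, p.y);
    return length(q) - t.y;
}
float DE(vec3 p) {
    float ground = p.y + 1.0;                        // ground plane at y = -1
    float a = TwistAmount*p.y;                       // twist angle increases along y
    vec3 q = vec3(cos(a)*p.x - sin(a)*p.z, p.y, sin(a)*p.x + cos(a)*p.z);
    return min(ground, torusDE(q, vec2(1.0, 0.25))); // union of plane and twisted torus
}
```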
Rendering of (non-fractal) distance fields is described in depth in this paper by Hart: Sphere Tracing: A Geometric Method for the Antialiased Ray Tracing of Implicit Surfaces. This paper also describes distance estimators for various geometric objects, such as tori and cones, and discuss deformations in detail. Distance field techniques have also been adopted by the demoscene, and Iñigo Quilez’s introduction contains a lot of information. (Update: Quilez has created a visual reference page for distance field primitives and transformations)
Building Complexity
This is all nice, but even if you can create interesting structures, there are some limitations. The above method works fine, but scales very badly when the number of distance fields to be combined increases. Creating a scene with 1000 spheres by finding the minimum of the 1000 fields would already become too slow for real-time purposes. In fact ordinary ray tracing scales much better – the use of spatial acceleration structures makes it possible for ordinary ray tracers to draw scenes with millions of objects, something that is far from possible using the “find minimum of all object fields” distance field approach sketched above.
But fractals are all about detail, and endless complexity, so how do we proceed?
It turns out that there are some tricks, that makes it possible to add complexity in ways that scales much better.
First, it is possible to reuse (or instance) objects using e.g. the modulo-operator. Take a look at the following DE:
```float DE(vec3 z)
{
z.xy = mod(z.xy, 1.0)-vec2(0.5); // instance on xy-plane
return length(z)-0.3; // sphere DE
}
```
Which generates this image:
Now we are getting somewhere. Tons of detail, at almost no computational cost. Now we only need to make it more interesting!
A Real Fractal
Let us continue with the first example of a real fractal: the recursive tetrahedron.
A tetrahedron may be described as a polyhedron with vertices (1,1,1),(-1,-1,1),(1,-1,-1),(-1,1,-1). Now, for each point in space, let us take the vertex closest to it, scale the system by a factor of 2.0 using this vertex as center, and finally return the distance to the point where we end up, after having repeated this operation a fixed number of times. Here is the code:
```float DE(vec3 z)
{
vec3 a1 = vec3(1,1,1);
vec3 a2 = vec3(-1,-1,1);
vec3 a3 = vec3(1,-1,-1);
vec3 a4 = vec3(-1,1,-1);
vec3 c;
int n = 0;
float dist, d;
while (n < Iterations) {
c = a1; dist = length(z-a1);
d = length(z-a2); if (d < dist) { c = a2; dist=d; }
d = length(z-a3); if (d < dist) { c = a3; dist=d; }
d = length(z-a4); if (d < dist) { c = a4; dist=d; }
z = Scale*z-c*(Scale-1.0);
n++;
}
return length(z) * pow(Scale, float(-n));
}
```
Which results in the following image:
Our first fractal! Even though we do not have the infinite number of objects, like the mod-example above, the number of objects grow exponentially as we crank up the number of iterations. In fact, the number of objects is equal to 4^Iterations. Just ten iterations will result in more than a million objects - something that is easily doable on a GPU in realtime! Now we are getting ahead of the standard ray tracers.
Folding Space
But it turns out that we can do even better, using a clever trick by utilizing the symmetries of the tetrahedron.
Now, instead of scaling about the nearest vertex, we could use the mirror points in the symmetry planes of the tetrahedron, to make sure that we arrive at the same "octant" of the tetrahedron - and then always scale from the vertex it contains.
The following illustration tries to visualize this:
The red point at the top vertex is the scaling center at (1,1,1). Three symmetry planes of the tetrahedron have been drawn in red, green, and blue. By mirroring points if they are on the wrong side (the non-white points) of a plane, we ensure they get mapped to the white "octant". The operation of mirroring a point, if it is on one side of a plane, is called a 'folding operation' or just a fold.
Here is the code:
```float DE(vec3 z)
{
float r;
int n = 0;
while (n < Iterations) {
if(z.x+z.y<0) z.xy = -z.yx; // fold 1
if(z.x+z.z<0) z.xz = -z.zx; // fold 2
if(z.y+z.z<0) z.zy = -z.yz; // fold 3
z = z*Scale - Offset*(Scale-1.0);
n++;
}
return (length(z) ) * pow(Scale, -float(n));
}
```
These folding operations show up in several fractals. A fold in a general plane with normal n can be expressed as:
```float t = dot(z,n1); if (t<0.0) { z-=2.0*t*n1; }
```
or in an optimized version (due to AndyAlias):
```z-=2.0 * min(0.0, dot(z, n1)) * n1;
```
Also notice that folds in the xy, xz, or yz planes may be expressed using the 'abs' operator.
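For example, a fold in the y=0 plane (the xz-plane) is simply:
```z.y = abs(z.y); // reflect points with negative y into the positive half-space
```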
That was a lot about folding operations, but the really interesting stuff happens when we throw rotations into the system. This was first introduced by Knighty in the Fractal Forum's thread Kaleidoscopic (escape time) IFS. The thread shows recursive versions of all the Platonic Solids and the Menger Sponge - including the spectacular forms that arise when inserting rotations and translations into the system.
The Kaleidoscopic IFS fractals are in my opinion some of the most interesting 3D fractals ever discovered (or created if you are not a mathematical platonist). Here are some examples of forms that may arise from a system with icosahedral symmetry:
Here the icosahedral origin might be evident, but it is possible to tweak these structures beyond any recognition of their origin. Here are a few more examples:
Knighty's fractals are composed using a small set of transformations: scalings, translations, plane reflections (the conditional folds), and rotations. The folds are of course not limited to the symmetry planes of the Platonic Solids, all planes are possible.
The transformations mentioned above all belong to the group of conformal (angle preserving) transformations. It is sometimes said (on Fractal Forums) that for 'true' fractals the transformations must be conformal, since non-conformal transformations tend to stretch out detail and create a 'whipped cream' look, which does not allow for deep zooms. Interestingly, according to Liouville's theorem there are not very many possible conformal transformations in 3D. In fact, if I read the theorem correctly, the only possible conformal 3D transformations are the ones above and the sphere inversions.
Part IV discusses how to arrive at Distance Estimators for fractals such as the Mandelbulb, which originates in attempts to generalize the Mandelbrot formula to three dimensions: the so-called search for the holy grail in 3D fractals.
Posted in Folding, Fractals, Fragmentarium |
Distance Estimated 3D Fractals (II): Lighting and Coloring
Posted on August 6, 2011 by
The first post discussed how to find the intersection between a camera ray and a fractal, but did not talk about how to color the object. There are two steps involved here: setting up a coloring scheme for the fractal object itself, and the shading (lighting) of the object.
Lights and shading
Since we are raymarching our objects, we can use the standard lighting techniques from ray tracing. The most common form of lighting is to use something like Blinn-Phong, and calculate approximated ambient, diffuse, and specular light based on the position of the light source and the normal of the fractal object.
Surface Normal
So how do we obtain a normal of a fractal surface?
A common method is to probe the Distance Estimator function in small steps along the coordinate system axes and use the numerical gradient obtained from this as the normal (since the normal must point in the direction where the distance field increases most rapidly). This is an example of the finite difference method for numerical differentiation. The following snippet shows how the normal may be calculated:
```vec3 n = normalize(vec3(DE(pos+xDir)-DE(pos-xDir),
DE(pos+yDir)-DE(pos-yDir),
DE(pos+zDir)-DE(pos-zDir)));
```
The original Hart paper also suggested that alternatively, the screen space depth buffer could be used to determine the normal – but this seems to be both more difficult and less accurate.
Finally, as fpsunflower noted in this thread it is possible to use Automatic Differentiation with dual numbers, to obtain a gradient without having to introduce an arbitrary epsilon sampling distance.
Ambient Occlusion
Besides the ambient, diffuse, and specular light from Phong-shading, one thing that really improves the quality and depth illusion of a 3D model is ambient occlusion. In my first post, I gave an example of how the number of ray steps could be used as a very rough measure of how occluded the geometry is (I first saw this at Subblue’s site – his Quaternion Julia page has some nice illustrations of this effect). This ‘ray step AO‘ approach has its shortcomings though: for instance, if the camera ray is nearly parallel to a surface (a grazing incidence) a lot of steps will be used, and the surface will be darkened, even if it is not occluded at all.
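In its simplest form this amounts to something like the line below (the variable names are only illustrative):
```float stepAO = clamp(1.0 - AOStrength*float(steps)/float(MaxRaySteps), 0.0, 1.0);
```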
Another approach is to sample the Distance Estimator at points along the normal of the surface and use this information to put together a measure for the Ambient Occlusion. This is a more intuitive method, but comes with some other shortcomings – i.e. new parameters are needed to control the distance between the samplings and their relative weights with no obvious default settings. A description of this ‘normal sampling AO‘ approach can be found in Iñigo Quilez’s introduction to distance field rendering.
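A minimal sketch of the normal sampling approach could look like this – the number of samples, distances, and weights are arbitrary choices here, not Fragmentarium's actual implementation:
```float normalAO(vec3 p, vec3 n) {
    float occlusion = 0.0;
    float weight = 0.5;
    for (int i = 1; i <= 5; i++) {
        float d = DetailAO*float(i);
        // if the distance field is smaller than the stepped distance, something is occluding
        occlusion += weight*(d - DE(p + n*d));
        weight *= 0.5;
    }
    return clamp(1.0 - occlusion, 0.0, 1.0);
}
```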
In Fragmentarium, I’ve implemented both methods: The ‘DetailAO’ parameter controls the distance at which the normal is sampled for the ‘normal sampling AO’ method. If ‘DetailAO’ is set to zero, the ‘ray step AO’ method is used.
Other lighting effects
Besides Phong shading and ambient occlusion, all the usual tips and tricks in ray tracing may be applied:
Glow – can be added simply by mixing in a color based on the number of ray steps taken (points close to the fractal will use more ray steps, even if they miss the fractal, so pixels close to the object will glow).
Fog - is also great for adding to the depth perception. Simply blend in the background color based on the distance from the camera.
Hard shadows are also straight forward – check if the ray from the surface point to the light source is occluded.
Soft shadows: Iñigo Quilez has a good description of doing softened shadows.
Reflections are pretty much the same – reflect the camera ray in the surface normal, and mix in the color of whatever the reflected ray hits.
The effects above are all implemented in Fragmentarium as well. Numerous other extensions could be added to the raytracer: for example, environment mapping using HDRI panoramic maps provides very natural lighting and is easy to apply for the user, simulated depth-of-field also adds great depth illusion to an image, and can be calculated in reasonable time and quality using screen space buffers, and more complex materials could also be added.
Coloring
Fractal objects with a uniform base color and simple colored light sources can produce great images. But algorithmic coloring is a powerful tool for bringing the fractals to life.
Algorithmic coloring uses one or more quantities, determined by looking at the orbit or at the escape point or time.
Orbit traps is a popular way to color fractals. This method keeps track of how close the orbit comes to a chosen geometric object. Typical traps include keeping track of the minimum distance to the coordinate system center, or to simple geometric shapes like planes, lines, or spheres. In Fragmentarium, many of the systems use a 4-component vector to keep track of the minimum distance to the three x=0, y=0, and z=0 planes and to the distance from origo. These are mapped to color through the X,Y,Z, and R parameters in the ‘Coloring’ tab.
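A sketch of how such a trap could be tracked (the 'Iterate' call is a placeholder for the actual fractal formula, and the exact conventions in Fragmentarium may differ):
```vec4 orbitTrap = vec4(1e4);
vec3 z = pos;
for (int i = 0; i < Iterations; i++) {
    z = Iterate(z); // placeholder for the fractal iteration
    // minimum distance to the x=0, y=0, z=0 planes and to origo
    orbitTrap = min(orbitTrap, abs(vec4(z.x, z.y, z.z, length(z))));
}
// the four trap components can then be blended into a base color, e.g. via the X, Y, Z, and R parameters
```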
The iteration count is the number of iterations it takes before the orbit diverges (becomes larger than the escape radius). Since this is an integer number it is prone to banding, which is discussed later in this post. One way to avoid this is by using a smooth fractional iteration count:
```float smoothIteration = float(iteration) // note: 'smooth' itself is a reserved word in GLSL
+ log(log(EscapeRadiusSquared))/log(Scale)
- log(log(dot(z,z)))/log(Scale);
```
(For a derivation of this quantity, see for instance here)
Here 'iteration' is the number of iterations, and dot(z,z) is the square of the escape time length. There are a couple of things to notice. First, the formula involves a characteristic scale, referring to the scaling factor in the problem (e.g. 2 for a standard Mandelbrot, 3 for a Menger). It is not always possible to obtain such a number (e.g. for Mandelboxes or hybrid systems). Secondly, if the smooth iteration count is used to look up a color in a palette, the offset may be ignored, which means the second term can be dropped. Finally, which 'log' functions should be used? This does not matter as long as they are used consistently: since all different log functions are proportional, the ratio of two logs does not depend on the base used. For the inner logs (e.g. log(dot(z,z))), changing the log will result in a constant offset to the overall term, so again this will just result in an offset in the palette lookup.
The lower half of this image use a smooth iteration count.
Conditional Path Coloring
(I made this name up – I’m not sure there is an official name, but I’ve seen the technique used several times in Fractal Forums posts.)
Some fractals may have conditional branches inside their iteration loop (sometimes disguised as an ‘abs’ operator). The Mandelbox is a good example: the sphere fold performs different actions depending on whether the length of the iterated point is smaller or larger than a set threshold. This makes it possible to keep track of a color variable, which is updated depending on the path taken.
Many other types of coloring are also possible, for example based on the normal of the surface, spherical angles of the escape time points, and so on. Many of the 2D fractal coloring types can also be applied to 3D fractals. UltraFractal has a nice list of 2D coloring types.
Improving Quality
Some visual effects and colorings are based on integer quantities – for example glow is based on the number of ray steps. This will result in visible boundaries between the discrete steps, an artifact called banding.
The smooth iteration count introduced above is one way to get rid of banding, but it is not generally applicable. A more generic approach is to add some kind of noise into the system. For instance, by scaling the length of the first ray step for each pixel by a random number, the banding will disappear – at the cost of introducing some noise.
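A minimal version of this trick looks something like the following - the hash function is the classic shader one-liner, and the way it is hooked into the raymarcher is just one possibility:

```
// Cheap per-pixel pseudo-random number.
float rand(vec2 co) {
	return fract(sin(dot(co, vec2(12.9898, 78.233)))*43758.5453);
}

// In the raymarching loop: scale the length of the very first step by a random number,
// e.g. stepLength *= rand(gl_FragCoord.xy);
```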
Personally, I much prefer noise to banding – in fact I like the noisy, grainy look, but that is a matter of preference.
Another important issue is aliasing: if only one ray is traced per pixel, the image is prone to aliasing and artifacts. Using more than one sample will remove aliasing and reduce noise. There are many ways to oversample the image - different strategies exist for choosing the samples in a way that optimizes the image quality, and there are different ways of weighting (filtering) the samples for each pixel. Physically Based Rendering has a very good chapter on sampling and filtering for ray tracing, and that particular chapter is freely available online.
In Fragmentarium there is some simple oversampling built in - by setting the ‘AntiAlias’ variable, a number of samples are chosen (on a uniform grid). They are given the same weight (box filtered). I usually only use this for 2D fractals - because they render faster, which allows for a high number of samples. For 3D renders, I normally render a high resolution image and downscale it in an image editing program - this seems to create better quality images for the same number of samples.
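In pseudo-shader form, uniform-grid oversampling with a box filter amounts to something like this ('samplePixel' and 'pixelCoord' are made-up helpers standing in for a full camera ray evaluation):

```
// Average AntiAlias x AntiAlias sub-pixel samples with equal weights (box filter).
vec3 color = vec3(0.0);
for (int i = 0; i < AntiAlias; i++) {
	for (int j = 0; j < AntiAlias; j++) {
		vec2 offset = (vec2(float(i), float(j)) + 0.5)/float(AntiAlias) - 0.5;
		color += samplePixel(pixelCoord + offset);
	}
}
color /= float(AntiAlias*AntiAlias);
```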
Part III discusses how to derive and work with Distance Estimator functions.
Posted in Distance Estimation, Fractals, Fragmentarium
Syntopia Blog Update
Posted on July 12, 2011 by
It has not been possible to post comments at my blog for some months. Apparently, my reCAPTCHA plugin was broken (amazingly, spam comments still made their way into the moderation queue).
This should be fixed now.
I’m also on twitter now: @SyntopiaDK, where I’ll post links and news releated to generative systems, 3D fractals, or whatever pops up.
Finally, if you are near Stockholm, some of my images are on display at a small gallery (from July 9th to September 11th): Kungstensgatan 27.
Posted in Digital Art, Fractals, Fragmentarium, Twitter
Distance Estimated 3D Fractals (Part I)
Posted on June 3, 2011 by
During the last two years, the 3D fractal field has undergone a small revolution: the Mandelbulb (2009), the Mandelbox (2010), The Kaleidoscopic IFS’s (2010), and a myriad of equally or even more interesting hybrid systems, such as Spudsville (2010) or the Kleinian systems (2011).
All of these systems were made possible using a technique known as Distance Estimation and they all originate from the Fractal Forums community.
Overview of the posts
Part I briefly introduces the history of distance estimated fractals, and discusses how a distance estimator can be used for ray marching.
Part II discusses how to find surface normals, and how to light and color fractals.
Part III discusses how to actually create a distance estimator, starting with distance fields for simple geometric objects, then talking about instancing and combining fields (union, intersections, and differences), and finally about folding and conformal transformations, ending up with a simple fractal distance estimator.
Part IV discusses the holy grail: the search for a generalization of the 2D (complex) Mandelbrot set, including Quaternions and other hypercomplex numbers. A running derivative for quadratic systems is introduced.
Part V continues the discussion about the Mandelbulb. Different approaches for constructing a running derivative are discussed: scalar derivatives, Jacobian derivatives, analytical solutions, and the use of different potentials to estimate the distance.
Part VI is about the Mandelbox fractal. A more detailed discussion of conformal transformations, and how a scalar running derivative is sufficient when working with these kinds of systems.
Part VII discusses how dual numbers and automatic differentiation may be used to construct a distance estimator.
Part VIII is about hybrid fractals, geometric orbit traps, various other systems, and links to relevant software and resources.
The background
The first paper to introduce Distance Estimated 3D fractals was written by Hart and others in 1989:
Ray tracing deterministic 3-D fractals
In this paper Hart describes how Distance Estimation may be used to render a Quaternion Julia 3D fractal. The paper is very well written and definitely worth spending some hours on (be sure to take a look at John Hart's other papers as well). Given the age of Hart's paper, it is striking that it is not until the last couple of years that the field of distance estimated 3D fractals has exploded. There have been some important milestones, such as Keenan Crane's GPU implementation (2004), and Iñigo Quilez's 4K demoscene implementation (2007), but I'm not aware of other fractal systems being explored using Distance Estimation before the advent of the Mandelbulb.
Raymarching
Classic raytracing shoots one (or more) rays per pixel and calculate where the rays intersect the geometry in the scene. Normally the geometry is described by a set of primitives, like triangles or spheres, and some kind of spatial acceleration structure is used to quickly identify which primitives intersect the rays.
Distance Estimation, on the other hand, is a ray marching technique.
Instead of calculating the exact intersection between the camera ray and the geometry, you proceed in small steps along the ray and check how close you are to the object you are rendering. When you are closer than a certain threshold, you stop. In order to do this, you must have a function that tells you how close you are to the object: a Distance Estimator. The value of the distance estimator tells you how large a step you are allowed to march along the ray, since you are guaranteed not to hit anything within this radius.
Schematics of ray marching using a distance estimator.
The code below shows how to raymarch a system with a distance estimator:
```
float trace(vec3 from, vec3 direction) {
	float totalDistance = 0.0;
	int steps;
	for (steps=0; steps < MaximumRaySteps; steps++) {
		vec3 p = from + totalDistance * direction;
		float distance = DistanceEstimator(p);
		totalDistance += distance;
		if (distance < MinimumDistance) break;
	}
	return 1.0 - float(steps)/float(MaximumRaySteps);
}
```
Here we simply march the ray according to the distance estimator and return a greyscale value based on the number of steps before hitting something. This will produce images like this one (where I used a distance estimator for a Mandelbulb):
It is interesting that even though we have not specified any coloring or lighting models, coloring by the number of steps emphasizes the detail of the 3D structure - in fact, this is a simple and very cheap form of the Ambient Occlusion soft lighting often used in 3D renders.
Parallelization
Another interesting observation is that these raymarchers are trivial to parallelise, since each pixel can be calculated independently and there is no need to access complex shared memory structures like the acceleration structure used in classic raytracing. This means that these kinds of systems are ideal candidates for implementing on a GPU. In fact the only issue is that most GPUs still only support single precision floating point numbers, which leads to numerical inaccuracies sooner than for CPU implementations. However, the newest generation of GPUs supports double precision, and some APIs (such as OpenCL and Pixel Bender) are heterogeneous, meaning the same code can be executed on both CPU and GPU - making it possible to create interactive previews on the GPU and render final images in double precision on the CPU.
Estimating the distance
Now, I still haven't talked about how we obtain these Distance Estimators, and it is by no means obvious that such functions should exist at all. But it is possible to understand them intuitively, by noting that systems such as the Mandelbulb and Mandelbox are escape-time fractals: we iterate a function for each point in space, and follow the orbit to see whether the sequence of points diverges within a maximum number of iterations, or whether the sequence stays inside a fixed escape radius. Now, by comparing the escape-time length (r) to its spatial derivative (dr), we might get an estimate of how far we can move along the ray before the escape-time length falls below the escape radius, that is:
$$DE = \frac{r-EscapeRadius }{dr}$$
This is a hand-waving estimate - the derivative might fluctuate wildly and get larger than our initial value, so a more rigorous approach is needed to find a proper distance estimator. I'll have a lot more to say about distance estimators in the later posts, so for now we will just accept that these functions exist and can be obtained for quite a diverse class of systems, and that they are often constructed by comparing the escape-time length with some approximation of its derivative.
It should also be noticed that this ray marching approach can be used for any kinds of systems, where you can find a lower bound for the closest geometry for all points in space. Iñigo Quilez has used this in his impressive procedural SliseSix demo, and has written an excellent introduction, which covers many topics also relevant for Distance Estimation of 3D fractals.
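To make the notion of a distance estimator concrete before the details in Part III, here are the two simplest possible examples - a sphere and a plane - either of which can be plugged directly into the trace() function above as the DistanceEstimator ('Radius' is just a user parameter):

```
// Exact distance to a sphere of radius 'Radius' centered at the origin.
float DEsphere(vec3 p) {
	return length(p) - Radius;
}

// Exact (signed) distance to the plane y = 0.
float DEplane(vec3 p) {
	return p.y;
}
```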
This concludes the first part of this series of blog entries. Part II discusses lighting and coloring of fractals.
Posted in Fractals, Fragmentarium, GPU
Hybrid 3D Fractals
Posted on March 30, 2011 by
A lot of great images have been made of the Mandelbulb, the Mandelbox, and the various kaleidoscopic IFS’s (the non-platonic non-solids). And it turns out that by combining these formulas (and stirring a few assorted functions into the mix), a variety of new, amazing, and surprising forms emerge.
I’m currently working on making it easier to combine different formulas in Fragmentarium – but until I get something released, here is a collection of images and movies created by Mandelbulb 3D (Windows, free) and Mandelbulber (Windows, free, open-source), that illustrates the beauty and diversity of these hybrid systems. Be sure to view the large versions by following the links. The images were all found at Fractal Forums.
Videos
Buddhi – Mandelbox and Flying Lights
Jérémie Brunet (Bib) – Weird Planet II
Jérémie Brunet (Bib) – Like in a dream II
Images
Tomot – It’s a jungle out there
Lenord – J.A.R.
MarkJayBee – Security Mechanisms
Fractal00 – Alien Stones
Kr0mat1k – Restructuration
BrutalToad – Jülchen
Posted in Fractals, Fragmentarium, Kaleidoscopic IFS, Mandelbulb
Fragmentarium v0.8
Posted on March 20, 2011 by
I’ve released a new build of Fragmentarium with some much needed updates, including better camera control, high resolution renders, and animation.
New features in version 0.8:
• The 3D camera has been rewritten: it is now a “first-person”, pinhole camera (like Boxplorer and Fractal Lab), and is controllable using mouse and keyboard. Camera view can now be saved together with other settings.
• Arbitrary resolution renderings (using tile based rendering – the GPU won’t time out).
• Preview modes (renders to FBO with lower resolution and rescales).
• ‘Tile preview’ for previewing part of high-resolution renders.
• Animation controller (experimental: no keyframes yet, you must animate using the system supplied ‘time’ variable. Animation is output as a sequence of still images).
• Presets (group parameter settings and load them via a drop-down box)
• New fractals: QuaternionMandelbrot4D, Ducks, NewMenger.
• Improved raytracer: dithering, fog, new coloring schemes.
High-resolution render of a 4D Quaternion Mandelbrot (click for large)
Samuel Monnier’s ‘Ducks’ Fractal has been added.
Mandelbrot/Julia type system now with embedded Mandelbrot map.
Posted in Fractals, Fragmentarium
Syntopia
This blog is written by Mikael Hvidtfeldt Christensen, a physicist with a passion for computational chemistry, generative art, and complex systems in general.
Projects:
Structure Synth
Fragmentarium.
Network:
Syntopia @ Flickr
http://mathoverflow.net/questions/63414/nash-style-embedding-theorem-for-connections/63419
## “Nash Style” Embedding Theorem for Connections
The strong Whitney embedding theorem states that any smooth (Hausdorff and second-countable) manifold can be smoothly embedded in Euclidean space. John Nash went on to show that any Riemannian manifold could be embedded into Euclidean space in such a way that the metric of the manifold would coincide with the standard dot product. This means that the Levi-Civita connection for the manifold maps to the standard connection. More generally, for a manifold $M$ with connection $\nabla$, when does there exist an embedding into Euclidean space such that the connection is mapped to the standard connection?
-
## 1 Answer
The standard connection is the Levi-Civita connection of the flat metric. So if you have an embedding such that the given connection is the (projection of the) flat connection, then you can also induce a metric on the embedded manifold. Hence the given connection was already the Levi-Civita connection of a Riemannian metric on your manifold. Thus the question is whether a given connection is the Levi-Civita connection of some Riemannian metric. That was one of your earlier questions. Does that work?
-
Great. Sorry for inadvertently asking the same question twice. – Jean Delinez Apr 29 2011 at 14:36
no problem. I even overlooked that it was you who asked the embedding previous question. – Stefan Waldmann Apr 29 2011 at 14:43
A link to the other question mathoverflow.net/questions/54434/… – jc Apr 29 2011 at 14:46
http://physics.stackexchange.com/questions/tagged/quantum-gravity+quantum-mechanics
Tagged Questions
0answers
54 views
How can superstring theories unify general relativity and quantum theory when no prediction can be made?
I am a newbie to superstring theories, but I came into this question: so superstring theories purport to unify general relativity and quantum theory. However, there is yet no definitive way to test ...
0answers
237 views
Can the laws of quantum mechanics be derived from a more fundamental theory? [closed]
String theory takes quantum mechanics for granted and tries to make it compatible with gravity but if it turns out to be a theory of everything then shouldn't it in principle explain why our world is ...
1answer
208 views
How do I quantize a classical field theory
I have not been able to find any information about this on the Internet. I am a middle-schooler, 14, who self-studies physics, and I know up to and including ODEs, and some of the calculus of ...
2answers
321 views
The Uncertainty Principle and Black Holes
What are the consequences of applying the uncertainty principle to black holes? Does the uncertainty principle need to be modified in the context of a black hole and if so what are the implications ...
3answers
270 views
Could all strings be one single string which weaves the fabric of the universe?
This question popped out of another discussion, about if the photon needs a receiver to exist. Can a photon get emitted without a receiver? A universe containing only one electron was hypothetically ...
0answers
138 views
Does local physics depend on global topology?
Motivating Example In standard treatments of AdS/CFT (MAGOO for example), one defines $\mathrm{AdS}_{p+2}$ as a particular embedded submanifold of $\mathbb R^{2,p+1}$ which gives it topology ...
3answers
306 views
An electron falling into a black hole
If an electron falls into a black hole. How can the Heisenberg uncertainty principle hold? The electron has fallen into the singularity now so it has a well defined position which means that it ...
0answers
139 views
Newton Gravitational constant $G$, Plank constant $\hbar$ , Speed of Light $c$ : The Dream Team of moderators?
The 3 great constants of Nature are well known : The Speed of light $c$ (special relativity) The Plank constant $\hbar$ (quantum mechanics) The Newton ...
3answers
529 views
Is there a thought experiment which brings to light the contradiction between General Relativity and Quantum Mechanics?
I've been told that GR and QM are not compatible, is there an intuitive reason/thought experiment which demonstrates the issue? (Or one of the issues?)
0answers
128 views
How is the 'cluster decomposition principle' implemented in holographic theories?
Since holographic theories are non-local by definition, how is this principle implemented? Naively, it seems to me it is not, at least, in some sense. I would appreciate an explanation as simple ...
2answers
177 views
Was Planck's constant $h$ the same when the Big Bang happened as it is today?
Was Planck's constant $h$ the same when the Big Bang happened as it is today? Planck's constant : $$h= 6.626068 × 10^{-34}\, m^2 kg / s,$$ $$E=n.h.\nu,$$ $$\epsilon=h.\nu$$
2answers
150 views
Motivation for “discretized quantum state spaces”
I know that the title of the question is rather vague, so let me clarify what I mean. For a quantum system, the set of states has infinitely (even continuously) many extreme points, i.e. there are ...
1answer
248 views
Set theory, category theory, realism and the recent “reality of the wavefunction” papers
I will add a better phrased question here. Do we need to consider quantum foundations to form a quantum theory of gravity? The kind of foundational question I am thinking of is expressed in the ...
1answer
202 views
A quanta of time
A question of Quantum Time: Does a minimum interval of time cause wave-like behavior? If we think about the uncertainty principle, could it derive from a quanta of time? Does plank’s constant somehow ...
1answer
303 views
Physical interpretation of Wheeler - Dewitt equation
What is the mainstream ( if there is one ) interpretation of the Wheeler - Dewitt equation $\hat{H}\Psi =0$ ?
2answers
271 views
Momentum Energy and Higgs
So, as an object accelerates it gains energy. And energy is mass. So an object becomes more massive as it approaches the speed of light. But, if mass is ONLY due to an object's interaction with the ...
2answers
164 views
Can the implications of dark energy be used to bridge the gap between Quantum Mechanics and General Relativity?
Can the findings of the Physics Nobel Laureates of 2011, namely the overpowering existence of dark energy (vacuum energy) have any implications in the quest the combine Quantum Mechanics and General ...
1answer
81 views
Quantum mechanical gravitational bound states
The quantum mechanics of Coloumb-force bound states of atomic nuclei and electrons lead to the extremely rich theory of molecules. In particular, I think the richness of the theory is related to the ...
0answers
63 views
Complementarity between the laws of physics? [closed]
Is this following proposal plausible, worth considering, or dismissable as lunatic fringe science? What if the universe isn't really what we think it is but some universal quantum computer where we ...
1answer
153 views
Sun-Earth Virtual Gravitons?
How many virtual gravitons do the sun and earth exchange in one year? What are their wavelengths?
1answer
189 views
Scale set by cosmological constant
Following on Jim Graber's answer to: Can "big rip" rip apart an atomic nucleus? If the cosmological constant is large enough, even the ground state of a hydrogen atom can be affected. So ...
4answers
197 views
What are the conditions to be satisfied by a theory in order to be a quantum theory?
This is in continuation to my previous question. It is not a duplicate of the previous one. This question arises because of the answers and discussions in that question. Can we call a theory, quantum ...
1answer
236 views
What are the current (popular(ish)) approaches to modelling the quantum nature of spacetime at the Planck scale?
My guess at a list of them would be: spin foams, casual sets, non-commutative geometry, Machian theories, twistor theory or strings and membranes existing in some higher-dimensional geometry... ...
6answers
1k views
Is the Planck length Lorentz invariant?
The planck length is defined as $l_P = \sqrt{\frac{\hbar G}{c^3}}$. So it is a combination of the constants $c, h, G$ which I believe are all Lorentz invariants. So I think the Planck length should ...
2answers
2k views
What are the main differences between these quantum theories?
What are the main differences between these quantum theories? Quantum Mechanics Quantum Field theory Quantum Gravity EDIT: I ask this question because when I asked a question before people talk ...
4answers
1k views
Is there a maximum possible acceleration?
I'm thinking equivalence principle, possibilities of unbounded space-time curvature, quantum gravity ...
7answers
2k views
Does Quantum Physics really suggests this universe as a computer simulation? [closed]
I was reading about interesting article here which suggests that our universe is a big computer simulation and the proof of it is a Quantum Physics. I know quantum physics tries to provide some ...
5answers
2k views
A list of inconveniences between quantum mechanics and relativity?
It is well known that quantum mechanics and (special and/or general) relativity do not fit well. I am wondering whether it is possible to make a list of contradictions or problems between them? E.g. ...
http://math.stackexchange.com/questions/252924/type-of-bifurcation-occurs
# type of bifurcation occurs
What are the fixed points of
$$θ'=1-a\sinθ$$
what type of bifurcation occurs at $$a=1, \;\;\;θ=π/2$$
Solution:
$$1/a=\sinθ$$
or $$θ=\arcsin(1/a)$$
I can't seem to find the proper fixed points after this step
-
## 1 Answer
You don't need to find any fixed points analytically to get the answer you are seeking.
Plot the function $$a=\frac{1}{\sin \theta}$$ in the interval $(\pi/2-\varepsilon,\pi/2+\varepsilon)$. Fixing $a=\hat{a}$ gives you an idea about the number of fixed points and their types (this is called a bifurcation diagram).
Note that if $a>1$ then you have two fixed points, one is stable and another is unstable. If $a<1$ then you have no fixed points, the two above approached each other and collapsed. This should be known to you as a saddle-node, or tangent, or fold bifurcation.
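To spell this out: for $a>1$ the fixed points are $$\theta_1=\arcsin(1/a),\qquad \theta_2=\pi-\arcsin(1/a),$$ and since $\frac{d}{d\theta}\left(1-a\sin\theta\right)=-a\cos\theta$, the point $\theta_1$ (where $\cos\theta>0$) is stable while $\theta_2$ (where $\cos\theta<0$) is unstable. As $a\to 1^+$ the two points approach each other and collide at $\theta=\pi/2$ when $a=1$, which is exactly the saddle-node (fold) picture.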
-
http://mathoverflow.net/revisions/112199/list
# Differential equations and axiom of choice
In the most general context, the Picard-Lindelöf theorem (aka Cauchy-Lipschitz in French) asserts the existence of a maximal solution for $\dot{x}(t) = f(t,x(t))$, i.e. of a solution $x(t)$ defined on an interval $I$ such that there exists no other solution whose restriction to $I$ coincides with $x$. The usual proofs of this (when $f$ is such that there is no local uniqueness) use Zorn's lemma, or some other weaker form of choice. But is this result actually not provable in ZF?
http://nrich.maths.org/public/leg.php?code=87&cl=3&cldcmpid=6793
# Search by Topic
#### Resources tagged with Isosceles triangles similar to Weekly Problem 29 - 2010:
##### Other tags that relate to Weekly Problem 29 - 2010
Angle properties of shapes. Perpendicular lines. Angles. Circles. Radius (radii) & diameters. Pythagoras' theorem. Similarity. Isosceles triangles.
### There are 17 results
Broad Topics > 2D Geometry, Shape and Space > Isosceles triangles
### Lighting up Time
##### Stage: 2 and 3 Challenge Level:
A very mathematical light - what can you see?
### Tricircle
##### Stage: 4 Challenge Level:
The centre of the larger circle is at the midpoint of one side of an equilateral triangle and the circle touches the other two sides of the triangle. A smaller circle touches the larger circle and. . . .
### Are You Kidding
##### Stage: 4 Challenge Level:
If the altitude of an isosceles triangle is 8 units and the perimeter of the triangle is 32 units.... What is the area of the triangle?
### Three Way Split
##### Stage: 4 Challenge Level:
Take any point P inside an equilateral triangle. Draw PA, PB and PC from P perpendicular to the sides of the triangle where A, B and C are points on the sides. Prove that PA + PB + PC is a constant.
### Hexy-metry
##### Stage: 4 and 5 Challenge Level:
A hexagon, with sides alternately a and b units in length, is inscribed in a circle. How big is the radius of the circle?
### The Eyeball Theorem
##### Stage: 4 and 5 Challenge Level:
Two tangents are drawn to the other circle from the centres of a pair of circles. What can you say about the chords cut off by these tangents. Be patient - this problem may be slow to load.
### Arrh!
##### Stage: 4 Challenge Level:
Triangle ABC is equilateral. D, the midpoint of BC, is the centre of the semi-circle whose radius is R which touches AB and AC, as well as a smaller circle with radius r which also touches AB and AC. . . .
### A Shade Crossed
##### Stage: 4 Challenge Level:
Find the area of the shaded region created by the two overlapping triangles in terms of a and b?
### Xtra
##### Stage: 4 and 5 Challenge Level:
Find the sides of an equilateral triangle ABC where a trapezium BCPQ is drawn with BP=CQ=2 , PQ=1 and AP+AQ=sqrt7 . Note: there are 2 possible interpretations.
### Isosceles
##### Stage: 3 Challenge Level:
Prove that a triangle with sides of length 5, 5 and 6 has the same area as a triangle with sides of length 5, 5 and 8. Find other pairs of non-congruent isosceles triangles which have equal areas.
### Interacting with the Geometry of the Circle
##### Stage: 1, 2, 3 and 4
Jennifer Piggott and Charlie Gilderdale describe a free interactive circular geoboard environment that can lead learners to pose mathematical questions.
### Isosceles Triangles
##### Stage: 3 Challenge Level:
Draw some isosceles triangles with an area of $9$cm$^2$ and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw?
### Triangles in a Square
##### Stage: 4 Challenge Level:
Given that ABCD is a square, M is the mid point of AD and CP is perpendicular to MB with P on MB, prove DP = DC.
### Farhan's Poor Square
##### Stage: 4 Challenge Level:
From the measurements and the clue given find the area of the square that is not covered by the triangle and the circle.
### Pareq Calc
##### Stage: 4 Challenge Level:
Triangle ABC is an equilateral triangle with three parallel lines going through the vertices. Calculate the length of the sides of the triangle if the perpendicular distances between the parallel. . . .
### Pareq Exists
##### Stage: 4 Challenge Level:
Prove that, given any three parallel lines, an equilateral triangle always exists with one vertex on each of the three lines.
### Lens Angle
##### Stage: 4 Challenge Level:
Find the missing angle between the two secants to the circle when the two angles at the centre subtended by the arcs created by the intersections of the secants and the circle are 50 and 120 degrees.
http://en.wikiversity.org/wiki/User:Egm6321.f09.Team1/HW7
# Problem 1: Odd and Even Solutions of Legendre DE
From Lecture Slide 37-1.
## Given
From lecture slide 31-3, general expression for Legendre Polynomial is given by,
$\displaystyle P_n(x) = \sum_{i=0}^{\lfloor n/2 \rfloor} \frac{1 \cdot 3 \dots \left(2n - 2i - 1\right)}{2^i i! (n-2i)!} (-1)^i x^{n-2i},$ $\displaystyle (Eq. 1)$
and from lecture slide 37-1, the general expression for the second solution of Legendre Differential Equation is given as,
$\displaystyle Q_n(x) = P_n(x) \tanh^{-1}(x) - 2 \sum_{j=1,3,5...}^{J} \frac{2n-2j+1}{(2n-j+1)j} P_{n-j}(x),$ $\displaystyle (Eq. 2)$
where $\displaystyle J$ is given by,
$\displaystyle J := 1 + 2 \left\lfloor \frac{n-1}{2} \right\rfloor .$ $\displaystyle (Eq. 3)$
## Find
Using Equation (2), to show when $\displaystyle Q_n$ is even or odd, depending on whether "$\displaystyle n$" is even or odd.
## Solution
We know, from the odd and even property of $\displaystyle P_n$, that $\displaystyle P_n$ is odd when $\displaystyle n$ is odd and even when $\displaystyle n$ is even, and the fact that
$\displaystyle \tanh^{-1}(x),$ $\displaystyle (Eq. 4)$
is odd.
(i). When $\displaystyle n$ is odd.
When $\displaystyle n$ is odd, $\displaystyle P_n$ is odd and so,
$\displaystyle P_n(x)\tanh^{-1}(x),$
is even.
So, when $\displaystyle n$ is odd, first term in Equation (2) is even.
When $\displaystyle n$ is odd, value of $\displaystyle n - j$ is even since $\displaystyle j$ values are odd, and so, $\displaystyle P_{n-j}$ is even.
So, when $\displaystyle n$ is odd, all the terms in the summation of Equation (2) are even.
And so,
$\displaystyle \Rightarrow$ So, when $\displaystyle n$ is odd, $\displaystyle Q_n$ is even.
(ii). When $\displaystyle n$ is even.
When $\displaystyle n$ is even, $\displaystyle P_n$ is even and so,
$\displaystyle P_n(x)\tanh^{-1}(x),$
is odd.
So, when $\displaystyle n$ is even, first term in Equation (2) is odd.
When $\displaystyle n$ is even, value of $\displaystyle n - j$ is odd since $\displaystyle j$ values are odd, and so, $\displaystyle P_{n-j}$ is odd.
So, when $\displaystyle n$ is even, all the terms in the summation of Equation (2) are odd.
And so,
$\displaystyle \Rightarrow$ So, when $\displaystyle n$ is even, $\displaystyle Q_n$ is odd.
# Problem 2: Legendre Solutions Plot
From Lecture Slide 37-1.
## Given
From lecture slide 31-3, general expression for Legendre Polynomial is given by,
$\displaystyle P_n(x) = \sum_{i=0}^{\lfloor n/2 \rfloor} \frac{1 \cdot 3 \dots \left(2n - 2i - 1\right)}{2^i i! (n-2i)!} (-1)^i x^{n-2i} ,$ $\displaystyle (Eq. 1)$
and from lecture slide 37-1, the general expression for the second solution of Legendre Differential Equation is given as,
$\displaystyle Q_n(x) = P_n(x) \tanh^{-1}(x) - 2 \sum_{j=1,3,5...}^{J} \frac{2n-2j+1}{(2n-j+1)j} P_{n-j}(x),$ $\displaystyle (Eq. 2)$
where $\displaystyle J$ is given by,
$\displaystyle J := 1 + 2 \left\lfloor \frac{n-1}{2} \right\rfloor .$ $\displaystyle (Eq. 3)$
## Find
Plot {$\displaystyle P_0, P_1, P_2, P_3, P_4$} and {$\displaystyle Q_0, Q_1, Q_2, Q_3, Q_4$} using Matlab.
## Solution
Please refer Problem 6 Solution of HW6, for $\displaystyle P_0(x),\ P_1(x),\ P_2(x),\ P_3(x),\ P_4(x)$.
Please refer [1], p. 33, for expressions of $\displaystyle Q_0(x), Q_1(x), Q_2(x), Q_3(x)$.
The expression for $\displaystyle Q_4(x)$ is given by,
$\displaystyle Q_4= P_4(x) \tanh^{-1}(x) - 2 \sum_{j=1,3}^{} \frac{2n-2j+1}{(2n-j+1)j} P_{n-j}(x)$
$\displaystyle \Rightarrow Q_4= \left[\frac{35}{8} x ^ 4 - \frac{15}{4} x ^ 2 + \frac{3}{8}\right] \tanh^{-1}(x) - 2 \left[ \frac{2(4)-2+1}{(2(4)-1+1)1} P_{3}(x) \right] - 2 \left[ \frac{2(4)-2(3)+1}{(2(4)-3+1)3} P_{1}(x) \right]$
$\displaystyle \Rightarrow Q_4= \left[\frac{35}{8} x ^ 4 - \frac{15}{4} x ^ 2 + \frac{3}{8}\right] \tanh^{-1}(x) - \frac{7}{8} \left[ 5 x ^ 3 - 3 x\right] - \frac{1}{3} x$
Matlab Code:
```clear all;
x = -1:0.01:1;
P_0 = 1*ones(1,length(x));
P_1 = x;
P_2 = (1/2)*(3*x.^2 - 1);
P_3 = (1/2)*(5*x.^3 - 3*x);
P_4 = (35/8)*x.^4 - (15/4)*x.^2 + (3/8);
Q_0 = atanh(x);
Q_1 = x.*atanh(x) - 1;
Q_2 = (1/2)*(3*x.^2 - 1).*atanh(x) - (3/2)*x;
Q_3 = (1/2)*(5*x.^3 - 3*x).*atanh(x) - (5/2)*x.^2 + (2/3);
Q_4 = ((35/8)*x.^4 - (15/4)*x.^2 + (3/8)).*atanh(x) -(7/8)*(5*x.^3 - 3*x) - (1/3)*x;
figure(1);
plot(x,P_0,'--r',x,P_1,'--b',x,P_2,'--g',x,P_3,'--c',x,P_4,'--m');
legend('P_0','P_1','P_2','P_3','P_4');
figure(2);
plot(x,Q_0,'--r',x,Q_1,'--b',x,Q_2,'--g',x,Q_3,'--c',x,Q_4,'--m');
legend('Q_0','Q_1','Q_2','Q_3','Q_4');
```
Plots:
The plots for $\displaystyle P_n(x)$ with $\displaystyle n = 0,1,2,3,4$ are shown below:
File:Prob2-P(x).jpg.
The plots for $\displaystyle Q_n(x)$ with $\displaystyle n = 0,1,2,3,4$ are shown below:
File:Prob2-Q(x).jpg.
# Problem 3: Orthogonality of Legendre Solutions
From Lecture Slide 37-2.
## Given
From lecture slide 31-3, general expression for Legendre Polynomial is given by,
$\displaystyle P_n(x) = \sum_{i=0}^{\lfloor n/2 \rfloor} \frac{1 \cdot 3 \dots \left(2n - 2i - 1\right)}{2^i i! (n-2i)!} (-1)^i x^{n-2i} ,$ $\displaystyle (Eq. 1)$
and from lecture slide 37-1, the general expression for the second solution of Legendre Differential Equation is given as,
$\displaystyle Q_n(x) = P_n(x) \tanh^{-1}(x) - 2 \sum_{j=1,3,5...}^{J} \frac{2n-2j+1}{(2n-j+1)j} P_{n-j}(x),$ $\displaystyle (Eq. 2)$
where $\displaystyle J$ is given by,
$\displaystyle J := 1 + 2 \left\lfloor \frac{n-1}{2} \right\rfloor$ $\displaystyle (Eq. 3)$
## Find
Show that, $\displaystyle < P_n, Q_n > = 0$.
## Solution
From lecture slide 33-1, the inner (scalar) product is given by,
$\displaystyle <G(x),F(x)> = \int_{-1}^{+1} G(x)F(x)dx,$ $\displaystyle (Eq. 4)$
and Equation (4) becomes zero, when $\displaystyle G(x)$ is even and $\displaystyle F(x)$ is odd or when $\displaystyle G(x)$ is odd and $\displaystyle F(x)$ is even.
For $\displaystyle P_n(x)$ and $\displaystyle Q_n(x)$ using Equation (4),
$\displaystyle <P_n(x),Q_n(x)> = \int_{-1}^{+1} P_n(x)Q_n(x)dx.$ $\displaystyle (Eq. 5)$
From Solution to Problem 1 above, $\displaystyle Q_n(x)$ is odd and $\displaystyle P_n(x)$ is even, when $\displaystyle n$ is even. When $\displaystyle n$ is odd, $\displaystyle Q_n(x)$ is even and $\displaystyle P_n(x)$ is odd.
So, for all values of $\displaystyle n$, $\displaystyle Q_n(x)$ is opposite to $\displaystyle P_n(x)$ in oddness and evenness, which means,
$\displaystyle \int_{-1}^{+1} P_n(x)Q_n(x)dx = 0.$
And so,
$\displaystyle <P_n(x),Q_n(x)> = 0$
# Problem 4: Integration by Parts: Legendre Solutions
From lecture slide 37-3.
## Given
From lecture slide 37-2,
$\displaystyle \alpha = \int_{-1}^{+1} L_m \left[ (1-x^2) L'_n \right]' dx$ $\displaystyle (Eq. 1)$
where $\displaystyle L_n$ and $\displaystyle L_m$ are solutions to the Legendre differential equation.
## Find
Show that
$\displaystyle \alpha = -\int_{-1}^{+1} (1-x^2) L'_n L'_m dx,$ $\displaystyle (Eq. 2)$
after integration by parts of $\displaystyle \alpha$ in Equation (1).
## Solution
Let,
$\displaystyle u = L_m \Rightarrow du = L'_m dx,$ $\displaystyle (Eq. 3)$
and,
$\displaystyle dv = \left[ (1-x^2) L'_n \right]' dx.$ $\displaystyle (Eq. 4)$
$\displaystyle \Rightarrow \int dv = \int\left[ (1-x^2) L'_n \right]' dx \Rightarrow v = \left[ (1-x^2) L'_n \right].$
We know,
$\displaystyle \int_{a}^{b} udv = \left[uv \right]_{a}^{b} - \int_{a}^{b} vdu ,$ $\displaystyle (Eq. 5)$
and so, with $\displaystyle a = -1$ and $\displaystyle b = +1$
$\displaystyle \alpha = \int_{-1}^1 L_m \left[ (1-x^2) L'_n \right]' dx = \left[ L_m(1-x^2) L'_n\right]_{-1}^{+1} - \int_{-1}^{+1} (1-x^2) L'_n L'_m dx ,$ $\displaystyle (Eq. 6)$
For values of $\displaystyle x = -1$ and $\displaystyle x = +1$,
$\displaystyle (1-x^2) = 0.$ $\displaystyle (Eq. 7)$
Substituting Equation (7) in first term of Equation (6), first term becomes zero.
$\displaystyle \alpha = \cancelto{0}{\left[ L_m(1-x^2) L'_n\right]_{-1}^{+1}} - \int_{-1}^{+1} (1-x^2) L'_n L'_m dx.$
So,
$\displaystyle \alpha = - \int_{-1}^{+1} (1-x^2) L'_n L'_m dx$
# Problem 5: Attraction of Spheres
From lecture slide 38-2.
## Given
From lecture slide 38-2,
$\displaystyle (r_{PQ})^2 = \sum_{i=1}^{3} \left(x_Q^i - x_P^i\right)^2,$ $\displaystyle (Eq. 1)$
where,
$\displaystyle x_P^1 = x_P,\ x_P^2 = y_P,\ x_P^3 = z_P,\$
and,
$\displaystyle x_Q^1 = x_Q,\ x_Q^2 = y_Q,\ x_Q^3 = z_Q,\$
given by,
$\displaystyle x_P = r_P \cos(\theta_P) \cos(\psi_P);\ y_P = r_P \cos(\theta_P) \sin(\psi_P);\ z_P = r_P \sin(\theta_P),$ $\displaystyle (Eq. 2)$
and,
$\displaystyle x_Q = r_Q \cos(\theta_Q) \cos(\psi_Q);\ y_Q = r_Q \cos(\theta_Q) \sin(\psi_Q);\ z_Q = r_Q \sin(\theta_Q).$ $\displaystyle (Eq. 3)$
## Find
Show that,
$\displaystyle (r_{PQ})^2 = (r_P)^2 + (r_Q)^2 - 2 (r_P) (r_Q) \cos\gamma,$ $\displaystyle (Eq. 4)$
where,
$\displaystyle \cos\gamma = \cos(\theta_Q) \cos(\theta_P) \cos(\psi_Q-\psi_P) + \sin(\theta_Q)\sin(\theta_P)$ $\displaystyle (Eq. 5)$
## Solution
Equation (1) can be written as,
$\displaystyle (r_{PQ})^2 = \left(x_Q - x_P\right)^2 + \left(y_Q - y_P\right)^2 + \left(z_Q - z_P\right)^2.$ $\displaystyle (Eq. 6)$
Substituting Equations (2) and (3) in (6),
$\displaystyle \begin{align} \Rightarrow (r_{PQ})^2 = & \left[r_Q \cos(\theta_Q) \cos(\psi_Q) - r_P \cos(\theta_P) \cos(\psi_P)\right]^2 + \\ & \left[r_Q \cos(\theta_Q) \sin(\psi_Q) - r_P \cos(\theta_P) \sin(\psi_P)\right]^2 + \\ & \left[r_Q \sin(\theta_Q) - r_P \sin(\theta_P)\right]^2 \end{align}$
$\displaystyle \begin{align} \Rightarrow (r_{PQ})^2 = & \left[r_Q^2 \cos^2(\theta_Q) \cos^2(\psi_Q) + r_P^2 \cos^2(\theta_P) \cos^2(\psi_P) - 2 r_P r_Q \cos(\theta_Q) \cos(\psi_Q) \cos(\theta_P) \cos(\psi_P) \right] + \\ & \left[r_Q^2 \cos^2(\theta_Q) \sin^2(\psi_Q) + r_P^2 \cos^2(\theta_P) \sin^2(\psi_P) - 2 r_P r_Q \cos(\theta_Q) \sin(\psi_Q) \cos(\theta_P) \sin(\psi_P)\right] + \\ & \left[r_Q^2 \sin^2(\theta_Q) + r_P^2 \sin^2(\theta_P) - 2 r_P r_Q \sin(\theta_Q) \sin(\theta_P)\right] \end{align}$
$\displaystyle \begin{align} \Rightarrow (r_{PQ})^2 = &\ r_Q^2 \cos^2(\theta_Q) \cancelto{1}{\left[ \cos^2(\psi_Q) + \sin^2(\psi_Q) \right]} + r_Q^2 \sin^2(\theta_Q) - 2 r_P r_Q \cos(\theta_Q) \cos(\theta_P) \left [\cos(\psi_Q) \cos(\psi_P) \right] + \\ &\ r_P^2 \cos^2(\theta_P) \cancelto{1}{\left[ \cos^2(\psi_P) + \sin^2(\psi_P) \right]} + r_P^2 \sin^2(\theta_P) - 2 r_P r_Q \cos(\theta_Q)\cos(\theta_P) \left [\sin(\psi_Q) \sin(\psi_P)\right] - 2 r_P r_Q \sin(\theta_Q) \sin(\theta_P) \end{align}$
$\displaystyle \begin{align} \Rightarrow (r_{PQ})^2 = &\ r_Q^2 \cancelto{1}{\left[\cos^2(\theta_Q) + \sin^2(\theta_Q) \right]} - 2 r_P r_Q \cos(\theta_Q) \cos(\theta_P) \underbrace{\left [\cos(\psi_Q) \cos(\psi_P) + \sin(\psi_Q) \sin(\psi_P)\right]}_{=\ \cos(\psi_Q-\psi_P)} + \\ &\ r_P^2 \cancelto{1}{\left[ \cos^2(\theta_P) + \sin^2(\theta_P) \right]} - 2 r_P r_Q \sin(\theta_Q) \sin(\theta_P) \end{align}$
$\displaystyle \begin{align} \Rightarrow (r_{PQ})^2 = &\ r_Q^2 + r_P^2 - 2 r_P r_Q \left[\underbrace{\cos(\theta_Q) \cos(\theta_P) {\cos(\psi_Q-\psi_P)} + \sin(\theta_Q)\sin(\theta_P)}_{=\ \cos\gamma} \right] \end{align}$
And so,
$\displaystyle (r_{PQ})^2 = (r_P)^2 + (r_Q)^2 - 2 (r_P) (r_Q) \cos\gamma$
# Problem 6: Binomial Series
From lecture note slide 38-4.
## Given
The binomial series expansion is
$\displaystyle (x+y)^r = \sum_{k=0}^\infty \begin{pmatrix} r \\ k \end{pmatrix} x^{r-k} y^k$ $\displaystyle (Eq. 1)$
where
$\displaystyle \begin{pmatrix} r \\ k \end{pmatrix} = \frac{r(r-1) \cdot\cdot\cdot (r-k+1)}{k!}$ $\displaystyle (Eq. 2)$
## Find
Use Eqs. 1 and 2 to show that
$\displaystyle (1-x)^{-1/2} = \sum_{i=0}^\infty \alpha_i x^i$ $\displaystyle (Eq. 3)$
where
$\displaystyle \alpha_i = \frac{1 \cdot 3 \cdot ... \cdot (2i - 1)}{2 \cdot 4 \cdot ... \cdot (2i)}$ $\displaystyle (Eq. 4)$
## Solution
Using Eq. 1, we can rewrite the LHS of Eq. 3 as
$\displaystyle (1-x)^{-1/2} = \sum_{k=0}^\infty \begin{pmatrix} -1/2 \\ k \end{pmatrix} 1^{-1/2-k} (-x)^k$ $\displaystyle (Eq. 5)$
We can expand this
$\displaystyle (1-x)^{-1/2} = \sum_{k=0}^\infty \frac{(-1/2)(-3/2)\cdot\cdot\cdot(1/2-k)}{k!} (-x)^k$ $\displaystyle (Eq. 6)$
The number of factors in the numerator is $\displaystyle k$, hence we can rewrite this as
$\displaystyle (1-x)^{-1/2} = \sum_{k=0}^\infty \frac{(1)(3)\cdot\cdot\cdot(2k-1)}{(-2)^k k!} (-x)^k$ $\displaystyle (Eq. 7)$
Change $\displaystyle k \rightarrow i$, and cancel two minus signs
$\displaystyle (1-x)^{-1/2} = \sum_{i=0}^\infty \frac{(1)(3)\cdot\cdot\cdot(2i-1)}{2^i i!} x^i$ $\displaystyle (Eq. 8)$
The denominator can then be expanded
$\displaystyle (1-x)^{-1/2} = \sum_{i=0}^\infty \underbrace{\frac{1 \cdot 3\cdot\cdot\cdot(2i-1)}{2 \cdot 4 \cdot \cdot \cdot (2i)}}_{\alpha_i} x^i$ $\displaystyle (Eq. 9)$
So that
$\displaystyle (1-x)^{-1/2} = \sum_{i=0}^\infty \alpha_i x^i$ $\displaystyle (Eq. 10)$
where
$\displaystyle \alpha_i = \frac{1 \cdot 3\cdot\cdot\cdot(2i-1)}{2 \cdot 4 \cdot \cdot \cdot (2i)}$ $\displaystyle (Eq. 11)$
# Problem 7: Generating Function for Legendre Polynomials
From lecture note slide 39-2.
## Given
We have that
$\displaystyle A(\mu, \rho) := 1 - 2 \mu \rho + \rho^2$ $\displaystyle (Eq. 1)$
and the binomial expansion
$\displaystyle (1-x)^{-1/2} = \sum_{i=0}^\infty \alpha_i x^i$ $\displaystyle (Eq. 2)$
where
$\displaystyle \alpha_i = \frac{1 \cdot 3 \cdot ... \cdot (2i - 1)}{2 \cdot 4 \cdot ... \cdot (2i)}$ $\displaystyle (Eq. 3)$
Using Eqs. 1 and 2, we can expand the expression
$\displaystyle [A(\mu,\rho)]^{-1/2} = \alpha_0 + \alpha_1(2\mu \rho- \rho^2) + \alpha_2(2\mu \rho- \rho^2)^2 + ...$
$\displaystyle = \underbrace{\alpha_0}_{P_0(\mu)} + \underbrace{2 \mu \alpha_1}_{P_1(\mu)} \rho + \underbrace{(- \alpha_1 + 4 \mu^2 \alpha_2)}_{P_2(\mu)} \rho^2 + ...$ $\displaystyle (Eq. 4)$
## Find
Continue the expansion given by Eq. 4 to yield $\displaystyle P_3$, $\displaystyle P_4$, and $\displaystyle P_5$, and compare the result to that obtained by Eq. 7 on lecture note slide 31-3
$\displaystyle P_n(x) = \sum_{i=0}^{[n/2]} \frac{1 \cdot 3 \cdot ... \cdot (2n - 2i - 1)}{2^i i! (n-2i)!} (-1)^i x^{n-2i}$ $\displaystyle (Eq. 5)$
## Solution
We need the first 6 terms of the expansion
$\displaystyle [A(\mu,\rho)]^{-1/2} = \alpha_0 + \alpha_1(2\mu \rho- \rho^2) + \alpha_2(2\mu \rho- \rho^2)^2 + \alpha_3(2\mu \rho- \rho^2)^3 + \alpha_4(2\mu \rho- \rho^2)^4 + \alpha_5(2\mu \rho- \rho^2)^5 + ...$
$\displaystyle = \alpha_0$
$\displaystyle + \alpha_1(2\mu \rho - \rho^2)$
$\displaystyle + \alpha_2(4\mu^2 \rho^2 - 4\mu \rho^3 + \rho^4)$
$\displaystyle + \alpha_3 (8\mu^3 \rho^3 - 12\mu^2 \rho^4 + 6\mu \rho^5 - \rho^6)$
$\displaystyle + \alpha_4 (16\mu^4 \rho^4 - 32\mu^3 \rho^5 + 24\mu^2 \rho^6 - 8 \mu \rho^7 + \rho^8)$
$\displaystyle + \alpha_5 (32\mu^5 \rho^5 - 80 \mu^4 \rho^6 + 80 \mu^3 \rho^7 - 40 \mu^2 \rho^8 + 10 \mu \rho^9 - \rho^{10}) + ...$
$\displaystyle = \alpha_0 + 2 \mu \alpha_1 \rho$
$\displaystyle + (-\alpha_1 + 4 \alpha_2 \mu^2) \rho^2$
$\displaystyle + (- 4 \alpha_2 \mu + 8 \alpha_3 \mu^3) \rho^3$
$\displaystyle + (\alpha_2 - 12 \alpha_3 \mu^2 + 16 \alpha_4 \mu^4) \rho^4$
$\displaystyle + (6 \alpha_3 \mu - 32 \alpha_4 \mu^3 + 32 \alpha_5 \mu^5) \rho^5 + ...$ $\displaystyle (Eq. 6)$
Using Eq. 3, we have that
$\displaystyle \alpha_0 = 1 \quad \alpha_1 = \frac{1}{2} \quad \alpha_2 = \frac{3}{8} \quad \alpha_3 = \frac{15}{48} \quad \alpha_4 = \frac{105}{384} \quad \alpha_5 = \frac{945}{3840} \quad$
we can substitute this into Eq. 6
$\displaystyle = 1 + \mu \rho$
$\displaystyle + \left(-\frac{1}{2} + \frac{3}{2} \mu^2\right) \rho^2$
$\displaystyle + \left(- \frac{3}{2} \mu + \frac{5}{2} \mu^3\right) \rho^3$
$\displaystyle + \left(\frac{3}{8} - \frac{15}{4} \mu^2 + \frac{35}{8} \mu^4\right) \rho^4$
$\displaystyle + \left(\frac{15}{8} \mu - \frac{35}{4} \mu^3 + \frac{63}{8} \mu^5\right) \rho^5 + ...$ $\displaystyle (Eq. 7)$
From the above, we can extract
$\displaystyle P_3(\mu) = - \frac{3}{2} \mu + \frac{5}{2} \mu^3$ $\displaystyle (Eq. 8)$
$\displaystyle P_4(\mu) = \frac{3}{8} - \frac{15}{4} \mu^2 + \frac{35}{8} \mu^4$ $\displaystyle (Eq. 9)$
$\displaystyle P_5(\mu) = \frac{15}{8} \mu - \frac{35}{4} \mu^3 + \frac{63}{8} \mu^5$ $\displaystyle (Eq. 10)$
Using Eq. 5, we get
$\displaystyle P_3(x) = \sum_{i=0}^1 \frac{1 \cdot 3 \cdot ... \cdot (2n - 2i - 1)}{2^i i! (n-2i)!} (-1)^i x^{n-2i}$ $\displaystyle (Eq. 11)$
$\displaystyle P_3(x) = \frac{1 \cdot 3 \cdot 5}{ 3!} x^3 - \frac{1 \cdot 3 }{2} x$ $\displaystyle (Eq. 12)$
$\displaystyle P_3(x) = \frac{5}{ 2} x^3 - \frac{3}{2} x$ $\displaystyle (Eq. 13)$
$\displaystyle P_4(x) = \sum_{i=0}^2 \frac{1 \cdot 3 \cdot ... \cdot (2n - 2i - 1)}{2^i i! (n-2i)!} (-1)^i x^{n-2i}$ $\displaystyle (Eq. 14)$
$\displaystyle P_4(x) = \frac{1 \cdot 3 \cdot ... \cdot 7}{4!} x^4 - \frac{1 \cdot 3 \cdot 5}{4} x^2 + \frac{1 \cdot 3 }{8}$ $\displaystyle (Eq. 15)$
$\displaystyle P_4(x) = \frac{35}{8} x^4 - \frac{15}{4} x^2 + \frac{3}{8}$ $\displaystyle (Eq. 16)$
$\displaystyle P_5(x) = \sum_{i=0}^2 \frac{1 \cdot 3 \cdot ... \cdot (2n - 2i - 1)}{2^i i! (n-2i)!} (-1)^i x^{n-2i}$ $\displaystyle (Eq. 17)$
$\displaystyle P_5(x) = \frac{1 \cdot 3 \cdot ... \cdot 9}{5!} x^5 - \frac{1 \cdot 3 \cdot ... \cdot 7}{2 \cdot 3!} x^3 + \frac{1 \cdot 3 \cdot 5}{8} x$ $\displaystyle (Eq. 18)$
$\displaystyle P_5(x) = \frac{63}{8} x^5 - \frac{35}{4} x^3 + \frac{15}{8} x$ $\displaystyle (Eq. 19)$
We have confirmed that the methods are equivalent, at least for terms up to $\displaystyle P_5$.
# Notes and References
1. King et al., "Differential Equations (Linear, Nonlinear, Ordinary, Partial)", Cambridge University Press, 2003
# Contributing Team Members
Egm6321.f09.Team1.andy 12:03, 29 November 2009 (UTC)
Egm6321.f09.Team1.sallstrom 18:48, 2 December 2009 (UTC)
Egm6321.f09.Team1.vasquez 05:09, 5 December 2009 (UTC)
Egm6321.f09.Team1.AH 15:59, 8 December 2009 (UTC)
http://math.stackexchange.com/questions/107154/counter-example-for-a-result-of-intersection-of-subspaces?answertab=oldest
# Counter example for a result of intersection of subspaces
I am struggling with this question from Halmos's text, please ignore the imperative language.
"Suppose that $L, M$ and $N$ are subspaces of a vector space. Show that the equation
$$L \cap (M + N) = (L \cap M) + (L \cap N)$$
is not necessarily true.
Since each of these subspaces contains the origin, clearly their intersections cannot be empty. I wasn't able to formulate an example where this result didn't hold. Any help would be highly appreciated.
-
What kind of examples have you tried? – Qiaochu Yuan Feb 8 '12 at 18:31
As a first step, try to show that one side is a subspace of the other. What goes wrong when you try to prove the reverse inclusion? – Aaron Feb 8 '12 at 18:32
Hint: If $M$ is the whole space, this equation becomes $L = L + (L \cap N)$. Can you finish it from here? – student Feb 8 '12 at 18:35
@Leandro Could i do subtract L from both sides and come to the conclusion (L∩N) = a null set which is obviously not true as the origin belongs to all subspaces and since intersection of two subspaces is also a subspace. Hence we have a contradiction if this result was true. – Hardy Feb 8 '12 at 18:49
## 1 Answer
Work for example in $\mathbb{R}^2$.
Let $M$ be the set of multiples of the vector $(1,0)$, let $N$ be the set of multiples of $(0,1)$. It's your turn, you can choose $L$.
-
Well choosing L as the set of multiples of the vector (1,1) would show the result to be true. As we get vectors along with the axis on both sides. So obviously that could n't be the choice for our L. Any other hints ? – Hardy Feb 8 '12 at 18:58
@Hardy: Maybe I am misinterpreting $+$. My interpretation is that it is the space generated by $M$ and $N$. With that interpretation, and $M$ and $N$ as in my answer, and $L$ as you mention, we have that $M+N$ is all of $\mathbb{R}^2$. So the left-hand side is just $L$. Look now at the right-hand side. We have $L\cap M$ contains only the $0$ vector, and the same is true of $L\cap N$. So the sum of the two also only contains the $0$ vector. Thus left in this case is $L$, right is $\{0\}$, not equal. If I am misinterpreting the meaning of $+$, please tell me. – André Nicolas Feb 8 '12 at 19:46
Firstly thank you for your reply. I was interpreting M + N as that too, but Should n't L ∩ M be all the vectors along the x-axis and similar argument for the y-axis one, although i agree i had stuffed up in interpreting M + N as all of R^2 myself. I was interpreting M + N to be just the vectors along the axis which i am unsure about now. Your last reply seems much more feasible. I am a newbie. – Hardy Feb 8 '12 at 19:59
@Hardy: Definitely $L\cap M$ just contains the $0$ vector. Think of it geometrically. You can think of $L$ as all points on the line $y=x$, and $M$ as the points on the $x$-axis. The only point they have in common is the origin. And I checked the standard usage of $U+V$, when both are subspaces $W$. It is the set of all points of $W$ of the shape $u+v$, where $u\in U$ and $v\in V$. – André Nicolas Feb 8 '12 at 21:03
http://cs.stackexchange.com/questions/9781/how-to-find-l-q-emptyset-a-state-that-is-not-reachable-for-any-given-strin
# How to find $L_q = \emptyset$, a state that is not reachable for any given string?
In the book Introduction to Languages and the Theory of Computation, I'm reading section 2.6 on how to minimize the number of states in an FA.
I'm having trouble understanding a notation defined as $L_q$. Here's what the book says:
Suppose we have a finite automaton $M = (Q, \Sigma, q_0, A, \delta)$ accepting $L \subseteq \Sigma^*$. For a state $q$ of $M$, we have introduced the notation $L_q$ to denote the set of strings that cause $M$ to be in state $q$:
$$L_q = \{ x \in \Sigma^* | \delta^*(q_0, x) = q\}$$.
The first step in reducing the number of states of M as much as possible is to eliminate every state $q$ for which $L_q$ = $\emptyset$, along with transitions from these states. None of these states is reachable from the initial state, and eliminating them does not change the language accepted by $M$.
I tried looking at this automaton to try to understand this definition:
How can any of the states $1$ through $5$ be $L_q = \emptyset$ if I can find a string that can reach every state for this FA?
That is I can reach state $2$ with string $a$, and state $5$ with string $ab$, etc. Is this a correct way to approach this?
-
## 1 Answer
You're right. There is no such state with $L_q = \emptyset$
The first step in the minimization algorithm is: "delete all nonreachable states".
-
I thought so... I just got confused because I was doing one of the exercises, and all of the FAs given had no unreachable states, so I thought maybe I was misunderstanding the definition. – deezy Feb 14 at 22:03
You weren't. If you look at the definition of FA, it is perfectly legal to have states that aren't reachable at all. That makes the definition simpler, while making no difference in the languages handled. – vonbrand Feb 14 at 22:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9561356902122498, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/100812/solving-a-limit-with-taylor-series
|
# Solving a limit with Taylor Series
I have trouble with the method of solving limits using Taylor series.
Consider for example these limits: $$\begin{align*} \lim_{x\to 0^-} \frac{1 + \log(1 + \sin\sqrt{x}) - e^{\sqrt{x}}}{\tan\sqrt{x} -\sin\sqrt{x}} &= i\infty\\ \lim_{x\to 0^+}\frac{1 + \log(1 + \sin\sqrt{x}) - e^{\sqrt{x}}}{\tan\sqrt{x} -\sin\sqrt{x}} &= -\infty \end{align*}$$
In the numerator there are three functions: $\ln()$, $\sin()$ and $e^x$. Now, I know that the functions on the same side of the fraction line must be expanded to the same order $o(x)$, but is the same true for the nested function? Must the $\sin$ function be expanded to the same order $o(x)$ as the other two?
-
Why don't you use L'Hopital's Rule? – Peter Tamaroff Apr 2 '12 at 2:23
## 1 Answer
I'm not entirely sure where your problem is. Your idea is (presumably) to expand both numerator and denominator in Taylor series about 0.
Let's suppose we are computing the limit through positive numbers. Since $\sqrt{x}$ is continuous, we can then just replace all the $\sqrt{x}$ by $x$.
Now from the series of $\tan$ and $\sin$, you find that the denominator is $x^3/2 + o(x^3)$.
I contend your problem is with finding a Taylor series for $\log(1+\sin{x})$. There are various ways of doing this. It will turn out that we need the first three terms. The constant term is obviously zero. To obtain two more terms, it would be possible to just differentiate twice and plug in zero. Alternatively (and I contend this is what your question is about), you can write $\log(1+y) = y - y^2/2 + o(y^2)$. You now want to substitute $y = x + o(x^2)$ (this is sine). This yields $\log(1+\sin{x}) = x - x^2/2 + o(x^2)$. I suppose you might be worried about this substitution step; however if you recall the definition of $o(...)$ you may easily verify this is correct.
Putting this together with the series for $e^x$ shows that the numerator is $-x^2 + o(x^2)$. You now have enough information to determine the limit.
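As a cross-check, the same expansions and the limit can be confirmed with a CAS (a sketch assuming SymPy, with $\sqrt{x}$ already replaced by $x$ as above):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
num = 1 + sp.log(1 + sp.sin(x)) - sp.exp(x)
den = sp.tan(x) - sp.sin(x)

print(sp.series(num, x, 0, 3).removeO())   # -x**2
print(sp.series(den, x, 0, 4).removeO())   # x**3/2
print(sp.limit(num / den, x, 0, '+'))      # -oo
```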
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9434753060340881, "perplexity_flag": "head"}
|
http://en.wikipedia.org/wiki/Assignment_problem
|
# Assignment problem
The assignment problem is one of the fundamental combinatorial optimization problems in the branch of optimization or operations research in mathematics. It consists of finding a maximum weight matching in a weighted bipartite graph.
In its most general form, the problem is as follows:
There are a number of agents and a number of tasks. Any agent can be assigned to perform any task, incurring some cost that may vary depending on the agent-task assignment. It is required to perform all tasks by assigning exactly one agent to each task and exactly one task to each agent in such a way that the total cost of the assignment is minimized.
If the numbers of agents and tasks are equal and the total cost of the assignment for all tasks is equal to the sum of the costs for each agent (or the sum of the costs for each task, which is the same thing in this case), then the problem is called the linear assignment problem. Commonly, when speaking of the assignment problem without any additional qualification, then the linear assignment problem is meant.
## Algorithms and generalizations
The Hungarian algorithm is one of many algorithms that have been devised that solve the linear assignment problem within time bounded by a polynomial expression of the number of agents.
The assignment problem is a special case of the transportation problem, which is a special case of the minimum cost flow problem, which in turn is a special case of a linear program. While it is possible to solve any of these problems using the simplex algorithm, each specialization has more efficient algorithms designed to take advantage of its special structure. If the cost function involves quadratic inequalities it is called the quadratic assignment problem.
## Example
Suppose that a taxi firm has three taxis (the agents) available, and three customers (the tasks) wishing to be picked up as soon as possible. The firm prides itself on speedy pickups, so for each taxi the "cost" of picking up a particular customer will depend on the time taken for the taxi to reach the pickup point. The solution to the assignment problem will be whichever combination of taxis and customers results in the least total cost.
However, the assignment problem can be made rather more flexible than it first appears. In the above example, suppose that there are four taxis available, but still only three customers. Then a fourth dummy task can be invented, perhaps called "sitting still doing nothing", with a cost of 0 for the taxi assigned to it. The assignment problem can then be solved in the usual way and still give the best solution to the problem.
Similar tricks can be played in order to allow more tasks than agents, tasks to which multiple agents must be assigned (for instance, a group of more customers than will fit in one taxi), or maximizing profit rather than minimizing cost.
## Formal mathematical definition
The formal definition of the assignment problem (or linear assignment problem) is
Given two sets, A and T, of equal size, together with a weight function C : A × T → R. Find a bijection f : A → T such that the cost function:
$\sum_{a\in A}C(a,f(a))$
is minimized.
Usually the weight function is viewed as a square real-valued matrix C, so that the cost function is written down as:
$\sum_{a\in A}C_{a,f(a)}$
The problem is "linear" because the cost function to be optimized as well as all the constraints contain only linear terms.
The problem can be expressed as a standard linear program with the objective function
$\sum_{i\in A}\sum_{j\in T}C(i,j)x_{ij}$
subject to the constraints
$\sum_{j\in T}x_{ij}=1\text{ for }i\in A, \,$
$\sum_{i\in A}x_{ij}=1\text{ for }j\in T, \,$
$x_{ij}\ge 0\text{ for }i,j\in A,T. \,$
The variable $x_{ij}$ represents the assignment of agent $i$ to task $j$, taking value 1 if the assignment is done and 0 otherwise. This formulation allows also fractional variable values, but there is always an optimal solution where the variables take integer values. This is because the constraint matrix is totally unimodular. The first constraint requires that every agent is assigned to exactly one task, and the second constraint requires that every task is assigned exactly one agent.
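As an illustration, a linear assignment instance can be solved directly with an off-the-shelf routine. The sketch below assumes SciPy is available and uses a made-up 3×3 cost matrix (think of the taxi pickup times above):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i][j] = cost of assigning agent i to task j
cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])

rows, cols = linear_sum_assignment(cost)   # minimizes the total cost
print(cols, cost[rows, cols].sum())        # [1 0 2] 5  -> agent i is assigned task cols[i]
```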
## Further reading
• Brualdi, Richard A. (2006). Combinatorial matrix classes. Encyclopedia of Mathematics and Its Applications 108. Cambridge: Cambridge University Press. ISBN 0-521-86565-4. Zbl 1106.05001.
• Burkard, Rainer; M. Dell'Amico, S. Martello (2012). Assignment Problems (Revised reprint). SIAM. ISBN 978-1-61197-222-1.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 9, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9286603927612305, "perplexity_flag": "head"}
|
http://www.askamathematician.com/2012/03/q-is-there-an-intuitive-proof-for-the-chain-rule/
|
# Q: Is there an intuitive proof for the chain rule?
Posted on March 13, 2012
Physicist: The chain rule is a tool from calculus that says that if you have one function “nested” inside of another, $f(g(x))$, then the derivative of the whole mess is given by $\frac{df}{dx} (g(x))\cdot \frac{dg}{dx} (x)$. There are a number of ways to prove this, but one of the more enlightening ways to look at the chain rule (without rigorously proving it) is to look at what happens to any function, $f(x)$, when you muck about with the argument (the “x” part).
Doubling the "argument" of a function scrunches it up. As a result the slope at each corresponding part of the new function is doubled.
When you multiply the argument by some amount, the graph of the function gets squished by the same amount. If you, for example, plug in "x=3" to f(2x), that's exactly the same as plugging in "x=6" to $f(x)$. For $f(2x)$, everything happens at half the original x value.
However, while $f(2x)$ when x=3 is the same as $f(x)$ when x=6, the same is not true of their slopes. The slope (derivative) is “rise over run” and the run just became half as long, so the slope just got twice as big. Scrunching a graph makes the slope steeper (see picture above).
So, the slope of $f(2x)$ at x=3 is actually double the slope of $f(x)$ at x=6. You can write this in general as $\frac{d}{dx} \left[ f(2x) \right] = \frac{df}{dx}(2x)\cdot 2$.
Here’s the calculus leap: replacing the x in $f(x)$ with 2x clearly means that you’re running through the function twice as fast, so when you take the derivative you just multiply by two to deal with the scrunching. But, if you instead replace x with a more complicated function, $g(x)$, then the amount of speed up and slow down depends on $g(x)$. If $g(x)$ has a slope of 2 at some point, then it’s acting like 2x and you get the same “times two” slope. If it’s got a slope of 3 or 1/5, then the slope of $f$ at the corresponding point will be multiplied by 3 or 1/5 respectively.
sin(x) in blue and sin(x^2/4) in green. x^2 starts slow and gets faster and faster, and as a result the green line gets steeper and steeper.
So, to find the slope of $f(g(x))$, which is just the derivative, $\frac{d}{dx}\left[f(g(x))\right]$ you first find what the slope of $f$ would be at the appropriate x value, $\frac{df}{dx}(g(x))$, and then multiply by how much $g$ is speeding things up or slowing things down (scrunching or expanding). The slope of $g$ is just the derivative, so you’re multiplying by $\frac{dg}{dx}(x)$.
Boom! Chain rule: $\frac{d}{dx}\left[ f(g(x))\right] = \frac{df}{dx} (g(x))\cdot \frac{dg}{dx} (x)$
It’s worth pointing out that, like all calc rules, it doesn’t matter that this rule only talks about two functions. If you have something like $f(g(h(x)))$, then you can treat $g(h(x))$ as one function, and you’ll find that after running through the chain rule once you’ll be faced with another, simpler, chain rule problem:
$\frac{d}{dx}\left[f(g(h(x)))\right] = \frac{df}{dx}(g(h(x)))\cdot\frac{d}{dx}\left[g(h(x))\right]$
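A quick numerical sanity check of the formula (a sketch; the choice f = sin and g(x) = x²/4 just mirrors the figure above):

```python
import math

f, df = math.sin, math.cos                       # f and its derivative
g, dg = (lambda x: x**2 / 4), (lambda x: x / 2)  # g and its derivative

def num_deriv(h, x, eps=1e-6):
    """Central-difference approximation of h'(x)."""
    return (h(x + eps) - h(x - eps)) / (2 * eps)

x = 1.3
print(num_deriv(lambda t: f(g(t)), x))   # ≈ 0.5928  (slope of f(g(x)) measured directly)
print(df(g(x)) * dg(x))                  # ≈ 0.5928  (same value via the chain rule)
```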
### 8 Responses to Q: Is there an intuitive proof for the chain rule?
1. Santo D'Agostino says:
Very nice physical/geometric interpretation!
2. Asa says:
Is this a proof of the chain rule?:
Because dg(x)/dg(x) = 1 ,
[df(g(x))/dx] = [df(g(x))/dx] * 1 = [df(g(x))/dx] * [dg(x)/dg(x)] = [df(g(x)) * g(x)]/[dx * g(x)] = [df(g(x))/dg(x)] * [dg(x)/dx]
I know that infinitesimals can work funny sometimes but this seems very reasonable.
3. The Physicist says:
It’s a good thumbnail sketch but, as you say, infinitesimals work funny sometimes. For example, the proof here (and in the post) breaks down when multi-variable functions are considered.
4. Eli Bashwinger says:
That was unequivocally brilliant.
5. David Liao says:
I made a video tutorial that might be useful (see the segment from 11m04s to 14m43s):
https://vimeo.com/channels/lookatphysics/34827074
Best,
DL (lookatphysics.com)
6. Sreedev Narayanan says:
I Really Needed The Proof’s Like This!
Not The Booring Proofs In My Text Books!
This Is What A Student Who Needs If He Needs To Study Mathematics Conceptually!
Thank’s!
It Gives The Clear Idea!
7. Pingback: TWSB: Back on the Chain Gang « Eigenblogger
8. Brendan says:
This is awesome!! My thanks to The Physicist
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 26, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.903685450553894, "perplexity_flag": "middle"}
|
http://mathoverflow.net/users/14969?tab=recent
|
# Giorgio Mossa
1,387 Reputation
1089 views
## Registered User
Name Giorgio Mossa
Member for 2 years
Seen 3 hours ago
Website
Location Earth
Age 25
I'm a math student; in particular I'm interested in algebra, geometry, topology and category theory (especially higher-dimensional category theory) and its applications in mathematics.
May9 awarded ● Yearling
Mar7 revised Natural transformations as categorical homotopies improved answer
Mar2 awarded ● Notable Question
Mar1 revised Natural transformations as categorical homotopies made a correction
Mar1 comment The semicat of morphisms which are neither right nor left invertible: Just a curiosity, how can a class of morphisms which are neither left nor right invertible be a category? Identities are always both right and left invertible.
Jan28 answered In your opinion, what are the relative advantages of n-fold categories and n-categories?
Jan26 answered What is the difference between a function and a morphism?
Dec10 comment Is there a high-concept explanation for why characteristic 2 is special? From the combinatorial point of view I guess that the importance of 2 is linked to the fact that for every set $X$ there's the canonical representation of each subset of $X$ by an element of $2^X$, which by the way is the support of an $F_2$ vector space, the space $\prod_{x \in X} \mathbb Z/2 \mathbb Z$. About other anomalies related to characteristic 2, I wonder if they are due to the fact that in characteristic 2 inverses coincide with identities and so we don't have the (involutive) symmetry with respect to 0.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9158823490142822, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/geometry/87173-find-measure-angle-c.html
|
# Thread:
1. ## Find Measure of Angle C
In triangle ABC, the measure of angle B is twice the measure of angle A. If the measure of angle A is subtracted from the measure of angle C, the difference is 20. Find the measure of angle C.
My answer is 120 degrees.
The book's answer is 60 degrees.
2. $A + B + C = 180 \; \mbox{ and } C-A=20 \; \mbox{ and } -2A+B=0$
$\left[\begin{matrix} 1 & 1 & 1 \\ -1 & 0 & 1 \\ 2 & -1 & 0 \end{matrix}\right] \left[\begin{matrix} A \\ B \\ C \end{matrix}\right] = \left[\begin{matrix} 180 \\ 20 \\ 0 \end{matrix}\right]$
Solving this linear system of equations yields $C = 60$ .
3. ## twig...
Originally Posted by Twig
$A + B + C = 180 \; \mbox{ and } C-A=20 \; \mbox{ and } -2A+B=0$
$\left[\begin{matrix} 1 & 1 & 1 \\ -1 & 0 & 1 \\ 2 & -1 & 0 \end{matrix}\right] \left[\begin{matrix} A \\ B \\ C \end{matrix}\right] = \left[\begin{matrix} 180 \\ 20 \\ 0 \end{matrix}\right]$
Solving this linear system of equations yields $C = 60$ .
I notice that you decided to use matrix algebra to show the answer. Can you set this up for me using a system of linear equations in 3 unknowns?
4. hi
Uhm, is this what you mean?
$\{A+B+C=180\}$
$\{C-A=20\}$
$\{-2A+B=0\}$
PS: If anyone could show the code for making a "large bracket" around all of the equations plz =)
5. ## ok...
Originally Posted by Twig
hi
Uhm, is this what you mean?
$\{A+B+C=180\}$
$\{C-A=20\}$
$\{-2A+B=0\}$
PS: If anyone could show the code for making a "large bracket" around all of the equations plz =)
I will use this system of linear equations in 3 variables to find the answer.
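For reference, a small numerical check of that system (a sketch assuming NumPy is available):

```python
import numpy as np

# A + B + C = 180,  C - A = 20,  -2A + B = 0
M = np.array([[1.0, 1.0, 1.0],
              [-1.0, 0.0, 1.0],
              [-2.0, 1.0, 0.0]])
rhs = np.array([180.0, 20.0, 0.0])

print(np.linalg.solve(M, rhs).round(6))   # [40. 80. 60.] -> angle C measures 60 degrees
```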
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9169737696647644, "perplexity_flag": "middle"}
|
http://gilkalai.wordpress.com/2011/06/15/the-combinatorics-of-cocycles-and-borsuks-problem/?like=1&source=post_flair&_wpnonce=a34a077b61
|
Gil Kalai’s blog
## The Combinatorics of Cocycles and Borsuk’s Problem.
Posted on June 15, 2011 by
## Cocycles
Definition: A $k$-cocycle is a collection of $(k+1)$-subsets such that every $(k+2)$-set $T$ contains an even number of sets in the collection.
Alternative definition: Start with a collection $\cal G$ of $k$-sets and consider all $(k+1)$-sets that contain an odd number of members in $\cal G$.
It is easy to see that the two definitions are equivalent. (This equivalence expresses the fact that the $k$-cohomology of a simplex is zero.) Note that the symmetric difference of two cocycles is a cocycle. In other words, the set of $k$-cocycles form a subspace over Z/2Z, i.e., a linear binary code.
1-cocycles correspond to the set of edges of a complete bipartite graph. (Or, in other words, to cuts in the complete graphs.) The number of edges of a complete bipartite graph on $n$ vertices is of the form $k(n-k)$. There are $2^{n-1}$ 1-cocycles on $n$ vertices altogether, and there are $n \choose k$ cocycles with $k(n-k)$ edges.
2-cocycles were studied under the name “two-graphs”. Their study was initiated by J. J. Seidel.
Let $e(k,n)$ be the number of $k$-cocycles.
Lemma: Two collections of $k$-sets (in the second definition) generate the same $k$-cocycle if and only if their symmetric difference is a $(k-1)$-cocycle.
It follows that $e(k,n)= 2^{{n}\choose {k}}/e(k-1,n).$ So $e(k,n)= 2^{{n-1} \choose {k}}$.
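That count is easy to confirm by brute force for small parameters. Here is a sketch (in Python, with deliberately tiny parameters) that builds every $k$-cocycle from the second definition, checks the count $2^{{n-1}\choose{k}}$, and verifies the parity condition of the first definition:

```python
from itertools import combinations

def cocycles(n, k):
    """All k-cocycles on {0,...,n-1}: start from a collection G of k-sets and
    take the (k+1)-sets containing an odd number of members of G."""
    ksets = list(combinations(range(n), k))
    kp1sets = list(combinations(range(n), k + 1))
    found = set()
    for mask in range(2 ** len(ksets)):
        G = {ksets[i] for i in range(len(ksets)) if mask >> i & 1}
        found.add(frozenset(S for S in kp1sets
                            if sum(R in G for R in combinations(S, k)) % 2 == 1))
    return found

n, k = 5, 2
cs = cocycles(n, k)
print(len(cs))   # 64 = 2^C(4,2), matching e(k,n) = 2^{C(n-1,k)}
# first definition: every (k+2)-set contains an even number of sets of the cocycle
assert all(sum(S in c for S in combinations(T, k + 1)) % 2 == 0
           for c in cs for T in combinations(range(n), k + 2))
```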
A very basic question is:
Problem 1: For $k$ odd what is the maximum number $f(k,n)$ of $(k+1)$-sets of a $k$-cocycle with $n$ vertices?
When $k$ is even, the set of all $(k+1)$-subsets of {1,2,…,n} is a cocycle.
Problem 2: What is the value of $m$ such that the number $ef(k,n,m)$ of $k$-cocycles with $n$ vertices and $m$ $k$-sets is maximum?
When $k$ is even the complement of a cocycle is a cocycle and hence $ef(k,n,m)=ef(k,n,{{n}\choose{k+1}}-m)$. It is likely that in this case $ef(k,n,m)$ is a unimodal sequence (apart from zeroes), but I don’t know if this is known. When $k$ is odd it is quite possible that (again, ignoring zero entries) $ef(k,n,m)$ is unimodal, attaining its maximum when $m=\frac{1}{2} {{n} \choose {k+1}}$.
## Borsuk’s conjecture, Larman’s conjecture and bipartite graphs
Karol Borsuk conjectured in 1933 that every bounded set in $R^d$ can be covered by $d+1$ sets of smaller diameter. David Larman proposed a purely combinatorial special case (that looked much less correct than the full conjecture.)
Larman’s conjecture: Let $\cal F$ be an $r$-intersecting family of $k$-subsets of $\{1,2,\dots, n\}$, namely $\cal F$ has the property that every two sets in the family have at least $r$ elements in common. Then $\cal F$ can be divided into $n$ $(r+1)$-intersecting families.
Larman’s conjecture is a special case of Borsuk’s conjecture: Just consider the set of characteristic vectors of the sets in the family (and note that they all belong to a hyperplane.) The case $r=1$ of Larman’s conjecture is open and very interesting.
A slightly more general case of Borsuk’s conjecture is for sets of 0-1 vectors (or equivalently $\pm 1$ vectors). Again you can consider the question in terms of covering a family of sets by subfamilies. Instead of intersection we should consider symmetric differences.
Borsuk 0-1 conjecture: Let $\cal F$ be a family of subsets of $\{1,2,\dots, n\}$, and suppose that the symmetric difference between every two sets in $\cal F$ has at most $t$ elements. Then $\cal F$ can be divided into $n+1$ families such that the symmetric difference between any pair of sets in the same family is at most $t-1$.
## Cuts and complete bipartite graphs
The construction of Jeff Kahn and myself can be described as follows:
Construction 1: The ground set is the set of edges of the complete graph on $4p$ vertices. The family $\cal F$ consists of all subsets of edges which represent the edge sets of a complete bipartite graph with $2p$ vertices in every part. In this case, $n={{4p} \choose {2}}$, $k=4p^2$, and $r=2p^2$.
It turns out (as observed by A. Nilli) that there is no need to restrict ourselves to balanced bipartite graphs. A very similar construction which performs even slightly better is:
Construction 2: The ground set is the set of edges of the complete graph on $4p$ vertices. The family $\cal F$ consists of all subsets of edges which represent the edge set of a complete bipartite graph.
Let $f(d)$ be the smallest integer such that every set of diameter 1 in $R^d$ can be covered by $f(d)$ sets of smaller diameter. Constructions 1 and 2 show that $f(d) >exp (K\sqrt d)$. We would like to replace $d^{1/2}$ by a larger exponent.
## The proposed constructions.
To get better bounds for Borsuk’s problem we propose to replace complete bipartite graphs with higher odd-dimensional cocycles.
Construction A: Consider all $(2k-1)$-dimensional cocycles of maximum size (or of another fixed size) on the ground set $\{1,2,\dots,n\}$.
Construction B: Consider all $(2k-1)$-dimensional cocycles on the ground set $\{1,2,\dots,n\}$.
## A Frankl-Wilson/Frankl-Rodl type problem for cocycles
Conjecture: Let $\alpha$ be a positive real number. There is $\beta = \beta (k,\alpha)<1$ with the following property. Suppose that
(*) The number of $k$-cocycles on $n$ vertices with $m$ edges is not zero
and that
(**) $m>\alpha\cdot {{n}\choose {k+1}}$, and $m<(1-\alpha){{n}\choose {k+1}}$. (The second inequality is not needed for odd-dimensional cocycles.)
Let $\cal F$ be a family of $k$-cocycles such that no symmetric difference between two cocycles in $\cal F$ has precisely $m$ sets. Then
$|{\cal F}| \le 2^{\beta {{n}\choose {k}}}.$
If true even for 3-dimensional cocycles this conjecture will improve the asymptotic lower bounds for Borsuk’s problem. For example, if true for 3-cocycles it will imply that $f(d) \ge exp (K d^{3/4})$. The Frankl-Wilson and Frankl-Rodl theorems have a large number of other applications, and an extension to cocycles may also have other applications.
## Crossing number of complete graphs, Turan’s (2k+1,2k) problems, and cocycles
The question on the maximum number of sets in a $k$-cocycle when $k$ is odd is related to several other (notorious) open problems.
Let $T(n,r,k)$ be the maximum number of edges in a $k$-uniform hypergraph which does not contain all the $k$-subsets of some $r$-set. When $k>2$ finding or even estimating $T(n,r,k)$ is very difficult. An old conjecture of mine is:
Conjecture: For every even $k$ and every $n$, $T(n,k+1,k)$ is attained (only) by cocycles.
For $k=4$ the question is related to yet another well-known problem: that of the crossing number of a complete graph on $n$ vertices. It is conjectured that the hypergraph whose 4-sets correspond to planar $K_4$s in the proposed construction for $K_n$ with minimum number of crossings is optimal also for the Turan (4,5) problem and also for the maximum size of a 3-cocycle on n vertices.
## Relaxation: what could be the analog of tensor powers and positive definite matrices
Another variation of constructions 1 and 2 is
Construction 3: The set $x\otimes x$ for all $n$-dimensional unit vectors $x$.
This set is simply the set of (normalized) rank-1 definite matrices and its convex hull is the unit ball in the set of all definite matrices. It is quite possible that this set will give better bounds than construction 2 (which can be regarded as the subset of vectors $x\otimes x$ where $x$ is a $\pm 1$ vector), which will reduce the lowest dimension where Borsuk’s conjecture is false to below 100. This will follow from the conjecture on subsets of the unit spheres without two orthogonal vectors, which we discussed on this post. (See also this post.)
What could be an analog of Construction 3 for higher-dimensional cocycles?
Construction C?: Let $x$ be a unit vector whose coordinates are indexed by $k$-subsets of elements from $\{1,2,\dots,n\}$. Consider the vector indexed by $(k+1)$-subsets whose $S$-coordinate is $\prod x_R$ over all $k$-subsets $R$ of $S$. Let $X$ be the set of all such vectors and $Y$ be their convex hull.
$X$ is a sort of linear-programming relaxation of $k$-cocycles. I don’t know if for larger odd $k$'s this is the “correct” relaxation of the set of cocycles and if the diameter is increased when we move from cocycles to $Y$.
Seidel’s original definition of 2-graphs was as a function $f$ from ${\Omega \choose 3}\to {-1,1}$, such that for every quadruple $\alpha,\beta,\gamma,\delta$, $f(\alpha,\beta,\gamma)f(\alpha,\beta,\delta)f(\alpha,\gamma,\delta)f(\beta,\gamma,\delta)=1$. Construction C is based on trying to relax the second definition of cocycles. We can try instead to relax the first definition by replacing the $\pm 1$ vectors by arbitrary real vectors of the same norm.
Construction D?: Consider all functions $f$ from ${\Omega \choose k} \to R$, such that $\sum f^2(x) = {n\choose k}$ and for every $(k+1)$-set $T$ the product of $f(S)$ for all $S \subset T, |S|=k$ is 1.
This seems to give a different “relaxation” even for $k=2$.
Another way to replace hypergraphs by continuous objects is via hypergraph limits.
Question: What are the hypergraph limits of $k$-cocycles for odd $k$?
Question: Can flag-algebras shed light on Problem 1?
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 137, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9040044546127319, "perplexity_flag": "head"}
|
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.jsl/1183746559
|
### Exact Bounds for Lengths of Reductions in Typed $\lambda$-Calculus
Arnold Beckmann
Source: J. Symbolic Logic Volume 66, Issue 3 (2001), 1277-1285.
#### Abstract
We determine the exact bounds for the length of an arbitrary reduction sequence of a term in the typed $\lambda$-calculus with $\beta$-, $\xi$- and $\eta$-conversion. There will be two essentially different classifications, one depending on the height and the degree of the term and the other depending on the length and the degree of the term.
Permanent link to this document: http://projecteuclid.org/euclid.jsl/1183746559
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8298137187957764, "perplexity_flag": "middle"}
|
http://mathematica.stackexchange.com/questions/2357/how-can-i-get-mathematica-to-solve-an-equation-with-multiple-variables?answertab=active
|
# How can I get Mathematica to solve an equation with multiple variables?
How can I get mathematica to solve the non-homogeneous differential equation with undetermined coefficients listed below?
Solve $\ y''(t)+3y'(t) = 2t^4$, using: $\ y_p(t) = t(A_0t^4+A_1t^3+A_2t^2+A_3t+A_4)$
where $\ y_p(t)$ is a particular solution
incidentally, $\ y'(t) = A_4 + 2 A_3 t + 3 A_2 t^2 + 4 A_1 t^3 + 5 A_0 t^4$ and $\ y''(t) = 2 A_3 + 6 A_2 t + 12 A_1 t^2 + 20 A_0 t^3$
I'd like a solution that doesn't use `DSolve` because I can already solve the entire differential equation with that. I just need to see this particular part for verification of my work. Any suggestions?
-
1
I don't understand. Initially, you're giving us a differential equation, what's $y(t)$, an ansatz? And how are the initial conditions working here, you're giving us a) some functions where there should be points and b) the first/second derivative are not suitable initial conditions, zero-th and first are. And finally, what's wrong with DSolve? – David Feb 26 '12 at 19:47
ah. @David, so this question comes from a larger question. involving a non-homogenous differential equation with undetermined coefficients: $\ y''(t)+3y'(t)=2t^4 + t^2e^{-3t} + Sin(3t)$ Right now i'm focusing on the particular solution with the term $\ 2t^4$ hence the above question. As for why I'm not using `DSolve` I want to check my handwritten work. – franklin Feb 26 '12 at 20:30
## 2 Answers
If I understand the question correctly, you are looking for the solution of the differential equation
$$y''(t)+3y'(t) = 2t^4$$
in the form
$$y(t) = t(A_0t^4+A_1t^3+A_2t^2+A_3t+A_4).$$
That is, you need to find the values of the $A_i$ constants that satisfy this equation. Please clarify if this is what you are asking.
To do this in Mathematica, we can define
````y[t_] := t (t^4 Subscript[A, 0] + t^3 Subscript[A, 1] +
t^2 Subscript[A, 2] + t Subscript[A, 3] + Subscript[A, 4])
````
then use `SolveAlways`:
````SolveAlways[y''[t] + 3 y'[t] == 2 t^4, t]
````
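If you want an independent cross-check outside Mathematica, the same coefficient matching can be done in SymPy (a sketch for verification only; it is not part of the accepted `SolveAlways` approach above):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.symbols('A0:5')                       # A0, ..., A4
yp = t * (A[0]*t**4 + A[1]*t**3 + A[2]*t**2 + A[3]*t + A[4])

# y'' + 3 y' - 2 t^4 must vanish identically, so every coefficient in t is zero
residual = sp.expand(sp.diff(yp, t, 2) + 3*sp.diff(yp, t) - 2*t**4)
print(sp.solve(sp.Poly(residual, t).coeffs(), A))
# {A0: 2/15, A1: -2/9, A2: 8/27, A3: -8/27, A4: 16/81}
```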
-
Sorry about the subscripts, I copied the MathML. @franklin please consider posting in Mathematica syntax when possible. – Szabolcs Feb 26 '12 at 20:32
@Szabolcs, that is correct. I am looking for the values of $\ A_i$ for $\ i = {0..4}$ that satisfy the equation. For further clarification, this syntax `t*(t^4*Subscript[A,0])` is preferable to $\ t(t^4A_0)$, right? Thanks so much for the help. – franklin Feb 26 '12 at 20:42
@franklin Actually `A0` is preferable, for the simple reason that it is both readable and easy to paste into Mathematica. If you write something merely to explain, but it'll never need to be pasted into Mathematica, then math notation is of course more readable. – Szabolcs Feb 26 '12 at 20:47
You can make a WolframAlpha query directly from Mathematica (shortcut `==`) :
````Solve y''(t)+3y'(t)=2t^4
````
Then just click the show steps link.
-
Mathematica does beautiful work when it comes to showing the steps, but this isn't exactly what I was looking for @Artes – franklin Feb 26 '12 at 20:46
@franklin Look at the last line in Show steps : "The general solution is: ..." which is what you were looking for, if you set `c1=c2=0`. As Szabolcs' solution works fine here you can still find my approach simpler and faster, so it can be pretty helpful for you. – Artes Feb 26 '12 at 20:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9114042520523071, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/geometry/191988-find-surface-area-open-topped-box-print.html
|
# Find the surface area of an open-topped box
• November 15th 2011, 01:19 PM
Dragon08
Find the surface area of an open-topped box
(Attached sketch: a thick-walled open cylinder — height 500 cm, outer radius 50 cm, inner radius 49 cm.)
Find the surface area of this open-topped lid.
I know the formula for the surface area of a cylinder is pi(r^2)h, but I don't know how to calculate the inside of the lid.
• November 15th 2011, 01:43 PM
takatok
Re: Find the surface area of an open-topped box
Actually
$\pi r^2 h$ is the volume of a cylinder.
$2 \pi r^2+ 2 \pi r h$ is the Surface Area.
If you think about it you'll see why. You need the area of the circle on top and the one on the bottom, which is $2 \pi r^2$.
Now the circumference of those circles is $2 \pi r$ and we need to add all of the circumferences from top to bottom. So we multiply by h. This gives us the edge all the way around connecting the top to the bottom.Now this gives the total Surface Area.
Thinking about it I forgot to add. If you cut a hole in the cylinder you have to consider adding up the circumferences of the inside you cut out since they are not on the surface.
I think your drawing is like a mason jar lid. A cylinder with a hole all the way through from top to bottom. So what pieces from the total surface area are missing? Figure that out and subtract it from the above formula.
N.B.: Sorry, I forgot to add: since we are cutting a hole through the middle of this thing, we need to find the circumference of the inside ring and multiply by h, then ADD that to the total, since that is now part of the surface and wasn't before.
• November 15th 2011, 01:50 PM
Soroban
Re: Find the surface area of an open-topped box
Hello, Dragon08!
Could you state the original problem?
The given description makes no sense.
Quote:
Find the surface area of this open-topped lid. . What "lid"?
I know the formula for the surface area of a cylinder is pi(r^2)h . . . . no
but I don't know how to calculate the inside of the lid.
Your title refers to an open-topped box.
Then you refer to the volume of a cylinder.
I don't see any "lid" in this problem.
Your sketch is of a thick-walled cylinder.
. . Its height is 500 cm.
. . Its outer radius is 50 cm.
. . Its inner radius is 49 cm.
If you intend to paint the visible surface of the cylinder,
. . we want the inside surface area, the outside surface area,
. . and the area of the two "rings" at the top and bottom.
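Putting numbers to that reading of the sketch (a quick computation; units are cm):

```python
import math

h, R, r = 500.0, 50.0, 49.0              # height, outer radius, inner radius
outside = 2 * math.pi * R * h            # outer wall
inside  = 2 * math.pi * r * h            # inner wall -- the hole adds surface area
rings   = 2 * math.pi * (R**2 - r**2)    # the two annuli at top and bottom
print(outside + inside + rings)          # 99198*pi ≈ 311639.7 cm^2
```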
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9138073921203613, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/73651?sort=oldest
|
## True by accident (and therefore not amenable to proof)
The graph reconstruction conjecture claims that (barring trivial examples) a graph on n vertices is determined (up to isomorphism) by its collection of (n-1)-vertex induced subgraphs (again up to isomorphism).
The way it is phrased ("reconstruction") suggests that a proof of the conjecture would be a procedure, indeed an algorithm, that takes the collection of subgraphs and then ingeniously "builds" the original graph from these.
But based on some experience with a related conjecture (the vertex-switching reconstruction conjecture), I am led to wonder whether this is something that is simply true "by accident". By this I mean that it is something that is just overwhelmingly unlikely to be false ... there would need to be a massive coincidence for two non-isomorphic graphs to have the same "deck" (as the collection of (n-1)-vertex induced subgraphs is usually called). In other words, the only reason for the statement to be true is that it "just happens" to not be false.
Of course, this means that it could never actually be proved.. and therefore it would be a very poor choice of problem to work on!
My question (at last) is whether anyone has either formalized this concept - results that can't be proved or disproved, not because they are formally undecidable, but just because they are "true by accident" - or at least discussed it with more sophistication than I can muster.
EDIT: Apologies for the delay in responding and thanks to everyone who contributed thoughtfully to the rather vague question. I have accepted Gil Kalai's answer because he most accurately guessed my intention in asking the question.
I should probably not have used the words "formally unprovable" mostly because I don't really have a deep understanding of formal logic and while some of the "logical foundations" answers contained interesting ideas, that was not really what I was trying to get at.
What I was really trying to get at is that some assertions / conjectures seem to me to be making a highly non-obvious statement about combinatorial objects, the truth of which depends on some fundamental structural understanding that we currently lack. Other assertions / conjectures seem, again, to me, to just be saying something that we would simply expect to be true "by chance" and that we would really be astonished if it were false.
Here are a few unproved statements all of which I believe to be true: some of them I think should reflect structure and others just seem to be "by chance" (which is which I will answer later, if anyone is still interested in this topic).
(1) Every projective plane has prime power order
(2) Every non-desarguesian projective plane contains a Fano subplane
(3) The graph reconstruction conjecture
(4) Every vertex-transitive cubic graph has a hamilton cycle (except Petersen, Coxeter and two related graphs)
(5) Every 4-regular graph with a hamilton cycle has a second one
Certainly there is a significant chance that I am wrong, and that something that appears accidental will eventually be revealed to be a deep structural theorem when viewed in exactly the right way. However I have to choose what to work on (as do we all) and one of the things I use to decide what NOT to work on is whether I believe the statement says something real or accidental.
Another aspect of Gil's answer that I liked was the idea of considering a "finite version" of each statement: let S(n) be the statement that "all non-desarguesian projective planes of order at most n have a Fano subplane". Then suppose that all the S(n) are true, and that for any particular n, we can find a proof - in the worst case, "simply" enumerate all the projective planes of order n and check each for a Fano subplane. But suppose that the length of the shortest possible proof of S(n) tends to infinity as n tends to infinity - essentially there is NO OTHER proof than checking all the examples. Then we could never make a finite length proof covering all n. This is roughly what I would mean by "true by accident".
More comments welcome and thanks for letting me ramble!
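For what it's worth, the analogous finite check for the reconstruction conjecture itself is easy to run on a toy scale. Here is a sketch (assuming the networkx library) that computes decks and confirms by brute force that no two non-isomorphic graphs on 4 vertices share a deck:

```python
import networkx as nx
from itertools import combinations

def deck(G):
    """The multiset of one-vertex-deleted subgraphs of G."""
    return [G.subgraph(set(G) - {v}).copy() for v in G]

def same_deck(G, H):
    """Match the cards of G against those of H up to isomorphism."""
    cards, used = deck(H), set()
    for C in deck(G):
        i = next((j for j, D in enumerate(cards)
                  if j not in used and nx.is_isomorphic(C, D)), None)
        if i is None:
            return False
        used.add(i)
    return True

# All graphs on 4 labelled vertices (2^6 of them).
graphs = []
for m in range(2 ** 6):
    G = nx.Graph()
    G.add_nodes_from(range(4))
    G.add_edges_from(e for i, e in enumerate(combinations(range(4), 2)) if m >> i & 1)
    graphs.append(G)

for G, H in combinations(graphs, 2):
    if same_deck(G, H):
        assert nx.is_isomorphic(G, H)
print("no counterexample on 4 vertices")
```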
-
2
Can't one ask the same question about any hard conjecture? – Gjergji Zaimi Aug 25 2011 at 11:44
3
It seems unlikely to me that the question "Has anyone formalized this concept?" will produce interesting answers. Probably this is not the case and the answer is simply "No: no one has formalized that concept". On the other hand, asking for a list of examples of similar-sounding conjectures (i.e. things that are true by accident - Goldbach's conjecture being the prototypical example) might be more fruitful. If you decide to change the question in the way I suggest, then please don't forget to make it community-wiki. – André Henriques Aug 25 2011 at 11:46
4
Assuming the conjecture is true, one could computably reconstruct the original graph from the subgraphs by brute force - enumerate all the graphs on $n$ vertices and then eliminate them one at a time if they do not have the correct collection of subgraphs. So the truth of the conjecture would imply the existence of the procedure, but that procedure only works because the conjecture is already known to be true, and the procedure itself would exist even if the conjecture is false, although in that case the procedure doesn't do what it is supposed to do. – Carl Mummert Aug 25 2011 at 11:53
7
Perhaps, when you try to formalize the concept "true by accident", you arrive at the concept "true but formally undecidable"? At least I am having difficulty seeing what the distinction between the two might be. – Mark Grant Aug 25 2011 at 12:18
3
What sort of graphs are we talking about? Directed, with loops, with multiple edges? I am asking out of idle curiosity. – Andrej Bauer Aug 25 2011 at 13:15
## 6 Answers
This is a very interesting (yet rather vague) question. Most answers were in the direction of mathematical logic but I am not sure this is the only (or even the most appropriate) way to think about it. The notion of coincidence is by itself very complicated. (See http://en.wikipedia.org/wiki/Coincidence ). One way to put it on rigorous grounds is using a probabilistic/statistical framework. Indeed, as Timothy mentioned it is sometimes possible to give a probabilistic heuristic in support of some mathematical statement. But it is a notorious statistical problem to try to determine a posteriori if some events represent a coincidence.
I am not sure that (as the OP assumes) if a statement is "true by accident" it implies that it can never be proved. Also I am not sure (as implied by most answers) that "can never be proved" should be interpreted as "does not follow from the axioms". It can also refers to situations where the statement admits a proof, but the proof is also "accidental" as the original statement is, so it is unlikely to be found in the systematic way mathematics is developed.
In a sense (as mentioned in quid's answer), the notion of "true by accident" is related to mathematics psychology. It is more related to the way we perceive mathematical truths than to some objective facts about them.
Regarding the reconstruction conjecture: note that we can ask if the conjecture is true for graphs with at most a million vertices. Here, if true it is certainly provable. So the logic issues disappear but the main issue of the question remains. (We can replace the logic distinctions by computational complexity distinctions. But still I am not sure this will capture the essence of the question.) There is a weaker form of the conjecture called the edge reconstruction conjecture (same problem but you delete edges rather than vertices) where much is known. There is a very conceptual proof that every graph with n vertices and more than n log n edges is edge-reconstructible. So this gives some support to the feeling that maybe vertex reconstruction can also be dealt with.
Finally I am not aware of a heuristic argument that "there would need to be a massive coincidence for two non-isomorphic graphs to have the same 'deck'" as the OP suggested. (Coming up with a convincing such heuristic would be interesting.) It is known that various graph invariants must have the same value on two such graphs.
-
I have accepted this as best matching the spirit of my question; additional comments are in the "Edit" above. – gordon-royle Sep 1 2011 at 2:05
3
I'm reminded of Problem 3c in Chapter 3 of Richard Stanley's *Enumerative Combinatorics*: "Let $f(n)$ be the number of non-isomorphic $n$-element posets. Let $P$ denote the statement that infinitely many values of $f(n)$ are palindromes when written in base ten. Show that $P$ cannot be proved or disproved in Zermelo-Fraenkel set theory." The bit about formal unprovability is, in my opinion, a red herring. I think Stanley is just saying that he thinks that if $P$ is false (which it surely is), then it is false by accident. – Timothy Chow Nov 4 2011 at 14:19
This is a good example. We do not expect "hidden structures" in the sequence f(n), or, more importantly, in the sequence of primes, or even in the writings of Shakespeare, but we don't expect that we will be able to refute their existence either. – Gil Kalai Nov 4 2011 at 15:17
Yes, this is what constructivism is all about! In intuitionistic logic, the law of excluded middle doesn't generally hold, so it is not always possible to derive $A$ (i.e. $A$ is true) from $\lnot\lnot A$ (i.e. $A$ is not false).
The particular case you're considering is a form of Markov's Principle, which can be worded as if it is not the case that there is no example, then an example does exist. Symbolically, the rule is $$\lnot\forall x\lnot A(x) \to \exists x A(x),$$ where $A(x)$ is required to be decidable: $\forall x(A(x) \lor \lnot A(x))$. In constructive mathematics, existence is very strong — it is not acceptable to merely show that there must be an example, one needs to actually produce an example in some way or another. Markov's principle says that showing that there must be an example is enough to prove existence. Thus this principle is not generally accepted by most schools of constructivism, except in limited instances.
-
2
What you describe is not Markov’s rule, but Markov’s principle. Markov’s rule is the corresponding derivation rule which states that when $\neg\forall x\neg A(x)$ is provable, then $\exists x\,A(x)$ is provable (under appropriate conditions on $A$). All usual constructive theories (for example, Heyting arithmetic) are closed under this rule, even though they typically do not prove Markov’s principle. Therefore, even in a constructive setting, it would suffice to prove that there is no counterexample in order to establish the conjecture. – Emil Jeřábek Aug 25 2011 at 12:22
You're absolutely right, I just fixed it. – François G. Dorais♦ Aug 25 2011 at 12:39
Apart from your specific example, the idea of truth-by-accident has been studied in the context of formal first-order languages, which includes the language of graph theory, and in his dissertation, Kurt Gödel proved that the statements that happen to be true in all models of a first order theory $T$ are exactly the statements that are provable in $T$. This is his famous completeness theorem.
Thus, any statement expressible in the first-order theory of groups that happens to be true in all groups will be provable from the group axioms, and any statement expressible in the first-order statement of graphs that happens to be true in all graphs will be provable from the axioms of graph theory.
Your statement, however, does not seem to be expressible directly in the language of graph theory, since it also uses the concept of cardinality and of subgraphs, so the completeness theorem does not apply directly to it for the language of graphs. Rather, it is a statement of number theory, and the relevant models for this case would include all the standard and nonstandard models of arithmetic.
So the relevant conclusion would be that if the statement were not provable in the first-order Peano's axioms PA, then there is a nonstandard model of arithmetic having a bad (pseudo)finite graph.
But the particular form of the statement means that it has complexity $\Pi^0_1$, which means it is a universal statement quantifying over the natural numbers, and if any such statement is independent of PA, then it is true, because if it is true in any model, then because the standard model is an initial segment of all the others, it follows that it must be true in the standard model and hence true. This level of complexity is the same complexity as many of the interesting independent statements, including consistency statements.
Incidentally, this seems to be my 500th answer on mathoverflow. It's been a lot of fun, and I've surely learned a lot of mathematics!
-
I think you mean 'true' rather than 'provable in ZF' since Con(ZF) is a (hopefully true) $\Pi^0_1$ statement that is not provable in ZF (again, hopefully). – François G. Dorais♦ Aug 25 2011 at 12:57
Yes, I'll edit. – Joel David Hamkins Aug 25 2011 at 12:58
1
@Joel: I confess that I don't understand how this answers the OP's question. Are you saying that there aren't any statements that are true by accident? – Timothy Chow Aug 25 2011 at 20:08
5
Congrats on your 500th! – François G. Dorais♦ Aug 25 2011 at 20:13
1
Timothy, yes, I am arguing that for statements in a first-order language, the (profound) fact that true-in-all-models is the same as provable means that there is no accidental truth. A statement in the language of graph theory is true in all graphs if and only if it is provable from the axioms of a graph. So we don't need another distinction besides provable, negated-provable, and independent. In the case of statements of arithmetic, such a view is intimately connected with a considerations of nonstandard models of arithmetic, but I maintain that it still answers the question. – Joel David Hamkins Aug 25 2011 at 21:48
The statement in question can be formalized in the language of Peano Arithmetic, and I will treat it as a statement in that language. A similar analysis works for any effective theory stronger than PA, such as ZFC.
Consider the set of all sentences in the language of PA; define an order relation $R$ so that $\phi \mathbin{R} \psi$ if $\phi \to \psi$ is provable in PA. This gives a pre-order; if we perform the usual equivalence class construction then the resulting algebra is a partial order called a Lindenbaum algebra (*).
Because the graph reconstruction conjecture corresponds to a sentence $G$ in PA, it corresponds to a particular node in this algebra.
• If $G$ is provable in PA, then $G$ corresponds to the bottom element of the algebra
• If $G$ is false, it corresponds to the top node of the algebra, but in this case we're not very worried about its provability
• Otherwise, $G$ corresponds to some intermediate node of the algebra. In that case, we cannot prove $G$ from PA, but we can prove $G$ by assuming PA plus any axiom either in the equivalence class of sentences that forms $G$'s node or in the equivalence class of any node higher than $G$'s node.
In every case, unless $G$ is false, $G$ is amenable to proof, but the proof will have to assume axioms that are strong enough to prove the desired conclusion. There is no sentence which could "never actually be proved", although there are plenty of sentences that cannot be proved in PA, and false sentences can only be proved from false axioms. The question is simply which axioms are required to prove a particular sentence.
*: Traditionally, a "Lindenbaum algebra" or "Lindenbaum–Tarski algebra" should be defined with the dual ordering of the ordering I use. But the ordering in which $0=1$ corresponds to the top of the algebra matches better with the diagrams we create to illustrate relationships between different axiom systems, such as 1. People also use the reverse ordering in the context of set theory, where large cardinal axioms are sorted by consistency strength, e.g. 2.
-
1
(I think you mean '≥' and not '>'.) – François G. Dorais♦ Aug 25 2011 at 12:41
1
For the usual definition of the Lindenbaum algebra, you would mean neither $>$ nor $\geq$ but $\leq$. But your use of "bottom", "top", and "higher" indicates that you really intend $\geq$ and thus the dual of the usual definition. – Andreas Blass Aug 25 2011 at 14:14
I do mean the algebra which puts $0=1$ at the top. I was only using '$>$' as an arbitrary relation symbol. I changed it to an $R$ to avoid entirely any confusion about whether it should have a horizontal line. – Carl Mummert Aug 25 2011 at 14:23
1
(The reason for making falsity the bottom in the standard definition is, of course, that once you really treat it as an algebra rather than just order, it is quite inconvenient to have $[\phi]\land[\psi]=[\phi\lor\psi]$ and $[\phi]\lor[\psi]=[\phi\land\psi]$, where $[\phi]$ denotes the equivalence class of a formula $\phi$.) – Emil Jeřábek Aug 25 2011 at 14:40
@Emil: Yes; it's just the order that is of interest here. The algebra itself is often uninteresting as an algebra, as you explained in mathoverflow.net/questions/65851/… . The point of the diagrams I linked at the end of my answer is to show interesting suborders obtained by picking a few recognizable nodes out of the order. – Carl Mummert Aug 25 2011 at 14:47
This is a take on this question of a form different from the other answers, along the lines of the comments of Gjergji Zaimi and André Henriques.
No doubt in present day mathematics there are some conjectures or perhaps even results that have an 'accidental' feel, while others do not have this feel.
However, it seems to me that this is, to a certain and I believe considerable extent, a subjective impression. Perhaps "subjective" is not quite the right term; I should rather say that this impression is informed by the current state of mathematics.
I do not have really good and precise historical examples at hand, but I think it is true that, for example, certain results on diophantine equations (finiteness results for solutions, say) had a much more accidental feel a long time ago. But now, with the development of Arithmetic Geometry (e.g. the Mordell conjecture), some of them are much better understood conceptually and now feel natural, or at least no longer accidental.
Perhaps some of the results and conjectures that today look accidental will at some point in the future be natural consequences of theories yet to be developed. Indeed, I believe it to be a quite common pattern of progress in mathematics that developments begin with concrete and isolated results and questions, and then theories follow that explain the 'accidents' and 'miracles'.
Goldbach's conjecture got mentioned and gets mentioned frequently as accidental. Let us look at something not too far away.
The prime numbers contain infinitely many 3-term arithmetic progressions (van der Corput, 1930s). Accident, yes or no? Perhaps one could have thought so, when it was proved, but today the situation is different and it is widely conjectured (very recent results of Sanders achieve this for sets of just a slightly larger density) that any set of positive integers with the density of the primes has this property.
Who knows where the Theory of Set Addition (more generally Additive Combinatorics/Number Theory) will stand in some decades or centuries? Perhaps then the Goldbach conjecture will be a corollary of some natural theory.
In brief, I believe 'true by accident' (in the informal way I understand it, which seems not that far from the questioner's intent) is to a considerable extent a time-dependent notion; it thus seems difficult for me to imagine that its spirit can be captured in a formal (time-independent) theory.
-
Maybe I am misunderstanding the OP's take on these things, but the way I read it, the fact that any set of positive integers with the density of the primes should have the same property is precisely the sort of "accident" the OP was talking about: given how ubiquitous the primes are, a suitable random model would tell you that the statement has a very low chance of failing. In other words, it is not true for a special reason, it is true because it has no reason to be false. – Alex Bartel Aug 25 2011 at 18:14
1
@Alex: However, the OP also says that accidental truths cannot be proved. This puts an additional spin on the topic. I think quid is right that one's feeling of what sorts of things "cannot be proved" are informed mainly by one's sense of what current mathematical technology is or is not capable of achieving. – Timothy Chow Aug 25 2011 at 18:22
1
@Timothy Yes, I wasn't trying to address that part at all, since the wonderful answers already given address the provability issue very nicely. I was just joining quid in trying to understand what exactly the OP has in mind when he talks about accidents. What I am trying to say is that even though a theorem is provable, it can feel like it's true for no special reason other than the absence of a reason to be false. And the example quid gave seems to me to exemplify exactly this situation. – Alex Bartel Aug 25 2011 at 18:46
1
"it is true because it has no reason to be false." Zagier had a similar comment, say about why $\pi$ is normal; if it isn't, there must be some deep reason, but if it is normal, the reason is just that it's random. For the other part, you can model many things by random sense (like $\mu(n)$ as random $0,\pm 1$ coefficients of a Dirichlet series, for RH, or Cramer's model of primes, for gaps), and the question should be whether the general randomness persists in the problem at hand. Another similar is Goldbach (partitio numerorum for general) as said. – Junkie Aug 26 2011 at 1:26
1
Junkie, thank you for the examples. In particular, the quote on $\pi$, which I in principle knew but forgot. Let us look a bit more than a century back on the transcendence of certain numbers (not pi itself, but in Hilbert's 7th problem). Perhaps somebody could have said...since there would need to be a reason for them to be algebraic, and there is not, they will be transcendental, but no proof in sight (if I remember well, Hilbert was quite pessimistic in his prediction of when the 7th problem would be solved.) Yet, 'suddenly' came Gelfond--Schneider and a 'reason' for transcendence of these numbers. – quid Aug 26 2011 at 14:28
Probably the closest thing to what you're looking for is Chaitin's proof of the incompleteness theorem, which shows that for any formal system $S$, there is a constant $L$ such that the statement "$K(s) > L$" is unprovable in $S$ for all strings $s$ (here $K$ denotes Kolmogorov complexity). The vast majority of such statements are true "at random" because a random string will have high Kolmogorov complexity.
However, you didn't ask whether there exists a family of statements that can be regarded as being "true by accident"; you asked whether a specific statement (the graph reconstruction conjecture) can be regarded as being "true by accident." So Chaitin's incompleteness theorem doesn't quite address your question as stated.
It is certainly possible, in some situations, to construct a heuristic probabilistic model that predicts that certain things ought to be true just for "random reasons." Perhaps the most famous example is Cramér's random model for the primes, which can be used to give heuristic "proofs" of various number-theoretic conjectures; e.g., one can use the model to predict that there will be only finitely many primes with such-and-such a property, because the probability that a prime $p$ has the property decreases rapidly to zero as $p\to\infty$. If a conjecture is predicted by such a model, and also "happens" to be unprovable in your favorite axiomatic system for mathematics, then it might be tempting to say that it is "true by accident."
This kind of thing could happen, but we don't know of any good examples involving "naturally occurring" mathematical conjectures. So it's all pure speculation at this point. Certainly we have no objective evidence that any particular mathematical conjecture isn't worth working on for this reason. I think it would be interesting, though, if you could develop a heuristic probabilistic model for graph theory, in the spirit of Cramér's model, that could "predict" various well-known graph-theoretic conjectures.
-
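As an editorial aside (not part of the original thread): the kind of heuristic probabilistic model mentioned above can be made concrete in a few lines of code. The sketch below, with names of my own choosing, follows the spirit of Cramér's model: each integer $n \ge 3$ is declared "prime" independently with probability $1/\ln n$, and we count Goldbach-style representations of a few even numbers. This is only an illustration of the heuristic, not a claim about the actual primes.

```python
import math
import random

def cramer_set(limit, seed=0):
    """Cramer-style random model: each n >= 3 is 'prime' with probability 1/ln(n)."""
    rng = random.Random(seed)
    return {n for n in range(3, limit) if rng.random() < 1.0 / math.log(n)}

def representation_count(even, pseudo_primes):
    """Ordered ways to write `even` as a sum of two elements of the random set."""
    return sum(1 for p in pseudo_primes if p < even and (even - p) in pseudo_primes)

pseudo = cramer_set(200_000)
for e in (1_000, 10_000, 100_000):
    print(e, representation_count(e, pseudo))
```

The counts grow roughly like $e/(\ln e)^2$, which is exactly the kind of "no reason to fail" prediction such models make for Goldbach-type statements.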
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 56, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9524539709091187, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/25535/distributions-more-complicated-than-the-dirac-and-derivatives/25536
|
## Distributions more complicated than the Dirac δ and derivatives
The responses to another question clarify that the best-known examples of distributions that are not measures are the derivatives of the delta and such. What I want to know is: is that the only way a distribution can fail to be a measure?
Are there distributions that are not measures, nor derivatives of measures, nor derivatives of those, and so on?
-
## 4 Answers
There is a structure theorem for distributions that shows that they are all (possibly infinite, but locally finite) sums of derivatives.
Here's a proper statement. Let $T$ be a distribution on $\mathbb{R}^n$. Then there exists continuous functions $f_{\alpha}$ such that $T = \sum_{\alpha} (\frac{\partial}{\partial x})^{\alpha} f_{\alpha}$, where for each bounded open set $\Omega$, all but a finite number of the distributions $(\frac{\partial}{\partial x})^{\alpha} f_{\alpha}$ vanish identically on $\Omega$. Here $\alpha$ is a multiindex.
I recommend perusing the beautiful little book "A Guide to Distribution Theory and Fourier Transforms" by Strichartz. The above theorem is discussed in section 6.2.
-
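A small numerical illustration of the structure theorem (my addition, not part of the original answer): the Dirac delta is the second distributional derivative of the continuous ramp function $r(x)=\max(x,0)$, since pairing $r$ with $\varphi''$ and integrating by parts twice gives $\varphi(0)$. The snippet below checks this with a Gaussian test function; it is a sketch only, and the rapid decay of the Gaussian plays the role of compact support.

```python
import numpy as np
from scipy.integrate import quad

phi  = lambda x: np.exp(-x**2)                        # smooth, rapidly decaying test function
phi2 = lambda x: (4.0 * x**2 - 2.0) * np.exp(-x**2)   # its second derivative
ramp = lambda x: max(x, 0.0)                          # continuous, not differentiable at 0

# <ramp'', phi> is defined as <ramp, phi''>; it should equal phi(0) = 1,
# i.e. the second distributional derivative of the ramp acts like the Dirac delta.
value, _ = quad(lambda x: ramp(x) * phi2(x), -30.0, 30.0)
print(value)   # ~1.0
```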
One can construct various "exotic" classes of distributions by playing with spaces of test functions. For example, if we choose a suitable space of real analytic functions, the dual space will include "distributions" which are not finite sums of derivatives of measures.
Let $$T(x)=\sum\limits_{k=0}^{\infty}c_k \delta^{(k)}(x),\quad c_k\in \mathbb C, \quad \limsup\limits_{k\to\infty}\left(k!|c_k|\right)^{1/k}=0.\qquad\qquad(*)$$ $T(.)$ is not a distribution in the classical sense. Its support is localized at $x=0$ but $T(.)$ cannot be written as a finite linear combination of $\delta^{(k)}(x)$. Yet $(*)$ defines a continuous linear functional on a subspace of the space of entire analytic functions on $\mathbb C$. This is a simple example of a hyperfunction (a.k.a. analytic functional).
The analogue of the structure theorem for hyperfunctions basically says that every hyperfunction can be written as a convolution $T*\mu$, where $T$ is of the form $(*)$ and $\mu$ is a Borel measure on $\mathbb R$.
-
Very nice!! Thanks for telling about hyperfunctions. – Akela May 23 2010 at 18:06
You can take distributional derivatives of the devil's staircase. See Wikipedia for an explanation.
Edit: Sorry I was so curt. The answer to your revised question is no. The Wikipedia link above gives references to the fact that all distributions on the real line (and finite dimensional Euclidean spaces) are distributional derivatives of continuous functions, and continuous functions can be lifted canonically to measures.
As Rekalo mentioned, there is a notion of hyperfunction due to Sato that arises from studying boundary values of holomorphic functions, and hyperfunctions form a strictly larger space than distributions (in particular, essential singularities are allowed). I've heard of other spaces of generalized functions such as modules over sheaves of pseudodifferential operators, but I don't know how they relate to each other, or how one proves theorems with them.
-
3
I apologize; I edited the question a bit after seeing your response. I hope the new formulation is more relevant. The only possible recompense in my ability for your trouble is a +1, which is done. – Akela May 21 2010 at 22:55
Maybe a simple example to demonstrate that, although a distribution is of finite order on each compact set, it can fail to be of finite order globally. Consider simply the following distribution on the real line $$\langle T,\phi\rangle=\sum_{k\in \mathbb N}\phi^{(k)}(k).$$ Bazin.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9320809245109558, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/108509/calculate-perimeter-from-parametric-form-with-an-ellipse/108518
|
# Calculate perimeter from parametric form with an ellipse?
Suppose I have a thing such as an ellipse:
$$\left(\frac{x}{a}\right)^{2}+\left(\frac{y}{b}\right)^{2}=1$$
now we can define it so that $\frac{x}{a}=\cos(\theta)$ and $\frac{y}{b}=\sin(\theta)$. I know the perimeter formula
$$\mu(S)=\int\sqrt{1+\left(f'(x)\right)^{2}} dx.$$
It is easy to parametrize the ellipse, but how can I parametrize the perimeter formula so that I can easily calculate the perimeter?
I find that I am doing things the hard way like this:
$$y=\pm b \sqrt{1-\left(\frac{x}{a}\right)^{2}}$$
now if I plug y into the perimeter formula, it gets messy. Can I do it more elegantly with the parametric form somehow?
-
– J. M. Feb 12 '12 at 15:11
– Américo Tavares Feb 12 '12 at 15:14
1
If there would be a simple formula for the perimeter of an ellipse you would have met it in high school $\ldots$ – Christian Blatter Feb 12 '12 at 16:23
## 1 Answer
I would write the following
$$ds=\sqrt{\left(\frac{dx}{d\theta}\right)^2+\left(\frac{dy}{d\theta}\right)^2}d\theta$$
and so
$$P=\int_0^{2\pi}\sqrt{a^2\sin^2\theta+b^2\cos^2\theta}d\theta$$
and this is just an elliptic integral. The final result takes the form ($b>a$)
$$P=4bE(e)$$
being
$$E(e)=\int_0^{\frac{\pi}{2}}\sqrt{1-e^2\sin^2\theta}d\theta$$
and $e^2=1-\frac{a^2}{b^2}$ the eccentricity.
-
...I have never really understood what the $S$ here means. In physics it sometimes denotes some surface $S$, but what is it here? Do you change the coordinates somehow and use the Jacobian matrix or something like that? Sorry, thinking aloud (maybe totally skewed wrong ideas)... – hhh Feb 12 '12 at 15:23
1
@hhh: it's the differential corresponding to arclength. – J. M. Feb 12 '12 at 15:27
@J.M.: yes, but what is $S$? It is a partial derivative apparently; there is something I cannot grasp in the very beginning of this answer -- chain rule ...have to calculate... thinking still aloud... – hhh Feb 12 '12 at 15:32
1
If we have $x=a\cos\theta$ and $y=b\sin\theta$ as in OP and $a\geq b$ (major axis on $x$ axis, length $2a$), then $e^2=1-\left(\frac{b}{a}\right)^2$ and $P=4aE(e)$; cf. en.wikipedia.org/wiki/Ellipse#Eccentricity & en.wikipedia.org/wiki/Ellipse#Circumference. So in general, $e^2=\frac{|a^2-b^2|}{\max(a,b)^2}$ and $P=4\max(a,b)E(e)$. @hhh: $s$ is arc length, i.e. the length traversed along the portion of curve parametrized in the integral. – bgins Feb 12 '12 at 17:34
1
@hhh: Indeed, the initial integral goes from $0$ to $2\pi$ but the elliptic integral is given between $0$ and $\frac{\pi}{2}$. The factor 4 comes out as you need four times the elliptic integral. – Jon Feb 12 '12 at 20:25
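As a quick numerical check of the answer above (an editorial addition, following the answer's convention $b>a$): SciPy's `ellipe(m)` computes $E(m)=\int_0^{\pi/2}\sqrt{1-m\sin^2\theta}\,d\theta$ with parameter $m=e^2$, so the perimeter is $4b\,E(1-a^2/b^2)$, and this agrees with direct numerical integration of the arc-length integral.

```python
import numpy as np
from scipy.special import ellipe
from scipy.integrate import quad

a, b = 2.0, 5.0                           # semi-axes, with b > a as in the answer
m = 1.0 - (a / b) ** 2                    # SciPy's parameter m equals e^2

P_closed = 4.0 * b * ellipe(m)            # P = 4 b E(e)
P_direct, _ = quad(lambda t: np.sqrt(a**2 * np.sin(t)**2 + b**2 * np.cos(t)**2),
                   0.0, 2.0 * np.pi)

print(P_closed, P_direct)                 # both ~23.01
print(4.0 * 3.0 * ellipe(0.0), 2.0 * np.pi * 3.0)   # circle sanity check: both 6*pi
```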
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.900091826915741, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/pre-calculus/154678-limits-positive-negative-infinity-question.html
|
# Thread:
1. ## limits of positive/negative infinity question
I am having some trouble evaluating two limit questions. They involve the same expression, but the question asks for the limit as $x$ goes to positive infinity and as $x$ goes to negative infinity.
The equation is:
$\dfrac{\sqrt{x^2 + 1}}{x-1}$
How do I go about solving this question?
Sorry that I didn't write out the limits properly; I have lost my syntax sheet for LaTeX and can't find a decent replacement yet.
2. Notice that $\dfrac{\sqrt{x^2+1}}{x-1} = \dfrac{\sqrt{1+\frac{1}{x^2}}}{1-\frac{1}{x}}$
3. so then when you sub in a very large number the 1/x goes to 0 as it is so close to 0?
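An editorial note, not part of the original thread: the rewriting in post #2 is valid for $x>0$; for $x\to-\infty$ one has $\sqrt{x^2}=|x|=-x$, which flips the sign, so the two one-sided limits are $+1$ and $-1$. A quick symbolic check with SymPy:

```python
from sympy import symbols, sqrt, limit, oo

x = symbols('x', real=True)
f = sqrt(x**2 + 1) / (x - 1)

print(limit(f, x, oo))    # 1
print(limit(f, x, -oo))   # -1
```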
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9440761208534241, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/trigonometry/89076-solved-getting-wrong-sign-cosx-sinx-rsin-x-alpha.html
|
# Thread:
1. ## [SOLVED] Getting the wrong sign for cosx+sinx=rsin(x+alpha)
The question asks me to solve $7\cos x +6\sin x = 2$
So I put it into the form $r\sin (x + \alpha)$:
$7\cos x +6\sin x \equiv r\sin (x + \alpha) \equiv r\sin x\cos\alpha + r\cos x\sin\alpha$
Then I equate the coefficients:
of cos x: $7=r\sin\alpha$ [1]
of sinx: $6=r\cos\alpha$ [2]
Divide [1] by [2] to get:
$\frac{7}{6}=tan\alpha \Rightarrow \alpha = 49.4^\circ$
Square and add [1] and [2] to get:
$7^2+6^2=r^2(\sin^2\alpha+\cos^2\alpha) \Rightarrow r=\sqrt{85}$
$\therefore 7\cos x+6\sin x \equiv \sqrt{85}\sin(x+49.4^\circ)$
Except that last statement doesn't seem to be true, according to
the answers in my book and when I have a look at the two curves in a
graphing application. I tried $\sqrt{85}\sin(x-49.4^\circ)$ in the graphing app and
that seems to be correct...
but why??
What have I done wrong? Why am I getting a positive value for
alpha when I should be getting the same number but negative?
The only way I can see to get -49.4 would be by introducing a minus
sign when dividing [1] by [2]. But I don't see how to justify that.
Have I made a simple error? I can't spot it ...
Any help would be much appreciated, thanks in advance.
2. Your result, $\sqrt{85}\sin(x+49.4^\circ)$, is correct. An easy way to check that is to take $x=0$: $\sqrt{85}\sin(49.4^\circ) = (9.219)(0.759) = 7$, while obviously $\sqrt{85}\sin(-49.4^\circ) = -7$.
3. Hello, tleave2000!
I see nothing wrong with your work.
Why are you convinced that alpha must be negative?
I was taught a different approach . . .
Solve: . $7\cos x +6\sin x \:=\: 2$
Divide by: $\sqrt{7^2+6^2} \:=\:\sqrt{85}\!:\quad \frac{7}{\sqrt{85}}\cos x + \frac{6}{\sqrt{85}}\sin x \:=\:\frac{2}{\sqrt{85}}$
Let $\alpha$ be an angle such that: . $\sin\alpha = \tfrac{7}{\sqrt{85}},\;\cos\alpha = \tfrac{6}{\sqrt{85}}$
Then we have: . $\sin\alpha\cos x + \cos\alpha\sin x \:=\:\frac{2}{\sqrt{85}}$
. . which is equivalent to: . $\sin(x + \alpha) \:=\:\frac{2}{\sqrt{85}}$
And we have: . $x + \alpha \:=\:\arcsin\left(\tfrac{2}{\sqrt{85}}\right) \;\approx\;12.53^o$
. . Then: . $x \;=\;12.53^o - \alpha$
Since $\sin\alpha = \frac{7}{\sqrt{85}}$, then: . $\alpha \:=\:\arcsin\left(\tfrac{7}{\sqrt{85}}\right) \:\approx\:49.40^o$
. . Therefore: . $x \;=\;12.53^o - 49.40^o \;=\;-36.87^o$
4. ## oops
Ah thanks. I was convinced it should be negative because when you give degree values to a graph plotting application that's expecting radian values for its trig functions, then when you compare the graphs of $7\cos x+6\sin x$ and $\sqrt{85}\sin(x+49.4^\circ)$ they aren't the same... but oddly if you stick a minus sign before the offset, the graphs match up. (Is that just a coincidence?)
So that's what I was doing wrong, I needed to convert the input by multiplying it by pi/180. Thank you both again, and it was interesting to see an alternative way to get to rsin(x+alpha).
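A quick numerical confirmation of the thread's conclusion (an editorial addition): with the offset angle kept in radians the identity holds exactly, and converting back to degrees reproduces the $49.4^\circ$ found above.

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 1001)
alpha = np.arctan2(7.0, 6.0)                 # the offset angle, in radians

lhs = 7.0 * np.cos(x) + 6.0 * np.sin(x)
rhs = np.sqrt(85.0) * np.sin(x + alpha)

print(np.max(np.abs(lhs - rhs)))             # ~1e-14: the identity holds
print(np.degrees(alpha))                     # ~49.4, the angle found in the thread
```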
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 25, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.947922945022583, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/318308/equintinuity-of-bounded-linear-functions-equivalent-to-uniform-boundedness
|
# Equicontinuity of bounded linear functions equivalent to uniform boundedness
The claim is the following:
Every family of bounded linear functions is equicontinuous if and only if it is uniformly bounded.
I only have trouble figuring out the forward direction.
-
If $\|f\| \le M$, then $|f(x) - f(y)| \le M \|x-y\|$. Can you see how to do it now? And for the forward direction, I don't think you can use the uniform boundedness principle, because the conclusion of that theorem is that such a family is equicontinuous. – Christopher A. Wong Mar 2 at 0:40
Yes! How did I miss that! What would be a way to go about it? – user44069 Mar 2 at 0:42
Try just using the definition of equicontinuity. – Christopher A. Wong Mar 2 at 0:47
It doesn't seem to be very helpful. – user44069 Mar 2 at 1:55
By linearity, a family of operators is equicontinuous if they are equicontinuous at the origin. Perhaps this can help you. – Christopher A. Wong Mar 2 at 2:12
## 1 Answer
1. Suppose the family $F$ is bounded. Then there is $M > 0$ such that $\|f\| \le M$ for all $f \in F$, and hence by linearity $|f(x) - f(y)| = |f(x-y)| \le M \|x - y\|$. Thus we can choose $\delta = \epsilon /M$.
2. Suppose $F$ is equicontinuous. Then $F$ must be equicontinuous at the origin. Then, for every ball $B_{\epsilon}$, there exists $\delta$ such that $f(B_{\delta}) \subset B_{\epsilon}$ for every $f \in F$. In particular, if $S$ is the unit sphere, then $f(S) \subset B_{\epsilon/\delta}$. Then, by linearity, for any $x$, we have $f(x) \in \|x\| B_{\epsilon/\delta}$, which is equivalent to $|f(x)| \le (\epsilon/\delta) \|x\|$.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9600468277931213, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/169017/prove-that-there-does-not-exist-any-positive-integers-pair-m-n-satisfying/169120
|
# Prove that there does not exist any positive integers pair $(m, n)$ satisfying: $n(n + 1)(n + 2)(n + 3) = m{(m + 1)^2}{(m + 2)^3}{(m + 3)^4}$
How to prove that, there does not exist any positive integers pair $(m,n)$ satisfying:
$n(n + 1)(n + 2)(n + 3) = m{(m + 1)^2}{(m + 2)^3}{(m + 3)^4}$.
-
it is clear that, if this equation has a solution, then $n>m$, so $n=mt+q$; but substituting this value makes the equation much more complex – dato Jul 10 '12 at 12:21
– dato Jul 10 '12 at 12:29
6
Where does the problem come from? What do you know about it? – Gerry Myerson Jul 10 '12 at 13:13
Someone gave it to me; I was curious whether there is some specific method to solve a problem like this. – Frank Jul 10 '12 at 15:18
2
Given the number of +1 to @GerryMyerson's comment I guess that I am not the only person to realise that there is no totally standard/easy way to do it. Depending on where this question comes from, it might be worth continuing to search for an elementary solution or to use heavier machinery. – Simon Markett Jul 10 '12 at 15:34
## 1 Answer
This is an edited version of a partial answer that I posted some time ago and subsequently deleted (not sure if resurrecting an answer is the correct thing to do after it has been up-voted and then deleted; perhaps someone will advise). If anyone can suggest where any of this can be improved, or point out any mistakes, I would be grateful.
Consider the equation $$n(n+1)(n+2)(n+3) = m(m+1)^2(m+2)^3(m+3)^4$$ To avoid some trivialities later on, it is easy to check that there are no solutions with $m=1$ or $m=2$.
Using the fact that $n(n+1)(n+2)(n+3)$ is almost a square, we have $$(n^2+3n+1)^2-1 = m(m+2)\times[(m+1)(m+2)(m+3)^2]^2$$ Putting $N = n^2+3n+1$ and $M = (m+1)(m+2)(m+3)^2$, this becomes $$N^2-1 = m(m+2)M^2$$ so that $$N^2-1 = [(m+1)^2-1]M^2.$$ Our approach now is to write this as $$N^2 - [(m+1)^2-1]M^2 = 1,$$ which is Pell's equation, with $d = (m+1)^2-1$. In this case there is a particularly nice solution for the Pell equation, as the continued fraction is very simple in this instance. For convenience we change notation slightly and use $k = m+1$, so that we are looking at solutions of $$x^2 - (k^2-1)y^2 = 1,$$ and bear in mind that for any solution $(x,y)$ we also require $$y = (m+1)(m+2)(m+3)^2 = k(k+1)(k+2)^2.$$ So we investigate the properties of solutions of the Pell equation above by looking at the standard continued fraction method. We have $$\sqrt{k^2-1} = (k-1)+\cfrac{1}{1+\cfrac{1}{(2k-2)+\cfrac{1}{1+\cfrac{1}{(2k-2) + \ddots}}}}$$ which gives the first few solutions $(x_n,y_n)$ as $(1,0), (k,1), (2k^2-1,2k), \dots$.
Looking at the solutions for $y$, we see that they are generated by the recurrence relation $$y_{n+2} = 2ky_{n+1} - y_n, \mbox{ with } y_0 = 0, y_1 = 1.$$ Recalling that we also need $y = k(k+1)(k+2)^2$, it is enough to prove that this last expression cannot be one of the $y_n$ from the recurrence relation as follows. Clearly, the values $y_n$ are strictly increasing, and we claim that $$y_4 < k(k+1)(k+2)^2 < y_5$$ A bit of algebra gives $$y_4 = 8k^3-4k$$ so that \begin{equation*}k(k+1)(k+2)^2-y_4 = k^4-3k^3+8k^2+8k = k^3(k-3)+8k^2+8k\end{equation*} which is clearly positive for $k\geq 3$ and is easily checked to be positive for $k=1,2$.
For the right-hand inequality above, we have $$y_5 = 16k^4-12k^2+1$$ and then $$y_5 - k(k+1)(k+2)^2 = 16k^4-12k^2+1 - (k^4+5k^3+8k^2+4k)$$ $$= 15k^4-5k^3-20k^2-4k+1$$ and by examining the graph (because I can't see an elegant way to do this bit) we see that this is positive for $k\geq 2$; the remaining case $k=1$ corresponds to $m=0$, which is not a positive integer, and the small cases $m=1,2$ were already excluded at the start.
This shows that $k(k+1)(k+2)^2$ cannot be one of the $y_n$, so that no solution of the original equation is possible.
I am sure that there ought to be a simpler solution, but I have been unable to find one.
-
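Not part of the original answer, but a quick numerical sanity check of its two key claims (names and ranges below are my own): the recurrence really does generate $y$-values of Pell solutions, and $k(k+1)(k+2)^2$ falls strictly between $y_4$ and $y_5$ for $k\ge 2$, hence can never equal any $y_n$.

```python
import math

def pell_y(k, terms=8):
    """y-values generated by y_{n+2} = 2k*y_{n+1} - y_n with y_0 = 0, y_1 = 1."""
    ys = [0, 1]
    while len(ys) < terms:
        ys.append(2 * k * ys[-1] - ys[-2])
    return ys

def solves_pell(k, y):
    """True if 1 + (k^2 - 1)*y^2 is a perfect square, i.e. some x gives x^2 - (k^2-1)y^2 = 1."""
    n = 1 + (k * k - 1) * y * y
    return math.isqrt(n) ** 2 == n

for k in range(2, 200):                      # k = m + 1, so k >= 2 for positive m
    ys = pell_y(k)
    target = k * (k + 1) * (k + 2) ** 2
    assert all(solves_pell(k, y) for y in ys)
    assert ys[4] < target < ys[5]            # target sits strictly between consecutive y_n

print("checked k = 2 .. 199")
```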
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 14, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.960789144039154, "perplexity_flag": "head"}
|
http://mathoverflow.net/revisions/64136/list
|
## Return to Answer
2 corrected q_0 and q' switch; added 5 characters in body
I don't think there is a magic theorem here which works all the time. What you want depends a lot on the specific $\phi$, ${\bf q}'$, ${\bf q}_0$ you are working with. What you said sounds a lot like a toy model for the renormalization group, which is a way to study central limit type theorems. For instance the classical central limit theorem can be interpreted as a convergence in the basin of attraction of the Gaussian $\mathcal{N}(0,1)$ law with respect to a map $\phi$ on probability distributions of centered variables with variance 1 given by $$Z\longrightarrow \frac{X+Y}{\sqrt{2}}\ .$$ Namely given a probability distribution for a random variable $Z$, one makes two independent copies $X$, $Y$, and looks at the probability distribution of $\frac{X+Y}{\sqrt{2}}$.
In the absence of a precise description of your setup I can only throw in some general ideas. The first thing to do is to identify the fixed points of your map $\phi$. I assume it is nonlinear, so this is not a trivial question. If you cannot solve it completely, you need to at least find some easy fixed points. I assume this is what your ${\bf q}'$ is. Then you need to analyze the linearization of $\phi$ at these easy fixed points. I am again assuming that you did that and found that this linearization $A$ has all eigenvalues of modulus less than 1 so it is contractive. From this it follows easily that if you start near ${\bf q}'$ you will converge to ${\bf q}'$. If your starting point ${\bf q}_0$ is not in this neighborhood of ${\bf q}'$, then indeed you might want to run some simulations on the computer not only for ${\bf q}_0$ but also for many other starting points. In the optimistic situation where the simulation indicates that all points go to ${\bf q}'$, then there could be a Lyapunov function for your map, see http://en.wikipedia.org/wiki/Lyapunov_function That's a function which changes monotonically under $\phi$. If you have such a function which controls the norm of ${\bf q}$ then it might tell you that after enough iterations you will be in the neighborhood of ${\bf q}'$ where you have the contraction property. Of course the difficult thing is to come up with the correct guess for this Lyapunov function if it exists. This guess depends on the specifics of your example. For the central limit theorem there is a notion of entropy which does the job.
Some references:
• the book "Information theory and the central limit theorem" by Oliver Johnson, Imperial College Press, 2004.
• the book "Theory of probability and random processes" by Koralov and Sinai which mentions the renormalization group approach to the CLT.
• you may also find more ideas in the paper by Takashi, Tetsuya and Watanabe "Triviality of hierarchical Ising model in four dimensions" Comm. Math. Phys. 220 (2001), no. 1, 13–40. A preprint version is available at http://www.ma.utexas.edu/mp_arc-bin/mpa?yn=00-397
1
I don't think there is a magic theorem here which works all the time. What you want depends a lot on the specific $\phi$, ${\bf q}'$, ${\bf q}_0$ you are working with. What you said sounds a lot like a toy model for the renormalization group, which is a way to study central limit type theorems. For instance the classical central limit theorem can be interpreted as a convergence in the basin of attraction of the Gaussian $\mathcal{N}(0,1)$ law with respect to a map $\phi$ on probability distributions of centered variables with variance 1 given by $$Z\longrightarrow \frac{X+Y}{\sqrt{2}}\ .$$ Namely given a probability distribution for a random variable $Z$, one makes two independent copies $X$, $Y$, and looks at the probability distribution of $\frac{X+Y}{\sqrt{2}}$.
In the absence of a precise description of your setup I can only throw in some general ideas. The first thing to do is to identify the fixed points of your map $\phi$. I assume it is nonlinear, so this is not a trivial question. If you cannot solve it completely, you need to at least find some easy fixed points. I assume this is what your ${\bf q}_0$ is. Then you need to analyze the linearization of $\phi$ at these easy fixed points. I am again assuming that you did that and found that this linearization $A$ has all eigenvalues of modulus less than 1 so it is contractive. From this it follows easily that if start near ${\bf q}_0$ you will converge to ${\bf q}_0$. If your starting point ${\bf q}'$ is not in this neighborhood of ${\bf q}_0$, then indeed you might want to run some simulations on the computer not only for ${\bf q}'$ but also for many other starting points. In the optimistic situation where the simulation indicates that all points go to ${\bf q}_0$, then there could be a Lyapunov function for your map, see http://en.wikipedia.org/wiki/Lyapunov_function That's a function which changes monotonically under $\phi$. If you have such a function which controls the norm of ${\bf q}$ then it might tell you that after enough iterations you will be in the neighborhood of ${\bf q}_0$ where you have the contraction property. Of course the difficult thing is to come up with the correct guess for this Lyapunov function if it exists. This guess depends on the specifics of your example. For the central limit theorem there is a notion of entropy which does the job.
Some references:
• the book "Information theory and the central limit theorem" by Oliver Johnson, Imperial College Press, 2004.
• the book "Theory of probability and random processes" by Koralov and Sinai which mentions the renormalization group approach to the CLT.
• you may also find more ideas in the paper by Takashi, Tetsuya and Watanabe "Triviality of hierarchical Ising model in four dimensions" Comm. Math. Phys. 220 (2001), no. 1, 13–40. A preprint version is available at http://www.ma.utexas.edu/mp_arc-bin/mpa?yn=00-397
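An editorial addition, not part of either revision: the renormalization-group picture of the CLT described above is easy to see numerically. Iterating the map $Z\mapsto (X+Y)/\sqrt{2}$ on samples from a centered, variance-1 but non-Gaussian law drives the excess kurtosis to 0, the value at the Gaussian fixed point. The sketch below is mine and only meant as a toy illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# start from a centered, variance-1 but clearly non-Gaussian law (scaled uniform)
z = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=2**21)

for step in range(6):
    x, y = z[0::2], z[1::2]                  # two independent halves of the sample
    z = (x + y) / np.sqrt(2.0)               # one step of the map Z -> (X + Y)/sqrt(2)
    excess_kurtosis = np.mean(z**4) / np.mean(z**2)**2 - 3.0
    print(step, round(float(excess_kurtosis), 3))   # tends to 0, the Gaussian value
```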
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 48, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9336089491844177, "perplexity_flag": "head"}
|
http://stats.stackexchange.com/questions/tagged/inference
|
# Tagged Questions
Inference, in a statistical context, refers to drawing conclusions about a population from information about a sample from that population.
0answers
9 views
### Iterated Conditional Mode approximation in E step of EM
I wanted to know what is the mathematical justification for using ICM as an approximation for the E step in an EM algorithm. As I understand in the E step the idea is to find a distribution that is ...
1answer
62 views
### Neg Binomial and the Jeffreys' Prior
I'm trying to obtain the Jeffreys' prior for a negative binomial distribution. I can't see where I go wrong, so if someone could help point that out that would be appreciated. Okay, so the situation ...
1answer
26 views
### Measuring some of the patients more than once
I'm conducting a clinical study where I determine an anthropometrical measure of the patients. I know how to handle the situation where I have one measure per patient: I make a model, where I have a ...
1answer
35 views
### how to calculate E[vech(x x')vech(x x')']?
Supposing a vector x follows normal distribution. I want to calculate the expectation of the "fourth moment" in a vector form, meaning $\text{E}[\text{vech}(x x')\text{vech}(x x')']$, given that we ...
2answers
62 views
### Hypothesis Testing
A Lab has been asked to evaluate the claim that drinking water in a local restaurant has a lead concentration of 6 parts per billion (ppb). Repeated measurements follow a normal distribution ...
1answer
50 views
### Calculating confidence intervals for a proportion when there are no 'successes' in the sample
Newbie here! Apologies in advance if I'm asking something that is based on flawed understanding of statistical analysis. I'm looking to analyse 400k replies to a Facebook-equivalent post, to ...
0answers
21 views
### Naive Bayes without model
I have the following scenario: I have two "states". I measure variables $n$ that are affected by the state. A state of 0 is the background state and in this case I expect each variable $n_i$ to be ...
1answer
82 views
### Prior for Bayesian Inference on Failure Rate in Poisson Distribution
I'm trying to derive the posterior distribution for the failure rate (lambda) of a process with poisson distribution. I have tried the use of an improper uniform distribution on lambda by letting the ...
2answers
122 views
### Bayesian inference with Gaussian distributions
This is Problem 4(c), Chapter 2 from Thrun's Probabilistic Robotics . Note that this is self-study and not homework. Suppose I know my position $x$ to be a normal distribution with density ...
0answers
32 views
### Statistical inference about degree of a node in a genetic network
I am working on Gene-Gene interaction networks. I build a graph by adding edges between genes (nodes) representing statistical interaction in prediction of a quantitative parameter value (say, brain ...
0answers
21 views
### Heterogeneous Treatment Effects - How to test differences in the ATE?
I have the following problem: I want to conduct a simple Propensity Score Estimation where the treatment D is a binary variable (D=1 individual i participates in the labor market program, zero ...
2answers
70 views
### Bayesian Inference Notation Confusion
In Bayesian Inference the following notation is quite common: $P(H|D) = \frac{P(D|H)P(H)}{P(D)}$ where $D$ is data and $H$ is hypothesis. Moreover $P(D)$ is represented as total probability. \$P(D) ...
1answer
78 views
### Inference from linear regression slope and Pearson
Sorry if this has been asked before but I've already done quite a bit of work here and I feel like I'm quite close to an answer. I am interested in testing whether the PHP function array_key_exists ...
2answers
261 views
### Why is the Fisher Information matrix positive semidefinite?
Let $\theta \in R^{n}$. The Fisher Information Matrix is defined as: $$I(\theta)_{i,j} = -E\left[\frac{\partial^{2} \log(f(X|\theta))}{\partial \theta_{i} \partial \theta_{j}}\bigg|\theta\right]$$ ...
1answer
72 views
### Marginal posterior and prior are similar (and flat!)
I designed a Bayesian model and sampled the posterior using a MCMC algorithm. My problem is that the posterior marginal distribution of a given latent intermediate variable appears to be uniform just ...
1answer
235 views
### If a tennis match was a single large set, how many games would give the same accuracy?
Tennis has a peculiar three tier scoring system, and I wonder if this has any statistical benefit, from the point of view of a match as an experiment to determine the better player. For those ...
2answers
32 views
### How to determine influence of two time series with feedback?
Say you have two time series, A & B. Each have a mutual effect on the other. To give a real-world example, say that time series A measures an artist's CD sales per month, and time series B is some ...
0answers
58 views
### Conjugate prior for a Gaussian model with shifted variance
Consider a set of observations $\{ y_i \}$ and assume a Gaussian model for these data: $y_i \sim \mathcal{N}(\mu, \sigma^2)$. Suppose the mean parameter $\mu$ is known, but the variance parameter ...
2answers
42 views
### How to show the significance of the difference in means in a paired test
Suppose I perform a paired test. The null hypothesis is that mean of difference is zero: $\mathrm{E[X-Y]} = 0$. The actual difference is positive but distributed non normally. Which statistical ...
1answer
41 views
### Statistical test for computer-systems performance analysis?
Suppose we do the following study. We have 2 web-servers (server A and server B) which are processing user requests. Both run on the same hardware, but one of them (server B) uses a slightly different ...
0answers
71 views
### Multiple categorical Variables and Multiple Hierarchical Counts- how to infer the effects?
I have the following categorical/count data : ...
0answers
69 views
### Matrix completion: How to assign names to the completed columns?
I am wondering if this the right place to ask this question. Normally it should be :). I am recently reading some papers about matrix completion such as in here, and here. I didn't go through some ...
2answers
436 views
### How would you do Bayesian ANOVA and regression in R?
I have a fairly simple dataset consisting of one independent variable, one dependent variable, and a categorical variable. I have plenty of experience running frequentist tests like ...
0answers
26 views
### Can we apply inferential statistics on the entire population? [duplicate]
Possible Duplicate: Statistical inference when the sample “is” the population Greeting, My question is:Can we apply inferential statistics on the entire population in case of the possibilty ...
1answer
158 views
### Latent Dirichlet Allocation (LDA): What exactly is inferred?
I am working my way through LDA and I think I got they main idea of it. Please correct me if I am wrong. Given the Plate notation: The variables $\alpha$ and $\beta$ are Dirichlet distribution ...
2answers
143 views
### Restricted Boltzmann Machines and Markov Networks: relationship in inference?
I am wondering if there is some equivalence between retricted Boltzmann machines and pairwise Markov networks in terms of MAP inference. More specifically, let $y \in \{0,1\}^m$ be the ...
0answers
98 views
### Are these statistics sufficient?
Question (Casella and Berger 6.5): Let $X_1 \ldots X_n$ be independent random variables with pdfs: \$f(x_i|\theta)= \begin{cases} \frac{1}{2i\theta}, & -i(\theta - 1)<x_i<i(\theta+1) \\ ...
2answers
171 views
### Non-fair die - College Probability
How many times must you roll a non-fair die to be at least 84% sure that the sample probability will be within 3% from the actual probability. Since the die is not-fair, we do not know p. My question ...
1answer
191 views
### Computing marginals on a graphical model in Python
I am looking for libraries available from Python to compute marginals on an undirected graphical model (i.e. a random field) with loops. Some algorithms for this could be LBP (loopy belief ...
0answers
158 views
### Sufficient, Complete Sufficient, UMVUE, Rao-Blackwell, Admissible. What are ties between these?
I am taking stat inference course. I have some trouble understanding some these terms: Sufficient Statistics: a stat that does not depend on the parameter, say $\Sigma X$ for normal distribution ...
1answer
204 views
### If the likelihood principle clashes with frequentist probability then do we discard one of them?
In a comment recently posted here one commenter pointed to a blog by Larry Wasserman who points out (without any sources) that frequentist inference clashes with the likelihood principle. The ...
2answers
674 views
### How to find an unbiased estimator?
Suppose $X_1, X_2, ...,X_n$ are samples from a uniform discrete distribution with probability 1/3 on each of the points $\theta-1, \theta, \theta+1$, where $\theta\in\mathbb{Z}.$ From "Theory of ...
1answer
81 views
### What to look for in a pocket calculator for a Graduate level statistics course midterm / final [closed]
Soon I'll be sitting for a midterm (and then the dreaded final) for a graduate class in Statistics. It's open book / notes but I'll be needing a calculator -- something I've not used for years. Any ...
1answer
327 views
### How to show order statistic is sufficient
I have some trouble showing sufficiency for largest order statistic ${x}_{n}$. This is from Casella's text, problem 1.6.3. Let ${p}_{\theta}$ be a density function. ${p}_{\theta}{x}=c({\theta})f(x)$ ...
1answer
71 views
### Inference from conditional observations
Let $(x_1, \ldots, x_n)$ be an i.i.d. random sampling from a conditional normal distribution ${\cal N}(\mu,\sigma^2)$ distribution given some event $A$ possibly parameter-dependent: for instance when ...
0answers
106 views
### Marginal parameter estimation in copula with copula (dependence) parameter known
Suppose we have data $x_i, i=1,2,3,...n$ that are dependent and identically distributed with marginal $f(\cdot|\alpha)$. If we model this with the likelihood \$ L = ...
1answer
120 views
### Understanding the Behrens–Fisher problem
This section of this article says: Ronald Fisher in 1935 introduced fiducial inference in order to apply it to this problem. He referred to an earlier paper by W. V. Behrens from 1929. Behrens and ...
4answers
777 views
### What hypothesis test to use for categorical variables? Possibly in R?
Edit: I think this is a better question, Say, I have categorical characteristics such as gender, race. How should I use Fisher's test and chi-square test? I was looking at this: ...
4answers
223 views
### What if your randomly formed groups are clearly not similar?
What if, before you begin the data collection for an experiment, you randomly divide your subject pool into two (or more) groups. Before implementing the experimental manipulation you notice the ...
1answer
81 views
### Inferring multiple ratios and binomial proportions with missing data
I have a number of studies describing families tested for a genetic condition. For each study the following data are described: $n_p$, number of probands (the proband is the first person in a family ...
2answers
394 views
### Optimal software package for bayesian analysis
I was wondering which software statistical package do you guys recommend for performing Bayesian Inference. For example, I know that you can run openBUGS or winBUGS as standalones or you can also ...
1answer
551 views
### How to write a poker player using Bayes networks
This is my first question on stackexchange and also my first time implementing a Bayesian network so I will apologize ahead of time for any novice mistakes I make. The goal of my project is to ...
2answers
257 views
### Behrens–Fisher problem
Is there a good published expository account, with mathematical details, of the various approaches that have been taken to the Behrens–Fisher problem?
3answers
973 views
### What if your random sample is clearly not representative?
What if you take a random sample and you can see it is clearly not representative, as in a recent question. For example, what if the population distribution is supposed to be symmetric around 0 and ...
3answers
291 views
### Good summaries (reviews, books) on various applications of Markov chain Monte Carlo (MCMC)?
Are there any good summaries (reviews, books) on various applications of Markov chain Monte Carlo (MCMC)? I've seen Markov Chain Monte Carlo in Practice, but this books seems a bit old. Are there ...
2answers
100 views
### How to go about selecting an algorithm for approximate Bayesian inference
I am wondering if there are any good rules of thumb for how to go about selecting an approximate inference algorithm for a problem/model (specifically when exact inference is intractable)? When you ...
1answer
135 views
### Does Loopy BP give the same solutions as a Gibbs sampler?
The literature in MCMC and LBP never refer to the fact that the two methods look (on expectation) exactly the same. To illustrate, first consider a simple Ising model, that is, a graphical model ...
0answers
159 views
### Inductive vs deductive Inference
I am curious to know exactly, what are the (possible) differences between inductive and deductive statistical inferences in applied statistics. Suggestions for some good resources to learn their ...
3answers
465 views
### Mixture Models and Dirichlet Process Mixtures (beginner lectures or papers)
In the context of online clustering, I often find many papers talking about: "dirichlet process" and "finite/infinite mixture models". Given that I've never used or read about dirichlet process or ...
0answers
57 views
### Querying a junction tree for joint probability
How do I query a junction tree for a joint probability? Let's say I have a factor graph of the form: \$P(x_1, x_2, x_3, x_4, x_5) = \frac{1}{Z} F(x_1) F(x_2) F(x_3) F(x_4) F(x_5) *\\ \qquad \qquad ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9052231311798096, "perplexity_flag": "middle"}
|
http://crypto.stackexchange.com/questions/228/how-robust-is-discrete-logarithm-in-gf2n/469
|
# How robust is discrete logarithm in GF(2^n) ?
"Normal" discrete logarithm based cryptosystems (DSA, Diffie-Hellman, ElGamal) work in the finite field of integers modulo a big prime p. However, there exist other finite fields out there, in particular binary fields GF(2n). There is a specific attack described by Coppersmith for discrete logarithm in a binary field, and it was later on refined into the more general Function Field Sieve by Adleman and Huang. The FFS was used by Joux and Lercier to obtain the current record in GF(2n) discrete logarithm, where n = 613.
What I would like to know is:
• How does discrete logarithm in GF(2^n) compare to discrete logarithm modulo a prime p of n bits? At the time when Coppersmith published his algorithm, it made discrete logarithm in binary fields look easier than its prime-p counterpart, but the latter also got improved later on.
• Is it important, for discrete logarithm in GF(2^n), whether n is itself prime or not? The current record is for GF(2^613), beating the previous record of GF(2^607), and both 607 and 613 are prime numbers. Would discrete logarithm in GF(2^1024) be easier than in GF(2^1021)?
-
## 3 Answers
Discrete logarithms in $\mathbb{F}_{p}$ share the same asymptotic complexity as integer factorization for general numbers: $L_p[1/3,1.923]$ for general integers, $L_p[1/3,1.587]$ for special integers. Discrete logarithms in $\mathbb{F}_{p^n}$ have the same asymptotic complexity as factoring special integers, i.e. $L_{p^n}[1/3, 1.587]$, via the Function Field Sieve.
So, extrapolating from the factoring records, we can handwave that a discrete log in $\mathbb{F}_{2^{1021}}$ is an order of magnitude easier than a discrete log modulo a 768-bit general prime, and about the same as modulo a pseudo-Mersenne prime $2^{1021} - c$.
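To make the handwaving slightly more concrete (an editorial addition, not part of the original answer): plugging numbers into $L_N[1/3,c]=\exp\!\big(c\,(\ln N)^{1/3}(\ln\ln N)^{2/3}\big)$ while ignoring the $o(1)$ terms and all constant factors gives only a very rough comparison, but it does reproduce the ordering claimed above.

```python
from math import exp, log

def L(bits, c):
    """exp(c * (ln N)^(1/3) * (ln ln N)^(2/3)) for N ~ 2^bits, o(1) and constants ignored."""
    lnN = bits * log(2.0)
    return exp(c * lnN ** (1.0 / 3.0) * log(lnN) ** (2.0 / 3.0))

print(L(768, 1.923))    # DL modulo a 768-bit general prime (NFS-type), ~1e23
print(L(1021, 1.587))   # DL in GF(2^1021) via the FFS, ~3e21
print(L(1021, 1.923))   # DL modulo a general 1021-bit prime, ~1e26
```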
As to whether composite degree extension fields are easier to solve, maybe. It is possible to represent the same composite field in a number of ways (usually known as tower of fields), and it is possible that some representations allow for faster breaks than others. Here's a quote from Andrew Odlyzko's 1985 paper Discrete logarithms and their cryptographic significance:
In fact, these fields may be very weak because of the possibility of moving between the field and its subfields.
However, there is no data on significant asymptotic advantages of composite degree over prime degree (if there was, pairing-based schemes would be toast, as embedding degrees are by and large composite).
One shouldn't also forget to check the smoothness of $p^n-1$, to avoid embarrassing Pohlig-Hellman breaks.
-
1
– Paŭlo Ebermann♦ Aug 19 '11 at 0:54
Alright, removed the LaTeX syntax, and did some general fixing (including links). – Samuel Neves Aug 19 '11 at 11:59
As we now have working LaTeX syntax, I added the formulas again to your post. Please check that I didn't transform anything wrong. – Paŭlo Ebermann♦ Sep 30 '11 at 18:02
Antoine Joux very kindly sent me the following on the topic:
People worry that [logarithms over fields with composite exponent] might be easier, this is why they use prime exponent. For some factorization of the exponent, viewing the finite field as a tower of extensions $(p^{n_1})^{n_2}$ indeed makes things easier.
See "The function field sieve in the Medium Prime Case" by R. Lercier and myself.
[The portion below, while correct, is not relevant to this question. I have not deleted it as that would leave the comment dangling.]
Elliptic cuves over $GF(2^m)$ where $m$ is composite are potentially vulnerable to attacks based on Weil descent. People wishing to show the state of the art in discrete logarithm computation choose problems with m a prime so that no such shortcuts are possible.
The paper Analysis of the GHS Weil Descent Attack on the ECDLP over Characteristic Two Finite Fields of Composite Degree (2001) by Markus Maurer, Alfred Menezes and Edlyn Teske is the most recent work on the topic I can conveniently find.
-
Note that the Weil descent approach to elliptic curves of composite-degree transfers the discrete logarithm from $E(\mathbb{F}_{2^n})$ to $J(\mathbb{F}_{2^{(n/p)}})$, where $p$ is some divisor of $n$, and $J$ is the Jacobian of a high-degree hyperelliptic curve where the discrete log might be faster. The original question, however, was about discrete logarithms in the base field $\mathbb{F}_{2^n}$; the Weil/Tate pairing might be used to transfer $E(\mathbb{F}_{2^n})$ to $\mathbb{F}_{2^n}$ (MOV and FR attacks). – Samuel Neves Sep 30 '11 at 21:19
To complete @Samuel's answer, there are a few shortcuts that can be used when n is composite; however, they only contribute small constant factors, hence they do not change the asymptotic behavior:
• If n can be divided by r, then one can first solve the discrete logarithm in the subfield GF(2^r). In a sieve-based algorithm, this can provide up to half the relations that we need for the final linear algebra step.
• The final linear algebra step of the FFS computes things modulo 2^n-1. If n is not prime, then 2^n-1 is not prime either, and such operations can be implemented more efficiently through the use of the Chinese Remainder Theorem.
Also, if working in a subgroup, one can botch the choice of the subgroup. If n = rs for non-trivial factors r and s, then GF(2^n)* has size 2^n-1, which is a multiple of 2^r-1. If we choose a subgroup generated by a value g of order q where q divides 2^r-1, then we are actually computing things in GF(2^r) and we can solve the discrete log by working in that subfield, where attacks are much more efficient, since r is no more than n/2. In other words, when choosing the subgroup order q (a prime), we must make sure that q does not divide 2^r-1 for any r which properly divides n. A prime n is a simple way to ensure that.
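A small check of this last point (an editorial addition; the function name is mine): since $\gcd(2^a-1,2^b-1)=2^{\gcd(a,b)}-1$, a prime $q$ dividing $2^r-1$ for a proper divisor $r$ of $n$ generates a subgroup that already lives in the subfield GF(2^r), and candidate subgroup orders can be screened for this pitfall directly.

```python
from math import gcd

# gcd(2^a - 1, 2^b - 1) = 2^gcd(a,b) - 1, so 2^r - 1 divides 2^n - 1 whenever r divides n
for a, b in [(6, 4), (9, 6), (15, 10)]:
    assert gcd(2**a - 1, 2**b - 1) == 2**gcd(a, b) - 1

def lives_in_subfield(q, n):
    """True if q divides 2^r - 1 for some proper divisor r of n,
    i.e. the order-q subgroup already sits inside GF(2^r)."""
    return any((2**r - 1) % q == 0 for r in range(1, n) if n % r == 0)

# toy example with n = 15: 31 divides 2^5 - 1, so it is a bad subgroup order;
# 151 divides 2^15 - 1 but no smaller 2^r - 1, so it does not have this problem
print(lives_in_subfield(31, 15), lives_in_subfield(151, 15))   # True False
```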
http://mathhelpforum.com/differential-geometry/176150-null-space-linear-functional.html
1. ## Null Space of A linear Functional
Dear Colleagues,
If $Z$ is an $(n-1)-$dimensional subspace of an $n-$dimensional vector space $X$, show that $Z$ is the null space of a suitable linear functional $f$ on $X$, which is uniquely determined to within a scalar multiple.
Regards,
Raed.
2. Originally Posted by raed
Dear Colleagues,
If $Z$ is an $(n-1)-$dimensional subspace of an $n-$dimensional vector space $X$, show that $Z$ is the null space of a suitable linear functional $f$ on $X$, which is uniquely determined to within a scalar multiple.
Regards,
Raed.
Let $Z$ have a basis $\{x_1,\cdots,x_{n-1}\}$ and extend it to a basis $\{x_1,\cdots,x_n\}$ for $X$. Then, merely define $\varphi:X\to F$ by $x_k\mapsto \delta_{k,n}$ and extend by linearity.
Show then that $\ker\varphi= Z$ and, moreover, that the only other way to construct such a linear functional is to extend the basis for $Z$ to a basis for $X$ by picking some other $x'_n\in \text{span}\{x_n\}$, which amounts to any other such linear functional looking like $x_k\mapsto \alpha\delta_{k,n}$ where $x'_n=\alpha x_n$, etc.
Now prove all of that
3. Originally Posted by raed
Dear Colleagues,
If $Z$ is an $(n-1)-$dimensional subspace of an $n-$dimensional vector space $X$, show that $Z$ is the null space of a suitable linear functional $f$ on $X$, which is uniquely determined to within a scalar multiple.
Regards,
Raed.
Let $\{x_1,\ldots ,x_{n-1}\}$ be a basis for $Z$ , and complete this to a basis $\{x_1,...,x_{n-1},x_n\}$ of
the whole n-dimensional space.
Now define $f:X\rightarrow \mathbb{F}$, where $\mathbb{F}$ is the field of scalars, by
$f(x_i)=\left\{\begin{array}{ll}0&\mbox{ , if }i=1,...,n-1\\1&\mbox{ , if }i=n\end{array}\right.$ and extend the definition by linearity.
Show now that $Z=\ker f$
Tonio
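To see the construction numerically, here is a small numpy sketch (not part of the thread; the example vectors and the choice $X=\mathbb{R}^3$ are assumptions made purely for illustration):

````
# Sketch: extend a basis of Z to a basis of X and let f be the coordinate
# functional attached to the added basis vector x3.
import numpy as np

x1 = np.array([1.0, 0.0, 1.0])            # basis of Z (example data)
x2 = np.array([0.0, 1.0, 1.0])
x3 = np.array([0.0, 0.0, 1.0])            # completes {x1, x2} to a basis of R^3

B = np.column_stack([x1, x2, x3])         # change-of-basis matrix
f = lambda v: np.linalg.solve(B, v)[2]    # f(v) = coefficient of x3 in v

print(np.isclose(f(2*x1 - 5*x2), 0.0))    # True: every vector of Z lies in ker f
print(np.isclose(f(x3), 1.0))             # True: f is not the zero functional
````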
4. Thank you very much.
How can the extension by linearity be done?
Regards.
6. Originally Posted by raed
How the extension by linearity can be done.
Regards.
This is a standard procedure: just write any element of the space as a linear combination
of the basis and define $f(v)=f\left(\sum\limits^n_{i=1}a_ix_i\right):=\sum\limits^n_{i=1}a_if(x_i)$ ...
Tonio
I understand you. Thank you very much.
Best Regards.
http://mathhelpforum.com/discrete-math/192527-proof-there-infinitely-many-prime-numbers-using-induction.html
1. ## proof there are infinitely many prime numbers using induction
Euclid proved that there are infinitely many primes. His proof doesn't exactly use induction but it is close. Rewrite the proof to use proof by induction.
Euclid's proof
Basis
$S={2,3,5,7,11,13...}$
Induction Hypothesis
let $S_k={p_1, p_2, p_3,...,p_k}$ be an exhaustive list of prime numbers where k is an integer greater than 0
Induction step
consider $S_k={p_1, p_2, p_3,...,p_k}$ has cardinality $k$
let $S_{k+1}={p_1, p_2, p_3,...,p_k, p_{k+1}}$
then $S_{k+1}={S_k, p_{k+1}}$
Therefore $p_{k+1} \in S_{k+1}$ but $p_{k+1} \notin S_k$
Conclusion
by the PMI there are infinitely many prime numbers
2. ## Re: proof there are infinitely many prime numbers using induction
Why? This is not a proof that follows from PFI. You're not doing induction on the naturals, because almost all naturals are not primes, and you're not doing induction on primes because you'd need to assume the answer beforehand.
3. ## Re: proof there are infinitely many prime numbers using induction
Just to clarify (since I can't figure out how to edit the original post):
The reason you can't do induction on primes to prove there are infinitely many primes is that induction can only prove that any item from the set under consideration must have the property you want. The property you're trying to prove (that there exist infinitely many primes) is not a property of the individual primes. You can't establish a base case or an inductive step if you don't have a provable property for the individual items.
So your argument is flawed because you assume exactly what you're trying to show. Notice you say, "let S_k+1 = [...]p_k+1". "Let" means you just assumed that a larger set of primes, with one additional prime, exists, even though that's what you're trying to show. You derive a contradiction, but it's because you put it in there yourself.
The proper way induction works:
1) You have a set of things that you can prove is well-ordered (which means they're totally ordered and every subset has a least element). We can expand this idea to different kinds of orders (well-founded relations, larger countable ordinals, uncountable ordinals, etc.), but for simplicity here you can assume this is a naturally-ordered set that behaves just like the natural numbers (both primes and naturals behave like this).
2) You prove the property you want to show holds for one or more consecutive base cases.
3) You prove that anytime the property holds for a set of cases, it must also hold for the least case not in that set.
4) By FIP, this proves that the property holds for all cases which are at least as large as the smallest base case. (If the smallest base case is the smallest case, it proves the set of things is equal to the set of things having the property.) But again, you need a property of individual items that you want to show.
4. ## Re: proof there are infinitely many prime numbers using induction
Originally Posted by Jskid
Euclid proved that there are infinitely many primes. His proof doesn't exactly use induction but it is close. Rewrite the proof to use proof by induction.
Euclid's proof
Basis
$S={2,3,5,7,11,13...}$
Induction Hypothesis
let $S_k={p_1, p_2, p_3,...,p_k}$ be an exhaustive list of prime numbers where k is an integer greater than 0
Induction step
consider $S_k={p_1, p_2, p_3,...,p_k}$has cardinality k
let $S_{k+1}={p_1, p_2, p_3,...,p_k, p_{k+1}}$
then $S_{k+1}={S_k, p_{k+1}}$
Therefore $p_{k+1} \in S_{k+1}$ but $P_{K+1}$ not in $S_k$
Conclusion
by the PMI there are infinitely many prime numbers
Try this:
For any natural number N there is a prime greater than N.
Base case: N=1, since 2 is prime this holds.
Induction step, suppose for some k there is a prime greater than k, now show that there is a prime greater than k+1.
(this is a bit peculiar since we do not need to use that there is a prime greater than k to prove there is a prime greater than k+1, but it is still induction..)
CB
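For what it is worth, CB's induction step can be made constructive; the following Python sketch (my own illustration, not from the thread) uses Euclid's observation that every prime factor of N! + 1 exceeds N:

````
# Sketch: for any N, the smallest prime factor of N! + 1 is a prime greater than N,
# since every d with 2 <= d <= N leaves remainder 1 when dividing N! + 1.
from math import factorial

def prime_above(N):
    m = factorial(N) + 1
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d          # smallest factor found by trial division is prime
        d += 1
    return m                  # m itself is prime

for N in range(1, 8):
    p = prime_above(N)
    assert p > N
    print(N, "->", p)
````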
http://mathhelpforum.com/differential-geometry/75630-homeomorphisms-interior-boundary.html
1. ## homeomorphisms and interior, boundary
Show that if f: X->Y is a homeomorphism, then:
$f(\partial(A))=\partial(f(A))$
I am stuck!
2. Originally Posted by Andreamet
Show that if f: X->Y is a homeomorphism, then:
$f(\partial(A))=\partial(f(A))$
I assume A is a subset of X.
Since $\partial A = \overline{A} \cap \overline {X \setminus A}$, $f (\partial A) = f( \overline{A} \cap \overline {X \setminus A})$.
We need to show that $f( \overline{A} \cap \overline {X \setminus A})$ is $\overline{f(A)} \cap \overline {Y \setminus f(A)}$, which is $\partial (f(A))$.
1. For every subset A of X, one has $f(\bar{A}) \subset \overline{f(A)}$ when f is continuous. If f is a homeomorphism, $f(\bar{A}) = \overline{f(A)}$.
2. $f(X \setminus A) = (Y \setminus f(A))$. Using 1, $f(\overline{X \setminus A}) = \overline{Y \setminus f(A)}$.
Now, it remains to combine 1 & 2 to get the answer: since $f$ is a bijection, $f(\overline{A} \cap \overline{X \setminus A}) = f(\overline{A}) \cap f(\overline{X \setminus A}) = \overline{f(A)} \cap \overline{Y \setminus f(A)} = \partial(f(A))$.
http://stats.stackexchange.com/questions/tagged/image-processing
# Tagged Questions
A form of signal processing where the input is an image. Usually treating the digital image as a two-dimensional signal (or multidimensional). This processing may include image restoration and enhancement (in particular, pattern recognition and projection).
### Image classifier in python for few samples
I have 150 pictures that represent archeological signs and 5 categories to which they belong. These pictures have features like circularity, roughness and elongation that are expressed as continuous ...
### Which features to extract for classifying segmented zones of an image into two classes “handwritten text” and “graphics”
I have some chemical document images segmented into different zones, some zones represents "handwritten text" and others represent "graphics". I want to classify this zones into two classes, one for ...
### Application of Poisson distribution to image processing
I'm trying to write a program to detect water bubbles in heated oil. I've applied a canny edge detection filter to the image and the results look like the following: No bubbles: ...
### How can you reconstruct “base” (profile/headshot) image of face using several obscured images of the face?
I recently read a paper titled "3D Face Reconstruction from a Single Image using a Single Reference Face Shape" which got me thinking about the fact that it is rare for one to encounter unobscured ...
### How to calculate the signal-to-noise ratio (SNR) in an image?
I am working with an image X, I apply the "adaptive median filter" in it and I get the image Y. I'd like to measure the SNR in both in order to evaluate the quantity of noise deleted. I know the ...
### Classifying foreground vs background for ellipse shapes
I have images of ellipse shapes which are read in as a matrix of pixel intensities. I'd like a way to be able to classify whether a pixel is foreground (belonging to ellipse) or background (not ...
### Image similarity database
I'll preface this by stating I'm very new to computer vision. I took a Stanford Machine Learning course on Coursera, and that is the extent of my experience. I'm working on a free-time project to ...
### Markov random field and iterated condition mode
I have spent a lot of time studying MRF (applied to images) but still can't grasp the idea. Could you please clarify these ideas: What is the clique potential? What is a clique in image, and do they ...
### does it make sense for non-negative data to subtract the mean and divide by the std dev?
It is a very usual procedure to subtract the mean and divide by the standard deviation in a set of data. If we deal with non-negative data, i.e. image, (in [0,1] or [0,255]), does this procedure make ...
### Auto crop black borders from a scanned image by making stats about gray values
I'm writing a computer program to automatically detect black noisy borders on scanned images and crop them off. My algorithm is based on 2 variables: gray mean value (of the pixels in a rows/columns) ...
### Classification on principal components
For my research I am doing classification on the dataset of three variables. I run unsupervised clustering (based on a histogram peak technique of cluster analysis)and the result I evaluated visually ...
### bag of words in an online configuration, for classification / clustering
I have a set of image documents. I extract text keywords from this images using OCR to represent each image as a bag of words (a vector where each value is the number of occurrence of a word in the ...
### Suitable data preprocessing for an image recognition task: normalization ,standardization or neither?
I aim to train my classifier for an image recognition task. What kind of preprocessing steps I need to take to enhance my results? (I have >40000 images with >700 pixels so large amounts of data with ...
### Image Clustering with K-means - Postprocessing
I did some clustering on an image (each pixel is an observation that has 5 variables associated with it), I get pretty detailed results but they are a little bit noisey... I think. I used K-means. ...
### how Neural Network works on image recognition [duplicate]
Possible Duplicate: How does neural network recognise images? I am trying to learn how Neural Network works on image recognition. I have seen some examples and become even more confused. In ...
### How does neural network recognise images?
I am trying to learn how Neural Network works on image recognition. I have seen some examples and become even more confused. In the example of letter recognition of a 20x20 image, the values of each ...
### Algorithms and libraries to do fast character recognition
I just found out about Pleco the other day and was wondering how they achieve this type of accuracy: http://www.youtube.com/watch?v=x7VTo0656Rc I have a strong background in machine learning, yet ...
### Document image analysis and retrieval with online incremental clustering
Is there any interesting problem in the area of "Document Image Analysis and Retrieval" which by nature needs an online/incremental clustering process ? The problem may be in the context of "Logical ...
### The most common extracted features for image recognition
I want to build a neural network, but because I have high resolution pictures, I rejected the idea of passing the entire image to the NN. I was wondering what are the most common extracted features ...
### How to measure the number of people in a picture of a crowd?
Background: Israel (and the middle east in general) is filled with protests. I am curious, when given a picture, to estimate how many people are in that picture (often a picture of a large crowd). ...
### How to combine features extracted by PCA, LDA and LBP?
What I'm thinking is to combine PCA features, LDA features and LBP features together to get a higher accuracy, since I think the three features are all kind of histogram vectors and when we decide the ...
### Comparing two sets of pixels to determine whether they belong to the same object
I have two sets of data, and I want to know if the second set is sufficiently different from the first to be considered different. More specifically, I have a sample set A from a number of pixels in ...
### Help on SVM for road image processing
I am new to SVM. I would like to use SVM to segment/cluster/classify a road image into two distinct regions i.e. drivable region and non-drivable region. Unfortunately, I do not have any images where ...
### I have a statistic, how do I calculate its distribution?
I am comparing images for correlation. The images are all correlated, but I would like to determine when one pair is much more highly correlated, relative to another pair. I am using as a statistic ...
### Using PCA for detecting similar regions in an image
I'm trying to understand an algorithm that detects similar regions of an image using PCA. The algorithm essentially divides the image into overlapping square blocks and then does PCA with each block ...
### How to run K-means clustering on data points of varying dimensionality?
I'm trying to aggregate $T$ local image descriptors (i.e. histograms) into a vector, namely, the Fisher Vector as described in this paper by H. Jégou et al., Aggregating local image descriptors into ...
### Definition of “Natural Images” in the context of machine learning
Whilst reading up on the Deep learning literature, I noticed that a few variations on the standard network structure that were created specifically to better model "Natural/Real Images". For example, ...
### How to select discrete cosine transform coefficients as a feature vector?
I need to use DCT on frames of videos as a feature vector to train a Feed-forward artificial neural network, but the problem is the large number of coefficients (in thousands or so). How can I choose ...
### Creating tree splits in Hough forest
I am working on my thesis and working machine learning. The overall problem is Detecting and Recognition of Road Inventory. For my research part I am looking into decision trees, especially Hough ...
### What is the most accurate way of determining an object's color?
I have written a computer program that can detect coins in a static image (.jpeg, .png, etc.) using some standard techniques for computer vision (Gaussian Blur, thresholding, Hough-Transform etc.). ...
### Averaging images
I want to see if it is possible to analyse openly available images (from google, flickr, etc.), and conclude something from them (the same way one might do this from census data but with images as ...
### How is EM used in the sense of data mining on images?
I understand how EM is used in the sense of estimating the Gaussian model that underlies a set of data, but its unclear how this is applicable. I am trying to understand how EM might be used to ...
### What is the level of measurement of image data?
Question: This is a bit of an abstract question, but bear with me. I am averaging images, to try and deduce what the average image of a specific subject looks like (just out of curiosity, it might ...
### Where do I find large face datasets?
Are there any large, freely available (but not necessarily labeled) face datasets out there? The ones I have seen usually range in the hundreds, but for unsupervised feature learning it would be ...
### Similarities between different size matrices, rescaling problem
Given a series of matrices {$M_i$($m_i\times n_i$),i=1...k,$m_j,n_j \in$random} if we rescale (resize) all matrices into a ...
### How to equalize histograms
I am learning some image processing stuff and equalizing a histogram comes up as an important topic. I have followed the procedure listed on Wikipedia but the resulting equalized histogram does not ...
### Do image recognition efforts always rely on machine learning and statistics?
This is something I've always wondered. Consider the Kinect. It takes its 3d image data and manages to recognize that a human is contained at a given boundary. Are these types of technologies ...
### 2D object recognition using MATLAB
Have you any idea about implementing 2D object recognition with MATLAB? Which characteristics of objects can feed a neural network? It's my training data-set (provided by ETH University of ...
### Template matching involving scale-, rotation- and shape-invariant transformation
I am not sure whether this is the right place to ask this question, but any possible answer will help. I want to apply some form of template matching on these set of pictures. This is the template ...
### Valid method to analyze spatial correlations in images?
Is there a reasonable way to quantify the amount of local correlations in an image? For example, I want to justify the correlations between a neighbourhood of pixels is much higher than the ...
### Cross variogram with a moving window
I need to generate cross variograms of images using moving windows. For that I use the following equation: ...
### Detect circular patterns in point cloud data
For some volume reconstruction algorithm I'm working on, I need to detect an arbitrary number of circular patterns in 3d point data (coming from a LIDAR device). The patterns can be arbitrarily ...
### Hidden states in hidden conditional random fields
I am trying to study hidden conditional random fields but I still have some fundamental questions about those methods. I would be immensely grateful if someone could provide some clarification over ...
### How to call intensity domain elements? [closed]
When working with image histograms, I often find my thought stumbling against the lack of exact word to name "intensity element". When someone needs the analogous term for time domain element, he uses ...
### How to assess the similarity of two histograms?
Given two histograms, how do we assess whether they are similar or not? Is it sufficient to simply look at the two histograms? The simple one to one mapping has the problem that if a histogram is ...
### Detecting a given face in a database of facial images
I'm working on a little project involving the faces of twitter users via their profile pictures. A problem I've encountered is that after I filter out all but the images that are clear portrait ...
### Matlab image blending
Hey guys. I got two images from video frames. They have a certain portion of overlap. After warping one of them, I'm currently trying to blend them together. In other words, I would like to stitch ...
### How to find text blocks in a scanned document?
I am trying to detect text in a scanned document by examining variations in the lightness of the scan collapsed vertically. Here's a sample of the input I would receive, with the lightness plot of ...
### Suggested R packages for frontier estimation or segmentation of hyperspectral images
An hyperspectral image is a multidimensional image with more than 200 spectral bands i.e. an image for which each pixel is a vector of dimension 200 (most often it is a sampled spectral curve that is ...
### Differences between Baum Welch and Viterbi Training
I am currently using Viterbi training for an image segementation probelm. I wanted to know what are the advantages/disadvantages of using the Baum-Welch algorithm instead of Viterbi training.
http://math.stackexchange.com/questions/302402/limit-sequence-sets
# Limit sequence sets
In my measure theory book I came across the following definition: Let $(A_n)_{n\ge1}$ be a sequence of subsets of some set $X$. Define:
$\limsup_{n\to\infty} A_n:=\bigcap_{n\ge1}\bigcup_{k\ge n}A_k$
$\liminf_{n\to\infty} A_n:=\bigcup_{n\ge1}\bigcap_{k\ge n}A_k$
Call the sequence convergent if $\limsup_{n\to\infty} A_n=\liminf_{n\to\infty} A_n$ , in which case we define $\lim_{n\to\infty} A_n:=\limsup_{n\to\infty} A_n$
My question is, does this notion of convergence correspond to some sort of metric on the set of subsets of $X$, or is it completely unrelated to the usual concept of a limit? Thanks
Why should it be metric? One can define limits for sequences in topological spaces. – CutieKrait Feb 13 at 20:54
## 2 Answers
The usual real limit can be phrased in terms of this language. Suppose $(a_n)$ is a real sequence and define $A_n := (-\infty,a_n]$. Then we have $$\sup \left ( {\lim \sup}_{n \to \infty} A_n \right) = {\lim \sup}_{n \to \infty} a_n$$ and similarly for limes inferior and limit. Informally, the usual convergence can be formulated in terms of set convergence of rays of real numbers.
Nevertheless, the set convergence is much more general and requires no additional structure. You can formulate it for any collection of sets whatsoever (not just sequences). Actually note that not every notion of convergence is topologizable (and so a fortiori not metrizable). So in general you shouldn't expect that there is a metric involved where convergence is.
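As a concrete finite illustration of the set-theoretic definitions (the alternating example sequence and the truncation horizon below are assumptions chosen purely for demonstration), one can approximate the tails numerically:

````
# Sketch: approximate limsup/liminf of a set sequence by truncating the infinite
# unions and intersections at a finite horizon N.
from functools import reduce

def A(n):                          # A_n = {0} for even n, {0, 1} for odd n
    return {0} if n % 2 == 0 else {0, 1}

N = 50                             # truncation horizon for the "infinite" tails

def tail_union(n):        return reduce(set.union,        (A(k) for k in range(n, N)))
def tail_intersection(n): return reduce(set.intersection, (A(k) for k in range(n, N)))

# keep the outer index well below N so the truncation does not distort the tails
lim_sup = reduce(set.intersection, (tail_union(n)        for n in range(1, N // 2)))
lim_inf = reduce(set.union,        (tail_intersection(n) for n in range(1, N // 2)))

print(lim_sup)   # {0, 1}: points lying in infinitely many A_n
print(lim_inf)   # {0}:    points lying in all but finitely many A_n
````

Here the sequence is not convergent, since the two limits differ.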
I agree with @Marek's answer that we do not really need to look at it topologically at all (there is a general notion of convergence spaces, e.g.). There is however one context where it does correspond to a metric convergence: the so-called Hausdorff metric on the hyperspace of a compact metric space (which is the set of non-empty closed subsets of that space, a topological analogue of a powerset). IIRC, your notion of convergence and the one induced by the Hausdorff metric coincide. But the set-theoretic one is much more general.
Also, I seem to recall that if we consider $\mathcal{P}(X)$ to be topologized like $\{0,1\}^X$, identifying a set and its characteristic function, and using the product topology, we get this notion of convergence as well. http://en.wikipedia.org/wiki/Limit_superior_and_limit_inferior seems to confirm this.
http://crypto.stackexchange.com/questions/1246/must-the-order-of-the-groups-in-a-bilinear-map-be-the-same?answertab=active
# Must the order of the groups in a bilinear map be the same?
I've been reading up on bilinear maps and their application to cryptography and one thing I keep seeing hasn't yet clicked.
If $e:G_1\times G_2\to G_n$ is a bilinear map, $G_1,G_2,G_n$ are always defined as having the same order.
It seems to me, however, that $ord(G_n)$ should be $ord(G_1)\cdot ord(G_2)$. Is there a reason that I always see the groups having the same order by definition? Must that be the case?
You could have a bijective (linear) mapping (i.e. an isomorphism) with this order, but not a bilinear (and surjective) mapping. Bilinear mappings are not injective (other than in trivial cases). – Paŭlo Ebermann♦ Nov 18 '11 at 2:49
## 1 Answer
If both $G_1$ and $G_2$ have prime order $r$, then this means that there are generators $g_1$ and $g_2$; thus, for every $u_1 \in G_1$, there is an integer $x_1$ modulo $r$ such that $u_1 = g_1^{x_1}$. Therefore, every pairing value $e(u_1, u_2)$ is equal to $e(g_1^{x_1},g_2^{x_2}) = e(g_1, g_2)^{x_1x_2}$ by bilinearity. It follows that $e(g_1,g_2)$ is a generator of all the possible pairing values, and the bilinearity implies that $e(g_1,g_2)^r = e(g_1^r, g_2) = 1$. Hence, the group of possible pairing values also has prime order $r$.
Now you can imagine $G_n$ as being larger, with possible pairing values being only a strict subset of $G_n$, but that's just cheating.
Note that bilinearity and non-degeneracy imply that if a prime $p$ divides the order of $G_1$, then $1 = e(u_1^p,u_2) = e(u_1,u_2^p)$, from which we can conclude that $p$ also divides the order of $G_2$, and the order of $G_n$ as well. So we cannot have pairings over just any groups.
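If it helps intuition, here is a toy Python sketch of the counting argument above; it only uses $\mathbb{Z}_r$ with multiplication modulo $r$ as a stand-in for a pairing (an assumption made purely for illustration, with none of the properties of a real cryptographic pairing):

````
# Toy model: G1 = G2 = Gn = Z_r written additively, with e(a, b) = a*b mod r.
r = 11                                    # prime "group order"

def e(a, b):
    return (a * b) % r

g1, g2 = 3, 7                             # "generators" of G1 and G2
# bilinearity: e(4*g1, g2) = 4*e(g1, g2)
assert e((4 * g1) % r, g2) == (4 * e(g1, g2)) % r
# e(r*g1, g2) = e(0, g2) = 0, so the image is killed by r: its order divides r
assert e((r * g1) % r, g2) == 0
# the pairing values generated by e(g1, g2) fill out a group of exactly r elements
print(sorted({(k * e(g1, g2)) % r for k in range(r)}))
````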
It is possible, however, that $G_1$ and/or $G_2$ are larger than $G_n$. In practice, the currently known efficient pairings are all derived from Weil or Tate pairings, which work over elliptic curves. From now on, I will denote operations in $G_1$ and $G_2$ additively, because Tradition requires that we talk about point additions. The basic setup of Weil and Tate pairings goes thus:
Let $\mathbb{F}_q$ be a finite field of order $q$. Let $E$ be an elliptic curve over $\mathbb{F}_q$. Let $r$ be a prime divisor of the order of $E$, such that $r^2$ does not divide the order of $E$, and $r$ is not equal to the field characteristic (this is to avoid a lot of degenerate cases). We denote $E[r](\mathbb{F}_q)$ the subgroup of points of $r$-torsion: these are the points which yield $0$ (the "point at infinity") when multiplied by $r$, and there are $r$ of them.
Then there is an embedding degree which is the lowest integer $k \geq 2$ such that $r$ divides $q^k-1$. It so happens (theorem of Balasubramanian-Koblitz) that $E[r](\mathbb{F}_{q^k})$ (the group of $r$-torsion points over the curve $E$, this time considering point coordinates over the field $\mathbb{F}_{q^k}$) contains $r^2$ points.
In that situation, both Weil and Tate pairings become non-trivial; they take as input $r$-torsion points in $E[r](\mathbb{F}_{q^k})$, and yield as output values in $\mathbb{F}_{q^k}^*$, and more specifically $r$-th roots of $1$ in that extended field. This is a subgroup of size exactly $r$ of the invertible elements in the field. This is our group $G_n$.
So we have the following situation:
• $G_1$ and $G_2$ are both subgroups of $E[r](\mathbb{F}_{q^k})$, and thus may have order $r$ or $r^2$.
• $G_n$ has always order $r$, no more, no less.
At that point, we must choose our groups so that we get some desirable properties:
• $G_1$ and $G_2$ both have order $r$.
• It is easy to hash arbitrary data messages into elements of $G_2$ (i.e. we can "randomly" generate elements of $G_2$ without knowing the discrete logarithm of the resulting point with regards to a given generator of $G_2$).
• There exists a one-way non-trivial morphism from $G_2$ to $G_1$: this is a linear function which outputs values in $G_1$, such that values other than 0 are achievable, the function is easy to compute, but its inverse is computationally infeasible.
Unfortunately, we cannot have all three properties simultaneously. We end up with the following usual choices:
• We use a supersingular curve of embedding degree $2$. $G_1$ is $E[r](\mathbb{F}_q)$ (the $r$-torsion points on the curve in the base field, not the extended field). $G_2$ is the very same group; to compute the pairing, we first map $G_2$ into another subgroup of $E[r](\mathbb{F}_{q^k})$ through a distortion map (if both operands of the Weil or Tate pairing are from the curve over the unextended field, the pairing output is always 1, hence trivial). Easily computable distortion maps are a rarity, but with a supersingular curve, we have some. For that scenario, $G_1$, $G_2$ and $G_n$ all have the same order $r$; it is easy to hash data into elements of $G_2$; an isomorphism between $G_1$ and $G_2$ is easily computed in both directions since they are the same group.
• We use a non-supersingular curve. $G_1$ is $E[r](\mathbb{F}_q)$ and $G_2$ is a subgroup of $E[r](\mathbb{F}_{q^k})$ generated by a conventional $r$-torsion point (not one which is in $G_1$); thus, $G_1$ and $G_2$ both have order $r$. We do not know how to hash points into $G_2$. The Trace of Frobenius of a point $P = (X, Y)$ is defined as:
$$\phi(X,Y) = \sum_{i=0}^{k-1} (X^{q^i}, Y^{q^i})$$
(it is a sum of elliptic curve points, and each of these points is obtained by taking the coordinates of the input point, raised to the power $q^i$). $\phi$ happens to be an isomorphism from $G_2$ onto $G_1$, and it appears to be difficult to invert.
• We use a non-supersingular curve. $G_1$ is $E[r](\mathbb{F}_q)$. $G_2$ is the subset of $E[r](\mathbb{F}_{q^k})$ consisting in points $P$ such that $\phi(P) = 0$ (the set of "points of trace zero"). $G_2$ is a group of order $r$ and we know how to hash points into $G_2$. However, we do not know any easily computable non-trivial morphism from $G_2$ to $G_1$, or from $G_1$ to $G_2$.
• We use a non-supersingular curve. $G_2$ is the complete $E[r](\mathbb{F}_{q^k})$; thus, it has order $r^2$. $G_1$ is any subgroup of $G_2$ (of order $r$), possibly $G_2$ itself (of order $r^2$). It is easy to hash into $G_2$.
One way to think of pairings is that a pairing is a product. If we had a group $G$ with an "addition" and we could find a pairing of pairs of elements of $G$ into elements of $G$ itself, then that pairing would behave just as multiplication behaves with regards to addition (it would, by the way, totally break Diffie-Hellman on the group $G$). So the "natural" situation is really that $G_1$, $G_2$ and $G_n$ all have the same order.
Why $e(g_1^r, g_2) = 1$ ? – curious Sep 27 '12 at 12:08
@curious: because $g_1^r = 1$ (group $G_1$ has order $r$) and bilinearity implies that $e(1,x) = 1$ for all $x$. – Thomas Pornin Sep 27 '12 at 12:32
But if we think pairings as products then $e(1,x)=x$ no? – curious Sep 27 '12 at 12:57
@curious: sorry, that's the usual confusion. In the first part of the post, I use multiplicative notation on the groups; in the second part, I use additive. With the additive notation for groups $G_1$, $G_2$ and $G_n$, that's $e(rg_1, g_2) = 0$ because $rg_1 = 0$ and $e(0,x) = 0$ by bilinearity. – Thomas Pornin Sep 27 '12 at 13:02
I still can't get why $e(1,x)=1$. $e(0,x)=0$ seems reasonable with multiplication on mind.Thanks in any case. I am missing some theory maybe – curious Sep 27 '12 at 13:08
http://en.wikipedia.org/wiki/Huzita's_axioms
# Huzita–Hatori axioms
The Huzita–Hatori axioms or Huzita–Justin axioms are a set of rules related to the mathematical principles of paper folding, describing the operations that can be made when folding a piece of paper. The axioms assume that the operations are completed on a plane (i.e. a perfect piece of paper), and that all folds are linear. These are not a minimal set of axioms but rather the complete set of possible single folds.
The axioms were first discovered by Jacques Justin in 1989.[1] Axioms 1 through 6 were rediscovered by Italian-Japanese mathematician Humiaki Huzita and reported at the First International Conference on Origami in Education and Therapy in 1991. Axioms 1 through 5 were rediscovered by Auckly and Cleveland in 1995. Axiom 7 was rediscovered by Koshiro Hatori in 2001; Robert J. Lang also found axiom 7.
## The seven axioms
The first 6 axioms are known as Huzita's axioms. Axiom 7 was discovered by Koshiro Hatori. Jacques Justin and Robert J. Lang also found axiom 7. The axioms are as follows:
1. Given two points p1 and p2, there is a unique fold that passes through both of them.
2. Given two points p1 and p2, there is a unique fold that places p1 onto p2.
3. Given two lines l1 and l2, there is a fold that places l1 onto l2.
4. Given a point p1 and a line l1, there is a unique fold perpendicular to l1 that passes through point p1.
5. Given two points p1 and p2 and a line l1, there is a fold that places p1 onto l1 and passes through p2.
6. Given two points p1 and p2 and two lines l1 and l2, there is a fold that places p1 onto l1 and p2 onto l2.
7. Given one point p and two lines l1 and l2, there is a fold that places p onto l1 and is perpendicular to l2.
Axiom 5 may have 0, 1, or 2 solutions, while Axiom 6 may have 0, 1, 2, or 3 solutions. In this way, the resulting geometries of origami are stronger than the geometries of compass and straightedge, where the maximum number of solutions an axiom has is 2. Thus compass and straightedge geometry solves second-degree equations, while origami geometry, or origametry, can solve third-degree equations, and solve problems such as angle trisection and doubling of the cube. However, in practice the construction of the fold guaranteed by Axiom 6 requires "sliding" the paper, or neusis, which is not allowed in classical compass and straightedge constructions. Use of neusis together with a compass and straightedge does allow trisection of an arbitrary angle.
## Details
### Axiom 1
Given two points p1 and p2, there is a unique fold that passes through both of them.
In parametric form, the equation for the line that passes through the two points is :
$F(s)=p_1 +s(p_2 - p_1).$
### Axiom 2
Given two points p1 and p2, there is a unique fold that places p1 onto p2.
This is equivalent to finding the perpendicular bisector of the line segment p1p2. This can be done in four steps:
• Use Axiom 1 to find the line through p1 and p2, given by $P(s)=p_1+s(p_2-p_1)$
• Find the midpoint pmid of the segment p1p2
• Find the vector vperp perpendicular to P(s)
• The parametric equation of the fold is then:
$F(s)=p_\mathrm{mid} + s\cdot\mathbf{v}^{\mathrm{perp}}.$
### Axiom 3
Given two lines l1 and l2, there is a fold that places l1 onto l2.
This is equivalent to finding a bisector of the angle between l1 and l2. Let p1 and p2 be any two points on l1, and let q1 and q2 be any two points on l2. Also, let u and v be the unit direction vectors of l1 and l2, respectively; that is:
$\mathbf{u} = (p_2-p_1) / \left|(p_2-p_1)\right|$
$\mathbf{v} = (q_2-q_1) / \left|(q_2-q_1)\right|.$
If the two lines are not parallel, their point of intersection is:
$p_\mathrm{int} = p_1+s_\mathrm{int}\cdot\mathbf{u}$
where
$s_{int} = -\frac{\mathbf{v}^{\perp} \cdot (p_1 - q_1)} {\mathbf{v}^{\perp} \cdot \mathbf{u}}.$
The direction of one of the bisectors is then:
$\mathbf{w} = \frac{ \left|\mathbf{u}\right| \mathbf{v} + \left|\mathbf{v}\right| \mathbf{u}} {\left|\mathbf{u}\right| + \left|\mathbf{v}\right|}.$
And the parametric equation of the fold is:
$F(s) = p_\mathrm{int} + s\cdot\mathbf{w}.$
A second bisector also exists, perpendicular to the first and passing through pint. Folding along this second bisector will also achieve the desired result of placing l1 onto l2. It may not be possible to perform one or the other of these folds, depending on the location of the intersection point.
If the two lines are parallel, they have no point of intersection. The fold must be the line midway between l1 and l2 and parallel to them.
### Axiom 4
Given a point p1 and a line l1, there is a unique fold perpendicular to l1 that passes through point p1.
This is equivalent to finding a perpendicular to l1 that passes through p1. If we find some vector v that is perpendicular to the line l1, then the parametric equation of the fold is:
$F(s) = p_1 + s\cdot\mathbf{v}.$
### Axiom 5
Given two points p1 and p2 and a line l1, there is a fold that places p1 onto l1 and passes through p2.
This axiom is equivalent to finding the intersection of a line with a circle, so it may have 0, 1, or 2 solutions. The line is defined by l1, and the circle has its center at p2, and a radius equal to the distance from p2 to p1. If the line does not intersect the circle, there are no solutions. If the line is tangent to the circle, there is one solution, and if the line intersects the circle in two places, there are two solutions.
If we know two points on the line, (x1, y1) and (x2, y2), then the line can be expressed parametrically as:
$x = x_1 + s(x_2 - x_1)\,$
$y = y_1 + s(y_2 - y_1).\,$
Let the circle be defined by its center at p2=(xc, yc), with radius $r = \left|p_1 - p_2\right|$. Then the circle can be expressed as:
$(x-x_c)^2 + (y-y_c)^2 = r^2.\,$
In order to determine the points of intersection of the line with the circle, we substitute the x and y components of the equations for the line into the equation for the circle, giving:
$(x_1 + s(x_2-x_1) - x_c)^2 + (y_1 + s(y_2 - y_1) - y_c)^2 = r^2.\,$
Or, simplified:
$as^2 + bs + c = 0\,$
where:
$a = (x_2 - x_1)^2 + (y_2 - y_1)^2\,$
$b = 2(x_2 - x_1)(x_1 - x_c) + 2(y_2 - y_1)(y_1 - y_c)\,$
$c = x_c^2 + y_c^2 + x_1^2 + y_1^2 - 2(x_c x_1 + y_c y_1)-r^2.\,$
Then we simply solve the quadratic equation:
$\frac{-b\pm\sqrt{b^2-4ac}}{2a}.$
If the discriminant b2 − 4ac < 0, there are no solutions. The circle does not intersect or touch the line. If the discriminant is equal to 0, then there is a single solution, where the line is tangent to the circle. And if the discriminant is greater than 0, there are two solutions, representing the two points of intersection. Let us call the solutions d1 and d2, if they exist. We have 0, 1, or 2 line segments:
$m_1 = \overline{p_1 d_1} \,$
$m_2 = \overline{p_1 d_2}. \,$
A fold F1(s) perpendicular to m1 through its midpoint will place p1 on the line at location d1. Similarly, a fold F2(s) perpendicular to m2 through its midpoint will place p1 on the line at location d2. The application of Axiom 2 easily accomplishes this. The parametric equations of the folds are thus:
$\begin{align} F_1(s) & = p_1 +\frac{1}{2}(d_1-p_1)+s(d_1-p_1)^\perp \\[8pt] F_2(s) & = p_1 +\frac{1}{2}(d_2-p_1)+s(d_2-p_1)^\perp. \end{align}$
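As a sanity check of the formulas above, the following Python sketch computes the candidate target points d1, d2 for Axiom 5 (the coordinates of p1, p2 and of the reference line are arbitrary example values, not taken from the article):

````
# Sketch: intersect the line through a_pt, b_pt with the circle of radius |p1 - p2|
# centred at p2; each intersection d is a point onto which p1 can be folded.
from math import sqrt, isclose

def axiom5_targets(p1, p2, a_pt, b_pt):
    (x1, y1), (x2, y2) = a_pt, b_pt
    xc, yc = p2
    r2 = (p1[0] - xc) ** 2 + (p1[1] - yc) ** 2
    a = (x2 - x1) ** 2 + (y2 - y1) ** 2
    b = 2 * (x2 - x1) * (x1 - xc) + 2 * (y2 - y1) * (y1 - yc)
    c = xc**2 + yc**2 + x1**2 + y1**2 - 2 * (xc * x1 + yc * y1) - r2
    disc = b * b - 4 * a * c
    if disc < 0:
        return []                                      # no fold exists
    roots = {(-b + sqrt(disc)) / (2 * a), (-b - sqrt(disc)) / (2 * a)}
    return [(x1 + s * (x2 - x1), y1 + s * (y2 - y1)) for s in roots]

# fold p1 = (0, 2) onto the x-axis with a crease passing through p2 = (0, 0)
for d in axiom5_targets(p1=(0.0, 2.0), p2=(0.0, 0.0), a_pt=(-5.0, 0.0), b_pt=(5.0, 0.0)):
    assert isclose(d[0] ** 2 + d[1] ** 2, 4.0)          # |d - p2| = |p1 - p2| = 2
    print(d)
````

The crease itself is then obtained by applying Axiom 2 to p1 and the chosen target point d.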
### Axiom 6
Given two points p1 and p2 and two lines l1 and l2, there is a fold that places p1 onto l1 and p2 onto l2.
This axiom is equivalent to finding a line simultaneously tangent to two parabolas, and can be considered equivalent to solving a third-degree equation as there are in general three solutions. The two parabolas have foci at p1 and p2, respectively, with directrices defined by l1 and l2, respectively.
This fold is called the Beloch fold after Margharita P. Beloch who in 1936 showed using it that origami can be used to solve general cubic equations.[2]
### Axiom 7
Given one point p and two lines l1 and l2, there is a fold that places p onto l1 and is perpendicular to l2.
This axiom was originally discovered by Jacques Justin in 1989 but was overlooked and was rediscovered by Koshiro Hatori in 2002.[3] Robert J. Lang has proven that this list of axioms completes the axioms of origami.
## Constructibility
Subsets of the axioms can be used to construct different sets of numbers. The first three can be used with three given points not on a line to do what Alperin calls Thalian constructions.[4]
The first four axioms with two given points define a system weaker than compass and straightedge constructions: every shape that can be folded with those axioms can be constructed with compass and straightedge, but some things can be constructed by compass and straightedge that cannot be folded with those axioms.[5] The numbers that can be constructed are called the origami or Pythagorean numbers; if the distance between the two given points is 1, then the constructible points are all of the form $(\alpha,\beta)$ where $\alpha$ and $\beta$ are Pythagorean numbers. The Pythagorean numbers are given by the smallest field containing the rational numbers and $\sqrt{1+\alpha^2}$ whenever $\alpha$ is such a number.
Adding the fifth axiom gives the Euclidean numbers, that is the points constructible by straightedge and compass constructions.
Adding the neusis axiom 6, the reverse becomes true: all compass-straightedge constructions, and more, can be made. In particular, the constructible regular polygons with these axioms are those with $2^a3^b\rho\ge3$ sides, where $\rho$ is a product of distinct Pierpont primes. Compass-straightedge constructions allow only those with $2^a\phi\ge3$ sides, where $\phi$ is a product of distinct Fermat primes. (Fermat primes are a subset of Pierpont primes.)
The seventh axiom does not allow construction of further points. The seven axioms give all the single-fold constructions that can be done, rather than being a minimal set of axioms.
## References
1. Justin, Jacques, "Resolution par le pliage de l'equation du troisieme degre et applications geometriques", reprinted in Proceedings of the First International Meeting of Origami Science and Technology, H. Huzita ed. (1989), pp. 251–261.
2. Thomas C. Hull (April 2011). "Solving Cubics With Creases: The Work of Beloch and Lill". American Mathematical Monthly: 307–315. doi:10.4169/amer.math.monthly.118.04.307.
3. Roger C. Alperin; Robert J. Lang (2009). "One-, Two-, and Multi-Fold Origami Axioms". 4OSME (A K Peters).
4. Alperin, Roger C (2000). "A Mathematical Theory of Origami Constructions and Numbers". New York Journal of Mathematics 6: 119–133.
5. D. Auckly and J. Cleveland. "Totally real origami and impossible paperfolding". American Mathematical Monthly (102): pp. 215–226. arXiv:math/0407174.
http://stats.stackexchange.com/questions/28826/tips-and-tricks-for-getting-good-parameter-estimates-using-bayesian-nonlinear-re/28846
# Tips and tricks for getting good parameter estimates using Bayesian nonlinear regression
I've been playing around with fitting nonlinear models using rjags. Specifically 3 and 4 parameter sigmoid curves, e.g.,
````upAsym + (y0 - upAsym)/(1 + (x[r]/midPoint)^slope)
````
I've noticed that good parameter estimates (+- 95% HDIs) are highly dependent upon having tightly constrained priors. Are there any good guidelines (or easy read papers/books) that can help with choosing reasonable uninformative priors to aid parameter estimation?
I'm thinking along the lines of: whether it's better to have uniform priors over a specific range vs using moderate precision normal priors, or, fitting a least squares estimate first to help set boundaries and starting values, or, adjust the burn in and thinning.
A toy example I've been playing with (the slope parameter in this example can sometimes be difficult to get a decent estimate for):
````x <- 0:20
y <- 20 + (2 - 20)/(1 + (x/10)^5) + rnorm(21, sd=2)
dataList = list(y = y, x = x, N = 21)
models = "
model {
for( i in 1 : N ) {
y[i] ~ dnorm( mu[i] , tau )
mu[i] <- upAsym + (y0 - upAsym)/ (1 + pow(x[i]/midPoint, slope))
}
tau ~ dgamma( sG , rG )
sG <- pow(m,2)/pow(d,2)
rG <- m/pow(d,2)
m ~ dgamma(1, 0.01)
d ~ dgamma(1, 0.01)
midPoint ~ dnorm(10,0.0001) T(0,21)
slope ~ dnorm(5, 0.0001) T(0,)
upAsym ~ dnorm(30,0.0001) T(0,40)
y0 ~ dnorm(0, 0.0001) T(-20,20)
}"
writeLines(models,con="model.txt")
````
A large part of your problem is that you have a highly nonlinear function (in the parameter in question) and not much data. You might want to try the experiment of using $n=20, 40, 80, 160, 320$ and plotting the posteriors for a given weakly informative prior, to see how fast they become "reasonable." Plots of the series of generated estimates and the `acf` function will help with determining burn in times and thinning rates. – jbowman May 20 '12 at 16:40
@jbowman Thanks for the sim suggestion, I'll give it a try on the weekend. – Matt Albrecht May 21 '12 at 13:30
## 1 Answer
Here's the notation I'm going to use for the sigmoid model:
$y = U + \frac{L - U}{1 + (\frac{x}{x_0})^k}$
The problem is that the sigmoid model nests functions that are close to linear within a bounded domain, and further, that very different parameter values give rise to almost-lines that are almost the same. Check it out:
````sigmoid <- function(x, L, U, x_0, k) U + (L-U)/(1 + (x/x_0)^k)
x<- runif(n = 40, min = 15, max = 25)
y1 <- sigmoid(x, -10, 50, 20, 5) + rnorm(length(x), sd = 2)
y2 <- sigmoid(x, -24, 76, 21.6, 3) + rnorm(length(x), sd = 2)
curve(sigmoid(x, -10, 50, 20, 5), from = 15, to = 25, ylab = "y")
curve(sigmoid(x, -24, 76, 21.6, 3), add = TRUE, col = "red")
points(x, y1)
points(x, y2, col = "red")
````
The upshot is that the likelihood function changes very, very slowly in some directions over vast swaths of the parameter space. If the priors don't constrain the parameters, then the posterior distribution inherits the likelihood's ill-conditioning.
I haven't used jags, so I don't know how much freedom you have to specify priors. (When things get this complicated I usually roll my own sampling algorithm in R.) The approach I'd use in this situation is to give zero prior support to sigmoid functions that don't have detectable saturation on both ends of the data domain (by "data domain" I mean the closed interval between the minimum and maximum $x$ values). This won't work unless the data really do turn out to have detectable saturation at both ends -- but if the data look linear on either end, one shouldn't be fitting a sigmoid anyway.
First, note that the slope of the function at the midpoint is $\frac{(U-L)k} {4x_0}$. Let the set of $x$ values for which the ratio of the slope of the sigmoid function to the slope at the midpoint is at most $\frac{1}{2}$ be the "saturation regions". There will be two saturation regions, one above the midpoint and one below. Points in these regions contribute most of their information to determining the values of the asymptotes. In fact, estimating an asymptote is basically like estimating a constant, so the standard error of the estimate of an asymptote is approximately $\frac{\sigma}{\sqrt{n}}$, where $n$ is the number of data points in the appropriate saturation region.
Let $n_U$ and $n_L$ be the number of data points within the upper and lower saturation regions, respectively. Note that these numbers are implicitly functions of all the parameters of the sigmoid function. To exclude regions of flat likelihood from the prior support, I would choose a prior which assigns zero density unless the following conditions are satisfied:
• $x_0$ is within the data domain
• $n_U > 0$
• $n_L > 0$
• conditionally on $\sigma$, $U - \frac{2\sigma}{\sqrt{n_U}} > L + \frac{2\sigma}{\sqrt{n_L}}$
I'm not sure what prior is reasonable to assign within this region of prior support, but if it's just flat, at least it can't be worse than frequentist inference based on asymptotics of the likelihood function.
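For concreteness, here is a rough sketch of that support check (the thread itself uses R/JAGS; this Python version, with a numerical derivative and made-up design points, is only my illustration and not code from the answer):

````
# Sketch: count data points in the lower/upper "saturation regions" and test the
# prior-support conditions for a given parameter vector.
import numpy as np

def sigmoid(x, L, U, x0, k):
    return U + (L - U) / (1 + (x / x0) ** k)

def slope(x, L, U, x0, k, h=1e-6):
    return (sigmoid(x + h, L, U, x0, k) - sigmoid(x - h, L, U, x0, k)) / (2 * h)

def in_support(x, L, U, x0, k, sigma):
    if not (x.min() <= x0 <= x.max()):
        return False                               # midpoint outside the data domain
    ratio = slope(x, L, U, x0, k) / slope(x0, L, U, x0, k)
    saturated = np.abs(ratio) <= 0.5
    n_L = np.sum(saturated & (x < x0))             # lower saturation region
    n_U = np.sum(saturated & (x > x0))             # upper saturation region
    if n_L == 0 or n_U == 0:
        return False
    return (U - 2 * sigma / np.sqrt(n_U)) > (L + 2 * sigma / np.sqrt(n_L))

x = np.linspace(0.1, 20, 21)                       # hypothetical design points
print(in_support(x, L=2, U=20, x0=10, k=5, sigma=2))
````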
Thanks this is very helpful. Could you clarify this a little more for me? "...is to give zero prior support to sigmoid functions that don't have detectable saturation on both ends of the data domain". How would you give zero support to non-asymptotic data? – Matt Albrecht May 21 '12 at 13:36
http://physics.stackexchange.com/questions/20800/acceleration-value-disparity
Acceleration: Value Disparity?
If we consider a ball moving at an acceleration of $5ms^{-2}$, over a time of 4 seconds, the distance covered by the ball in the first second is $5m$. In the 2nd second it will cover $5 + 5 = 10m$. In the third second it will cover a distance of $5 + 5 + 5 = 15m$, and so on and so forth. Now, when we substitute this into the equations of motion derived from the area under velocity-time and distance-time graphs, we see a variation:
$$s = \tfrac{1}{2}at^2 = \tfrac{1}{2}\cdot 5\cdot 4^2 = \tfrac{1}{2}\cdot 5\cdot 16 = 40$$ metres is the distance covered. Now if we go back to our initial description of acceleration we see that in the 1st sec = 5 m, 2nd sec = 10 m, 3rd sec = 15 m, 4th sec = 20 m. Total distance covered in this case is 5 + 10 + 15 + 20 = 50 metres?
40 != 50? Why this disparity between the values? Can someone please explain?!
Good question :-) – David Zaslavsky♦ Feb 10 '12 at 14:56
2 Answers
You say:
If we consider a ball moving at an acceleration of 5m/s^2, over a time of 4 seconds, the distance covered by the ball in the first second is 5m. etc
But that's not true. Why do you think it would travel 5m? You already know the correct equation:
$$s = ut + \frac{1}{2}at^2$$
and if you use this to calculate the distance travelled in 1 second it comes out at 2.5m.
Look at this another way:
If the acceleration is $5ms^{-2}$ then at the end of 1 second the ball is travelling at $5ms^{-1}$, and that means for most of that first second the ball must have been travelling at less than $5ms^{-1}$. So it can't have travelled 5m. To travel 5m in the first second the average speed over the first second must be $5ms^{-1}$, not the final speed at the end of the first second.
Thanks for your answer. – Ram Sidharth Feb 10 '12 at 14:35
@RamSidharth If you choose this time-interval smaller (0.5s, 0.1s), you will get closer to the actual value. You are actually performing a Riemannsummation here. – Bernhard Feb 10 '12 at 15:07
Thanks, that is truly enlightening. However, I shall have to read up on Riemann Summation. – Ram Sidharth Feb 10 '12 at 15:13
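To make the Riemann-summation remark concrete (a sketch of my own, using the numbers from the question):

````
# Sketch: treating the speed as constant over each interval, at its end-of-interval
# value, overestimates the distance; the error vanishes as the interval shrinks.
a, T = 5.0, 4.0                       # acceleration (m/s^2) and total time (s)

def distance_stepwise(dt):
    steps = int(round(T / dt))
    return sum(a * (i + 1) * dt * dt for i in range(steps))

for dt in (1.0, 0.5, 0.1, 0.01):
    print(dt, distance_stepwise(dt))  # 50.0, 45.0, 41.0, 40.1 -> tends to 1/2*a*T^2 = 40 m
````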
It seems to me you have confused velocity and acceleration.
If we consider a ball moving at an acceleration of 5m/s^2
This doesn't really make sense: acceleration is the rate of change of velocity, while movement itself is described physically by 'velocity' in units of $ms^{-1}$. I think you have this mostly sussed out but just have your setup wrong. To correctly answer this type of question you typically need to consider either an initial or final velocity (or in some variants a distance).
-
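To make the Riemann-sum point from the comments concrete, here is a small numerical sketch (the step sizes are arbitrary): summing velocity × time-step with the end-of-interval velocity reproduces the question's 50 m when the step is a full second, and approaches $\frac{1}{2}at^2 = 40\,m$ as the step shrinks.

```python
# Distance from rest at a = 5 m/s^2 over T = 4 s, estimated by summing v * dt,
# where v is taken at the END of each interval (the reasoning in the question).
a, T = 5.0, 4.0
for dt in (1.0, 0.5, 0.1, 0.001):
    steps = int(round(T / dt))
    s = sum(a * (k + 1) * dt * dt for k in range(steps))
    print(f"dt = {dt:>6} s  ->  s = {s:.2f} m")
print("exact: 0.5 * a * T**2 =", 0.5 * a * T**2, "m")
```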
|
http://nrich.maths.org/367
|
### Bang's Theorem
If all the faces of a tetrahedron have the same perimeter then show that they are all congruent.
### Rudolff's Problem
A group of 20 people pay a total of £20 to see an exhibition. The admission price is £3 for men, £2 for women and 50p for children. How many men, women and children are there in the group?
### Medallions
I keep three circular medallions in a rectangular box in which they just fit with each one touching the other two. The smallest one has radius 4 cm and touches one side of the box, the middle sized one has radius 9 cm and touches two sides of the box and the largest one touches three sides of the box. What is the radius of the largest one?
# Our Ages
##### Stage: 4 Challenge Level:
Many thanks to Robert Simons for this question:
"I am exactly $n$ times my daughter's age. In $m$ years I shall be exactly $(n-1)$ times her age. In $m^2$ years I shall be exactly $(n-2)$ times her age. After that I shall never again be an exact multiple of her age. Ages, $n$ and $m$ are all whole numbers. How old am I?
Now suppose there is some wishful thinking in the above assertion and I have to admit to being older, and indeed that I will be an exact multiple of her age in $m^3$ years. How old does this make me?"
|
http://nrich.maths.org/6278/clue
|
### A Problem of Time
Consider a watch face which has identical hands and identical marks for the hours. It is opposite to a mirror. When is the time as read direct and in the mirror exactly the same between 6 and 7?
### Eight Dominoes
Using the 8 dominoes make a square where each of the columns and rows adds up to 8
### Holly
The ten arcs forming the edges of the "holly leaf" are all arcs of circles of radius 1 cm. Find the length of the perimeter of the holly leaf and the area of its surface.
# Trominoes
##### Stage: 3 and 4 Challenge Level:
Think about colouring in the $8 \times 8$ chessboard with three colours.
How many squares of each colour are there?
|
http://math.stackexchange.com/questions/160384/summation-of-a-sequence
|
# Summation of a sequence
Let $f(a)$ be the sequence defined by $$f(a)=\left[\frac{a^2+8a+10}{a+9}\right]$$ where $[x]$ is the largest integer that does not exceed $x$.
Find the value of $$\sum_{x=1}^{30}f(x).$$
-
1
Welcome to math.SE. Since you are new, I want to let you know a few things about the site. In order to get the best possible answers, it is helpful if you say in what context you encountered the problem, and what your thoughts on it are; this will prevent people from telling you things you already know, and help them give their answers at the right level. If this is homework, please add the [homework] tag; people will still help, so don't worry. Also, many users find the use of the imperative ("Find", "Show", etc) to be rude when asking for help. Please consider rewriting your post. – Arturo Magidin Jun 19 '12 at 16:58
2
A rather straightforward method would be to compute it with the aid of, say, a computer. – akkkk Jun 19 '12 at 17:01
## 1 Answer
Since
$$\frac{a^2+8a+10}{a+9}=a-1+\frac{11}{a+9}\;,$$
we have
$$\left\lfloor\frac{a^2+8a+10}{a+9}\right\rfloor=a-1+\left\lfloor\frac{11}{a+9}\right\rfloor$$ whenever $a$ is an integer. Thus,
$$\begin{align*} \sum_{a=1}^{30}\left\lfloor\frac{a^2+8a+10}{a+9}\right\rfloor&=\sum_{a=1}^{30}\left(a-1+\left\lfloor\frac{11}{a+9}\right\rfloor\right)\\\\ &=\sum_{a=1}^{29}a+\sum_{a=1}^{30}\left\lfloor\frac{11}{a+9}\right\rfloor\\\\ &=\frac12(29)(30)+2\\\\ &=437\;, \end{align*}$$
since $\dfrac{11}{a+9}<1$ for $a>2$.
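A quick brute-force check of this value in Python (integer floor division does the job here, since everything is positive):

```python
total = sum((a * a + 8 * a + 10) // (a + 9) for a in range(1, 31))
print(total)  # 437
```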
-
+1 Simply beautiful.. and simple – DonAntonio Jun 20 '12 at 2:12
|
http://mathhelpforum.com/pre-calculus/7705-what-domain-function-i-am-so-confused.html
|
# Thread:
1. ## What is the domain of this function? (I am so confused)
Hello.
I would like to ask what is the domain of the following function.
y=(5x^x)^(x+1)
Our teachers tell us that for a power function (A^X, where A is constant and X is a variable) the base must be positive and must differ from 1, so in the above example it is logical that the solution of the following system of inequalities defines the domain:
5x>0 and
5x<>1 and
5x^x>0 and
5x^x<>1
Am I right? And I must note that in our textbook the domain of the power function A^X (sorry if I misuse mathematical argot) only requires that the base A be positive; it does not say that it must differ from 1. Ohhhh! Please clarify this very obscure situation!
2. Originally Posted by Val21
Hello.
I would like to ask what is the domain of the following function.
y=(5x^x)^(x+1)
Our teachers tell us that for a power function (A^X, where A is constant and X is a variable) the base must be positive and must differ from 1, so in the above example it is logical that the solution of the following system of inequalities defines the domain:
5x>0 and
5x<>1 and
5x^x>0 and
5x^x<>1
Am I right? And I must note that in our textbook the domain of the power function A^X (sorry if I misuse mathematical argot) only requires that the base A be positive; it does not say that it must differ from 1. Ohhhh! Please clarify this very obscure situation!
$y=\left ( 5x^x \right )^{x+1} = 5^{x+1} \cdot x^{x(x+1)}$
Looking at these terms individually, we see that we have no restrictions on the function due to the first factor. The second factor gives us that $x \geq 0$, since the base must be non-negative, but note that if x = 0 the second factor is $0^0$, which is undefined. So the domain is x > 0.
-Dan
3. ## Must a base differ from 1?
And what if we take X to be 1, so we would get the number 1 in the base, which is prohibited? So probably the domain must be x>0 and x<>1?
4. Originally Posted by Val21
And what if we take X to be 1, so we would get the number 1 in the base, which is prohibited? So probably the domain must be x>0 and x<>1?
$x$ can also be any integer, so it looks to me as though the domain is:
$\left[ \mathbb{R}_+ \cup \mathbb{Z} \right] \backslash \{0\}$
The union of all positive reals, and non-zero integers.
On second thoughts it looks as though this is also defined when $x$ is a rational of
the form $-\frac{2n}{2m-1},\ n,\ m \in \mathbb{Z}_+$ (negative rationals which in lowest form have an even numerator)
So my latest bet is:
$\left[ \mathbb{R}_+ \cup \left( \mathbb{Z}\backslash \{0\} \right) \cup A \right]$
where $A=\{a: a=-2n/(2m-1),\ n,\ m \in \mathbb{Z}_+ \}$.
But somehow I don't think your teacher is looking for an answer of this nature.
RonL
5. Originally Posted by Val21
And what if we take X to be 1, so we would get the number 1 in the base, which is prohibited? So probably the domain must be x>0 and x<>1?
I don't see why $1$ should be prohibited in the base, $0$ yes as $0^0$ is undefined, but not $1$.
When $x=1$, we have:
$y=(5x^x)^{x+1}=(5 \times 1^1)^2=5^2=25$
RonL
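A quick numerical check of the values discussed above, at $x = 1$ and at a negative integer (an illustrative snippet; the $0.8$ for $x = -2$ is $(5/4)^{-1}$):

```python
for x in (1, -2):
    y = (5 * x**x) ** (x + 1)
    print(x, "->", y)   # 1 -> 25,  -2 -> 0.8
```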
|
http://physics.stackexchange.com/questions/27565/status-of-local-gauge-invariance-in-axiomatic-quantum-field-theory/27567
|
# Status of local gauge invariance in axiomatic quantum field theory
In his recent review...
• Sergio Doplicher, The principle of locality: Effectiveness, fate, and challenges, J. Math. Phys. 51, 015218 (2010), doi
...Sergio Doplicher mentions an important open problem in the program of axiomatic quantum field theory, quotes:
In the physical Minkowski space, the study of possible extensions of superselection theory and statistics to theories with massless particles such as QED is still a fundamental open problem.
...
More generally the algebraic meaning of quantum gauge theories in terms local observables is missing. This is disappointing since the success of the standard model showed the key role of the gauge principle in the description of the physical world; and because the validity of the principle of locality itself might be thought to have a dynamical origin in the local nature of the fundamental interactions, which is dictated by the gauge principle combined with the principle of minimal coupling.
While it is usually hard enough to understand the definite results of a research program, it is even harder if not impossible, as an outsider, to understand the so far unsuccessful attempts to solve important open problems.
So my question is:
• Is it possible to describe the walls that attempts to incorporate local gauge invariance into AQFT have hit?
• What about the possibility that this is the wrong question and there is and should not be a place of local gauge invariance in AQFT?
Edit: For this question let's assume that a central goal of the research program of AQFT is to be able to describe the same phenomena as the standard model of particle physics.
-
2
Just a comment for this moment: in a preprint by Ciolli and Ruzzi that appeared on the arxiv today (1109.4824) they make a remark that progress towards the description of massless particles in AQFT is underway. They refer to a recent talk by Doplicher, which unfortunately I have missed this talk so I cannot comment on this. – Pieter Sep 23 '11 at 8:09
Just to throw it in there: there are known examples of dualities, two equivalent formulations of the same theory, one involving gauge invariance and one not. In other words, gauge invariance is a property of a theory together with a specific classical limit thereof. For any inherently non-perturbative approach gauge invariance is probably not a good guide to follow then. – user566 Sep 23 '11 at 14:41
## 3 Answers
I think the open question here should be formulated -- and usually is formulated -- not as whether Yang-Mills theory "has a place" in AQFT, but whether one can abstractly characterize those local nets that arise from quantization of Yang-Mills-type Lagrangians. In other words: since AQFT provides an axiomatics for QFT independently of a quantization process starting from an action functional: can one detect from the end result of quantization that it started out from a Yang-Mills-type Lagrangian?
On the other hand, we certainly expect that the quantization of any Yang-Mills-type action functional yields something that satisfies the axioms of AQFT. While for a long time there was no good suggestion for how to demonstrate this, Fredenhagen et al. have more recently been discussing how all the standard techniques of perturbative QFT serve to provide (perturbative) constructions of local nets of observables. References are collected here, see in particular the last one there on the perturbative construction of local nets of observables for QED.
In this respect, concerning Kelly's comment, one should remember that also the construction of gauge theory examples in axiomatic TQFT is not completely solved. One expects that the Reshetikhin-Turaev construction for the modular category of $\Omega G$-representations gives the quantization of G-Chern-Simons theory, but I am not aware that this has been fully proven. And for Chern-Simons theory as an extended TQFT there has only recently been a partial proposal for the abelian case FHLT. Finally, notice also that non-finite degrees of freedom can be incorporated here, if one passes to non-compact cobordisms (see the end of Lurie's), which in 2-dimensions are "TCFTs" that contain all the 2d TQFT models that physicists care about, such as the A-model and the B-model.
Concerning Moshe's comment: the known dualities between gauge and non-gauge theories usually involve a shift in dimension. This would still seem to allow for the question whether a net in a fixed dimension is that of a Yang-Mills-type theory.
But even if it turns out that Yang-Mills-type QFTs do not have an intrinsic characterization, their important invariant properties should have one. For instance it should be possible to tell from a local net of observables whether the theory is asymptotically free. I guess?
-
Urs, see Seiberg's duality in 4 dimensions in which both sides of the duality involve some gauge invariance, but a different one. The gauge invariance of one side is invisible on the other side - simply because all fields are already singlets. Besides, any gauge non-invariant statement is by definition unphysical, so you cannot even formulate what it means for a theory to be a "gauge theory" using only physical statements involving only observables. This suggests that gauge invariance is merely a useful tool tied directly to perturbation theory. – user566 Sep 24 '11 at 1:36
Whether one can "even formulate" what it means for a theory to be a quantum Yang-Mills-type theory or a quantum Chern-Simons theory and so on is the (open) question here. It is not true that nothing about the gauge group is invariantly encoded. The invariant assigned by CS theory of course depend on this. So the fact that physical states are gauge invariant is not an argument that quantum YM does not have an intrinsic characterization. – Urs Schreiber Sep 24 '11 at 1:55
Since Seiberg- and Montonen-Olive and other S-dualities relate two gauge theories with each other, that does not provide an argument that quantum gauge theories don't have an intrinsic characterization. For that argument you need that one side of the duality is not a gauge theory. And of the same dimension. – Urs Schreiber Sep 24 '11 at 1:58
Look at the Seiberg duality examples which are simultaneously described as su(n1) theory and su(n2) theory. Any observable quantity has these two simultaneous descriptions. I think this means that you cannot resolve the difference between the two descriptions using only physically measurable input, maybe this is just lack of imagination on my part but I don't see how to get around that argument. – user566 Sep 24 '11 at 2:03
But the question is: can we invariantly tell from a QFT if it is a gauge theory at all? Both the su(n1) and the su(n2) theory are, so this example would still be consistent with the answer "yes". (The answer may still be "no", but not for this reason, as far as I can see.) – Urs Schreiber Sep 24 '11 at 2:08
This is more of an extended comment, rather than an answer completely separate from what Urs and Moshe have already said. The axioms of AQFT are designed to capture a mathematical model of the physical observables of a theory, while OTOH gauge invariance is a feature of a formulation of a theory, though perhaps an especially convenient one. Yours and related questions are somewhat muddied by the fact that one physical theory may have several equivalent, but distinct formulations, which may also have different gauge symmetries. One example of this phenomenon is gravity, consider the metric and frame-field formulations, and another one according to Moshe is Seiberg duality. Another confounding factor is that some physical theories are only known in a formulation involving gauge symmetries (automatically rendering such formulations "especially convenient"), which naturally leads to your second question. However, one must remember that by design the gauge formulation should be visible in the AQFT framework only if it is detectable through physical observables.
Now, to be honest, I really have no idea about what is the state of the art in AQFT of figuring out when a given net of local algebras of observables admits an "especially convenient" formulation involving gauge symmetry. But I believe answering this kind of question will remain difficult until the notion of "especially convenient" is made mathematically precise. I don't know how much progress has been made on that front either. But I think a prototype of this kind of question can be analyzed, though somewhat sketchily, in the simplified case of classical electrodynamics.
Suppose we are given a local net of Poisson algebras of physical observables (the quantum counterpart would have *-algebras, but other than that, the geometry of the theory is very similar). The first step is to somehow recognize that this net of algebras is generated by polynomials in smeared fields, $\int f(F,x) g(x)$, where $g(x)$ is a test volume form, and $f(F,x)$ is some function of $F$ and its derivatives at $x$, with $F(x)$ a 2-form satisfying Maxwell's equations $dF=0$ and $d({*}F)=0$. Since we were handed the net of algebras with a given Poisson structure, as a second step we can compute the Poisson bracket $\{F(x),F(y)\}=(\cdots)$. The answer for electrodynamics would be the well known Pauli-Jordan / Lichnerowicz / causal propagator, which I will not reproduce here. Very roughly speaking, the components of $F(x)$ and the expression for the Pauli-Jordan propagator give a set of local "coordinates" on the phase space of the theory and an expression for the Poisson tensor on it. In the third step we can compute the inverse of the Poisson tensor, which if it exists would be a symplectic form. The answer for electrodynamics is well known and what's important is that the symplectic form is not given by some local expression like $\Omega(\delta F_1,\delta F_2) = \int \omega(\delta F_1, \delta F_2, x)$, where $\omega$ is a form depending only on the values of $\delta F_{1,2}(x)$ and their derivatives at $x$. Step four would consist of asking the question whether there is another choice of local "coordinates" on the phase space in which the symplectic form is local. The answer is again well known: extend the phase space by introducing the 1-form field $A(x)$ such that $F=dA$. The pullback of the symplectic form to the extended phase space now has a local expression $\Omega(\delta A_1,\delta A_2) = \int_\Sigma [{*}d(\delta A_1)(x)\wedge (\delta A_2)(x) - (1\leftrightarrow 2)]$, up to some constant factors, with $\Sigma$ some Cauchy surface. Note that $\Omega$ is no longer symplectic on the extended phase space, but only presymplectic, while its projection back to the physical phase space is. As a last step, one might try to solve the inverse problem of the calculus of variations and come up with a local action principle reproducing the the equations of motion for $A$ and the presymplectic structure $\Omega$.
Let me summarize. (1) Obtain fundamental local fields and their equations of motion. (2) Express the Poisson tensor and symplectic form in terms of local fields. (3) Introduce new fields to make the expression for the (pre)symplectic form local. (4) Obtain local action principle in the new fields. Note that gauge symmetry and all the issues associated with it appear precisely in step (3). In my limited understanding of it, the literature on AQFT has spent a significant amount of time on step (1), but perhaps not enough time on steps (2) and (3) even to formulate these problems precisely.
Finally, I should emphasize that the idea that redundant gauge degrees of freedom are introduced principally to give local structure to the (pre)symplectic structure on phase space is somewhat speculative. But it seems to fit in the field theories I am familiar with and I've not been able to identify a different yet equally competitive one.
-
As a comment, could one perhaps get a better view of the difficulty of "step 3" by jumping straight to a non-Abelian theory? In particular, I would have argued that the "natural" variables for the non-Abelian theories are the holonomies (intrinsically non-local), but we know that even classically one needs to smear "strongly" to get a theory. One could view this as a classical symptom of some fundamental singularity in the quantum theory (because after all the Poisson structure is the limit of the corresponding quantum geometry). Further, the electrodynamics case is then the limiting case. – genneth Oct 7 '11 at 15:23
What about the possibility that this is the wrong question and there is and should not be a place of (sic) local gauge invariance in AQFT?
I'd guess this hinges upon how one views AQFT. One can view AQFT in one of two ways:
• AQFT as a theory needs to correspond to nature.
• AQFT as a theory need not correspond to nature.
If AQFT needs to correspond to nature, then it should incorporate local gauge invariance, as nature incorporates local gauge invariance. (Note, "incorporate" here could mean include a mechanism which at "low energies" looks like local gauge invariance.)
If AQFT need not correspond to nature, then it need not incorporate local gauge invariance.
With that in mind I would also add that axiomatic TQFT includes local gauge symmetries without problems. In fact, in axiomatic TQFT local gauge symmetries are so "strong" that they remove all local degrees of freedom.
-
Ok, maybe I should explain my own viewpoint of what AQFT is: That it should be able to describe the same phenomena as the standard model. – Tim van Beek Sep 23 '11 at 12:50
@TimvanBeek I guess your question then is almost equivalent to: "Do we need local gauge symmetry?" I guess the obvious answer is: "No." We have some theory with a local gauge invariance, we fix the local gauge invariance, and work in that particular gauge. Ugly, unilluminating, but it would work. I guess my real point is that your second question is rather "ill-defined" and maybe needs to be sharpened. – Kelly Davis Sep 23 '11 at 13:00
Well, maybe, but how? But please note that the question is not "do we need local gauge symmetry" but "do we need local gauge symmetry in the framework of AQFT", with the latter being somewhat more precise than the former. – Tim van Beek Sep 23 '11 at 13:09
|
http://mathhelpforum.com/differential-equations/108648-strange-step-used-solutions.html
|
# Thread:
1. ## strange step used in solutions
i wont write the whole question, but the term
$\frac{dy}{dx} \times \frac{1}{y}$
in the next line they change to
$\frac{d (ln(y))}{dx}$
i just dont see why you would be able to do that. none of the other terms in the equation (i havent written out) are changed. only thing i can think of is some sort of implicit differentiation idea or some sort. im not too sure
2. Originally Posted by walleye
i wont write the whole question, but the term
$\frac{dy}{dx} \times \frac{1}{y}$
in the next line they change to
$\frac{d (ln(y))}{dx}$
i just dont see why you would be able to do that. none of the other terms in the equation (i havent written out) are changed. only thing i can think of is some sort of implicit differentiation idea or some sort. im not too sure
Without the whole question there is no context in which to judge why this is done. However, if your question is that you don't understand the equality of the two expressions, then you should note that from the chain rule it follows that
$\frac{d (\ln y)}{dx} = \frac{d (\ln y)}{dy} \cdot \frac{dy}{dx}$ ....
3. Originally Posted by walleye
i wont write the whole question, but the term
$\frac{dy}{dx} \times \frac{1}{y}$
in the next line they change to
$\frac{d (ln(y))}{dx}$
i just dont see why you would be able to do that. none of the other terms in the equation (i havent written out) are changed. only thing i can think of is some sort of implicit differentiation idea or some sort. im not too sure
It is the chain rule of differentiation.
If I asked you to differentiate $\ln(y)$ with respect to $x$, you'd have to do it like so:
$\frac{d}{dx} \ln(y) = \frac{dy}{dx} \times \frac{d}{dy} \ln(y) = \frac{dy}{dx} \frac{1}{y}$
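A quick symbolic check of the same identity, assuming sympy is available (the unspecified function $y(x)$ is illustrative):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)            # an unspecified function y(x)

lhs = sp.diff(sp.log(y), x)        # d/dx ln(y)
rhs = sp.diff(y, x) / y            # dy/dx * 1/y
print(sp.simplify(lhs - rhs))      # 0
```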
4. ahhh i see, thanks to both of you
Originally Posted by mr fantastic
Without the whole question there is no context in which to judge why this is done.
yeah its obvious (to me) why they do it, that step just confused me is all
thanks
|
http://math.stackexchange.com/questions/tagged/graph-theory?sort=votes&pagesize=15
|
# Tagged Questions
Use this tag for questions in graph theory. Here a graph is a collections of vertices and connecting edges. Use (graphing-functions) instead if your question is about graphing or plotting functions.
2answers
499 views
### Counting trails in a triangular grid
A triangular grid has $N$ vertices, labeled from 1 to $N$. Two vertices $i$ and $j$ are adjacent if and only if $|i-j|=1$ or $|i-j|=2$. See the figure below for the case $N = 7$. How many trails ...
0answers
313 views
### Number of simple edge-disjoint paths needed to cover a planar graph
Let $G=(V,E)$ be a graph with $|E|=m$ of a graph class $\mathcal{G}$. A path-cover $\mathcal{P}=\{P_1,\ldots,P_k\}$ is a partition of $E$ into edge-disjoint simple paths. The size of the cover is ...
7answers
8k views
### Online tool for making graphs (vertices and edges)?
Anyone know of an online tool available for making graphs (as in graph theory - consisting of edges and vertices)? I have about 36 vertices and even more edges that I wish to draw. (why do I have so ...
3answers
810 views
### An example of a real-world map that is not 4-colourable?
The four-colour mapping theorem states that all maps can be four-coloured (adjacent regions receive distinct colours, and four different colours are used in total). However, the technical definition ...
5answers
379 views
### Motivation for spectral graph theory.
Why do we care about eigenvalues of graphs? There must be some reason. There is an entire mathematical discipline about them. I always assumed that spectral graph theory is an extension of graph ...
2answers
369 views
### Cover time chess board (king)
Consider a random walk of a king on a standard chess board, which at each step moves to a uniformly random permitted square. What's the exact mean time to visit all squares (cover time), starting ...
2answers
428 views
### Groups and generating sets
This question feels completely trivial and I am somewhat embarrassed to be asking it, but I am having a brain dead moment and failing to prove what I'm sure is a completely trivial statement about ...
2answers
397 views
### Connecting a $n, n$ point grid
I stumbled across the problem of connecting the points on a $n, n$ grid with a minimal amount of straight lines without lifting the pen. For $n=1, n=2$ it is trivial. For $n=3$ you can find the ...
3answers
267 views
### Why should “graph theory be part of the education of every student of mathematics”?
Until recently, I thought that graph theory is a topic which is well-suited for math olympiads, but which is a very small field of current mathematical research with not so many connections to ...
1answer
1k views
### The $n$ Immortals problem.
I saw this riddle posted on reddit a long time ago, called the "Seven Immortals." In the beginning, the world is inhabited by seven immortals, ageless and sexless, who begin to multiply and ...
3answers
814 views
### Self-avoiding walk on $\mathbb{Z}$
How many sequences $a_1,a_2,a_3,\dotsc$, satisfy: i) $a_1=0$ ii) ($a_{n+1}=a_n-n$ or $a_{n+1}=a_n+n$) iii) $a_i\neq a_j$ for $i\neq j$ iiii) $\mathbb{Z}=\{a_i\}_{i>0}$ Are the two alternating ...
4answers
2k views
### Logic question: Ant walking a cube
There is a cube and an ant is performing a random walk on the edges where it can select any of the 3 adjoining vertices with equal probability. What is the expected number of steps it needs till it ...
1answer
385 views
### How to create mazes on the hyperbolic plane?
I'm interested in building maze-like structures on the [5, 4] tiling of the hyperbolic plane, where by maze-like I mean something akin to a spanning tree of the underlying lattice: a subgraph of the ...
2answers
475 views
### Does Birkhoff - von Neumann imply any of the fundamental theorems in combinatorics?
I recently had the occasion to think about Hall's Marriage Theorem for the first time since my undergraduate combinatorics class more than a decade ago. Reading the wikipedia article linked above, I ...
1answer
1k views
### What do the eigenvectors of an adjacency matrix tell us?
The principal eigenvector of the adjacency matrix of a graph gives us some notion of vertex centrality. What do the second, third, etc. eigenvectors tell us? Motivation: A standard information ...
2answers
1k views
### Human checkable proof of the Four Color Theorem?
Four Color Theorem is equivalent to the statement: "Every cubic planar bridgeless graphs is 3-edge colorable". There is computer assisted proof given by Appel and Haken. Dick Lipton in of his ...
2answers
715 views
### Not lifting your pen on the $n\times n$ grid
The question I am asking has basically already been asked. Please see this MSE thread. There are a few questions brought up on that thread, and a smaller number were answered. The reason I am ...
2answers
478 views
### If $G$ is biconnected and $\delta(G) \geq 3 \Rightarrow \exists v: G-v$ is also biconnected.
If $G$ is biconnected and $\delta(G) \geq 3 \Rightarrow \exists v: G-v$ is also biconnected. Where $\delta (G) -$ minimum degree of all vertices, $G-v$ is equal to if we remove this vertex from $G$ ...
3answers
3k views
### Average Scrabble graph structure: diameter?
Tonight a game of Scrabble ended in what I consider a very unusual graph structure, unlike this generic web image, which seems more typical: ...
2answers
633 views
### Do your friends on average have more friends than you do?
I was watching this TED talk, which suggested that on average your friends tend to individually have more friends than you do. To define this more formally, we are comparing the average number of ...
1answer
301 views
### What are all conditions on a finite sequence $x_1,x_2,…,x_m$ such that it is the sequence of orders of elements of a group?
My Definition: The finite sequence $x_1,x_2,...,x_m$ of nonnegative integers, is said to be generated by the finite group $G$ iff $n:=|G|=x_1+x_2+...+x_m$. $n$ has $m$ divisors. if ...
2answers
509 views
### What is the probability that every pair of students studies together at some point?
A cohort in a school consists of 75 students who study for 6 years. Each year, the students are randomly distributed into 3 classrooms of 25 students each. What is the probability that, after 6 ...
2answers
163 views
### For a graph $G$, why should one expect the ratio $\text{ex} (n;G)/ \binom n2$ to converge?
$\text{ex} (n;G)$ is the maximal number of edges of a graph of order $n$ can have without containing $G$ as a subgraph. There are theorems saying what the limit actually is. But my lecture notes ...
4answers
708 views
### Are these 2 graphs isomorphic?
They meet the requirements of both having an = number of vertices (7) They both have the same number of edges (9) They both have 3 vertices of deg(2) and 4 of deg(3) However, graph two has 2 ...
1answer
551 views
### Did the Appel/Haken graph colouring (four colour map) proof really not contribute to understanding?
I hope this isn't off topic - sorry if I'm wrong. In 1976, Kenneth Appel and Wolfgang Haken proved the claim (conjecture) that a map can always be coloured with four colours, with no adjacent regions ...
7answers
1k views
### What are good books to learn graph theory?
What are some of the best books on graph theory, particularly directed towards an upper division undergraduate student who has taken most the standard undergraduate courses? I'm learning graph theory ...
2answers
687 views
### Automorphisms of the Petersen graph
I am trying to find out the automorphism group of the Petersen graph. My book carries the hint: "Show that the $\tbinom{5}{2}$ pairs from {1, . . . , 5} can be used to label the vertices in such a way ...
3answers
256 views
### Exceptional books on real world applications of graph theory.
What are some exceptional graph theory books geared explicitly towards real-world applications? I would be interested in both general books on the subject (essentially surveys of applied graph ...
1answer
202 views
### Help with a Bollobás proof - Switching between random graph models
I'm trying to make my way through Bollobás' book 'Models of Random Graphs', and unfortunately I've come entirely unstuck on one of his typical 2-line "and of course, this is entirely trivial"-style ...
3answers
405 views
### Counterexamples to proofs of correct statements
This question is in part inspired by a quote I saw in an answer to another question: The problem with incorrect proofs to correct statements is that it is hard to come up with a counterexample. ...
2answers
300 views
### Is there a reason why the number of non-isomorphic graphs with $v=4$ is odd?
I am working through Trudeau's Introduction to Graph Theory, which contains the following problem: In the following table, the numbers in the second column are mostly even. If we ignore the first ...
1answer
265 views
### Is it true that a connected graph has a spanning tree, if the graph has uncountably many vertices?
I found a proof that every connected graph (possibly infinite) has a spanning tree in Diestel's Graph Theory (Fourth Edition), Ch. 8 that uses Zorn's Lemma, but at a crucial step it seems to be ...
3answers
462 views
### Two seemingly unrelated puzzles have very similar solutions; what's the connection?
I think it's an interesting coincidence that the locker puzzle and this puzzle about duplicate array entries (see problem 6b) have such similar solutions. Spoiler alert! Don't read on if you want to ...
1answer
239 views
### In how many ways we can place $N$ mutually non-attacking knights on an $M \times M$ chessboard?
Given $N,M$ with $1 \le M \le 6$ and $1\le N \le 36$. In how many ways we can place $N$ knights (mutually non-attacking) on an $M \times M$ chessboard? For example: $M = 2, N = 2$, ans $= 6$; $M = 3$, ...
2answers
140 views
### Integer sequences which quickly become unimaginably large, then shrink down to “normal” size again?
There are a number of integer sequences which are known to have a few "ordinary" size values, and then to suddenly grow at unbelievably fast rates. The TREE sequence is one of these sequences, which ...
1answer
193 views
### Is there a Hamiltonian path for the graph of English counties?
The mainland counties of England form a graph with counties as vertices and edges as touching borders. Is there a Hamiltonian path one can take? This is not homework, I just have an idea for a holiday ...
4answers
245 views
### Number of possibilities to cross a hexagonal lattice.
An ant walks along the line segments in the hexagonal lattice shown, from start to finish. The ant must go in the direction shown if there is an arrow, and never goes on the same line segment twice. ...
3answers
860 views
### Homology and Graph Theory
What is the relationship between homology and graph theory? Can we form simplicial complexes from a graph $G$ and compute their homology groups? Are there any practical results in looking at the ...
2answers
401 views
### Seating friends around a dinner table
This problem came from a Putnam problem solving seminar. If each person in a group of n people is a friend of at least half the people in the group, then show that it is possible to seat the n ...
1answer
3k views
### Is Sage on the same level as Mathematica or Matlab for graph theory and graph vizualization?
The context: I'm going to start working on a project that involves running predefined algorithms (and defining my own) for very big graphs (thousands of nodes). Visualization would also be welcome if ...
1answer
171 views
### Graph theory question
Here is an exercise from the book by Bondy/Murty that I am not quite able to understand. Show that every simple graph has a vertex $x$ and a family of $\left\lfloor\frac{1}{2}d(x)\right\rfloor$ ...
2answers
570 views
### Diameter of a graph when removing a non-cut edge
It appears plausible to me that if we have a connected graph $G$ with diameter $d$ and we remove a non-cut edge $e$ from it, the diameter of the resulting graph $G_e$ will be at most $2d$. By ...
1answer
242 views
### Is this similarity between trees and vector space bases just a coincidence?
A vector space basis is a set of vectors that span the space and is linearly independent. It is well-known that for finite dimensional vector spaces this is equivalent to: The set is minimal with ...
1answer
226 views
### What does the minimal eigenvalue of a graph say about the graph's connectivity?
I'm reading Fan Chung's Spectral Graph Theory, and I'm now in chapter 2. There, Chung proves Cheeger's inequality, which is that $2h_G \geq \lambda_1 > h_G^2/2$ for any graph $G$. To me, this ...
1answer
492 views
### Hamiltonicity of $G^2$
I am going through a proof of hamiltonicity of $G^2$ and stuck quite in the beginning. Some definitions: $G$ is a finite non-hamiltonian 2-connected graph, $C$ is a cycle in $G$, $D$ is a component ...
2answers
165 views
### Embedding the Infinite Binary Tree in Regular Tilings
Consider the regular tiling $(m,n)$ in which $m$ $n$-agons meet at each vertex. Most of the time this tilings have to "live" in the hyperbolic plane. The edges of its polygons define a graph where two ...
2answers
514 views
### Partition a binary tree by removing a single edge
The question is : B-3 Bisecting trees Many divide-and-conquer algorithms that operate on graphs require that the graph be bisected into two nearly equal-sized subgraphs, which are induced by a ...
1answer
283 views
### How to “explain” Szemerédi's Regularity Lemma so that classmates may understand its value?
I am a student, preparing myself for a talk in which I want to present and prove Szemerédi's Regularity Lemma. I understand the proof and I am able to reproduce it - that is no problem. But I am ...
0answers
226 views
### Normalizers of automorphism groups
In abstract groups $\Gamma$ the normalizer $N_\Gamma(S)$ of a subset $S\subseteq\Gamma$ is the subgroup of all $x \in \Gamma$ that commute with $S$, i.e. $xS = Sx$, i.e. $x\ y\ x^{-1} \in S$ for all ...
0answers
177 views
### Is Erdös' lemma on intersection graphs a special case of Yoneda's lemma?
Under which name is the following proposition filed actually: Every poset $P$ embeds fully and faithfully in the powerset of $P$, ordered by subset inclusion. Let me call it Dedekind's lemma. ...
|
http://en.wikipedia.org/wiki/A*_search_algorithm
|
# A* search algorithm
"A*" redirects here. For other uses, see A* (disambiguation).
In computer science, A* (pronounced "A star") is a computer algorithm that is widely used in pathfinding and graph traversal, the process of plotting an efficiently traversable path between points, called nodes. Noted for its performance and accuracy, it enjoys widespread use. (However, in practical travel-routing systems, it is generally outperformed by algorithms which can pre-process the graph to attain better performance.[1])
Peter Hart, Nils Nilsson and Bertram Raphael of Stanford Research Institute (now SRI International) first described the algorithm in 1968.[2] It is an extension of Edsger Dijkstra's 1959 algorithm. A* achieves better time performance by using heuristics.
## Description
A* uses a best-first search and finds a least-cost path from a given initial node to one goal node (out of one or more possible goals). As A* traverses the graph, it follows a path of the lowest expected total cost or distance, keeping a sorted priority queue of alternate path segments along the way.
It uses a knowledge-plus-heuristic cost function of node $x$ (usually denoted $f(x)$) to determine the order in which the search visits nodes in the tree. The cost function is a sum of two functions:
• the past path-cost function, which is the known distance from the starting node to the current node $x$ (usually denoted $g(x)$)
• a future path-cost function, which is an admissible "heuristic estimate" of the distance from $x$ to the goal (usually denoted $h(x)$).
The $h(x)$ part of the $f(x)$ function must be an admissible heuristic; that is, it must not overestimate the distance to the goal. Thus, for an application like routing, $h(x)$ might represent the straight-line distance to the goal, since that is physically the smallest possible distance between any two points or nodes.
If the heuristic h satisfies the additional condition $h(x) \le d(x,y) + h(y)$ for every edge x, y of the graph (where d denotes the length of that edge), then h is called monotone, or consistent. In such a case, A* can be implemented more efficiently—roughly speaking, no node needs to be processed more than once (see closed set below)—and A* is equivalent to running Dijkstra's algorithm with the reduced cost $d'(x, y) := d(x, y) - h(x) + h(y)$.
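One way to see the equivalence: along any path $x_0, x_1, \ldots, x_k$ the reduced edge lengths telescope,
$\sum_{i=0}^{k-1} d'(x_i, x_{i+1}) = \left(\sum_{i=0}^{k-1} d(x_i, x_{i+1})\right) - h(x_0) + h(x_k),$
so every path between the same two endpoints is shifted by the same constant, which preserves shortest paths, while consistency is exactly the condition $d'(x, y) \ge 0$ that Dijkstra's algorithm requires.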
## History
In 1968 Nils Nilsson suggested a heuristic approach for Shakey the Robot to navigate through a room containing obstacles. This path-finding algorithm, called A1, was a faster version of the then best known formal approach, Dijkstra's algorithm, for finding shortest paths in graphs. Bertram Raphael suggested some significant improvements upon this algorithm, calling the revised version A2. Then Peter E. Hart introduced an argument that established A2, with only minor changes, to be the best possible algorithm for finding shortest paths. Hart, Nilsson and Raphael then jointly developed a proof that the revised A2 algorithm was optimal for finding shortest paths under certain well-defined conditions. They thus named the new algorithm in Kleene star syntax to be the algorithm that starts with A and includes all possible version numbers or A*.[citation needed]
## Process
Like all informed search algorithms, it first searches the routes that appear to be most likely to lead towards the goal. What sets A* apart from a greedy best-first search is that it also takes the distance already traveled into account; the $g(x)$ part of the heuristic is the cost from the starting point, not simply the local cost from the previously expanded node.
Starting with the initial node, it maintains a priority queue of nodes to be traversed, known as the open set. The lower $f(x)$ for a given node $x$, the higher its priority. At each step of the algorithm, the node with the lowest $f(x)$ value is removed from the queue, the $f$ and $g$ values of its neighbors are updated accordingly, and these neighbors are added to the queue. The algorithm continues until a goal node has a lower $f$ value than any node in the queue (or until the queue is empty). (Goal nodes may be passed over multiple times if there remain other nodes with lower $f$ values, as they may lead to a shorter path to a goal.) The $f$ value of the goal is then the length of the shortest path, since $h$ at the goal is zero in an admissible heuristic.
The algorithm described so far gives us only the length of the shortest path. To find the actual sequence of steps, the algorithm can be easily revised so that each node on the path keeps track of its predecessor. After this algorithm is run, the starting node will point to its predecessor, and so on, until some node's predecessor is the goal node.
Additionally, if the heuristic is monotonic (or consistent, see below), a closed set of nodes already traversed may be used to make the search more efficient.
## Pseudocode
The following pseudocode describes the algorithm:
```
function A*(start,goal)
closedset := the empty set // The set of nodes already evaluated.
openset := {start} // The set of tentative nodes to be evaluated, initially containing the start node
came_from := the empty map // The map of navigated nodes.
g_score[start] := 0 // Cost from start along best known path.
// Estimated total cost from start to goal through y.
f_score[start] := g_score[start] + heuristic_cost_estimate(start, goal)
while openset is not empty
current := the node in openset having the lowest f_score[] value
if current = goal
return reconstruct_path(came_from, goal)
remove current from openset
add current to closedset
for each neighbor in neighbor_nodes(current)
tentative_g_score := g_score[current] + dist_between(current,neighbor)
if neighbor in closedset
if tentative_g_score >= g_score[neighbor]
continue
if neighbor not in openset or tentative_g_score < g_score[neighbor]
came_from[neighbor] := current
g_score[neighbor] := tentative_g_score
f_score[neighbor] := g_score[neighbor] + heuristic_cost_estimate(neighbor, goal)
if neighbor not in openset
add neighbor to openset
return failure
function reconstruct_path(came_from, current_node)
if current_node in came_from
p := reconstruct_path(came_from, came_from[current_node])
return (p + current_node)
else
return current_node
```
Remark: the above pseudocode assumes that the heuristic function is monotonic (or consistent, see below), which is a frequent case in many practical problems, such as the Shortest Distance Path in road networks. However, if the assumption is not true, nodes in the closed set may be rediscovered and their cost improved. In other words, the closed set can be omitted (yielding a tree search algorithm) if a solution is guaranteed to exist, or if the algorithm is adapted so that new nodes are added to the open set only if they have a lower $f$ value than at any previous iteration.
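For readers who want something runnable, here is a minimal Python sketch of the same procedure (the graph interface, the names, and the small grid example are illustrative; it assumes a consistent heuristic, and instead of decreasing keys it pushes duplicate queue entries and skips the stale ones):

```python
import heapq

def a_star(start, goal, neighbors, dist, h):
    """A* search: `neighbors(n)` yields adjacent nodes, `dist(a, b)` is the
    edge length, and `h(n)` is the (consistent) heuristic estimate."""
    open_heap = [(h(start), start)]          # entries are (f, node), f = g + h
    came_from = {}
    g_score = {start: 0}
    closed = set()

    while open_heap:
        _, current = heapq.heappop(open_heap)
        if current == goal:                  # reconstruct the path found
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        if current in closed:                # stale duplicate entry, skip it
            continue
        closed.add(current)
        for nb in neighbors(current):
            if nb in closed:
                continue
            tentative = g_score[current] + dist(current, nb)
            if tentative < g_score.get(nb, float("inf")):
                came_from[nb] = current
                g_score[nb] = tentative
                heapq.heappush(open_heap, (tentative + h(nb), nb))
    return None                              # open set exhausted: no path

# Illustrative example: a 4 x 5 grid with a wall, unit edge costs, and the
# Manhattan distance as heuristic (admissible and consistent on such a grid).
blocked = {(1, 1), (1, 2), (1, 3)}

def grid_neighbors(p):
    x, y = p
    for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= q[0] < 4 and 0 <= q[1] < 5 and q not in blocked:
            yield q

print(a_star((0, 0), (3, 4), grid_neighbors,
             dist=lambda a, b: 1,
             h=lambda p: abs(p[0] - 3) + abs(p[1] - 4)))
```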
Illustration of A* search for finding path from a start node to a goal node in a robot motion planning problem. The empty circles represent the nodes in the open set, i.e., those that remain to be explored, and the filled ones are in the closed set. Color on each closed node indicates the distance from the start: the greener, the farther. One can first see the A* moving in a straight line in the direction of the goal, then when hitting the obstacle, it explores alternative routes through the nodes from the open set.
See also: Dijkstra's algorithm
### Example
An example of an A star (A*) algorithm in action where nodes are cities connected with roads and h(x) is the straight-line distance to target point:
Key: green: start; blue: goal; orange: visited
## Properties
Like breadth-first search, A* is complete and will always find a solution if one exists.
If the heuristic function $h$ is admissible, meaning that it never overestimates the actual minimal cost of reaching the goal, then A* is itself admissible (or optimal) if we do not use a closed set. If a closed set is used, then $h$ must also be monotonic (or consistent) for A* to be optimal. This means that for any pair of adjacent nodes $x$ and $y$, where $d(x,y)$ denotes the length of the edge between them, we must have:
$h(x) \le d(x,y) + h(y)$
This ensures that for any path $X$ from the initial node to $x$:
$L(X) + h(x) \le L(X) + d(x,y) + h(y) = L(Y) + h(y)$
where $L(\cdot)$ denotes the length of a path, and $Y$ is the path $X$ extended to include $y$. In other words, it is impossible to decrease (total distance so far + estimated remaining distance) by extending a path to include a neighboring node. (This is analogous to the restriction to nonnegative edge weights in Dijkstra's algorithm.) Monotonicity implies admissibility when the heuristic estimate at any goal node itself is zero, since (letting $P = (f,v_1,v_2,\ldots,v_n,g)$ be a shortest path from any node $f$ to the nearest goal $g$):
$h(f) \le d(f,v_1) + h(v_1) \le d(f,v_1) + d(v_1,v_2) + h(v_2) \le \ldots \le L(P) + h(g) = L(P)$
A* is also optimally efficient for any heuristic $h$, meaning that no optimal algorithm employing the same heuristic will expand fewer nodes than A*, except when there are multiple partial solutions where $h$ exactly predicts the cost of the optimal path. Even in this case, for each graph there exists some order of breaking ties in the priority queue such that A* examines the fewest possible nodes.
### Special cases
Dijkstra's algorithm, as another example of a uniform-cost search algorithm, can be viewed as a special case of A* where $h(x) = 0$ for all $x$. General depth-first search can be implemented using A* by considering that there is a global counter C initialized with a very large value. Every time we process a node we assign C to all of its newly discovered neighbors. After each single assignment, we decrease the counter C by one. Thus the earlier a node is discovered, the higher its $h(x)$ value. It should be noted, however, that both Dijkstra's algorithm and depth-first search can be implemented more efficiently without including an $h(x)$ value at each node.
### Implementation details
There are a number of simple optimizations or implementation details that can significantly affect the performance of an A* implementation. The first detail to note is that the way the priority queue handles ties can have a significant effect on performance in some situations. If ties are broken so the queue behaves in a LIFO manner, A* will behave like depth-first search among equal cost paths.
When a path is required at the end of the search, it is common to keep with each node a reference to that node's parent. At the end of the search these references can be used to recover the optimal path. If these references are being kept then it can be important that the same node doesn't appear in the priority queue more than once (each entry corresponding to a different path to the node, and each with a different cost). A standard approach here is to check if a node about to be added already appears in the priority queue. If it does, then the priority and parent pointers are changed to correspond to the lower cost path. When finding a node in a queue to perform this check, many standard implementations of a min-heap require $O(n)$ time. Augmenting the heap with a hash table can reduce this to constant time.
## Admissibility and optimality
A* is admissible and considers fewer nodes than any other admissible search algorithm with the same heuristic. This is because A* uses an "optimistic" estimate of the cost of a path through every node that it considers—optimistic in that the true cost of a path through that node to the goal will be at least as great as the estimate. But, critically, as far as A* "knows", that optimistic estimate might be achievable.
Here is the main idea of the proof:
When A* terminates its search, it has found a path whose actual cost is lower than the estimated cost of any path through any open node. But since those estimates are optimistic, A* can safely ignore those nodes. In other words, A* will never overlook the possibility of a lower-cost path and so is admissible.
Suppose now that some other search algorithm B terminates its search with a path whose actual cost is not less than the estimated cost of a path through some open node. Based on the heuristic information it has, Algorithm B cannot rule out the possibility that a path through that node has a lower cost. So while B might consider fewer nodes than A*, it cannot be admissible. Accordingly, A* considers the fewest nodes of any admissible search algorithm.
This is only true if both:
• A* uses an admissible heuristic. Otherwise, A* is not guaranteed to expand fewer nodes than another search algorithm with the same heuristic. See (Generalized best-first search strategies and the optimality of A*, Rina Dechter and Judea Pearl, 1985[3])
• A* solves only one search problem rather than a series of similar search problems. Otherwise, A* is not guaranteed to expand fewer nodes than incremental heuristic search algorithms. See (Incremental heuristic search in artificial intelligence, Sven Koenig, Maxim Likhachev, Yaxin Liu and David Furcy, 2004[4])
Figure: an A* search whose heuristic is 5.0 (= ε) times a consistent heuristic; the search obtains a suboptimal path.
### Bounded relaxation
While the admissibility criterion guarantees an optimal solution path, it also means that A* must examine all equally meritorious paths to find the optimal path. It is possible to speed up the search at the expense of optimality by relaxing the admissibility criterion. Oftentimes we want to bound this relaxation, so that we can guarantee that the solution path is no worse than $(1 + \epsilon)$ times the optimal solution path. This new guarantee is referred to as $\epsilon$-admissible.
There are a number of $\epsilon$-admissible algorithms (a short sketch of some of these cost functions follows the list):
• Weighted A*. If $h_a(n)$ is an admissible heuristic function, the weighted version of the A* search uses $h_w(n) = \epsilon h_a(n)$, $\epsilon > 1$, as the heuristic function and performs the A* search as usual (which eventually happens faster than using $h_a$, since fewer nodes are expanded). The path found this way can have a cost of at most $\epsilon$ times that of the least cost path in the graph.[5]
• Static Weighting[6] uses the cost function $f(n) = g(n) + (1 + \epsilon)h(n)$.
• Dynamic Weighting[7] uses the cost function $f(n) = g(n) + (1 + \epsilon w(n))h(n)$, where $w(n) = \begin{cases} 1 - \frac{d(n)}{N} & d(n) \le N \\ 0 & otherwise \end{cases}$, and where d(n) is the depth of the search and N is the anticipated length of the solution path.
• Sampled Dynamic Weighting[8] uses sampling of nodes to better estimate and debias the heuristic error.
• $A^*_\epsilon$[9] uses two heuristic functions. The first is used to form the FOCAL list of candidate nodes, and the second, $h_F$, is used to select the most promising node from the FOCAL list.
• $A_\epsilon$[10] selects nodes with the function $A f(n) + B h_F(n)$, where A and B are constants. If no nodes can be selected, the algorithm will backtrack with the function $C f(n) + D h_F(n)$, where C and D are constants.
• AlphA*[11] attempts to promote depth-first exploitation by preferring recently expanded nodes. AlphA* uses the cost function $f_\alpha(n) = (1 + w_\alpha(n)) f(n)$, where $w_\alpha(n) = \begin{cases} \lambda & g(\pi(n)) \le g(\tilde{n}) \\ \Lambda & otherwise \end{cases}$, where $\lambda$ and $\Lambda$ are constants with $\lambda \le \Lambda$, $\pi(n)$ is the parent of n, and $\tilde{n}$ is the most recently expanded node.
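As a concrete illustration of the weighted cost functions above, here is a small sketch (with parameter names of my choosing, not reference code):

```c
/* Static weighting: f(n) = g(n) + (1 + eps) * h(n). */
double f_static(double g, double h, double eps)
{
    return g + (1.0 + eps) * h;
}

/* Dynamic weighting: f(n) = g(n) + (1 + eps * w(n)) * h(n), where
 * w(n) = 1 - d(n)/N while the search depth d is no deeper than the
 * anticipated solution length N, and 0 afterwards. */
double f_dynamic(double g, double h, double eps, double d, double N)
{
    double w = (d <= N) ? 1.0 - d / N : 0.0;
    return g + (1.0 + eps * w) * h;
}
```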
## Complexity
The time complexity of A* depends on the heuristic. In the worst case, the number of nodes expanded is exponential in the length of the solution (the shortest path), but it is polynomial when the search space is a tree, there is a single goal state, and the heuristic function h meets the following condition:
$|h(x) - h^*(x)| = O(\log h^*(x))$
where $h^*$ is the optimal heuristic, the exact cost to get from $x$ to the goal. In other words, the error of h will not grow faster than the logarithm of the “perfect heuristic” $h^*$ that returns the true distance from x to the goal (see Pearl 1984[12] and also Russell and Norvig 2003, p. 101[13])
## Variants of A*
• D*
• Field D*
• IDA*
• Fringe
• Fringe Saving A* (FSA*)
• Generalized Adaptive A* (GAA*)
• Lifelong Planning A* (LPA*)
• Simplified Memory bounded A* (SMA*)
• Theta*
• A* can be adapted to a bidirectional search algorithm. Special care needs to be taken for the stopping criterion [14]
## References
1. Delling, D. and Sanders, P. and Schultes, D. and Wagner, D. (2009). "Engineering route planning algorithms". Algorithmics of large and complex networks. Springer. pp. 117–139. doi:10.1007/978-3-642-02094-0_7.
2. Hart, P. E.; Nilsson, N. J.; Raphael, B. (1968). "A Formal Basis for the Heuristic Determination of Minimum Cost Paths". IEEE Transactions on Systems Science and Cybernetics 4 (2): 100–107. doi:10.1109/TSSC.1968.300136.
3. Dechter, Rina; Judea Pearl (1985). "Generalized best-first search strategies and the optimality of A*". 32 (3): 505–536. doi:10.1145/3828.3830.
4. Koenig, Sven; Maxim Likhachev, Yaxin Liu, David Furcy (2004). "Incremental heuristic search in AI". 25 (2): 99–112.
5. Pearl, Judea (1984). Heuristics: intelligent search strategies for computer problem solving. Boston, MA, USA: Addison-Wesley Longman Publishing Co., Inc. ISBN 0-201-05594-5.
6. Pohl, Ira (1970). "First results on the effect of error in heuristic search". Machine Intelligence 5: 219–236.
7. Pohl, Ira (August, 1973). "The avoidance of (relative) catastrophe, heuristic competence, genuine dynamic weighting and computational issues in heuristic problem solving". Proceedings of the Third International Joint Conference on Artificial Intelligence (IJCAI-73). 3. California, USA. pp. 11–17.
8. Köll, Andreas; Hermann Kaindl (August, 1992). "A new approach to dynamic weighting". Proceedings of the Tenth European Conference on Artificial Intelligence (ECAI-92). Vienna, Austria. pp. 16–17.
9. Pearl, Judea; Jin H. Kim (1982). "Studies in semi-admissible heuristics". IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 4 (4): 392–399.
10. "$A_\epsilon$ - an efficient near admissible heuristic search algorithm". Proceedings of the Eighth International Joint Conference on Artificial Intelligence (IJCAI-83). 2. Karlsruhe, Germany. August, 1983. pp. 789–791.
11. Reese, Bjørn (1999). AlphA*: An $\epsilon$-admissible heuristic search algorithm.
12. Pearl, Judea (1984). Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley. ISBN 0-201-05594-5.
13. Russell, S. J.; Norvig, P. (2003). Artificial Intelligence: A Modern Approach. Upper Saddle River, N.J.: Prentice Hall. pp. 97–104. ISBN 0-13-790395-2.
14.
## Further reading
• Hart, P. E.; Nilsson, N. J.; Raphael, B. (1972). "Correction to "A Formal Basis for the Heuristic Determination of Minimum Cost Paths"". SIGART Newsletter 37: 28–29.
• Nilsson, N. J. (1980). Principles of Artificial Intelligence. Palo Alto, California: Tioga Publishing Company. ISBN 0-935382-01-1.
• Pearl, Judea (1984). Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley. ISBN 0-201-05594-5.
http://mathhelpforum.com/differential-equations/184661-how-do-you-get-characteristic-equation-trickier-euler-equations.html
# Thread:
1. ## How do you get the characteristic equation of trickier Euler equations?
When dealing with 2nd order ODE's with constant coefficients, you have this:
$Ay'' + By' + Cy = 0$
which you can get the characteristic equation like so:
$Ar^2 + Br + C = 0$
Now, when dealing with Euler equations, you have this:
$Ax^2y'' + Bxy' + Cy = 0$
Here's where things get dicey.... When you have it in this nice form, you can get the characteristic equation like so:
$Ar(r - 1) + Br + C = 0$
However, if you have an Euler equation like this:
$A(x + D)^2y'' + B(x + D)y' + Cy = 0$
...now it's a lot nastier. How do you deal with Euler equations like this?
Can you just do a substitution like
$t = x + D$
...and revert it back at the end?
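For what it's worth, a quick check (mine, not from the thread): since $\frac{dt}{dx} = 1$, derivatives with respect to $x$ and $t$ coincide, so the shifted equation really is a standard Euler equation in $t$:

```latex
\[
  t = x + D, \qquad \frac{dy}{dx} = \frac{dy}{dt}, \qquad
  \frac{d^2y}{dx^2} = \frac{d^2y}{dt^2},
\]
\[
  A\, t^2 \frac{d^2y}{dt^2} + B\, t \frac{dy}{dt} + C\, y = 0
  \;\Longrightarrow\;
  A\, r(r-1) + B\, r + C = 0, \qquad y = t^{\,r} = (x+D)^{\,r},
\]
```

so the substitution can indeed be reverted at the end as suggested.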
http://mathoverflow.net/questions/24569/seeking-reference-for-the-enumerative-mass-formula-concept/24698
## Seeking reference for the enumerative “mass formula” concept
I am teaching a combinatorics class in which I introduced the notion of a "mass formula". My terminology is inspired by the Smith–Minkowski–Siegel mass formula for the total mass of positive-definite quadratic forms of a given size and genus. That famous mass formula is much too fancy of an example for my class. All that I really do is define the concept of the "mass" of a combinatorial object to be $1/|G|$ if $G$ is its automorphism group, and then argue that it can be easier to find the total mass of a collection of objects than to count them straight (using Polya counting theory). For example, the total mass of unlabeled trees of order $n$ is $n^{n-2}/n!$, because there are $n^{n-2}$ labeled trees.
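For a slightly bigger sanity check (mine, not part of the original post): for $n = 4$ there are $4^{2} = 16$ labeled trees, while the unlabeled trees of order $4$ are the path, with automorphism group of order $2$, and the star, with automorphism group of order $6$; and indeed $1/2 + 1/6 = 2/3 = 16/4!$.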
So I have two questions for which a quick answer (i.e. sooner than two weeks) would be most convenient:
1. Is "mass formula" a standard name for this concept? Is there a standard name?
2. Can someone suggest a free on-line reference, comparable to a Wikipedia page or a little longer? The class textbook doesn't have a discussion.
-
Here is another remark: The "mass formula" is a term in the Burnside counting theorem, the term corresponding to the identity permutation. Maybe this remark points to another name for the quantity? – Greg Kuperberg May 15 2010 at 5:36
## 3 Answers
I do call such things "mass formulas", but then again I am a number theorist, and one of my colleagues is a quadratic form theorist who specializes in such things. So this is mostly an expression of my specific mathematical culture.
I do not think that it is a standard term, at least not the only standard term. For instance, from another MO answer I noticed that some categorists call this the groupoid cardinality. This term in fact seems quite sensible to me, because the concept seems closely related to taking a quotient by the action of a group with nontrivial stabilizers and regarding the quotient set as a groupoid rather than a mere set.
As you say, combinatorially minded people speak of "Polya theory" or "counting with symmetry". Many algebraic geometers, upon seeing this phenomenon, would use the word "stacky". I wouldn't be surprised if there were other terms as well.
Overall I think this has the effect that a lot of people are partially rediscovering what is essentially the same concept. I would very much like to see a reasonably authoritative treatment of this subject appealing to mathematicians from different fields. Of course, I also look forward to seeing (better!) answers to this question.
-
The name "mass formula" is standard, quite old, and not very good. The name "groupoid cardinality" is much better, but only appeared recently in the literature. (It looks like it is due to Baez and Dolan.) If there is another established name, I couldn't find it. Also the nlab page on this, and the Wikipedia page on Burnside's formula, seem to be the current documentation for the idea. So I suppose that this answer should be accepted as fair. – Greg Kuperberg Jun 16 2010 at 23:53
This doesn't qualify as a free reference, but "Graphs on surfaces and their applications" by Lando and Zvonkin has some nice examples. On p.46, after stating a theorem enumerating trees with a given "passport", the authors remark:
We will often encounter enumerative formulas where the objects are not counted one by one but a weight is assigned to each object, and this weight is equal to 1/|Aut|, where the denominator means the order of the automorphism group of the object. Formulas of this kind are often called mass-formulas. (Footnote: The first mass-formula was proposed by H.J.S. Smith in 1867. Mass-formulas are also called Siegel–Minkowski formulas.)
-
Still another formulation: I recall hearing the whole idea referred to by a metaphor of skeleton and flesh. The "mass" of your example would be the "skeleton weight" or "bone mass" of the collection, the amount of "flesh" around each "bone" (the radius of the muscle?) being the size of the automorphism group.
I find the groupoid cardinality sensible too and perfectly compatible with the basic results of Polya theory, and will certainly use it in the future.
I note that this is the converse operation of counting things with multiplicities such as roots.
-
http://mathoverflow.net/questions/72342/which-finite-groups-have-exactly-2-conjugacy-classes-of-the-same-order-and-exactl
## Which finite groups have exactly 2 conjugacy classes of the same order and exactly 2 irreducible characters of the same degree?
Dear all,
I have an interesting question:
Let $G$ be a finite group with exactly 2 conjugacy classes of the same order. When does $G$ also have exactly 2 irreducible characters of the same degree?
-
Changed the tags, as the question has nothing to do with characteristic classes. – Mark Grant Aug 8 2011 at 10:43
Seems to work for A_5 and PSL(2,7). But it does not work for A_4. The condition is quite restrictive and I am not sure if there is a clean general statement. – Geoff Robinson Aug 8 2011 at 10:45
Doesn't work for dihedral group of order 10; conjugacy classes have orders 1, 2, 2, 5, irreducible characters have degrees 1, 1, 2, 2. – Gerry Myerson Aug 8 2011 at 10:53
Also doesn't work for $S_4$ and $S_5$ (class lengths 1,3,6,6,8 vs degrees 1,1,2,3,3 and 1,10,15,20,20,24,30 vs 1,1,4,4,5,5,6). – F. Ladisch Aug 8 2011 at 12:26
What's the motivation for this question? mathoverflow.net/howtoask#motivation – jc Aug 8 2011 at 12:38
http://math.stackexchange.com/questions/119357/definition-of-an-extreme-set?answertab=votes
# Definition of an extreme set?
I have an issue with a definition in Rudin's Functional Analysis in the paragraph regarding the Krein-Milman Theorem.
"Let $K$ be a subset of a vector space $X$. A nonempty set $S$ in $K$ is called an extreme set if no point of $S$ is an internal point of a line interval whose end points are in $K$ but not in $S$. Analytically, the condition can be expressed as follows: if $x$ and $y$ are in $K$, if $t$ is in $(0, 1)$, and if $tx + (1 - t)y$ is in $S$, then $x$ and $y$ are in $S$. The extreme points of $K$ are the extreme sets that consist of just one point."
For this condition to be equivalent to the definition, one should replace the conclusion by: "$\dots$ then $x$ is in $S$ or $y$ is in $S$." It turns out that this is indeed equivalent when $S$ consists of a single point, but not in general.
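To illustrate (my example, not from the book): take $K = [0,1] \subset \mathbb{R}$ and $S = [0, \tfrac12]$. No interval with both endpoints in $K \setminus S = (\tfrac12, 1]$ meets $S$, so $S$ satisfies the quoted geometric definition (and the "or" version of the condition); yet for $x = 0$, $y = 0.8$, $t = \tfrac12$ we have $tx + (1-t)y = 0.4 \in S$ while $y \notin S$, so the conclusion "$x$ and $y$ are in $S$" fails.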
So my question is: what is the good definition for an extreme set?
-
@t.b.: thank you. Rudin's proof does not assume its extreme sets to be convex, though. Anyway, it looks like the condition should be taken to be the definition here. – user26770 Mar 12 '12 at 19:43
## 1 Answer
The second sentence is convoluted and, to me, hard to understand. The third sentence conforms to several other texts. If you want a fancier definition you can use the following:
Let $\mathsf{con}$ denote the convex hull operator. For $S \subseteq K$, the set $S$ is an extreme subset of $K$ if and only if for all $D \subseteq K$ we have $S \cap \mathsf{con}(D) = S \cap \mathsf{con}(S \cap D)$. If you want to, you only have to consider finite subsets $D$.
-
http://cs.stackexchange.com/questions/9274/bayesian-coding
# Bayesian Coding
Suppose you have a sequence generated by an i.i.d. process (such as repeatedly rolling a die and recording the values in order) parameterized by some K-dimensional vector $\vec{\gamma}$ (the probabilities associated with each side of the die), which is unknown. But, if you assume some distribution $Q$ on $\vec{\gamma}$, you can calculate the probability of a sequence (where $n(k)$ is the number of times symbol $k$ appears in the sequence): $$P(seq) = \int P(seq|\vec{\gamma}')Q(\vec{\gamma}')d\vec{\gamma}' = \int \prod_{k=1}^K \gamma_k' ^{n(k)}Q(\vec{\gamma}')d\vec{\gamma}'.$$ Then $-\log P(seq)$ is the code length. I believe this code length is equal to $$-n\sum_{k=1}^K \gamma_k\log \gamma_k+\frac{K-1}{2}\log n +O(1),$$ where the $O(1)$ term approaches a constant as $n \rightarrow \infty$. Can someone help me find the $O(1)$ term, or its value as $n\rightarrow \infty$ in terms of $\vec{\gamma}$, $n$ and $k$ (Or, even just proving that this limit exists would be good too.)? $n$ being the length of the sequence, $n=\sum_{k=1}^K n(k)$.
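For a concrete instance (assuming $Q$ is the uniform Dirichlet$(1,\dots,1)$ prior, with constant density $(K-1)!$ on the simplex), the integral is a standard Dirichlet integral: $$P(seq) = (K-1)! \int \prod_{k=1}^K \gamma_k'^{\,n(k)}\, d\vec{\gamma}' = \frac{(K-1)!\,\prod_{k=1}^K n(k)!}{(n+K-1)!},$$ so the code length is $-\log P(seq) = \log\binom{n+K-1}{K-1} + \log\frac{n!}{\prod_{k=1}^K n(k)!}$. Applying Stirling's formula to this expression is one way to recover the $-n\sum_k \gamma_k\log\gamma_k + \frac{K-1}{2}\log n$ behaviour for this particular prior.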
-
http://math.stackexchange.com/questions/25914/proof-by-strong-induction-every-natural-a-product-of-an-odd-and-a-power-of-2?answertab=active
# Proof by Strong Induction: every natural a product of an odd and a power of 2
Can someone guide me in the right direction on this question?
Prove that every $n$ in $\mathbb{N}$ can be written as a product of an odd integer and a non-negative integer power of $2$.
For instance: $36 = 2^2(9)$ , $80 = 2^4(5)$ , $17 = 2^0(17)$ , $64 = 2^6(1)$ , etc...
Any hints in the right direction are appreciated (please explain for a beginner). Thanks.
-
## 3 Answers
To prove something by strong induction, you have to prove that
If all natural numbers strictly less than $N$ have the property, then $N$ has the property.
is true for all $N$.
So: our induction hypothesis is going to be:
Every natural number $k$ that is strictly less than $n$ can be written as a product of a power of $2$ and an odd number.
And we want to prove that from this hypothesis, we can conclude that $n$ itself can be written as the product of a power of $2$ and an odd number.
Well, we have two cases: either $n$ is odd, or $n$ is even. If we can prove the result holds in both cases, we'll be done.
Case 1: $n$ is odd. Then we can write $n=2^0\times n$, and we are done. So in Case 1, the result holds for $n$.
Case 2: If $n$ is even, then we can write $n=2k$ for some natural number $k$. But then $k\lt n$, so we can apply the induction hypothesis to $k$. We conclude that `...and you should finish this part...`
So we conclude that the result holds for all natural numbers by strong induction.
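If it helps to see the same argument as a procedure, here is a tiny sketch (mine, not part of the answer): it peels off factors of $2$ exactly the way Case 2 does, and what is left is the odd part from Case 1.

```c
#include <stdio.h>

/* Write n = 2^i * m with m odd (assumes n >= 1), mirroring Case 2:
 * while n is even, halve it and remember how many times we did so. */
static void decompose(unsigned n, unsigned *i, unsigned *m)
{
    *i = 0;
    while (n % 2 == 0) {
        n /= 2;
        (*i)++;
    }
    *m = n;
}

int main(void)
{
    unsigned i, m;
    decompose(80, &i, &m);
    printf("80 = 2^%u * %u\n", i, m);   /* prints 80 = 2^4 * 5 */
    return 0;
}
```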
-
So if k < n then by induction hypothesis k can be written as a product of a power of 2 and an odd number? Then that would imply that n itself follows from the hypothesis? – 1337holiday Mar 9 '11 at 5:52
@1337holiday: I assume you are talking about case 2. Not "if", but since $k\lt n$, then $k$ can be written as a product of a power of $2$ and an odd number (by the induction hypothesis). So $k=2^r\times s$, where $s$ is odd. What does that tell you about $n$? – Arturo Magidin Mar 9 '11 at 5:53
So since n = 2k and k = (2^r(s)). It means that n = 2(2^r(s)) or n = (2^(r+1))(s) and therefore it is true by the hypothesis? – 1337holiday Mar 9 '11 at 6:00
@1337holiday: Essentially yes: though I would phrase it as "and therefore, $n$ can be written as the product of a power of $2$ and an odd number." The induction hypothesis has already been invoked, no need to remember her yet again in the coda. (-: – Arturo Magidin Mar 9 '11 at 6:02
THIS IS AMAZING! I didnt understand this at all but now im starting to get it. Thanks so much! – 1337holiday Mar 9 '11 at 6:06
This may be deduced from a more general result that's both simpler to prove and more insightful, viz. the result follows immediately by this frequently applicable multiplicative form of induction.
Lemma $\ \mathbb N$ is the only set of naturals containing $1$ and all primes and closed under multiplication.
Proof $\ $ Suppose $N\subset \mathbb N$ has said properties. We prove by strong induction that all naturals $n\in N$. If $n$ is $1$ or prime then by hypothesis $n\in N$. Else $n$ is composite, hence $n = j\,k$ for $j,k < n$. By induction $j,k\in N$, so $n = j\,k\in N$, since $N$ is closed under multiplication. $\ $ QED
This yields the sought result. Let $N$ be the set of naturals of the form $2^{\,i} n$ for odd $n\in \mathbb N$. Notice $1$ and all primes $p$ are in $N$, by $1 = 2^{\,0}\cdot 1$, $2 = 2^{\,1}\cdot 1$, and odd $p = 2^{\,0}\cdot p$. Further, $N$ is closed under multiplication, by $(2^{\,i} m)(2^{\,j} n) = 2^{\,i+j}\, m\, n$, with $m\, n$ odd because $m$ and $n$ are odd. So $N = \mathbb N$ by the Lemma.
-
Proof:
By the fundamental theorem of algebra, every integer $N$ can be uniquely factored as $\prod^{n}_{i=1}p_{i}^{a_{i}}$. Now, mark $2=p_{1}$, note $a_{i}$ can take value of $0$. You got the theorem.
For the "inductive" proof, suppose for $n<k$ this is true. For $n+1$ its factors must be in previous $n$ numbers. Hence $n+1=\prod n_{i}$. Decompose $n_{i}$ by induction hypothesis you get the statement.
-
@user7887: The Fundamental Theorem of Algebra says that every nonconstant polynomial with complex coefficients has at least one complex root; you mean the Fundamental Theorem of Arithmetic. – Arturo Magidin Mar 9 '11 at 5:37
now I understand. – Changwei Zhou Mar 9 '11 at 5:49
@user7887: Your inductive proof has two problems: first, the OP asked for a proof by strong induction, not regular induction. Second, your argument is incorrect as given (it's possible for $n+1$ to be prime, after all). – Arturo Magidin Mar 9 '11 at 5:50
I don't think $n+1=\prod n_i$ is true (should be $p_i$), and if $n+1$ is prime its factor(s) are not in the previous $n$ numbers. You are close-if $n+1$ is prime, it is odd. – Ross Millikan Mar 9 '11 at 5:52
I guess answering more such questions can help me realize my own weakness. Thanks for the comments. – Changwei Zhou Mar 9 '11 at 5:54
http://www.advogato.org/person/ssp/diary.html
# Recent blog entries for ssp
Fast Multiplication of Normalized 16 bit Numbers with SSE2
If you are compositing pixels with 16 bits per component, you often need this computation:
```uint16_t a, b, r;
r = (a * b + 0x7fff) / 65535;
```
There is a well-known way to do this quickly without a division:
```uint32_t t;
t = a * b + 0x8000;
r = (t + (t >> 16)) >> 16;
```
Since we are compositing pixels we want to do this with SSE2 instructions, but because the code above uses 32 bit arithmetic, we can only do four operations at a time, even though SSE registers have room for eight 16 bit values. Here is a direct translation into SSE2:
```a = punpcklwd (a, 0);
b = punpcklwd (b, 0);
a = pmulld (a, b);
a = paddd (a, 0x8000);
b = psrld (a, 16);
a = paddd (a, b);
a = psrld (a, 16);
a = packusdw (a, 0);
```
But there is another way that better matches SSE2:
```uint16_t lo, hi, t, r;
hi = (a * b) >> 16;
lo = (a * b) & 0xffff;
t = lo >> 15;
hi += t;
t = hi ^ 0x7fff;
if ((int16_t)lo > (int16_t)t)
lo = 0xffff;
else
lo = 0x0000;
r = hi - lo;
```
This version is better because it avoids the unpacking to 32 bits. Here is the translation into SSE2:
```t = pmulhuw (a, b);
a = pmullw (a, b);
b = psrlw (a, 15);
t = paddw (t, b);
b = pxor (t, 0x7fff);
a = pcmpgtw (a, b);
a = psubw (t, a);
```
This is not only shorter, it also makes use of the full width of the SSE registers, computing eight results at a time.
Unfortunately SSE2 doesn’t have 8-bit variants of `pmulhuw`, `pmullw`, and `psrlw`, so we can’t use this trick for the more common case where pixels have 8 bits per component.
Exercise: Why does the second version work?
Syndicated 2013-05-16 05:00:56 from ssp
Sysprof 1.1.8
A new version 1.1.8 of Sysprof is out.
This is a release candidate for 1.2.0 and contains mainly bug fixes.
Syndicated 2013-05-16 05:00:56 from ssp
Gamma Correction vs. Premultiplied Pixels
Pixels with 8 bits per channel are normally sRGB encoded because that allocates more bits to darker colors where human vision is the most sensitive. (Actually, it’s really more of a historical accident, but sRGB nevertheless remains useful for this reason). The relationship between sRGB and linear RGB is that you get an sRGB pixel by raising each component of a linear pixel to the power of $1/2.2$.
A lot of graphics software does alpha blending directly on these sRGB pixels using alpha values that are linearly coded (ie., an alpha value of 0 means no coverage, 0.5 means half coverage, and 1 means full coverage). Because alpha blending is best done with premultiplied pixels, such systems store pixels in this format:
```[ alpha, alpha * red_s, alpha * green_s, alpha * blue_s ]
```
where alpha is linearly coded, and (`red_s`, `green_s`, `blue_s`) are sRGB coded. As long as you are happy with blending in sRGB, this works well. Also, if you simply discard the alpha channel of such pixels and display them directly on a monitor, it will look as if the pixels were alpha blended (in the sRGB space) on top of a black background, which is the desired result.
But what if you want to blend in linear RGB? If you use the format above, some expensive conversions will be required. To convert to premultiplied linear, you have to first divide by alpha, then raise each color to 2.2, then multiply by alpha. To convert back, you must divide by alpha, raise to $1/2.2$, then multiply with alpha.
The conversions can be avoided if you store the pixels linearly, ie., keeping the premultiplication, but coding red, green, and blue linearly instead of as sRGB. This makes blending fast, but the downside is that you need deeper pixels. With only 8 bits per pixel, the linear coding loses too much precision in darker tones. Another problems is that to display these pixels, you will either have to convert them to sRGB, or if the video card can scan them out directly, you have to make sure that the gamma ramp is set to compensate for the fact that the monitor expects sRGB pixels.
A third option is to keep 8 bits per channel but store the pixels in this format:
```[ alpha, alpha_s * red_s, alpha_s * green_s, alpha_s * blue_s ]
```
That is, the alpha channel is stored linearly, and the color channels are stored in sRGB, premultiplied with the alpha value raised to 1/2.2. Ie., the red component is now
```(red * alpha)^(1/2.2),
```
where before it was
```alpha * red^(1/2.2).
```
It is sufficient to use 8 bits per channel with this format because of the sRGB encoding. Discarding the alpha channel and displaying the pixels on a monitor will produce pixels that are alpha blended (in linear space) against black, as desired.
You can convert to linear RGB simply by raising the R, G, and B components to 2.2, and back by raising to $1/2.2$. Or, if you feel like cheating, use an exponent of 2 so that the conversions become a multiplication and a square root respectively.
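Here is a small sketch of those conversions (my own code, using 2.2 as a stand-in for the exact sRGB curve); note that neither direction needs a division by alpha:

```c
#include <math.h>

/* Premultiplied pixel in the format described above:
 * a is linear coverage, r/g/b store (alpha * color)^(1/2.2). */
typedef struct { double a, r, g, b; } pixel_t;

static double to_linear(double c)   { return pow(c, 2.2); }
static double from_linear(double c) { return pow(c, 1.0 / 2.2); }

/* Convert to premultiplied linear for blending, and back for storage. */
static pixel_t pixel_to_linear(pixel_t p)
{
    pixel_t q = { p.a, to_linear(p.r), to_linear(p.g), to_linear(p.b) };
    return q;
}

static pixel_t pixel_from_linear(pixel_t p)
{
    pixel_t q = { p.a, from_linear(p.r), from_linear(p.g), from_linear(p.b) };
    return q;
}
```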
This is also the pixel format to use with texture samplers that implement the sRGB OpenGL extensions (textures and framebuffers). These extensions say precisely that the R, G, and B components are raised to 2.2 before texture filtering, and raised to 1/2.2 after the final raster operation.
Syndicated 2013-05-16 05:00:56 from ssp
Over is not Translucency
The Porter/Duff Over operator, also known as the “Normal” blend mode in Photoshop, computes the amount of light that is reflected when a pixel partially covers another:
The fraction of bg that is covered is denoted alpha. This operator is the correct one to use when the foreground image is an opaque mask that partially covers the background:
A photon that hits this image will be reflected back to your eyes by either the foreground or the background, but not both. For each foreground pixel, the alpha value tells us the probability of each:
$a \cdot \text{fg} + (1 - a) \cdot \text{bg}$
This is the definition of the Porter/Duff Over operator for non-premultiplied pixels.
But if alpha is interpreted as translucency, then the Over operator is not the correct one to use. The Over operator will act as if each pixel is partially covering the background:
Which is not how translucency works. A translucent material reflects some light and lets other light through. The light that is let through is reflected by the background and interacts with the foreground again.
Let’s look at this in more detail. Please follow along in the diagram to the right. First with probability $a$, the photon is reflected back towards the viewer:
$a \cdot \text{fg}$
With probability $(1 - a)$, it passes through the foreground, hits the background, and is reflected back out. The photon now hits the backside of the foreground pixel. With probability $(1 - a)$, the foreground pixel lets the photon back out to the viewer. The result so far:
$
\begin{align*}
&a\cdot \text{fg} \\
+&(1 - a) \cdot \text{bg} \cdot (1 - a)
\end{align*}
$
But we are not done yet, because with probability $a$ the foreground pixel reflects the photon once again back towards the background pixel. There it will be reflected, hit the backside of the foreground pixel again, which lets it through to our eyes with probability $(1 - a)$. We get another term where the final $(1 - a)$ is replaced with $a \cdot \text{fg} \cdot \text {bg} \cdot (1 - a)$:
$
\begin{align*}
&a\cdot \text{fg} \\
+&(1 - a) \cdot \text{bg} \cdot (1 - a)\\
+&(1 - a) \cdot \text{bg} \cdot a \cdot \text{fg} \cdot \text{bg} \cdot (1 - a)
\end{align*}
$
And so on. In each round, we gain another term which is identical to the previous one, except that it has an additional $a \cdot \text{fg}
\cdot \text{bg}$ factor:
$
\begin{align*}
&a\cdot \text{fg} \\
+&(1 - a) \cdot \text{bg} \cdot (1 - a)\\
+&(1 - a) \cdot \text{bg} \cdot a \cdot \text{fg} \cdot \text{bg} \cdot (1 - a)\\
+&(1 - a) \cdot \text{bg} \cdot a \cdot \text{fg} \cdot \text{bg} \cdot a \cdot \text{fg} \cdot \text{bg} \cdot (1 - a) \\
+&\cdots
\end{align*}
$
or more compactly:
$\displaystyle
a \cdot \text{fg} + (1 - a)^2 \cdot \text{bg} \cdot
\sum_{i=0}^\infty (a \cdot \text{fg} \cdot \text{bg})^i
$
Because we are dealing with pixels, both $a$, $\text{fg}$, and $\text{bg}$ are less than 1, so the sum is a geometric series:
$\displaystyle
\sum_{i=0}^\infty x^i = \frac{1}{1 - x}
$
Putting them together, we get:
$\displaystyle
a \cdot \text{fg} + \frac{(1 - a)^2 \cdot bg}{1 - a \cdot \text{fg} \cdot \text{bg}}
$
I have sidestepped the issue of premultiplication by assuming that background alpha is 1. The calculations with premultipled colors are similar, and for the color components, the result is simply:
$\displaystyle
r = \text{fg} + \frac{(1 - a_\text{fg})^2 \cdot \text{bg}}{1 - \text{fg}\cdot\text{bg}}
$
The issue of destination alpha is more complicated. With the Over operator, both foreground and background are opaque masks, so the light that survives both has the same color as the input light. With translucency, the transmitted light has a different color, which means the resulting alpha value must in principle be different for each color component. But that’s not possible for ARGB pixels. A similar argument to the above shows that the resulting alpha value would be:
$\displaystyle
r = 1 - \frac{(1 - a)\cdot (1 - b)}{1 - \text{fg} \cdot \text{bg}}
$
where $b$ is the background alpha. The problem is the dependency on $\text{fg}$ and $\text{bg}$. If we simply assume for the purposes of the alpha computation that $\text{fg}$ and $\text{bg}$ are equal to $a$ and $b$, we get this:
$\displaystyle
r = 1 - \frac{(1 - a)\cdot (1 - b)}{1 - a \cdot b}
$
which is equal to
$\displaystyle
a + \frac{(1 - a)^2 \cdot b}{1 - a \cdot b}
$
Ie., exactly the same computation as the one for the color channels. So we can define the Translucency Operator as this:
$\displaystyle
r = \text{fg} + \frac{(1 - a)^2 \cdot \text{bg}}{1 - \text{fg} \cdot \text{bg}}
$
for all four channels.
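As a rough sketch (mine, not the author's code), here is the operator for one premultiplied color channel, with the background assumed opaque as in the derivation above, together with Over for comparison:

```c
#include <stdio.h>

/* Translucency operator for one premultiplied channel:
 * r = fg + (1 - a)^2 * bg / (1 - fg * bg) */
static double translucent(double fg, double a, double bg)
{
    return fg + (1.0 - a) * (1.0 - a) * bg / (1.0 - fg * bg);
}

int main(void)
{
    /* a 50% grey, half-covering foreground over an 80% grey background */
    double a = 0.5, fg = 0.5 * a;       /* premultiplied foreground */
    double bg = 0.8;
    printf("over:         %f\n", fg + (1.0 - a) * bg);    /* 0.65 */
    printf("translucency: %f\n", translucent(fg, a, bg)); /* 0.50 */
    return 0;
}
```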
Here is an example of what the operator looks like. The image below is what you will get if you use the Over operator to implement a selection rectangle. Mouse over to see what it would look like if you used the Translucency operator.
Both were computed in linear RGB. Typical implementations will often compute the Over operator in sRGB, so that's what you see if you actually select some icons in Nautilus. If you want to compare all three, open these in tabs:
Over, in sRGB
Translucency, in linear RGB
Over, in linear RGB
And for good measure, even though it makes zero sense to do this,
Translucency, in sRGB
Syndicated 2013-05-16 05:00:56 from ssp
Sysprof 1.2.0
A new stable release of Sysprof is now available. Download version 1.2.0.
Syndicated 2013-05-16 05:00:56 from ssp
Big-O Misconceptions
In computer science and sometimes mathematics, big-O notation is used to talk about how quickly a function grows while disregarding multiplicative and additive constants. When classifying algorithms, big-O notation is useful because it lets us abstract away the differences between real computers as just multiplicative and additive constants.
Big-O is not a difficult concept at all, but it seems to be common even for people who should know better to misunderstand some aspects of it. The following is a list of misconceptions that I have seen in the wild.
But first a definition: We write
$f(n) = O(g(n))$
when $f(n) \le M g(n)$ for sufficiently large $n$, for some positive constant $M$.
Misconception 1: “The Equals Sign Means Equality”
The equals sign in $f(n) = O(g(n))$ is a widespread travesty. If you take it at face value, you can deduce that since $5 n$ and $3 n$ are both equal to $O(n)$, then $3 n$ must be equal to $5 n$ and so $3 = 5$.
The expression $f(n) = O(g(n))$ doesn’t type check. The left-hand-side is a function, the right-hand-side is a … what, exactly? There is no help to be found in the definition. It just says “we write” without concerning itself with the fact that what “we write” is total nonsense.
The way to interpret the right-hand side is as a set of functions:
$ O(f) = \{ g \mid g(n) \le M f(n) \text{ for some \(M > 0\) for large \(n\)}\}. $
With this definition, the world makes sense again: If $f(n) = 3 n$ and $g(n) = 5 n$, then $f \in O(n)$ and $g \in O(n)$, but there is no equality involved so we can’t make bogus deductions like $3=5$. We can however make the correct observation that $O(n) \subseteq O(n \log n)\subseteq O(n^2) \subseteq O(n^3)$, something that would be difficult to express with the equals sign.
Misconception 2: “Informally, Big-O Means ‘Approximately Equal’"
If an algorithm takes $5 n^2$ seconds to complete, that algorithm is $O(n^2)$ because for the constant $M=7$ and sufficiently large $n$, $5 n^2 \le 7 n^2$. But an algorithm that runs in constant time, say 3 seconds, is also $O(n^2)$ because for sufficiently large $n$, $3 \le n^2$.
So informally, big-O means approximately less than or equal, not approximately equal.
If someone says “Topological Sort, like other sorting algorithms, is $O(n \log n)$", then that is technically correct, but severely misleading, because Topological Sort is also $O(n)$ which is a subset of $O(n \log n)$. Chances are whoever said it meant something false.
If someone says “In the worst case, any comparison based sorting algorithm must make $O(n \log n)$ comparisons” that is not a correct statement. Translated into English it becomes:
“In the worst case, any comparison based sorting algorithm must make fewer than or equal to $M n \log (n)$ comparisons”
which is not true: You can easily come up with a comparison based sorting algorithm that makes more comparisons in the worst case.
To be precise about these things we have other types of notation at our disposal. Informally:
$O()$: Less than or equal, disregarding constants
$\Omega()$: Greater than or equal, disregarding constants
$o()$: Strictly less than, disregarding constants
$\Theta()$: Equal to, disregarding constants
and some more. The correct statement about lower bounds is this: “In the worst case, any comparison based sorting algorithm must make $\Omega(n \log n)$ comparisons.” In English that becomes:
“In the worst case, any comparison based sorting algorithm must make at least $M n \log (n)$ comparisons”
which is true. And a correct, non-misleading statement about Topological Sort is that it is $\Theta(n)$, because it has a lower bound of $\Omega(n)$ and an upper bound of $O(n)$.
Misconception 3: “Big-O is a Statement About Time”
Big-O is used for making statements about functions. The functions can measure time or space or cache misses or rabbits on an island or anything or nothing. Big-O notation doesn’t care.
In fact, when used for algorithms, big-O is almost never about time. It is about primitive operations.
When someone says that the time complexity of MergeSort is $O(n \log n)$, they usually mean that the number of comparisons that MergeSort makes is $O(n \log n)$. That in itself doesn’t tell us what the time complexity of any particular MergeSort might be because that would depend on how much time it takes to make a comparison. In other words, the $O(n \log n)$ refers to comparisons as the primitive operation.
The important point here is that when big-O is applied to algorithms, there is always an underlying model of computation. The claim that the time complexity of MergeSort is $O(n \log n)$, is implicitly referencing a model of computation where a comparison takes constant time and everything else is free.
Which is fine as far as it goes. It lets us compare MergeSort to other comparison based sorts, such as QuickSort or ShellSort or BubbleSort, and in many real situations, comparing two sort keys really does take constant time.
However, it doesn’t allow us to compare MergeSort to RadixSort because RadixSort is not comparison based. It simply doesn’t ever make a comparison between two keys, so its time complexity in the comparison model is 0. The statement that RadixSort is $O(n)$ implicitly references a model in which the keys can be lexicographically picked apart in constant time. Which is also fine, because in many real situations, you actually can do that.
To compare RadixSort to MergeSort, we must first define a shared model of computation. If we are sorting strings that are $k$ bytes long, we might take “read a byte” as a primitive operation that takes constant time with everything else being free.
In this model, MergeSort makes $O(n \log n)$ string comparisons each of which makes $O(k)$ byte comparisons, so the time complexity is $O(k\cdot n \log n)$. One common implementation of RadixSort will make $k$ passes over the $n$ strings with each pass reading one byte, and so has time complexity $O(n k)$.
Misconception 4: Big-O Is About Worst Case
Big-O is often used to make statements about functions that measure the worst case behavior of an algorithm, but big-O notation doesn’t imply anything of the sort.
If someone is talking about the randomized QuickSort and says that it is $O(n \log n)$, they presumably mean that its expected running time is $O(n \log n)$. If they say that QuickSort is $O(n^2)$ they are probably talking about its worst case complexity. Both statements can be considered true depending on what type of running time the functions involved are measuring.
Syndicated 2013-05-16 05:00:56 from ssp
Porter/Duff Compositing and Blend Modes
In the Porter/Duff compositing algebra, images are equipped with an alpha channel that determines on a per-pixel basis whether the image is there or not. When the alpha channel is 1, the image is fully there, when it is 0, the image isn’t there at all, and when it is in between, the image is partially there. In other words, the alpha channel describes the shape of the image, it does not describe opacity. The way to think of images with an alpha channel is as irregularly shaped pieces of cardboard, not as colored glass. Consider these two images:
When we combine them, each pixel of the result can be divided into four regions:
One region where only the source is present, one where only the destination is present, one where both are present, and one where neither is present.
By deciding on what happens in each of the four regions, various effects can be generated. For example, if the destination-only region is treated as blank, the source-only region is filled with the source color, and the ‘both’ region is filled with the destination color like this:
The effect is as if the destination image is trimmed to match the source image, and then held up in front of it:
The Porter/Duff operator that does this is called “Dest Atop”.
There are twelve of these operators, each one characterized by its behavior in the three regions: source, destination and both. The ‘neither’ region is always blank. The source and destination regions can either be blank or filled with the source or destination colors respectively.
The formula for the operators is a linear combination of the contents of the four regions, where the weights are the areas of each region:
$A_\text{src} \cdot [s] + A_\text{dest} \cdot [d] + A_\text{both} \cdot [b]$
Where $[s]$ is either 0 or the color of the source pixel, $[d]$ either 0 or the color of the destination pixel, and $[b]$ is either 0, the color of the source pixel, or the color of the destination pixel. With the alpha channel being interpreted as coverage, the areas are given by these formulas:
$A_\text{src} = \alpha_\text{s} \cdot (1 - \alpha_\text{d})$
$A_\text{dst} = \alpha_\text{d} \cdot (1 - \alpha_\text{s})$
$A_\text{both} = \alpha_\text{s} \cdot \alpha_\text{d}$
The alpha channel of the result is computed in a similar way:
$A_\text{src} \cdot [\text{as}] + A_\text{dest} \cdot [\text{ad}] + A_\text{both} \cdot [\text{ab}]$
where $[\text{as}]$ and $[\text{ad}]$ are either 0 or 1 depending on whether the source and destination regions are present, and where $[\text{ab}]$ is 0 when the ‘both’ region is blank, and 1 otherwise.
Here is a table of all the Porter/Duff operators:
| Operator | $[\text{s}]$ | $[\text{d}]$ | $[\text{b}]$ |
|----------|--------------|--------------|--------------|
| Src | $s$ | $0$ | $s$ |
| Atop | $0$ | $d$ | $s$ |
| Over | $s$ | $d$ | $s$ |
| In | $0$ | $0$ | $s$ |
| Out | $s$ | $0$ | $0$ |
| Dest | $0$ | $d$ | $d$ |
| DestAtop | $s$ | $0$ | $d$ |
| DestOver | $s$ | $d$ | $d$ |
| DestIn | $0$ | $0$ | $d$ |
| DestOut | $0$ | $d$ | $0$ |
| Clear | $0$ | $0$ | $0$ |
| Xor | $s$ | $d$ | $0$ |
And here is how they look:
Despite being referred to as alpha blending and despite alpha often being used to model opacity, in concept Porter/Duff is not a way to blend the source and destination shapes. It is a way to overlay, combine and trim them as if they were pieces of cardboard. The only places where source and destination pixels are actually blended are where the antialiased edges meet.
Blending
Photoshop and the Gimp have a concept of layers which are images stacked on top of each other. In Porter/Duff, stacking images on top of each other is done with the “Over” operator, which is also what Photoshop/Gimp use by default to composite layers:
Conceptually, two pieces of cardboard are held up with one in front of the other. Neither shape is trimmed, and in places where both are present, only the top layer is visible.
A layer in these programs also has an associated Blend Mode which can be used to modify what happens in places where both are visible. For example, the ‘Color Dodge’ blend mode computes a mix of source and destination according to this formula:
$
\begin{equation*}
B(s,d)=
\begin{cases} 0 & \text{if \(d=0\),}
\\
1 & \text{if \(d \ge (1 - s)\),}
\\
d / (1 - s) & \text{otherwise}
\end{cases}
\end{equation*}
$
The result is this:
Unlike with the regular Over operator, in this case there is a substantial chunk of the output where the result is actually a mix of the source and destination.
Layers in Photoshop and Gimp are not tailored to each other (except for layer masks, which we will ignore here), so the compositing of the layer stack is done with the source-only and destination-only region set to source and destination respectively. However, there is nothing in principle stopping us from setting the source-only and destination-only regions to blank, but keeping the blend mode in the ‘both’ region, so that tailoring could be supported alongside blending. For example, we could set the ‘source’ region to blank, the ‘destination’ region to the destination color, and the ‘both’ region to ColorDodge:
Here are the four combinations that involve a ColorDodge blend mode:
In this model the original twelve Porter/Duff operators can be viewed as the results of three simple blend modes:
Source: $B(s, d) = s$
Dest: $B(s, d) = d$
Zero: $B(s, d) = 0$
In this generalization of Porter/Duff the blend mode is chosen from a large set of formulas, and each formula gives rise to four new compositing operators characterized by whether the source and destination are blank or contain the corresponding pixel color.
Here is a table of the operators that are generated by various blend modes:
The general formula is still an area weighted average:
$A_\text{src} \cdot [s] + A_\text{dest} \cdot [d] + A_\text{both}\cdot B(s, d)$
where [s] and [d] are the source and destination colors respectively or 0, but where $B(s, d)$ is no longer restricted to one of $0$, $s$, and $d$, but can instead be chosen from a large set of formulas.
The output of the alpha channel is the same as before:
$A_\text{src} \cdot [\text{as}] + A_\text{dest} \cdot [\text{ad}] +
A_\text{both} \cdot [\text{ab}]$
except that [ab] is now determined by the blend mode. For the Zero blend mode there is no coverage in the both region, so [ab] is 0; for most others, there is full coverage, so [ab] is 1.
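To make the generalization concrete, here is a small sketch (my own, with invented names; this is not pixman's API) of the area-weighted formula for one non-premultiplied channel. Choosing the blend function and blanking the single-source regions is what turns one blend mode into four operators; the Source, Dest and Zero blends reproduce the classic Porter/Duff table above.

```c
/* One channel of the generalized compositing described above, for
 * non-premultiplied colors s, d with coverages as, ad in [0, 1]. */
typedef double (*blend_t)(double s, double d);

double blend_source(double s, double d) { return s; }
double blend_dest(double s, double d)   { return d; }
double blend_zero(double s, double d)   { return 0.0; }

double blend_color_dodge(double s, double d)
{
    if (d == 0.0)
        return 0.0;
    if (d >= 1.0 - s)
        return 1.0;
    return d / (1.0 - s);
}

/* Area-weighted average over the three visible regions.  Setting
 * src_present / dst_present to 0 blanks the source-only or
 * destination-only region. */
double composite(double s, double as, int src_present,
                 double d, double ad, int dst_present,
                 blend_t blend)
{
    double a_src  = as * (1.0 - ad);
    double a_dst  = ad * (1.0 - as);
    double a_both = as * ad;

    return a_src  * (src_present ? s : 0.0)
         + a_dst  * (dst_present ? d : 0.0)
         + a_both * blend(s, d);
}
```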
Syndicated 2013-05-16 05:00:56 from ssp
17 Mar 2013 (updated 25 Mar 2013 at 13:22 UTC) »
Porter/Duff Compositing and Blend Modes
In the Porter/Duff compositing algebra, images are equipped with an alpha channel that determines on a per-pixel basis whether the image is there or not. When the alpha channel is 1, the image is fully there, when it is 0, the image isn’t there at all, and when it is in between, the image is partially there. In other words, the alpha channel describes the shape of the image, it does not describe opacity. The way to think of images with an alpha channel is as irregularly shaped pieces of cardboard, not as colored glass. Consider these two images:
When we combine them, each pixel of the result can be divided into four regions:
One region where only the source is present, one where only the destination is present, one where both are present, and one where neither is present.
By deciding on what happens in each of the four regions, various effects can be generated. For example, if the destination-only region is treated as blank, the source-only region is filled with the source color, and the ‘both’ region is filled with the destination color like this:
The effect is as if the destination image is trimmed to match the source image, and then held up in front of it:
The Porter/Duff operator that does this is called “Dest Atop”.
There are twelve of these operators, each one characterized by its behavior in the three regions: source, destination and both. The ‘neither’ region is always blank. The source and destination regions can either be blank or filled with the source or destination colors respectively.
The formula for the operators is a linear combination of the contents of the four regions, where the weights are the areas of each region:
$$A_\text{src} \cdot [s] + A_\text{dest} \cdot [d] + A_\text{both} \cdot [b]$$
Where $$[s]$$ is either 0 or the color of the source pixel, $$[d]$$ either 0 or the color of the destination pixel, and $$[b]$$ is either 0, the color of the source pixel, or the color of the destination pixel. With the alpha channel being interpreted as coverage, the areas are given by these formulas:
$$A_\text{src} = \alpha_\text{s} \cdot (1 – \alpha_\text{d})$$
$$A_\text{dst} = \alpha_\text{d} \cdot (1 – \alpha_\text{s})$$
$$A_\text{both} = \alpha_\text{s} \cdot \alpha_\text{d}$$
The alpha channel of the result is computed in a similar way:
$$A_\text{src} \cdot [\text{as}] + A_\text{dest} \cdot [\text{ad}] + A_\text{both} \cdot [\text{ab}]$$
where $$[\text{as}]$$ and $$[\text{ad}]$$ are either 0 or 1 depending on whether the source and destination regions are present, and where $$[\text{ab}]$$ is 0 when the ‘both’ region is blank, and 1 otherwise.
Here is a table of all the Porter/Duff operators:
| | | | |
|----------|----------------|----------------|----------------|
| | $$[\text{s}]$$ | $$[\text{d}]$$ | $$[\text{b}]$$ |
| Src | $$s$$ | $$0$$ | s |
| Atop | $$0$$ | $$d$$ | s |
| Over | $$s$$ | $$d$$ | s |
| In | $$0$$ | $$0$$ | s |
| Out | $$s$$ | $$0$$ | $$0$$ |
| Dest | $$0$$ | $$d$$ | d |
| DestAtop | $$s$$ | $$0$$ | d |
| DestOver | $$s$$ | $$d$$ | d |
| DestIn | $$0$$ | $$0$$ | d |
| DestOut | $$0$$ | $$d$$ | $$0$$ |
| Clear | $$0$$ | $$0$$ | $$0$$ |
| Xor | $$s$$ | $$d$$ | $$0$$ |
And here is how they look:
Despite being referred to as alpha blending and despite alpha often being used to model opacity, in concept Porter/Duff is not a way to blend the source and destination shapes. It is way to overlay, combine and trim them as if they were pieces of cardboard. The only places where source and destination pixels are actually blended is where the antialiased edges meet.
Blending
Photoshop and the Gimp have a concept of layers which are images stacked on top of each other. In Porter/Duff, stacking images on top of each other is done with the “Over” operator, which is also what Photoshop/Gimp use by default to composite layers:
Conceptually, two pieces of cardboard are held up with one in front of the other. Neither shape is trimmed, and in places where both are present, only the top layer is visible.
A layer in these programs also has an associated Blend Mode which can be used to modify what happens in places where both are visible. For example, the ‘Color Dodge’ blend mode computes a mix of source and destination according to this formula:
\(\begin{equation*}
B(s,d)=
\begin{cases} 0 & \text{if $$d=0$$,}
\\
1 & \text{if $$d \ge (1 – s)$$,}
\\
d / (1 – s) & \text{otherwise}
\end{cases}
\end{equation*}\)
The result is this:
Unlike with the regular Over operator, in this case there is a substantial chunk of the output where the result is actually a mix of the source and destination.
Layers in Photoshop and Gimp are not tailored to each other (except for layer masks, which we will ignore here), so the compositing of the layer stack is done with the source-only and destination-only region set to source and destination respectively. However, there is nothing in principle stopping us from setting the source-only and destination-only regions to blank, but keeping the blend mode in the ‘both’ region, so that tailoring could be supported alongside blending. For example, we could set the ‘source’ region to blank, the ‘destination’ region to the destination color, and the ‘both’ region to ColorDodge:
Here are the four combinations that involve a ColorDodge blend mode:
In this model the original twelve Porter/Duff operators can be viewed as the results of three simple blend modes:
Source: $$B(s, d) = s$$
Dest: $$B(s, d) = d$$
Zero: $$B(s, d) = 0$$
In this generalization of Porter/Duff the blend mode is chosen from a large set of formulas, and each formula gives rise to four new compositing operators characterized by whether the source and destination are blank or contain the corresponding pixel color.
Here is a table of the operators that are generated by various blend modes:
The general formula is still an area weighted average:
$$A_\text{src} \cdot [s] + A_\text{dest} \cdot [d] + A_\text{both}\cdot B(s, d)$$
where [s] and [d] are the source and destination colors respectively or 0, but where $$B(s, d)$$ is no longer restricted to one of $$0$$, $$s$$, and $$d$$, but can instead be chosen from a large set of formulas.
The output of the alpha channel is the same as before:
$$A_\text{src} \cdot [\text{as}] + A_\text{dest} \cdot [\text{ad}] + A_\text{both} \cdot [\text{ab}]$$
except that [ab] is now determined by the blend mode. For the Zero blend mode there is no coverage in the both region, so [ab] is 0; for most others, there is full coverage, so [ab] is 1.
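A sketch of the generalized operator in the same style (again illustrative rather than production code), with the 'both' region computed by an arbitrary blend function such as the ColorDodge formula given earlier:

```python
# Generalized area-weighted compositing: the overlap is computed by B(s, d),
# while the source-only and destination-only regions can independently be
# kept or set to blank, giving the four combinations discussed above.

def color_dodge(s, d):
    if d == 0.0:
        return 0.0
    if d >= 1.0 - s:
        return 1.0
    return d / (1.0 - s)

def composite_blend(B, s, alpha_s, d, alpha_d, keep_src=True, keep_dest=True):
    a_src = alpha_s * (1.0 - alpha_d)
    a_dest = alpha_d * (1.0 - alpha_s)
    a_both = alpha_s * alpha_d
    color = (a_src * (s if keep_src else 0.0)
             + a_dest * (d if keep_dest else 0.0)
             + a_both * B(s, d))
    alpha = (a_src * (1 if keep_src else 0)
             + a_dest * (1 if keep_dest else 0)
             + a_both * 1)   # most blend modes give full coverage in 'both'
    return color, alpha

# Layer-style compositing (both untrimmed regions kept) with ColorDodge in the overlap:
print(composite_blend(color_dodge, s=0.3, alpha_s=0.8, d=0.5, alpha_d=0.6))
```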
Syndicated 2013-03-17 18:50:24 (Updated 2013-03-25 13:06:40) from Søren Sandmann Pedersen
15 Oct 2012 (updated 19 Oct 2012 at 09:08 UTC) »
Big-O Misconceptions
In computer science and sometimes mathematics, big-O notation is used
to talk about how quickly a function grows while disregarding multiplicative and additive constants. When classifying algorithms, big-O notation is useful because it lets us abstract away the differences between real computers as just multiplicative and additive constants.
Big-O is not a difficult concept at all, but it seems to be common even for people who should know better to misunderstand some aspects of it. The following is a list of misconceptions that I have seen in the wild.
But first a definition: We write $$f(n) = O(g(n))$$ when $$f(n) \le M g(n)$$ for sufficiently large $$n$$, for some positive constant $$M$$.
Misconception 1: “The Equals Sign Means Equality”
The equals sign in $$f = O(g(n))$$ is a widespread travesty. If you take it at face value, you can deduce that since $$5 n$$ and $$3 n$$ are both equal to $$O(n)$$, then $$3 n$$ must be equal to $$5 n$$ and so $$3 = 5$$.
The expression $$f = O(g(n))$$ doesn’t type check. The left-hand-side is a function, the right-hand-side is a … what, exactly? There is no help to be found in the definition. It just says “we write” without concerning itself with the fact that what “we write” is total nonsense.
The way to interpret the right-hand side is as a set of functions: $$O(f) = \{ g \mid g(n) \le M f(n) \text{ for some $$M > 0$$ for large $$n$$}\}.$$ With this definition, the world makes sense again: If $$f(n) = 3 n$$ and $$g(n) = 5 n$$, then $$f \in O(n)$$ and $$g \in O(n)$$, but there is no equality involved so we can’t make bogus deductions like $$3=5$$. We can however make the correct observation that $$O(n) \subseteq O(n \log n)\subseteq O(n^2) \subseteq O(n^3)$$, something that would be difficult to express with the equals sign.
Misconception 2: “Informally, Big-O Means ‘Approximately Equal’”
If an algorithm takes $$5 n^2$$ seconds to complete, that algorithm is $$O(n^2)$$ because for the constant $$M=7$$ and sufficiently large $$n$$, $$5 n^2 \le 7 n^2$$. But an algorithm that runs in constant time, say 3 seconds, is also $$O(n^2)$$ because for sufficiently large $$n$$, $$3 \le n^2$$.
So informally, big-O means approximately less than or equal, not approximately equal.
If someone says “Topological Sort, like other sorting algorithms, is O(n log n)”, then that is technically correct, but severely misleading, because Topological Sort is also $$O(n)$$ which is a subset of $$O(n \log n)$$. Chances are whoever said it meant something false.
If someone says “In the worst case, any comparison based sorting algorithm must make $$O(n \log n)$$ comparisons” that is not a correct statement. Translated into English it becomes:
“In the worst case, any comparison based sorting algorithm must make fewer than or equal to $$M n \log (n)$$ comparisons”
which is not true: You can easily come up with a comparison based sorting algorithm that makes more comparisons in the worst case.
To be precise about these things we have other types of notation at our disposal. Informally:
$$O()$$: Less than or equal, disregarding constants
$$\Omega()$$: Greater than or equal, disregarding constants
$$o()$$: Stricly less than, disregarding constants
$$\Theta()$$: Equal to, disregarding constants
and some more. The correct statement about lower bounds is this: “In the worst case, any comparison based sorting algorithm must make $$\Omega(n \log n)$$ comparisons.” In English that becomes:
“In the worst case, any comparison based sorting algorithm must make at least $$M n \log (n)$$ comparisons”
which is true. And a correct, non-misleading statement about Topological Sort is that it is $$\Theta(n)$$, because it has a lower bound of $$\Omega(n)$$ and an upper bound of $$O(n)$$.
Misconception 3: “Big-O is a Statement About Time”
Big-O is used for making statements about functions. The functions can measure time or space or cache misses or rabbits on an island or anything or nothing. Big-O notation doesn’t care.
In fact, when used for algorithms, big-O is almost never about time. It is about primitive operations.
When someone says that the time complexity of MergeSort is $$O(n \log n)$$, they usually mean that the number of comparisons that MergeSort makes is $$O(n \log n)$$. That in itself doesn’t tell us what the time complexity of any particular MergeSort might be because that would depend how much time it takes to make a comparison. In other words, the $$O(n \log n)$$ refers to comparisons as the primitive operation.
The important point here is that when big-O is applied to algorithms, there is always an underlying model of computation. The claim that the time complexity of MergeSort is $$O(n \log n)$$ is implicitly referencing a model of computation where a comparison takes constant time and everything else is free.
Which is fine as far as it goes. It lets us compare MergeSort to other comparison based sorts, such as QuickSort or ShellSort or BubbleSort, and in many real situations, comparing two sort keys really does take constant time.
However, it doesn’t allow us to compare MergeSort to RadixSort because RadixSort is not comparison based. It simply doesn’t ever make a comparison between two keys, so its time complexity in the comparison model is 0. The statement that RadixSort is $$O(n)$$ implicitly references a model in which the keys can be lexicographically picked apart in constant time. Which is also fine, because in many real situations, you actually can do that.
To compare RadixSort to MergeSort, we must first define a shared model of computation. If we are sorting strings that are $$k$$ bytes long, we might take “read a byte” as a primitive operation that takes constant time with everything else being free.
In this model, MergeSort makes $$O(n \log n)$$ string comparisons each of which makes $$O(k)$$ byte comparisons, so the time complexity is $$O(k n \log n)$$. One common implementation of RadixSort will make $$k$$ passes over the $$n$$ strings with each pass reading one byte, and so has time complexity $$O(n k)$$.
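As a concrete (and purely illustrative) way to see the dependence on the model of computation, the following sketch counts string comparisons for a merge sort and byte reads for an LSD radix sort; the counts, not the wall-clock time, are what the two $$O(\cdot)$$ statements above refer to.

```python
# Count primitive operations under two different models: string comparisons
# for merge sort, byte reads for an LSD radix sort over n strings of k bytes.

import random, string

def merge_sort(xs, counter):
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid], counter), merge_sort(xs[mid:], counter)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter['comparisons'] += 1
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def radix_sort(xs, k, counter):
    for pos in reversed(range(k)):            # k passes, one byte per pass
        buckets = [[] for _ in range(256)]
        for x in xs:
            counter['byte_reads'] += 1
            buckets[ord(x[pos])].append(x)
        xs = [x for b in buckets for x in b]
    return xs

n, k = 1000, 8
data = [''.join(random.choices(string.ascii_lowercase, k=k)) for _ in range(n)]
c1, c2 = {'comparisons': 0}, {'byte_reads': 0}
assert merge_sort(data, c1) == sorted(data) == radix_sort(data, k, c2)
print(c1, c2)   # roughly n log n comparisons vs. exactly n*k byte reads
```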
Misconception 4: Big-O Is About Worst Case
Big-O is often used to make statements about functions that measure the worst case behavior of an algorithm, but big-O notation doesn’t imply anything of the sort.
If someone is talking about the randomized QuickSort and says that it is $$O(n \log n)$$, they presumably mean that its expected running time is $$O(n \log n)$$. If they say that QuickSort is $$O(n^2)$$ they are probably
talking about its worst case complexity. Both statements can be considered true depending on what type of running time the functions involved are measuring.
Syndicated 2012-10-15 09:16:39 (Updated 2012-10-19 08:11:32) from Søren Sandmann Pedersen
8 Sep 2012 (updated 15 Sep 2012 at 00:09 UTC) »
Sysprof 1.2.0
A new stable release of Sysprof is now available. Download version 1.2.0.
Syndicated 2012-09-08 21:32:30 (Updated 2012-09-15 00:00:28) from Søren Sandmann Pedersen
http://mathhelpforum.com/calculus/106228-product-rule-terms-implicit-differentiation.html
# Thread:
1. ## Product rule in terms of implicit differentiation
1) x^2 + xy - y^2 = 4
x^2 + x*y(x) - y(x)^2 = 4
I understand this much: 2x + ____ + 2y*(dy/dx) = 0
But I don't understand how to differentiate the middle portion properly. I'm assuming it's just the product rule, but doing it didn't lead me to the right answer.
2) x^4(x + y) = y^2(3x - y)
Product rule again?
2. Originally Posted by CFem
1) x^2 + xy - y^2 = 4
x^2 + x*y(x) - y(x)^2 = 4
I understand this much: 2x + ____ + 2y*(dy/dx) = 0
But I don't understand how to differentiate the middle portion properly. I'm assuming it's just the product rule, but doing it didn't lead me to the right answer.
$u = x \: \rightarrow \: u' = 1$
$v = y \: \rightarrow \: v' = \frac{dy}{dx}$
$\frac{d}{dx}(xy) = u'v + uv' = y + x\frac{dy}{dx}$
Isolate the dy/dx terms and solve for dy/dx
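For reference, carrying those steps through (this completion is my own, not part of the original reply):

$2x + y + x\frac{dy}{dx} - 2y\frac{dy}{dx} = 0$

$\frac{dy}{dx}(x - 2y) = -(2x + y)$

$\frac{dy}{dx} = \frac{2x + y}{2y - x}$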
http://www.physicsforums.com/showthread.php?t=153322&page=3
## Derivation of the Equation for Relativistic Mass
Quote by bernhard.rothenstein Start with the identity (Rosser) g(u)=g(V)g(u')(1+u'V/cc) (1) Multiply both its sides with m (invariant rest mass) in order to obtain mg(u)=g(V)mg(u')(1+u'V/cc). (2) Find out names for mg(u), mg(u') and mg(u')u' probably in accordance with theirs physical dimensions. In order to avoid criticism on this Forum avoid the name relativistic mass for mg(u) and mg(u) using instead E=mc^2g(u) and E'=mc^2g(u') calling them in accordance with theirs physical dimensions relativistic energy in I and I' respectively using for p=Eu/c^2 and p'=E'u'/c^2 the names of relativistic energy. Consider a simple collision from I and I' in order to convince yourself that they lead to results in accordance with conservation of momentum and energy. Is there more to say? sine ira et studio
This seems to lead back to Tolman's derivation mentioned above. Same equations , you would need to add the momentum conservation in the collision as you mention.
Quote by nakurusil This seems to lead back to Tolman's derivation mentioned above. Same equations , you would need to add the momentum conservation in the collision as you mention.
Thanks. Please let me know in which way does the derivation I have presented leads back to Tolman as not involving conservation of momentum and energy?
Mentor
Quote by bernhard.rothenstein Start with the identity (Rosser) g(u)=g(V)g(u')(1+u'V/cc) (1) Multiply both its sides with m (invariant rest mass) in order to obtain mg(u)=g(V)mg(u')(1+u'V/cc). (2) Find out names for mg(u), mg(u') and mg(u')u' probably in accordance with theirs physical dimensions. In order to avoid criticism on this Forum avoid the name relativistic mass for mg(u) and mg(u) using instead E=mc^2g(u) and E'=mc^2g(u') calling them in accordance with theirs physical dimensions relativistic energy in I and I' respectively using for p=Eu/c^2 and p'=E'u'/c^2 the names of relativistic energy. Consider a simple collision from I and I' in order to convince yourself that they lead to results in accordance with conservation of momentum and energy. Is there more to say? sine ira et studio
This looks interesting, but I don't understand what you're doing here. In particular, what is the definition of the function g, and how did you obtain identity (1)?
Quote by Fredrik This looks interesting, but I don't understand what you're doing here. In particular, what is the definition of the function g, and how did you obtain identity (1)?
thanks. g(u) and g(u') stand for the gamma factor in the inertial reference frames I and I'. We obtain the relativistic identity by expressing g(u) as a function of u' via the addition law of relativistic velocities (see W.G.V. Rosser, "Classical Electromagnetism via Relativity", London: Butterworth, 1968, pp. 165-173).
For more details please have a critical look at
arXiv.org > physics > physics/0605203
Date: Tue, 23 May 2006 22:28:50 GMT (259kb)
Relativistic dynamics without conservation laws
Subj-class: Physics Education
We show that relativistic dynamics can be approached without using conservation laws (conservation of momentum, of energy and of the centre of mass). Our approach avoids collisions that are not easy to teach without mnemonic aids. The derivations are based on the principle of relativity and on its direct consequence, the addition law of relativistic velocities.
Full-text: PDF only
I would highly appreciate your experience with it.
Quote by Fredrik This looks interesting, but I don't understand what you're doing here. In particular, what is the definition of the function g, and how did you obtain identity (1)?
I showed you how (1) is obtained in my first post about Tolman's solution (the one that you keep trying to re-explain to me). <<mentor snip>> You get it from the fact that the Lorentz transforms satisfy the condition:
L(u)*L(v)=L(w), where w=(u+v)/(1+uv/c^2) and
$$L(v)=\gamma(v)\begin{pmatrix} 1 & -v \\ -v/c^2 & 1 \end{pmatrix}$$
Try it, you might even be able to calculate it all by yourself.
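If you want to check it numerically rather than by hand, here is a small sketch (my own check, with illustrative values and c set to 1):

```python
# Numerical check of the composition rule L(u)L(v) = L(w) with
# w = (u+v)/(1+uv/c^2), and of the identity gamma(w) = gamma(u)gamma(v)(1+uv/c^2).
import numpy as np

def gamma(s, c=1.0):
    return 1.0 / np.sqrt(1.0 - (s / c) ** 2)

def L(s, c=1.0):
    return gamma(s, c) * np.array([[1.0, -s], [-s / c**2, 1.0]])

u, v, c = 0.3, 0.5, 1.0
w = (u + v) / (1 + u * v / c**2)
print(np.allclose(L(u) @ L(v), L(w)))                              # True
print(np.isclose(gamma(w), gamma(u) * gamma(v) * (1 + u * v / c**2)))  # True
```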
Quote by bernhard.rothenstein Thanks. Please let me know in which way does the derivation I have presented leads back to Tolman as not involving conservation of momentum and energy?
You use a hypothetical collision experiment
You use the conservation of momentum equation as "seen" from two different frames (you call them I and I', right?).
This is the formalism employed by Tolman about 100 years ago.
Mentor
Quote by bernhard.rothenstein g(u) and g(u') stand for the gamma factor in the inertial reference frames I and I. We obtain the relativistic identity by expressing g(u) as a function of u' via the addition law of relativistic velocities...
I haven't had time to look at this yet. Maybe tomorrow.
Quote by nakurusil You follow the same steps. You use a hypothetical collision experiment You use the conservation of momentum equation as "seen" from two different frames (you call them I and I', right?). This is the formalism employed by Tolman about 100 years ago.
With all respect, I think that in the derivation I proposed, "collision" and "conservation laws" are not mentioned, and so it has nothing in common with Tolman but it has with the 100-year-old special relativity.
Quote by bernhard.rothenstein With all respect, I think that in the derivation I proposed, "collision" and "conservation laws" are not mentioned, and so it has nothing in common with Tolman but it has with the 100-year-old special relativity.
It's been a while, I left this thread because of the now fortunately deleted bickering. However, having once again picked up this subject, no matter how useless, I've stumbled upon a problem. I refer to this post: http://www.physicsforums.com/showpos...3&postcount=21 It states: $$m(u_1)u_1+m(u_2)u_2 = (m(u_1)+m(u_2))V$$ However, since after the collision the speeds are no longer $$u_1$$ and $$u_2$$, how can you say that the relativistic mass of the "new" object is simply $$m(u_1)+m(u_2)$$?
Though he only had time for a quick glance, my physics teacher agreed that this looked incorrect. He also suggested I should use the following equation to derive the equation for relativistic mass (I was too lazy to LaTeX, so here's a link): http://hyperphysics.phy-astr.gsu.edu...grel/emcpc.gif I'm not sure how to go about that, though. If anyone has any comment on why that which I think to be false is true or any suggestion on how to go about this using the referred equation, I'd much appreciate it.
Quote by NanakiXIII It's been a while, I left this thread because of the now fortunately deleted bickering. However, having once again picked up this subject, no matter how useless, I've stumbled upon a problem. I refer to this post: http://www.physicsforums.com/showpos...3&postcount=21 It states: $$m(u_1)u_1+m(u_2)u_2 = (m(u_1)+m(u_2))V$$ However, since after the collision the speeds are no longer $$u_1$$ and $$u_1$$, how can you say that the relativistic mass of the "new" object is simply $$m(u_1)+m(u_2)$$?
Interesting, it appears that you found a weakness in Tolman's derivation. Since Tolman surmises that the two masses will have zero speed in S' (in order to move together with speed V wrt S after their collision) you would expect :
$$m(u_1)u_1+m(u_2)u_2 = (m(0)+m(0))V$$
However,I think that Tolman must have assumed that mass cannot vary non-continously (from $$m_1(u_1)$$ to $$m_1(0)$$), thus justifying his use of :
$$m(u_1)u_1+m(u_2)u_2 = (m(u_1)+m(u_2))V$$
Granted, this is a very weak argument, so the best thing is to dump the blasted "relativistic mass" altogether, as I mentioned in the opening statement. The whole darned thing was introduced in order to reconcile the relativistic momentum/energy:
$$p=\gamma m(0)v$$ (1)
$$E=\gamma m(0)c^2$$
with the Newtonian counterpart:
$$p=mv$$ (2)
So the best thing is to tell your teacher that your proof is that you grouped together $$\gamma$$ and the proper mass m(0) into $$\gamma m(0)$$ and assigned that quantity to m.
Quote by nakurusil Interesting, it appears that you found a weakness in Tolman's derivation. Since Tolman surmises that the two masses will have zero speed in S' (in order to move together with speed V wrt S after their collision) you would expect : $$m(u_1)u_1+m(u_2)u_2 = (m(0)+m(0))V$$
But since $$u_1$$ and $$u_2$$ are the speeds in S, why are you using the speed in S' after the collision?
Quote by nakurusil However,I think that Tolman must have assumed that mass cannot vary non-continously (from $$m_1(u_1)$$ to $$m_1(0)$$), thus justifying his use of :
Why should it vary non-continuously? For the relativistic mass to change non-continuously, wouldn't the velocity have to as well? I don't see why it would, it simply changes due to the collision, quite continuously, right?
Quote by NanakiXIII But since $$u_1$$ and $$u_2$$ are the speeds in S, why are you using the speed in S' after the collision?
I see, correction:
$$m_1(u_1)u_1+m_2(u_2)u_2=(m_1(V)+m_2(V))V$$
Doesn't change anything, the derivation is still flawed.
Why should it vary non-continuously?
Because the formula above would imply that m_1(u_1) before collision becomes m_1(V) after collision, and this would entail a discontinuity. This is why Tolman must be silently assuming that m_1 before and after the collision are exactly the same. In fact, he writes:
$$m_1u_1+m_2u_2=(m_1+m_2)V$$
So, Tolman must be assuming $$m_1(u_1)$$ before and after the collision in his formula. Otherwise, the derivation falls apart.
Conclusion: use the derivation based on relativistic momentum I gave you twice.
Quote by nakurusil Because the formula above would imply that m_1(u_1) before collision becomes m_1(V) after collision and this would entail a discontinuity.
This I don't understand. Changing $$u_1$$ to $$V$$ doesn't break continuity, does it? There is just an acceleration. The collision isn't an instantaneous event. So if the velocity changes continuously, why wouldn't the relativistic mass?
Quote by nakurusil Conclusion: use the derivation based on relativistic momentum I gave you twice.
I'm probably overlooking it or something, but I'll risk looking stupid and ask: what derivation are you talking about?
Quote by NanakiXIII This I don't understand. Changing $$u_1$$ to $$V$$ doesn't break continuity, does it? There is just an acceleration. The collision isn't an instantaneous event. So if the velocity changes continuously, why wouldn't the relativistic mass?
You got things backwards, if $$m_1(u_1)$$ before collision (LHS) is different from $$m_1(V)$$ after collision (RHS), Tolman's derivation falls apart. So, the only out for his derivation is that $$m_1$$ before and after collision are the same. THOUGH the $$m_1$$ speed has jumped from $$u_1$$ to V
I'm probably overlooking it or something, but I'll risk looking stupid and ask: what derivation are you talking about?
http://www.physicsforums.com/showpos...3&postcount=46
Look at the bottom.
Try starting with:
$$ds^2 = (c\,dt)^2 - dx^2$$
Divide through by $$dt^2$$ and rearrange:
$$c^2 = v^2 + (ds/dt)^2$$
Multiply by $$m^2$$ to get $$p^2$$:
$$(E/c)^2 = p^2 + (m\,ds/dt)^2$$
Make the last term a constant:
$$m\,ds/dt = E_0/c$$
And arrive at:
$$mc = E_0\,dt/ds$$
Which, from the 2nd equation above, is:
$$mc^2 = E_0/\sqrt{1-(v/c)^2} = E$$
The Lorentz transformations come from the invariance of $$ds$$. The mass behaves just so that the rest energy is invariant.
http://mathoverflow.net/questions/85920?sort=oldest
## Least Prime Factors: found a counting formula for a given range — what is the standard approach?
Hi Everyone,
I am a math amateur who for the past year has been working on better understanding Bertrand's Postulate, the Ramanujan Primes, and the recent expansion of Bertrand's Postulate (always a prime between 2x and 3x and always a prime between 3x and 4x) using elementary methods.
I've been working with least prime factors and primorials and I came up with a counting formula that I have not seen elsewhere. It is quite similar to the standard prime counting formula using floor functions and it is using elementary methods so it is most likely uninteresting. I hope you don't mind me posting a sketch of it here.
I am presenting it here in hopes that experts can steer me to more modern analytic methods that accomplish the same thing in a better way. I would also be interested in understanding why the new methods are superior to the elementary methods.
The counting formula accomplishes the following:
Let $p_k$ be any prime. The formula provides an exact count of the number of least prime factors greater than $p_k$ in the range $r_{start}$ (exclusive) and $r_{end}$ (inclusive).
The formula consists of $2^{k-2}$ subformulas where each subformula looks something like this:
Least Prime Factor (5 or greater) between $x_{start}$ and $x_{end}$ =
$2\lfloor\frac{x_{end}}{6}\rfloor + \lfloor\frac{(x_{end} \% 6) + 3}{4}\rfloor - 2\lfloor\frac{x_{start}}{6}\rfloor - \lfloor\frac{(x_{start} \% 6) + 3}{4}\rfloor$
where $x_{end} \% 6$ is the value congruent to $x$ modulo $6$.
Note: The above formula, for example, is the expression for finding the number of least prime factors greater than $3$ in the range $r_{start}$ to $r_{end}$.
To give another example, if I wanted to count the number of least prime factors greater than $p_{6} = 13$, then the formula consists of $2^{6-2} = 16$ subformulas where each subformula is roughly similar to the example above.
Thanks very much.
## 1 Answer
First I'll toot my own horn.
There is still some work left for elementary and near elementary methods to accomplish. Based on your description, I think your formulas say something about the distribution of numbers coprime to the kth primorial. I have been working on something similar, and part of the path has led me to finding some elementary arguments which improve on part of the literature. I tell some of the story in the MathOverflow question http://mathoverflow.net/questions/37679/erik-westzynthiuss-cool-upper-bound-argument-update . The question title refers to a nice argument which serves as an introduction to sieve theory, and shows the potential for getting a handle on something as unwieldy as the distribution of primes which has a lot of regular and fractal behaviour occurring in its development.
If you are interested in this type of mathematics, you could do worse than to read through that question and the answers to it. If you want to know about general lower bound results, the Westzynthius paper has a nice construction which will produce gaps between primes which are larger than average, also using elementary means; it was the first published construction to show that for any constant C there are infinitely many k such that $p_{k+1} \gt C \log p_k + p_k$. You might even find a way to make elementary improvements on the arguments, as well as search the literature to find improvements by Rankin, Erdos, and others. (If you are patient, you can wait for a writeup I am doing which includes an interpretation of the key results of the paper.)
I am somewhat interested in the result of yours, but I suspect that I speak for others as well as myself when I say I would prefer a single approximation (or a small system of equations that I could numerically compute) to an exponential family of formulas I would need to calculate one value precisely. I believe that is one advance of sieve theory over elementary methods: it caters to such a preference. I don't know of any very accessible literature on the subject, but I sometimes refer to Cojocaru and Murty's book, and those more familiar with the literature may come with their recommendations. If you and I are fortunate, we may hear from the likes of zeb or quid, whose opinions I believe are more informed than mine on this subject.
Disclaimer: my training is not in analytic number theory; I suspect my perspective on the subject is much the same as yours.
Gerhard "But That Doesn't Stop Me" Paseman, 2012.01.17
Hi Gerhard, Thanks very much for the link! I look forward to reading your question and the answer to it. My method is very straight forward. It consists of the following recurrence relation: The number of LPF for $p_k$ between $x_{start}$ and $x_{end}$ is: $LPF_{p_k}(x_{end}) − LPF_{p_k}(x_{start})$ $LPF_{p_k}(x)=LPF_{p_{k−1}}(x)−LPF_{p_{k−1}}(⌊\frac{x}{p_k}⌋)$ $LPF_3(x)=2⌊\frac{x}{6}⌋+⌊\frac{(x \% 6) +3}{4}⌋$ – Larry Freeman Jan 18 2012 at 4:28
In my first approach on the upper bound problem, I decided to extend the argument to thinner sets and define a series of error functions; they satisfy a recursion similar to your LPF recursion. Also, similar relations appear implicitly in Legendre's analysis of the prime counting function, so I am confident that your relations are used if not explicitly stated in the number theory literature. Gerhard "Ask Me About System Design" Paseman, 2012.01.23 – Gerhard Paseman Jan 23 2012 at 8:20
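For anyone who wants to experiment with the recurrence quoted in the first comment above, here is a small Python sketch. It reflects my own reading of the formulas, namely that LPF_{p_k}(x) counts the integers in [1, x] whose least prime factor exceeds p_k (with 1 included); the brute-force check is only there to sanity-test that reading.

```python
# lpf(primes, k, x): count of integers in [1, x] with no prime factor <= primes[k],
# using the recurrence and the base formula quoted in the comment above.

def lpf(primes, k, x):
    """primes = [2, 3, 5, 7, ...]; requires primes[k] >= 3."""
    if primes[k] == 3:
        return 2 * (x // 6) + (x % 6 + 3) // 4
    return lpf(primes, k - 1, x) - lpf(primes, k - 1, x // primes[k])

def brute(pk, r_start, r_end):
    """Direct count over the range (r_start, r_end]."""
    def no_small_factor(n):
        return all(n % p for p in range(2, pk + 1))
    return sum(no_small_factor(n) for n in range(r_start + 1, r_end + 1))

primes = [2, 3, 5, 7, 11, 13]
k = 3   # p_k = 7
print(lpf(primes, k, 1000) - lpf(primes, k, 100), brute(primes[k], 100, 1000))
```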
http://math.stackexchange.com/questions/tagged/schemes+complex-geometry
# Tagged Questions
### Pullback of very ample sheaf again very ample? And other questions.
Let $S \subseteq \mathbb{P}^n$ be a smooth projective surface with given embedding in projective space. Moreover, let $X$ be another smooth surface and let there be a map $\pi: X \rightarrow S$ that ...
### (Continued:) finiteness of étale morphisms
I was writing a question, it became too long, and i decided to split it into two parts. I hope posting two questions at the same time is not a problem. First question: Checking flat- and smoothness: ...
### Checking flat- and smoothness: enough to check on closed points?
I am currently studying varieties over $\mathbb{C}$, i know some scheme theory. Let $f: X \rightarrow Y$ be a morphism of varieties. If we want to show flatness, is it enough to check the condition ...
### Fibres in algebraic geometry: multiplicity
Currently I am studying varieties over $\mathbb{C}$ and I know some scheme theory. My professor mentioned the other day that given a morphism of varieties over an alg. closed field $k$: \$f: X ...
http://math.stackexchange.com/questions/244944/equivalence-of-definition-for-polarized-k3?answertab=active
# Equivalence of definition for polarized K3
In the literature there are two different definitions of polarized K3 surfaces.
1) A polarized K3 is the data $(X,\omega)$. Where $X$ is a K3 surface and $\omega$ is an ample class in $H^2(X,\mathbb{Z})$.
2) A polarized K3 is the data of a K3 surface $X$ together with a polarization on the Hodge decomposition $H^2(X,\mathbb{Z})$, which is a pairing $H^2(X,\mathbb{Z}) \times H^2(X,\mathbb{Z}) \rightarrow \mathbb{Z}$. Such a polarization is given by the intersection pairing: $$(v,w) \mapsto \int_X v \wedge w\;.$$
This, together with an observation from Huybrechts lecture notes on K3 surfaces page 40:
...it is not the intersection pairing that defines a polarization, but the pairing that is obtained from it by changing the sign of the intersection pairing for an ample class.
justifies the following question.
Question: How does the choice of an ample class $\omega$ comes into the definition of the intersection pairing?
It is clear that we need such a class to define a polarization on lower degree cohomology groups, for example on $H^1(X,\mathbb{Z})$. And the well definition of the intersection pairing is related to the existence of an ample class (i.e. the projectivity of $X$). But I don't see how the intersection pairing would change changing the choice of the ample class.
It is possible (or even probable) that I'm making a huge mess out of nothing, but I'm quite confused. Thank you in advance!
## 1 Answer
The intersection pairing on $H^2$ is non-degenerate (by Poincare duality) and is symmetric (we are considering even-dimensional cohomology), but it is not positive definite. One can "change the sign" of the intersection pairing to make it positive definite, but the change in sign is not uniform (i.e. there is not a single sign you have to change it by).
Roughly speaking, in order to define a polarization on $H^2$, you have to decompose $H^2$ into a direct sum of the span of the ample class and the primitive cohomology, and consider each one separately. For more details, see this MO answer. (So your second definition of polarization is not correct; a polarization is not just given by the intersection pairing; it is a modification of the intersection pairing to achieve certain positivity properties, and the modification depends on the choice of ample class.)
If you want to learn more about this, the keywords are primitive cohomology, Lefschetz decomposition, and Hodge--Riemann bilinear relations. Unfortunately wikipedia doesn't seem to give much detail on these topics; Griffiths and Harris has a brief sketch near the end of their discussion Hodge theory in Chapter 0.
Thank you very much, now I see it! Unfortunately I didn't encounter the Lefschetz decomposition while googling around. I have a stupid follow up question: "Do all the polarizations of an algebraic K3 surface arise as intersection pairing with signs changed?". On one hand I would say no, because there are polarizable non-algebraic K3s. But on the other hand this would contradict the choice of a polarized algebraic K3 as a pair $(X,\omega)$ for $\omega$ ample. Again I'm a bit confused. Thank you! :) – Giovanni De Gaetano Nov 26 '12 at 15:30
http://mathhelpforum.com/algebra/16598-simplifying-expression.html
# Thread:
1. ## Simplifying an expression
Hi, I'm trying to understand how this expression is simplified. (Disclaimer: I don't know how to get this expression into the text here - so have entered it in Grapher, taken a screenshot, and attached it. If anyone could point me to the relevant FAQ on how to enter this kind of expression directly, I'd be much obliged).
The answer given is the second screen shot.
2. Originally Posted by earachefl
Hi, I'm trying to understand how this expression is simplified. (Disclaimer: I don't know how to get this expression into the text here - so have entered it in Grapher, taken a screenshot, and attached it. If anyone could point me to the relevant FAQ on how to enter this kind of expression directly, I'd be much obliged).
The book gives the answer as $b^{5/6}$
see here on how to write radical signs as powers.
then notice that $\sqrt[3] {b^2 \sqrt {b}} = \left( b^2 \cdot b^{\frac {1}{2}} \right)^{\frac {1}{3}}$
Can you take it from here?
3. Thanks for both the hint and the tips on writing exponents for the forum. Having said that, your link doesn't show how you entered your equation, which shows up in my browser as an image link with the source showing
<a href="javascript:;" onclick="do_texpopup('\\sqrt[3] {b^2 \\sqrt {b}} = \\left( b^2 \\cdot b^{\\frac {1}{2}} \\right)^{\\frac {1}{3}}', 'math'); return false;"><img src="http://www.mathhelpforum.com/math-help/latex2/img/d8b692684f32d78cd66af4e9e3154f62-1.gif" alt="\sqrt[3] {b^2 \sqrt {b}} = \left( b^2 \cdot b^{\frac {1}{2}} \right)^{\frac {1}{3}}" title="\sqrt[3] {b^2 \sqrt {b}} = \left( b^2 \cdot b^{\frac {1}{2}} \right)^{\frac {1}{3}}" style="border: 0px; vertical-align: middle;" /></a
So how do you do that?
My problem was that I was looking at the b^2 part as being the index of b^(1/2), instead of just realizing that it was a simple multiplication of b^2 with b^(1/2).
4. Originally Posted by earachefl
Thanks for both the hint and the tips on writing exponents for the forum. Having said that, your link doesn't show how you entered your equation, which shows up in my browser as an image link with the source showing
So how do you do that?
I'm sorry, i don't know what you are referring to here. are you talking about the pretty-math typing, or the diagram with the flower?
My problem was that I was looking at the b^2 part as being the index of b^(1/2), instead of just realizing that it was a simple multiplication of b^2 with b^(1/2).
so, are you able to work out the answer now? what is the answer?
5. Originally Posted by Jhevon
I'm sorry, i don't know what you are referring to here. are you talking about the pretty-math typing, or the diagram with the flower?
Your answer showed a real honest-to-goodness equation, not like
b^(5/6)
which is the answer; it just doesn't look like an equation.
Originally Posted by Jhevon
so, are you able to work out the answer now? what is the answer?
yes, thank, see above
6. Originally Posted by earachefl
Your answer showed a real honest-to-goodness equation, not like
b^(5/6)
which is the answer; it just doesn't look like an equation.
yes, thank, see above
i used the equation to show that they were the same thing, not necessarily to solve the equation
the complete answer would look something like:
$\sqrt[3] {b^2 \sqrt {b}} = \left( b^2 \cdot b^{\frac {1}{2}} \right)^{\frac {1}{3}} = \boxed { b^{5/6} }$
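Spelling out the exponent arithmetic in the middle step (my addition, in case it helps):

$b^2 \cdot b^{\frac{1}{2}} = b^{2 + \frac{1}{2}} = b^{\frac{5}{2}}, \qquad \left(b^{\frac{5}{2}}\right)^{\frac{1}{3}} = b^{\frac{5}{2}\cdot\frac{1}{3}} = b^{\frac{5}{6}}$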
7. Originally Posted by Jhevon
i used the equation to show that they were the same thing, not necessarily to solve the equation
the complete answer would look something like:
$\boxed { b^{5/6} }$
yes, I understand. What I'm trying to say is that the way your answers are graphically represented in my browser is better than just typing
b^(5/6)
as an example; your answers look like real equations! Are you typing in all that extra code
\boxed { b^{5/6} }
or is there a simpler way of doing it?
http://physics.stackexchange.com/questions/15642/at-what-fraction-of-the-speed-of-light-have-people-traveled/15643
# At what fraction of the speed of light have people traveled?
I'm guessing that, this would be someone in a rocket or something... When they hit their top speed, at what fraction of $c$ are they traveling?
lurscher's answer is absolutely correct... if you don't specify what the speed is relative to, the answer is any fraction you like. – AdamRedwine Oct 12 '11 at 20:01
Speed of Earth around Sun is velocity=107,300 km/h ? Don't we all travel at this speed ? Does this count ? – Andrei Oct 14 '11 at 21:13
@Andrei, no. Doesn't count. – Abe Miessler Oct 14 '11 at 21:20
## 2 Answers
Maximum velocity attained by the Apollo spacecraft was 39,897 km/h, which is about $3.7\times 10^{-5}$ times the speed of light...
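Quick sanity check of that conversion (the only inputs are the published figure and the defined value of c):

```python
# Convert the Apollo peak speed to a fraction of the speed of light.
v_kmh = 39_897
c_m_per_s = 299_792_458
print(v_kmh * 1000 / 3600 / c_m_per_s)   # ~3.7e-05
```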
remember this speed was relative to earth....if you want to pump it up a bit further add up the speed of earth moving (around sun), sun moving(around the milky way) and the milky way moving(with respect of I don't know what... ) :P... – Vineet Menon Oct 13 '11 at 8:45
@VineetMenon: I suppose that would be the velocity of the Milky Way with respect to the CMB, which is about 600 km/s I think. – dotancohen Mar 2 '12 at 13:17
When swinging my comfy hammock, I travel all day even up to 0.99 $c$, some days even more, depending on what particles are passing me by and measuring my exorbitant speeds with their atomic clocks and photons..!
Great job on thinking outside the box! – xaav Oct 13 '11 at 2:36
+1: Man, I really love this one... – Ϛѓăʑɏ βµԂԃϔ Oct 23 '12 at 11:53
http://quant.stackexchange.com/questions/798/cost-function-for-hedging-portfolio/820
# Cost function for hedging portfolio
Let's say I am hedging an exotic instrument $E$ with $N$ liquid instruments $L_i$, each of which has an associated hedging ratio $R_i$ and a bid-ask spread $\delta_i$ (per dollar of notional). What would you recommend as a cost function to balance the completeness of the hedge and minimize the hedging cost?
## 1 Answer
I would assign the cost of incompleteness as the 90th percentile of N-period losses expected on the mis-hedged portfolio (where N is perhaps 5 trading days -- enough for a trader to get hit by a bus and someone else to catch up on his book). This is nicely compatible with VaR computations, corrects for the fact that expected cost of a mis-hedge is usually zero, and doesn't involve any tricky utility function theory.
You sometimes see more precise measurements made. For example some papers in the 1990s calculated the exact optimal hedging strategy for European options given a particular bid-offer spread on the underlying and (I seem to recall) utility function assumptions.
If you like utility functions a lot, clearly you can assign one to the variance (or other metric) of P&L arising from mis-hedging, and use the utility function to turn that directly into a cost. You can approximate the "right" function parameters by, say, looking at your firm's recent returns and Sharpe ratio.
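To make that concrete, one possible shape for such a cost function is sketched below; the scenario generation, parameter names and the way the two terms are combined are all illustrative assumptions, not something prescribed above.

```python
# Illustrative cost function: transaction cost from bid-ask spreads plus the
# 90th-percentile loss of the mis-hedged N-day P&L. The P&L scenarios would
# come from your own risk model; here they are just simulated noise.
import numpy as np

def hedge_cost(notionals, spreads, pnl_scenarios, tail=0.90):
    """notionals: dollar notional per liquid hedge L_i
    spreads: bid-ask spread per dollar of notional (delta_i)
    pnl_scenarios: simulated N-day P&L of exotic plus hedges (negative = loss)
    """
    transaction_cost = np.sum(np.abs(notionals) * spreads)
    tail_loss = -np.percentile(pnl_scenarios, 100 * (1 - tail))  # 90th pct loss
    return transaction_cost + tail_loss

rng = np.random.default_rng(0)
print(hedge_cost(np.array([1e6, -5e5]),
                 np.array([0.002, 0.001]),
                 rng.normal(0, 2e4, size=10_000)))
```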
Thank you, you set me on the right track. I'll see what the traders say ;-) – quant_dev Mar 25 '11 at 20:11
Coming back to it, calculation of the percentile of N-period losses requires some assumptions about the distribution of the hedged risky parameter (let's say it is the spread of the hedging instrument). I looked at the data and it's clear that it's neither Gaussian nor log-normal. What are other distributions people typically use? – quant_dev May 7 '11 at 12:55
http://math.stackexchange.com/questions/259195/is-f-mathbb-z-times-mathbb-z-to-mathbb-z-fm-n-31m23n-injective
# Is $f:\mathbb Z \times \mathbb Z \to \mathbb Z$, $f(m,n)=31m+23n$ injective?
Is $f:\mathbb{Z} \times \mathbb{Z} \to \mathbb Z$, $f(m,n)=31m+23n$ injective?
## 2 Answers
1) Can you find $(m,n)$ such that $31m + 23n = 0$?
2) What happens if you multiply $31m + 23n = 0$ by some integer? Can you spot more solutions?
@user50554: You have at least, $(23,-31)$ in the $\ker f$, so it is not injective. – Babak S. Dec 15 '12 at 11:55
$f(23,0)=f(0,31)$ thus $f$ is not injective.
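Spelled out, the arithmetic behind that counterexample is

$$f(23,0) = 31 \cdot 23 = 713 = 23 \cdot 31 = f(0,31).$$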
http://physics.aps.org/story/v2/st32
# Focus: Nuclei Affect the Bounce of $H_2$
Published December 22, 1998 | Phys. Rev. Focus 2, 32 (1998) | DOI: 10.1103/PhysRevFocus.2.32
#### Observation of Large Differences in the Diffraction of Normal- and Para- H2 from LiF(001)
M. F. Bertino, A. L. Glebov, J. P. Toennies, F. Traeger, E. Pijper, G. J. Kroes, and R. C. Mowrey
Published December 21, 1998
A molecule’s atomic nuclei don’t usually have much effect on its interactions with other atoms and molecules. Chemical reactions, for example, involve only the external electrons. But the alignment of the nuclear spins can affect the rotation of a diatomic molecule like $H_2$. Now a report in the 21 December PRL shows that this alignment strongly affects even a simple scattering experiment, where hydrogen molecules bounce off the surface of a crystal. The experiment may also provide a new method for probing the electric field structures of surfaces.
Each hydrogen atom consists of only a proton and an electron, and the two protons’ spins in an $H_2$ molecule are either aligned (ortho-hydrogen) or oppositely directed (para-hydrogen). Quantum mechanics requires that the quantum state of the molecule have an overall “anti-symmetry,” or sign change if the protons are interchanged, but the total symmetry is affected by both the spin alignment and the rotational motion. The result is that the tumbling motion of the molecule is not only quantized, but restricted to certain values of angular momentum that have the correct symmetry properties: Para-hydrogen always has an even integer number of units of rotational angular momentum, while ortho-hydrogen has an odd number. Of course, these rules apply to any diatomic molecule, such as $N_2$, but heavier molecules are much harder to separate into pure beams of a single type, so the effects cannot normally be detected.
Crystalline lithium fluoride (LiF) has been used since the 1930s to study quantum properties of matter because it’s easily split to reveal ideal surfaces that can be kept clean at the atomic level. There is so much experience with scattering molecular beams from this surface, according to J. Peter Toennies, of the Max Planck Institute for Fluid Dynamics in Göttingen, Germany, that “we thought we knew everything about LiF.” But Geert-Jan Kroes, of Leiden University in the Netherlands recently predicted a significant difference in the scattering of ortho- and para-hydrogen from LiF and prodded Toennies to look for the effect experimentally.
Toennies, Kroes, and their colleagues have now confirmed the prediction by comparing the scattering of pure para-hydrogen with that of normal hydrogen, which is 75% ortho. The team measured the number of scattered molecules reaching each angular position around the crystal and found the expected differences between the two types of beams. The explanation, according to the authors, is that the charge distributions in the two types of hydrogen interact differently with the electric field at the LiF surface. Some of the ortho molecules, for example, tend to amplify the peaks and valleys of the LiF field, while other orientations tend to smooth out the effects of the field. Toennies says that these field interactions can be exploited to measure the field undulations on other surfaces in a more direct way than has been possible previously.
“It’s a fascinating paper,” says Joseph Manson of Clemson University in South Carolina. “This is a step up in our ability to measure subtle [variations] in the electric field” on surfaces, he says, because the field variations are usually difficult to separate from other effects.
http://mathhelpforum.com/math-topics/37981-electromagnetic-induction.html
# Thread:
1. ## Electromagnetic induction
Can anyone help me derive an expression for the e.m.f induced in a coil rotating in a uniform magnetic field.
let
w= angular velocity etc..........
2. When the coil is rotated at a uniform rate in a uniform horizontal magnetic field of intensity $\vec{B}$, then the induced e.m.f produced is given by:
$e = -\frac{d\phi}{dt}$
$= -\frac{d}{dt}[N\vec{B}\cdot \hat{n} A]$
Where N = number of turns in coil
and A = Area of coil
$= -\frac{d}{dt}[NB\cos{\theta}\,A]$
$= -NAB\frac{d}{dt}[\cos{\theta}]$
$= -NAB\left[-\sin{\theta}\frac{d\theta}{dt}\right]$
$= NAB\omega \sin{(\omega t)}$
Where $\omega = \frac{d\theta}{dt} = \frac{\theta}{t}, \text{ as the rotation is uniform}$, so that $\theta = \omega t$
$\therefore \ e = (NAB\omega)\sin(\omega t)$
$e = e_0 \ \sin{(\omega t)}$
And there you go.
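As a numerical illustration (the coil parameters below are example values, not from the derivation):

```python
# Peak EMF e0 = N*A*B*omega for a 100-turn coil of area 0.01 m^2 in a 0.5 T
# field rotating at 50 Hz, plus the instantaneous emf at t = 2 ms.
import math

N, A, B = 100, 0.01, 0.5
omega = 2 * math.pi * 50
e0 = N * A * B * omega
print(e0)                          # ~157 V peak
print(e0 * math.sin(omega * 0.002))  # instantaneous emf at t = 2 ms
```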
http://math.stackexchange.com/questions/185463/a-doubt-about-tensor-product-on-hilbert-spaces
A doubt about tensor product on Hilbert Spaces
An operator is a bounded (i.e., continuous) linear transformation between Hilbert spaces. Let $\mathcal{B}[\mathcal{H}]$ be the set of all operators in the Hilbert space $\mathcal{H}$.
Let $\mathcal{H}$ and $\mathcal{K}$ be any two Hilbert spaces. Let $\mathcal{C}$ be the class of all strict contractions in $\mathcal{B}[\mathcal{H}]$ and let $\mathcal{L}$ be the class of all contractions in $\mathcal{B}[\mathcal{K}]$.
Let $\mathcal{H}\hat{\otimes}\mathcal{K}$ be the tensor product space between the Hilbert spaces $\mathcal{H}$ and $\mathcal{K}$, where $\hat{\otimes}$ denote the tensor product.
Question: What is the definition of $\mathcal{C}\hat{\otimes}\mathcal{L}$ in $\mathcal{B}[\mathcal{H}\hat{\otimes}\mathcal{K}]$? In other words, what is the definition of the tensor product of operator classes? Moreover, is it true that $T\in\mathcal{C}\hat{\otimes}\mathcal{L}$ if and only if $T=A\hat{\otimes}B$ for some $A\in\mathcal{C}$ and $B\in\mathcal{L}$?
If the answer to the last question is yes (I have to admit I don't know this) then it should be formulated more carefully, e.g. $T$ is in that set iff there exist $A$ and $B$ with that property. Note that $A\hat{\otimes}B=rA\hat{\otimes}1/rB$ and your classes are not closed under multiplication with scalars (if a contraction is what I guess that it is, namely an operator of norm less than $1$). – user20266 Aug 22 '12 at 16:12
http://nrich.maths.org/5742/note
# Caterpillars
Well, here are the caterpillars!
Our caterpillars have numbers on each little part - numbers $1, 2, 3, 4, \ldots$ up to $16$.
You can see their pale blue head, and their body bending at right angles so that each part is lying in a square.
There are two explorations to go with these caterpillars:
1. You could look carefully at the shapes of the caterpillars and let them turn in other ways. Here are a few to start you off:
When you've discovered many new ones, with the shapes - and therefore the numbers - all showing differently, you could compare them. What is the same and what is different? Can you explain why?
You could try to put the caterpillars in shapes that are not squares!
OR
2.
a) Choose one of the caterpillars and, using the numbers and the way that they are arranged, explore the patterns and relationships you can find.
b) Then let that caterpillar grow nine more parts so that it becomes a $25$ caterpillar with the shape bending in just the same way. Explore those patterns and relations.
c) Finally, compare the two different groups of things you've discovered in a) and in b).
Do tell us about all the things you find out.
### Why do this problem?
This investigation is likely to capture the attention of many children and you might discover that they find things to explore which had not occurred to you! It is also an enriching activity that has links with spatial awareness, number awareness and number patterns. It could be used to broaden pupils' experiences of number relationships.
### Possible approach
You could introduce this investigation by drawing a caterpillar on a grid, perhaps on the interactive whiteboard, to establish the context. Invite a child to draw a different caterpillar on another grid and then ask pairs to talk about what they notice.
This leads into the first activity so this could be pursued further by the group, or some might prefer to opt for the second activity. You will need to have some conversations about how they might record what they have found. Having squared paper available, or perhaps some templates of different sized grids, would be a good idea, but children might have their own suggestions.
### Key questions
What do you see?
Tell me about the numbers you've used.
I like this shape of caterpillar you've made, tell me about it.
### Possible extension
Alan Parr (who has contributed many fantastic ideas to NRICH), also sent the following suggestions for investigating the Caterpillars in other ways:
• If a caterpillar has some segments which are black then you won't be able to see their numbers, so you can offer problems with missing numbers. You can have a surprisingly large number of missing numbers and still be able to fill in every one. For example, in caterpillar A you only need to know the $1, 5, 10,$ and $11$ and you can place all the other numbers. You could challenge children to fill in the rest of the numbers, but also invite them to make their own so that they give as few numbers as possible in order for a friend to be able to complete the caterpillar. You could also extend this to larger caterpillars!
• For a tougher challenge, you could change the caterpillars so that you can't see the individual segments, only four mega-sections each of which covers four segments in a $2 \times 2$ square. Each mega-section shows you the product of the four segments of the caterpillar. For example, the top-left quarter shows $24$, top-right $7920$, bottom-right $15288$, bottom-left $7200$. Can learners identify the number of each segment? (Alan attributes this idea to Mike Taylor - thank you!) You could also set this up in a "battleships-style" game where each player records a caterpillar in a square grid and player $1$ tells player $2$ the products of $2\times2$ blocks he/she asks about, and vice versa.
### Possible support
Here are some ideas to use if the pupils are new to the exploration of number patterns:
These show:
Adding horizontally, then vertically;
Adding in slanting lines - when you've done the two different sets, [North West to South East and then North East to South West] you can put them together in different ways;
Adding in squares and putting down the answer in a square. So, adding in a $2$ by $2$ square and performing all $9$ additions we would have:
There are lots of questions to be asked here, based on trying to explain the resulting totals.
Adding in $3$ by $3$ squares as above.
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9556770324707031, "perplexity_flag": "middle"}
|
http://mathblag.wordpress.com/
|
# Which powers of 2 do not contain the digit 7?
Posted on December 9, 2012 by
Which powers of 2 do not contain the digit 7? A quick search reveals the following examples.
```2^0 = 1
2^1 = 2
2^2 = 4
2^3 = 8
2^4 = 16
2^5 = 32
2^6 = 64
2^7 = 128
2^8 = 256
2^9 = 512
2^10 = 1024
2^11 = 2048
2^12 = 4096
2^13 = 8192
2^14 = 16384
2^16 = 65536
2^18 = 262144
2^19 = 524288
2^22 = 4194304
2^23 = 8388608
2^25 = 33554432
2^28 = 268435456
2^33 = 8589934592
2^41 = 2199023255552
2^42 = 4398046511104
2^49 = 562949953421312
2^50 = 1125899906842624
2^54 = 18014398509481984
2^61 = 2305843009213693952
2^71 = 2361183241434822606848```
These exponents are listed in the On-line Encyclopedia of Integer Sequences as A035062.
I have continued the search for exponents up to $10^{10}$, and found no other powers of 2 that do not contain the digit 7. If we suppose that the decimal digits of powers of 2 are random, then it is extremely unlikely that other examples exist. Observe that $2^n$ has approximately $n\log_{10} 2$ decimal digits, so the probability that $2^n$ does not contain the digit 7 is approximately $r^n$ where
$r = (0.9)^{\log_{10} 2} \approx 0.96878.$
Given that no examples exist for $72 \le n \le 10^{10}$, we can estimate the probability of an additional example via the infinite sum $\sum_{n=10^{10}}^{\infty} r^n = \frac{r^{10^{10}}}{1-r} \approx 5 \times 10^{-137743771}.$
This probability is so small that it almost defies comprehension. Imagine that every person in the State of New York picked a random combination for the Powerball lottery, that they all picked the same numbers purely by chance, and they all won the jackpot. Absurd as it is, this event is more probable than finding another power of 2 which does not contain the digit 7, unless there is some hidden pattern in the digits which has escaped our attention.
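For the curious, here is a tiny Python sketch of the heuristic computation above (working in log space, since $r^{10^{10}}$ underflows to zero as a float):

```python
import math

r = 0.9 ** math.log10(2)          # heuristic chance that a number the length of 2^n avoids the digit 7
print(r)                          # about 0.96878

# log10 of the tail sum r^(10^10) / (1 - r)
log10_tail = 1e10 * math.log10(r) - math.log10(1 - r)
print(log10_tail)                 # enormously negative, in line with the 5 x 10^-137743771 estimate
```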
I wrote a Python program to conduct my search. Since powers of 2 grow very rapidly, it is not efficient to generate all of their digits, so I only kept track of the last 70 digits. If a 7 is not found in the last 70 digits, then the last 400 digits are checked using the three-argument pow function. If a 7 is not found in the last 400 digits, then n is printed, but no further checks are made. Here is my code:
```n = 0
N = 1
while 1:
    if not('7' in str(N)) and not('7' in str(pow(2, n, 10**400))):
        print "2^" + str(n)
    n += 1
    N = (N * 2) % 10**70
```
My interest in this question was inspired by a blog post by John D. Cook.
Posted in Uncategorized
# Bayes’ Theorem and the Fake Facebook Lottery Winner
Posted on December 3, 2012 by
After the recent $587.5 million Powerball jackpot, the following picture was posted to Facebook, where it was shared over 2 million times.
It is not too hard to figure out that the picture is a fake. The most obvious clue is that the numbers are out of order. The numbers on an authentic Powerball ticket are always in ascending order, except for the last number. But I was doubtful for another reason — Bayes’ theorem.
There are two scenarios that must be considered. The first scenario is that Nolan actually did win the jackpot, and he wishes to share his good fortune with a random stranger. The second scenario is that Nolan did not win the lottery, and the picture is a fake. (Other scenarios are theoretically possible, but we will disregard them.)
Let us define some events. Let A be the event that a player selected at random wins the Powerball jackpot, and let B be the event that a player selected at random would make an offer similar to the one shown above. We wish to estimate the conditional probability of A given B. According to Bayes’ theorem, the probability can be computed as follows:
$P(A|B) = \displaystyle \frac{P(B|A) P(A)}{P(B|A) P(A) + P(B|\neg A) P(\neg A)}$
The probabilities in this formula are difficult to estimate, but let’s make an attempt. We know that there were two winners in the drawing, and I will guesstimate that about 20 million people bought tickets, so P(A) = 1/10,000,000.
The other probabilities are more difficult to predict. It certainly seems unlikely that a lottery winner would offer to give $1 million to a complete stranger. Perhaps it’s even more unlikely that a (randomly selected) non-winner would pretend to win the lottery and offer to share it with a stranger. But it does not seem reasonable to suppose that the first event is 10 million times more likely than the second. So we must conclude that the denominator is dominated by its second term, hence P(A|B) must be close to zero. In other words, the picture is (probably) fake.
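To see how lopsided the arithmetic is, here is a minimal Python sketch; the two conditional probabilities are pure guesses on my part, inserted only to illustrate the point:

```python
p_A = 1.0 / 10000000        # guesstimated chance that a random player won the jackpot
p_B_given_A = 1e-3          # guess: a real winner makes an offer like this
p_B_given_notA = 1e-6       # guess: a non-winner fakes such an offer

p_A_given_B = (p_B_given_A * p_A) / (p_B_given_A * p_A + p_B_given_notA * (1 - p_A))
print(p_A_given_B)          # about 1e-4 with these guesses: almost certainly fake
```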
Posted in Uncategorized
# A function that is surjective on every interval
Posted on November 4, 2012 by
The intermediate value theorem states that if f is a continuous real-valued function on the closed interval [a, b], and if c is any real number between f(a) and f(b), then there exists x in [a, b] such that f(x) = c. A function that satisfies the conclusion of this theorem is called a Darboux function.
Although every continuous function is a Darboux function, it is not true that every Darboux function is continuous. Perhaps the simplest example is f(x) = sin(1/x) for x ≠ 0, f(0) = 0. The graph of this function is known as the topologist’s sine curve. The importance of this curve lies in the fact that it is connected but not path-connected.
This function is only discontinuous at 0, but the British mathematician John H. Conway constructed a Darboux function that is discontinuous at every point. In fact, it has the stronger property that it is surjective on every nonempty open interval. That is, if a, b, and y are real numbers with a < b, then there exists a real number x with a < x < b such that f(x) = y. This function is called Conway’s base 13 function, because it is defined in terms of the base 13 digits of the argument.
I wish to propose another example of a function that is surjective on every interval. The function is defined as follows:
$f(x) = \displaystyle \lim_{n\to\infty} \tan(n!\, \pi x)$ if the limit exists,
$f(x) = 0$ otherwise.
In addition to being surjective on every interval, my function has a number of other appealing properties. It is defined using a simple formula instead of arcane digit manipulations. The function is equal to zero almost everywhere. Most remarkably, it is periodic and every positive rational number is a period.
I challenge the reader to verify these claims for herself or himself. My proof is available here.
Posted in Uncategorized
# What is a tangent line?
Posted on October 18, 2012 by
In this post I will give my answer to a question from David Wees.
Does anyone know of a good explanation of what tangent lines are that doesn’t dive straight into the definition of the derivative? #mathchat
— davidwees (@davidwees) October 18, 2012
Recall that a secant line to the graph of y = f(x) is a line that intersects the graph in at least one point (p, f(p)). This figure shows a secant line (in blue) to the curve y = x^2 (in red) at the point (1,1).
This particular secant line crosses from above to below, because the secant line lies above the curve when we are immediately left of the intersection point, and it lies below the curve when we are immediately to the right of the intersection point.
We could also draw another secant line that crosses from below to above, as shown here. Note that this new secant line has a greater slope than the previous secant line.
Now we can give a mathematically precise definition of a tangent line. A secant line to the curve y = f(x) at the point (p, f(p)) is said to be a tangent line if the following two conditions are satisfied.
1. Every secant line with lesser slope crosses from above to below at (p, f(p)).
2. Every secant line with greater slope crosses from below to above at (p, f(p)).
The following picture shows a tangent line to the graph of y = x^2. Notice that if the slope were increased then the line would cross from below to above, and if the slope were decreased then it would cross from above to below.
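As a quick worked example: consider the point (1,1) on the curve y = x^2. A secant line through (1,1) with slope m is y = m(x-1)+1, and
$x^2 - (m(x-1)+1) = (x-1)(x+1-m).$
Near x = 1 the second factor has the sign of 2 − m, so a secant with m < 2 crosses from above to below, and one with m > 2 crosses from below to above. By the definition, the tangent line at (1,1) is therefore the secant with slope exactly 2.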
It is tempting to suppose that a tangent line cannot cross the curve, but this is not the case.
In this case, the tangent line crosses from above to below. If the slope were decreased, then it would still cross from above to below. However, if the slope were increased, then it would cross from below to above. This is consistent with our definition of the tangent line.
Credit: The idea for this definition of tangent line was inspired by the book Calculus Unlimited by Jerrold Marsden and Alan Weinstein, which develops calculus without the concept of limit.
Posted in Uncategorized
# Exploring patterns in Pascal’s triangle using Excel
Posted on September 25, 2012 by
There are many interesting patterns hidden within Pascal’s triangle. For example, if we color every odd number, we obtain a pattern resembling the Sierpinski triangle. (Image credit: Wikimedia commons)
Other patterns can be generated by coloring the numbers according to their remainder after dividing by 3, 4, 5, etc. Many of these patterns are explored in the book Visual Patterns in Pascal’s Triangle by Dale Seymour, and they are analyzed more deeply in a scholarly article by Andrew Granville with the improbable title Zaphod Beeblebrox’s Brain and the Fifty-ninth Row of Pascal’s Triangle.
It is a good activity for students to create these patterns themselves by calculating Pascal’s triangles and coloring the squares. But it is also possible to generate these patterns quickly in Excel using conditional formatting.
Pascal’s triangle can be generated quickly by copying a simple formula. Just enter the number 1 in D3, enter the formula =C3+D3 in D4, and then copy this formula to a square block of cells starting with D4 in the upper left corner.
Since the numbers grow very quickly, it is convenient to use the MOD function to reduce the numbers. I entered a modulus (initially 2) in B3, and changed the formula in D4 to =mod(C3+D3,$B$3). Finally, I created a conditional formatting rule to highlight all cells whose values are greater than 0.
Below is a screenshot of the result. Note that I have zoomed out as far as possible. My Excel spreadsheet is available here. (Warning: 2 MB)
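If you prefer to experiment outside Excel, here is a minimal Python sketch of the same idea (a text-mode stand-in for the conditional formatting; the grid size and modulus are arbitrary choices):

```python
def pascal_mod_grid(size, modulus):
    """Staircase Pascal grid: each cell is (cell above + cell above-left) mod modulus."""
    grid = [[0] * size for _ in range(size)]
    grid[0][0] = 1
    for r in range(1, size):
        for c in range(size):
            above_left = grid[r - 1][c - 1] if c > 0 else 0
            grid[r][c] = (above_left + grid[r - 1][c]) % modulus
    return grid

# print a '#' for nonzero residues, mimicking the "highlight if greater than 0" rule
for row in pascal_mod_grid(32, 2):
    print("".join("#" if v else "." for v in row))
```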
Update: Patrick Honner brought to my attention a nice video by Debra Borkovitz which demonstrates how to create these patterns in Excel.
Posted in Uncategorized
# Parabola through three points
Posted on September 8, 2012 by
What happens when you rotate a parabola through 3 points?
— Chris Harrow (@chris_harrow) September 8, 2012
My interpretation is that we fix three non-collinear points in the plane, and we wish to describe all parabolas passing through these points. (There are infinitely many of these, since the axis of symmetry need not be vertical.)
For simplicity, I assumed that the three fixed points are (0,0), (1,0), and (0,1). This is reasonable because any three non-collinear points can be moved to (0,0), (1,0), and (0,1) by an affine transformation (linear transformation + translation), and because the image of a parabola under an affine transformation is again a parabola.
The general equation of a conic section is Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0, where A, B, and C are not all zero. A conic is a parabola (possibly degenerate) if and only if B^2 = 4AC. Since the parabola is assumed to contain (0,0), (1,0), and (0,1), we find that F = 0, A + D = 0, and C + E = 0. Furthermore, A cannot be zero, since we cannot draw a horizontal parabola through these points. So we may assume, after dividing through by A, that A = 1.
Combining these equations leaves us with a one-parameter family of parabolas:
$x^2 + Bxy + \frac{B^2}{4} y^2 - x - \frac{B^2}{4} y = 0.$
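As a quick sanity check (a small sympy sketch, not part of the original derivation), one can confirm that every member of this family passes through the three points and satisfies the parabola condition B^2 = 4AC:

```python
from sympy import symbols, simplify

x, y, B = symbols('x y B')
conic = x**2 + B*x*y + (B**2/4)*y**2 - x - (B**2/4)*y

# the conic vanishes at (0,0), (1,0) and (0,1) for every B
print([simplify(conic.subs({x: px, y: py})) for (px, py) in [(0, 0), (1, 0), (0, 1)]])

# discriminant B^2 - 4AC with A = 1 and C = B^2/4 is identically zero, so it is a parabola
print(simplify(B**2 - 4 * (B**2 / 4)))
```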
I created a GeoGebra animation to explore how changing the value of B affects the parabola. It would be interesting to explore the curves traced out by the vertex and the focus as B varies. Wolfram Alpha produced some formulas, but they are not easy to interpret.
Posted in Uncategorized
# Arranging numbers in a grid: Solution
Posted on September 8, 2012 by
In a previous post, I described the problem of counting arrangements of the numbers 1 through 16 in a 4×4 grid so that squares with consecutive numbers never touch, even at a corner.
This is a difficult problem, because the set of solutions has little obvious structure. It seems unlikely that a simple formula would exist. On the other hand, naive brute-force is not effective, because there are 16! = 20,922,789,888,000 permutations. Even using a computer, this is too many possibilities to check.
The Monte Carlo method is an effective way to estimate the number of solutions. We simply generate a large number of random arrangements, and count the arrangements which have no consecutive numbers touching. When I ran this simulation with one million trials, there were 838 successes; so the estimated total number of solutions is 16! * 838 * 10^-6, which is approximately 17.5 billion.
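For readers who want to reproduce the estimate, a minimal version of the Monte Carlo experiment looks something like this (a rough sketch; the trial count is arbitrary):

```python
import math, random

def touching(i, j, a, b):
    # squares (i, j) and (a, b) touch if they are distinct and within one king's move
    return (i, j) != (a, b) and abs(i - a) <= 1 and abs(j - b) <= 1

def valid(grid):
    # no two consecutive numbers may occupy touching squares
    for i in range(4):
        for j in range(4):
            for a in range(4):
                for b in range(4):
                    if touching(i, j, a, b) and abs(grid[i][j] - grid[a][b]) == 1:
                        return False
    return True

trials, hits = 100000, 0
nums = list(range(1, 17))
for _ in range(trials):
    random.shuffle(nums)
    grid = [nums[4 * r: 4 * r + 4] for r in range(4)]
    if valid(grid):
        hits += 1

print("success rate:", hits / trials)
print("estimated number of arrangements:", math.factorial(16) * hits / trials)
```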
But I wanted to find the exact number of solutions. After several failed attempts, I settled on a divide-and-conquer algorithm.
For each partition {A, B} of the set {1,2,…,16} into two subsets of equal size, find all valid arrangements of A into the top half of the grid, and find all arrangements of B into the bottom half of the grid. Then we add up the pairs of arrangements that are compatible. An important fact is that compatibility is determined by the two middle rows, so we can “lump” some of the 4×2 arrangements together.
The final answer is 17,464,199,440, which is suspiciously close to the Monte Carlo estimate (other runs were not as close). My Python code is available here.
Posted in Uncategorized
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 8, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9401590824127197, "perplexity_flag": "head"}
|
http://math.stackexchange.com/users/29788/camilla-vaernes?tab=activity
|
# Camilla Vaernes
reputation
5
bio
website
location
age
member for 1 year
seen Feb 5 at 21:00
profile views 56
# 34 Actions
Feb5 comment On the equivalence relation in a right Ore domain.Dear rschwieb sorry for my delay! I hadn't logged in for months. Thank you for writing that out.
Feb5 accepted On the equivalence relation in a right Ore domain.
Jul2 revised Why is multiplication well defined in this ring with the Ore condition?added 13 characters in body
Jul2 revised Why is multiplication well defined in this ring with the Ore condition?added 457 characters in body
Jul1 asked Why is multiplication well defined in this ring with the Ore condition?
Jun30 comment On the equivalence relation in a right Ore domain.Thanks rschwieb. I'm using your definition of $\sim$, and I see that $\sim$ is an equivalence relation, and I'm still assuming the common right multiple property. I have addition defined as $a/b+c/d=(ad_1+cb_1)/m$ where $m=bd_1=db_1\neq 0$, and $(a/b)(c/d)=ac_1/db_1$ where $cb_1=bc_1$ and $b_1\neq 0$, but I've struggled and don't know how to show these are well-defined. Do you know how to show that they are?
Jun29 revised On the equivalence relation in a right Ore domain.added 16 characters in body
Jun29 revised On the equivalence relation in a right Ore domain.edited title
Jun29 asked On the equivalence relation in a right Ore domain.
Jun29 accepted Is there a nice way to classify the ideals of the ring of lower triangular matrices?
Jun17 comment Is there a nice way to classify the ideals of the ring of lower triangular matrices?Ok, I think I see where I went wrong, thanks.
Jun16 awarded Commentator
Jun16 comment Is there a nice way to classify the ideals of the ring of lower triangular matrices?Oh, it just has to be any ideal of $\mathbb{Z}$ also containing $J_1$, right?
Jun16 comment Is there a nice way to classify the ideals of the ring of lower triangular matrices?So the ideal has form $$\begin{pmatrix} J_2 & 0 \\ ? & J_1\end{pmatrix}$$ for $J_2,J_1$ ideals of $\mathbb{Z}$. How does one describe what goes in the ? place? It has to be an ideal of $\mathbb{Z}$, and also contain $J_1$ I think?
Jun16 comment Is there a nice way to classify the ideals of the ring of lower triangular matrices?Thanks. I'm curious about what the entries for $M$ should look like for right ideals. Right multiplying I found $$\begin{pmatrix} r & 0 \\ m & s\end{pmatrix}\begin{pmatrix} z & 0 \\ 0 & 0\end{pmatrix} =\begin{pmatrix} rz & 0 \\ mz & 0\end{pmatrix}$$ $$\begin{pmatrix} r & 0 \\ m & s\end{pmatrix}\begin{pmatrix} 0 & 0 \\ z & 0\end{pmatrix} =\begin{pmatrix} 0 & 0 \\ sz & 0\end{pmatrix}$$ and $$\begin{pmatrix} r & 0 \\ m & s\end{pmatrix}\begin{pmatrix} 0 & 0 \\ 0 & z\end{pmatrix} =\begin{pmatrix} 0 & 0 \\ 0 & sz\end{pmatrix}$$.
Jun16 comment Is there a nice way to classify the ideals of the ring of lower triangular matrices?I guess my main confusion is, are the ideals of $T$ here just those of form $K_1\oplus K_0\oplus K_2$ where $K_1,K_2$ are ideals of $\mathbb{Z}$, and $K_0$ is a submodule of $\mathbb{Z}$ containing $K_1+K_2$? Or does something change slightly?
Jun16 comment Is there a nice way to classify the ideals of the ring of lower triangular matrices?Thanks rschwieb. I don't have a copy of that book right now, do you mind explaining even briefly how to interpret the third result you linked to for lower triangular matrices?
Jun16 asked Is there a nice way to classify the ideals of the ring of lower triangular matrices?
Jun16 accepted Kaplansky's theorem of infinitely many right inverses in monoids?
Jun9 comment Kaplansky's theorem of infinitely many right inverses in monoids?Thanks ymar, this is a nice, concrete example.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9276313185691833, "perplexity_flag": "middle"}
|
http://unapologetic.wordpress.com/2009/08/26/elementary-matrices/?like=1&source=post_flair&_wpnonce=a832ba55c5
|
# The Unapologetic Mathematician
## Elementary Matrices
Today we’ll write down three different collections of matrices that together provide us all the tools we need to modify bases.
First, and least important, are the swaps. A swap is a matrix that looks like the identity, but has two of its nonzero entries in reversed columns.
$\displaystyle W_{i,j}=\begin{pmatrix}1&&&&&&0\\&\ddots&&&&&\\&&0&\cdots&1&&\\&&\vdots&\ddots&\vdots&&\\&&1&\cdots&0&&\\&&&&&\ddots&\\{0}&&&&&&1\end{pmatrix}$
where the two swapped columns (or, equivalently, rows) are $i$ and $j$. The swaps generate a subgroup of $\mathrm{GL}(n,\mathbb{F})$ isomorphic to the symmetric group $S_n$. In fact, these are the image of the usual generators of $S_n$ under the permutation representation. They just rearrange the order of the basis elements.
Next are the scalings. A scaling is a matrix that looks like the identity, but one of its nonzero entries isn’t the identity.
$\displaystyle C_{i,c}=\begin{pmatrix}1&&&&&&0\\&\ddots&&&&&\\&&1&&&&\\&&&c&&&\\&&&&1&&\\&&&&&\ddots&\\{0}&&&&&&1\end{pmatrix}$
where the entry $c$ is in the $i$th row and column. The scalings generate the subgroup of diagonal matrices, which is isomorphic to $\left(\mathbb{F}^\times\right)^n$ — $n$ independent copies of the group of nonzero elements of $\mathbb{F}$ under multiplication. They stretch, squeeze, or reverse individual basis elements.
Finally come the shears. A shear is a matrix that looks like the identity, but one of its off-diagonal entries is nonzero.
$\displaystyle H_{i,j,c}=\begin{pmatrix}1&&&&&&0\\&\ddots&&&&&\\&&1&&c&&\\&&&\ddots&&&\\&&&&1&&\\&&&&&\ddots&\\{0}&&&&&&1\end{pmatrix}$
where the entry $c$ is in the $i$th row and $j$th column. If $i<j$, then the extra nonzero entry falls above the diagonal and we call it an “upper shear”. On the other hand, if $i>j$ then the extra nonzero entry falls below the diagonal, and we call it a “lower shear”. The shears also generate useful subgroups, but the proof of this fact is more complicated, and I’ll save it for its own post.
Now I said that the swaps are the least important of the three elementary transformations, and I should explain myself. It turns out that swaps aren’t really elementary. Indeed, consider the following calculation
$\displaystyle\begin{aligned}\begin{pmatrix}1&1\\{0}&1\end{pmatrix}\begin{pmatrix}1&0\\-1&1\end{pmatrix}\begin{pmatrix}1&1\\{0}&1\end{pmatrix}\begin{pmatrix}-1&0\\{0}&1\end{pmatrix}&\\=\begin{pmatrix}0&1\\-1&1\end{pmatrix}\begin{pmatrix}1&1\\{0}&1\end{pmatrix}\begin{pmatrix}-1&0\\{0}&1\end{pmatrix}&\\=\begin{pmatrix}0&1\\-1&0\end{pmatrix}\begin{pmatrix}-1&0\\{0}&1\end{pmatrix}&\\=\begin{pmatrix}0&1\\1&0\end{pmatrix}\end{aligned}$
So we can build a swap from three shears and a scaling. It should be clear how to generalize this to build any swap from three shears and a scaling. But it’s often simpler to just think of swapping two basis elements as a single basic operation rather than as a composition of shears and scalings.
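For those who like to check such identities numerically, here is a small sketch (using numpy, which is of course not needed for the algebra above) verifying the factorisation of the $2\times 2$ swap:

```python
import numpy as np

upper_shear = np.array([[1, 1], [0, 1]])
lower_shear = np.array([[1, 0], [-1, 1]])
scaling     = np.array([[-1, 0], [0, 1]])

# the product of three shears and a scaling, in the order used in the calculation above
swap = upper_shear @ lower_shear @ upper_shear @ scaling
print(swap)                                  # [[0 1], [1 0]]

# every shear has determinant 1, while this scaling has determinant -1
print(np.linalg.det(upper_shear), np.linalg.det(lower_shear), np.linalg.det(scaling))
```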
On the other hand, we can tell that we can’t build any shears from scalings, since the product of scalings is always diagonal. We also can’t build any scalings from shears, since the determinant of any shear is always ${1}$, and so the product of a bunch of shears also has determinant ${1}$. Meanwhile, the determinant of a scaling $C_{i,c}$ is always the scaling factor $c\neq1$.
Posted by John Armstrong | Algebra, Linear Algebra
## 5 Comments »
1. [...] might be familiar all the way back to high school mathematics classes. We’re going to use the elementary matrices to manipulate a matrix. Rather than work out abstract formulas, I’m going to take the [...]
Pingback by | August 27, 2009 | Reply
2. [...] Generated by Shears Okay, when I introduced elementary matrices I was a bit vague on the subgroup that the shears generate. I mean to partially rectify that now [...]
Pingback by | August 28, 2009 | Reply
3. [...] row operations. That is, transformations of matrices that can be effected by multiplying by elementary matrices on the left, not on the [...]
Pingback by | September 1, 2009 | Reply
4. [...] that every one of our elementary row operations is the result of multiplying on the left by an elementary matrix. So we can take the matrices corresponding to the list of all the elementary row operations and [...]
Pingback by | September 4, 2009 | Reply
5. [...] other hand if we use all shears and scalings we can generate any invertible matrix we want (since swaps can be built from shears and scalings). We clearly can’t build any matrix whatsoever from shears alone, since every shear has [...]
Pingback by | September 9, 2009 | Reply
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 23, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9169421195983887, "perplexity_flag": "middle"}
|
http://polymathprojects.org/2012/06/12/polymath7-research-thread-1-the-hot-spots-conjecture/?like=1&_wpnonce=81a70c627a
|
# The polymath blog
## June 12, 2012
### Polymath7 research thread 1: The Hot Spots Conjecture
Filed under: hot spots,research — Terence Tao @ 8:58 pm
The previous research thread for the Polymath7 project ”the Hot Spots Conjecture” is now quite full, so I am now rolling it over to a fresh thread both to summarise the progress thus far, and to make it a bit easier to catch up on the latest developments.
The objective of this project is to prove that for an acute angle triangle ABC, that
1. The second eigenvalue of the Neumann Laplacian is simple (unless ABC is equilateral); and
2. For any second eigenfunction of the Neumann Laplacian, the extremal values of this eigenfunction are only attained on the boundary of the triangle. (Indeed, numerics suggest that the extrema are only attained at the corners of a side of maximum length.)
To describe the progress so far, it is convenient to draw the following “map” of the parameter space. Observe that the conjecture is invariant with respect to dilation and rigid motion of the triangle, so the only relevant parameters are the three angles $\alpha,\beta,\gamma$ of the triangle. We can thus represent any such triangle as a point $(\alpha/\pi,\beta/\pi,\gamma/\pi)$ in the region $\{ (x,y,z): x+y+z=1, x,y,z > 0 \}$. The parameter space is then the following two-dimensional triangle:
Thus, for instance
1. A,N,P represent the degenerate obtuse triangles (with two angles zero, and one angle of 180 degrees);
2. B,F,O represent the degenerate acute isosceles triangles (with two angles 90 degrees, and one angle zero);
3. C,E,G,I,L,M represent the various permutations of the 30-60-90 right-angled triangle;
4. D,J,K represent the isosceles right-angled triangles (i.e. the 45-45-90 triangles);
5. H represents the equilateral triangle (i.e. the 60-60-60 triangle);
6. The acute triangles form the interior of the region BFO, with the edges of that region being the right-angled triangles, and the exterior being the obtuse triangles;
7. The isosceles triangles form the three line segments NF, BP, AO. Sub-equilateral isosceles triangles (with apex angle smaller than 60 degrees) comprise the open line segments BH,FH,OH, while super-equilateral isosceles triangles (with apex angle larger than 60 degrees) comprise the complementary line segments AH, NH, PH.
Of course, one could quotient out by permutations and only work with one sixth of this diagram, such as ABH (or even BDH, if one restricted to the acute case), but I like seeing the symmetry as it makes for a nicer looking figure.
Here’s what we know so far with regards to the hot spots conjecture:
1. For obtuse or right-angled triangles (the blue shaded region in the figure), the monotonicity results of Banuelos and Burdzy show that the second claim of the hot spots conjecture is true for at least one second eigenfunction.
2. For any isosceles non-equilateral triangle, the eigenvalue bounds of Laugesen and Siudeja show that the second eigenvalue is simple (i.e. the first part of the hot spots conjecture), with the second eigenfunction being symmetric around the axis of symmetry for sub-equilateral triangles and anti-symmetric for super-equilateral triangles.
3. As a consequence of the above two facts and a reflection argument found in the previous research thread, this gives the second part of the hot spots conjecture for sub-equilateral triangles (the green line segments in the figure). In this case, the extrema only occur at the vertices.
4. For equilateral triangles (H in the figure), the eigenvalues and eigenfunctions can be computed exactly; the second eigenvalue has multiplicity two, and all eigenfunctions have extrema only at the vertices.
5. For sufficiently thin acute triangles (the purple regions in the figure), the eigenfunctions are almost parallel to the sector eigenfunction given by the zeroth Bessel function; this in particular implies that they are simple (since otherwise there would be a second eigenfunction orthogonal to the sector eigenfunction). Also, a more complicated argument found in the previous research thread shows in this case that the extrema can only occur either at the pointiest vertex, or on the opposing side.
So, as the figure shows, there has been some progress on the problem, but there are still several regions of parameter space left to eliminate. It may be possible to use perturbation arguments to extend validity of the hot spots conjecture beyond the known regions by some quantitative extent, and then use numerical verification to finish off the remainder. (It appears that numerics work well for acute triangles once one has moved away from the degenerate cases B,F,O.)
The figure also suggests some possible places to focus attention on, such as:
1. Super-equilateral acute triangles (the line segments DH, JH, KH). Here, we know the second eigenfunction is simple (and anti-symmetric).
2. Nearly equilateral triangles (the region near H). The perturbation theory for the equilateral triangle could be non-trivial due to the repeated eigenvalue here.
3. Nearly isosceles right-angled triangles (the regions near D,J,K). Again, the eigenfunction theory for isosceles right-angled triangles is very explicit, but this time the eigenvalue is simple and perturbation theory should be relatively straightforward.
4. Nearly 30-60-90 triangles (the regions near C,E,G,I,L,M). Again, we have an explicit simple eigenfunction in the 30-60-90 case and an analysis should not be too difficult.
There are a number of stretching techniques (such as in the Laugesen-Siudeja paper) which are good for controlling how eigenvalues deform with respect to perturbations, and this may allow us to rigorously establish the first part of the hot spots conjecture, at least, for larger portions of the parameter space.
As for numerical verification of the second part of the conjecture, it appears that we have good finite element methods that seem to give accurate results in practice, but it remains to find a way to generate rigorous guarantees of accuracy and stability with respect to perturbations. It may be best to focus on the super-equilateral acute isosceles case first, as there is now only one degree of freedom in the parameter space (the apex angle, which can vary between 60 and 90 degrees) and also a known anti-symmetry in the eigenfunction, both of which should cut down on the numerical work required.
I may have missed some other points in the above summary; please feel free to add your own summaries or other discussion below.
## 94 Comments »
1. [...] has been some progress in the polymath 7 project. See the new thread here. [...]
Pingback by — June 12, 2012 @ 10:00 pm
2. Here is a simple eigenvalue comparison theorem: if $0 = \lambda_1(D) \leq \lambda_2(D) \leq \ldots$ denotes the Neumann eigenvalues of a domain D (counting multiplicity), and $T: D \to TD$ is a linear transformation, then
$\|T\|_{op}^{-2} \lambda_k(D) \leq \lambda_k(TD) \leq \|T^{-1}\|_{op}^2 \lambda_k(D)$
for each k. This is because of the Courant-Fischer minimax characterisation of $\lambda_k(D)$ as the supremum of the infimum of the Rayleigh-Ritz quotient $\int_D |\nabla u|^2/ \int_D |u|^2$ over all codimension k subspaces of $L^2(D)$, and because any candidate $u \in L^2(D)$ for the Rayleigh-Ritz quotient on D can be transformed into a candidate $u \circ T^{-1} \in L^2(TD)$ for the Rayleigh-Ritz quotient on TD, and vice versa. (This is not the most sophisticated comparison theorem available – for instance, the Laugesen-Siudeja paper has a more delicate analysis involving comparison of one triangle against two reference triangles, instead of just one – but it is one of the easiest to state and prove.)
One corollary of this theorem is that if one has a spectral gap $\lambda_2(D) < \lambda_3(D)$ for some triangle D, then this spectral gap persists for all nearby triangles TD, as long as T has condition number less than $(\lambda_3(D)/\lambda_2(D))^{1/2}$. This should allow us to start rigorously verifying the simplicity of the eigenvalue for at least some of the regions of the above figure, and in particular in the vicinity of the points C,D,E,G,I,J,K,L,M where the eigenvalues are explicit. With numerics, we should be able to cover other areas as well, except in the vicinity of the equilateral triangle H where of course we have a repeated eigenvalue, but perhaps some perturbative analysis near that triangle can establish simplicity there too.
Comment by — June 12, 2012 @ 10:50 pm
• Stability of Neumann eigenvalues was studied by Banuelos and Pang (Electron. J. Diff. Eqns., Vol. 2008(2008), No. 145, pp. 1-13) and Pang (http://dx.doi.org/10.1016/j.jmaa.2008.04.026). They prove that multiplicity 1 is stable under small perturbations, while multiplicity 2 is not. Hence linear transformation above can be replaced with almost any small perturbation.
Comment by Bartlomiej Siudeja — June 12, 2012 @ 11:41 pm
• And a small last name correction: Siujeda should really be Siudeja. Here and in the main summary. [Oops! Sorry about that. Corrected, -T.]
Comment by Bartlomiej Siudeja — June 12, 2012 @ 11:46 pm
Joe and I have a working high-order finite element code (to give increased order of approximation as we increase the resolution). We’re working on a mapped domain (as described in a different thread), and are starting to explore the parameter space you suggested.
So far, no surprises, though we haven’t reached the perturbed equilateral triangle. We hope to post some results and graphics soon. Visualizing the results is taking some thought: for each point in parameter space, we want to record: whether the conjecture holds for the approximation; the approximate eigenvalue(s); the spectral gap; and some measure of the quality of the approximation.
Comment by — June 13, 2012 @ 4:25 am
3. Just a note: The rigorous numerical approach from [FHM1967] was used extensively to study eigenvalues of triangles by Pedro Antunes and Pedro Freitas. They studied various aspects of the Dirichlet spectrum using improvement of [FHM1967] due to Payne and Moler (http://www.jstor.org/stable/2949550). This method also works extremely well with bessel functions, even for far from degenerate triangles.
Comment by Bartlomiej Siudeja — June 12, 2012 @ 10:55 pm
The Fox, Henrici and Moler paper is beautiful, and was updated by Betcke and Trefethen in SIAM Review in 2005. Barnett has a more recent paper discussing the method of particular solutions, based on Bessel functions, applied to the Neumann problem. This is harder, and the numerics are more challenging.
Comment by — June 13, 2012 @ 4:30 am
4. Continuing the ideas for Comments 13,14, and 18 of the previous thread,
Consider a super-equilateral isosceles triangle (I will call it a 50-50-80 triangle to make things clear). As discussed in Comment 14 and 18, since we know the second eigenfunction is anti-symmetric we can instead consider the 40-50-90 right triangle with mixed Dirichlet-Neumann.
Two comments/ideas:
-It should also be that we can now “unfold” the 40-50-90 triangle into a 40-40-100 triangle with mixed Dirichlet-Neumann and, intuitively at least, it should be the case that the first non-trivial eigenfunction there is the eigenfunction we are looking for (Though while I think that "folding in" is always legal, appealing to the Rayleigh-Ritz formalism, in general "folding out" might introduce new first-non-trivial eigenfunctions). I am not sure if this really buys us anything though…
-Having reduced the problem to the Dirichlet-Neumann boundary case, maybe it is possible to implement the method of particular solutions as suggested by Nilima in Comment 13 (links provided there). The method of particular solutions, at least as presented in those papers, considered a Dirichlet boundary condition that an eigenfunction was chosen to try and match. For the mixed problem, we now have a Dirichlet boundary (the fact that the other two boundaries are Neumann shouldn’t matter as those are taken care of for free when choosing an eigenfunction consisting of “Fourier-Bessel” functions anchored at the opposite angle).
Comment by letmeitellyou — June 12, 2012 @ 11:48 pm
• On the first non-trivial eigenfunction for a triangle with mixed boundary conditions (two sides Neumann, and one side Dirichlet):
Intuitively, the following statement must be true for all such triangles: The maximum of the first non-trivial eigenfunction occurs at the corner opposite to the Dirichlet side.
Perhaps this is on the books somewhere? A probabilistic interpretation is as follows: The solution to the heat equation on the mixed-boundary triangle with initial condition $u_0 \equiv 1$ can be expressed probabilistically as
$u(x,t)=P_x(\tau>t)$
Where $\tau$ is the first time that $X_t$, a Brownian motion starting from $x$ and reflected on the Neumann sides, hits the Dirichlet side. Intuitively to keep your Brownian motion alive the longest you would start it at the opposite corner.
Of course this is all intuition and not a formal proof…
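To play with this intuition numerically, here is a rough Monte Carlo sketch (a toy example, not a proof): reflected Brownian motion in the right triangle with legs on the coordinate axes (Neumann) and hypotenuse $x+y=1$ (Dirichlet), chosen only because reflection across the axes is trivial to code. The estimated survival probability should come out largest for the start at the corner $(0,0)$ opposite the Dirichlet side.

```python
import math, random

def survival_prob(x0, y0, t_max=0.3, dt=1e-3, trials=3000):
    """Estimate P(tau > t_max) for Brownian motion started at (x0, y0), reflected on the
    legs x = 0 and y = 0 (Neumann) and killed on the hypotenuse x + y = 1 (Dirichlet)."""
    step = math.sqrt(dt)
    alive = 0
    for _ in range(trials):
        x, y, t = x0, y0, 0.0
        while t < t_max:
            x = abs(x + random.gauss(0, step))   # reflect off the leg x = 0
            y = abs(y + random.gauss(0, step))   # reflect off the leg y = 0
            if x + y >= 1.0:                     # hit the Dirichlet side: the particle dies
                break
            t += dt
        else:
            alive += 1
    return alive / trials

for start in [(0.0, 0.0), (0.2, 0.2), (0.4, 0.4), (0.6, 0.1)]:
    print(start, survival_prob(*start))
```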
Comment by letmeitellyou — June 13, 2012 @ 12:28 am
Probabilistic intuition is extremely convincing. In fact to make it even more appealing, think about a “regular” polygon that can be built by gluing matching Neumann sides of many triangles. We get a “regular” polygon with Dirichlet boundary conditions. By rotational symmetry maximum survival time must happen at the center. Of course not every triangle gives a nice polygon (angles never add up to 2pi), and the ones we need never give one. We would need a multiple cover to make a polygon for arbitrary rational angles, but the intuition is kind of lost this way.
Comment by Bartlomiej Siudeja — June 13, 2012 @ 12:53 am
• Yah I was thinking about this as well… you would get sort of a spiral staircase no? But I think there might be some issue with defining the Brownian motion on this spiral staircase as it might flip out near the origin (i.e. it will have some crazy winding number). Although, with probability 1, the Brownian motion won’t actually hit the origin so maybe it isn’t a big deal.
On page 472 of the paper [BT2005] Timo Betcke, Lloyd N. Trefethen, Reviving the Method of Particular Solutions, they mention how the eigenfunction for the wedge cannot be extended analytically unless an integer multiple of the angle is $2\pi$.
Comment by — June 13, 2012 @ 3:01 am
• Actually maybe a proof can be furnished using a synchronous coupling!
Consider a triangle with 1 side Dirichlet and 2 sides Neumann. Orient it so that it lies in the right half plane $x\geq 0$ and has its Dirichlet side along the y-axis (so that the point with the largest $x$-coordinate in the triangle is the opposite corner, where we claim the hotspot is).
Now consider two points $x$ and $y$ in the plane (I will abuse notation and call the points $x=(x_1,x_2)$ and $y = (y_1,y_2)$). Now consider a synchronously-coupled reflected Brownian motion $(X_t,Y_t)$ started from these two points (Synchronously coupled means that they are driven by the same Brownian motion but they might of course reflect at different times).
If $y$ lies to the right of $x$, it ought to be the case that always $Y_t$ lies to the right of $X_t$. consequently $X_t$ is more likely to hit the Dirichlet boundary than $Y_t$.
It therefore would follow that the place to start to take the longest to hit the boundary is the point furthest to the right, i.e. the opposite corner as predicted.
Notes:
-The issues with coupled Brownian motions dancing around each other should be avoided here.. in the acute triangle with all three sides Neumann this was an issue but here there is only one corner to play around/bounce off of.
-This is really stating the following monotonicity theorem: If $u_0\equiv 1$ then $u(x,t)$ is monotonically increasing from left to right for all $t>0$. There might be a more direct analytic proof.
-Seeing as this was a very simple argument it is likely to be already known (or I could be wrong about the coupling preserving the orientation).
Comment by letmeitellyou — June 13, 2012 @ 2:53 am
• Unfortunately, I think the synchronous coupling can flip the orientation of $X_t$ and $Y_t$. Suppose for instance that $X_t$ and $Y_t$ are oriented vertically, and $Y_t$ hits one of the Neumann sides oriented diagonally. Then $Y_t$ can bounce in such a way that it ends up to the left of $X_t$.
But perhaps some variant of this coupling trick should work…
Comment by — June 13, 2012 @ 4:13 am
Ah, good point! The points $x$ and $y$ would have to start such that the angle between them is smaller than the angle of the opposite side… this is actually a condition in the Banuelos-Burdzy paper as well (the “left-right” formalism is just a simpler way to discuss it). But I don’t think this will be an obstacle.
I will work on writing this up more clearly
Edit: While talking in terms of all these angles is messy, the succinct explanation is:
As long as the points $x$ and $y$ are such that the line segment connecting them is nearly horizontal (and it’s a wide range that is allowed based on the angles… basically anything from the angle you get if you ram them against the bottom line to the angle you get when you ram them against the top line), then what I wrote should hold. And that is sufficient to prove the lemma.
Comment by — June 13, 2012 @ 4:28 am
• Ok, here is a writeup which explains things more precisely
http://www.math.missouri.edu/~evanslc/Polymath/MixedTriangle
In there I only give an argument for the case that the angle opposite the Dirichlet side is acute… but I think the obtuse case should be true as well. It all boils down to whether the following probabilistic statement is true:
Consider the infinite wedge $\{(r,\theta)\vert 0\leq\theta\leq\gamma < \pi\}$. Let $(X_t,Y_t)$ be a synchronously coupled Brownian motion starting from points $x$ and $y$ such that (thought of as elements of the complex plane), $0\leq \arg(y-x) \leq \gamma$. Then $0\leq \arg(Y_t-X_t) \leq \gamma$ for all $t>0$.
Comment by — June 13, 2012 @ 5:48 am
• I think this does indeed work for acute angles, so this should settle the super-equilateral isosceles case, but I’ll try to recheck the details tomorrow. I think I can also recast the coupling arguments as a PDE argument based on the maximum principle – this doesn’t add anything as far as the results are concerned, but may be a useful alternate way of thinking about these sorts of arguments. (I come from a PDE background rather than a probability one, so I am perhaps biased in this regard.)
This type of argument may also settle the non-isosceles case in regimes in which we can show that the nodal line is reasonably flat, though I don’t know how one would actually try to show that…
Comment by — June 13, 2012 @ 6:44 am
• OK, I wrote up both a sketch of the Brownian motion argument and the maximum principle argument on the wiki at
http://michaelnielsen.org/polymath1/index.php?title=The_hot_spots_conjecture#Isosceles_triangles
So I think we can now move super-equilateral isosceles triangles (the lines HD, HJ, HK in the above diagram) into the “done” column, thus finishing off all the isosceles cases. (Actually the argument also works for the lowest anti-symmetric mode of the sub-equilateral triangles as well, though this is not directly relevant for the hot spots conjecture.) So now we have to start braving the cases in which there is no axis of symmetry to help us…
Comment by — June 13, 2012 @ 4:52 pm
I’m a bit confused about the PDE proof of Corollary 4. In the case where $x$ lies on the interior of $DB$, it is correct that $\nabla u$ is parallel to $DB$. However, we do not know its direction. If it has the same direction as the vector $DB$ then we are OK. But if its direction is $BD$ then it does not lie in the sector $S$.
Comment by — June 13, 2012 @ 8:08 pm
• By hypothesis, at this point $\nabla u$ lies on the boundary of the region $S_{\varepsilon(t+1)}$ (in particular, it is not in S). The only point on this boundary that is parallel to DB is the point which is a distance $\varepsilon(t+1)$ from the origin in the BD direction. (I should draw a picture to illustrate this but I was too lazy to do so for the wiki.)
Comment by — June 13, 2012 @ 8:18 pm
• Thanks for your clarification. I got that part.
I’m still confused though. In the proof, you basically performed the reflection arguments to consider the cases when $x$ lies on the interiors of $DB,\ AB$. By doing so, $x$ turns out to be an interior point of the domain and then it is pretty straightforward to deduce the result from classical maximum principle.
My concern is about the reflection arguments. Do you need sth like $\dfrac{\partial^2 u}{\partial n^2}(x)=0$ in order to do so?
Comment by — June 14, 2012 @ 5:15 am
• No, to reflect around a flat edge one only needs the Neumann condition $\partial u / \partial n = 0$. The second normal derivative $\partial^2 u / \partial n^2$ will reflect in an even fashion (rather than an odd fashion) around the edge, and so does not need to vanish; it only needs to be continuous in order to obtain a C^2 reflection. Once one has a C^2 reflection, one solves the eigenfunction equation in the classical sense in the unfolded domain, and elliptic regularity in that domain upgrades the regularity to $C^\infty$ (at least as long as one stays away from the corners).
Comment by — June 14, 2012 @ 2:58 pm
• Oh, I meant at the specific point $x$. Your argument should be OK for eigenfunctions. But here we are dealing with the heat equations, right?
In general, I think it would be really interesting to consider the heat equation $u_t - \Delta u =0$ in ${\rm ABC} \times (0,\infty)$ with the given initial data $u_0$ chosen in such a way that it is increasing along some specific directions. Let's say $(u_0)_\xi \ge 0$ for some unit vector $\xi$. If we can use the maximum principle to show that $u_\xi \ge 0$ by essentially killing the boundary cases then we are done.
Comment by — June 14, 2012 @ 4:13 pm
• Ah, fair enough, but even when reflecting a solution to the heat equation rather than an eigenfunction, one still gets a classical (C^2 in space, C^1 in time) solution to the heat equation on reflection as long as the Neumann boundary condition is satisfied (and providing that the original solution was already C^2 up to the boundary, which I believe can be established rigorously in the acute triangle case), and then by applying parabolic regularity instead of elliptic regularity one can ensure that this is a smooth solution. (Alternatively, one can unfold the triangle around the edge of interest at time zero, solve the heat equation with Neumann data on the unfolded kite region, and then use the uniqueness theory of the heat equation to argue that this solution is necessarily symmetric around the edge of unfolding, and that the restriction to the original triangle is the original solution to the heat equation.)
Comment by — June 14, 2012 @ 4:26 pm
Oh, thank you. Probably now I see my source of confusion. Probably one needs $\dfrac{\partial u_0}{\partial n}=0$ on ${\rm AB, \ BC, \ CA}$ in order to get higher regularity when reflecting. I was confused about this part.
So why don’t we proceed by considering the heat equation with Neumann boundary condition in ${\rm ABC} \times (0,\infty)$ with given initial data $u_0$ satisfying something like $\dfrac{\partial u_0}{\partial n}=0$ on ${\rm AB, \ BC, \ CA}$ and $(u_0)_\xi \ge 0$ for some unit direction $\xi$. If we then let $v=u_\xi$ then $v$ also solves the heat equation. We want to show that $v \ge 0$ by using the maximum principle. As we know, $\max_{{\rm ABC} \times [0,T]} v = \max \{ \max_{\rm{ABC}} (u_0)_\xi, \max_{\rm{AB,BC,CA} \times (0,T)} v\}$. And since one can omit the boundary cases by the reflection method, it should be OK.
Comment by — June 14, 2012 @ 6:24 pm
I have done some computations to support my argument above. The point now is to build a function $u_0: {\rm ABC} \to \mathbb{R}$ so that $\dfrac{\partial u_0}{\partial n}=0$ on the edges and $(u_0)_\xi \ge 0$ for some unit vector $\xi$. Then $u$ inherits this monotonicity property of $u_0$, namely $u_\xi \ge 0$ in ${\rm ABC} \times (0,\infty)$.
Here is the first computation in case ${\rm ABC}$ is an acute isosceles triangle like in Corollary 4. Let’s assume $A=(0,1),\ B=(-a,0), \ C=(a,0)$ for some $0<a \le 1$. Then we can build $u_0$ which is antisymmetric around ${\rm OA}$ as $u_0(x,y)=\sin(\frac{\pi}{2a}x) \cos (\frac{\pi}{2}y)^{1/a^2}$. It turns out that $(u_0)_x,\ \nabla u \cdot (\frac{1}{a},1) \ge 0$ for $x \ge 0$. This is exactly the needed function for Corollary 4.
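Here is a quick numerical spot-check that $\partial u_0/\partial n = 0$ on the three edges and that $(u_0)_x \ge 0$ at some sample points (a small Python sketch; the value $a=0.8$ and the sample points are arbitrary choices):

```python
import math

a = 0.8                                   # arbitrary choice with 0 < a < 1
k = math.pi / (2 * a)

def g(y):  return math.cos(math.pi * y / 2) ** (1 / a**2)
def gp(y): return -(math.pi / (2 * a**2)) * math.cos(math.pi * y / 2) ** (1 / a**2 - 1) * math.sin(math.pi * y / 2)

def u0_x(x, y): return k * math.cos(k * x) * g(y)     # partial derivative of u0 in x
def u0_y(x, y): return math.sin(k * x) * gp(y)        # partial derivative of u0 in y

# Neumann condition on the slanted edges AC (x = a(1-y), normal ~ (1/a, 1))
# and AB (x = -a(1-y), normal ~ (-1/a, 1)): both columns should print ~0
for y in (0.1, 0.4, 0.7):
    print(u0_x(a * (1 - y), y) / a + u0_y(a * (1 - y), y),
          -u0_x(-a * (1 - y), y) / a + u0_y(-a * (1 - y), y))

# Neumann condition on BC (y = 0) and the claimed monotonicity (u0)_x >= 0 for x >= 0
print(u0_y(0.3, 0.0),
      all(u0_x(x, y) >= 0 for x in (0.0, 0.1, 0.2) for y in (0.1, 0.4, 0.7)))
```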
I will try to build such $u_0$ for general acute triangle to see if the shape of ${\rm ABC}$ has anything to do with the direction $\xi$. It may then help us to see where the min and the max of the second eigenfunctions locate.
Comment by — June 15, 2012 @ 4:33 am
• Great! Actually, half of my graduate thesis was on reflected Brownian motion and the other half was on maximum principles for systems… so it is cool to see that they are related.
And on a more practical note, rigorously arguing the geometric properties of coupled Brownian motion can be a bit of a mess (involving Ito’s formula) so if it can be avoided by appealing to the maximum principle, so much the better.
Comment by — June 13, 2012 @ 9:51 pm
• After a night's rest, I think the statement I made above about "the infinite wedge preserving the angle" only holds true in the acute case. For the obtuse case, it isn't too hard to see how the angle won't always be preserved.
It still seems it should be the case that the extremum of the first eigenfunction for the mixed triangle should be at the vertex opposite the Dirichlet side… but at this point I suppose we only need to know the acute case.
Edit: Actually I think the obtuse case might follow from the following paper by Mihai Pascu which uses an exotic “scaling coupling” to prove Hot-Spots results for $C^{1,\alpha}$ convex domains which are symmetric about one axis.
http://www.ams.org/journals/tran/2002-354-11/S0002-9947-02-03020-9/home.html
Reflecting the triangle across its Dirichlet side would give such a domain provided that we could “smooth out the corners” without affecting the eigenfunction too much.
Comment by — June 13, 2012 @ 9:48 pm
5. Chris, I am not sure this is pertinent to your argument. But the regularity of the eigenfunctions for the mixed Dirichlet-Neumann case must degenerate, as the angle between the Dirichlet and Neumann sectors becomes near pi. To see this, think about a sector of a circle with Dirichlet data on one ray and the curvilinear arc, and Neumann on the remaining ray. The solution (by separation of variables) is again in terms of Bessel functions, but this time with fractional order. As long as the angle of the sector is less than pi, a reflection about the Neumann side would give you an eigenfunction problem with Dirichlet data, and you pick out the one with the right symmetry.
However, as the interior angle approached pi, after reflection the doubled sector gets closer to the circle with a slit. The resulting eigenfunction is not smooth.
This argument suggests that if, after reflections, you have a mixed boundary eigenproblem where the Dirichlet-Neumann segments are meeting at nearly flat angles, then there may be issues.
Comment by — June 13, 2012 @ 3:08 pm
• Well, for our application the Dirichlet-Neumann region of interest is a folded super-equilateral triangle, so one of the angles between Dirichlet and Neumann is a right angle (thus becomes not an angle at all when unfolded) and the other is between 30 and 45 degrees, so the regularity looks pretty good ($C^\infty$ at the right angle, $C^{2,\varepsilon}$ at the less-than-45-degree-angle, and $C^{3,\varepsilon}$ at the remaining angle between the two Neumann edges, which is less than 60 degrees. (From Bessel function expansion in a Neumann triangle we know that eigenfunctions have basically $\pi/\alpha$ degrees of regularity at an angle of size $\alpha$, and are $C^\infty$ when $\pi/\alpha$ is an integer. I think the same should also be true for solutions to the heat equation with reasonable initial data, though I didn’t check this properly.)
But, yes, things are probably more delicate once the Dirichlet-Neumann angles get obtuse. In the case when the Dirichlet boundary comes from a nodal line from a Neumann eigenfunction, the Dirichlet boundary should hit the Neumann boundary at right angles (unless it is in a corner or is somehow degenerate), so this should not be a major difficulty.
Comment by — June 13, 2012 @ 3:51 pm
• Hmm… it seems that we have shown that for a triangle with mixed boundary conditions (one side Dirichlet, two sides Neumann), the extremum of the first eigenfunction lies at the vertex opposite the Dirichlet side, provided that angle is acute.
Such a triangle could have that the angle between the Dirichlet side and one of the Neumann sides is arbitrarily close to $\pi$… but things should still be ok (provided what I wrote in the previous paragraph is true).
In your example, you have two sides which are Dirichlet and only one side which is Neumann… maybe that is what makes the difference?
Comment by — June 13, 2012 @ 9:56 pm
• Chris, I tried the case where there were two Neumann sides and one Dirichlet. Same problem, but my argument is for a mixed problem where the junction angle is nearing pi. As Terry points out, this concern may not arise for the argument you are trying.
Comment by — June 14, 2012 @ 3:55 am
6. We're exploring the parameter space corresponding to the region BDO in the triangle above. We're taking a set of discrete points in this parameter set, and verifying the conjecture as well as computing the spectral gap for the corresponding domains. To debug, we're taking a coarse spacing of pi/10 in each direction, but we will refine this. We're using piecewise quadratic polynomials in an H^1 conforming finite element method, with Arnoldi iterations with shift to get the smaller eigenvalues.
I have a quick question: is there some target spacing you'd like? This will influence some memory management issues.
Comment by — June 13, 2012 @ 10:01 pm
• Hmm, good question. As a test case for a back-of-the-envelope calculation, let's look at the range of stability for the isosceles right-angled (i.e. 45-45-90) triangle (point D in the diagram), say with vertices (0,0), (1,0), (1,1) for concreteness. This is half of the unit square and so the Neumann eigenvalues can in fact be read off quite easily by Fourier series. The second eigenvalue is $\pi^2$, with eigenfunction $\cos \pi x + \cos \pi y$, and then there is a third eigenvalue at $2\pi^2$ with eigenfunction $\cos \pi(x+y) + \cos \pi(x-y)$. So, by Comment 2, the second eigenvalue remains simple for all linear images TD of this triangle with condition number less than $\sqrt{2}$. To convert the 45-45-90 triangle into another right-angled $(\pi/2-\alpha, \alpha,\pi/2)$ triangle for some $0 < \alpha < \pi/2$ requires a transformation of condition number $\cot \alpha$, which lets one obtain simplicity of eigenvalues for such triangles whenever $\alpha > 0.615$, or about 35 degrees – enough to get about two thirds of the way from point D on the diagram to point C. This extremely back-of-the-envelope calculation suggests that increments of about 10 degrees (or about $\pi/20$) at a time might be enough to get a good resolution. But things may get worse as one approaches the equilateral triangle (point H) or the degenerate triangle (points B, F, O).
By permutation symmetry it should be enough to explore the triangle BDH instead of BDO. The Laugesen-Siudeja paper at http://arxiv.org/abs/0907.1552 has some figures on eigenvalues in the isosceles case (Fig 2 and Fig 3) that could be used for comparison.
Comment by — June 13, 2012 @ 10:37 pm
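Both ingredients of this back-of-the-envelope computation are easy to machine-check; a short sketch (sympy plus the standard library; the only inputs are the eigenfunctions and the condition-number threshold quoted above):
import sympy as sp
import math
x, y = sp.symbols('x y')
u2 = sp.cos(sp.pi * x) + sp.cos(sp.pi * y)               # stated second eigenfunction, eigenvalue pi^2
u3 = sp.cos(sp.pi * (x + y)) + sp.cos(sp.pi * (x - y))   # stated third eigenfunction, eigenvalue 2 pi^2
lap = lambda u: sp.diff(u, x, 2) + sp.diff(u, y, 2)
print(sp.simplify(lap(u2) + sp.pi**2 * u2))        # 0, i.e. -Delta u2 = pi^2 u2
print(sp.simplify(lap(u3) + 2 * sp.pi**2 * u3))    # 0, i.e. -Delta u3 = 2 pi^2 u3
# simplicity survives linear images of condition number < sqrt(2) (as quoted above);
# mapping the 45-45-90 triangle to a (pi/2 - alpha, alpha, pi/2) triangle costs cot(alpha), so:
alpha_min = math.atan(1 / math.sqrt(2))
print(alpha_min, math.degrees(alpha_min))          # ~0.6155 rad, i.e. ~35.26 degrees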
• Thanks, this is helpful. I'll set this running with pi/50, to be on the safe side. This will take a few hours to run.
Certainly the numerics suggest that the manner in which I approach the point for the equilateral triangle impacts the spectral gap. However, the resolution is not sharp enough to make this formal.
Comment by — June 13, 2012 @ 11:17 pm
• A detail which will not affect any analytical attack, but which should be noted for anyone else doing numerics on this.
As we search through parameter space, we look at what happens with a triangle with given edges – but we should probably fix one side, so we can compare eigenvalues. This is important since what we also want to examine is the spectral gap.
Joe and I have fixed one side of the acute triangle to have length 1. As we range through parameter space, the other sides, and the area of the triangles, change. We are recording this information.
May I recommend that if anyone else is doing numerics on this problem, they also make available the area of the triangles used (or at least one side) for each choice of angles? This way, we’ll be able to compare eigenvalues on triangles with the same angles.
Comment by — June 14, 2012 @ 4:07 am
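One concrete way to implement this normalisation (the helper name and the particular choice of fixed side below are just one possible convention, written here as a sketch): fix the side between the two specified angles to be the unit segment from $(0,0)$ to $(1,0)$ and report the area alongside the eigenvalues.
import math
def triangle_from_angles(alpha, beta):
    # triangle with angles (alpha, beta, pi - alpha - beta), intended for the acute region;
    # the side joining the alpha- and beta-vertices is fixed to the unit segment on the x-axis
    ta, tb = math.tan(alpha), math.tan(beta)
    C = (tb / (ta + tb), ta * tb / (ta + tb))    # apex
    area = 0.5 * C[1]                            # base 1, height = y-coordinate of the apex
    return (0.0, 0.0), (1.0, 0.0), C, area
print(triangle_from_angles(math.pi / 3, math.pi / 3))
# equilateral point H: apex (0.5, 0.866...), area sqrt(3)/4 = 0.4330...
With the area (or a fixed side) recorded, eigenvalues computed by different groups for the same angles can be rescaled and compared directly.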
7. I think I can show that the second eigenvalue is simple. It involves a few not-overly-complicated cases of comparisons between a given triangle and a few known cases (through linear mappings). There seems to be a way to do all of this using one very complicated comparison (with 4-5 reference triangles) and an extremely ugly upper bound for acute triangles (many pages to write it down), but that is probably not worth pursuing. I will try to write something tonight, at least one simple case. It appears that even around equilateral everything should be OK.
Comment by Bartlomiej Siudeja — June 13, 2012 @ 10:18 pm
• Here is a very rough write-up of just one case containing the equilateral, right isosceles, and some non-isosceles cases. I am sure this case can be optimized to include a larger area. Another 3-4 cases and all triangles should be covered. I will try to optimize the approach before I post all the cases. Near the end of the argument there is an ugly inequality involving the triangle parametrization. It should reduce to a polynomial inequality, so in the worst case we can evaluate a few (or a bit more) points and find rough gradient estimates.
http://pages.uoregon.edu/siudeja/simple.pdf
Comment by Bartlomiej Siudeja — June 14, 2012 @ 2:25 am
• I was playing with reference triangles a bit more, and it seems that one case with 3 reference triangles (near equilateral) and another with just 2 (near degenerate cases) should be enough to cover all acute triangles. Details to follow.
Comment by Bartlomiej Siudeja — June 14, 2012 @ 3:06 pm
• Great news! In addition to resolving one part of the hot spots conjecture, I think having a rigorous lower bound on the spectral gap $\lambda_3-\lambda_2$ will also be useful for perturbation arguments if we are to try to verify things by rigorous numerics.
Comment by — June 15, 2012 @ 1:26 am
• This thread is getting somewhat large!
I’d posted some of this information below, but this may be useful. A plot of the spectral gap for the approximated eigenvalues, $\lambda_3-\lambda_2$, multiplied by the area of the triangle $\Omega$ as we range through parameter space is here:
http://www.math.sfu.ca/~nigam/polymath-figures/SpectralGap.jpg
Comment by — June 15, 2012 @ 1:48 am
• The simplest proof that the eigenvalue is simple will have almost no gap bound. However, if one wants to get something for a specific triangle, one can use very complicated comparisons and upper bounds without much trouble. In particular the upper bound can include 3 or more known eigenfunctions. Except that even with just 2 eigenfunctions there is no way to write down the result from the Rayleigh quotient for the test function on a general triangle without using many pages. This is obviously not a problem for a specific triangle. The Mathematica package I mentioned in 12 was written specifically for those really ugly test functions.
Comment by Bartlomiej Siudeja — June 15, 2012 @ 2:26 am
8. In comment thread 4, Terry suggested looking at the nodal line for more arbitrary triangles, which would then divide the triangle into two mixed domains.
Running computer simulations (but only for the graphs $G_n$ as I am not set up to do more accurate numerical approximation), it seems that the nodal line is always near the sharpest corner. Perhaps it is even close to an arc? So then that mixed-boundary sub-domain might be handled by arguments similar to those in comment thread 4. But I am not sure what we would do on the other sub-domain as it would have a strange geometry…
A related question: Rather than divide into sub-domains by the nodal line, is it possible to divide with respect to another level curve, say $u = 2$? This would lead to the mixed boundary condition with Neumann boundary on some sides and “$u=2$” on some sides… but presumably the behavior of the heat flow on that region is the same as the mixed-Dirichlet-Neumann boundary heat flow after you subtract off the constant function $2$.
Comment by — June 13, 2012 @ 10:30 pm
• It may be easier to show that the extremum occurs at the sharpest corner than it is to figure out what happens to the other extremum (this was certainly my experience with the thin triangle case). See for instance Corollary 1(ii) of the Atar-Burdzy paper http://webee.technion.ac.il/people/atar/lip.pdf which establishes the extremising nature of the pointy corner for a class of domains that includes for instance parallelograms.
Once one considers level sets of eigenfunctions at heights other than 0, I think a lot less is known. For instance, the Courant nodal theorem tells us that the nodal line $\{u=0\}$ of a second eigenfunction is a smooth curve that bisects the domain into two regions, but this is probably false once one works with other level sets (though, numerically, it seems to be valid for acute triangles).
Comment by — June 13, 2012 @ 10:45 pm
• There is a paper of Burdzy at http://arxiv.org/pdf/math/0203017.pdf devoted to the study of the nodal line in regions such as triangles, with the main tool being mirror couplings; I haven’t digested it, but it does seem quite relevant to this strategy.
Comment by — June 14, 2012 @ 4:51 pm
9. I’ve been looking at the stability of eigenvalues/eigenfunctions with respect to perturbations, and it seems that the first Hadamard variation formula is the way to go.
A little bit of setup. Following the notation on the wiki, we perturb off of a “reference” triangle $\hat \Omega$ to a nearby triangle $B \hat \Omega$, where B is a linear transformation close to the identity. The second eigenfunction on $B\hat \Omega$ can be pulled back to a mean zero function on $\hat \Omega$ which minimizes the modified Rayleigh quotient
$\int_{\hat \Omega} \nabla^T u M \nabla u / \int_{\hat \Omega} u^2$
amongst mean zero functions, where $M = B^{-1} (B^{-1})^T$ is a symmetric perturbation of the identity matrix; this function then obeys the modified eigenvalue equation
$-\nabla \cdot M \nabla u = \lambda_2 u$
with boundary condition $n \cdot M \nabla u = 0$.
Now view B = B(t) as deforming smoothly in time with B(0)=I, then M also deforms smoothly in time with M(0)=I. As long as the second eigenvalue of the reference triangle is simple, I believe one can show that $\lambda$ and $u$ will also vary smoothly in time (after normalizing $u$ to have norm one). One can then solve for the derivatives $\dot \lambda_2(0), \dot u(0)$ at time zero by differentiating the eigenvalue equation and the boundary condition. What one gets is the first variation formulae
$\dot \lambda_2(0) = \int_{\hat \Omega} \nabla^T u(0) \dot M(0) \nabla u(0)$
and
$(-\Delta - \lambda_2(0)) \dot u(0) = \pi( \nabla \cdot \dot M(0) \nabla u(0) )$
subject to the inhomogeneous Neumann boundary condition
$n \cdot \nabla \dot u(0) = - n \cdot \dot M(0) \nabla u(0)$
where $\pi$ is the projection to the orthogonal complement of $u(0)$ (and to $1$) and $\dot u(0)$ is also constrained to this orthogonal complement.
I think that by using C^2 bounds on the reference eigenfunction $u(0)$, one should then be able to obtain $C^2$ bounds on the derivative $\dot u(0)$, though there is of course a deterioration if the spectral gap $\lambda_3(0)-\lambda_2(0)$ goes to zero. But this stability in C^2 norm should be enough to show, for instance, that if one has a reference triangle in which the second eigenfunction is simple and only has extrema in the vertices, then any sufficiently close perturbation of this triangle will also have this property. (Note from Bessel function expansion that if an extrema occurs at an acute vertex, then the Hessian is definite at that vertex, and so for any small C^2 perturbation of that eigenfunction, the vertex will still be the local extremum.) Thus, for instance, we should now be able to get the hot spots conjecture in some open neighborhood of the open intervals BD and DH (and similarly for permutations). Furthermore it should be possible to quantify the size of this neighborhood in terms of the spectral gap.
This argument doesn’t quite work for perturbations of the equilateral triangle H due to the repeated eigenvalue, but I think some modification of it will.
EDIT: I think the equilateral case is going to be OK too. The variation formulae will control the portion of $\dot u(0)$ in the complement of the second eigenspace nicely, and so one can write the second eigenfunction of a perturbed equilateral triangle (after changing coordinates back to the reference triangle) as the sum of something coming from the second eigenspace of the original equilateral triangle, plus something small in C^2 norm. I think there is enough “concavity” in the second eigenfunctions of the original equilateral triangle that one can then ensure that for any sufficiently small perturbation of that triangle, the second eigenfunction only has extrema at the vertices. Will try to write up details on the wiki later.
Comment by — June 14, 2012 @ 4:21 pm
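As a quick consistency check of the first variation formula in a case where everything is explicit (a $1 \times 2$ rectangle rather than a triangle, chosen only because its second Neumann eigenvalue is simple and its eigenfunctions are elementary), take $\hat \Omega = [0,1] \times [0,2]$ and $B(t) = \mathrm{diag}(1, 1+t)$, so that $M(t) = \mathrm{diag}(1, (1+t)^{-2})$ and $\dot M(0) = \mathrm{diag}(0,-2)$. The second eigenfunction is $u(0) = \cos(\pi y / 2)$ (already $L^2$-normalised and mean zero), and the formula gives
$\dot \lambda_2(0) = \int_{\hat \Omega} \nabla^T u(0) \dot M(0) \nabla u(0) = -2 \int_0^1 \int_0^2 \frac{\pi^2}{4} \sin^2 \frac{\pi y}{2} \, dy \, dx = -\frac{\pi^2}{2},$
which agrees with differentiating the exact eigenvalue $\lambda_2(t) = \pi^2 / (4(1+t)^2)$ of the stretched rectangle $[0,1] \times [0, 2(1+t)]$ directly at $t=0$.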
• Using raw numerics (the finer-resolution calculation is not yet done), here is what I observe:
one can perturb from the equilateral triangle in a symmetric way, i.e., by changing one angle by $-\epsilon$ and the others by $\epsilon/2$. Or one can perturb each angle differently. The spectral gap changes rather differently, depending on how one perturbs.
I should revisit these calculations by scaling by the Jacobian of the mapping B of the domain in each case (following the Courant spectral gap result).
Comment by — June 14, 2012 @ 6:03 pm
• Here are some graphics, to explore the parameter region (BDH) above. To enable visualization, I'm plotting data as functions of $(\alpha,\beta)$. I'm taking a rectangular grid oriented with the sides BD and DH, with 25 steps in each direction. So there are $(25)^2$ grid points.
Each parameter triple $(\alpha,\beta,\gamma)$ yields a triangle $\Omega$. I'm fixing one side to be of unit length. For details, please see the wiki.
For each triangle, the second Neumann eigenvalue and third eigenvalue (first and second non-zero Neumann eigenvalues) are computed. I also kept track of where $\max|u|$ occurs, where $u$ is the second eigenfunction. This is because numerically I can get either $u$ or $-u$.
A plot of the 2nd Neumann eigenvalue as we range through parameter space is here: http://www.math.sfu.ca/~nigam/polymath-figures/Lambda2.jpg
A plot of the 3rd Neumann eigenvalue as we range through parameter space is here: http://www.math.sfu.ca/~nigam/polymath-figures/Lambda3.jpg
A plot of the spectral gap, $\lambda_3-\lambda_2$, multiplied by the area of the triangle $\Omega$ as we range through parameter space is here:
http://www.math.sfu.ca/~nigam/polymath-figures/SpectralGap.jpg
One sees that the eigenvalues vary smoothly in parameter space, and that the spectral gap is largest for acute triangles without particular symmetries.
For each triangle, I also kept track of the physical location of max|u|. If it went to the corner (0,0), I allocated a value of 1; if it went to (1,0) I allocated a value of 2, and if it went to the third corner, I allocated 3. If the maximum was not reported to be at a corner, I put a value of 0.
http://www.math.sfu.ca/~nigam/polymath-figures/LocationOfAbsMax.jpg
show the result. Note that we obtain some values of 0 inside parameter space. Please DON'T interpret this to mean the conjecture fails. Rather, this is a signal that the eigenfunction is likely flattening out near a corner, and that the numerical values at points near the corner are very close.
I’m running these calculations with finer tolerances now, but it will take some hours.
Comment by — June 14, 2012 @ 8:49 pm
• Hi,
I think there may be something to do using analytic perturbation theory.
The first remark is that, using a linear diffeomorphism, we can pull back the Dirichlet energy form ($\int_T \left | \nabla u \right |^2 dxdy$) on a moving triangle $T$ to a quadratic form on a fixed triangle $T_0$ that can be written $\int_{T_0} {}^t\nabla u A \nabla u \, dxdy$ for some symmetric matrix $A$, so that studying the Neumann Laplacian on $T$ amounts to studying the latter quadratic form restricted to $H^1(T_0)$ with respect to the standard Euclidean scalar product. If we now let $A$ depend analytically on a real parameter $t$ then we get a real-analytic family in the sense of Kato-Rellich, so that the eigenvalues (and eigenvectors) are organized into real-analytic branches.
Let $(E(t), u(t))$ be such an analytic eigenbranch; we define the function $f$ by $f(t)= \frac{ \| u(t)\|_{ \infty,\partial T_0 } }{ \| u(t)\|_{\infty,T_0} }$ (observe that now everything is defined on $T_0$) and suppose we can prove that this function also is analytic (that is, for any choice of analytic perturbation and any corresponding eigenbranch). Then I think we can prove the following statement: "For any triangle $T$ there is a Neumann eigenfunction whose maximum is on the boundary". The proof would be as follows. Start from your triangle $T$ and move one of its vertices along the corresponding altitude. This defines an analytic perturbation, and for any $t$ small enough the obtained triangle is obtuse. For $t$ very small the second eigenbranch is simple and satisfies the hot spots conjecture, so that if we follow this particular branch, the corresponding $f$ is identically $1$ for $t$ small enough, and since it is analytic it is always $1.$ The claimed eigenfunction is the one that corresponds to this eigenbranch (because of crossings, it need not be the second one).
If we want to prove the real hot spots conjecture we can try to argue in the opposite direction: start from the second eigenvalue and follow the same perturbation. We now have to prove the following things:
1- For $t$ small the branch becomes simple so that it corresponds to the $N$-th eigenvalue,
2- For any $N$ and any $t$ small enough the $N$-th eigenfunction has its maximum on the boundary.
Of course this line of reasoning relies heavily on the analyticity of $f$, which I haven't been able to establish yet (observe that $t \mapsto u(t)$ is analytic with values in $H^1$, which is not good enough for $C^0$ bounds). Recently I have been thinking that maybe we could instead try to prove that $f_r$ is analytic, where the subscript $r$ means that we have removed a ball of that radius near each vertex. It should be easier to prove that this one is analytic (but then we need to prove something on the maximum of $u_2$ for any obtuse triangle when we remove a ball near each vertex).
I finish by pointing at two references on multiplicities in the spectrum of triangles.
First some advertisement
- Hillairet-Judge Simplicity and asymptotic separation of variables, CMP, 2011, 302(2) (Erratum, CMP, 2012, 311 (3))
- Berry-Wilkinson Diabolical points in the spectra of triangles, Proc. Roy. Soc. London, 1984, 392(1802), pp.15-43
Comment by — June 15, 2012 @ 11:57 am
• [I was editing this comment and I accidentally transferred ownership of it to myself, which is why my icon appears here. Sorry, please ignore the icon; this is Nilima's post. - T.]
An analytic perturbation argument from known cases would certainly be great! I thought about a similar argument for the thin triangle case (http://michaelnielsen.org/polymath1/index.php?title=The_hot_spots_conjecture, under 'thin not-quite-sectors'). But I was thinking about perturbing from a sector to the triangle, and you're thinking about perturbing from one triangle to another.
Let’s see if I follow your argument. Following the notation in (http://michaelnielsen.org/polymath1/index.php?title=The_hot_spots_conjecture, under ‘reformulation on a reference domain’), one can replace the reference triangle by any other. One then shows analyticity of the eigenvalues with respect to perturbations in the mapping B, and shows the domain of analyticity is large enough to cover all acute triangles. Is this correct?
Comment by — June 15, 2012 @ 2:59 pm
• I think it may be difficult to show analyticity of a sup norm; note that even the sup of two analytic functions $\max(f(t),g(t))$ is not analytic when the two functions cross (e.g. $|t| = \max( t, -t)$). The enemy here is that as one varies t, a new local extremum gets created somewhere in the interior of the triangle, and eventually grows to the point where it overtakes the established extremum on the vertices, creating a non-analytic singularity in the L^infty norm.
However, I think one does have analyticity as long as the extrema are unique (up to symmetry, in the isosceles case) and non-degenerate (i.e. their Hessian is definite), and the eigenvalue is simple. This is for instance the case for the non-equilateral acute isosceles and right-angled triangles, where we know that the eigenvalues are simple and the extrema only occur at the vertices of the longest side, and a Bessel expansion at a (necessarily acute) extremal vertex shows that any extremum is non-degenerate (it looks like a non-zero scalar multiple of the 0th Bessel function $J_0(\sqrt{\lambda} r)$, plus lower order terms which are $o(r^2)$ as $r \to 0$). Certainly in this setting, the work of Banuelos and Pang ( http://eudml.org/doc/130789;jsessionid=080D9E5423278BA5ACFC818847CA97FE ) applies, and small perturbations of the triangle give small perturbations of the eigenfunction in L^infty norm at least. This (together with uniform C^2 bounds for eigenfunctions in a compact family of acute triangles, which is sketched on the wiki, and is needed to handle the regions near the vertices) is already enough to give the hot spots conjecture for sufficiently small perturbations of a right-angled or non-equilateral acute isosceles triangle.
The Banuelos-Pang results require the eigenvalue to be simple, so the perturbation theory of the equilateral triangle (in which the second eigenvalue has multiplicity 2) is not directly covered. However, it seems very likely that for any sufficiently small perturbation of the equilateral triangle, a second eigenfunction of the perturbed triangle should be close in L^infty norm to _some_ second eigenfunction of the original triangle (but this approximating eigenfunction could vary quite discontinuously with respect to the perturbation). Assuming this, this shows the hot spots conjecture for perturbations of the equilateral triangle as well, because _every_ second eigenfunction of the equilateral triangle can be shown to have extrema only at the vertices, and to be uniformly bounded away from the extremum once one has a fixed distance away from the vertices (this comes from the strict concavity of the image of the complex second eigenfunction of the equilateral triangle, discussed on the wiki).
The perturbation argument also shows that in order for the hot spots conjecture to fail, there must exist a “threshold” counterexample of an acute triangle in which one of the vertex extrema is matched by a critical point either on the edge or interior of the triangle, though it is not clear to me how to use this information.
Comment by — June 15, 2012 @ 3:53 pm
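For the record, the nondegeneracy assertion at an acute vertex comes from the standard separation-of-variables expansion in a Neumann sector: in polar coordinates $(r,\theta)$ centred at a corner of opening angle $\alpha$, with Neumann conditions on both edges,
$u(r,\theta) = a_0 J_0(\sqrt{\lambda}\, r) + \sum_{k \geq 1} a_k J_{k \pi / \alpha}(\sqrt{\lambda}\, r) \cos(k \pi \theta / \alpha),$
and since $J_0(\sqrt{\lambda}\, r) = 1 - \frac{\lambda r^2}{4} + O(r^4)$ while the $k \geq 1$ terms are $O(r^{\pi/\alpha}) = o(r^2)$ when $\alpha < \pi/2$, any vertex with value $a_0 = u(\mathrm{vertex}) \neq 0$ is automatically a nondegenerate local extremum.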
• Thanks! Actually what I had in mind was trying to prove that $t\mapsto u(t)$ is analytic with values in $C^0(T_0)$, but then I imprudently jumped to thinking that this would imply the analyticity of the sup norm. So I am not sure there is something to save from the analyticity approach I was suggesting.
Except maybe the following fact: I think that the set of triangles such that $\lambda_2$ is simple is open and dense (and also of full measure for a natural class of Lebesgue measures). We have proved that for any mixed Dirichlet-Neumann boundary condition … except Neumann everywhere! I have a sketch of proof for the latter case but I never carried out the details (so there may be some bugs in the argument).
Last thing concerning analyticity of the eigenvalues and eigenfunctions: this holds only for one-parameter analytic families of triangles. I don't think the eigenvalues can be arranged to be analytic on the full parameter space (because there are crossings).
Comment by — June 15, 2012 @ 5:01 pm
10. I would like to propose a further probabilistic intuition, based on comment 15 of thread 1, and another possibility for attacking the problem. It is based on relating free Brownian motion with reflecting Brownian motion.
If $B$ is a one dimensional Brownian motion, and we define the floor function $\lfloor \cdot \rfloor$ and the zig-zag function $f(x) = | x - 2 \lfloor (x+1) / 2 \rfloor |$, then $R=f(B)$ is a reflecting Brownian motion on $[0,1]$ (as can be rigorously proved using stochastic calculus and local time, for example) and its density is the fundamental solution of the heat equation with Neumann boundary conditions. To write an expression for the transition density $p_t^R$ of $R$ in terms of the transition density $p_t$ of $B$, write $y_1\sim y_2$ if $f(y_1)=f(y_2)$ and note that
(1) $p_t^R(x,y)=\sum_{\tilde y\sim y} p_t(x,\tilde y)$
if $x,y\in (0,1)$ but
$p_t^R(x,1)=2\sum_{\tilde y\sim 1}p_t(x,\tilde y)$
This explains why the boundary points 0 and 1 accumulate (or trap) heat at twice the rate of interior points, and I believe that from here one can conceptually prove hot spots in the very simple case of the interval.
For two dimensional reflecting Brownian motion, one needs a similar reflection function. To construct it: think first of an equilateral triangle constructed as a kaleidoscope with 3 sides of equal length. Each point inside the triangle gives rise to a lattice of points in the plane which will be identified via the equivalence relation $\sim$. We then write the fundamental solution to the heat equation with Neumann boundary condition on the triangle via formula (1) for points in the interior of the triangle. However, points at the sides of the triangle accumulate heat at twice the rate while corner points trap it at 6 times the rate (because the triangle is equilateral). In general one would hope that a corner of angle $\alpha$ gets heated $\lfloor 2\pi/\alpha\rfloor$ times faster than interior points.
I think that stochastic calculus is not yet mature enough to prove that reflecting Brownian motion in the triangle can be constructed by applying the reflection $\sim$ to free Brownian motion (lacking a multidimensional Tanaka formula). However, one can see if formula (1) does give the fundamental solution to the heat equation with Neumann boundary conditions.
Comment by — June 14, 2012 @ 4:23 pm
• Hmm, I’m not so sure about the factor of 2 in the formula for $p_t^R(x,1)$, as this would imply that the heat kernel is discontinuous at the boundary, which I’m pretty sure is not the case. Note that the epsilon-neighbourhood of a boundary point in one dimension is only half as large as the epsilon-neighbourhood of an interior point, and so I think this factor of 1/2 cancels out the factor of 2 that one is getting from the folding coming from the zigzag function. So the heating at the endpoints is coming more from the convexity properties of the heat kernel than from folding multiplicity.
Still, this does in principle give an explicit formula for the heat kernel on the triangle as some sort of infinitely folded up version of the heat kernel on something like the plane (but one may have to work instead with something more like the universal cover of a plane punctured at many points if the angles do not divide evenly into pi). One problem in the general case is that the folding map becomes dependent on the order of the edges the free Brownian motion hits, and so cannot be represented by a single map f unless one works in some complicated universal cover.
Comment by — June 14, 2012 @ 4:38 pm
• I agree, the formula for $p_t^R(x,1)$ shouldn't have the factor two and the intuition there is incorrect. However, it does suggest a new one: since the heat kernel $p_t$ decays rapidly, endpoints with nearby reflections will accumulate more density (the notion of nearby depends on the amount of time elapsed), and corners of angle $\alpha$ are points where there are (mainly) $2\pi/\alpha$ nearby reflections.
Also, maybe one does not need to leave the plane to construct the reflecting Brownian motion, since two-dimensional free Brownian motion does not visit the corners of the triangle (by polarity of countable sets), so one only needs to keep changing the reflection edge as soon as a new one is reached. The transition density does indeed seem more complicated, but perhaps (1) might provide sensible approximations.
Comment by — June 14, 2012 @ 5:44 pm
• It is true that once one shows that the Neumann heat kernel is increasing toward the boundary, the hot-spots conjecture is true. But this approach is much harder than just proving the hot-spots conjecture. Until very recently there was the Laugesen-Morpurgo conjecture stating that the Neumann heat kernel for a ball is increasing toward the boundary. This was settled by Pascu and Gageonea (http://www.sciencedirect.com/science/article/pii/S0022123610003526) in 2011 using mirror couplings.
The reflection argument seems very appealing, but even for an interval I have not seen a proof that the Neumann heat kernel is increasing using the explicit series of Gaussian terms coming from reflections. The above paper also settles the interval case. One can also use the Dirichlet heat kernel to prove this (http://pages.uoregon.edu/siudeja/neumann.pdf, slides 6 and 7).
For triangles, reflections are not enough to cover the plane. You may have to also flip the reflected triangle along the line perpendicular to the reflection side in order to ensure that you can cover the plane. This however means that you lose continuity on the boundary.
Comment by Bartlomiej Siudeja — June 14, 2012 @ 6:07 pm
• A small correction: "the diagonal of the Neumann heat kernel should be increasing toward the boundary", so $p_t^R(x,x)$ should be increasing as $x$ goes to the boundary.
Comment by Bartlomiej Siudeja — June 14, 2012 @ 8:32 pm
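A one-dimensional illustration of this statement that is easy to play with numerically: the Neumann heat kernel on $[0,1]$ written as an image sum over reflections (a sketch only; the truncation K and the time t are arbitrary illustrative choices).
import numpy as np
def neumann_heat_kernel(t, x, y, K=50):
    # free Gaussian kernel folded over the images of y under repeated reflection at 0 and 1
    g = lambda z: np.exp(-z**2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)
    ks = np.arange(-K, K + 1)
    return g(x - (2 * ks + y)).sum() + g(x - (2 * ks - y)).sum()
t = 0.1
print([round(neumann_heat_kernel(t, x, x), 4) for x in np.linspace(0.0, 0.5, 6)])
# the diagonal p_t(x,x) decreases monotonically from the endpoint x=0 to the midpoint x=1/2,
# i.e. it is largest at the boundary, matching the interval case of the statement quoted above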
• It does seem like such a procedure would be hard (perhaps hopelessly so) to implement for triangles that don’t tile the plane nicely (which are most triangles) for the reasons given in the other replies. But if such an argument were to work it would first need to be worked out for the case of an equilateral triangle. I’d be interested in seeing such an argument but I am not sure how it would go…
Suppose the initial heat is a point mass at one corner, and draw out a full tiling of the plane. Then the unreflected heat flow would have a nice Gaussian distribution, and the reflected heat flow could be recovered by folding in all the triangles… but how would you show that the hottest point upon folding is at the corner you started the heat flow at? You have an infinite sum and it is not the case that each triangle in this sum has its maximum at that corner…
Comment by — June 14, 2012 @ 8:26 pm
11. A couple of people asked for some pictures of nodal lines.
Here are some on triangles which aren't isosceles or right or equilateral, and whose angles aren't within pi/50 of those special cases, either:
http://www.math.sfu.ca/~nigam/polymath-figures/nodal-1.jpg
http://www.math.sfu.ca/~nigam/polymath-figures/nodal-2.jpg
Here are the nodal lines corresponding to the 2nd and 3rd Neumann eigenfunctions on a nearly equilateral triangle. Note the multiplicity of the 2nd eigenvalue is 1, but the spectral gap $\lambda_3-\lambda_2$ is small. I found these interesting.
http://www.math.sfu.ca/~nigam/polymath-figures/nearly-equilateral-1.jpg
http://www.math.sfu.ca/~nigam/polymath-figures/nearly-equilateral-2.jpg
Comment by — June 14, 2012 @ 5:16 pm
• Is the nearly equilateral triangle isosceles? If it is, the nearly antisymmetric case should not look the way it does. Every eigenfunction on an isosceles triangle must be either symmetric or antisymmetric. Otherwise the corresponding eigenvalue is not simple. It is not impossible that the third one is not simple, but for a nearly equilateral triangle that is extremely unlikely. Here the antisymmetric case is the second eigenvalue, so it must be antisymmetric. Even if this triangle is not isosceles, the change in the shape of the nodal line is really huge.
Comment by Bartlomiej Siudeja — June 15, 2012 @ 3:51 am
• No, the nearly equilateral triangle is not isosceles.
Comment by — June 15, 2012 @ 3:54 am
• Also, do you have bounds on how the nodal lines should change as we perturb away from the equilateral triangle in an asymmetric fashion? This would be interesting to compare with.
Comment by — June 15, 2012 @ 4:01 am
• No, I do not think I have anything for nodal lines. One of the papers by Antunes and Freitas may have something, but they mostly concentrate on the way eigenvalues change. Nothing for nodal lines. It is quite surprising, and good for us, that the change is so big.
Comment by Bartlomiej Siudeja — June 15, 2012 @ 4:08 am
12. In case someone wants to see eigenfunctions of all known triangles and a square (right isosceles triangle), I have written a Mathematica package http://pages.uoregon.edu/siudeja/TrigInt.m. See ?Equilateral and ?Square for usage. A good way to see nodal domains is to use RegionPlot with eigenfunction>0. The package can also be used to facilitate linear deformations for triangles. In particular Transplant moves a function from one triangle to another (put {x,y} as the function to see the linear transformation itself). There is a T[a,b] notation for the triangle with vertices (0,0), (1,0) and (a,b). The function Rayleigh evaluates the Rayleigh quotient of a given function on a given triangle (with one side on the x-axis). There are also other helper functions for handling triangles. Everything is symbolic so parameters can be used. Put this in Mathematica to import the package:
AppendTo[$Path, ToFileName[{$HomeDirectory, "subfolder", "subfolder"}]];
<< TrigInt`
The first line may be needed for the Mathematica kernel to see the file. After that
Equilateral[Neumann,Antisymmetric][0,1] gives the first antisymmetric eigenfunction
Equilateral[Eigenvalue][0,1] gives the second eigenvalue
Comment by Anonymous — June 14, 2012 @ 8:15 pm
• There is also a function TrigInt which is much faster than regular Int for complicated trigonometric functions. Limits for the integral can be obtained using Limits[triangle]. For integration it might be a good idea to use extended triangle notation T[a,b,condition] where condition is something like b>0.
Comment by Bartlomiej Siudeja — June 14, 2012 @ 8:20 pm
• I’m not a Mathematica user, so my question may be naive. Are the eigenfunctions being computed symbolically by Mathematica?
If not, could you provide some details on what you’re using to compute the eigenfunctions/values?
It would be great if you could post this information to the Wiki.
Comment by — June 15, 2012 @ 4:04 am
• They are computed using a general formula. The nicest write-up is probably in the series of papers by McCartin. All eigenfunctions look almost the same: a sum of three terms, each a product of two cosines/sines. The only difference is the integer coefficients inside the trig functions. The same formula works for Dirichlet, just with slightly different numbers.
Comment by Bartlomiej Siudeja — June 15, 2012 @ 4:12 am
• Here is some code from the package (with small changes for readability). First there are some convenient definitions.
h=1;
r=h/(2Sqrt[3]);
u=r-y;
v=Sqrt[3]/2(x-h/2)+(y-r)/2;
w=Sqrt[3]/2(h/2-x)+(y-r)/2;
Then a function that contains all cases; #1 and #2 are just integers, and f and g are trig functions:
EqFun[f_,g_]:=f[Pi (-#1-#2)(u+2r)/(3r)]g[Pi (#1-#2)(v-w)/(9r)]+
f[Pi #1 (u+2r)/(3r)]g[Pi (2#2+#1)(v-w)/(9r)]+
f[Pi #2 (u+2r)/(3r)]g[Pi (-2#1-#2)(v-w)/(9r)];
All the cases:
Equilateral[Neumann,Symmetric]=EqFun[Cos,Cos]&;
Equilateral[Neumann,Antisymmetric]=EqFun[Cos,Sin]&;
Equilateral[Dirichlet,Symmetric]=EqFun[Sin,Cos]&;
Equilateral[Dirichlet,Antisymmetric]=EqFun[Sin,Sin]&;
The eigenvalue is the same regardless of the case. For Neumann you need 0<=#1<=#2. For Dirichlet: 0<#1<=#2. And the antisymmetric case cannot have #1=#2.
Equilateral[Eigenvalue]=Evaluate[4/27(Pi/r)^2(#1^2+#1 #2+#2^2)]&;
Comment by Bartlomiej Siudeja — June 15, 2012 @ 4:22 am
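As a cross-check against the side-length form that (if I recall the McCartin papers correctly) is usually quoted, $\lambda_{m,n} = \frac{16\pi^2}{9h^2}(m^2 + mn + n^2)$ for side length $h$, the $r$-based formula above gives the same numbers; for instance with $h=1$:
import math
h = 1.0
r = h / (2 * math.sqrt(3))
lam = lambda m, n: 4.0 / 27.0 * (math.pi / r) ** 2 * (m * m + m * n + n * n)
print(lam(0, 1), 16 * math.pi ** 2 / (9 * h ** 2))   # both ~17.546, the first nonzero Neumann eigenvalue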
• I'm sorry, I'm really not familiar with this package. Am I correct, reading the script above, that you are computing an *analytic* expression for the eigenvalue? That is, if I give three angles of an arbitrary triangle (a, b, pi-a-b), your script renders the Neumann eigenvalue and eigenfunction in closed form?
Or is this code for the cases where the closed form expressions for the eigenvalues are known (equilateral, right-angled, etc)? This is also very nice to have, for verification of other methods of calculation.
When we map one triangle to another, the eigenvalue problem changes (see the Wiki, or previous discussions here). It is great if you have a code which can analytically compute the eigenvalues of the mapped operator on a specific triangle, or equivalently, eigenvalues on a generic triangle.
Comment by — June 15, 2012 @ 4:36 am
• This package is not fancy at all. It has formulas for the equilateral, right isosceles, and half-equilateral triangles. These are known explicitly. For other triangles it just helps evaluate the Rayleigh quotient on something like f composed with T (linear). This just gives upper bounds for eigenvalues. Or it might help speed up calculations for the Hadamard variation, since you do not need to think about what the linear transformation from one triangle to another is. And it can evaluate the Rayleigh quotient on a transformed triangle. It was handy for proving bounds for eigenvalues, and for seeing nodal domains in the known cases.
I wish I had an analytic formula for eigenvalues on an arbitrary triangle.
Comment by Bartlomiej Siudeja — June 15, 2012 @ 4:42 am
• OK, thanks for the clarification! I was thrown by your initial comment about it accepting all parameters. Now I understand that you’re able to get good bounds, rather than the exact eigenvalues.
Comment by — June 15, 2012 @ 5:23 am
• The fact that there are quite a few known cases means that you can make a linear combination of known eigenfunctions (each transplanted to a given triangle) and evaluate the Rayleigh quotient. PDEToolbox is not a benchmark for FEM, but I have seen cases where 16GB of memory was not enough to bring the numerical result below the upper bound obtained from a test function containing 5 known eigenfunctions.
Comment by Bartlomiej Siudeja — June 15, 2012 @ 5:12 am
• PDEtoolbox is great for generating a quick result, but not for careful numerics, and it doesn’t do high order. Yes, you could wait a long while to get good results if you relied solely on PDEToolBox. Joe Coyle (whose FEM solver we’re using) has implemented high-accuracy conforming approximants, and we’re keeping tight control on residual errors. Details of our approximation strategy are on the Wiki. I’m also thinking of implementing a completely non-variational strategy, so we have two sets of results to compare.
Comment by — June 15, 2012 @ 5:29 am
• I used to use PDEToolbox for visualizations, but I no longer have a license for it. Besides, it does not have 3D, and eigenvalues in 3D behave much worse than in 2D. I have written a wrapper for the eigensolver from the FEniCS project (http://fenicsproject.org/). It is most likely not good for rigorous numerics, and I am not even a beginner in FEM. However, it works perfectly for plotting. In particular one can see that the nodal line moves away very quickly from the vertices. The nearly equilateral case Nilima posted must indeed be extremely close to equilateral. While Nilima crunches the data, anyone who wants to see more pictures is welcome to use my script. It is a rough implementation with not-so-good documentation, but it can handle many domains with any boundary conditions (also mixed). There is a readme file. Download link: http://pages.uoregon.edu/siudeja/fenics.zip. I have only tested this on a Mac, so I am not sure it will work on Windows or Linux, though it should.
To get a triangle one can use
python eig.py tr a b -N -s number -m -c3 -e3
tr is the domain specification, a,b is the third vertex, -N gives Neumann, -s number is the number of triangles, -m shows the mesh, -c3 gives contours instead of surface plots (3 contours are good for the nodal line), and -e3 gets 3 eigenvalues.
There are many options. python eig.py -h lists all of them with minimalistic explanations.
Comment by Bartlomiej Siudeja — June 15, 2012 @ 4:57 pm
13. Some random thoughts about the nodal line:
1) I believe the Nodal Line Theorem guarantees that the nodal line consists of a curve with end points on the boundary and which divides the triangle into two sub-regions. It might be possible to prove that in fact the two endpoints of the nodal line lie on different sides of the triangle. (The alternate case, that the nodal line looks like a handle sticking out from one of the edges, feels wrong… in fact maybe it is the case that for no domain ever is it the case that the two endpoints of the nodal line lie on the same straight line segment of its boundary).
2) If 1) were true, then it would follow that the nodal line does in fact straddle one of the corners. Moreover, we know a priori that the nodal line is orthogonal to the boundary (so at least locally near the boundary it starts to "bow out"). The nodal line ought not to be too serpentine… that would cause the second eigenfunction to have a large $H^1$-norm while allowing the $L^2$-norm to stay small… which would violate the Rayleigh-Ritz formulation of the 2nd eigenfunction.
3) Since the nodal line is “bowed out” at the boundary, and has incentive not to be serpentine, it seems like it shouldn’t “bow in”. If we could show that the slope/angle of the nodal-line stays within a certain range then the arguments used for the mixed Dirichlet-Neumann triangle could be applied to show that the extremum of the eigenfunction in this sub-region in fact lies at the corner the nodal line is straddling.
Of course this is all hand-wavy and means nothing without precise quantitative estimates :-/
In particular though, does any one know if the statement ” for no domain ever is it the case that the two endpoints of the nodal line lie on the same straight line segment of its boundary” is true? I can’t think of any domain for which that would be the case…
Comment by — June 14, 2012 @ 8:53 pm
• I think your last statement is true. Suppose a nodal line for the Neumann Laplacian in a polygonal domain has both its endpoints on the same line segment. Consider the domain Q enclosed by the nodal line and the piece of the line segment enclosed between the nodal line end-points. This region is a subset of the original domain.
Now, on Q, the eigenfunction u has the following properties: it satisfies $\Delta u + \Lambda u=0$ in Q, has zero Dirichlet data on the curvy part of the boundary of Q, and satisfies zero Neumann data on the straight line part of its boundary. Now reflect Q across the straight line segment, and you get a Dirichlet problem for $\Delta u + \Lambda u=0$ in the doubled domain.
I now claim $\Lambda$ cannot be an eigenvalue of the Dirichlet problem on this doubled domain. $\Lambda$ is the first eigenvalue of the mixed Dirichlet-Neumann problem on Q. This is easy- there are no other nodal lines in Q. Hence $\Lambda$ is smaller than the first eigenvalue of the Dirichlet problem on Q (fewer constraints). Doubling the domain just increases the value of the Dirichlet eigenvalue. So $\Lambda$ cannot be an eigenvalue on the doubled domain.
Finally, we have the Helmholtz problem $\Delta u + \Lambda u=0$ on the doubled domain, with zero boundary data. We've just shown $\Lambda$ is not an eigenvalue, so the problem is uniquely solvable, and hence $u=0$ in the doubled domain.
This would be a contradiction.
Comment by — June 14, 2012 @ 9:35 pm
• I think there is something wrong with this argument. When you double the domain, the Dirichlet eigenvalue must go down. In fact $\Lambda$ is exactly equal to the first Dirichlet eigenvalue on the doubled Q (which has the Dirichlet condition all around). The doubled Q has a line of symmetry, hence by simplicity of the first Dirichlet eigenvalue, the eigenfunction must be symmetric. Hence it must satisfy the Neumann condition on the straight part of the boundary of Q.
Comment by Bartlomiej Siudeja — June 14, 2012 @ 10:00 pm
• Yes, of course, you are correct.
Comment by — June 14, 2012 @ 11:29 pm
• Once we double the original domain and get a doubled Q with the Dirichlet condition all around, we can claim that this domain has larger eigenvalues than the original domain doubled with Dirichlet all around. Assuming the doubled domain is convex, we can use the Payne-Levine-Weinberger inequality $\mu_3\le\lambda_1$: Neumann is below Dirichlet. Without convexity we just have $\mu_2 <\lambda_1$. Our original eigenfunction gives an eigenvalue on the doubled domain, but unfortunately it might not be the second one. If it were, we would be done. Under the convexity assumption it should be easier, but I am not sure yet how to finish the proof.
Comment by Bartlomiej Siudeja — June 14, 2012 @ 10:38 pm
• I like the idea of taking advantage of the fact that the boundary is flat to reflect across it, but for the reasons Siudeja mentions I don’t quite follow the argument.
Maybe it is possible to make an argument by reflecting the entire domain (not just the $Q$ in your notation) across the straight line segment. The reflected eigenfunction would then have a nodal line which is a circle…
Thus we would have an eigenfunction which has only *one* nodal line and it is a loop floating in the middle… does the Nodal Line Theorem preclude this?
Comment by — June 14, 2012 @ 11:27 pm
• The unit disk contains a Neumann eigenfunction $J_0(r/j_1)$ whose nodal line is a closed circle – but it isn’t the second eigenvalue. But it is the second eigenvalue amongst the radial functions, which already suggests one has to somehow “break the symmetry” (whatever that means) in order to rule out loops…
Comment by — June 15, 2012 @ 12:28 am
• I think that if one can prove that the second eigenfunction of an acute scalene triangle never vanishes at a vertex (i.e. the nodal line cannot cross a vertex), then a continuity argument (starting from a very thin acute triangle, for instance) shows that for any acute scalene triangle, the nodal line crosses each of the edges adjacent to the pointiest vertex exactly once. I don't know how to prevent vanishing at a vertex though. (Note that for an equilateral or super-equilateral isosceles triangle, the nodal line does go through the vertex, though as shown in the image http://people.math.sfu.ca/~nigam/polymath-figures/nearly-equilateral-1.jpg from comment 11, the nodal line quickly moves off of the vertex once one perturbs off of the isosceles case.)
I was looking at the argument that shows the nodal line is not a closed loop, hoping to get some mileage out of a reflection argument, but unfortunately it relies on an isoperimetric inequality and does not seem to be helpful here. (The argument is as follows: if the nodal line is a closed loop, enclosing a subdomain D of the original triangle T, then by zeroing out everything outside of the loop we see that the second Neumann eigenvalue of T is at least as large as the first Dirichlet eigenvalue of D, which is in turn larger than the first Dirichlet eigenvalue of T. But there are isoperimetric inequalities that assert that among all domains of a given area, the first Dirichlet eigenvalue is minimised and the second Neumann eigenvalue is maximised at a disk, implying in particular that the Neumann eigenvalue of T is less than or equal to the Dirichlet eigenvalue of T, giving the desired contradiction.)
Comment by — June 14, 2012 @ 10:55 pm
• This is exactly what I was trying to do above. I think that the isoperimetric inequality is not needed. The Neumann eigenvalue is just equal to the Dirichlet one in the loop (the Laplacian is local), which is larger than the Dirichlet eigenvalue on the whole domain, which is larger than the second Neumann eigenvalue on the whole domain (Polya and others). For convex domains even the third Neumann eigenvalue is below the first Dirichlet. But even this is not enough for our case.
Comment by Bartlomiej Siudeja — June 14, 2012 @ 11:04 pm
• I have done a few numerical plots for super-equilateral triangles sheared by a very small amount. It seems that the speed at which the nodal line moves away from the vertex when shearing grows as the isosceles triangle approaches equilateral. For a triangle with vertices (0,0), (1,0) and (1/2+epsilon, sqrt(3)/(2+epsilon)), the nodal line looks almost the same regardless of epsilon. I tried epsilon=0.1, 0.01, 0.0001. The nodal line touches the side about 1/3 of the way from the vertex.
Comment by Bartlomiej Siudeja — June 15, 2012 @ 6:31 pm
• I think reflection may actually work, unless I am missing something. Let T be the original acute triangle, Q the quadrilateral obtained by reflection, and S the reflection line. We assume that the nodal line of the second Neumann eigenfunction of T has both endpoints on S. Now reflect to get an interior Dirichlet domain D. This one is smaller than Q, so by domain monotonicity it has a strictly larger first Dirichlet eigenvalue than Q with Dirichlet boundary conditions. Due to the convexity of Q we get that the third Neumann eigenvalue of Q is not larger than the first Dirichlet eigenvalue of Q (http://www.jstor.org/stable/2375044). We will be done if we can show that the second Neumann eigenfunction of T gives the second or third Neumann eigenfunction of Q. Due to the line of symmetry in Q, every eigenfunction must be symmetric or antisymmetric. If not, we could reflect it, then add the original and the reflection to get something symmetric. We could also subtract to get something antisymmetric. Hence a non-symmetric eigenfunction of Q implies a double eigenvalue. One of those must be symmetric, so it must be the Neumann eigenfunction of T, and we are done. So suppose that the second Neumann eigenfunction on Q is antisymmetric. If the third one is also antisymmetric, it must have an additional nodal line, hence by antisymmetry it must have at least 4 nodal domains. But this is not possible. Hence either the second or the third eigenfunction on Q must be symmetric, hence it must satisfy the Neumann condition on S. Therefore it must be the second eigenfunction on T. Contradiction.
Comment by Bartlomiej Siudeja — June 15, 2012 @ 12:44 am
• Nice! (Though I’m not clear about the line “Non symmetric eigenfunction of Q implies double eigenvalue”, it seems that this is neither true nor needed for the argument. Also, I replaced your jstor link with the stable link.)
Comment by — June 15, 2012 @ 1:20 am
• No symmetry for an eigenfunction means that we can reflect the eigenfunction to get a new (different) one. Now take the sum to get something symmetric (Neumann on S), and subtract to get something antisymmetric (Dirichlet on S). Neither one will be 0, and they must be orthogonal. So the eigenvalue must be double or higher. This just means that the eigenspace on a symmetric domain can always be decomposed into symmetric and antisymmetric parts.
Comment by Bartlomiej Siudeja — June 15, 2012 @ 1:29 am
• Oh, I see what you mean now. (I had confused “non symmetric” with “anti-symmetric”.) I put a quick writeup of the argument on the wiki.
Comment by — June 15, 2012 @ 1:48 am
• The reference I included was to a paper of Friedlander, where he cites a much older paper by Levine and Weinberger in which the inequality is proved. There is also a nice paper by Frank and Laptev that gives a good account of who proved what (http://www2.imperial.ac.uk/~alaptev/Papers/FrLap2.pdf).
Comment by Bartlomiej Siudeja — June 15, 2012 @ 2:16 am
14. Concerning the method of attack I suggested in the previous comment, it seems that 1) is proven (as the nodal line connects two edges, it does indeed straddle some vertex).
It occurs to me that 2) and 3) can be more succinctly phrased as the conjecture that the mixed boundary domain consisting of this corner and the nodal line is *convex*.
I think showing that would be enough… because the nodal line intersects the boundary orthogonally, knowing this region is convex should control the slope of the nodal line enough that earlier arguments would get the extremum in the corner.
Comment by Chris Evans — June 15, 2012 @ 7:27 am
15. [...] proposed by Chris Evans, and that has already expanded beyond the proposal post into its first research discussion post. (To prevent clutter and to maintain a certain level or organization, the discussion gets cut up [...]
Pingback by — June 15, 2012 @ 1:58 pm
16. [...] previous research thread for the Polymath7 “Hot Spots Conjecture” project has once again become quite full, so [...]
Pingback by — June 15, 2012 @ 9:49 pm
• As you can see, I’ve rolled over the thread again as this thread is also approaching 100 comments and getting a little hard to follow. The pace is a bit hectic, but I guess this is a good thing, as it is an indication that we are making progress and understanding the problem better…
Comment by — June 15, 2012 @ 9:51 pm
17. [...] been quite an active discussion in the last week or so, with almost 200 comments across two threads (and a third thread freshly opened up just now). While the problem is still not completely [...]
Pingback by — June 15, 2012 @ 10:22 pm
18. [...] time to roll over the research thread for the Polymath7 “Hot Spots” conjecture, as the previous research thread has again become [...]
Pingback by — June 24, 2012 @ 7:22 pm
# Thread:
1. ## Why/what is "arbitrarily" used (for) and is there a mathematical way to "write it"
Hi everyone,
First of all, I am not really sure whether I have posted this topic in the right section of the forum. If not, moderators, please move this topic to the correct forum :-).
I'm not from a country where English is the native language, so especially with more specific subjects such as math I'm having trouble understanding everything. One of these things is the use of "arbitrarily", which appears to have the meaning "limitless/with no limit, but still finite". I am wondering: how can something be limitless but not infinite, as it's finite?
Besides that, I'm working on an essay which is about chaos theory, and I have come across a definition which states a set M (by a dynamical system) whose elements are the same as N's or come arbitrarily close to it, which is defined as dense orbits. What does arbitrarily mean in this sense, and how could I get "arbitrarily" into an equation (what symbol would be used as well)?
Regards,
Jesper
2. Originally Posted by Jesper
Hi everyone,
First of all, I am not really sure whether I have posted this topic in the right section of the forum. If not, moderators, please move this topic to the correct forum :-).
I'm not from a country where English is the native language, so especially with more specific subjects such as math I'm having trouble understanding everything. One of these things is the use of "arbitrarily", which appears to have the meaning "limitless/with no limit, but still finite". I am wondering: how can something be limitless but not infinite, as it's finite?
Besides that, I'm working on an essay which is about chaos theory, and I have come across a definition which states a set M (by a dynamical system) whose elements are the same as N's or come arbitrarily close to it, which is defined as dense orbits. What does arbitrarily mean in this sense, and how could I get "arbitrarily" into an equation (what symbol would be used as well)?
Regards,
Jesper
Let me see if I can give you an example of getting "arbitrarily close." Suppose the distance between you and the wall is say 8 ft. Then divide the distance in half and move to that point. You are now 4 ft from the wall. Now divide that distance in half again and move to that point. You are now 2 ft from the wall. Now continue in this fashion, continually dividing your distance in half. So you then move to points
1 ft, 0.5 ft, 0.25 ft, 0.125 ft, etc.
Question: Will you ever reach the wall - exactly? Well no - you'll never have the distance between you and the wall = 0. But you sure will be close. Will you be within say 0.001 ft of the wall? Or say 0.000001 ft from the wall? How about $\varepsilon$ where $\varepsilon$ is any very small number? If so, then you can get as close as you wish, or "arbitrarily close."
If you want to write this mathematically, then if you let n be the number of steps taken towards the wall, the distance is
$\left( \frac{1}{2}\right)^n \cdot 8$
and if you want this to be less than $\varepsilon$ then
$\left( \frac{1}{2}\right)^n \cdot 8 < \varepsilon.$
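A quick way to see how fast this gets you within any tolerance is simply to run the halving loop; the 8 ft starting distance is from the example above and the tolerance value is an arbitrary choice:

```python
# Count the halvings of an 8 ft gap needed to get within eps of the wall.
eps = 0.001          # feet; any tolerance you like
distance = 8.0
n = 0
while distance >= eps:
    distance /= 2.0  # after n steps the distance is (1/2)**n * 8
    n += 1
print(n, distance)   # eps = 0.001 ft needs n = 13 steps, leaving about 0.00098 ft
```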
3. Thank you for clearing this up for me, Danny, I really appreciate that :-).
Even though you have cleared it up, I am still looking for a way to put that in a better mathematical form.
The case is, there is a set M and I want a way to mathematically write the set N which contains all the elements which are arbitrarily close to the elements of M, thus the set N which is arbitrarily close to the set M. Would the way to go with this be limits, or something else? Thanks!
4. Originally Posted by Jesper
The case is, there is a set M and I want a way to mathematically write the set N which contains all the elements which are arbitrarily close to the elements of M, thus the set N which is arbitrarily close to the set M.
There is a mathematical way of doing exactly that.
You need to know some topological concepts, in particular about metric spaces.
In a metric space, if $M$ is a set then its closure $\overline{M}$ is the set of all points at distance 0 from $M$.
That concept is often described as the set of points arbitrarily close to $M$.
So your set is $N=\overline{M}$.
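For completeness, here is one standard way (my phrasing, not Plato's) to spell out "arbitrarily close" in symbols. In a metric space $(X,d)$,
$$\overline{M} = \{\, x \in X : \text{for every } \varepsilon > 0 \text{ there is some } m \in M \text{ with } d(x,m) < \varepsilon \,\},$$
which is the same as saying $x \in \overline{M}$ exactly when $\inf_{m \in M} d(x,m) = 0$.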
5. Thanks very much, that's the second thing I needed!
A big thanks to both of you :-D!
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9602295160293579, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/trigonometry/172013-manning-formula-solution-angle-edges-water.html
|
# Thread:
1. ## Manning Formula Solution for Angle to edges of Water
$1-\dfrac{2R}{r}=\dfrac{\sin G}{G}$
Where G is the angle to the edges of the water, measured from the centre of the pipe, and is less than Pi radians. R is known from the Manning Formula, as we solve for it from V and a known Slope. r is the radius of the pipe in feet and is chosen based on the community size. How do we solve for the angle G?
2. Since that involves G both as the argument of a transcendental function (sine) and outside of it, there will not be any simple algebraic solution. You will have to use a numerical solution.
3. ## Explanation
I am looking for a numerical value. We have other formulae that supply the person with a value for R and r. We currently have them guessing at values for G until they get one where $(\sin G)/G$ is approximately equal to the left side (LS) of the equation.
4. ## added Information
would it help if I told you that
$G=2\arccos\left(\dfrac{r-d}{r}\right)$
Where r is the same as in the previous formula and d is the depth of water in the pipe. Could you then rearrange the formulae to solve for d?
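Since no closed form exists, here is a sketch of the numerical route (my own illustrative code, not part of the thread): bisection works because $\sin G / G$ decreases from 1 to 0 on $(0,\pi)$, so there is exactly one root whenever $0 < 2R/r < 1$. The depth then follows by rearranging the arccos relation above to $d = r\left(1 - \cos(G/2)\right)$. The example values of R and r are made up.

```python
import math

def solve_angle(R, r, tol=1e-10):
    """Solve 1 - 2R/r = sin(G)/G for G in (0, pi) by bisection."""
    target = 1.0 - 2.0 * R / r          # requires 0 < 2R/r < 1
    f = lambda G: math.sin(G) / G - target
    lo, hi = 1e-9, math.pi              # sin(G)/G runs from ~1 down to 0 here
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid                    # sin(G)/G still above target: need larger G
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative numbers only: pipe radius r = 1 ft, hydraulic radius R = 0.2 ft.
G = solve_angle(R=0.2, r=1.0)
d = 1.0 * (1.0 - math.cos(G / 2.0))     # depth of water, from G = 2*arccos((r-d)/r)
print(G, d)
```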
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9446080327033997, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/37818/electrical-resistance-and-chemistry
|
# Electrical Resistance and chemistry
Can someone describe or explain what happens when too much current is passed through, let's say, a copper wire? I am looking for an explanation in terms of both physics and chemistry.
For instance, we have all heard about electrical resistance and Ohm's law for calculating this resistance, but what happens when we pass too much current and the wire melts? What is happening at that point?
I'm looking for the correct physics of what's happening, as well as the chemistry. In chemistry you get taught about ionic and covalent bonding, and how electrons can shift the electron energy levels and bond materials together via the outer electron energy levels trading or adding new electrons to the atom. Are the free electrons in the copper lattice bouncing off the copper atoms and shifting the copper atoms' electron energy levels at the melting point of copper when too much current is applied?
When I think of heating water to its boiling point, I imagine how the water molecules become extremely excited. I am imagining the same for the copper wire but with the free electrons inside the copper lattice, and I am struggling with this, as I doubt that the electrons have the capability to knock a copper atom. So it must be affecting the copper atoms' energy levels?
-
Ever heard about pinch effect? – Georg Sep 20 '12 at 16:01
## 1 Answer
A simple picture: The free electrons comprising the current collide with material impurities, lattice imperfections, and lattice vibrations (thermal motion which manifests as temperature), transferring energy to them and thereby increasing the vibrational energy (and temperature) (at a rate $P=I^2 R$). When the temperature reaches copper's melting point a phase transition occurs, and the solid turns to a liquid.
-
Read about pinch effect! – Georg Sep 20 '12 at 16:02
@Georg: I understand that very high currents can be pinched by their magnetic fields, but I don't think you need that effect to melt a wire. – Art Brown Sep 20 '12 at 16:32
Hey that sounds good, but you don't explain what the phase transition is or what is happening to the copper lattice at that specific point? – Garrith Graham Sep 20 '12 at 20:10
@GarrithGraham: I can't say much beyond noting that at the melting point the lattice vibrations are sufficiently large to disrupt the lattice bonds. – Art Brown Sep 20 '12 at 22:56
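To put rough numbers on the $P = I^2 R$ heating mentioned in the answer, here is a small back-of-the-envelope script. The resistivity is the usual room-temperature value for copper; the wire length, diameter and current are invented for illustration, and the script ignores heat losses, so it only shows the dissipated power, not the time to reach the melting point.

```python
import math

rho = 1.68e-8        # resistivity of copper at room temperature, ohm*m
L = 1.0              # wire length, m (illustrative)
diameter = 0.5e-3    # wire diameter, m (illustrative)
I = 20.0             # current, A (deliberately excessive for this gauge)

A = math.pi * (diameter / 2) ** 2   # cross-sectional area, m^2
R = rho * L / A                     # resistance, ohm
P = I ** 2 * R                      # Joule heating, W
print(R, P)                         # roughly 0.086 ohm and 34 W for these numbers
```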
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9147678017616272, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/109889/state-of-the-art-on-a-question-on-the-existence-of-dualizing-complex/109899
|
## State of the art on a question on the existence of dualizing complex
Let A be a noetherian ring and D(A) be the derived category of modules on A. Recall that a dualizing complex for A is an object R in D(A) of finite injective dimension, with cohomology of finite type and such that the natural morphism of functors $Id \longrightarrow R\mathcal{H}om(R\mathcal{H}om(., R), R )$ is an isomorphism of functors.
In the book "Residues and Duality" (R. Hartshorne) (V.10), it is presented as an open problem to know if a noetherian local domain of dimension 1 admits a dualizing complex.
What is the state of the art on this question ?
-
## 2 Answers
R.Y. Sharp proved in the 80s that a Noetherian ring with a dualizing complex must be "acceptable". This is a weakening of "excellent" obtained by replacing every occurence of the word "regular" with "Gorenstein", so means (1) universally catenary, (2) Gorenstein formal fibers, and (3) Gorenstein locus open in any finitely generated algebra.
There is an example due to Ferrand and Raynaud of a one-dimensional noetherian domain for which the generic formal fiber is not Gorenstein. Such a ring does not, by Sharp's result, have a dualizing complex.
-
As a reference, I mention that the title of the article by Ferrand and Raynaud is "Fibres formelles d'un anneau local noetherien" (available on Numdam). The fact that a local domain with a dualizing complex has Gorenstein formal fibers is in Hartshorne's book, so I don't need the stronger results of Sharp. Thanks a lot for the answer. – unknown (google) Oct 18 at 9:37
Actually, more has been determined since Sharp's work in the 1980s. It is easy to see that a homomorphic image of a Gorenstein ring of finite Krull dimension has a dualizing complex. Sharp conjectured that this is the only way for a Noetherian ring to have a dualizing complex. In a pair of papers (2000 and 2002) in Transactions of the AMS, Takesi Kawasaki showed that Sharp's conjecture is true. That is, a Noetherian ring has a dualizing complex if and only if (1) it has finite Krull dimension and (2) it is the homomorphic image of a Gorenstein ring.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9120743870735168, "perplexity_flag": "head"}
|
http://www.quantummatter.com/the-physical-origin-of-electron-spin/
|
Milo Wolff's site on his Space Resonance Theory. A fresh look on modern quantum physics.
# The Physical Origin of Electron Spin
September 13th, 2010 by Milo Wolff
## Abstract
It is shown how the spin of the electron and other charged particles arises out of the quantum wave structure of matter. Spin is a result of spherical rotation in quantum space of the inward (advanced) spherical quantum wave of an electron at the electron center in order to become the outward (retarded) wave. Wave rotation is required to maintain proper phase relations of the wave amplitudes. The spherical rotation, a unique property of 3D space, can be described using SU(2) group theory algebra. In the SU(2) algebra, the inward and outward waves of the charged particle are the elements of a Dirac spinor wave function. Thus all charged particles satisfy the Dirac Equation.
## 1. Introduction
A highly successful mathematical theory of spin has already been developed by P.A.M. Dirac and others (Eisele, 1960). It predicted the discovery of the positron (Anderson, 1932) and correctly gave the value of the spin, $\frac{h}{4\pi}$ angular momentum units. Prior to this paper, there had been no successful physical description of spin or any suggestion of its origin, although it was recognized to be a quantum phenomenon. The electron's structure, as well as its spin, had been a mystery. Providing a physical origin of spin for the first time is the purpose of this paper.
In Dirac's theoretical work the spin of a particle is measured in units of angular momentum, like rotating objects of human size. But particle spin is uniquely a quantum phenomenon, different than human scale angular momentum. Its value is fixed and independent of particle mass or angular velocity. However, spin properties are found to be related to other properties of the electron's quantum wave function; that is, mirror or parity inversion (P), time inversion (T), and charge inversion (C). For example, the quantum operation CPT on a particle is found to be invariant:
CPT= {Charge inversion} × {Parity inversion} × {Time inversion} = an invariant
This study returns to a proposal that was popular sixty years ago among the pioneers of quantum theory: namely that matter consisted of wave structures in space. Thus, it was proposed that matter substance, mass and charge, did not exist but were properties of the wave structure. Weyl, Schrödinger, Clifford, and Einstein were among those who believed that particles were a wave structure. Their belief was consistent with quantum theory, since the mathematics of quantum theory does not depend on the existence of particle substance or charge substance. In short, they proposed that quantum waves are real and mass/charge were mere appearances; Schaumkommen in the words of Schrödinger. The reality of quantum waves, as suggested by Cramer (1986), supports the original concept of W. K. Clifford (1876) that all matter is simply undulations in the fabric of space.
Wheeler and Feynman (1945) first modeled the electron as spherical inward and outward electro-magnetic waves seeking the response of the universe (from other matter) to explain radiation, but encountered difficulties because there are no spherical solutions of the electromagnetic equations using vector fields. Cramer (1986) discusses the response for real quantum waves. Using a quantum wave equation (scalar fields) and spherical quantum waves, Wolff (1990, '91, '93, '95, '97) found and described a wave structure of matter which successfully predicted the Natural Laws as experimentally measured. It has predicted all of the properties of the electron except one - its spin. Now, this paper completes those predictions with a physical origin of spin that is in accord with quantum theory, the Dirac Equation, and the previous structure of the electron.
Briefly summarizing Wolff, the electron is comprised of two spherical scalar waves, one inward and one outward. These waves are superimposed at the origin with opposite amplitudes, as shown in Figure 1 in the next section, to form a single resonant standing wave in space centered at the electron's location. A reversal of the inward wave occurs at the center where $r = 0$. Spin appears as a required rotation of the inward wave to become the outward wave. The outward wave induces a response of the universe when it encounters other matter in its universe and modulates their outward waves. The tiny Huygens components of those waves return to the center and become the inward wave. This simple structure, termed a Space Resonance (SR), produces all experimental properties of electrons.
This structure, the electron properties, and the laws of nature originate from three basic principles or assumptions. No other laws are required as these principles are the origin of the laws of nature. Briefly they are:
Principle I. A Wave Equation.
Determines the behavior of quantum waves.
Principle II. Wave Density Principle.
A quantitative generalization of Mach's principle, which determines the density of the quantum wave medium.
Principle III. Minimum Amplitude Principle (MAP).
The sum of wave amplitudes seeks a minimum at each point.
The following wave equation is the First Principle.
## 2. Wave Structure of the Electron
The structure of the electron consists of solutions of a general wave equation (Wolff, 1990). This equation governs the behavior of all particle waves in space, and is:
Formula 1
$\nabla^2\Psi - \dfrac{1}{c^2} \dfrac{\partial^2 \Psi}{\partial t^2} = 0$
where $\Psi$ is a scalar amplitude, $c$ is the velocity of light, and $t$ is the time. These waves are scalar quantum waves, not electromagnetic waves. This wave equation has two spherical solutions for the amplitude of the electron: one of them is an inward wave converging to the center; the other is a diverging outward wave. The two solutions are:
Formula 2
$\mbox{IN amplitude} = \Psi_{IN} = \dfrac{\Psi_{max}\ e^{i(wt + kr)}}{r}\\\mbox{OUT amplitude} = \Psi_{OUT} = \dfrac{\Psi_{max}\ e^{i(wt - kr)}}{r}$
where:
• $\omega = 2\pi \frac{mc^2}{h}$ = the angular frequency
• $k = \frac{2\pi}{wavelength}$ = the wave number.
The inward wave converges to its center and rotates to become a diverging outward wave. The superposition of the continuous inward and outward waves forms the electron, Figure 1, and is termed a Space Resonance. To transform the inward wave to an outward wave and obtain constructive interference with proper phase relation requires a rotation and phase shift at the center. This rotation produces a spin value $\frac{h}{4\pi}$, the same for all charged particles because all particles propagate in the same universal wave medium.
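A small numerical sketch (mine, not the author's) of the superposition described in this section: combining the IN and OUT waves of Formula 2 with opposite amplitudes gives $2i\sin(kr)/r$ times the time factor, which stays finite at $r = 0$, as claimed for Figure 1. Units and the value of $k$ are arbitrary, and the common factor $e^{i\omega t}$ is dropped.

```python
import numpy as np

k = 1.0                                   # wave number, arbitrary units
r = np.linspace(1e-6, 30.0, 2000)

psi_in  = np.exp(+1j * k * r) / r         # converging wave (time factor omitted)
psi_out = np.exp(-1j * k * r) / r         # diverging wave
standing = psi_in - psi_out               # superposition with opposite amplitudes

assert np.allclose(standing, 2j * np.sin(k * r) / r)   # the standing-wave form
print(abs(standing[0]), 2 * k)            # both are ~2k: finite at the center
```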
Figure 1. Electron Structure.
The upper diagram shows a cross-section of the spherical wave structure, something like the layers of an onion. It is comprised of an inward moving wave and an outward moving wave. The two waves combine to form a single dynamic standing wave structure with its center as the nominal location of the electron. Note that the amplitude of a quantum wave is a scalar number, not an electromagnetic vector. Thus these waves are part of quantum theory, not electric theory. At the center the quantum wave amplitude (and the electric potential) is finite, not infinite, in agreement with the observed electron (Wolff, 1995).
The lower diagram shows the same quantum wave amplitude plotted along a radius outwards from the electron center. The lower diagram is a 'slice' from the upper diagram.
## 3. Spherical Rotation
Rotation of the inward quantum wave at the center to become an outward wave is an absolute requirement to form a particle structure. Rotation in space has conditions. Any mechanism that rotates (to create the quantum "spin") must not destroy the continuity of the space. The curvilinear coordinates of the space near the particle must participate in the motion of the particle. Fortunately, nature has provided a way - known as spherical rotation - a unique property of 3D space. In mathematical terms this mechanism, according to the group theory of 3D space, is described by stating that the allowed motions must be represented by the SU(2) group algebra which concerns simply-connected geometries.
Spherical rotation is an astonishing property of 3D space. It permits an object structured of space to rotate about any axis without rupturing the coordinates of space. After two turns, space regains its original configuration. This property allows the electron to retain spherical symmetry while imparting a quantized "spin" along an arbitrary axis as the inward waves converge to the center, rotate with a phase shift to become the outward wave, and continually repeat the cycle.
The required phase shift is a 180° rotation that changes inward wave amplitudes to become those of the outward wave. There are only two possible directions of rotation, CW or CCW. One choice is an electron with spin of $\frac{+h}{4\pi}$, and the other is the positron with spin of $\frac{-h}{4\pi}$.
It is an awesome thought that if 3D space did not have this geometric property of spherical rotation, particles and matter as we know them could not exist.
## 4. Dirac Theory of Electron Spin
Figure 2. Radial Plot of the Electron Structure.
When the IN and OUT quantum waves combine they form a standing wave. This detailed plot, the same as the approximate lower plot of Figure 1 above, corresponds exactly to the equations below. The envelope of the wave amplitude matches the Coulomb potential everywhere except at the center, where it is not infinite in agreement with the observations of Lamb and Rutherford. If the electron were moving and observed by another detector atom with relative velocity $v$, the deBroglie wavelength appears as a Doppler effect on both waves. The frequency $\frac{mc^2}{h}$ of the waves was first proposed by Schrödinger and deBroglie, proportional to the mass of the electron. This frequency is the mass so that mass measurements are actually frequency measurements. There is no mass 'substance' in nature.
The newly discovered quantum mechanics of the 1920s began to be applied to the physics of particles, seeking to further understand particles. Nobel Laureate P.A.M. Dirac sought to find a relation between quantum theory and the conservation of energy in special relativity given by:
Formula 3
$E^2 = p^2\,c^2 + m_0^2\,c^4$
He speculated that this energy equation might be converted to a quantum equation in the usual way, in which energy $E$ and momentum $p$ are replaced by differential calculus operators:
Formula 4
$E = \dfrac{h}{i} \dfrac{\partial \Psi}{\partial t} \qquad\mbox{and}\\p = h \left( \dfrac{\partial \Psi}{\partial x} + \dfrac{\partial \Psi}{\partial y} + \dots \right) \qquad\mbox{etc.}$
He hoped to find the quantum differential wave equation of the particle. Unfortunately, Equation 3 uses squared terms and Equation 4 cannot. The road was blocked! Dirac had a crazy idea:
"Let's try to find the factors of Equation 3 without squares, by writing a matrix equation"
Formula 5
${\bf\sf I}E = {\bf\sf \alpha}pc + {\bf\sf \beta}m_o\,c^2$
where: ${\bf I}$ is the identity matrix, ${\bf \alpha}$ and ${\bf \beta}$ are new matrix operators of a vector algebra.
Dirac was lucky! He found that if ${\bf \alpha}$ and ${\bf \beta}$ were 4-vector matrices then Equation 5 works okay. It is the famous Dirac Equation. Equations 4 and 5 can now be combined to get:
${\bf\sf I} (ih) \dfrac{\partial{\bf\sf \Psi}}{\partial t} = \dfrac{ch}{i} \left( \alpha_x \dfrac{\partial {\bf\sf \Psi}}{\partial x} + \alpha_y \dfrac{\partial{\bf\sf \Psi}}{\partial y} + \alpha_z \dfrac{\partial{\bf\sf \Psi}}{\partial z} \right) + {\bf\sf \beta}\, m_0\,c^2\,{\bf\sf \Psi}$
In general, $\Psi$ is a 4-vector:
${\bf\sf \Psi} = [\Psi_1, \Psi_2, \Psi_3, \Psi_4]$
For the electron, this reduces to:
${\bf\sf \Psi} = [0, 1, \Psi_3(E,p) , \Psi_4(E,p)]$
Dirac realized that for an electron only two wave functions, $\Psi_3$ and $\Psi_4$, were needed. These predicted an electron and a positron of energy $E$ with spin:
$E = \pm mc^2 \qquad \mbox{and} \qquad spin = \pm \dfrac{h}{4\pi}$
The positron was discovered five years later by Anderson (1932).
Dirac simplified the matrix algebra by introducing 2-vectors (number pairs) which he termed 'spinors.' Spin matrices, which operate on the vectors, were defined as follows:
${\bf spin_x} = \begin{bmatrix}0 & 1 \\1 & 0\end{bmatrix}\qquad{\bf spin_y} = \begin{bmatrix}0 & i \\-i & 0\end{bmatrix}\qquad{\bf spin_z} = \begin{bmatrix}1 & 0 \\0 & -1\end{bmatrix}$
${\bf Identity} = {\bf I} = \begin{bmatrix}1 & 0 \\0 & 1\end{bmatrix}$
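A quick sanity check of the matrices as listed above (my own snippet, using whatever sign convention the article adopts for ${\bf spin_y}$): each one squares to the identity and distinct ones anticommute, which is the algebra behind Dirac's construction.

```python
import numpy as np

spin_x = np.array([[0, 1], [1, 0]], dtype=complex)
spin_y = np.array([[0, 1j], [-1j, 0]], dtype=complex)
spin_z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

for s in (spin_x, spin_y, spin_z):
    assert np.allclose(s @ s, I2)              # each squares to the identity

for a, b in [(spin_x, spin_y), (spin_y, spin_z), (spin_z, spin_x)]:
    assert np.allclose(a @ b + b @ a, 0)       # distinct spin matrices anticommute
print("spin matrices square to I and pairwise anticommute")
```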
Thus Dirac had created a two-number algebra to describe particles instead of our common single number algebra. This 'spinor' algebra, while eminently successful, was entirely theoretical and gave no hint of the physical structure of the electron. Now, in this paper, it is seen that the inward-outward quantum waves are the physical structure which corresponds to the Dirac spinor. The two waves form a Dirac spinor, as was shown by Battey-Pratt et al. (1980). The two physical spinor elements of the electron, or any charged particle, are as follows:
Formula 6
$\mbox{electron amplitude} = {\bf \Psi} =\begin{pmatrix}\Psi_{IN} \\\Psi_{OUT}\end{pmatrix} = \dfrac{\Psi_{max}}{r} \begin{pmatrix}e^{i(wt + kr)} \\e^{i(wt - kr)}\end{pmatrix}$
An easily read description of the algebra of the Dirac Equation is given in Eisele (1960).
## 5. Geometric Requirements of Electron Spin
Structuring particles out of space (the continuum) presents a problem if the particles are considered free to spin. If part of the continuum is part of the particle then another part of space would slide past the spinning particle. As a result, the coordinate lines used to map out the whole space would become twisted up and stretched without limit. The structure of space would be torn or ripped so that one part of the continuum would slide past another along a surface of discontinuity.
If you accept the philosophical position that "ripping of space" is unacceptable, then you have to postulate that the mathematical groups of the particle motion are simply connected and compact. In this case the motion in the continuum will be cyclic and the configuration of space can repeatedly return to an earlier initial phase. Does this occur in nature? Yes, nature accommodates this requirement. Mathematicians have long known of the spherical rotation property of 3D space in which a portion of space can rotate and return identically to an earlier state after two turns. This unusual motion was described in "Scientific American" (Rebbi, 1979) and in the book "Gravitation" (Misner et al., 1973). It is the basis of spin in this article.
What are the geometric requirements on the motion of a particle which does not destroy the continuity of the space? The curvilinear coordinates of the space near the particle must participate in the motion of the particle. This requirement according to the group theory of 3D space is satisfied by stating that the allowed motions must be represented by a compact simply-connected group. The most elementary such group for the motion of a particle with spherical symmetry is named SU(2). This group provides all the necessary and known properties of spin for charged particles, such as the electron.
## 6. Understanding Spherical Rotation
This seldom studied motion can be modeled by a ball held by threads attached to a frame. The threads represent the coordinates of the space and the rotating ball represents a property of the space at the center of a charged particle composed of converging and diverging quantum waves. The ball can be turned about any given axis starting from any initial position. If the ball is rotated indefinitely it will be found that after every two rotations the system returns to its original configuration.
In the traditional analysis of rotating objects, it is usual to assume that the process of inverting the axis of spin is identical to reversing the spin. However, if the object is an electron which is continuously connected to its environment as part of the space around it, this ceases to be true. A careful distinction must be made between the inversion and the reversal of particle spin. This distinction provides insight to one of the most fundamental properties of particles.
To reverse the spin axis, one can reverse time ($t \rightarrow -t$) or reverse the angular velocity ($\omega \rightarrow -\omega$). Either is equivalent to exchanging the outgoing spherical wave of an electron with the incoming wave. Then the spinor becomes:
$\mbox{amplitude} = {\bf \Psi} =\begin{pmatrix}e^{iwt} \\0\end{pmatrix}\rightarrow\begin{pmatrix}e^{-iwt} \\0 \end{pmatrix}$
To invert the spin axis of the structure of the particle, it is necessary to turn the structure about one of the axes perpendicular to the $z$ spin axis, for example the $y$ axis. Then the inverted spin state is given by the inversion matrix operation:
$\mbox{amplitude} = {\bf \Psi} =\begin{bmatrix}0 & -1 \\1 & 0\end{bmatrix}\begin{pmatrix}e^{iwt} \\0\end{pmatrix}\rightarrow\begin{pmatrix}0 \\e^{iwt}\end{pmatrix}$
Thus, inversion and reversal are not the same. The difference between these operations is characteristic of the quantum nature of the electron. They are distinct from our human-sized view of rotating objects and are important to understand particle structure.
## 7. The Group Mathematics of Spherical Rotation
Each configuration of the spherically rotating ball (or the electron center) can be represented by a point on a Euclidean 4D hypersphere which is also the space of the SU(2) mathematics group. A rotation in the spherical mode can be represented by any operator that will transform one vector into another position. It is usual to assign the hypersphere a unit radius. Then the rotations of the ball can be described by the mathematics of the SU(2) group. It is also convenient to place the center of the unit hypersphere at the origin and let the vector (1,0,0,0) represent an initial configuration of the ball or electron. Any other configuration is often chosen with the symbols (a,b,c,d). Then:
$a^2 + b^2 + c^2 + d^2 = 1$
A common representation for the hypersphere vectors is the quaternion notation:
$\mbox{amplitude} = a + ib + jc + kd$
It can be shown (Battey-Pratt and Racey, 1980) that the 4x4 quaternion operator is equivalent to a 2x2 operator as follows:
$\mbox{amplitude} = \begin{pmatrix} a + id & -c + ib \\ c + ib & a - id \end{pmatrix}$
where the matrix elements (often just 1, i, or 0) are now complex numbers. You can see that the determinant of this is also $a^2 + b^2 + c^2 + d^2 = 1$, as above. The spinor (operand) form of (amplitude) is:
$\mbox{amplitude} = \begin{pmatrix} a + id \\ c + ib \end{pmatrix}$
This is the notation of the Spinors invented by Dirac to represent the electron configuration, as shown in Table 1. They also represent rotations in the spherical mode which are members of the closed uni-modular SU(2) group.
Table 1. Properties of Spherical Rotation for an electron in the SU(2) Representation
| Operation (Dirac symbol) | SU(2) operator | Initial SU(2) spinor | Final SU(2) spinor | Equivalent quaternion operator |
|---|---|---|---|---|
| Identity: leaves space as it is | $\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ | $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ | $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ | $1$ |
| ${\bf spin_x}$: rotates space 180° about the $x$ axis | $\begin{bmatrix} 0 & i \\ i & 0 \end{bmatrix}$ | $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ | $\begin{pmatrix} 0 \\ i \end{pmatrix}$ | ${\bf i}$ |
| ${\bf spin_y}$: rotates space 180° about the $y$ axis | $\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$ | $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ | $\begin{pmatrix} 0 \\ 1 \end{pmatrix}$ | ${\bf j}$ |
| ${\bf spin_z}$: rotates space 180° about the $z$ axis | $\begin{bmatrix} i & 0 \\ 0 & -i \end{bmatrix}$ | $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ | $\begin{pmatrix} i \\ 0 \end{pmatrix}$ | ${\bf k}$ |
For example, the spherical quantum waves in space can be rotated 180° about the $z$ axis by the operator ${\bf spin_z}$. If there is continuous rotation of the quantum wave in space with angular velocity $\omega$, the spinor is represented by:
$\mbox{amplitude} = \begin{pmatrix} e^{iwt} \\ 0 \end{pmatrix}$
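The entries of Table 1 can be reproduced directly (again an illustrative snippet of mine): each 180° operator sends the initial spinor to the listed final spinor, and applying any of them twice gives $-I$, so only after four applications (two full turns) does the configuration return to the start, which is exactly the spherical-rotation behaviour described in Section 3.

```python
import numpy as np

initial = np.array([1, 0], dtype=complex)        # initial SU(2) spinor from Table 1

ops = {
    "i (180 deg about x)": np.array([[0, 1j], [1j, 0]]),
    "j (180 deg about y)": np.array([[0, -1], [1, 0]], dtype=complex),
    "k (180 deg about z)": np.array([[1j, 0], [0, -1j]]),
}

for name, U in ops.items():
    print(name, "->", U @ initial)               # (0,i), (0,1), (i,0) as in the table
    assert np.allclose(U @ U, -np.eye(2))        # one full turn (two 180s) gives -I
    assert np.allclose(np.linalg.matrix_power(U, 4), np.eye(2))  # two turns restore I
```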
## 8. How Spin Arises from the Wave Structure of the Electron
The wave structure of the electron is composed of a spherical inward quantum wave and an outward wave traveling at light speed $c$ (Wolff, 1990, 1993, 1995). Figures 1 and 2 show the wave structure of an electron termed a Space Resonance. The outward (OUT) wave of an electron travels to and communicates with other matter in its universe. When these waves arrive at other matter, a signature is modulated into their outward waves. These outward-wave signatures are the response (Wheeler & Feynman, 1945; Cramer, 1986, Ryazanov, 1991) from the other matter. The total of response waves from other matter in the universe, as a Fourier combination, becomes the inward (IN) wave of the initial electron. The returned inward waves converge to the initial wave center and reflect with a phase shift rotating them to become the outward wave and repeating the cycle again.
The central phase shift is similar to the phase shift of light when it reflects at a mirror. The required phase shift is a 180° rotation of the wave, either CW or CCW. There are only two possible combinations of the rotating inward and outward waves. One choice of rotation becomes an electron, the other becomes a positron. The angular momentum change upon rotation is either $\frac{+h}{4\pi}$ or $\frac{-h}{4\pi}$. This is the origin of spin. One wave set is the mirror image of the other set producing the CPT invariance rule.
## 9. Conclusions
### 9.1. A COMPLETE SET OF ELECTRON PROPERTIES
The origin of spin completes the properties of the electron. All properties can now be derived from the space-resonance structure and match all experimental observations of the electron. There is now little doubt that matter is composed of spherical quantum wave structures that obey the three principles of wave structure of matter. But note that spin, and other properties, are attributes of the underlying quantum space rather than of the individual particle. This is why spin, like charge, has only one value for all particles. The properties depend on the structure of space.
### 9.2. A UNIVERSE OF QUANTUM WAVES AND SPACE
Although the origin of spin has been a fascinating problem of physics for sixty years, spin itself is not the important result. Instead, the most extraordinary conclusion of the wave electron structure is that the laws of physics and the structure of matter ultimately depend upon the waves from the total of matter in a universe. Every particle communicates its wave state with all other matter so that the particle structure, energy exchange, and the laws of physics are properties of the entire ensemble. This is the origin of Mach's Principle. The universal properties of the quantum space waves are also found to underlie the universal clock and the constants of nature.
This structure settles a century old paradox of whether particles are waves or point-like bits of matter. They are wave structures in space. There is nothing but space. As Clifford speculated 100 years ago, matter is simply "undulations in the fabric of space".
### 9.3 THE SIMPLE ELECTRON
The elegance of the electron structure is its basic simplicity. It is only two spherical waves gracefully undulating around a center, each transforming into the other. Its spherical wave structure combines with the waves of other charged particles to create myriads of standing wave structures. These structures become the crystalline matter of the solid state. If you could see its wave structure, a crystal might appear like many shimmering bubbles neatly joined in geometric arrays. The arrays are held together with immense rigidity - a property of space.
The next frontier science of the future is to understand the meaning and structure of space.
## 10. References and Further Reading
1. Apeiron, V2, No. 4, Oct. 1995.
This contains eight articles discussing various interpretations of quantum theory.
2. E. Battey-Pratt, and T. Racey (1980) "Geometric Model for Fundamental Particles", Intl. J. Theor. Phys. 19, 437-475.
They recognized that electron spin was a geometric property of space and could exist in a spherical structure.
3. Louis Duc de Broglie (1924), PhD thesis "Recherches sur la théorie des quanta", U. of Paris.
He proposed a wavelength $\lambda = \frac{h}{p}$ for the quantum waves of an electron containing an oscillator of frequency $\frac{mc^2}{h}$, as in the space resonance.
4. William Clifford (1956), "On the Space Theory of Matter", The World of Mathematics, p 568, Simon & Schuster, NY.
An English mathematician at the Royal Philosophical Society, he first suggested (1876) that matter was composed of pure waves.
5. John Cramer (1986), "The Transactional Interpretation of Quantum Mechanics", Rev. Mod. Phys 58, 647-687.
He interpreted the waves of quantum mechanics as real, in contrast to the unreal but popular "probability wave." He named an offer-wave (outward) and a response-wave (inward).
6. John A. Eisele (1960), "Modern Quantum Mechanics with Applications to Elementary Particle Physics". Wiley-Interscience, (John Wiley & Sons, NY, London).
This book discusses the Dirac Equation in detail.
7. C.W. Misner, K. Thorne, and J.A. Wheeler (1973), "Gravitation", W.H. Freeman Co. p1149.
This book covers many pioneering ideas including the 3D space property of spherical rotation.
8. C. Rebbi (1979) "Solitons" Scientific American, Feb., 92, 168.
He discusses spherical rotation applied to Solitons.
9. Giorgi Ryazanov (1991), Proc. 1st Int'l Sakharov Conf. Phys., Moscow, May 21-31, pp. 331-375, Nova Sci. Publ., NY.
He used a Wheeler-Feynman method to deduce that Natural Laws are the response of the universe.
10. J. Wheeler and R. Feynman (1945), "Interaction with the Absorber as the Mechanism of Radiation", Rev. Mod. Phys. 17, 157.
They modeled the electron with inward and outward waves to investigate the energy transfer mechanism to an absorber.
11. Milo Wolff (1990), "Exploring the Physics of the Unknown Universe", ISBN 0-9627787-0-2. (Technotran Press, CA)
A reader-friendly investigation of the natural laws with applications to particles and cosmology.
12. Milo Wolff (1991), "Microphysics, Fundamental Laws and Cosmology". Proc. 1st Int'l Sakharov Conf. Phys., Moscow, May 21-31, pp. 1131-1150, Nova Sci. Publ., NY.
13. Milo Wolff (1993), "Fundamental Laws, Microphysics and Cosmology", Physics Essays 6, pp 181-203.
14. Milo Wolff (1995), "Beyond the Point Particle - A Wave Structure for the Electron", Galilean Electrodynamics 6, No. 5, pp. 83-91.
15. Milo Wolff (1997A) "Exploring the Universe and the Origin of its Laws", Temple University Frontier Perspectives 6, No 2, pp. 44-56.
16. Milo Wolff (1997B) "The Eight-Fold Way of the Universe", Apeiron 4, no. 4. Oct (1997).
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 76, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9002735018730164, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/105834/reference-request
|
## Reference request
I am looking for a reference or proof for the following problem:
Problem: Let $r$ be a prime. Then $2r$ is a Sylow $p$-number if and only if $2r=1+p^{2^n}$. Thanks in advance.
-
You need to explain what a "Sylow $p$-number" is. I have not seen the term before, but I am guessing it is an integer which is the number of Sylow $p$-subgroups of some finite group. Even if this is correct, I suspect that most readers will not be familiar with the term. – Geoff Robinson Aug 29 at 18:33
According to Sylow Numbers of Finite Groups (1995) by JP Zhang: "a natural number $n$ is said to be a Sylow number for a finite group $G$ if $n$ is the number of Sylow $p$-subgroups of $G$ for some prime $p$." In this same article (see: sciencedirect.com/science/article/pii/…) Zhang includes your claim, with its proof sourced to a manuscript of his under preparation, called Sylow Numbers of Finite Groups, II. Impossible Values. If this latter paper has appeared in the 17 year interim, it has been under a different name. – Benjamin Dickman Oct 13 at 17:29
See also the article whose abstract appears here: zentralblatt-math.org/portal/en/zmath/en/search/…. The article itself should be findable online, but it has not been translated into English. If you can read Chinese, great; if not, try to find someone to translate it. If this becomes a dire matter and you can't find a translator, email me (info on my user-page) and I'll give it a shot. – Benjamin Dickman Oct 13 at 17:50
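To get a feel for the purely arithmetic side of the stated condition (this says nothing about the group theory), here is a small self-contained script that searches, for small primes $r$, for a representation $2r = 1 + p^{2^n}$ with $p$ prime and $n \ge 0$; the search bound and all names are my own choices.

```python
def is_prime(m):
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

def rep(m):
    """Return (p, n) with m = 1 + p**(2**n), p prime, n >= 0, or None."""
    q = m - 1
    n = 0
    while 2 ** (2 ** n) <= q:          # even the smallest prime base must still fit
        e = 2 ** n
        root = round(q ** (1.0 / e))
        for cand in (root - 1, root, root + 1):   # guard against float rounding
            if cand >= 2 and cand ** e == q and is_prime(cand):
                return (cand, n)
        n += 1
    return None

for r in range(2, 100):
    if not is_prime(r):
        continue
    hit = rep(2 * r)
    if hit:
        p, n = hit
        print(f"2*{r} = 1 + {p}^(2^{n})")
# e.g. 2*5 = 1 + 3^(2^1), 2*13 = 1 + 5^(2^1), 2*41 = 1 + 3^(2^2)
```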
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9224937558174133, "perplexity_flag": "middle"}
|