http://math.stackexchange.com/questions/tagged/discrete-mathematics?sort=votes&pagesize=15
Tagged Questions
Questions on discrete mathematics generally: "the study of mathematical structures that are fundamentally discrete rather than continuous"
1answer
6k views
How many fours are needed to represent numbers up to $N$?
The goal of the four fours puzzle is to represent each natural number using four copies of the digit $4$ and common mathematical symbols. For example, $165=(\sqrt{4} + \sqrt{\sqrt{{\sqrt{4^{4!}}}}}) ...
11answers
3k views
Zero to the zero power - Is $0^0=1$?
Could someone provide me with good explanation of why $0^0 = 1$? My train of thought: $x > 0$ $0^x = 0^{x-0} = 0^x/0^0$, so $0^0 = 0^x/0^x = ?$ Possible answers: $0^0 * 0^x = 1 * 0^x$, so ...
1answer
569 views
A discrete math riddle
Here's a riddle that I've been struggling with for a while: Let $A$ be a list of $n$ integers between 1 and $k$. Let $B$ be a list of $k$ integers between 1 and $n$. Prove that there's a non-empty ...
4answers
685 views
Partitioning the integers $1$ through $n$ so that the product of the elements in one set is equal to the sum of the elements in the other
It is known that, for $n \geq 5$, it is possible to partition the integers $\{1, 2, \ldots, n\}$ into two disjoint subsets such that the product of the elements in one set equals the sum of the ...
2answers
346 views
Can a collection of points be recovered from its multiset of distances?
Consider $n$ distinct points $x_1,\dots,x_n$ on $\mathbb{R}$. Associated to these points is the multiset of all distances $d(x_i,x_j)$ between two points. Suppose one is only handed this multiset (you ...
15answers
3k views
Proof that $\sum\limits_{k=1}^nk^2 = \frac{n(n+1)(2n+1)}{6}$?
I am just starting into calculus and I have a question about the following statement I encountered while learning about definite integrals: $$\sum_{k=1}^n k^2 = \frac{n(n+1)(2n+1)}{6}$$ I really ...
1answer
590 views
Always oddly-many ones in the binary expression for $10^{10^{n}}$?
Update: Pending independent verification, the answer to the title question is "no", according to a computation of $q(10) = 11609679812$ (which is even). Let $q(n)$ be the number of ones in the ...
2answers
728 views
A stronger version of discrete “Liouville's theorem”
If a function $f : \mathbb Z\times \mathbb Z \rightarrow \mathbb{R}^{+}$ satisfies the following condition \forall x, y \in \mathbb{Z}, f(x,y) = \dfrac{f(x + 1, y)+f(x, y + 1) + f(x - 1, y) +f(x, ...
2answers
665 views
What is the millionth decimal digit of the (10^10^10^10)th prime?
What is the millionth decimal digit of the $10^{10^{10^{10}}}$th prime? (This prime is, of course, far larger than the largest currently "known" prime, the latter having nearly 13 million ...
7answers
2k views
Proof: If n is a perfect square, $\,n+2\,$ is NOT a perfect square
"Prove that if n is a perfect square, $\,n+2\,$ is NOT a perfect square." I'm having trouble picking a method to prove this. Would contraposition be a good option (or even work for that matter)? If ...
3answers
398 views
A (probably trivial) Induction Problem: $\sum_2^nk^{-2}\lt1$
So I'm a bit stuck on the following problem I'm attempting. Essentially, I'm required to prove that $\frac{1}{2^2}+\frac{1}{3^2}+\cdots+\frac{1}{n^2} < 1$ for all $n$. I've been toiling with some ...
5answers
11k views
What is the best book for studying discrete mathematics?
As a programmer, mathematics is important basic knowledge to study some topics, especially Algorithms. Many websites, and my fellows suggest me to study Discrete Mathematics before going to ...
4answers
254 views
Solve recursion $a_{n}=ba_{n-1}+cd^{n-1}$
Let $b,c,d\in\mathbb{R}$ be constants with $b\neq d$. Let $$\begin{eqnarray} a_{n} &=& ba_{n-1}+cd^{n-1} \end{eqnarray}$$ be a sequence for $n \geq 1$ with $a_{0}=0$. I want to find a closed ...
1answer
201 views
Showing $(x,y)$ pairs exist for $\sqrt{\quad\mathstrut}$
If we were to show that there exists infinitely many $(x,y)$ pairs in $\mathbb{Q}^2$ for which both $\sqrt{x^2+y^4}$ and $\sqrt{x^4+y^2}$ are rational. If the power root for $x$ and $y$ vary but never ...
4answers
731 views
Are these 2 graphs isomorphic?
They meet the requirements of both having an = number of vertices (7) They both have the same number of edges (9) They both have 3 vertices of deg(2) and 4 of deg(3) However, graph two has 2 ...
2answers
534 views
Is there a discrete version of de l'Hôpital's rule?
When considering asymptotics of runtime functions, you often have to find limits of quotients of discrete functions, e.g. $\displaystyle\qquad \lim\limits_{n \to \infty} ...
2answers
541 views
Combinatorial interpretation of Binomial Inversion
It is known that if $f_n = \sum_{i=0}^{n} g_i \binom{n}{i}$ for all $0 \le n \le m$, then $g_n = \sum_{i=0}^{n} (-1)^{i+n} f_i \binom{n}{i}$ for $0 \le n \le m$. This sort of inversion is called ...
4answers
200 views
Proof of Irrationality of e using Diophantine Equations
I was trying to prove that e is irrational without using the typical series expansion, so starting off $e = a/b$ Take the natural log so $1 = \ln(a/b)$ Then $1 = \ln(a)-\ln(b)$ So unless I did ...
5answers
388 views
Notation Question: What does $\vdash$ mean in logic?
In a "math structures" class at the community college I'm attending (uses the book Discrete Math by Epp, and is basically a discrete math "light" edition), we've been covering some basic logic. I've ...
3answers
583 views
Twenty questions against a liar
Here's one that popped into my mind when I was thinking about binary search. I'm thinking of an integer between 1 and n. You have to guess my number. You win as soon as you guess the correct number. ...
2answers
7k views
A comprehensive list of binomial identities?
Is there a comprehensive resource listing binomial identities? I am more interested in combinatorial proofs of such identities, but even a list without proofs will do.
4answers
292 views
How to prove $\sum\limits_{k=0}^n{n \choose k}(k-1)^k(n-k+1)^{n-k-1}= n^n$?
How do I prove the following identity directly? $$\sum_{k=0}^n{n \choose k}(k-1)^k(n-k+1)^{n-k-1}= n^n$$ I thought about using the binomial theorem for $(x+a)^n$, but got stuck, because I realized ...
1answer
159 views
Monochromatic squares in a colored plane
Color every point in the real plane using the colors blue,yellow only. It can be shown that there exists a rectangle that has all vertices with the same color. Is it possible to show that there exists ...
8answers
1k views
Proof: For all integers $x$ and $y$, if $x^3+x = y^3+y$ then $x = y$
I need help proving the following statement: For all integers $x$ and $y$, if $x^3+x = y^3+y$ then $x = y$ The statement is true, I just need to know the thought process, or a lead in the right ...
3answers
979 views
Gay Speed Dating Problem
Here's an interesting problem that I came up with the other night. With straight speed dating, (assuming the number of men and women are equal) the number of iterations that need to be made before ...
3answers
2k views
Lights out game on hexagonal grid
I greatly enjoyed the Lights Out game described here (I am sorry I had to link to an older page because some wikidiot keeps deleting most of the page). Its mathematical analysis is here (it's just ...
2answers
436 views
A Weaker Version of the ABC Conjecture
The ABC conjecture states that there are a finite number of integer triples (a,b,c) such that $\frac {\log \left( c \right)}{\log \left( \text{rad} \left( abc \right) \right)}>1+\epsilon$, where ...
1answer
393 views
Factorial canceling on expansion of binomial coefficients on Concrete Mathematics
On Concrete Mathematics section 5.5, which is teaching the hypergeometric functions, generalized factorials is defined as: $$\frac 1 {z!} = \lim_{n \to \infty} \binom{n+z}{n}n^{-z}$$ where \[ ...
4answers
1k views
Proof by contradiction: $r - \frac{1}{r} =5\Longrightarrow r$ is irrational?
Prove that any positive real number $r$ satisfying: $r - \frac{1}{r} = 5$ must be irrational. Using the contradiction that the equation must be rational, we set $r= a/b$, where a,b are positive ...
1answer
218 views
In naive set theory ∅ = {∅} = {{∅}}?
In naive set theory, I believe ∅ = {∅} = {{∅}} is correct, but just wanted to make sure that I understood this correctly. ∅ is an empty set, so having an empty set as an element of a set that ...
2answers
633 views
Good upper bound for $\sum\limits_{i=1}^{k}{n \choose i}$?
I want an upper bound on $$\sum_{i=1}^k \binom{n}{i}.$$ $O(n^k)$ seems to be an overkill -- could you suggest a tighter bound ?
4answers
245 views
Number of possibilities to cross a hexagonal lattice.
An ant walks along the line segments in the hexagonal lattice shown, from start to finish. The ant must go in the direction shown if there is an arrow, and never goes on the same line segment twice. ...
2answers
309 views
Summation by parts of $\sum_{k=0}^{n}k^{2}2^{k}$
I want to evaluate this sum $$\sum_{k=0}^{n}k^{2}2^{k}$$ by summation by parts (two times) and I need to know, if my approach was right. I know the formula for summation by parts is \sum u\Delta ...
2answers
237 views
What is the probability that $\pi(x) + x$ is injective?
Let $S$ be a finite group with operator + and $\pi$ be a permutation on $S$. Then what is the probability that $\pi(x) + x$ is injective over choices of $\pi$? The concrete instantiation I'm ...
2answers
279 views
bijection = bijection + bijection on symmetric integer intervals
Given a bijection $f:\mathbb Z \to \mathbb Z$ where $\mathbb Z$ is the set of all integers, does there always exist two bijections $g:\mathbb Z \to \mathbb Z$ and $h:\mathbb Z \to \mathbb Z$ which ...
8answers
838 views
Showing that an equation holds true with a Fibonacci sequence: $F_{n+m} = F_{n-1}F_m + F_n F_{m+1}$
Question: Let $F_n$ the sequence of Fibonacci numbers, given by $F_0 = 0, F_1 = 1$ and $F_n = F_{n-1} + F_{n-2}$ for $n \geq 2$. Show for $n, m \in \mathbb{N}$: $$F_{n+m} = F_{n-1}F_m + F_n F_{m+1}$$ ...
3answers
128 views
Combinatorial proof of $\sum^{n}_{i=1}\binom{n}{i}i=n2^{n-1}$.
Prove $$\sum^{n}_{i=1}\binom{n}{i}i=n2^{n-1}$$ I can't find counting interpretations for either of the sides. A hint of "if $S$ is a subset of $\{1, . . . , n\}$ and $S^\prime$ is its complement ...
4answers
262 views
Prove $1+\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{3}}+\dots+\frac{1}{\sqrt{n}} > 2\:(\sqrt{n+1} − 1)$
Basically, I'm trying to prove (by induction) that: $$1+\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{3}}+\dots+\frac{1}{\sqrt{n}} > 2\:(\sqrt{n+1} − 1)$$ I know to begin, we should use a base case. In this ...
2answers
146 views
What is the converse of this statement and is it true?
If $a$ and $b$ are relatively prime, $a\mid c$ and $b\mid c$, then $(ab)\mid c$. I am lost. Would the converse be "If $(ab)\mid c$, then $a$ and $b$ are relatively prime and $a\mid c$ and $b\mid c$" ...
6answers
315 views
Proof that the sum of the cubes of any three consecutive positive integers is divisible by three.
So this question has less to do about the proof itself and more to do about whether my chosen method of proof is evidence enough. It can actually be shown by the Principle of Mathematical Induction ...
2answers
370 views
9 pirates have to divide 1000 coins…
A band of 9 pirates have just finished their latest conquest - looting, killing and sinking a ship. The loot amounts to 1000 gold coins. Arriving on a deserted island, they now have to split up the ...
5answers
2k views
Resources/Books for Discrete Mathematics
I am going to a Computer Science Course in University next year. I heard that Discrete Mathematics is whats required for Comp Sci so, I am looking for resources/books that I can read to get started ...
2answers
253 views
How does $2^{k+1} = 2 \times 2^k$?
I ask only because my textbook infers this in an example. Where should I go to learn more about this? I'm trying to learn mathematics by Induction but my knowledge of simplifying algebraic equations ...
4answers
275 views
“How many different integers does this give us?”
How many unique integers can you get from $\lceil2012/n\rceil$ where $n$ is a positive integer? I don't know at all where to begin to approach this problem. I thought it maybe had something to do ...
6answers
220 views
Can we always draw $n/3$ disjoint triangles from $n$ points in the plane in general position?
Suppose we are given $n$ points in the plane, where $n$ is a multiple of $3$ and no three of these points lie on a line. Is it possible to group all of these points into sets of three, so that if we ...
2answers
168 views
Prove that $\mathcal{P}(A)⊆ \mathcal{P}(B)$ if and only if $A⊆B$. [duplicate]
Here is my proof, I would appreciate it if someone could critique it for me: To prove this statement true, we must proof that the two conditional statements ("If $\mathcal{P}(A)⊆ \mathcal{P}(B)$, ...
2answers
923 views
Minimum degree of a graph and existence of perfect matching
I was reading a result where the following proposition appears as a preliminary step (and left as exercise): Claim: Suppose $G$ is a graph on $n$ vertices ($n$ even and $n \geqslant 3$) with ...
4answers
1k views
How do I figure out what kind of distribution this is?
i've sampled a real world process, network ping times. The "round-trip-time" is measured in milliseconds. Results are plotted in a histogram: Ping times have a minimum value, but a long upper tail. ...
2answers
259 views
If we have $m$ indistinguishable objects how many ways is it possible to put them in $n$ indistinguihable positions?
if we have $m$ indistinguishable objects, how many ways is it possible to put them in $n$ indistinguishable positions? (for 2 cases 1: without empty position allowed 2: empty positions are allowed.) ...
3answers
289 views
How would I figure out how many anagrams of mississippi don't contain the word psi?
I'm really confused how I'd calculate this. I know it's the number of permutations of mississippi minus the number of permutations that contain psi, but considering there's repetitions of those ...
http://mathhelpforum.com/trigonometry/4888-trig-help.html
# Thread:
1. ## Trig help!
Hey guys, I'm a bit confused with the different quadrants of the trig ratios thing.
I know all trig ratios are positive in 1st quadrant, then sine, tan, cosine only. However, why does:
cos x= -5/11 (solve for x if 0<x<180)
= 117 degrees?
I thought only sine could be positive when greater than 90 degrees and less than 180 (obtuse). Does - sign mean that this negates this rule?
Also, what would happen if sin x= -5/11 (solve for x if 0<x<180)?
I get two answers for x:
x= 153 degrees or 27 degrees. Since sine is positive both in acute and obtuse quadrants, does this mean both answers are valid?
Any help is much appreciated
Thanks in advance!
2. Originally Posted by rain_lover
Hey guys, I'm a bit confused with the different quadrants of the trig ratios thing.
I know all trig ratios are positive in 1st quadrant, then sine, tan, cosine only. However, why does:
cos x= -5/11 (solve for x if 0<x<180)
= 117 degrees?
I thought only sine could be positive when greater than 90 degrees and less than 180 (obtuse). Does - sign mean that this negates this rule?
This illustrates the point nicely. you are told that $\cos(x)$ is negative, which means
that $x$ is in quadrant 2 and/or 3, since $\cos(x)$ is positive only in quadrants 1 and 4.
The angle in quadrant 2 for which $\cos(x)=-5/11$ is $\approx 117^{\circ}$, and in quadrant 3
is $\approx 243^{\circ}$
RonL
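As a quick numerical check of the above (a small Python sketch, angles in degrees):

```python
import math

ratio = -5 / 11
x1 = math.degrees(math.acos(ratio))        # quadrant 2: about 117 degrees
x2 = 360 - x1                              # quadrant 3: about 243 degrees (= 180 + 63)
print(round(x1), round(x2))                # -> 117 243

# sin x = -5/11 has no solution with 0 < x < 180, since sine is positive
# in quadrants 1 and 2; the two solutions lie in quadrants 3 and 4:
ref = math.degrees(math.asin(5 / 11))      # reference angle, about 27 degrees
print(round(180 + ref), round(360 - ref))  # -> 207 333
```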
3. Hey thanks but how did you get 243 degrees for the one in the 3rd quadrant?? Also, how would you find sin x = -5/11? Would you get 1 or 2 answers?
Thanks
4. Originally Posted by rain_lover
Hey thanks but how did you get 243 degrees for the one in the 3rd quadrant?? Also, how would you find sin x = -5/11? Would you get 1 or 2 answers?
Thanks
The 243 degrees comes from knowing that the other angle is 360-(180-117).
That is by knowing the symmetry properties of the trig functions.
It should be in your notes/text.
RonL
5. Originally Posted by rain_lover
Also, how would you find sin x = -5/11? Would you get 1 or 2 answers?
Thanks
Look for yourself: (the graph is in degrees)
6. Hello, rain_lover!
Your questions are confusing to me . . .
I know all trig ratios are positive in 1st quadrant, then sine, tan, cosine only.
However, why does: . $\cos x= -\frac{5}{11} = 117^o$ ?
I thought only sine could be positive when 90° < x < 180° (obtuse).
This is true . . . but why do you care? . . . This problem involves the cosine.
Does - sign mean that this negates this rule?
No, the rule for sine still holds . . . why wouldn't it?
Also, what would happen if $\sin x= -\frac{5}{11}$ (solve for 0 < x < 180) ?
Quite impossible . . .
As you pointed out, sine is positive in Quadrant 1 and 2 (0 < x < 180).
So, $\sin x$ cannot be negative there.
7. The 243 degrees comes from knowing that the other angle is 360-(180-117).
That is by knowing the symmetry properties of the trig functions.
Since the x/h value is -5/11, indicating that the angles are in the second and third quadrants, shouldn't the other angle (besides 117) be figured by adding 63 degrees to 180 degrees? Or by subtracting 117 from 360?
Just wanted to clear this up for the original poster, who seems to be having a hard time understanding this.
8. Originally Posted by spiritualfields
Since the x/h value is -5/11, indicating that the angles are in the second and third quadrants, shouldn't the other angle (besides 117) be figured by adding 63 degrees to 180 degrees? Or by subtracting 117 from 360?
Just wanted to clear this up for the original poster, who seems to be having a hard time understanding this.
Since they are the same thing you do whatever seems more natural to
you.
RonL
9. But 360 - (180 - 117) = 297, which is the fourth quadrant.
10. Originally Posted by spiritualfields
But 360 - (180 - 117) = 297, which is the fourth quadrant.
What does this refer to? The last post in this thread was:
Originally Posted by CaptainBlack
Originally Posted by spiritualfields
Since the x/h value is -5/11, indicating that the angles are in the second and third quadrants, shouldn't the other angle (besides 117) be figured by adding 63 degrees to 180 degrees? Or by subtracting 117 from 360?
Just wanted to clear this up for the original poster, who seems to be having a hard time understanding this.
Since they are the same thing you do whatever seems more natural.
The comment refers to 180+63 degrees being the same as 360-117.
RonL
11. My comments are with respect to posts # 1, 2, and 3, referring specifically to the cos x = -5/11, and the OP's question about how x could be 117 degrees as well as 243 degrees. The way I arrive at the 243 is by adding 63 to 180 (because I know that 180 - 117 = 63) or by subtracting 117 from 360. You get to the same angle (243) either way. I'm no math whiz, but THIS much I know! Given that, and the context of posts 1, 2, and 3, I'm not sure what 360 - (180 - 117) is supposed to represent. If I were a student in your class, I'd be calling you out on that one
12. Originally Posted by spiritualfields
My comments are with respect to posts # 1, 2, and 3, referring specifically to the cos x = -5/11, and the OP's question about how x could be 117 degrees as well as 243 degrees. The way I arrive at the 243 is by adding 63 to 180 (because I know that 180 - 117 = 63) or by subtracting 117 from 360. You get to the same angle (243) either way. I'm no math whiz, but THIS much I know! Given that, and the context of posts 1, 2, and 3, I'm not sure what 360 - (180 - 117) is supposed to represent. If I were a student in your class, I'd be calling you out on that one
You should try to quote at least some context, otherwise the thread
can become confusing and difficult to follow.
There are tricks that will allow you to quote from multiple previous posts
if you need to (Use two browser windows, with the post you are creating
in one window. Open the same thread in the other window and click the
quote button on any post you want to quote, select the text in the reply
box then <control>c will copy the text onto the clipboard. Change back to
the first window then <control>v will paste the text from the clipboard into
the current reply box. Repeat for any other posts you need to quote).
RonL
http://cstheory.stackexchange.com/questions/9497/pseudo-random-function-families-whose-instances-have-full-domain/9514
# Pseudo-Random Function families whose instances have full domain
The GGM construction gives pseudo-random function (PRF) families whose instances' inputs are binary strings of a single fixed length.
I've convinced myself that one could get a PRF family whose instances have domain of all finite binary strings by taking a PRF for inputs of length $n$ and an independent PRNG seed, using the PRNG seed to obtain a value that is indistinguishable from a prime chosen uniformly from $[2^{n-1},2^n)$, interpreting the input as a non-negative integer, reducing that integer mod the value from the PRNG, expressing the result as a binary string, and feeding that to the PRF instance.
(The PRF instance and PRNG seed could instead be the output of the PRNG on the key.)
However, looking at the construction, I see something that would be much simpler and more efficient, and I think it would still be secure, although I don't understand the proof well enough to figure out whether it can be suitably modified.
Does the following give a pseudo-random function family from
a pseudo-random generator that stretches by a factor of 3?
Let $G$ be the generator, and define $H_0,H_\#,H_1$ to return the first,middle,last $n$ bits of $G$'s output. $\;\;$ For all non-negative integers $n$ such that $\: n\lt \text{len}(x) \:$, $\:$ let $b_n$ be the $n$th bit of $x$. $\;\;$ $F_s(x)$ is then defined to equal $\; H_\#(H_{b_{\text{len}(x)-1}}(...(H_{b_2}(H_{b_1}(H_{b_0}(s)))))) \;$.
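For concreteness, here is a minimal Python sketch of this construction, using SHA-256 in counter mode as a stand-in for the length-tripling PRG $G$ (an illustrative assumption only; the security argument of course needs a genuine PRG):

```python
import hashlib

KEY_LEN = 32  # bytes (256 bits); plays the role of n in the construction

def G(seed: bytes) -> bytes:
    """Stand-in length-tripling PRG: KEY_LEN bytes in, 3*KEY_LEN bytes out."""
    return b"".join(hashlib.sha256(seed + bytes([i])).digest() for i in range(3))

def H0(seed: bytes) -> bytes:    # first third of G(seed)
    return G(seed)[:KEY_LEN]

def Hmid(seed: bytes) -> bytes:  # middle third of G(seed), written H_# above
    return G(seed)[KEY_LEN:2 * KEY_LEN]

def H1(seed: bytes) -> bytes:    # last third of G(seed)
    return G(seed)[2 * KEY_LEN:]

def F(s: bytes, x: str) -> bytes:
    """F_s(x) = H_#(H_{b_{len(x)-1}}( ... H_{b_1}(H_{b_0}(s)) ... )) for a bit string x."""
    state = s
    for bit in x:                                    # apply H_{b_0}, H_{b_1}, ... in order
        state = H1(state) if bit == "1" else H0(state)
    return Hmid(state)                               # final application of H_#

key = bytes(KEY_LEN)
print(F(key, "0").hex()[:16], F(key, "01").hex()[:16])  # outputs for two different inputs
```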
-
I'm not sure exactly how your construction works. But when I read "infinite" (I mean the tree size) I see a possible error: in the GGM construction's proof they use the hybrid argument in order to prove the PRF property, and the loss in the security factor is proportional to the size of the tree (this is the weakness of the hybrid argument). – AntonioFa Dec 26 '11 at 20:28
I just edited to explain the construction. $\;\;$ I would imagine that one would show that for all positive integers $c$, the function's restriction to inputs of length at most $\: c+n^c \:$ was pseudo-random. $\;\;$ It would follow that the function itself its pseudo-random, since an efficient algorithm can only generate polynomially long inputs for the function. – Ricky Demer Dec 27 '11 at 7:05
## 1 Answer
Here is a proof sketch of the security of your construction.
Let $G: \{0,1\}^k \to \{0,1\}^{3k}$ be the PRG and let $G_0, G_1, G_2$ denote the first, middle, and last third of the output of $G$.
Define the function $F_{x,m}$ as follows:
$\qquad F_{x,m}(y) = \begin{cases} rand & \mbox{ if } y<x \mbox{ in lex order} \\ G_m( rand ) & \mbox{ if } y=\epsilon \\ G_m( F_{x,b}(y') ) & \mbox{ if } y=y'b \\ \end{cases}$
Here $rand$ denotes a value chosen at random for each $(y,m)$, independent of everything else.
Now $F_{\epsilon,2}$ is your GGM construction instantiated with a random seed. Suppose we picture the GGM construction as a labeling of the edges and vertices of a binary tree. Each vertex is associated with a finite binary string. Each vertex labels its two outgoing edges $G_0(\ell), G_1(\ell)$ and labels itself $G_2(\ell)$, where $\ell$ is the value assigned to its incoming edge. Then $F_{x,2}$ is the result of having vertices $y < x$ assign their relevant labels totally randomly.
Fix an adversary $A$ with running time $p(\cdot)$, to which we give oracle access to a function of this form. We'll consider the sequence of hybrids $F_{\epsilon,2}, F_{0,2}, F_{1,2}, F_{00,2}, \ldots, F_{\omega,2}$, where $|\omega| > p(k)$. From $A$'s point of view, $F_{\omega,2}$ has output distributed identically to a totally random function.
If $x$ is the successor of $x'$ in lex order, then $F_{x,2}$ and $F_{x',2}$ differ only in one application of the PRG. Thus, successive hybrids are indistinguishable. However, we've proposed an exponential number of hybrids, so we are not yet done! Intuitively $A$ can only "notice" the changes in a polynomial-sized portion of the tree. We formalize this as follows:
Conditioned on the event that $A$ never queries its oracle on a string beginning with $x$, then $F_{x,2}$ and $F_{x',2}$ give identical output distribution. Let $q_x$ denote the probability that $A$ queries its oracle on such a string, and let $\delta(\cdot)$ denote the negligible security error of the PRG. Then we have:
$\qquad\Big|\Pr[A^{F_{x,2}}(1^k) = 1] - \Pr[A^{F_{x',2}}(1^k) = 1] \Big| \le q_x \delta(k)$
Then by a hybrid argument, we have:
$\qquad\Big|\Pr[A^{F_{\epsilon,2}}(1^k) = 1] - \Pr[A^{F_{\omega,2}}(1^k) = 1] \Big| \le \delta(k)\sum_x q_x$
The first expression involves $A$ with oracle access to the GGM function, and the second expression involves $A$ with oracle access to a random function. Now we bound the sum $\sum_x q_x \le p(k)^2$. Since $\delta(k) p(k)^2$ is negligible, the construction is a PRF.
-
(Sorry for taking so long to respond.) $\:$ How do we "bound the sum"? $\:$ Each $q_x$ is defined by giving $A$ access to a different oracle, so $A$ could conceivably find $x$ and query the oracle on it. – Ricky Demer Dec 28 '11 at 23:45
$\sum_x \Pr[A \mbox{ queries an extension of } x]$ is at most the expected combined length of all the oracle queries. So the sum is at most $p(k)^2$ regardless of the oracle. You're right that the $q_x$ values depend on the oracle. To very formally address that issue, I think you'd have to more carefully define only a polynomial number of hybrids. – mikero Dec 29 '11 at 2:33
http://cs.stackexchange.com/questions/6773/finding-negative-cycles-for-cycle-canceling-algorithm
# Finding negative cycles for cycle-canceling algorithm
I am implementing the cycle-canceling algorithm to find an optimal solution for the min-cost flow problem. By finding and removing negative-cost cycles in the residual network, the total cost is lowered in each round. To find a negative cycle I am using the Bellman-Ford algorithm.
My problem is: Bellman-Ford only finds cycles that are reachable from the source, but I also need to find cycles that are not reachable.
Example: In the following network, we already applied a maximum flow. The edge $(A, B)$ makes it very expensive. In the residual network, we have a negative cost cycle with capacity $1$. Removing it, would give us a cheaper solution using edges $(A, C)$ and $(C, T)$, but we cannot reach it from the source $S$.
Labels: Flow/Capacity, Cost
Of course, I could run Bellman-ford repeatedly with each node as source, but that does not sound like a good solution. I'm a little confused because all the papers I read seem to skip this step.
Can you tell me how to use Bellman-Ford to find every negative cycle (reachable or not)? And if that is not possible, which other algorithm do you propose?
-
If a cycle cannot be reached via the source, how can it affect the total flow? – Nicholas Mancuso Nov 19 '12 at 21:30
It won't affect the flow value but the total cost. See the new example. – Patrick Schmidt Nov 19 '12 at 21:39
2
I think you should be running Bellman-Ford from the sink, no? If you find a maximum flow, $f$, then under the residual graph $G_f$ there will not be a path from $s$ to $t$. Therefore, Bellman-Ford should be run on $G_f$ with $t$. – Nicholas Mancuso Nov 20 '12 at 1:44
## 2 Answers
To expand upon my comment, remember, this algorithm for finding Min-Cost-Flow relies on the fact that $f$ is maximal. By first running Ford-Fulkerson to find $f$ and the resulting residual network $G_f$, the cost $f$ is then reduced by finding negative cycles in $G_f$. That is, by finding negative cycles in $G_f$ we do not change the amount of flow, $f$, but merely the cost.
Now by running Bellman-Ford from $t$ in $G_f$ we can trace backwards on edges that have non-negative flow (by definition of $G_f$). If cycles are adjacent to any edges in these paths, then we can "transfer" some amount of flow to other edges in the cycle. In other words, we keep the net-flow for some cycle the same, but are able to change the cost.
Notice that a cycle unreachable from $t$ must have zero flow. Otherwise we would have a contradiction with $f$ being maximal.
I apologize for the "hand-wavy-ness" of this explanation. I will try to be more formal when I have time tonight.
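To make the first step concrete, here is a minimal sketch of the standard negative-cycle extraction with Bellman-Ford, run from the sink $t$ as discussed above (the edge-list representation and names are illustrative assumptions):

```python
def find_negative_cycle(n, edges, source):
    """Bellman-Ford from `source` on vertices 0..n-1.
    `edges` is a list of (u, v, cost) residual arcs with positive capacity.
    Returns a negative-cost cycle reachable from `source` as a vertex list,
    or None if no such cycle exists."""
    INF = float("inf")
    dist = [INF] * n
    pred = [-1] * n
    dist[source] = 0
    last_updated = -1
    for _ in range(n):               # an update in the n-th pass signals a negative cycle
        last_updated = -1
        for u, v, cost in edges:
            if dist[u] != INF and dist[u] + cost < dist[v]:
                dist[v] = dist[u] + cost
                pred[v] = u
                last_updated = v
    if last_updated == -1:
        return None
    x = last_updated
    for _ in range(n):               # walk predecessors until we are certainly on the cycle
        x = pred[x]
    cycle, v = [x], pred[x]
    while v != x:
        cycle.append(v)
        v = pred[v]
    return cycle[::-1]
```

Cancelling the returned cycle by its bottleneck residual capacity and repeating until no negative cycle is reachable from $t$ then gives the usual cycle-canceling loop.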
-
Thanks, your last sentence makes it clear. So, it is enough to deal with cycles which are reachable from $T$. – Patrick Schmidt Nov 20 '12 at 17:46
My suggestion: you have to start the algorithm from T in order to find a negative cycle in your residual network. The result should be the same, but then you can reach the cycle.
-
1
This works for this graph, but you can have negative cycles that aren't connected to either S or T. I suspect that the OP wants a solution that works in general. – Peter Shor Nov 20 '12 at 12:15
Yes, in general you can't find every negative cycle, but the OP wants to improve his residual network by checking the costs. Then unreachable negative cycles don't matter – Sven Jung Nov 20 '12 at 13:06
I want to use this to get a min cost flow. So the new question would be: Is it sufficient to eliminate every cycle that is reachable from the sink $T$ (In the residual network). Right now I can't find a counter example – Patrick Schmidt Nov 20 '12 at 13:12
You can view a flow as either originating at $S$ and going to $T$, or reverse every edge and view it as originating at $T$ and going to $S$. If eliminating every cycle that is reachable from the source $S$ doesn't work, then eliminating every cycle that is reachable from the sink $T$ won't work. The source and the sink behave symmetrically. – Peter Shor Nov 20 '12 at 13:37
Of course it is the same if you reverse every edge and start from T, because nothing changed. But why not start at T without reversing the edges? Then you should find a reachable negative cycle, if one exists. The question is whether the unreachable negative cycles really don't matter – Sven Jung Nov 20 '12 at 13:53
http://mathoverflow.net/questions/30191/software-for-tree-decompositions/31545
Software for Tree-Decompositions
Does anybody know about software that exactly calculates the tree-width of a given graph and outputs a tree-decomposition? I am only interested in tree-decompositions of reasonably small graphs, but need the exact solution and a tree-decomposition. Any comments would be great. Thanks!
-
3 Answers
For general graphs there are no good algorithms known, as the problem of determining the treewidth of a graph is NP-hard. So if your graphs are not from some special class, and instances are small, then a brute force search over all decompositions of small width is a reasonable approach.
As a previous answer suggested, Röhrig's Diplomarbeit ranks highly in a Google search. His rather negative conclusion in 1998 was that when treewidth exceeds $4$, brute force enumeration was essentially the only realistic option; up to $4$ special-purpose algorithms were reasonable. This is not that surprising, as (intuitively speaking) iterating over all choices of bags of up to $k$ elements takes $\Omega(n^k)$ time, so the runtime grows quite fast.
Do your graphs have some special features? The ISGCI has many special graph classes, for some of which it is possible to find join-trees efficiently. (Join-tree decompositions are another name for tree decompositions, although this term nowadays seems to usually refer to join trees as used in Bayesian networks.)
Taking a really high-level perspective, do you really want to compute tree decompositions? If you are computing tree decompositions because you need to do something with the graph, then consider an easier-to-compute decomposition. For instance, the modular decomposition can be computed in linear time, and also guarantees fast algorithms for many problems via the modular decomposition tree. There is a Perl implementation of an older algorithm; Nathann Cohen is currently working to incorporate a more recent C implementation into the Sage framework, or you could use Fabien de Montgolfier's C code directly if you read French (the papers describing the work are in English).
If you really do need tree decompositions, then have a look at the simple approach via induced width, which can be easily implemented (and parallelized) by considering each possible vertex ordering, then checking the induced width it corresponds to. Section 2.3 of Rina Dechter's draft version of her chapter from the Handbook of Constraint Programming is quite useful as a starting point.
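As a rough sketch of that brute-force approach (the adjacency-dict representation and names are just illustrative, and the loop over all orderings is only feasible for very small graphs):

```python
from itertools import permutations

def induced_width(adj, order):
    """Width of the elimination ordering `order` on the graph `adj`
    (a dict mapping each vertex to the set of its neighbours): the largest
    neighbourhood seen when vertices are eliminated in this order, where
    eliminating a vertex joins its remaining neighbours into a clique."""
    g = {v: set(nbrs) for v, nbrs in adj.items()}
    width = 0
    for v in order:
        nbrs = g.pop(v)
        width = max(width, len(nbrs))
        for a in nbrs:
            g[a] |= nbrs - {a}   # make the neighbourhood a clique
            g[a].discard(v)
    return width

def exact_treewidth(adj):
    """Exact treewidth = minimum induced width over all vertex orderings."""
    return min(induced_width(adj, order) for order in permutations(adj))

# Example: a 4-cycle has treewidth 2.
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(exact_treewidth(C4))   # -> 2
```

The ordering attaining the minimum also yields a tree decomposition of that width via the standard elimination construction (each bag is an eliminated vertex together with its neighbours at elimination time).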
-
Some tree decomposition software is given here: http://hein.roehrig.name/dipl/. I haven't used it, so I can say nothing about its quality.
-
You could try the LibTW software, which is freely available from http://www.treewidth.com/ (also read their FAQ linked at the bottom of the page). It's written in Java so you can easily extend it with your own functionality.
-
http://catalog.flatworldknowledge.com/bookhub/reader/2992?e=coopermicro-ch06_s02
# Microeconomics: Theory Through Applications, v. 1.0
by Russell Cooper and A. Andrew John
## 6.2 The Revenues of a Firm
### Learning Objectives
1. What is the demand curve faced by a firm?
2. What is the elasticity of demand? How is it calculated?
3. What is marginal revenue?
A firm’s revenues are the money that it earns from selling its product. Revenues equal the number of units that a firm sells times the price at which it sells each unit:
revenues = price × quantity.
For example, think about a music store selling CDs. Suppose that the firm sells 25,000 CDs in a month at $15 each. Then its total monthly revenues are as follows:
revenues = 15 × 25,000 = $375,000.
There are two ways in which firms can obtain higher revenues: sell more products or sell at a higher price. So if a firm wants to make a lot of revenue, it should sell a lot of its product at a high price. Then again, you probably do not need to study economics to figure that out. The problem for a manager is that her ability to sell a product is limited by what the market will bear. Typically, we expect that if she sets a higher price, she will not be able to sell as much of the product:
$↑price→↓quantity$
Equivalently, if she wants to sell a larger quantity of product, she will need to drop the price:
$↑quantity→↓price$
This is the law of demand in operation (Figure 6.4 "A Change in the Price Leads to a Change in Demand").
Figure 6.4 A Change in the Price Leads to a Change in Demand
An increase in price leads to a decrease in demand. A decrease in price leads to an increase in demand.
## The Demand Curve Facing a Firm
There will typically be more than one firm that serves a market. This means that the overall demand for a product is divided among the different firms in the market. We have said nothing yet about the kind of “market structure” in which a firm is operating—for example, does it have a lot of competitors or only a few competitors? Without delving into details, we cannot know exactly how the market demand curve will be divided among the firms in the market. Fortunately, we can put this problem aside—at least for this chapter. For the moment, it is enough just to know that each firm faces a demand curve for its own product.
When the price of a product increases, individual customers are less likely to think it is good value and are more likely to spend their income on other things instead. As a result—for almost all products—a higher price leads to lower sales.
Toolkit: Section 17.15 "Pricing with Market Power"
The demand curve facing a firm tells us the price that a firm can expect to receive for any given amount of output that it brings to market or the amount it can expect to sell for any price that it chooses to set. It represents the market opportunities of the firm.
An example of such a demand curve is
quantity demanded = 100 − (5 × price).
Table 6.1 "Example of the Demand Curve Faced by a Firm" calculates the quantity associated with different prices. For example, with this demand curve, if a manager sets the price at $10, the firm will sell 50 units because 100 − (5 × 10) = 50. If a manager sets the price at$16, the firm will sell only 20 units: 100 − (5 × 16) = 20. For every \$1 increase in the price, output decreases by 5 units. (We have chosen a demand curve with numbers that are easy to work with. If you think that this makes the numbers unrealistically small, think of the quantity as being measured in, say, thousands of units, so a quantity of 3 in this equation means that the firm is selling 3,000 units. Our analysis would be unchanged.)
Table 6.1 Example of the Demand Curve Faced by a Firm
| Price ($) | Quantity |
|---|---|
| 0 | 100 |
| 2 | 90 |
| 4 | 80 |
| 6 | 70 |
| 8 | 60 |
| 10 | 50 |
| 12 | 40 |
| 14 | 30 |
| 16 | 20 |
| 18 | 10 |
| 20 | 0 |
Equivalently, we could think about a manager choosing the quantity that the firm should produce, in which case she would have to accept the price implied by the demand curve. To write the demand curve this way, first divide both sides of the equation by 5 to obtain
quantity demanded/5 = 20 − price.
Now add "price" to each side and subtract "quantity demanded/5" from each side:
price = 20 − quantity demanded/5.
For example, if the manager wants to sell 70 units, she will need a price of $6 (because 20 − 70/5 = 6). For every unit increase in quantity, the price decreases by 20 cents.
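These two views of the demand curve are easy to check numerically; here is a minimal sketch (the function names are just illustrative):

```python
def quantity_demanded(price):
    """Demand curve: quantity = 100 - 5 * price (for prices between $0 and $20)."""
    return 100 - 5 * price

def price_required(quantity):
    """Inverse demand: price = 20 - quantity / 5 (for quantities between 0 and 100)."""
    return 20 - quantity / 5

print(quantity_demanded(10))   # -> 50 units at a price of $10
print(quantity_demanded(16))   # -> 20 units at a price of $16
print(price_required(70))      # -> 6.0, i.e. $6 to sell 70 units
```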
Either way of looking at the demand curve is perfectly correct. Figure 6.5 "Two Views of the Demand Curve" shows the demand curve in these two ways. Look carefully at the two parts of this figure and convince yourself that they are really the same—all we have done is switch the axes.
Figure 6.5 Two Views of the Demand Curve
There are two ways that we can draw a demand curve, both of which are perfectly correct. (a) The demand curve has price on the horizontal axis and quantity demanded on the vertical axis. (b) The demand curve has price on the vertical axis, which is how we normally draw the demand curve in economics.
The firm faces a trade-off: it can set a high price, such as $18, but it will be able to sell only a relatively small quantity (10). Alternatively, the firm can sell a large quantity (for example, 80), but only if it is willing to accept a low price ($4). The hard choice embodied in the demand curve is perhaps the most fundamental trade-off in the world of business. Of course, if the firm sets its price too high, it won’t sell anything at all. The choke price is the price above which no units of the good will be sold. In our example, the choke price is $20; look at the vertical axis in part (b) of Figure 6.5 "Two Views of the Demand Curve". (A small mathematical technicality: the equation for the demand curve applies only if both the price and the quantity are nonnegative. At any price greater than the choke price, the quantity demanded is zero, so the demand curve runs along the vertical axis. A negative price would mean a firm was paying consumers to take the product away.)
Every firm in the economy faces some kind of demand curve. Knowing the demand for your product is one of the most fundamental necessities of successful business. We therefore turn next to how Ellie learned about the demand curve for her company’s drug.
## The Elasticity of Demand: How Price Sensitive Are Consumers?
Marketing managers understand the law of demand. They know that if they set a higher price, they can expect to sell less output. But this is not enough information for good decision making. Managers need to know whether their customers’ demand is very sensitive or relatively insensitive to changes in the price. Put differently, they need to know if the demand curve is steep (a change in price will lead to a small change in output) or flat (a change in price will lead to a big change in output). We measure this sensitivity by the own-price elasticity of demand: the percentage change in quantity demanded of a good divided by the percentage change in the price of that good.
Toolkit: Section 17.2 "Elasticity"
The own-price elasticity of demand (often simply called the elasticity of demand) measures the response of quantity demanded of a good to a change in the price of that good. Formally, it is the percentage change in the quantity demanded divided by the percentage change in the price:
elasticity of demand = percentage change in quantity / percentage change in price.
When price increases (the change in the price is positive), quantity decreases (the change in the quantity is negative). The price elasticity of demand is a negative number. It is easy to get confused with negative numbers, so we instead use
−(elasticity of demand),
which is always a positive number.
• If −(elasticity of demand) is a large number, then quantity demanded is sensitive to price: increases in price lead to big decreases in demand.
• If −(elasticity of demand) is a small number, then quantity demanded is insensitive to price: increases in price lead to small decreases in demand.
Throughout the remainder of this chapter, you will often see −(elasticity of demand). Just remember that this expression always refers to a positive number.
## Calculating the Elasticity of Demand: An Example
Go back to our earlier example:
quantity demanded = 100 − 5 × price.
Suppose a firm sets a price of $15 and sells 25 units. What is the elasticity of demand if we think of a change in price from $15 to $14.80? In this case, the change in the price is −0.2, and the change in the quantity is 1. Thus we calculate the elasticity of demand as follows:
1. The percentage change in the quantity is $\frac{1}{25}$ (4 percent).
2. The percentage change in the price is $\frac{-0.2}{15} = -\frac{1}{75}$ (approximately −1.3 percent).
3. −(elasticity of demand) is $\frac{1/25}{1/75} = 3$.
The interpretation of this elasticity is as follows: when price decreases by 1 percent, quantity demanded increases by 3 percent. This is illustrated in Figure 6.6 "The Elasticity of Demand".
Figure 6.6 The Elasticity of Demand
When the price is decreased from $15.00 to $14.80, sales increase from 25 to 26. The percentage change in price is −1.3 percent. The percentage change in the quantity sold is 4 percent. So −(elasticity of demand) is 3.
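The same arithmetic is easy to script; this sketch simply restates the calculation above (function names are illustrative):

```python
def quantity_demanded(price):
    return 100 - 5 * price

def neg_elasticity(old_price, new_price):
    """-(elasticity of demand) between two nearby prices on the demand curve."""
    old_q, new_q = quantity_demanded(old_price), quantity_demanded(new_price)
    pct_change_q = (new_q - old_q) / old_q
    pct_change_p = (new_price - old_price) / old_price
    return -pct_change_q / pct_change_p

print(round(neg_elasticity(15, 14.80), 2))   # -> 3.0
```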
One very useful feature of the elasticity of demand is that it does not change when the units of measurement change. Suppose that instead of measuring prices in dollars, we measure them in cents. In that case our demand curve becomes
quantity demanded = 100 − 0.05 × price.
Make sure you understand that this is exactly the same demand curve as before. Here the slope of the demand curve is −0.05 instead of −5. Looking back at the formula for elasticity, you see that the change in the price is 100 times greater, but the price itself is 100 times greater as well. The percentage change is unaffected, as is elasticity.
## Market Power
The elasticity of demand is very useful because it is a measure of the market power that a firm possesses: the extent to which a firm produces a product that consumers want very much and for which few substitutes are available. In some cases, some firms produce a good that consumers want very much—a good for which few substitutes are available. For example, De Beers controls much of the world’s market for diamonds, and other firms are not easily able to provide substitutes. Thus the demand for De Beers’ diamonds tends to be insensitive to price. We say that De Beers has a lot of market power. By contrast, a fast-food restaurant in a mall food court possesses very little market power: if the fast-food Chinese restaurant were to try to charge significantly higher prices, most of its potential customers would choose to go to the other Chinese restaurant down the aisle or even to eat sushi, pizza, or burritos instead.
Ellie’s company had significant market power. There were a relatively small number of drugs available in the country to treat high blood pressure, and not all drugs were identical in terms of their efficacy and side effects. Some doctors were loyal to her product and would almost always prescribe it. Some doctors were not very well informed about the price because doctors don’t pay for the medication. For all these reasons, Ellie had reason to suspect that the demand for her drug was not very sensitive to price.
## The Elasticity of Demand for a Linear Demand Curve
The elasticity of demand is generally different at different points on the demand curve. In other words, the market power of a firm is not constant: it depends on the price that a firm has chosen to set. To illustrate, remember that we found −(elasticity of demand) = 3 for our demand curve when the price is $15. Suppose we calculate the elasticity for this same demand curve at $4. Thus imagine that we are originally at the point where the price is $4 and sales are 80 units, and then suppose we again decrease the price by 20 cents. Sales will increase by 1 unit:
1. The percentage change in the quantity is $\frac{1}{80}$ (1.25 percent).
2. The percentage change in the price is $\frac{-0.2}{4} = -\frac{1}{20}$ (−5 percent).
3. −(elasticity of demand) is $\frac{1/80}{1/20} = 0.25$.
The elasticity of demand is different because we are at a different point on the demand curve.
When −(elasticity of demand) increases, we say that demand is becoming more elastic. When −(elasticity of demand) decreases, we say that demand is becoming less elastic. As we move down a linear demand curve, −(elasticity of demand) becomes smaller, as shown in Figure 6.7 "The Elasticity of Demand When the Demand Curve Is Linear".
Figure 6.7 The Elasticity of Demand When the Demand Curve Is Linear
The elasticity of demand is generally different at different points on a demand curve. In the case of a linear demand curve, −(elasticity of demand) becomes smaller as we move down the demand curve.
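To see this numerically, the sketch below evaluates −(elasticity of demand) at several points on the linear demand curve used above (again purely illustrative):

```python
def quantity_demanded(price):
    return 100 - 5 * price

def neg_elasticity(price, cut=0.20):
    """-(elasticity of demand) for a small price cut of `cut` dollars."""
    q0, q1 = quantity_demanded(price), quantity_demanded(price - cut)
    return -((q1 - q0) / q0) / (-cut / price)

for p in (16, 12, 8, 4):
    print(p, round(neg_elasticity(p), 2))   # -> 4.0, 1.5, 0.67, 0.25
# Demand is more elastic near the top of the demand curve (high prices)
# and less elastic near the bottom (low prices).
```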
## Measuring the Elasticity of Demand
To evaluate the effects of her decisions on revenues, Ellie needs to know about the demand curve facing her firm. In particular, she needs to know whether the quantity demanded by buyers is very sensitive to the price that she sets. We now know that the elasticity of demand is a useful measure of this sensitivity. How can managers such as Ellie gather information on the elasticity of demand?
At an informal level, people working in marketing and sales are likely to have some idea of whether their customers are very price sensitive. Marketing and sales personnel—if they are any good at their jobs—spend time talking to actual and potential customers and should have some idea of how much these customers care about prices. Similarly, these employees should have a good sense of the overall market and the other factors that might affect customers’ choices. For example, they will usually know whether there are other firms in the market offering similar products, and, if so, what prices these firms are charging. Such knowledge is much better than nothing, but it does not provide very concrete evidence on the demand curve or the elasticity of demand.
A firm may be able to make use of existing sales data to develop a more concrete measure of the elasticity of demand. For example, a firm might have past sales data that show how much they managed to sell at different prices, or a firm might have sales data from different cities where different prices were charged. Suppose a pricing manager discovers data for prices and quantities like those in part (a) of Figure 6.8 "Finding the Demand Curve". Here, each dot marks an observation—for example, we can see that in one case, when the price was $100, the quantity demanded was 28.
Figure 6.8 Finding the Demand Curve
(a) This is an example of data that a manager might have obtained for prices and quantities. (b) A line is fit to the data that represents a best guess at the underlying demand curve facing a firm.
The straight-line demand curves that appear in this and other books are a convenient fiction of economists and textbook writers, but no one has actually seen one in captivity. In the real world of business, demand curves—if they are available at all—are only a best guess from a collection of data. Economists and statisticians have developed statistical techniques for these guesses. The underlying idea of these techniques is that they fit a line to the data. (The exact details do not concern us here; you can learn about them in more advanced courses in economics and statistics.) Part (b) of Figure 6.8 "Finding the Demand Curve" shows an example. It represents our best prediction, based on available data, of how much people will buy at different prices.
If a firm does not have access to reliable existing data, a third option is for it to generate its own data. For example, suppose a retailer wanted to know how sensitive customer demand for milk is to changes in the price of milk. It could try setting a different price every week and observe its sales. It could then plot them in a diagram like Figure 6.8 "Finding the Demand Curve" and use techniques like those we just discussed to fit a line. In effect, the store could conduct its own experiment to find out what its demand curve looks like. For a firm that sells over the Internet, this kind of experiment is particularly attractive because it can randomly offer different prices to people coming to its website.
Finally, firms can conduct market research either on their own or by hiring a professional market research firm. Market researchers use questionnaires and surveys to try to discover the likely purchasing behavior of consumers. The simplest questionnaire might ask, “How much would you be willing to pay for product x?” Market researchers have found such questions are not very useful because consumers do not answer them very honestly. As a result, research firms use more subtle questions and other more complicated techniques to uncover consumers’ willingness to pay for goods and services.
Ellie decided that she should conduct market research to help with the pricing decision. She hired a market research firm to ask doctors about how they currently prescribed different high blood pressure medications. Specifically, the doctors were asked what percentage of their prescriptions went to each of the drugs on the market. Then they were asked the effect of different prices on those percentages. Based on this research, the market research firm found that a good description of the demand curve was as follows:
quantity demanded = 252 − 300 × price.
Remember that the drug was currently being sold for $0.50 a pill, so
quantity demanded = 252 − 300 × 0.5 = 102.
The demand curve also told Ellie that if she increased the price by 10 percent to $0.55, the quantity demanded would decrease to 87 (252 − 300 × 0.55 = 87). Therefore, the percentage change in quantity is $\frac{87-102}{102} \approx -0.147$, or −14.7 percent. From this, the market research firm concluded that at the current price, −(elasticity of demand) = 14.7/10 = 1.47.
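The market research numbers are easy to reproduce; a small sketch (names are illustrative):

```python
def pills_demanded(price):
    """Estimated demand curve: quantity = 252 - 300 * price."""
    return 252 - 300 * price

old_price, new_price = 0.50, 0.55
old_q, new_q = pills_demanded(old_price), pills_demanded(new_price)  # 102 and 87
pct_change_q = (new_q - old_q) / old_q                # about -0.147
pct_change_p = (new_price - old_price) / old_price    # 0.10
print(round(-pct_change_q / pct_change_p, 2))         # -(elasticity of demand) -> 1.47
```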
## How Do Revenues Depend on Price?
The next step is to understand how to use the demand curve when setting prices. The elasticity of demand describes how quantity demanded depends on price, but what a manager really wants to know is how revenues are affected by price. Revenues equal price times quantity, so we know immediately that a firm earns $0 if the price is $0. (It doesn’t matter how much you give away, you still get no money.) We also know that, at the choke price, the quantity demanded is 0 units, so its revenues are likewise $0. (If you sell 0 units, it doesn’t matter how high a price you sell them for.) At prices between $0 and the choke price, however, the firm sells a positive amount at a positive price, thus earning positive revenues. Figure 6.9 "Revenues" is a graphical representation of the revenues of a firm. Revenues equal price times quantity, which is the area of the rectangle under the demand curve. For example, at $14 and 30 units, revenues are $420.
Figure 6.9 Revenues
The revenues of a firm are equal to the area of the rectangle under the demand curve.
We can use the information in Table 6.1 "Example of the Demand Curve Faced by a Firm" to calculate the revenues of a firm at different quantities and prices (this is easy to do with a spreadsheet). Table 6.2 "Calculating Revenues" shows that if we start at a price of zero and increase the price, the firm’s revenues also increase. Above a certain point, however (in this example, \$10), revenues start to decrease again.
Table 6.2 Calculating Revenues

| Price (\$) | Quantity | Revenues (\$) |
|---|---|---|
| 0 | 100 | 0 |
| 2 | 90 | 180 |
| 4 | 80 | 320 |
| 6 | 70 | 420 |
| 8 | 60 | 480 |
| 10 | 50 | 500 |
| 12 | 40 | 480 |
| 14 | 30 | 420 |
| 16 | 20 | 320 |
| 18 | 10 | 180 |
| 20 | 0 | 0 |
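As the text notes, a spreadsheet does this easily; a few lines of Python do the same. The linear rule q = 100 − 5 × price below is simply the pattern the table's points follow, not a formula taken from the text.

```python
# Rebuilding Table 6.2 from the demand schedule its points follow.
print(f"{'Price':>6} {'Quantity':>9} {'Revenues':>9}")
for p in range(0, 22, 2):
    q = 100 - 5 * p          # quantity demanded at price p
    print(f"{p:>6} {q:>9} {p * q:>9}")
# Revenues climb from 0 to a peak of 500 at a price of 10, then fall back to 0.
```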
## Marginal Revenue
Earlier we suggested that a good strategy for pricing is to experiment with small changes in price. So how do small changes in price affect the revenue of a firm? Suppose, for example, that a firm has set the price at \$15 and sells 25 units, but the manager contemplates decreasing the price to \$14.80. We can see the effect that this has on the firm’s revenues in Figure 6.10 "Revenues Gained and Lost".
Figure 6.10 Revenues Gained and Lost
If a firm cuts its price, it sells more of its product, which increases revenues, but sells each unit at a lower price, which decreases revenues.
The firm will lose 20 cents on each unit it sells, so it will lose \$5 in revenue. This is shown in the figure as the rectangle labeled “revenues lost.” But the firm will sell more units: from the demand curve, we know that when the firm decreases its price by \$0.20, it sells another unit. That means that the firm gains \$14.80, as shown in the shaded area labeled “revenues gained.” The overall change in the firm’s revenues is equal to \$14.80 − \$5.00 = \$9.80. Decreasing the price from \$15.00 to \$14.80 will increase its revenues by \$9.80.
Look carefully at Figure 6.10 "Revenues Gained and Lost" and make sure you understand the experiment. We presume throughout this chapter that a firm must sell every unit at the same price. When we talk about moving from \$15.00 to \$14.80, we are not supposing that a firm sells 25 units for \$15.00 and then drops its price to \$14.80 to sell the additional unit. We are saying that the manager is choosing between selling 25 units for \$15.00 or 26 units for \$14.80.
Figure 6.11 Calculating the Change in Revenues
If a manager has an idea about how much quantity demanded will decrease for a given increase in price, she can calculate the likely effect on revenues.
Figure 6.11 "Calculating the Change in Revenues" explains this idea more generally. Suppose a firm is originally at point A on the demand curve. Now imagine that a manager decreases the price. At the new, lower price, the firm sells a new, higher quantity (point B). The change in the quantity is the new quantity minus the initial quantity. The change in the price is the new price minus the initial price (remember that this is a negative number). The change in the firm’s revenues is given by
change in revenues = (change in quantity × new price) + (change in price × initial quantity).
The first term is positive: it is the extra revenue from selling the extra output. The second term is negative: it is the revenue lost because the price has been decreased. Together these give the effect of a change in price on revenues, which we call a firm’s marginal revenue (the extra revenue from selling an additional unit of output, which is equal to the change in revenue divided by the change in sales).
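The decomposition in Figure 6.11 is easy to verify with the example numbers from Figure 6.10; the sketch below (not from the text) reproduces the \$15.00 to \$14.80 experiment.

```python
# "Revenues gained / revenues lost" with the example numbers from the text:
# 25 units at $15.00 versus 26 units at $14.80.
old_price, old_quantity = 15.00, 25
new_price, new_quantity = 14.80, 26

change_in_quantity = new_quantity - old_quantity   # +1 unit
change_in_price = new_price - old_price            # -$0.20

revenues_gained = change_in_quantity * new_price   # extra unit sold at the new price
revenues_lost = change_in_price * old_quantity     # old units now sold for less each

print(round(revenues_gained, 2), round(revenues_lost, 2))   # 14.8 -5.0
print(round(revenues_gained + revenues_lost, 2))            # 9.8
```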
Toolkit: Section 17.15 "Pricing with Market Power"
Marginal revenue is the change in revenue associated with a change in quantity of output sold:

marginal revenue = change in revenue / change in quantity.
We can write this as

marginal revenue = price + (change in price / change in quantity) × quantity.

For the derivation of this expression, see the toolkit.
## Marginal Revenue and the Elasticity of Demand
Given the definitions of marginal revenue and the elasticity of demand, we can write

marginal revenue = price × (1 − 1/(−(elasticity of demand))).
It may look odd to write this expression with two minus signs. We do this because it is easier to deal with the positive number: −(elasticity of demand). We see three things:
1. Marginal revenue is always less than the price. Mathematically, marginal revenue = price × (1 − 1/(−(elasticity of demand))) < price, because 1/(−(elasticity of demand)) is positive. Suppose a firm sells an extra unit. If the price stayed the same, then the extra revenue would just equal the price. But the price does not stay the same: it decreases, meaning the firm gets less for every unit that it sells.
2. Marginal revenue can be negative. If −(elasticity of demand) < 1, then 1/(−(elasticity of demand)) > 1, and so marginal revenue < 0. When marginal revenue is negative, increased production results in lower revenues for a firm. The firm sells more output but loses more from the lower price than it gains from the higher sales.
3. The gap between marginal revenue and price depends on the elasticity of demand. When demand is more elastic, meaning −(elasticity of demand) is a bigger number, the gap between marginal revenue and price becomes smaller.
These three ideas are illustrated in Figure 6.12 "Marginal Revenue and Demand". The demand curve shows us the price at any given quantity. The marginal revenue curve lies below the demand curve because of our first observation: at any quantity, marginal revenue is less than price. (When a demand curve is a straight line, the marginal revenue curve is also a straight line with the same intercept, but it is twice as steep.) The marginal revenue curve intersects the horizontal axis at 50 units: when output is less than 50 units, marginal revenue is positive; when output exceeds 50, marginal revenue is negative. We explained earlier that a linear demand curve becomes more inelastic as you move down it. When the demand curve goes from being relatively elastic to relatively inelastic, marginal revenue goes from being positive to being negative.
Figure 6.12 Marginal Revenue and Demand
The marginal revenue curve lies below the demand curve because at any quantity, marginal revenue is less than price.
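To see these three facts numerically, here is a sketch that treats the demand schedule behind Table 6.2 as the straight line p = 20 − q/5 through its points (an assumption of this sketch) and compares marginal revenue with price and with the elasticity of demand.

```python
# Marginal revenue along the straight-line demand schedule behind Table 6.2.
# Inverse demand p = 20 - q/5 is the line through the table's points; marginal
# revenue is approximated as the extra revenue from selling one more unit.

def price(q):
    return 20 - q / 5

def revenue(q):
    return price(q) * q

for q in [10, 30, 49, 51, 70, 90]:
    mr = revenue(q + 1) - revenue(q)     # revenue from one more unit
    elasticity = -5 * price(q) / q       # (dq/dp) * (p/q), with dq/dp = -5
    print(q, round(price(q), 2), round(mr, 2), round(-elasticity, 2))
# Marginal revenue is below price everywhere, positive for quantities under 50
# and negative beyond 50, exactly where -(elasticity of demand) falls below 1.
```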
Earlier, we showed that when a firm sets the price at \$15, −(elasticity of demand) = 3. Thus we can calculate marginal revenue at this price:

marginal revenue = 15 × (1 − 1/3) = \$10.
What does this mean? Starting at \$15, it means that if a firm decreases its price—and hence increases its output—by a small amount, there would be an increase in the firm’s revenues.
When revenues are at their maximum, marginal revenue is zero. We can confirm this by calculating the elasticity of demand at \$10. Consider a 10 percent increase in price, so the price increases to \$11. At \$10, sales equal 50 units. At \$11, sales equal 45 units. In other words, sales decrease by 5 units, so the decrease in sales is 10 percent. It follows that

−(elasticity of demand) = 10 percent/10 percent = 1.
Plugging this into our expression for marginal revenue, we confirm that

marginal revenue = 10 × (1 − 1/1) = \$0.
At \$10, a small change in price leads to no change in revenue. The benefit from selling extra output is exactly offset by the loss from charging a lower price.
Figure 6.13 Marginal Revenue and the Elasticity of Demand
The demand curve can be divided into two parts: at low quantities and high prices, marginal revenue is positive and the demand curve is elastic; at high quantities and low prices, marginal revenue is negative and the demand curve is inelastic.
We can thus divide the demand curve into two parts, as in Figure 6.13 "Marginal Revenue and the Elasticity of Demand". At low quantities and high prices, a firm can increase its revenues by moving down the demand curve—to lower prices and higher output. Marginal revenue is positive. In this region, −(elasticity of demand) is a relatively large number (specifically, it is between 1 and infinity) and we say that the demand curve is relatively elastic. Conversely, at high quantities and low prices, a decrease in price will decrease a firm’s revenues. Marginal revenue is negative. In this region, −(elasticity of demand) is between 0 and 1, and we say that the demand curve is inelastic. Table 6.3 represents this schematically.
Table 6.3

| −(Elasticity of Demand) | Demand | Marginal Revenue | Effect of a Small Price Decrease |
|---|---|---|---|
| ∞ > −(elasticity of demand) > 1 | Relatively elastic | Positive | Increase revenues |
| −(elasticity of demand) = 1 | Unit elastic | Zero | Have no effect on revenues |
| 1 > −(elasticity of demand) > 0 | Relatively inelastic | Negative | Decrease revenues |
## Maximizing Revenues
The market research company advising Ellie made a presentation to her team. The company told them that if they increased their price, they could expect to see a decrease in revenue. At their current price, in other words, marginal revenue was positive. If Ellie’s team wanted to maximize revenue, they would need to recommend a reduction in price: down to the point where marginal revenue is \$0—equivalently, where −(elasticity of demand) = 1.
Some members of Ellie’s team therefore argued that they should try to decrease the price of the product so that they could increase their market share and earn more revenues from the sale of the drug. Ellie reminded them, though, that their goal wasn’t to have as much revenue as possible. It was to have as large a profit as possible. Before they could decide what to do about price, they needed to learn more about the costs of producing the drug.
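Purely as an illustration of the revenue-maximization logic (and leaving aside the profit question Ellie raises), a small grid search over candidate prices confirms where marginal revenue hits zero for the estimated demand curve; the sketch below is not part of the original chapter.

```python
# Revenue maximization for quantity demanded = 252 - 300 * price,
# scanning prices from $0.00 up to the choke price of $0.84.

def quantity(price):
    return 252 - 300 * price

best_price = max((cents / 100 for cents in range(0, 85)),
                 key=lambda p: p * quantity(p))
print(best_price, round(best_price * quantity(best_price), 2))   # 0.42 52.92
# Revenue peaks at $0.42 per pill, below the current price of $0.50: the point
# where -(elasticity of demand) = 1 and marginal revenue is zero.
```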
### Key Takeaways
• The demand curve tells a firm how much output it can sell at different prices.
• The elasticity of demand is the percentage change in quantity divided by the percentage change in the price.
• Marginal revenue is the change in total revenue from a change in the quantity sold.
### Checking Your Understanding
1. Earlier, we saw that the demand curve was
quantity demanded = 252 − 300 × price.
1. Suppose Ellie sets the price at \$0.42. What is the quantity demanded?
2. Suppose Ellie sets a price that is 10 percent higher (\$0.462). What is the quantity demanded?
3. Confirm that −(elasticity of demand) = 1 when the price is \$0.42.
2. If a firm’s manager wants to choose a price to maximize revenue, is this the same price that would maximize profits?
3. If a demand curve has the same elasticity at every point, does it also have a constant slope?
http://mathoverflow.net/questions/26207?sort=newest
## How can we explicitly find the maximum eigenvalue of a tridiagonal matrix?
I just came across a matrix of the form
$$A:=\begin{pmatrix}
0&-\frac{c_0}{b_0}&0&\cdots&0\\-\frac{a_1}{b_1}&0&-\frac{c_1}{b_1}&\cdots&0\\0&-\frac{a_2}{b_2}&0&\cdots&0\\ \vdots&\vdots&\ddots&\ddots&-\frac{a_{N-1}}{b_{N-1}}\\0&\cdots&\cdots&-\frac{a_N}{b_N}&0
\end{pmatrix}$$
for some $N \in \mathbb{Z}^+$, where $a_n=-\frac{1}{2}\alpha(\beta^2n^2-rn)$, $b_n=1+\alpha(\beta^2n^2+r)$, and $c_n=-\frac{1}{2}\alpha(\beta^2n^2+rn)$, such that $\alpha, \beta, r$ are known real constants.
From the Gershgorin circle theorem, I know that its maximum eigenvalue must lie in the Gershgorin discs. However, despite it being quite sparse, I could not get an explicit formula for its maximum eigenvalue.
I have tried solving the equation $Av=\lambda v$, where $v$ is an eigenvector and $\lambda$ is an eigenvalue, in which I obtain a recurrence relation, but I didn't have an initial boundary condition. The equation $\det(\lambda I-A)=0$, where $I$ is the identity matrix, also gives me a complicated equation that I can't solve.
Can anyone tell me what I have missed or is this an impossible-to-solve problem?
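Not an answer to the explicit-formula question, but a quick numerical sanity check is easy. The sketch below assumes the intended tridiagonal pattern (sub-diagonal $-a_n/b_n$, super-diagonal $-c_n/b_n$) and uses made-up values of the constants purely for illustration.

```python
# Numerically computing the largest eigenvalue of the (N+1)x(N+1) matrix above
# for arbitrary illustrative values of alpha, beta, r. Only a sanity check,
# not the explicit formula asked for.
import numpy as np

alpha, beta, r, N = 0.1, 0.5, 0.03, 20       # made-up constants

def a(n): return -0.5 * alpha * (beta**2 * n**2 - r * n)
def b(n): return 1 + alpha * (beta**2 * n**2 + r)
def c(n): return -0.5 * alpha * (beta**2 * n**2 + r * n)

super_diag = [-c(k) / b(k) for k in range(0, N)]        # entries above the diagonal
sub_diag = [-a(k) / b(k) for k in range(1, N + 1)]      # entries below the diagonal

A = np.diag(super_diag, k=1) + np.diag(sub_diag, k=-1)
eigenvalues = np.linalg.eigvals(A)
print(max(eigenvalues.real))    # largest real part (for these constants the spectrum is real)
```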
I think you got the penultimate entry in the rightmost column wrong. And probably the bottom row, too, unless your matrix is (N+1)x(N+1). But why bother with -a_i/b_i, -c_i/b_i etc at all? Why not just write a_i and c_i, and be done with it? It's the same problem. – TonyK May 27 2010 at 23:23
I think the problem who tonyK suggest is little bit more complicated. because the recursive relations are more general. In another direction, I would like to mention that a particular and interesting case of the above matrices occurs when it is symmetric. Probably you know, but if not, they are called Jacobi matrices and are related to Orthogonal polynomials and Riemann-Hilbert problems. The Percy Deift's book have a nice exposition about it. Orthogonal Polynomials and Random matrices: A Riemann-Hilbert approach – Leandro May 27 2010 at 23:59
Leandro: TonyK merely suggested making the problem notationally nicer. They are the same problem. (For instance, to explicitly get from the original version to the new formulation, take the special case where all denominators are -1.) @unknown (google): I'm also wondering whether the entries are real or complex. I hope you don't mind that I incorporated TonyK's suggestions. If you want, you can revert the edit. – Jonas Meyer May 28 2010 at 1:06
Sorry, my question was a bit unclear. I hope it makes more sense now why I wrote -a_i/b_i,.etc I agree that it can be simplified according to TonyK. TonyK: My matrix is, in fact,(N+1)x(N+1). – unknown (google) May 28 2010 at 8:59
## 2 Answers
This matrix is not symmetric but it looks like the entries $\frac{c_i}{b_i}$ and $\frac{a_i}{b_i}$ are of the same sign. This matrix can then be made symmetric by a similarity transformation. A really clear explanation of the details are in: http://digilander.libero.it/foxes/matrix/convert_unsym_trid_to_sym.pdf
So we may suppose that your tridiagonal matrix is in fact symmetric of degree n+1. But then there is an easy recursive relation between the characteristic equation of the degree n+1 matrix and lower dimensional ones. See: http://www.physics.arizona.edu/~restrepo/475A/Notes/sourcea/node59.html
Solving this recursive equation should give you the characteristic equation of the symmetric tridiagonal matrix. Working out the roots of this equation should then give you all the eigenvalues for the original matrix as they are similar.
I leave you to work through the details.
Hope that helps.
To be more precise: the entries $\frac{c_i}{b_i}$ and $\frac{b_i}{a_i}$ may not be of the same sign but since all the constants are known you can check if this is true for every $i$. If this is satisfied then the procedure above works in theory. If they are not all the same sign then it's back to the drawing board. – alext87 May 28 2010 at 13:46
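A small NumPy sketch of that symmetrization, assuming the sign condition holds (random positive off-diagonals here): the symmetric version has off-diagonal entries equal to the square roots of the products of opposing entries, and the eigenvalues match.

```python
# Diagonal-similarity symmetrization of a tridiagonal matrix whose opposing
# off-diagonal products are positive: the symmetric version has off-diagonal
# entries sqrt(upper * lower) and the same eigenvalues.
import numpy as np

rng = np.random.default_rng(0)
n = 6
diag = rng.normal(size=n)
upper = rng.uniform(0.5, 2.0, size=n - 1)    # same sign as the lower entries
lower = rng.uniform(0.5, 2.0, size=n - 1)

T = np.diag(diag) + np.diag(upper, 1) + np.diag(lower, -1)
off = np.sqrt(upper * lower)
S = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

print(np.sort(np.linalg.eigvals(T).real))    # eigenvalues of the original
print(np.sort(np.linalg.eigvalsh(S)))        # eigenvalues of the symmetrized matrix
```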
Let me try to expand a little bit the problem (so it's too long for a usual comment).
Consider the determinant $D_N=D_N(\lambda;a_1,\dots,a_{N-1};b_1,\dots,c_{N-1})$ of the corresponding matrix $\lambda-A$. Expanding the determinant along the first row gives $$D_N(\lambda;a_1,\dots,a_{N-1};b_1,\dots,b_{N-1}) =\lambda D_{N-1}(\lambda;a_2,\dots,a_{N-1};b_2,\dots,b_{N-1}) -a_1b_1D_{N-2}(\lambda;a_3,\dots,a_{N-1};b_3,\dots,b_{N-1});$$ in other words, $$D_N/D_{N-1}=\lambda-\frac{a_1b_1}{D_{N-1}/D_{N-2}} =\dots =\lambda-\frac{a_1b_1}{\lambda-\dfrac{a_2b_2}{\lambda-\dfrac{a_3b_3}{\dots -\dfrac{a_{N-1}b_{N-1}}{\lambda}}}}.$$ In order to get some information about the asymptotics of the zero(s) of $D_N(\lambda)/D_{N-1}(\lambda)$ one really have to have some knowledge about the $a_ib_i$, $i=1,2,\dots$. This reduces the problem to a problem for the related family of orthogonal polynomials and even Deift's book is too advanced, it is the best source on this.
Sorry, I edited while you were writing and made the notation inconsistent. I'm not sure of the best way to fix this. – Jonas Meyer May 28 2010 at 1:27
After Jonas, the question was re-edited again. I won't edit my answer again. My remark would be that $c_n/b_n\sim a_n/b_n\sim-2$ as $n\to\infty$, and computing the determinant of the matrix with entries $\lambda$ along the main diagonal and $2$ along the auxiliary diagonals is an easy task. – Wadim Zudilin May 28 2010 at 10:53
http://en.wikibooks.org/wiki/Engineering_Acoustics/Piezoelectric_Transducers
# Engineering Acoustics/Piezoelectric Transducers
# Introduction
Piezoelectricity from the Greek word "piezo" means pressure electricity. Certain crystalline substances generate electric charges under mechanical stress and conversely experience a mechanical strain in the presence of an electric field. The piezoelectric effect describes a situation where the transducing material senses input mechanical vibrations and produces a charge at the frequency of the vibration. An AC voltage causes the piezoelectric material to vibrate in an oscillatory fashion at the same frequency as the input current.
Quartz is the best known single crystal material with piezoelectric properties. Strong piezoelectric effects can be induced in materials with an ABO3, Perovskite crystalline structure. 'A' denotes a large divalent metal ion such as lead and 'B' denotes a smaller tetravalent ion such as titanium or zirconium.
For any crystal to exhibit the piezoelectric effect, its structure must have no center of symmetry. Either a tensile or compressive stress applied to the crystal alters the separation between positive and negative charge sites in the cell, causing a net polarization at the surface of the crystal. The polarization varies directly with the applied stress and is direction dependent, so that compressive and tensile stresses will result in electric fields of opposite voltages.
# Vibrations & Displacements
Piezoelectric ceramics have non-centrosymmetric unit cells below the Curie temperature and centrosymmetric unit cells above the Curie temperature. Non-centrosymmetric structures provide a net electric dipole moment. The dipoles are randomly oriented until a strong DC electric field is applied causing permanent polarization and thus piezoelectric properties.
A polarized ceramic may be subjected to stress, causing the crystal lattice to distort and changing the total dipole moment of the ceramic. The change in dipole moment due to an applied stress causes a net electric field which varies linearly with stress.
# Dynamic Performance
The dynamic performance of a piezoelectric material relates to how it behaves under alternating stresses near the mechanical resonance. The parallel combination of C2 with L1, C1, and R1 in the equivalent circuit below controls the transducer's reactance, which is a function of frequency.
## Frequency Response
The graph below shows the impedance of a piezoelectric transducer as a function of frequency. The minimum value at fm corresponds to the resonance while the maximum value at fn corresponds to anti-resonance.
# Resonant Devices
Non-resonant devices may be modeled by a capacitor representing the capacitance of the piezoelectric, with an impedance modeling the mechanically vibrating system as a shunt in the circuit. The impedance may itself be modeled as a capacitor in the non-resonant case, which allows the circuit to reduce to a single capacitor replacing the parallel combination.
For resonant devices the impedance becomes a resistance or static capacitance at resonance. This is an undesirable effect. In mechanically driven systems this effect acts as a load on the transducer and decreases the electrical output. In electrically driven systems this effect shunts the driver requiring a larger input current. The adverse effect of the static capacitance experienced at resonant operation may be counteracted by using a shunt or series inductor resonating with the static capacitance at the operating frequency.
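As a rough illustration of that compensation, the inductor is chosen to resonate with the static capacitance at the operating frequency, i.e. ω²LC₀ = 1; the capacitance and frequency in the sketch below are made-up values.

```python
# Sizing the compensating inductor that resonates with the static capacitance
# C0 at the operating frequency f, i.e. (2*pi*f)**2 * L * C0 = 1.
import math

C0 = 2.0e-9                      # static capacitance in farads (assumed value)
f = 40e3                         # operating frequency in hertz (assumed value)

L = 1 / ((2 * math.pi * f)**2 * C0)
print(f"compensating inductor: {L * 1e3:.2f} mH")   # about 7.92 mH
```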
# Applications
## Mechanical Measurement
Because of the dielectric leakage current of piezoelectrics they are poorly suited for applications where force or pressure have a slow rate of change. They are, however, very well suited for highly dynamic measurements that might be needed in blast gauges and accelerometers.
## Ultrasonic
High intensity ultrasound applications utilize half wavelength transducers with resonant frequencies between 18 kHz and 45 kHz. Large blocks of transducer material are needed to generate high intensities, which makes manufacturing difficult and economically impractical. Also, since half wavelength transducers have the highest stress amplitude in the center, the end sections act as inert masses. The end sections are often replaced with metal plates possessing a much higher mechanical quality factor, giving the composite transducer a higher mechanical quality factor than a single-piece transducer.
The overall electro-acoustic efficiency is:
$\eta \approx 1 - \frac{1}{1 + k_{eff}^2Q_EQ_L} - \frac{1}{1 + \frac{Q_{m0}}{Q_L}}$
``` Qm0 = unloaded mechanical quality factor
QE = electric quality factor
QL = quality factor due to the acoustic load alone
```
The second term on the right hand side is the dielectric loss and the third term is the mechanical loss.
Efficiency is maximized when:
$Q_L = \frac{1}{k_{eff}} \sqrt{\frac{Q_{m0}}{Q_E}} = Q_{Lopt}$
then:
$\eta_{\max} = 1 - \frac{2}{k_\mathrm{eff} \sqrt{Q_E Q_{m0}}} \qquad \left ( k_\mathrm{eff} \sqrt{Q_E Q_{m0}} \ll 1 \right )$
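A short numerical sketch of these formulas, with made-up quality factors and coupling coefficient, scanning the load quality factor to confirm that the optimum sits near $Q_{Lopt}$:

```python
# Evaluating the electro-acoustic efficiency formula for assumed values of the
# coupling coefficient and quality factors, and checking the optimum load.
import math

k_eff, Q_E, Q_m0 = 0.4, 80, 500      # assumed illustrative values

def efficiency(Q_L):
    dielectric_loss = 1 / (1 + k_eff**2 * Q_E * Q_L)
    mechanical_loss = 1 / (1 + Q_m0 / Q_L)
    return 1 - dielectric_loss - mechanical_loss

Q_L_opt = (1 / k_eff) * math.sqrt(Q_m0 / Q_E)          # 6.25 for these numbers
best_integer_Q_L = max(range(1, 200), key=efficiency)

print(Q_L_opt, best_integer_Q_L)                          # 6.25 6
print(round(efficiency(Q_L_opt), 3))                      # 0.975
print(round(1 - 2 / (k_eff * math.sqrt(Q_E * Q_m0)), 3))  # 0.975, the eta_max formula
```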
The maximum ultrasonic efficiency is described by:
$I_{\mathrm{w}max} = \frac{1}{2} ( \omega_5 u_l )_{max}^2 \ \rho_{m\mathrm{w}} \nu_{\mathrm{w}} ( W / m^2)$
Applications of ultrasonic transducers include:
``` Welding of plastics
Atomization of liquids
Ultrasonic drilling
Ultrasonic cleaning
Ultrasound
Non destructive testing
etc.
```
# More Information and Source of Information
MorganElectroCeramics
http://math.stackexchange.com/questions/216816/combinatorial-identity-related-to-the-volume-of-a-ball-in-mathbbr2k1
# Combinatorial identity related to the volume of a ball in $\mathbb{R}^{2k+1}$
Calculating the volume of a ball in $\mathbb{R}^{2k+1}$ in two different ways gives us the following formula:
$$\sum_{i=0}^k {k \choose i} \frac{(-1)^i}{2i+1} = \frac{(k!)^2 2^{2k}}{(2k+1)!}$$
Is there a more direct way to prove this identity? I'm interested if there is a more combinatorial or algebraic way to prove this. Given the sum on the left side, how would you find out the formula for it?
Added: This is how I found the identity. The volume of an ball of radius $r$ in $\mathbb{R}^{2k+1}$ is given by the formula $$\mathscr{L}^{2k+1}(B(0,r)) = \frac{\pi^k k! 2^{2k+1}}{(2k+1)!}r^{2k+1}$$
and in $\mathbb{R}^{2k}$ by the formula
$$\mathscr{L}^{2k}(B(0,r)) = \frac{\pi^k}{k!}r^{2k}$$
where $\mathscr{L}$ denotes Lebesgue measure. I was wondering if I could prove the formula for $\mathbb{R}^{2k+1}$ using the formula for $\mathbb{R}^{2k}$. With the formula for even dimension we can calculate
\begin{align*} \mathscr{L}^{2k+1}(B(0,r)) &= (\mathscr{L}^{2k} \times \mathscr{L})(B(0,r)) \\ &= \int_{[-r,r]} \mathscr{L}^{2k}(B(0,\sqrt{r^2 - y^2}))d \mathscr{L}(y) \\ &= \frac{\pi^k}{k!} 2 \int_0^r (r^2 - y^2)^k dy \\ &= \frac{\pi^k}{k!} 2r^{2k+1} \sum_{i=0}^k {k \choose i}\frac{(-1)^i}{2i+1} \end{align*}
Now equating the two formulas for $\mathscr{L}^{2k+1}(B(0,r))$ gives
$$\sum_{i=0}^k {k \choose i} \frac{(-1)^i}{2i+1} = \frac{(k!)^2 2^{2k}}{(2k+1)!}$$
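Before looking for a proof, one can at least confirm the identity exactly for small $k$ with rational arithmetic (a throwaway Python check, nothing more):

```python
# Exact verification of the identity for small k using rational arithmetic.
from fractions import Fraction
from math import comb, factorial

def lhs(k):
    return sum(Fraction((-1)**i * comb(k, i), 2 * i + 1) for i in range(k + 1))

def rhs(k):
    return Fraction(factorial(k)**2 * 2**(2 * k), factorial(2 * k + 1))

for k in range(8):
    assert lhs(k) == rhs(k)
    print(k, lhs(k))    # 1, 2/3, 8/15, 16/35, 128/315, ...
```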
## 2 Answers
Actually, we can derive the recurrence relation for the sum without going through integrals:
$$\begin{align} 2k(I_k-I_{k-1}) &= 2k\sum_{i=0}^k \left(\binom ki-\binom{k-1}i\right)\frac{(-1)^i}{2i+1} \\ &= 2k\sum_{i=0}^k \binom ki\left(1-\frac{k-i}k\right)\frac{(-1)^i}{2i+1} \\ &= 2k\sum_{i=0}^k \binom ki\frac ik\frac{(-1)^i}{2i+1} \\ &= \sum_{i=0}^k \binom ki(-1)^i\frac{2i}{2i+1} \\ &= \sum_{i=0}^k \binom ki(-1)^i\left(1-\frac{1}{2i+1}\right) \\ &= -\sum_{i=0}^k \binom ki(-1)^i\frac{1}{2i+1} \\ &= -I_k\;, \end{align}$$
where I made use of the fact that the alternating sum of $\binom ki$ over $i$ vanishes.
Thanks! This is the kind of thing I was looking for. – spin Oct 19 '12 at 12:10
You might not be happy about this answer, but I'd actually compute that sum exactly in the way in which I suspect you arrived at it:
$$\begin{align} \sum_{i=0}^k {k \choose i} \frac{(-1)^i}{2i+1} &= \int_0^1\sum_{i=0}^k {k \choose i} (-1)^iq^{2i}\mathrm dq \\ &= \int_0^1(1-q^2)^k\mathrm dq \\ &=\int_0^{\pi/2}(1-\sin^2\theta)^k\cos\theta\,\mathrm d\theta \\ &=\int_0^{\pi/2}\cos^{2k+1}\theta\,\mathrm d\theta \\ &=:I_k \;, \end{align}$$
and then integration by parts yields the recurrence relation
$$I_k=\int_0^{\pi/2}\cos^{2k}\theta\cos\theta\,\mathrm d\theta=2k\int_0^{\pi/2}\cos^{2k-1}\theta\sin^2\theta\,\mathrm d\theta=2k(I_{k-1}-I_k)$$
and thus
$$I_k=\frac{2k}{2k+1}I_{k-1}$$
with $I_0=1$, whose closed-form solution is readily seen to be your right-hand side.
http://physics.stackexchange.com/questions/32368/hamiltonian-mechanics-and-special-relativity/32375
# Hamiltonian mechanics and special relativity?
Is there a relativistic version of Hamiltonian mechanics? If so, how is it formulated (what are the main equations and the form of Hamiltonian)? Is it a common framework, if not then why?
It would be nice also to provide an example --- a simple system with its Hamiltonian.
As far as I remember, in relativistic mechanics we were only taught to use conservation laws, that is integral invariants and thus I have a vague perception of relativistic dynamics.
@Qmechanic does it mean that the Hamiltonian equations themselves (the structure of phase space) doesn't change? Is the only thing that changes the allowed form of Hamiltonian? I couldn't find in Wikipedia anything about relativistic Hamiltonian mechanics itself. – Yrogirg Jul 19 '12 at 13:12
## 2 Answers
Relativistic Lagrangian and Hamiltonian mechanics can be formulated by means of the jet formalism which is appropriate when one deals with transformations mixing position and time.
This formalism is much advocated by G. Sardanashvily; please see his review article.
+1: thanks a lot for that link; my own musing went into the direction of modeling relativistic mechanics via local contact structures on the space of geodesics parametrized by arc length (ie proper time); that's just another way to arrive at $J^1_1Q$, and now that I know where I need to end up eventually, I might revisit that idea... – Christoph Jul 21 '12 at 14:03
One-particle Hamiltonian mechanics is easy to make relativistic, as the 0-component of the momentum 4-vector is the Hamiltonian. For example, $H=\sqrt{\mathbf{p}^2+m^2}$ for a free particle, and by minimal substitution one can add an external electromagnetic field.
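As a concrete toy example (my own sketch, with $c = 1$ and a constant external force standing in for the minimal substitution), Hamilton's equations for $H=\sqrt{\mathbf{p}^2+m^2}+V(x)$ can be integrated directly, and the speed $\partial H/\partial p$ stays below 1:

```python
# 1-D relativistic Hamiltonian dynamics with H = sqrt(p**2 + m**2) + V(x)
# in units where c = 1, using a constant force F = -dV/dx as a toy example.
import math

m, F = 1.0, 0.5            # mass and constant force (illustrative values)
x, p = 0.0, 0.0
dt, steps = 0.01, 2000

for _ in range(steps):
    v = p / math.sqrt(p**2 + m**2)    # dx/dt = dH/dp
    x += v * dt
    p += F * dt                       # dp/dt = -dH/dx = F

print(round(p, 2), round(p / math.sqrt(p**2 + m**2), 4))
# the momentum grows without bound (10.0 here), but the speed is only ~0.995
```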
Multiparticle Hamiltonian mechanics is somewhat awkward (and hardly ever pursued) as there is a no-go theorem for the ''natural'' situation; see Jordan-Currie-Sudarshan, Rev. Mod. Phys. 35 (1963), 350-375.
Relativistic classical field theory has again a good Hamiltonian formulation; see http://count.ucsc.edu/~rmont/papers/covPBs85.PDF
http://physics.stackexchange.com/questions/tagged/rocket+momentum
# Tagged Questions
3answers
247 views
### Conservation of Energy in Different Frames of Reference
Say I have a bucket of fuel that can produce 150J of energy by combustion. No matter what frame of reference an observer or the bucket of fuel is in, since the configuration of molecules stay the ...
3answers
415 views
### How does $F = \frac{ \Delta (mv)}{ \Delta t}$ equal $( m \frac { \Delta v}{ \Delta t} ) + ( v \frac { \Delta m}{ \Delta t} )$?
That's how it's framed in my Physics school-book. The question (or rather, the explanation) is that of the thrust of rockets and how the impulse is equal (with opposite signs) on the thrust-gases and ...
1answer
148 views
### Rocket drive and conservation of momentum
I am currently reading through some lecture notes of Physics 1 and in a chapter about the dynamics of the mass point, there is an example covering the rocket drive. Let $v$ be the velocity of the ...
http://mathhelpforum.com/calculus/114131-maximum-volume.html
# Thread:
1. ## Maximum Volume
A hemisphere of radius 7 sits on a horizontal plane. A cylinder stands with its axis vertical, the center of its base at the center of the sphere, and its top circular rim touching the hemisphere. Find the radius and height of the cylinder of maximum volume.
I have no clue how to go about solving this. Help, please?!?
2. Draw a right triangle inside the sphere, then we can see that
$r^{2}+h^{2}=7^{2}$, where $7$ is the radius of the sphere
and r and h are the radius and height of the cylinder, respectively.
$r^{2}=49-h^{2}$
The volume of the cylinder is $V={\pi}(49-h^{2})h$
$\frac{dV}{dh}={\pi}(49-3h^{2})$
Set to 0 and solve for h. The rest will follow.
Attached Thumbnails
3. Originally Posted by galactus
Draw a right triangle inside the sphere, then we can see that
$r^{2}+h^{2}=7^{2}$, where $7$ is the radius of the sphere
and r and h are the radius and height of the cylinder, respectively.
$r^{2}=49-h^{2}$
The volume of the cylinder is $V={\pi}(49-h^{2})h$
$\frac{dV}{dh}={\pi}(49-3h^{2})$
Set to 0 and solve for h. The rest will follow.
I tried what you said, and got 7(3^(1/2))/3 for h, which is incorrect, thereby making my value for r incorrect.
What did I do incorrectly?
4. Well, solving dV/dh=0, we get $h=\frac{7}{\sqrt{3}}$
This gives $r=\frac{7\sqrt{6}}{3}$
That appears correct to me.
You have $h=\frac{7\sqrt{3}}{3}$
You do realize this is the same thing as $\frac{7}{\sqrt{3}}$.
The denominator has been rationalized in your version, but they are equivalent.
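For anyone who wants a numerical cross-check of the thread, here is a short Python sketch; it confirms both the optimal dimensions and that the two forms of $h$ are the same number.

```python
# Numeric check: maximize V(h) = pi * (49 - h**2) * h over 0 < h < 7 and
# compare with the closed-form answer h = 7/sqrt(3), r = 7*sqrt(6)/3.
import math

best_h = max((i * 7 / 10000 for i in range(1, 10000)),
             key=lambda h: math.pi * (49 - h**2) * h)
h_exact = 7 / math.sqrt(3)
r_exact = math.sqrt(49 - h_exact**2)

print(round(best_h, 2), round(h_exact, 2))                      # 4.04 4.04
print(round(r_exact, 3))                                        # 5.715, i.e. 7*sqrt(6)/3
print(math.isclose(7 * math.sqrt(3) / 3, 7 / math.sqrt(3)))     # True: same value
```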
http://www.ateacher.org/blog/?cat=1
# Divide By Zero
Do the impossible
Archive | everything else
October 27, 2010
## Juice Mixing
You have two glasses with an equal amount of juice in them. One has apple juice, the other: orange juice. You take some amount from the apple juice and put it into the
orange juice glass, then mix it up real nice. Then you take the same amount from the orange juice glass and put it into the apple juice glass, and mix it up.
This would be great with a video… I think the question you ask is which glass has more of its original juice now? Answer to follow.
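If you want to test your guess before the answer shows up, here's a tiny simulation with made-up amounts (10 units per glass, a 3-unit scoop); run it and compare the two glasses.

```python
# Juice-mixing simulation with made-up amounts. Each glass is tracked as
# [apple, orange] contents.
glass_a = [10.0, 0.0]     # the apple-juice glass
glass_o = [0.0, 10.0]     # the orange-juice glass
scoop = 3.0

# Pour a scoop of pure apple into the orange glass.
glass_a[0] -= scoop
glass_o[0] += scoop

# Pour the same amount of the now-mixed orange glass back, proportionally.
total = sum(glass_o)
for i in range(2):
    moved = scoop * glass_o[i] / total
    glass_o[i] -= moved
    glass_a[i] += moved

print([round(v, 3) for v in glass_a])   # apple glass: [apple, orange]
print([round(v, 3) for v in glass_o])   # orange glass: [apple, orange]
```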
May 19, 2010
## Workflow: Worksheet design for Factoring
I’ve been still trying to wrap my head around factoring, and I think I’m approaching a more reasoned presentation. The work I gave out today in both Algebra classes was more successful than it’s been before. In one class I gave them a heavily scaffolded set of problems and explanations to help them tie together much of what we’ve learned. I haven’t made an answer key.
Ws Factoring Overview All Techniques
Hoping for a similar effect I used this sheet to introduce factoring $ax^2 + bx + c$‘s by splitting the middle in a different period. The justification I’m giving for splitting the middle is one I think extends beyond some of the simple $(2x + 3)(3x - 5)$ type expressions. Anyway, this isn’t a magical topic, but I think my students this year will have a more solid ability to reason about factoring than before. That’s a good thing, I suppose.
This starts on the 3rd page ‘cuz that’s where I kicked off the page with the class.
ws-ax2+bx+c-intro
I’ve got one more to share, which may get used later.
ws-ax2+bx+c-practice
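For the curious, here is a small, hypothetical helper (not taken from the worksheets) that does the "splitting the middle" step for an integer quadratic: find two integers whose product is $ac$ and whose sum is $b$, then factor by grouping.

```python
# Splitting the middle term of a*x^2 + b*x + c: find integers m, n with
# m * n == a * c and m + n == b. (A toy helper, assumes a*c != 0.)

def split_middle(a, b, c):
    target = a * c
    for m in range(-abs(target), abs(target) + 1):
        if m != 0 and target % m == 0 and m + target // m == b:
            return m, target // m
    return None

print(split_middle(2, 1, -15))
# (-5, 6): 2x^2 + x - 15 = 2x^2 - 5x + 6x - 15 = (2x - 5)(x + 3)
print(split_middle(3, -13, -10))
# (-15, 2): 3x^2 - 13x - 10 = 3x^2 - 15x + 2x - 10 = (3x + 2)(x - 5)
```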
In other news, I thought I’d post a picture of how I get this generated. I write the worksheets in LaTeX using Emacs to write the code, and I use Maxima to help simplify and factor expressions so that I know they’ll be correct on the worksheet. Here’s a snapshot of how that all works.
May 18, 2010
## Art Appreciation: A Geometry Project
My Geometry class is entering the 3-d part of the year. But instead of watching Avatar, we’re studying Area, Surface Area and Volume. Or at least that’s what the last three chapters in the book are. Thing is, in some ways the book throws a ton of easy pitches to the kids:
Find the surface area of this shape, given only the relevant dimensions and nothing else.
Same thing for volume and everything else. I thought I’d kick them off with a little detour to get them thinking about why we care about things like surface area and some of the problems that arise from surface area calculations as applied to real problems. I told the class that we were going to spend a day doing a little “art appreciation” and with no other introduction started the following slide show of images happily lifted from Christo and Jeanne-Claude’s site.
Christo Project
The slides naturally spark lots of “is this really art??? I could cover this place with stuff!” type sentiment. I took some straw polls like “How many of you think it would be easy to wrap our school building?” a split favoring those who voted “not easy” and then made the easy votes describe how they’d go about getting the school covered. We continued through the projects, discussing issues that seem to come up naturally whether or not they seemed especially mathematical. We stopped to read the press-releases as they were in the order. And by the time we reach the “WWCAJCD?” pyramid slide, they have lots of ideas about a project that might be done and the issues involved in making it happen.
We’ll be working on the project for two more days. Part of my hope is that good questions arise, and that students feel a little more confidence in working out solutions because there isn’t necessarily an answer they can check in the back of their book. This problem has not already been solved.
May 9, 2010
## Learning a lot outside of school
I haven’t taken the time to write, and I’ve barely been keeping up with all the great posts being written these days. Very big thanks to Dan for summarizing the many sessions at NCTM and NCSM.
I’m enrolled in a class titled Problem Solving for the MS Math Teacher. The class is great. We get three problems for each HW assignment, and they really forced us to justify everything about our solutions. If you set up an equation or a formula it needs to be justified, if you perform a calculation the theory behind it should be clearly explained. The emphasis on reasoning has been a great push, and it’s really valuable to see how so many others reason through the same problems. Here’s a sample of the problems we’ve been assigned
• Saved by Zero. How many zeroes occur at the end of the expanded numeral 999!?
• The Last Straw. Two piles of straws are on a table. A player can remove a straw from either pile, or a straw from both piles. The player who takes the last straw loses. If there are two players how should you play?
• The Case of the Grouchy Customers. Every morning at local cafes sleepy customers stumble in for their morning cup of coffee. One such cafe has a row of 10 seats at the counter. Typically, morning customers do not like to engage in conversation. How many different ways can three customers sit in those 10 seats so that no two customers are sitting adjacent to one another?
• Put Down the Ducky. A man selling ducks sold half his flock and half a duck to Amy. He then sold one third of what was left and 1/3 of a duck to Beth; then 1/4 of the remaining flock and 3/4 of a duck to Cathy, and finally he sold 1/5 of the remaining flock and 1/5 of a duck to Dina. He now has 19 ducks and he never cut a single duck (whew!) What was the size of the man’s original flock?
It’s been a fun challenge to figure out how best to explain myself as I approach these, and some of them have been good problems to have on hand for students.
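These are also fun to sanity-check by brute force; for instance, the "Saved by Zero" problem above gives in a couple of lines (my own quick check, not part of the course):

```python
# Counting the zeros at the end of 999! two ways: directly, and by counting
# factors of 5 (there are always more factors of 2 than 5).
from math import factorial

digits = str(factorial(999))
direct = len(digits) - len(digits.rstrip("0"))
by_fives = 999 // 5 + 999 // 25 + 999 // 125 + 999 // 625

print(direct, by_fives)   # 246 246
```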
I’m also taking Intro to Computer Science and Programming for no credit through the MIT OpenCourseWare offerings. As a model for online learning I love that MIT is doing this. I have already found myself applying python to multiple problems beyond this class in a more sophisticated manner than prior to beginning the lectures / readings / problem sets. It’d be great to be able to get credit for this, but you can’t beat the cost.
The entire offering is fantastic and well curated. A lecture I recently watched was entirely re-taped because technical glitches ruined the video of the original class session. The professor entirely re-did his near hour long lecture for the sake of OpenCourseWare. It is lecture based, so not for younger folk, but if you have the patience to really work through the materials provided you really could educate yourself with little more than internet access and time. Take a look at the offerings if you are interested!
March 28, 2010
## Exponent Rules, Polynomials, and Adaptations
It’s the end of Spring Break, and I’m just finishing up my Algebra plans for the coming week. The backstory to this post is the week before break I began exponent rules with the Algebra classes and successfully confused lotsa kids. Woosh. I’ll take responsibility for some of it, and I think I know what went wrong.
First, I tried to use Smart Slides to guide the classes through the material. For those of you who pre-prepare your slides for class, you’re stronger planners than I. My pre-prepared Smart notes have some significant flaws. For example, I might introduce a new concept like negative exponents with $2^{-2}$, get some head nods and then run the train straight into a brick wall with the next slide: $\left(\dfrac{2x^3}{3x^3y^5}\right)^{-2}$. I’m not kidding. And the best part? I won’t know it’s coming either! Kids will just look at the slide and think: “I had it. Then I lost it.”
This sounds gnarly. My problem is not that I can’t plan, and normally the gaps aren’t that gross, but I’m trying to make a point here. What you think will flow smoothly on Sunday at the Coffee Shop doesn’t always flow smoothly on Monday in class. And you’ll have a much better sense of what should come next when you’re in class. So, wouldn’t it be nice to not be tied to the next slide?
Solution: Return to the whiteboard. I realize that lecturing is not the most progressive education strategy available, but many of us still present content this way. So we should at least do it well. There’s a reason that many professors who lecture for a living, even in Computer Science, still use a chalkboard. It puts a subtle break on the amount you write, it gives the audience time to follow you, and allows you to adjust on the fly without being tied to your next slide. I connect more with the class when the material is coming from me not from the surprise next slide I created a few days ago. I’m also giving students a little more processing time because I can’t reveal paragraphs with the click of a button. The white board also doesn’t change with a click so choosing carefully what you put up there helps students because they’ll have the reference for the rest of class. This is very hard to do well with any brand of Powerpoint/Smart/Keynote presentation.
Not that I want to present everything by lecture, but Exponent Rules and some brand new skills are worth presenting this way. There are many rules, they’re not so tricky, but they deserve clear names and a multitude of examples. So I outlined my notes, the examples I want to give, and I’ll have a printed copy to work off in class this week as we revisit the confusing stuff from last week, and jump into the void of new stuff.
Here are my notes.
Plan Multdiv Rules Lecture Notes
Once kids have a decent grasp of the rules I’m going to give them a shot at two problems I like. There’s slight WCYDWT bend to the first problem.
What’s the largest open box you can fold from a sheet of paper?
Plan Boxfold
If you want to run calculations on the fly for this problem, you might want to download Maxima (a great Computer Algebra System). I’ve made a little file that can crunch numbers for this problem given arbitrarily sized pieces of paper, and fold lengths.
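If you'd rather not install Maxima, a quick Python pass works too; the sketch below assumes a standard 8.5 × 11 inch sheet and the usual cut-and-fold reading of the problem, neither of which the post pins down.

```python
# Largest open box from an 8.5 x 11 sheet (assumed size): cut squares of side x
# from the corners and fold up, so V(x) = x * (8.5 - 2x) * (11 - 2x).

def volume(x, w=8.5, l=11.0):
    return x * (w - 2 * x) * (l - 2 * x)

best_x = max((i / 1000 for i in range(1, 4250)), key=volume)
print(round(best_x, 3), round(volume(best_x), 2))
# about x = 1.585 inches, giving roughly 66.15 cubic inches
```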
Here are the resources for the boxfold problem.
The rest of the files I have for this unit are in the box.
# Update!
By way of David Cox if you want to do the Box Folding problem, consider incorporating this virtual manipulative it’s perfect.
1. Note the second option in the “Cell” menu in Maxima is “Evaluate All Cells” this will crunch the numbers initially, and you should do it again whenever you change any of the input…
March 14, 2010
## Right Strategy, Wrong Time.
A few weeks ago I asked students in my Algebra class a problem akin to the typical
A car leaves Philly for DC at 3 going 60mph and another leaves DC for Philly at 4 going 65mph, if the two cities are 138 miles apart how long until they cross paths and how far from Philly will they be when they do?
Nobody had any idea how to think about motion like this. Despite having done a few problems over the course of the unit, they got lost trying to think this through. It didn’t help that up until I asked them on the test I had been making them translate this stuff into a system of equations.
What went wrong? Well, students lacked a clear understanding of the problem in its context. Other problems in which they needed to set up a system of equations were fine.
$7\,\text{sodas} + 8\,\text{slices} = \$15$
$8\,\text{sodas} + 9\,\text{slices} = \$17$
This stuck; they get the situation and could probably talk like an expert about how they set out to solve the problem. With the motion problems they don’t have this familiarity and easily accessible framework.
To try to put a point on it, the problem was: my students lacked a framework for understanding a problem and consequently the ability to translate the problem from situation to mathematical calculations. To solve this involved some backpedaling on my part. First, I was arbitrarily forcing the tool on my students to solve this problem, and, in this case, I was missing the point. Kids didn’t understand how to think about objects moving together and apart, especially with respect to their velocity.
We needed some good visual fodder and time for students to get their hands dirty. I found an interested student to volunteer to bring a Flip in and shoot a bunch of scenes of kids walking back and forth. We spent a day taking these shots. We came back in and watched them, watched them and drew on them. Kind of like Dan’s “Graphing Stories” except the goal of graphing was to get kids to see that in different situations it is appropriate to add or subtract the speeds of two objects. We also noticed that if two objects cross at a given point, the sum of their distance traveled is each to the total distance between them originally.
Lights started to come on for them, and our class discussions improved dramatically. Kids started to explain why if two objects are approaching each other you can add their velocity, and that if you know the distance between them it becomes easy to think about when they’ll cross and exactly where on the route that will take place.
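Worked the way the class eventually saw it, the Philly–DC problem from earlier comes down to a couple of lines (the numbers are the ones in the prompt):

```python
# The Philly-DC crossing problem: after the second car starts at 4:00, the
# remaining gap closes at the sum of the two speeds.
distance = 138            # miles between the cities
v_philly, v_dc = 60, 65   # mph
head_start = 1            # the Philly car left an hour earlier (3:00 vs. 4:00)

gap_at_4 = distance - v_philly * head_start        # 78 miles left at 4:00
t = gap_at_4 / (v_philly + v_dc)                   # hours after 4:00 until they meet
from_philly = v_philly * (head_start + t)

print(round(t * 60), round(from_philly, 1))
# they meet about 37 minutes after 4:00, roughly 97.4 miles from Philly
```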
We worked through this problem set and students had some good discussions. But I dragged it out too long, and didn’t have an answer key ready. That has been fixed, and the final page has answers to the activities.
ws-motionproblems
We spent a few more days talking about different problems. They worked through one which I collected and scanned, though I apparently didn’t get a double-sided scan so quite a few of these are cut off, but I still think it helps to see what they were doing. Edit: I’m going to check and see if it’s cool if I post work before it shows up here.
A few take aways. First, I shouldn’t be forcing a tool or strategy, say systems of equations, for solving problems onto students as a general rule. Instead of viewing problems through the lens of the current unit, it’s important to constantly be assessing whether or not students have a general familiarity, comfort and ability to work within the environment of a given situation. Before you get to any of the heavy lifting of solving problems in a tough context, create a solid understanding. Role play, demonstrate, enable them to perform the “back-of-the-napkin” calculation that demonstrates an ability to translate the context from a situation into calculation. Despite that I was teaching a unit on systems I shouldn’t force students to apply them blindly to every problem they see in the unit, and when past knowledge is sufficient for the task at hand, I should be praising them for choosing a successful strategy and make sure that others notice the connection as well. If I want to teach systems of equations I need to choose problems and situations that make them necessary and useful. In many cases, this is a place where our textbook fails us. The problems are often contrived, shallow, and/or easy to solve by much simpler strategies. In a few cases they get it right, this is where you want to spend your class time if you want your kids to appreciate the fancy new mathematical tricks they’re learning and not yearn for the old days.
Finally, this comes back to WCYDWT. We shot those videos, and I’ve been working on editing them. I think there will be a general wow effect when students see the final product, but for this class I don’t think that will translate into more meaningful work, we’ve spent our time in this context, they’re familiar now. As I’ve been editing them I can’t shake the feeling of being frequently rewarded when a calculation adds a meaningful piece of info into the scene or turning pixels roughly into meters and determining a subjects’ speed by calculating the difference in location over time. The math feels really useful. It’s hard to get that by having them watch the video. I had it by making it.
The magic of WCYDWT is when students are What-Can-You-Do-With-This-ing. Taking a situation and opening up a bag of math on it. Finding meaningful calculations, bringing ideas out of their repertoire and putting them to work to do something meaningful, interesting, and rewarding. That’s a hard thing to engineer. But it sure seems like a good way to spend your class time.
March 10, 2010
## Shawn Cornally, Think Thank Thunk
Having spent a good chunk of my evening reading over Shawn Cornally’s fantastic blog “Think Thank Thunk” I’m engaged. I enjoy so much having a window into how teachers make decisions around their curriculum and class design. Knowing how someone chooses to introduce concepts and motivate students is a big bonus, and something Cornally communicates deftly. Read two posts and see if you agree. I highly recommend it.
March 5, 2010
## Doug Lemov’s Taxonomy
Having had the luck to attend two of his seminars, I eagerly read through the NYT article on Building a Better Teacher. At his presentations he had a list of simple behavioral choices that improved teachers’ effect. I walked away from both presentations with lots of ideas of concrete changes to make in class, and recommend the article, and forthcoming book on the strength of his presentations.
Update I just checked out Uncommon Schools and found they have a page devoted to Lemov’s Taxonomy. I noticed there are a few more videos than in the Times’ article. That might be a good place to start if you’re looking for info on it. Though they don’t offer access to all their 700 clips. Too bad for everybody else.
February 22, 2010
## Equation Challenge
I took Maria’s MathType vs. LateX equation challenge. Partially out of curiosity for how long it would take and also to put out an example of how I use emacs and LaTeX on a regular basis in my planning and in the creation of materials. If you’re having trouble falling asleep, let me recommend this video to you.
Here’s the original document. And here’s the pdf I generated.
Equation Challenge
Download the LaTeX equationchallenge.tex file if you want.
February 17, 2010
## How To Teach: Will It Hit The Corner?
Here’s what I did. I taught this lesson today, and I had mixed results. For the record I’d say that kids in the second group that got the lesson were really into it, whereas the first group gave it a “so-what-else-did-you-cook-up-for-us?” rating. I started this post thinking I’d tell you what I did, but I kept thinking things like “if I’d only done this it would have gone better,” so here’s the if-I-would-have-done-this-then-it-would-have-gone-better version of what I did today. I know, too much hyphenation for one day.
At the end of this lesson, students will be able to
• Work backwards to solve a complex problem
• Use patterns and inductive reasoning to extrapolate data (logo positions on a large grid)
• Apply software to investigate (potentially) algebraic relationships
• Construct a rule for the corners problem
Setup: Show kids office clip #1 (see post). Then open up the 5 min. gridded version and play that in the background while acting like you’re trying to give directions. Get the kids to make guesses about whether or not it will hit a corner. Then after they’re done shouting about the epic moment at 1:43 (.333) when it does, get them to predict the next corner. It doesn’t happen. At least, not in the 5 minutes that Dan provided.
Distribute copies of the warm up below
Ws Office Boxcorner Warmups
As kids graduate from this, ask them if they’d like a bigger version (the desired answer being: “YES!”). Then hand them the first page of this sheet.
Ws Office Boxcorner
The answer here being that it doesn’t matter: all starting squares result in a corner hit after varying numbers of steps. This can be shown with the software or through a pattern that many kids will notice after they have traced out sufficient numbers of steps. Talk about the patterns, ask them if they have any theories about how this might work on grids of varying dimension.
If you have access to a set of Mac’s you’re in luck. Ask the students if they have developed any general theories about these types of problems. How does the path to the corner depend on the starting location and grid dimensions? Allow students to work with the software to investigate and develop their theories. Use this sheet and have students download office-grid-basic.py, the simulator I designed.1
Ws Office Boxcorner Pythonlab
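If you don't have the original simulator handy, here's a minimal stand-in (my own sketch, not the author's office-grid-basic.py): a point bounces diagonally around a w × h grid of cells and we count the steps until it lands in a corner.

```python
# A minimal bouncing-logo simulator: diagonal motion, one cell per step,
# reflecting off the edges, until a corner is reached (or we give up).

def steps_to_corner(w, h, x, y, dx=1, dy=1, max_steps=100000):
    corners = {(0, 0), (0, h - 1), (w - 1, 0), (w - 1, h - 1)}
    for step in range(1, max_steps + 1):
        if not 0 <= x + dx <= w - 1:
            dx = -dx
        if not 0 <= y + dy <= h - 1:
            dy = -dy
        x, y = x + dx, y + dy
        if (x, y) in corners:
            return step
    return None    # no corner hit within the step limit

for start in [(0, 3), (2, 5), (7, 1)]:
    print(start, steps_to_corner(10, 8, *start))
# on a 10 x 8 grid these starts hit a corner after 18, 16 and 20 steps
```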
I leave off here at this incomplete point, because I’m not sure of a good way to finish this off.
Any ideas? How would you finish this lesson? Would you do anything differently? I’d love to hear, as I might be doing this again with different classes in a week or two.
1. If you intend to run the program, see the prior post for how-to info.
http://reference.iucr.org/dictionary/Incommensurate_magnetic_structure
|
# Incommensurate magnetic structure
### From Online Dictionary of Crystallography
Structure magnétique incommensurable (Fr.)
## Definition
An incommensurate magnetic structure is a structure in which the magnetic moments are ordered, but without a periodicity that is commensurate with that of the nuclear structure of the crystal. In particular, the magnetic moments have a spin density with wave vectors that have at least one irrational component with respect to the reciprocal lattice of the atoms. Or, in the case of localized moments, the spin function $S(\mathbf{n}+\mathbf{r}_j)$ (where the $j$th atom has position $\mathbf{r}_j$ in the unit cell) has Fourier components with irrational indices with respect to the reciprocal lattice of the crystal.
## Details
When the atoms of the basic structure are at positions n+rj, where rj is the position of the jth atom in the unit cell, then the spin function for an incommensurate magnetic structure is
$$S(\mathbf{n}+\mathbf{r}_j) = \sum_{\mathbf{k}} \hat{S}(\mathbf{k})_j \exp\left(2\pi i\,\mathbf{k}\cdot(\mathbf{n}+\mathbf{r}_j)\right), \qquad \mathbf{k}=\sum_{i=1}^n h_i \mathbf{a}_i^* \quad (h_i~\text{integer}).$$
This spin structure is incommensurate if at least one of the basis vectors $\mathbf{a}_i^*$ has an irrational component with respect to the reciprocal lattice of the basic structure. Incommensurate magnetic structures may be simple (e.g. linear), but quite complicated ones, such as fan structures, occur as well. In rare-earth compounds especially, very complicated magnetic phase diagrams have been found.
http://crypto.stackexchange.com/questions/301/is-sha-512-bijective-when-hashing-a-single-512-bit-block
# Is SHA-512 bijective when hashing a single 512-bit block?
It's been said that CRC-64 is bijective for a 64-bit block.
Is the corresponding statement true for SHA-2?
-
## 2 Answers
It would be very freakish if it turned out to be true. It is not an expected property of SHA-512 to have such bijectivity. It would be worrisome, even, because that's a kind of structure that should not appear in a proper cryptographic hash function.
Actually proving that SHA-512, for 512-bit blocks, is not bijective, would already be a kind of a problem. We do not expect to be able to prove such things without breaking the function.
One "simple" way to prove this would be a single collision (on short inputs), which in theory could be found by chance. But for finding such, we expect to have to calculate about $2^{256}$ hashes (and store/compare them to the other values) to have a non-neglible probability to find a collision.
For example, if I have one zettabyte of fast accessible storage (which would be more than half of humanity's currently stored data), I can store about $2^{62}$ SHA-512 hashes. The probability that among these there is at least one duplicate would be about $2^{-389} \approx 10^{-117}$. If every human (around $2^{33}$ in some years) repeats this experiment about once a week (i.e. $2^6$ times a year) with $2^{62}$ new hashes, humanity each year has a chance of $2^{-351}$ of finding a collision. Assuming that humanity works on this for 10 times as long as the universe has already existed (i.e. 130 billion years), we get a chance of $2^{-314} \approx 10^{-94}$. For comparison, the probability that a ticket wins the main prize in the German weekly lottery (6/49) is around $2^{-27}$, so the probability that humanity will ever find a collision (in the scenario outlined above) is lower than the probability of me winning the main prize each week, for 11 weeks in sequence (with one ticket per week).
So we can expect collisions to stay hidden until the end of times.
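For what it's worth, the $2^{-389}$ figure is just the usual birthday approximation $p \approx n(n-1)/2 \cdot 2^{-b}$ with $n = 2^{62}$ stored hashes and $b = 512$ output bits; a two-line sketch of the arithmetic (my own, added for illustration):

```python
# Birthday approximation: P(collision among n stored b-bit hashes) ~ n*(n-1)/2 / 2**b,
# valid when n is far below 2**(b/2).  Work in log2 to keep the numbers readable.
b, log2_n = 512, 62
log2_p = 2 * log2_n - 1 - b   # log2(n**2 / 2) - b
print(log2_p)                 # -389, i.e. P ~ 2**-389
```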
-
Assuming processing power doesn't skyrocket... – ObsessiveSSOℲ Jun 5 '12 at 20:00
Not even then. I won't detail the math in a comment, but if we were to construct the most energy-efficient computer theoretically possible, it would require all of the output from a supernova in order to cycle a 219-bit counter. And that's an almost imperceptible fraction of the energy necessary to run a counter through 256 bits. – Stephen Touset Feb 28 at 18:02
@StephenTouset And even then, SHA-512 is more expensive to compute than iterating a counter, so you can add a few more bits of required computational work to that. – Thomas Mar 1 at 10:00
It still astonishes me to consider that given the hundreds of zettabytes of data our species has ever stored digitally, we haven't even made perceptible progress in representing all the values capable of being stored in 32 bytes of the tiniest chip of RAM ever created. – Stephen Touset Mar 1 at 18:33
No. Cryptographic hash functions model a random function, not a random permutation. A significant fraction of output hash values are expected to be unreachable and another fraction have multiple preimages.
While bijectivity in general does not mean that the inverse is easy to calculate, for the types of constructs which are used in hash functions in practice, if it were bijective, it could be easily inverted and would thus not make a very good hash function.
There are other known bijective (candidates for) one-way functions, like the ones used for asymmetrical cryptography, but these constructs tend to be a lot slower, and are different from the ones used in hash functions.
-
http://math.stackexchange.com/questions/252425/convergence-of-infinite-potence-tower
# Convergence of an (infinite) power tower?
Let $(a_n)_{n\in\mathbb{N}}$ be a sequence in $\mathbb{C}$ or $\mathbb{R}$. Which constraints must $(a_n)$ satisfy so that $b_n := a_1^{a_2^{\cdots^{a_n}}}$ converges as $n\rightarrow\infty$?
For constant sequences with $e^{-e}<a_n<e^{1/e}$, the sequence $b_n$ converges; so I guess sequences converging to a value within $(e^{-e},e^{1/e})$ make $b_n$ converge as well (the first few $a_i$ in $a_1^{a_2^{a_3^{\cdots}}}$ should not matter at some point?!)
Is this the constraint that is to be applied? That there exists an $N\in\mathbb{N}$ such that $\forall n>N: a_n\in(e^{-e},e^{1/e})$? Is there any literature about this? Power towers are pretty interesting ;)
EDIT: The question has still not been solved. However, we were able to prove that my idea was wrong (which was pretty easy): $$5^{1^{5^{1^{\cdots}}}}$$ converges. (5 is just an example; the same works for any number.) Furthermore, it has to be assumed that at most one $a_n$ is equal to $0$ (for obvious reasons as well).
When using logarithms to rewrite the power tower, using $$a^{b}=e^{b\log a}$$ to reduce $a_1^{a_2^{a_3^{\cdots}}}$ to $$e^{(\log a_1)\, a_2^{a_3^{\cdots}}}$$ and so on, it is required that the remaining term (strongly) converges to $0$ in order to keep the resulting term $e^{e^{e^{\cdots}}}$ finite.
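For experimenting with concrete sequences, a finite tower is easy to evaluate from the top down; here is a minimal Python sketch (my own, just for playing with the examples above):

```python
import math

def tower(terms):
    """Evaluate the finite power tower a_1^(a_2^(...^(a_k))) from the top down."""
    result = 1.0
    for a in reversed(terms):
        result = a ** result
    return result

# Constant sequence sqrt(2), which lies inside (e^-e, e^(1/e)): partial towers approach 2.
print(tower([math.sqrt(2)] * 40))

# The 5, 1, 5, 1, ... example from the edit: every partial tower equals 5.
print(tower([5, 1] * 20))
```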
-
What is $b_1$ and does your recursion really express what you want to know, since it doesn't fit the title of the question? – WimC Dec 6 '12 at 17:49
Oh right, forgot. Fixed. – CBenni Dec 6 '12 at 17:53
1
There is an article of D.F.Barrow (1936) "Infinite exponentials" in the "American Mathematical Monthly" Vol 43, No.3, Pg150-160, where he deals exactly with this question and uncovers some properties of such infinite nonconstant exponential-towers. Unfortunately it is not openly online, but via JStore one can access it. – Gottfried Helms Dec 7 '12 at 20:04
1
– Gottfried Helms Dec 7 '12 at 20:12
1
I fiddle with "tetration" for several years now. A lot of literature comes across that time... Even online-correspondents give sometimes hints :-) – Gottfried Helms Dec 7 '12 at 22:25
http://mathhelpforum.com/differential-equations/212280-differential-equation-problem.html
# Thread:
1. ## Differential equation problem
Hello,
I'm trying to find the integrating factor $\lambda(x,y)$ of this differential equation: $(\sin y-3x^2\cos y)\cos y\,dx+x\,dy=0$, with the restriction that $\lambda$ should be a function of only one variable, $x$ or $y$.
Any ideas on solving this equation?
Thanks
2. ## Re: Differential equation problem
Originally Posted by patzer
Hello,
I'm trying to find the integrating factor $\lambda(x,y)$ of this differential equation: $(\sin y-3x^2\cos y)\cos y\,dx+x\,dy=0$, with the restriction that $\lambda$ should be a function of only one variable, $x$ or $y$.
Any ideas on solving this equation?
Thanks
Hint: If $P\,dx+Q\,dy=0$ and $\lambda(x,y)=\mu(z)$, the equality $(\mu P)_y=(\mu Q)_x$ sometimes (only sometimes) allows one to predict the form of $z$. Take into account that in general, if we don't know a priori the form of the integrating factor, there is no general method for finding it.
3. ## Re: Differential equation problem
Hello sir,
This is my progress so far:
I know that $\frac{d\lambda}{\lambda}=-\frac{\frac{\partial P}{\partial y}-\frac{\partial Q}{\partial x}}{P}\,dy$ where $\frac{\frac{\partial P}{\partial y}-\frac{\partial Q}{\partial x}}{P}=\frac{3x^2\sin(2y)+\cos(2y)-1}{3x^2\cos^2 y-\cos y\sin y}$.
My problem now is with this expression $\frac{3x^2\sin(2y)+\cos(2y)-1}{3x^2\cos^2 y-\cos y\sin y}$; I simply don't know how to simplify it.
Wolfram Alpha gives $2\tan y$ as the answer. Can you help me on this point?
Thank you again.
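For what it's worth, the simplification is just the double-angle identities: $\cos(2y)-1=-2\sin^2 y$ and $\sin(2y)=2\sin y\cos y$, so the numerator factors as $2\sin y\,(3x^2\cos y-\sin y)$ and the denominator as $\cos y\,(3x^2\cos y-\sin y)$, leaving $2\tan y$. A quick symbolic check (a sketch with sympy, not part of the original exchange):

```python
import sympy as sp

x, y = sp.symbols('x y')
num = 3*x**2*sp.sin(2*y) + sp.cos(2*y) - 1
den = 3*x**2*sp.cos(y)**2 - sp.cos(y)*sp.sin(y)

print(sp.simplify(num/den))                                   # expect the equivalent of 2*tan(y)
print(sp.N((num/den - 2*sp.tan(y)).subs({x: 0.7, y: 0.3})))   # numeric spot check, ~0
```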
http://math.stackexchange.com/questions/271688/projective-geometry-hyperplane?answertab=votes
# projective geometry hyperplane
For $j=0,\ldots,n$ consider the affine hyperplane $A_j:=e_j+\langle e_0,\ldots,e_{j-1},e_{j+1},\ldots,e_n\rangle$ in $\mathbb K^{n+1}$ and the associated embedding $\tau_j:\mathbb K^n\rightarrow\mathbb KP^n, \tau_j(x_1,\ldots,x_n):=[x_1:\ldots:x_j:1:x_{j+1}:\ldots:x_n]$, where $e_j\in\mathbb K^{n+1}$ is the $j$th unit vector.
Now I come to my question: how can I show that the images of $\tau_j$ cover all of $\mathbb KP^n$, or, written mathematically, $\mathbb KP^n=\bigcup_{j=0}^n \tau_j(\mathbb K^n)$?
I think it should be a short proof but since I am a newbie in projective geometry I do not really have an idea.
-
I changed $<e_0,...,e_n>$ to $\langle e_0,\ldots,e_n\rangle$. That is standard TeX usage. – Michael Hardy Jan 6 at 21:03
## 1 Answer
Given $[x_0:\ldots:x_n]$, there exists $j$ such that $x_j\neq0$; therefore $$\left[\frac{x_0}{x_j}:\ldots:\frac{x_n}{x_j}\right]=[x_0:\ldots:x_n]$$ has a $1$ in the $(j+1)$-th place, therefore sits in the image of $\tau_{j}$. As the point was generic, this shows that the images of these maps cover the whole projective space.
-
Thanks for your help, but did you also use the fact that $A_j$ is an affine hyperplane? Or does it follow, because this is an affine hyperplane, that there exists an $x_j$ such that $x_j\not=0$ ?? – Montaigne Jan 6 at 21:19
No, I used the definition of $\mathbb{KP}^{n+1}$: it is the quotient of $\mathbb{K}^{n+2}\setminus\{0\}$ by the equivalence relation $v\sim w$ iff $v=\lambda w$ with $\lambda\in\mathbb{K}$. Therefore, every element of $\mathbb{KP}^{n+1}$ is a homogeneous (n+2)-tuple with at least one non vanishing element. – wisefool Jan 6 at 21:50
http://mathhelpforum.com/discrete-math/185535-pigeon-hole-principle-2-a.html
# Thread:
1. ## Pigeon-hole Principle (2)
(a) Let n >= 2. We select n + 1 different integers from the set {1, 2, ..., 2n}. Is it true that there will always be two among the selected integers so that one of them is equal to twice the other?
(b) Is it true that there will always be two among the selected integers so that one is a multiple of the other?
My attempt:
(a) is false. A counterexample for n = 5: {1, 3, 4, 5, 7, 9}.
I think (b) is true but cannot seem to prove it. Any ideas?
2. ## Re: Pigeon-hole Principle (2)
This says "solved", but I don't see a solution here... I was interested to see if this was correct:
Let's let $A=\{1,2,\ldots ,n\},B=\{n+1,n+2,\ldots ,2n\}$. Suppose we wanted to attempt to pick $n+1$ elements such that no one is a multiple of another. Since $|B|=n$, at least one choice must come from $A$. Let's pick $x_1\in A$. This eliminates at least one possible choice from $B$; namely, $2x_1$. Now there are only $n-1$ possible choices left in $B$ (at most) and $n$ choices left to make. It follows that we need to choose another $x_2\in A$, which eliminates at least one more choice from $B$. Iterating this process, we see that we need to pick every element from $A$ and can't pick anything from $B$, which is of course bad, since we have only chosen $n$ elements. Of course, we could probably come to a contradiction a bunch of different ways before exhausting all of $A$, but this seems to work.
I think this is right. If someone could make sure, that would be great; and if there are better solutions, I'd be interested in seeing those too.
3. ## Re: Pigeon-hole Principle (2)
Originally Posted by topspin1617
Let's pick $x_1\in A$. This eliminates at least one possible choice from $B$; namely, $2x_1$.
What if $2x_1 \in A$?
4. ## Re: Pigeon-hole Principle (2)
a) is not true:
If n=5, 2n=10; {1,2,3,4,5,6,7,8,9,10}
Now we will pick 6 elements from the above group:
{1,3,4,5,7,9}
I didn't see your solution for a...
An idea for part b)
I will give the idea via example when n=10.
n=10 the 2n=20
X={1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20}
I will divide set X to some subsets:
A_1={1,2,4,8,16}
A_2={3,6,9,18}
A_4={5,10,20}
A_5={7,14}
A_6={11}
A_7={13}
A_8={17}
A_9={19}
If we take 11 elements there must be at least two of them that are in the same set!
5. ## Re: Pigeon-hole Principle (2)
Originally Posted by Also sprach Zarathustra
An idea for part b)
I will give the idea via example when n=10.
n=10 the 2n=20
X={1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20}
I will divide set X to some subsets:
A_1={1,2,4,8,16}
A_2={3,6,9,18}
A_4={5,10,20}
A_5={7,14}
A_6={11}
A_7={13}
A_8={17}
A_9={19}
If we take 11 elements there must be at least two of them that are in the same set!
6 and 9 are in the same subset but one is not a multiple of the other. Where are 12 and 15?
6. ## Re: Pigeon-hole Principle (2)
Originally Posted by alexmahone
What if $2x_1 \in A$?
Ugh. Of course.
Have to think some more.
7. ## Re: Pigeon-hole Principle (2)
Maybe the other guy was on to something up there. For $1\leq r\leq n$, let $A_r=\{2^k\cdot (2r-1)\mid k\geq 0, 2^k\cdot (2r-1)\leq 2n\}$. Certainly every element of $\{1,2,\ldots ,2n\}$ is in at least one of the $A_r$. (Of course all of the odd numbers are; for the even numbers, factor out the highest power of 2 possible, and see that it is in the set corresponding to the remaining odd factor.) Note that there are exactly $n$ of these sets $A_r$. Also, notice that for any fixed $r$, every element of $A_r$ has a divisibility relation with every other element of that same set. This means that, if we were picking elements which share no such relation, the best we could do is pick one element from each set. Since there are only $n$ sets, the best we could possibly do is pick $n$ elements.
8. ## Re: Pigeon-hole Principle (2)
Originally Posted by topspin1617
Maybe the other guy was on to something up there. For $1\leq r\leq n$, let $A_r=\{2^k\cdot (2r-1)\mid k\geq 0, 2^k\cdot (2r-1)\leq 2n\}$. Certainly every element of $\{1,2,\ldots ,2n\}$ is in at least one of the $A_r$. (Of course all of the odd numbers are; for the even numbers, factor out the highest power of 2 possible, and see that it is in the set corresponding to the remaining odd factor.) Note that there are exactly $n$ of these sets $A_r$. Also, notice that for any fixed $r$, every element of $A_r$ has a divisibility relation with every other element of that same set. This means that, if we were picking elements which share no such relation, the best we could do is pick one element from each set. Since there are only $n$ sets, the best we could possibly do is pick $n$ elements.
Yeah, that's right. Good job!
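For anyone who wants to see it concretely, here is a small brute-force check of (b) for small $n$, together with the chains $A_r$ used in the argument above (a throwaway Python sketch, not part of the original thread):

```python
from itertools import combinations

def has_divisibility_pair(subset):
    """True if some element of the subset divides another one."""
    return any(b % a == 0 for a, b in combinations(sorted(subset), 2))

# Brute-force check of part (b) for n = 2..7.
for n in range(2, 8):
    assert all(has_divisibility_pair(s)
               for s in combinations(range(1, 2 * n + 1), n + 1))
print("part (b) holds for n = 2..7")

# The chains A_r = {2^k * (2r-1) : 2^k * (2r-1) <= 2n}, listed for n = 10.
n = 10
for r in range(1, n + 1):
    chain, m = [], 2 * r - 1
    while m <= 2 * n:
        chain.append(m)
        m *= 2
    print(r, chain)
```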
http://mathoverflow.net/questions/106147/is-partial-x-a-sphere-for-x-a-complete-cat0-space
## Is $\partial X$ a sphere for $X$ a complete CAT$(0)$ space?
Let $X$ be a complete CAT$(0)$ metric space, and $\partial X$ its boundary. One way to define $\partial X$ is as the set of equivalence classes of geodesic rays, where two rays $\gamma(t), \gamma'(t)$ are equivalent if they remain within a bounded distance of one another for large $t$.
Under what conditions and for which $n$ is it known that the boundary of a complete CAT$(0)$ $n$-manifold is homeomorphic to the $(n{-}1)$-sphere $\mathbb{S}^{n-1}$ ?
I believe this is known if $X$ is a complete $n$-dimensional Riemannian manifold of nonpositive sectional curvature, but I have not found clear counterexamples otherwise. I am especially interested in $n{=}3$. Pointers would be appreciated, as this area is relatively new to me. Thanks!
Answered. Here is a snippet from the Davis-Januszkiewicz paper Igor cites, describing an $n{=}5$ example where $\partial X \neq \mathbb{S}^4$:
I would still be interested to learn if a similar example is known for $n < 5$.
-
If I remember right from "Metric spaces of nonpositive curvature" (a book by Bridson and Haefliger), the Cartan-Hadamard theorem says that a Riemannian manifold is a CAT(0) space if and only if it has nonpositive sectional curvatures. So if I understand your question right the third paragraph implies the answer is "always." – Peter Samuelson Sep 2 at 2:15
2
@Peter Samuelson : It is true that a Riemannian manifold is CAT(0) if and only if it has nonpositive curvature. However, for $n \geq 4$ there exist smooth manifolds that can be given metrics (in the sense of "metric spaces") which are CAT(0), but which cannot be given nonpositively curved Riemannian metrics. This was proven for $n \geq 5$ by Davis-Januszkiewicz (see Igor's answer below for the ref) and very recently for $n=4$ by Davis-Januszkiewicz-LaFont (see math.osu.edu/~lafont.1/DMJ161.pdf). – Andy Putman Sep 2 at 2:34
That is surprising! (At least to someone who doesn't often think about non-smooth manifolds.) – Peter Samuelson Sep 2 at 3:03
2
@Peter Samuelson : These aren't non-smooth manifolds, just geodesic metrics on smooth manifolds that are not induced by Riemannian metrics. I'd have to think it through a bit, but I bet that you can arrange it so that the square of the distance function $M \times M \rightarrow \mathbb{R}$ is even smooth. But I agree that it is a surprising result. – Andy Putman Sep 2 at 3:09
## 1 Answer
For a PL manifold, the answer is YES. This is proved by M. Davis and T. Januszkiewicz in Davis, Michael W.(1-OHS); Januszkiewicz, Tadeusz(PL-WROC) Hyperbolization of polyhedra. J. Differential Geom. 34 (1991), no. 2, 347–388.
For a topological manifold the answer is NO, as shown in the same paper.
-
1
PDF download link for the paper cited by Igor: intlpress.com/JDG/archive/1991/34-2-347.pdf – Joseph O'Rourke Sep 2 at 13:54
3
I find the phrases "for a PL manifold" or "for a topological manifold" misleading. The version of Cartan-Hadamard proved on p348 of Davis-Januszkiewicz is for metrics that are piecewise Euclidean or piecewise hyperbolic on a PL manifolds. This is a very special class of metrics, which in a way is even more rigid than Riemannian metric of nonpositive curvature. In section 5 of the same paper they show that the theorem is false for piecewise Euclidean or piecewise hyperbolic on a topological manifolds. – Igor Belegradek Sep 3 at 1:34
2
I see no reason to expect the universal cover to be $\mathbb R^n$ for metrics which are NOT piecewise Euclidean or piecewise hyperbolic. Also a more recent relevant work is this paper math.uchicago.edu/~shmuel/existanceborel.pdf by Bartels-Lueck-Weinberger. – Igor Belegradek Sep 3 at 1:37
@Igor: Thanks, especially for the reference to the 2010 paper, "On hyperbolic groups with spheres as boundary"---very apropos! – Joseph O'Rourke Sep 3 at 1:48
2
@Joseph: if you have not yet done so, you may want to read the proof of Davis-Januszkiewicz's PL Cartan-Hadamard theorem. It is very illuminating. Essenially, the boundary at infinity is the inverse limit of large metric spheres which are connected sums of links in the singular points that lie inside the metric sphere. In the PL case the links are spheres, and with enough work the same follows for the inverse limit. In the non PL case there are points with non-1-connected links, so the fundamental group at infinity is nontrivial, and the universal cover cannot be $\mathbb R^n$. – Igor Belegradek Sep 3 at 2:43
http://mathinsight.org/prototypes_more_serious_questions_taylor_polynomials_refresher
# Math Insight
### Prototypes: More serious questions about Taylor polynomials
Beyond just writing out Taylor expansions, we could actually use them to approximate things in a more serious way. There are roughly three different sorts of serious questions that one can ask in this context. They all use similar words, so a careful reading of such questions is necessary to be sure of answering the question asked.
(The word ‘tolerance’ is a synonym for ‘error estimate’, meaning that we know that the error is no worse than such-and-such)
• Given a Taylor polynomial approximation to a function, expanded at some given point, and given a required tolerance, on how large an interval around the given point does the Taylor polynomial achieve that tolerance?
• Given a Taylor polynomial approximation to a function, expanded at some given point, and given an interval around that given point, within what tolerance does the Taylor polynomial approximate the function on that interval?
• Given a function, given a fixed point, given an interval around that fixed point, and given a required tolerance, find how many terms must be used in the Taylor expansion to approximate the function to within the required tolerance on the given interval.
As a special case of the last question, we can consider the question of approximating $f(x)$ to within a given tolerance/error in terms of $f(x_o), f'(x_o), f''(x_o)$ and higher derivatives of $f$ evaluated at a given point $x_o$.
In ‘real life’ this last question is not really so important as the third of the questions listed above, since evaluation at just one point can often be achieved more simply by some other means. Having a polynomial approximation that works all along an interval is a much more substantive thing than evaluation at a single point.
It must be noted that there are also other ways to approach the issue of best approximation by a polynomial on an interval. And beyond worry over approximating the values of the function, we might also want the values of one or more of the derivatives to be close, as well. The theory of splines is one approach to approximation which is very important in practical applications.
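As a concrete illustration of the third type of question, here is a small sketch (not from the original page) for $f(x)=e^x$ expanded at $0$: on $[-r,r]$ the Lagrange remainder satisfies $|R_n(x)| \le e^{r} r^{n+1}/(n+1)!$, so one can simply raise the degree until that bound falls below the required tolerance.

```python
from math import exp, factorial

def degree_needed(r, tol):
    """Smallest degree n with remainder bound e^r * r**(n+1)/(n+1)! <= tol on [-r, r]."""
    n = 0
    while exp(r) * r ** (n + 1) / factorial(n + 1) > tol:
        n += 1
    return n

print(degree_needed(1.0, 1e-6))   # degree sufficient on [-1, 1] for a tolerance of 10^-6
```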
#### Cite this as
Garrett P, “Prototypes: More serious questions about Taylor polynomials.” From Math Insight. http://mathinsight.org/prototypes_more_serious_questions_taylor_polynomials_refresher
Keywords: ordinary derivative, Taylor polynomial
#### Credits
The page is based off the Calculus Refresher by Paul Garrett.
http://math.stackexchange.com/questions/260597/linear-transformation?answertab=active
# Linear transformation
Let $T:\mathbb R^7 \longrightarrow \mathbb R^7$ be the linear transformation given by $$T(x_1, x_2, x_3, x_4 ,x_5, x_6, x_7) = (x_7, x_6, x_5 ,x_4 ,x_3 ,x_2 ,x_1)$$ Which of the following statements are true?
a. Determinant of $T$ is $1$
b. The matrix of $T$ w.r.t. the standard basis of $\mathbb R^7$ is a diagonal matrix
c. $T^7 =I$
d. Smallest $n$ s.t. $T^n =I$ is even.
I am stuck on this problem and don't know where to begin. Can anyone help me please?
-
1
– Learner Dec 17 '12 at 9:42
5
It seems like you don't want to do anything and just grasp the answer here. Show some effort, try as much as you can, and you will get hints from the community – Joyeuse Saint Valentin Dec 17 '12 at 9:49
At least you should try (c), which is pretty straightforward. Just compute $T(T(T(T(T(T(T(x_1,x_2,x_3,x_4,x_5,x_6,x_7)))))))$ and see if it is identical to $\left(x_1,x_2,x_3,x_4,x_5,x_6,x_7\right)$. – user1551 Dec 17 '12 at 9:53
## 2 Answers
Think about the problem visually. One of the ways of understanding a structured linear transformation is studying its action carefully on a generic vector. The following comments are aimed at solving this problem with minimum computation.
The linear transformation $T$ flips the vector vertically along the horizontal axis passing through the 4th co-ordinate. Clearly if you flip a vector twice, it becomes that vector again.
So $T^2 = I$ and option D is true. And thus option C must be false (Why?).
For option B, observe that for the matrix to be diagonal w.r.t. the standard basis, the vectors of the standard basis must be eigenvectors. Geometrically speaking, a flip cannot scale the length of a vector, and since $T^2 = I$ it can only fix an eigenvector or scale it by $-1$. So a vector is an eigenvector of the flip only if it is symmetric or anti-symmetric about the horizontal axis passing through the 4th coordinate, and apart from $e_4$ the standard basis vectors are neither. Hence the matrix of $T$ w.r.t. the standard basis is not diagonal.
However, the above reasoning tells us that there are four linearly independent eigenvectors associated with eigenvalue $1$: $$(1,0,0,0,0,0,1)^T,(1,1,0,0,0,1,1)^T, (1,1,1,0,1,1,1)^T,(1,1,1,1,1,1,1)^T$$
and 3 linearly independent eigenvectors associated eigenvalue $-1$:
$$(1,0,0,0,0,0,-1)^T,(1,1,0,0,0,-1,-1)^T, (1,1,1,0,-1,-1,-1)^T$$
Therefore $\det(T) = (-1)^3(1)^4 = -1$ and option A is wrong.
-
With respect to standard bases, $$T=\begin{pmatrix}0&0&0&0&0&0&1\\0&0&0&0&0&1&0\\0&0&0&0&1&0&0\\0&0&0&1&0&0&0\\0&0&1&0&0&0&0\\0&1&0&0&0&0&0\\1&0&0&0&0&0&0 \end{pmatrix}$$
As mentioned by @Learner, $T$ is a permutation matrix. Also note that $T^2x=T(Tx)=x$. Can you take it from here?
-
I am confused about whether the determinant of T is 1 or not....... Expanding along the first row and first column it's 0, but along the last row and last column it's 1??? – prakash Dec 17 '12 at 10:28
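A quick numerical check of the statements above (a small NumPy sketch, not part of the original thread):

```python
import numpy as np

T = np.fliplr(np.eye(7))                  # the matrix that reverses the seven coordinates
print(int(round(np.linalg.det(T))))       # -1, so (a) is false
print(np.allclose(T @ T, np.eye(7)))      # True: T^2 = I, so the smallest n with T^n = I is 2
```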
http://physics.stackexchange.com/questions/214/would-it-help-if-you-jump-inside-a-free-falling-elevator
# Would it help if you jump inside a free falling elevator?
Imagine you're trapped inside a free falling elevator. Would you decrease your impact impulse by jumping during the fall? When?
-
Your actual, literal answer to your actual question is "Yes, you would decrease your impact impulse". Your relevant answer is given below. – Justin L. Nov 5 '10 at 0:47
2
Another way to approach this: suppose shortly after the fall began, the elevator miraculously disappeared. Now it's just you in free-fall. I find this makes the outcome more intuitive. The elevator is not the problem, and so the problem cannot be solved by breaking your contact with the elevator. – Earwicker Nov 9 '10 at 23:35
2
@Earwicker: Nope, the elevator is very important. The falling human cannot change the total momentum of the system, but by jumping, he can transfer momentum to the elevator in the hope of reducing his own momentum. If there is no elevator, the human can't do anything (except drop his shoes, perhaps.) Instead of an elevator, imagine that the human has a jetpack. Now it is conceivable that he could reduce his own momentum at the expense of the ejected fuel. – Greg Graviton Jun 7 '11 at 13:20
2
@Greg Graviton - or alternatively, simply wait until you're in reach of the ground and use your leg muscles to push against it, thus transferring momentum to the Earth. Problem solved. – Earwicker Jun 7 '11 at 18:28
1
@Earwicker: Sure, but the disadvantage of this "method" is that it's very difficult to efficiently transfer muscle energy onto the ground in this situation. Then again, weightlessness in the falling elevator doesn't make it easy either. – Greg Graviton Jun 8 '11 at 14:06
## 7 Answers
As an addition to already posted answers and while realising that experiments on Mythbusters don't really have the required rigour of physics experiments, the Mythbusters have tested this theory and concluded that:
The jumping power of a human being cannot cancel out the falling velocity of the elevator. The best speculative advice from an elevator expert would be to lie on the elevator floor instead of jumping. Adam and Jamie speculated the attendant survived because the tight elevator shaft created an air cushion. This together with spring action from slack elevator cable could have slowed the car to survivable speeds.
(This myth is fueled by the story of an elevator attendant found alive but badly injured in an elevator car that had fallen down a shaft in the Empire State Building after a B-25 Medium Bomber crashed into it in 1945.)
-
1
Right, and the only reasonable fall you could survive that way is where the elevator's speed (after your jump is subtracted) isn't enough to kill you. Like that old joke about falling off the first step of a ladder. – Mark C Nov 4 '10 at 19:53
While everyone agrees that jumping in a falling elevator doesn't help much, I think it is very instructive to do the calculation.
# General Remarks
The general nature of the problem is the following: while jumping, the human injects muscle energy into the system. Of course, the human doesn't want to gain even more energy himself, instead he hopes to transfer most of it onto the elevator. Thanks to momentum conservation, his own velocity will be reduced.
I should clarify what is meant by momentum conservation. Denoting the momenta of the human and the elevator with $p_1=m_1 v_1$ and $p_2=m_2 v_2$ respectively, the equations of motion are
$$\dot p_1 = -m_1 g + f_{12}$$ $$\dot p_2 = -m_2 g + f_{21}$$
Here, $f_{21}$ is the force that the human exerts on the elevator. By Newton's third law, we have $f_{21} = -f_{12}$, so the total momentum $p=p_1+p_2$ obeys
$$\frac{d}{dt} (p_1 + p_2) = -(m_1+m_2) g$$
Clearly, this is not a conserved quantity, but the point is that it only depends on the external gravity field, not on the interaction between human and elevator.
# Change of Momentum
As a first approximation, we treat the jump as instantaneous. In other words, from one moment to the other, the momenta change by
$$p_1 \to p_1 + \Delta p_1, \qquad p_2 \to p_2 + \Delta p_2 .$$
Thanks to momentum "conservation", we can write
$$\Delta p := -\Delta p_1 = \Delta p_2 .$$
(Note that trying to find a force $f_{12}$ that models this instantaneous change will probably give you a headache.)
How much energy did this change of momentum inject into the system?
$$\Delta E = \frac{(p_1-\Delta p)^2}{2m_1} + \frac{(p_2+\Delta p)^2}{2m_2} - \frac{p_1^2}{2m_1} - \frac{p_2^2}{2m_2} .$$ $$= \Delta p(\frac{p_2}{m_2} - \frac{p_1}{m_1}) + (\Delta p)^2(\frac1{2m_1}+\frac1{2m_2}) .$$
Now we make use of the fact that before jumping, the velocity of the elevator and the human are equal, $p_1/m_1 = p_2/m_2$. Hence, only the quadratic term remains and we have
$$(\Delta p)^2 = \frac2{\frac1{m_1}+\frac1{m_2}} \Delta E .$$
Note that the mass of the elevator is important, but since elevators are usually very heavy, $m_1 \ll m_2$, we can approximate this with
$$(\Delta p)^2 = 2m_1 \Delta E .$$
# Energy reduction
How much did we manage to reduce the kinetic energy of the human? After the jump, his/her kinetic energy is
$$E' = \frac{(p_1-\Delta p)^2}{2m_1} = \frac{p_1^2}{2m_1} - 2\frac{\Delta p\cdot p_1}{2m_1} + \frac{(\Delta p)^2}{2m_1}.$$
Writing $E$ for the previous kinetic energy, we have
$$E' = E - 2\sqrt{E \Delta E} + \Delta E = (\sqrt E - \sqrt{\Delta E})^2$$
or
$$\frac{E'}{E} = (1 - \sqrt{\Delta E / E})^2 .$$
It is very useful to estimate the energy $\Delta E$ generated by the human in terms of the maximum height that he can jump. For a human, that's roughly $h_1 = 1m$. Denoting the total height of the fall with $h$, we obtain
$$\frac{E'}{E} = (1 - \sqrt{h_1/h})^2 .$$
Thus, if a human is athletic enough to jump $1m$ in normal circumstances, then he might hope to reduce the impact energy of a fall from $16m$ to a fraction of
$$\frac{E'}{E} = (1 - \sqrt{1/16})^2 \approx 56 \% .$$
Not bad.
Then again, jumping while being weightless in a falling elevator is likely very difficult...
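A tiny sketch of that final formula (added for illustration, not part of the answer): the fraction of kinetic energy left if you can normally jump height $h_1$ and fall height $h$.

```python
def remaining_fraction(h1, h):
    """E'/E = (1 - sqrt(h1/h))**2 from the formula above."""
    return (1 - (h1 / h) ** 0.5) ** 2

for h in (4, 9, 16, 25):
    print(h, round(remaining_fraction(1.0, h), 2))   # e.g. h = 16 m gives 0.56, as in the text
```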
-
Very good! +1. Amazing that no one but us bothered to make the calculation. – Carl Brannen Jun 8 '11 at 22:44
The reason that jumping can make a relatively large difference is that the kinetic energy is proportional to the square of the velocity. Thus relatively small changes to the velocity can result in relatively large changes to the kinetic energy. In addition, the velocity which a human can achieve in jumping is a substantial percentage of the velocity of fatal falls.
Let the human weigh $m$, let him jump with upward velocity $v$ and let the elevator fall from a height $H$. Then the human's initial potential energy will be $10mH$. What fraction of this potential energy can he avoid having turned into kinetic energy?
At any given point before jumping, the human's kinetic energy and potential energy add up to $10mH$. If he jumps at a height of $h$, his potential energy will be $mgh$ and his kinetic energy will be $mg(H-h) = 0.5mV^2$ where $V$ is the elevator (and human before jumping) velocity, taken as a positive number so that $V=\sqrt{2g(H-h)}$.
At the moment of jumping, he will not reduce potential energy, but instead will decrease his velocity. So his kinetic energy decreases from $0.5mV^2$ to $0.5m(V-v)^2$. Therefore his total energy will become:
$$mgh + 0.5m(V-v)^2$$ $$= mgh + 0.5m(\sqrt{2g(H-h)}-v)^2$$ $$= mgH +0.5mv^2 - mv\sqrt{2g(H-h)}.$$ The terms have a simple interpretation. $mgH$ is the energy in the absence of any jumping. $0.5mv^2$ is the energy of the jump (in the frame of reference of the human). And the remaining term is the reduction in energy due to the reference frame conversion.
We wish to make the third term as negative as possible. This occurs when h is small so we put $h=0$ (as our intuition suggests, indeed, the best time to jump is just as the elevator impacts). Then the remaining kinetic energy is: $$mgH +0.5mv^2 -mv\sqrt{2gH}.$$
An example of a height $H$ which is generally fatal for a human is $H=10m$. A maximum velocity for a very athletic human jump is on the order of $v=3.64$ m/s. Such a jump would give a maximum height of 0.66 meters. See: Vertical Jump Test calculator for data on human jumping capabilities by sex, age, and athletic ability. Using $g=10$ and $m=50$, the kinetic energy before and after jumping are: $$mgH = 5000J$$ $$mgH+0.5mv^2-mv\sqrt{2gH} = 2757J$$
Thus, in fact, jumping could reduce the kinetic energy suffered by a factor of two. The final collision with the floor would be reduced from a height of 10m = 32.8 feet, to a height of 5.5m = 18 feet.
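Those numbers are easy to reproduce (a quick sketch, not part of the original answer):

```python
from math import sqrt

m, g, H, v = 50.0, 10.0, 10.0, 3.64      # mass, gravity, fall height, jump speed
E_no_jump = m * g * H
E_jump = m * g * H + 0.5 * m * v**2 - m * v * sqrt(2 * g * H)
print(E_no_jump, round(E_jump))          # 5000.0 J and roughly 2757 J
```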
-
From the question of simple reduction of velocity, the answer's already been given (yes, but not enough to make a significant difference) ... but there's one other issue at play here -- how the forces are transfered to the body.
If you're standing upright, it'd all be transferred through your legs; as Flaviu mentioned, lying down so the force is spread across a larger area would be a better option than this. But, if you could manage to jump at just the right time, and you knew how to take a fall (bend your knees, roll into it, etc.), it might be possible to spread the force over a greater time and distance, therefore reducing the impulse, and thus the actual damage to your body.
Unfortunately, I don't think the chances of timing it correctly would be very good, so it wouldn't be particularly advisable. You'd have to weigh the risk & benefit of this strategy vs. just lying down.
-
What if you jumped at exact moment when the elevator started to fall? Then you'd actually gain energy and would worsen your chances. – user145 Nov 9 '10 at 8:44
But if it's all transferred to your legs, won't the impact mostly break your legs and hips, rather than damaging vital organs? – Jerry Schirmer Jun 8 '11 at 15:03
@Jerry : it's a matter of force over distance (and time) ... You have additional distance and time to decelerate if you can control the fall, which would significantly reduce the force (the impulse would remain the same, but the time is increased). – Joe Jun 10 '11 at 3:06
It's not about whether or not you can jump up fast enough to cancel out a 60 mph impact. If you could jump up at 60 mph, you wouldn't need to, because passively absorbing the impact (a 60 mph deceleration) would be less stressful than actively accelerating upward to 60 mph (total impact cancellation); you would be subjecting yourself to the same, if not greater, g-forces. It seems more practical to jump up at 30 mph (partial impact cancellation), which can effectively cut down the g-forces by distributing the braking distance, sort of like the last-second "braking" rockets of cosmonaut spacecraft that fire just before a parachute landing.
Impact severity is largely defined by the shortness of the stopping distance. There is little, if any, substitute for lengthening the stopping distance when it comes to alleviating an impact. So the question is more one of whether or not jumping up is ever the wisest option.
Suppose you're in an elevator that's headed for an uncushioned landing at twice the velocity you can jump up: you're coming down at 10 mph and can jump up at 5 mph. If your feet leave the floor at the precise moment you reach 5 mph, the deceleration would consist of two 5 mph impacts, each with 1/4 the kinetic energy of an unmodified 10 mph impact, i.e. half the impact energy in total. The main problem is that you reach your highest (and lowest) speed of 5 mph at the critical part of your upward jump where you're in the worst position to absorb the balance of the impact by rolling into it. You might be safer ONLY doing that, namely flexing your knees and rolling into what skydivers refer to as a parachute landing fall, or PLF.
-
If you jumped just before impact, your speed towards the bottom of the elevator shaft would go down a little bit. But consider that the elevator falls tens of meters, while you jump about one meter. Your jumping ability is quite small and probably won't make a noticeable difference
-
"the elevator falls tens of meters, while you jump about one meter" - how does this difference of distance relate to a difference in speed? You're not saying they're proportional are you? – LarsH Nov 4 '10 at 16:48
They're not proportional. I'm just pointing out that one is much larger than the other. – Mark Eichenlaub Nov 4 '10 at 18:07
There is a relationship. Assuming falling from rest, constant gravitational acceleration and no air resistance, the square of your final speed is proportional to the distance fallen. – Dom Nov 9 '10 at 10:51
For example, you jump 1 metre, but fall 9, that's 9 times as much, the difference in the velocities is the square root of this, so the elevator's falling speed is 3 times as much as the speed of your jump. – Dom Nov 9 '10 at 10:53
No. You'd reduce your impact velocity, but only by a negligibly small fraction.
-
1
The analyses by Carl and Greg suggest that the fraction is not negligible. Either do the math and explain why it is negligible (perhaps you're assuming that the elevator is very, very high, or that the jumper is extremely weak?), or I'd suggest deleting this wrong answer. – Kevin Vermeer Feb 25 '12 at 18:31
@Kevin Ok, negligible... The problem with Carl's and Greg's answers is that energy is not what kills you, the shears on tissues due to instant acceleration do -- 30% less velocity is not enough, one would need an order of magnitude. – mbq♦ Feb 25 '12 at 23:40
http://mathhelpforum.com/business-math/64875-need-help.html
# Thread:
1. ## Business Calculus - Need help! :(
I've no idea if this is the right section to post this,
But I had some problems dealing with this problem.
Total costs: $TC(x) = \frac{7}{8}x^2 + 5x + 1$
Market price: $P(x) = 15 - \frac{3}{8}x$
Tax (per unit produced): $t$
Find:
1. Quantity to produce to maximise profits
2. What should t be, so as to maximise tax revenue?
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
What I've done:
I derived the profit function as:
Then, I differentiate the profit function:
Then I'm stuck cos' I can't find x.
2. Originally Posted by geekie
I've no idea if this is the right section to post this,
But I had some problems dealing with this problem.
Total costs: $TC(x) = \frac{7}{8}x^2 + 5x + 1$
Market price: $P(x) = 15 - \frac{3}{8}x$
Tax (per unit produced): $t$
Find:
1. Quantity to produce to maximise profits
2. What should t be, so as to maximise tax revenue?
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
What I've done:
I derived the profit function as:
Then, I differentiate the profit function:
Then I'm stuck cos' I can't find x.
Notation:
$\text{TC}(x)$ is the total cost function
$\text{P}(x)$ is the market price function
$\text{TR}(x)$ is the revenue function
$\text{T}(x)$ is the tax function
$\text{Pr}(x)$ is the profit function
$$\begin{gathered}\text{TC}(x) = \frac{7}{8}x^2 + 5x + 1 \\ \text{P}(x) = 15 - \frac{3}{8}x \;\Rightarrow\; \text{TR}(x) = x \cdot \text{P}(x) = x\left(15 - \frac{3}{8}x\right) = 15x - \frac{3}{8}x^2 \\ \text{T}(x) = tx \end{gathered}$$
$$\text{Pr}(x) = \text{TR}(x) - \text{TC}(x) - \text{T}(x) = 15x - \frac{3}{8}x^2 - \left(\frac{7}{8}x^2 + 5x + 1\right) - tx = 10x - \frac{5}{4}x^2 - tx - 1$$
$$\text{Pr}(x) \to \max \quad\text{when}\quad \frac{d}{dx}\text{Pr}(x) = 0, \qquad \frac{d}{dx}\text{Pr}(x) = \frac{d}{dx}\left(10x - \frac{5}{4}x^2 - tx - 1\right) = 10 - \frac{5}{2}x - t$$
$$10 - \frac{5}{2}x - t = 0 \;\Longleftrightarrow\; \frac{5}{2}x = 10 - t \;\Longleftrightarrow\; 5x = 20 - 2t \;\Longleftrightarrow\; x = 4 - \frac{2}{5}t \quad\left(\text{equivalently } t = 10 - \frac{5}{2}x\right)$$
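A quick sketch (not part of the original reply) finishing the computation with sympy: maximize the profit in $x$ for fixed $t$, then maximize the tax revenue $t\,x(t)$ in $t$.

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
profit = 10*x - sp.Rational(5, 4)*x**2 - t*x - 1

x_opt = sp.solve(sp.diff(profit, x), x)[0]        # x = 4 - 2t/5, as above
tax_revenue = sp.expand(t * x_opt)                # t*(4 - 2t/5)
t_opt = sp.solve(sp.diff(tax_revenue, t), t)[0]   # the t that maximizes tax revenue
print(x_opt, t_opt)
```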
http://scicomp.stackexchange.com/questions/5328/complex-least-squares-problem
# Complex least-squares problem
Given a matrix $\mathbf{A} \in \mathbb{C}^{m\times n}$, I solve the following least-squares problem $$Re(\mathbf{A}^H \mathbf{A})x=Re(\mathbf{A}^H\mathbf{b}).$$ If the matrix $\mathbf{A}$ were a real matrix, the solution to the equation above could be written as $$\mathbf{x} = \sum_{i=1}^{rank(\mathbf{A})} \frac{\mathbf{u}_i^T\mathbf{b}}{s_i}\mathbf{v}_i,$$ where $\mathbf{u}_i$ and $\mathbf{v}_i$ are the corresponding left and right singular vectors and $s_i$ is the $i$th singular value.
My question is whether the solution to the least-squares problem stated in the first equation can be written in a similar way given the SVD of $\mathbf{A} = \mathbf{U}\mathbf{S}\mathbf{V}^H$?
I know that one could split the matrix into $\mathbf{\tilde A} = [Re(\mathbf{A});~Im(\mathbf{A})]$ and equivalently solve the real problem, but staying within the complex SVD of the original matrix is the main concern here.
Thank you.
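Not part of the question, but a quick NumPy sanity check that the stated normal equations are exactly those of the stacked real least-squares problem, since $\tilde{\mathbf{A}}^T\tilde{\mathbf{A}} = Re(\mathbf{A}^H\mathbf{A})$ and $\tilde{\mathbf{A}}^T\tilde{\mathbf{b}} = Re(\mathbf{A}^H\mathbf{b})$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 3
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
b = rng.standard_normal(m) + 1j * rng.standard_normal(m)

# Solve Re(A^H A) x = Re(A^H b) directly ...
x1 = np.linalg.solve((A.conj().T @ A).real, (A.conj().T @ b).real)

# ... and via the stacked real least-squares problem A_tilde x ~ b_tilde.
A_tilde = np.vstack([A.real, A.imag])
b_tilde = np.concatenate([b.real, b.imag])
x2, *_ = np.linalg.lstsq(A_tilde, b_tilde, rcond=None)

print(np.allclose(x1, x2))   # True
```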
-
That should be $u_{i}^{H}b$ rather than $u_{i}^{T}b$. – Brian Borchers Feb 19 at 22:33
@BrianBorchers: thanks, but somehow I can't edit it... Have you got any idea how to write this equation given the complex SVD so that it solves stated normal eqs? – Alexander Feb 20 at 7:32
## 2 Answers
I can't see how it can be done, but others might be able to help you further. I get, with $A=USV^H$, $A^H=VS^H U^H$,
$Re(A^H A)x = Re(A^H b) \;\Rightarrow\; Re(V S^H U^H U S V^H) x = Re(V S^H U^H b) \;\Rightarrow\; Re(V S^H S V^H) x = Re(V S^H U^H b)$
Normally, you would now reduce $V S^H$ away, but since they are inside the real part, and since $Re(V S^H S V^H) \neq Re(V S^H) Re(S V^H)$, I am not sure how to proceed. Can anyone else get an idea?
-
Check out the LAPACK routines ZGELSS or ZGELSD. They solve the LS problem using the SVD. See the official LAPACK documentation for the routines.
-
But OP only wants to solve for the real part? – OscarB Feb 21 at 13:37
@OscarB: that is not what I read in the OP's post. He has a matrix with complex entries and wants to solve the complex LS problem... – GertVdE Feb 21 at 16:36
http://mathhelpforum.com/differential-equations/97387-difficult-differential-equation-system.html
# Thread:
1. ## Difficult Differential Equation System
The following is a model for HIV.
Infected cells = T
Concentration of viral particles = V
$\dot{T} = 0.06V - \frac{T}{2} , \dot{V} = 100T - cV$
where $c > 0$ is the rate constant for viral clearance.
1) Show that there are 2 possible types of critical points at the origin and one dividing case, and state the values of c which correspond to each case. In each case clearly state the stability characteristics of the critical point at the origin.
2) If c = 5.5, what will the phase portrait of the system look like? (The teacher said to look at the phase portrait non-uniformly, say over $(-0.01, 0.01) \times (-1, 1)$.)
3) Under treatment, the model changes to
$\dot{T} = 0.06V - \frac{T}{3}, \qquad \dot{V} = -cV$
What are the possible types of critical point at the origin in this model?
4) Introducing a new variable, $V_N$, which is the concentration of non-infected particles, the new system is
$\dot{T} = 0.06V - \frac{T}{3}, \qquad \dot{V} = -cV, \qquad \dot{V_N} = 100T - cV_N$
Prove that the eigenvalues of the linearized matrix are all negative in this case and find out what they are.
Why are the solutions to this not simply exponentials?
I know this is a massive question, so I am going to show my working in a separate post shortly.
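One possible starting point for (1), sketched numerically (my own addition, not part of the post): the system is linear, so the type of the critical point at the origin is governed by the eigenvalues of the coefficient matrix $\begin{pmatrix} -1/2 & 0.06 \\ 100 & -c \end{pmatrix}$, whose determinant $c/2 - 6$ changes sign at $c = 12$.

```python
import numpy as np

def eigenvalues(c):
    """Eigenvalues of the linearization [[-1/2, 0.06], [100, -c]] at the origin."""
    return np.linalg.eigvals(np.array([[-0.5, 0.06], [100.0, -c]]))

for c in (3.0, 5.5, 12.0, 20.0):
    print(c, eigenvalues(c))   # the sign pattern of the eigenvalues changes as c passes 12
```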
http://mathoverflow.net/questions/319?sort=newest
## Spectrum of the Grothendieck ring of varieties
Here's a problem that may ultimately require just simple algebraic-geometry skills to be solved, or perhaps it's very deep and will never be solved at all. From the comments, some literature and my memory it appears this was posed by Grothendieck as part of the big program of motives.
Consider classes of complex algebraic varieties X modulo relations
````
[X] - [Y] = [X\Y],
[X x Y] = [X] x [Y],
````
Also, if you're familiar with taking the inverse of the affine line, let's do that too: $$\exists \mathbb A^{-1}\quad \text{such that}\quad [\mathbb A] \cdot [\mathbb A^{-1}] = [\mathbb A^0].$$
(+ if you want, you can also take the idempotent completion and the formal completion by $\mathbb A^{-1}$).
It's not hard to see that you can add (formally) and multiply (geometric product as above) those things, so they form a ring. Let's denote this ring `Mot` (It's actually very close to what Grothendieck called baby motives.)
And for things that form a ring you can study their `Spec`. For example, you can talk about points of the ring — each point is by definition a homomorphism to complex numbers.
Question: what are the properties of `Spec Mot`? How to describe its points?
For example, one point is the Euler characteristic $\chi \in \text{Spec}\,\mathbf{Mot}$, since it's additive and multiplicative (it's even integer-valued!). Any other homomorphism to the complex numbers is thus sometimes called a generalized Euler characteristic.
There's also a plane there given by mixed Hodge polynomials (that is, polynomials whose coefficients are weighted Hodge numbers $h^{p,q}_k$), since Hodge polynomial at a given point satisfies those relations too (see the references below).
As Ben says below, things would become even more interesting if we considered this ring for schemes over $\mathbb Z$, because then each $q$ would give a generalized Euler characteristic $\chi_q$ that counts points of $X(\mathbb F_q).$
-
ilya- you're not so far from 250. I agree with you that requirement is a little low at the moment, but once we get more users, it will be easy to get 100 reputation from a good answer, and 250 won't seem so bad. – Ben Webster♦ Oct 11 2009 at 23:23
The ring, before inverting the Lefschetz motive L=[A], is usually called the Grothendieck ring of varieties. It is known that it has zero-divisors (at least in char. 0). A presentation of this in terms of generators and relations has been given by F. Bittner (arxiv:math/0111062v1). I think that it is common to pass to certain completions of the ring with respect to the Lefschetz motive. – David Rydh Oct 12 2009 at 18:34
By the way, I think you've made a bit too strong a claim about Hodge numbers. Think about A^1 versus A^1 minus a point unioned with a point. You're right that weighted Euler characteristic is invariant under scissors congruence, but I'm not sure if you can get much more out of Hodge theory – Ben Webster♦ Oct 12 2009 at 19:13
See the references below. – Ilya Nikokoshev Dec 31 2009 at 18:36
## 4 Answers
This ring is very important for motivic integration; so it might be useful for you to read surveys on this subject.
Yet I would say that this ring is too large and complicated. A reasonable factor-ring of it is K_0 of Chow motives. If you take Chow motives with rational coefficients then as a group it (conjecturally!) would be a free abelian group with generators being isomorphism classes of indecomposable numerical motives.
You could also be interested in weight complexes: see H. Gillet, C. Soule, Descent, motives and K-theory, J. Reine Angew. Math. 478 (1996) or my own paper http://arxiv.org/abs/math/0601713
-
I think I'll be collecting references I found in this answer, rather than in the original (already large) post:
It's a community wiki — feel free to add!
-
One interesting fact about Spec M is that it isn't integral; i.e., the ring M has zero divisors. This was proved by Poonen in 2002:
"The Grothendieck ring of varieties is not a domain"
Re points of Spec M: I suppose if you considered varieties over R instead of C, you would in addition have the map sending X to the Euler characteristic of X(R), though I've never seen this used.
Update: Oh, I've never seen this used because it's totally wrong. For instance, A^0(R) and A^1(R) have Euler characteristic 1 but P^1(R) doesn't have Euler characteristic 2. I think the mod-2 Euler characteristic would probably be OK here.
-
Yes, there are many interesting directions one might explore over different fields! I think it's quite straightforward that Mot is not a domain over C, yes. – Ilya Nikokoshev Oct 17 2009 at 9:32
3
That's just because you're measuring real Euler characteristic wrong. A better, for certain purposes, Euler characteristic is given by Schanuel in MR0842922 and other articles. For finite cell complexes (like CW complexes, but don't require that each cell have compact closure), the formula is just the alternating sum of the number of cells at each dimension. So e.g. R has one 1-cell and nothing else: Schanuel's euler characteristic is -1 in this case. – Theo Johnson-Freyd Oct 17 2009 at 19:46
Cool! So does this actually give a homomorphism from K_0(Var/R) to Z? – JSE Oct 18 2009 at 0:01
2
Incidentally, another article by Schanuel, namely MR1173024, is more mathematical, and certainly more interesting for this discussion. For example, he computes the Burnside rings for various geometric categories, mostly those whose objects comprise the Boolean ring generated by positive solutions to polynomials/R (resp linear functions) and whose maps are piecewise polynomial (resp affine). He also discusses the case of varieties/C, but remarks that the Burnside ring is too complicated to actually compute (he computes a useful quotient). – Theo Johnson-Freyd Oct 18 2009 at 0:50
If you considered varieties over Z instead of over C, you would have homomorphisms given by counting points over all the different finite fields.
-
http://mathoverflow.net/questions/67611/self-dual-representations
self-dual representations
Let $V$ be a finite-dimensional irreducible representation (complex or $\ell$-adic) of a group $G$ (compact Lie group or algebraic group etc.). Does there always exist a linear character $\rho$ of $G$, such that $V\otimes\rho$ is a self-dual irrep. of $G?$ Namely $V\otimes\rho\simeq(V\otimes\rho)^*.$ If not, is there any necessary/sufficient conditions on $V$ for it to be "twisted self-dual"?
If this is always the case, then in particular, if $G$ has no non-trivial linear characters (e.g. $G$ is a simply-connected compact Lie group or a perfect finite group), then every irrep. of $G$ is self-dual.
Thanks.
-
1
The question isn't formulated precisely enough at this point to be answered directly. But in any case the complex irreducible representations of a simply connected compact Lie group certainly aren't always self-dual as one sees immediately in rank 2 examples. – Jim Humphreys Jun 12 2011 at 22:33
Isn't the natural representation of $\mathrm{SL}(n)$ a counter example to the first question? – Bruce Westbury Jun 12 2011 at 22:34
@Bruce: Yes, or similarly for compact groups such as `$SU(n)$` in the context of the question and my comment. – Jim Humphreys Jun 12 2011 at 23:35
This already fails for finite groups. For instance, $PSL(2,7)$ has no nontrivial linear characters, but has two non-isomorphic 3-dimensional irreducible reps which are dual to each other. – Kevin Ventullo Jun 13 2011 at 6:48
1 Answer
If you want a representation $V$ to be self-dual up to a character, then either $S^2V$ or $\Lambda^2V$ (considered as representations of $G$) should have a 1-dimensional summand (corresponding to the isomorphism $V \to V^*\otimes\chi$). But as Jim mentioned, there are a lot of representations $V$ for which both $S^2V$ and $\Lambda^2V$ are irreducible. For example, this is the case for $G = SL(n)$ and $V$ the standard representation --- the example suggested by Bruce.
-
http://unapologetic.wordpress.com/2011/11/30/homotopies-as-2-morphisms/?like=1&source=post_flair&_wpnonce=5d5a17dad5
# The Unapologetic Mathematician
## Homotopies as 2-Morphisms
Last time, while talking about homotopies as morphisms I said that I didn’t want to get too deeply into the reparameterization thing because it could get too complicated. But since when would I, of all people, shy away from 2-categories? In case it wasn’t obvious then, it’s because we’re actually going to extend in the other direction.
Given any two topological spaces $M$ and $N$, we now don’t just have a set of continuous maps $\hom(M,N)$, we have a whole category consisting of those maps and homotopies between them. And I say that composition isn’t just a function that takes two (composable) maps and gives another one, it’s actually a functor.
So let’s say that we have maps $f_1,f_2:M\to N$, maps $g_1,g_2:N\to P$, and homotopies $F:f_1\to f_2$ and $G:g_1\to g_2$. From this we can build a homotopy $G\circ F:g_1\circ f_1\to g_2\circ f_2$. The procedure is obvious: for any $t\in[0,1]$ and $m\in M$, we just define
$\displaystyle[G\circ F](m,t)=G(F(m,t),t)$
That is, the time-$t$ frame of the composed homotopy is the composition of the time-$t$ frames of the original homotopies. It should be straightforward to verify that this composition is (strictly) associative, and that the identity map — along with its identity homotopy — acts as an (also strict) identity.
What we need to show is that this composition is actually functorial. That is, we add maps $f_3:M\to N$ and $g_3:N\to P$, change $F$ and $G$ to $F_1$ and $G_1$, and add homotopies $F_2:f_2\to f_3$ and $G_2:g_2\to g_3$. Then we have to check that
$(G_2*G_1)\circ(F_2*F_1)=(G_2\circ F_2)*(G_1\circ F_1)$
That is, if we stack $G_2$ onto $G_1$ and $F_2$ onto $F_1$, and then compose them as defined above, we get the same result as if we compose $G_2$ with $F_2$ and $G_1$ with $F_1$, and then stack the one onto the other.
This is pretty straightforward from a bird’s-eye view, but let’s check it in detail. On the left we have
$\displaystyle\begin{aligned}{}[(G_2*G_1)\circ(F_2*F_1)](m,t)&=[G_2*G_1]([F_2*F_1](m,t),t)\\&=\left\{\begin{array}{lr}G_1([F_2*F_1](m,t),2t)&0<t<\frac{1}{2}\\G_2([F_2*F_1](m,t),2t-1)&\frac{1}{2}<t<1\end{array}\right.\\&=\left\{\begin{array}{lr}G_1(F_1(m,2t),2t)&0<t<\frac{1}{2}\\G_2(F_2(m,2t-1),2t-1)&\frac{1}{2}<t<1\end{array}\right.\end{aligned}$
Meanwhile, on the right we have
$\displaystyle\begin{aligned}{}[(G_2\circ F_2)*(G_1\circ F_1)](m,t)&=\left\{\begin{array}{lr}[G_1\circ F_1](m,2t)&0<t<\frac{1}{2}\\{}[G_2\circ F_2](m,2t-1)&\frac{1}{2}<t<1\end{array}\right.\\&=\left\{\begin{array}{lr}G_1(F_1(m,2t),2t)&0<t<\frac{1}{2}\\G_2(F_2(m,2t-1),2t-1)&\frac{1}{2}<t<1\end{array}\right.\end{aligned}$
And so we do indeed have a 2-category with topological spaces as objects, continuous maps as 1-morphisms, and continuous homotopies as 2-morphisms. Of course, if we’re in a differential topological context we get a 2-category with differentiable manifolds as objects, smooth maps as 1-morphisms, and smooth homotopies as 2-morphisms.
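Here is a small numerical sanity check of that interchange law, added by way of illustration; the particular maps and homotopies below are made-up toy examples, not anything from the post.

```python
# Toy check of (G2*G1)∘(F2*F1) = (G2∘F2)*(G1∘F1); a homotopy is modelled as H(m, t).

def compose(G, F):
    """Horizontal composition: [G∘F](m, t) = G(F(m, t), t)."""
    return lambda m, t: G(F(m, t), t)

def stack(H2, H1):
    """Vertical composition (stacking): run H1 on [0, 1/2], then H2 on [1/2, 1]."""
    return lambda m, t: H1(m, 2*t) if t < 0.5 else H2(m, 2*t - 1)

# made-up homotopies between maps R -> R
F1 = lambda m, t: m + t          # f1(m) = m    to  f2(m) = m + 1
F2 = lambda m, t: m + 1 + t      # f2           to  f3(m) = m + 2
G1 = lambda n, t: 2*n + t        # g1(n) = 2n   to  g2(n) = 2n + 1
G2 = lambda n, t: 2*n + 1 + t    # g2           to  g3(n) = 2n + 2

lhs = compose(stack(G2, G1), stack(F2, F1))
rhs = stack(compose(G2, F2), compose(G1, F1))

assert all(abs(lhs(m, t) - rhs(m, t)) < 1e-12
           for m in (-1.2, 0.0, 0.3) for t in (0.1, 0.4, 0.7, 0.95))
print("interchange law holds at the sampled points")
```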
Posted by John Armstrong | Differential Topology, Topology
## 1 Comment »
1. [...] we’ve seen that differentiable manifolds, smooth maps, and homotopies form a 2-category, but it’s not [...]
Pingback | December 2, 2011
http://mathoverflow.net/revisions/16830/list
1 [made Community Wiki]
In question #14739, I asked whether the product of two ideals of a commutative ring $R$ could be defined lattice-theoretically the same way the sum and intersection can. Bjorn Poonen gave a great counterexample that shows the answer is no! This supports a point fpqc had been trying to make to me earlier that the relationship between $R$ and the Zariski topology on $\text{Spec } R$ was more subtle than I had thought: in particular, it has more structure than just the Galois connection.
http://mathoverflow.net/questions/26861?sort=votes
## Explicit ordering on set with larger cardinality than R
Is it possible to construct (without using the Axiom of Choice) a totally ordered set $S$ with cardinality larger than $\mathbb{R}$?
Motivation: A total ordering is often called a “linear ordering”. I have heard the following explanation: “If you have a total ordering on a set S, you can plot the set on the real line such that elements to the right are greater than elements to the left”. Formally this means that there exists a function $\phi:S\rightarrow \mathbb{R}$ such that for all $x, y\in S$, $x < y \Leftrightarrow \phi(x) < \phi(y)$. This is of course correct if the set is finite or countable (and it gives a good intuition on what a total ordering is), but obviously not if $|S|>|\mathbb{R}|$, and using the axiom of choice it is easy to “construct” a total ordering on, say, the power set of $\mathbb{R}$. But I would prefer to have a more concrete counterexample, and this is why I asked myself this question.
Later I realized that it was possible to construct a total ordering on a set $S$ with $|S|=|\mathbb{R}|$, such that no such function $\phi$ exists, but I still think that the above question is interesting.
-
Lexicographic order on R^R? – Qiaochu Yuan Jun 2 2010 at 22:23
3
For lexicographic, the exponent should be well-ordered (or maybe reverse well-ordered). With the usual ordering on the exponent R, you can have two members of R^R but no least spot where they disagree. – Gerald Edgar Jun 3 2010 at 1:09
## 5 Answers
Yes. By Hartogs' theorem, there is an ordinal that has no injection into $R$. The minimal such ordinal is the smallest well-ordered cardinal not injecting into $R$. It is naturally well-ordered by the usual order on ordinals. None of this needs AC.
One can think very concretely about the order as follows: Consider all subsets of $R$ that are well-orderable. By the Axiom of Replacement, each well-order is isomorphic to a unique ordinal. Let $\kappa$ be the set of all ordinals that inject into $R$ in this way. One can show that $\kappa$ itself does not inject into $R$, and this is the Hartogs number for the reals.
More generally, of course, there is no end to the ordinals, and they are all canonically well-ordered, without any need for AC.
But in terms of the remarks in your "motivation" paragraph, there are linear orders that do not map order-preservingly into $R$ that are not larger than $R$ in cardinality. For example, the ordinal $\omega_1$ cannot map order-preservingly into $R$, since if it did so, then there would be an uncountable family of disjoint intervals (the spaces between the successive ordinals below $\omega_1$), but every such family is countable by considering that the rational numbers are dense. Another way to see this is to observe that the real line has countable cofinality for every cut, but $\omega_1$ has uncountable cofinality.
Lastly, there is a subtle issue about your request that the order be "larger than $R$". The examples I give above via Hartogs' theorem are not technically "larger than $R$", although they are not less than $R$ in size. The difficulty is that without AC, the cardinals are not linearly ordered, and so these two concepts are not the same. But you can turn the Hartogs argument into a strict example of what you requested by using the lexical order on $R\times\kappa$, where $\kappa$ is the Hartogs number of $R$. This order is strictly larger than $R$ in size, and it is canonically linearly ordered by the lexical order.
-
Doh! I was only trying to define a total ordering on the power set of R or on R^R. Is this possible? – Sune Jakobsen Jun 2 2010 at 20:54
1
"there are linear orders that do not map order-preservingly into R that are not larger than R in cardinality". Yes, the example I thought of was R^2 with lexicographical ordering. – Sune Jakobsen Jun 2 2010 at 21:01
1
Sune, in general it is a weak choice principle that every set admits a linear order. I'm not sure if this extends down to $R^R$, but I think it might. Perhaps $2^R$ is also a natural case: Can you linearly order $2^R$ without AC? I think that in the usual model with $\neg AC$, the set $2^R$ has no linear order... – Joel David Hamkins Jun 2 2010 at 21:12
1
@Joel: The original question has a slightly cheaper answer than the product of $\mathbb R$ and its Hartogs ordinal. You could just use the disjoint union of $\mathbb R$ and its Hartogs ordinal, ordered by putting all of $\mathbb R$ before all the ordinals. – Andreas Blass Jul 29 at 3:01
Andreas, yes, I agree. – Joel David Hamkins Jul 30 at 17:30
This is a response to Joel's comment about whether $2^{\mathbb R}$ can be linearly ordered without choice. In general, no. There is a concrete obstacle, actually: Vitali's equivalence relation. Recall that this relation is defined by $x\sim y$ iff $x-y\in{\mathbb Q}$. Now consider ${\mathbb R}/\sim$, the collection of equivalence classes. This is a concrete subset of $2^{\mathbb R}$ that in general cannot be linearly ordered without some appeal to choice.
For example, under determinacy, this set is not linearly orderable, so in $L({\mathbb R})$ there is no linear ordering of it in the presence of large cardinals. In short, under reasonable assumptions, there is no way of linearly order this set without appealing to choice.
Things get interesting. For example, in $L({\mathbb R})$ (the smallest model of ZF that contains all the reals), in the presence of large cardinals, a set is linearly orderable iff ${\mathbb R}/\sim$ does not inject into it, and a set is well-orderable iff ${\mathbb R}$ does not inject into it.
Here are some details, it is not a complete argument, it requires knowing some descriptive set theory (and there may be some typos), but the sketch should give a decent idea.
I'll actually work with $2^\omega/E_0$ (which is another manifestation of Vitali's relation). I learned this from Benjamin Miller, by the way, and it immediately became key for some results Richard Ketchersid and I have been working on. The result itself, that under AD this quotient does not admit a linear ordering, has been known to descriptive set theorists for ages, I am not sure who first noticed it.
Recall that $x\mathrel{E_0}y$, for $x,y\in2^\omega$, iff there is some $n$ such that for all $m\ge n$ we have $x(m)=y(m)$. It suffices to assume that all sets of reals have the Baire property.
Suppose $R$ is a linear ordering of $2^\omega/E_0$. Then the pullback $\hat R$ of $R$ is a quasi-ordering of $2^\omega$. Begin by noticing that $\hat R$ is not meager. Otherwise, $2^\omega$ itself would be meager, being the union of $\hat R$ and $\hat R^{-1}$ (its “flip”).
Note that the set
$\{x \mid \{y \mid x \mathrel{\hat R} y\}\text{ is non-meager}\}$
is itself non-meager, by the Kuratowski-Ulam theorem, so we can fix some $s \in 2^{<\omega}$ such that
$\{x \mid \{y \mid x \mathrel{\hat R} y\}\text{ is co-meager in }N_s\}$
is non-meager, where $N_s$ is the basic neighborhood consisting of sequences in $2^\omega$ that begin with $s$.
The key point is that if a set has the Baire property and $E_0$ restricted to that set is smooth (i.e.,there is a Borel reduction to the identity on that set), then the set is actually meager (this follows from the Glimm-Effros dichotomy of Harrington-Kechris-Louveau).
Note that $E_0$ is smooth on the set
$\{x\mid$ there are $y$, $z$ such that $x \mathrel{E_0} y \mathrel{E_0} z$, and exactly one of $\{y'\mid y' \mathrel{\hat R} y\}$, $\{z'\mid z \mathrel{\hat R} z'\}$ is co-meager in $N_s\}$.
This is not hard, but needs a tiny bit of thought. The point is that any $E_0$-class admits a natural ${\mathbb Z}$-ordering, and on the set above we can pick representatives from each class, since we actually have a way of assigning an “origin” to this ordering.
It follows that the set
$\{x \mid$ for all $x' E_0 x$ the set $\{y \mid x' R y\}$ is co-meager in $N_s\}$
is non-meager.
Now: This set is $E_0$-invariant, and therefore it must actually be co-meager.
But then $\hat R$ itself is co-meager in $N_s \times N_s$. Now let $E$ be the equivalence relation $\hat R\cap \hat R^{-1}$. Then $E$ is also co-meager in $N_s \times N_s$. But then it admits an equivalence class which is co-meager in $N_s$. Since $E$ actually contains $E_0$, we then have that it is co-meager in all of $2^\omega$.
But then $R$ cannot be a linear order, as it cannot distinguish between co-meager many $E_0$-classes.
[As a final remark: One can of course organize the whole thing using Lebesgue measurability rather than the property of Baire, and Fubini's theorem rather than Kuratowski-Ulam. But the argument using the Baire property shows that this is equiconsistent with ZF (by Shelah), while using measurability would in consistency require an inaccessible.]
-
Thanks, Andres. Would it be possible for you briefly to explain why $R/\sim$ cannot be linearly ordered under AD? – Joel David Hamkins Jun 3 2010 at 5:15
An easier solution was given by Sierpinski, see Theorem 6 of Sur une proposition qui entraîne l'exsistence des ensembles non-mesurables, Fund. Math. 34 - matwbn.icm.edu.pl/ksiazki/fm/fm34/fm34121.pdf – François G. Dorais♦ Jun 3 2010 at 11:21
François : Sorry for not replying earlier, I just noticed I hadn't. First, thanks for the reference, I didn't know this dated to Sierpinski. And I guess it is a matter of taste, but to me his argument is nit simpler. His looks more direct since he argues in terms of Lebesgue measurability (which I don't for consistency strength issues) and for the reals and the Vitali relation rather than $2^\omega$ and $E_0$ (but this is mainly because it is in this fashion that one usually needs to apply the result nowadays in descriptive set theory, so it made sense to give the presentation that way.) – Andres Caicedo Nov 24 2010 at 17:00
3
Here's a yet easier (in my opinion) solution, probably due to Mycielski in one of the early papers on AD. Given a linear ordering of `$2^\omega/E_0$`, let $A$ be the set of those `$x\in2^\omega$` such that the `$E_0$`-class of $x$ precedes that of its reflection (i.e., $n\mapsto 1-x(n)$). If $A$ were Lebesgue measurable, it would have measure 0 or 1, because it's $E_0$-invariant. But it's sent to its complement by reflection, which preserves measure, so the measure of $A$ must be 1/2 --- contradiction. – Andreas Blass Nov 30 2010 at 15:14
@Andreas: This is better, thanks! Pretty sure I read it at some point. I need to stop forgetting these things... – Andres Caicedo Nov 30 2010 at 15:32
As Joel pointed out, for every set $X$ there is a first ordinal $\aleph(X)$ which does not inject into $X$ — the Hartogs number of $X$. If $X$ is a linear ordering of any kind, there cannot be an order embedding of $\aleph(X)$ into $X$.
If all you want is a linear ordering that does not order-embed into $\mathbb{R}$ then $\aleph(\mathbb{R})$ might be much bigger than necessary. The first uncountable ordinal $\aleph_1$ has this property. Indeed, if $\phi:\aleph_1\to\mathbb{R}$ were an order-embedding, then for every $\alpha \in \aleph_1$, we could find a rational number in the interval $(\phi(\alpha),\phi(\alpha+1))$. These rationals must all be distinct so this gives an injection of $\aleph_1$ into $\mathbb{Q}$. Since $\mathbb{Q}$ is countable and $\aleph_1$ isn't, such an injection cannot exist.
Finally, the powerset of any ordinal number $\kappa$ has a natural ordering which can be described as follows. Given distinct $x, y \in \mathcal{P}(\kappa)$, let $\delta(x,y)$ be the minimal element of the symmetric difference $(x \cup y) \setminus (x \cap y)$. Note that $\delta(x,y)$ belongs to exactly one of $x$ or $y$. Define $x < y$ if $\delta(x,y) \in y$. It's a nice exercise to verify that this is transitive.
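As a concrete illustration of that last ordering (my own addition, restricted to a finite $\kappa$ so it can be checked by machine), here is a small sketch that compares subsets of $\{0,1,2\}$ by the least element of the symmetric difference and spot-checks transitivity:

```python
# Illustrative sketch of the ordering on P(kappa) described above, for kappa finite.
from itertools import chain, combinations

def less(x, y):
    """x < y iff the least element of the symmetric difference lies in y."""
    diff = sorted(set(x) ^ set(y))
    return bool(diff) and diff[0] in y

a, b = {0, 2}, {0, 1, 3}
print(less(a, b), less(b, a))   # True False: delta(a, b) = 1, and 1 is in b

# transitivity spot-check on all subsets of {0, 1, 2}
subsets = [set(s) for s in chain.from_iterable(combinations(range(3), r) for r in range(4))]
assert all(less(x, z) for x in subsets for y in subsets for z in subsets
           if less(x, y) and less(y, z))
```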
-
3
If one thinks of $P(\kappa)$ as $2^\kappa$, then François' order is also known as the lexical order. – Joel David Hamkins Jun 2 2010 at 21:16
[edited nonsense]
With regard to the motivational problem, we have the following theorem:
If $(X,\leq)$ is a total order, then there exists a function $\phi:X\to\mathbb{R}$ such that $x< y$ iff $\phi(x)<\phi(y)$ if and only if there exists a countable set $C$ such that whenever $x < y$, there exists $c\in C$ with $x\leq c\leq y$.
Proof: Suppose such a function $\phi$ exists. We call a pair of elements in X a jump if they are different and there is no other element strictly between them. If $(x_1,x_2)$ and $(x_3,x_4)$ are jumps, then the intervals $(\phi(x_1),\phi(x_2))$ and $(\phi(x_3),\phi(x_4))$ are disjoint. Since there are at most countably many disjoint open intervals of real numbers, there are at most countably many jumps. Let $J$ be the set of all elements that occur in a jump. J is countable.
Pick for each pair of rational numbers $q_1, q_2$ with $q_1< q_2$ such that $\phi(X)\cap(q_1,q_2)\neq\emptyset$ a point x such that $\phi(x)\in\phi(X)\cap(q_1,q_2)$ and collect them in a set B. Since there are countable many such pairs, B is countable. Let $C=B\cup J$. Now if $x< y$, then either $x,y$ form a jump in which case we are done, or there is an element in B strictly between them.
On the other hand, if such a countable set $C=\{c_1,c_2,\ldots\}$ exists, the function $\phi:X\to\mathbb{R}$ given by $\phi(x)=\sum_{c_n< x}1/2^n-\sum_{c_n > x}1/2^n$ is easily seen to do the job.
The sufficiency of such a set $C$ is due to Debreu: "Representation of a Preference Ordering by a Numerical Function" (Lemma II). I don't know who first showed the necessity part, but it can be found in many books on decision theory.
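To make that last formula concrete, here is a small sketch I have added (a hypothetical finite example, with a few rationals as the points and a finite chunk of a countable dense set playing the role of $C$); the computed values come out strictly increasing, as the proof asserts.

```python
# Added sketch: phi(x) = sum_{c_n < x} 2^{-n} - sum_{c_n > x} 2^{-n}, truncated to a
# finite C; the chosen points all lie in C, so the strict order survives truncation.
from fractions import Fraction

def phi(x, C):
    return sum(Fraction(1, 2**n) if c < x else -Fraction(1, 2**n)
               for n, c in enumerate(C, start=1) if c != x)

C = [Fraction(p, q) for q in range(1, 8) for p in range(-8, 9)]   # a finite "dense" set
xs = sorted([Fraction(-5, 4), Fraction(1, 3), Fraction(1, 2), Fraction(2, 3)])
values = [phi(x, C) for x in xs]
assert all(a < b for a, b in zip(values, values[1:]))
print([float(v) for v in values])
```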
-
Is this not a trivial question? First thing I thought of was the set of real-valued functions on the unit interval. The ordering is just the obvious lexicographic ordering. This is of course not a well-ordering, but you didn't say you wanted one.
-
3
I suppose that means that $f < g$ if for the least $x\in[0,1]$ such that $f(x)\ne g(x)$ one has $f(x) < g(x)$. So, if $f(x)=x\sin(1/x)$ for $x > 0$ and $f(0)=0$ and $g(x)=-f(x)$, then does $f < g$ or does $g < f$? – Robin Chapman Nov 30 2010 at 10:25
http://math.stackexchange.com/questions/269439/how-to-simplify-this-equality-factorials/269445
# How to simplify this equality (factorials)?
This was in one of the examples of the textbook, but I couldn't figure out how they solved it. They say they multiply the left hand side by $\frac{n!}{n!}$ to get the right hand side:
$$\frac{2^n \cdot (2n-3)!!}{n!} = 2\frac{(2n-2)!}{n!(n-1)!}$$
The double factorial stands for the product of all odd integers from $1$ to $2n-3$.
I've given this problem a lot of time, and I'm all out of ideas at this point.
-
2
@Clive: I think this is fairly standard notation, and it is used in the common references en.wikipedia.org/wiki/Factorial#Double_factorial and mathworld.wolfram.com/DoubleFactorial.html (I don't know if there is another common notation for it.) – Jonas Meyer Jan 2 at 22:48
@JonasMeyer: Oh wow, I stand corrected! Thanks for pointing that out, it'd have been even more embarrassing in a few years :) – Clive Newstead Jan 2 at 22:51
3
@CliveNewstead Though it is a fairly standard notation, I believe it is a very poor notation. So there is no reason for you to feel embarrassed by this poor notation :). – user17762 Jan 2 at 22:52
## 2 Answers
Notice that $$2^n n! = 2(n) \cdot 2(n-1) \cdot 2(n-2) \cdots 2(2) \cdot 2(1) = (2n)(2n-2) \cdots (4)(2)$$
is the product of all even integers between $1$ and $2n$. So when you multiply the numerator and denominator by $n!$ you end up with
$$\dfrac{2n(2n-2)!}{n!n!}$$
since the even terms between $1$ and $2n-2$ interweave between the odd terms.
Cancelling the rogue $n$ on the numerator gives the result you seek.
-
All you need is: $$K:=2^{n-1}(n-1)!=(2\cdot 1)\cdot (2\cdot 2)\cdot (2\cdot 3)\cdots (2\cdot(n-1))=2\cdot 4\cdots (2n-2).$$ This gives for $K$ the product of all even integers between $1$ and $2n-2$. Therefore: $$(2n-2)!/K=(2n-3)!!$$ And this is why your LHS turns into your RHS.
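For good measure, here is a quick exact-arithmetic check of the identity that I have added, after clearing the denominators $n!(n-1)!$ on both sides:

```python
# Added check: 2^n (2n-3)!! (n-1)! == 2 (2n-2)!  (equivalent to the identity above)
from math import factorial

def odd_double_factorial(k):
    """Product of the odd integers from 1 up to k."""
    result = 1
    for i in range(1, k + 1, 2):
        result *= i
    return result

for n in range(2, 12):
    assert 2**n * odd_double_factorial(2*n - 3) * factorial(n - 1) == 2 * factorial(2*n - 2)
print("identity verified for n = 2, ..., 11")
```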
-
http://mathhelpforum.com/advanced-algebra/90136-problem-idempotent-matrix.html
Thread:
1. Problem of Idempotent matrix
Assume $A$ is an idempotent matrix of order $n$, in other words $A^2 = E_{n}$, and $det(A) > 0$. Show that $det(A + E_{n}) \not=0$.
2. Originally Posted by Xingyuan
Assume $A$ is an idempotent matrix of order $n$, in other words $A^2 = E_{n}$, and $det(A) > 0$. Show that $det(A + E_{n}) \not=0$.
wait a second! idempotent means $A^2=A$ not $A^2=I.$ are you sure you wrote the question correctly?
3. I am not sure. I am reading a book on Linear Algebra, and this problem is in that book. If $n$ is odd and $A^2 = E_{n}$, I have proved $det(A + E_{n}) \not=0$. When $n$ is even, I have no idea.
If $A^2 = A$ and $det(A) > 0$, I am not sure whether the conclusion is right.
thanks very much.
4. Originally Posted by Xingyuan
I am not sure. I am reading a book on Linear Algebra, and this problem is in that book. If $n$ is odd and $A^2 = E_{n}$, I have proved $det(A + E_{n}) \not=0$. When $n$ is even, I have no idea.
If $A^2 = A$ and $det(A) > 0$, I am not sure whether the conclusion is right.
thanks very much.
well, this is not very helpful! post the question exactly as you've seen it. do not add anything to it! then i'll help you to solve it.
5. OK. Assume $A^2 = E_{n}$ and $det(A) > 0$; show that $det(A + E_{n}) \not= 0$.
thanks very much !
6. Originally Posted by Xingyuan
OK. Assume $A^2 = E_{n}$ and $det(A) > 0$; show that $det(A + E_{n}) \not= 0$.
thanks very much !
assuming that $E_n$ is the identity matrix (this is an unusual notation for the identity matrix and you should've mentioned what it means!) this still can't be the question because it's false!
a trivial counter-example: $n$ any even number and $A = -E_n.$
7. Oh, yes. If $n$ is odd the conclusion is correct. Thanks very much.
If $A^2 = A$ and $det(A) > 0$, can we get $det(A + E_{n})\not=0$?
8. Originally Posted by Xingyuan
Oh, yes. If $n$ is odd the conclusion is correct. Thanks very much.
for odd values of $n$ the problem is still false. a counter example is $A=\begin{bmatrix}-1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$ we have $A^2=I, \ \det A= 1 > 0$ but $\det(A + I)=0.$
If $A^2 = A$ and $det(A) > 0$, can we get $det(A + E_{n})\not=0$?
yes, and we only need $\det A \neq 0.$ the proof is very simple: $A(A+I)=A^2 + A=2A.$ thus: $(\det A) \det (A+I)=2^n \det A$ and hence $\det(A+I)=2^n,$ because $\det A \neq 0.$
9. Thanks Very Much!!
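A numerical footnote I have added to the thread (using NumPy): it checks the $3\times 3$ counterexample from post 8 and the $A^2 = A$ computation from the last reply.

```python
import numpy as np

# post 8's counterexample: A^2 = I and det A = 1 > 0, yet det(A + I) = 0
A = np.diag([-1.0, -1.0, 1.0])
print(np.allclose(A @ A, np.eye(3)), np.linalg.det(A), np.linalg.det(A + np.eye(3)))

# the A^2 = A case: A(A + I) = 2A, so det A != 0 forces det(A + I) = 2^n;
# note the only invertible idempotent is A = I, so the check is necessarily with I
n = 4
print(np.linalg.det(np.eye(n) + np.eye(n)))   # 16.0 = 2^4
```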
http://crypto.stackexchange.com/questions/3251/is-mac-better-than-digital-signature
# Is MAC better than digital signature?
MACs differ from digital signatures in the sense that MAC values are both generated and verified using a shared secret key. Does this in any way put MACs at a disadvantage compared to digital signatures? How is one of them better than the other?
-
## 1 Answer
Well, as you stated, with a MAC the values "are both generated and verified using a shared secret key", while with a digital signature, the signatures are generated with one key, and are verified with another (and it is infeasible to sign anything with the verifier key alone).
So, with digital signatures, we can give someone the ability to verify signatures, but not generate them; with a MAC, we cannot do so.
Is this difference important? Well, that depends entirely on what we're doing, and on whether we care whether someone who can verify messages can also create them.
For example, consider the case of Alice sending messages to Bob. Bob wants to make sure those messages really come from Alice, and so they share keys; Alice signs/MACs the messages, and Bob verifies those messages. If that's the only thing that key is used for, we don't mind if Bob can also use his key to generate messages, because we can trust that he won't. After all, the only one who will ever validate a message is, in fact, Bob. If Bob does generate his own message, well, the only one he would be able to fool with it would be himself, and there's no point to that.
For an example in the other direction, consider the example of distributed stock quotes, where (say) the New York Stock Exchange publishes the current price of stocks. Now, people will need to be able to determine if these quotes came from NYSE, and so they need to be verifiable somehow. If these quotes were MACed, that means that NYSE would need to distribute the MAC secret key (so people can verify quotes); that would mean that someone could generate a fake stock quote (one that says that, say the price of GM just went drastically down); because he has the MAC secret key, he can compute the correct MAC, and that would fool someone else, causing them to dump all their GM stock. To prevent this, any such system would need to use a signature method, with the NYSE signature key being distributed. That way, anyone could validate a stock quote, but they wouldn't be able to generate fake quotes of their own.
Now, the next obvious question is "if a digital signature can do everything a MAC can do, why would we ever use a MAC"? Well, the answer is that while a digital signature could be used where we currently use a MAC, it is also much more expensive; a MAC is much (several orders of magnitude) cheaper to compute and verify, and is also somewhat shorter. No one uses a digital signature when a considerably cheaper MAC would do the job.
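To make the symmetric-key point concrete, here is a short sketch I have added (Python standard library only, with a hypothetical key and message): anyone holding `key` can both verify and forge tags, which is exactly why the stock-quote scenario needs signatures instead.

```python
import hmac, hashlib

key = b"shared secret between Alice and Bob"   # hypothetical shared key

def mac(message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(mac(message), tag)

quote = b"GM: 34.17 USD"
tag = mac(quote)                       # the sender computes this...
print(verify(quote, tag))              # True
print(verify(b"GM: 1.00 USD", tag))    # False -- but any key holder could just re-MAC it
```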
-
"while a digital signature could be used where we currently use a MAC" I disagree. One-to-one and one-to-many authentication are different operations, and can't easily substitute for each other. Often Alice doesn't want Bob to be able to prove to Carol what she said. – CodesInChaos Jul 16 '12 at 19:39
@CodeInChaos: if Bob had the private key, how would he prove to Carol that Alice signed the message, and not Bob? Remember, when you're using a signature as a MAC, you distribute both the private and the public key to everyone that would get the MAC secret key. – poncho Jul 16 '12 at 20:20
He wouldn't; he would instead give Carol the public key and not the private key. (How much of an effect this possibility has depends on how difficult it would be to obfuscate MAC verification.) – Ricky Demer Jul 16 '12 at 22:37
http://physics.stackexchange.com/questions/27255/instanton-moduli-space-with-a-surface-operator?answertab=oldest
# Instanton Moduli Space with a Surface Operator
I would like to understand the mathematical language which is relevant to instanton moduli space with a surface operator.
Alday and Tachikawa stated in 1005.4469 that the following moduli spaces are isomorphic.
1. the moduli space of ASD connections on $\mathbb{R}^4$ which are smooth away from $z_2=0$ and with the behavior $A\sim (\alpha_1,\cdots,\alpha_N)id\theta$ close to $r\sim 0$ where the $\alpha_i$ are all distinct and $z_2=r\exp(i\theta)$. (Instanton moduli space with a full surface operator)
2. the moduli space of stable rank-$N$ locally-free sheaves on $\mathbb{P}^1\times \mathbb{P}^1$ with a parabolic structure $P\subset G$ at $\{z_2=0\}$ and with a framing at infinities, $\{z_1=\infty\}\cup\{z_2=\infty\}$. (Affine Laumon space)
I thought the moduli space ${\rm Bun}_{G,P}({\bf S}, {\bf D}_\infty)$ in [B] also corresponds to the instanton moduli space with a surface operator. Note that ${\rm Bun}_{G,P}({\bf S}, {\bf D}_\infty)$ is the moduli space of principal $G$-bundle on ${\bf S}=\mathbb{P}^2$ of second Chern class $-d$ endowed with a trivialization on ${\bf D}_\infty$ and a parabolic structure $P$ on the horizontal line ${\bf C}\subset{\bf S}$.
[B] http://arxiv.org/abs/math/0401409
However, [B] considers the moduli space of parabolic sheaves on $\mathbb{P}^2$ instead of $\mathbb{P}^1\times \mathbb{P}^1$. What in physics does ${\rm Bun}_{G,P}({\bf S}, {\bf D}_\infty)$ correspond to? Is it different from the affine Laumon space?
In addition, I would like to know the relation between [B] and [FFNR].
[FFNR] http://arxiv.org/abs/0812.4656
Do $\mathfrak{Q}_{\underline d}$ and $\mathcal{Q}_{\underline d}$ in [FFNR] correspond to $\mathcal{M}_{G,P}$ and $\mathcal{QM}_{G,P}$ in section 1.4 of [B]? ($\mathfrak{Q}_{\underline d}$ is the one which appears in the first line of section 1.1 in [FFNR].)
-
To the readers who are interested in this subject, I would recommend to watch the following videos delivered by Braverman and Finkelberg. media.scgp.stonybrook.edu/video/… sms.cam.ac.uk/media/… – Satoshi Nawata Oct 8 '11 at 4:33
1
Satoshi, do you know you can formally accept the answer by clicking the big white check mark at the left of the answer? – Yuji Oct 8 '11 at 15:03
Oh, I didn't know that. Thanks for enlightening me, Yuji. – Satoshi Nawata Oct 9 '11 at 0:02
## 1 Answer
Let me try to answer. For your first question the statement is that you can work with either ${\mathbb P}^2$ or ${\mathbb P}^1\times {\mathbb P}^1$ - the moduli space is the same. More generally, if $S$ is any surface which contains ${\mathbb A}^2$ as an open subset and $D_{\infty}$ is the divisor at $\infty$ then $Bun_G(S,D_{\infty})$ is independent of $S$.
For the second question: it is true that ${\mathfrak Q}={\mathcal M}_{G,P}$ (for $P$ being the Borel subgroup and $G=SL(n)$) but it is not true that $Q={\mathcal QM}_{G,P}$. The point is that the quasi-maps' space ${\mathcal QM}_{G,P}$ is defined for any $G$ and it is singular; for $G=SL(n)$ (and only in that case) it has a nice resolution of singularities which is given by the Laumon space. If you are interested to know more, you can read my 2006 ICM talk ("Spaces of quasi-maps into the flag varieties and their applications") - the above questions are discussed there.
-
Thank you very much. This is exactly the answer I wanted. It is such an honor to have your response. – Satoshi Nawata Oct 8 '11 at 4:31
1
You are welcome. If you have any further questions, I'll be happy to try answer. – Alexander Braverman Oct 9 '11 at 19:35
http://mathhelpforum.com/calculus/67091-optimization-problem.html
# Thread:
1. ## Optimization Problem
Find the larger of two numbers whose sum is 30 for which the sum of their squares is a minimum.
2. Let the first number be $x_1$ and the second number $x_2$.
Then $f\left( {x_1 ,x_2 } \right) = x_1^2 + x_2^2 \to \min$ and $x_1+x_2=30\Rightarrow$.
$\Rightarrow x_2=30-x_1\Rightarrow f\left( {x_1 } \right) = x_1^2 + \left( {30-x_1} \right)^2 \to \min$
Do you understand?
3. Originally Posted by abclarinetuvwxyz
Find the larger of two numbers whose sum is 30 for which the sum of their squares is a minimum.
Denote these two numbers to be $x$ and $y$.
#1: Their sum must be 30, so: $x+y=30$, or $y=30-x$
#2: And you want to minimize: $x^2+y^2$
Given #1 as a constraint, #2 becomes: $x^2+(30-x)^2$. Differentiate this expression, set it equal to zero (to minimize), and find $x$
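To finish off that last step, here is a small SymPy check I have added (assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = x**2 + (30 - x)**2                # sum of squares, with y = 30 - x
print(sp.solve(sp.diff(f, x), x))     # [15]: the derivative 4x - 60 vanishes at x = 15
# so x = y = 15, and the larger of the two numbers is 15
```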
http://math.stackexchange.com/questions/22352/can-there-be-such-a-thing-as-a-classification-of-classification-theorems
# Can there be such a thing as a classification of classification theorems?
Can there be such a thing as a classification of classification theorems?
-
7
Sure. Every classification theorem is of the form "every P is of the form Q, R, S, T..." for some P, Q, R, S, T.... – Qiaochu Yuan Feb 16 '11 at 13:32
But I mean are there distinctions between various classification theorems to such a degree to allow for a non-trivial classification of them. – 24601 Feb 16 '11 at 13:55
I don't really understand what that could mean. Do you have a precise definition of "classification theorem"? – Qiaochu Yuan Feb 16 '11 at 14:16
1
The term 'classification' is quite ambiguous indeed. For instance the finite fields are in correspondence with primepowers, but these aren't really 'classified'. So there's some sense in saying that finite fields are classified, relative to the set of primes. Here's a similar situation: goo.gl/rybue – Myself Feb 16 '11 at 14:39
Given some family of objects, we classify all the different types of objects up to some equivalence. From this we say all objects from this family are of the form A,B,C,... up to some equivalence relation, that is both X and X' are the same object from this classification where prior their only distinction was one (if any) that has been subsequently removed by the equivalence relation. Perhaps maybe in a category-theoretic manner, distinctions can be made between particular classifications, something distinct in the particular structure of the classification such that they can be classified. – 24601 Feb 16 '11 at 14:43
## 2 Answers
In a sense, yes.
The object of descriptive set theory is to understand "definable" sets of reals (as opposed to arbitrary sets). So, for example, we concern ourselves with Borel sets, or their continuous images, or the complements of those images, or countable unions of such things, etc. Also, we study other spaces, not just the set of reals.
The basic setting is that of Polish spaces, i.e., complete metrizable spaces with a countable dense subset. This of course includes ${\mathbb R}$, but many other spaces that appear in practice are here.
It turns out that many classification problems that occur in practice have the form: We have a Polish space $X$ and an equivalence relation $E$ on $X$ (typically, $E$ is either Borel as a subset of $X\times X$, or the continuous image of a Borel set). We then study the complexity of the quotient space $X/E$.
This can be measured in several ways. For example: Can we pick in a "Borel fashion" a representative of each equivalence class? If not, can we in a sense approximate the graphs of "choice functions" even if we cannot actually single out one of them?
It turns out that we can prove results that say that certain classification problems are strictly harder than certain others, and we can study the "partial ordering" of classification problems according to complexity.
To be a bit more precise, consider the problem: Given two $5\times 5$ matrices with real entries, are they similar? This is a classification problem: We want to pair matrices that are similar, and the question is how hard is it to decide whether they are paired. It turns out that the Jordan canonical form of two matrices is the same iff they are similar, and the Jordan form can be handily coded by a real number, so the problem reduces to "can we identify if two reals are equal?"
There are two ways of measuring the complexity of this problem. One is in terms of complexity theory, and then we need to talk about how the reals are "given" to us. The other way is the descriptive set theoretic one: We have a Polish space: ${\mathbb R}$; and an equivalence relation: equality. This is as simple as a problem gets.
Another problem is: Given two auto-homeomorphisms of the unit square, when are they conjugate? This is a harder problem, meaning, there is no "Borel" map that to an auto-homeomorphism assigns a real number so that two homeomorphisms are conjugate iff they have been assigned the same real.
A good place to learn about this (and about the wide variety of examples that this approach covers) is "A survey of current and recent work on the theory of Borel equivalence relations" by G. Hjorth. It is available from his webpage, at http://www.math.ucla.edu/~greg/research.html
-
4
We are so extremely lucky to have two such marvelously knowledgeable logicians on call! :) – Mariano Suárez-Alvarez♦ Feb 16 '11 at 21:00
Many thanks, Mariano! – Andres Caicedo Feb 16 '11 at 21:45
@Andres: Should the problem about real matrices use the Jordan canonical form, or the rational canonical form? Not every real matrix has a JCF over the reals. I guess you can argue that if two real matrices have the same complex Jordan canonical form, then they are conjugate over $\mathbb{C}$, and hence over $\mathbb{R}$... still, seems like using the RCF would be simpler. – Arturo Magidin Feb 17 '11 at 2:54
@Arturo: You are right, but in the descriptive set theoretic sense both approaches have the same complexity. The point was that there is a "simple" procedure that assigns to each matrix what is essentially a number. Simple means that the map (matrix $\mapsto$ number) is Borel, and of course some Borel sets that are much more complex than others, but (currently) we largely ignore this fine structure. – Andres Caicedo Feb 17 '11 at 3:34
1
@Arturo: I understand. The thing is that ${\mathbb C}$ and ${\mathbb R}$ are both uncountable Polish spaces, so we can "translate" between them and from the "Borel viewpoint" they are indistinguishable. – Andres Caicedo Feb 17 '11 at 3:40
Allow me to supplement Andres's excellent answer by copying over the following answer that I gave over at this MO question:
How can we understand in a precise general way the idea that a given classification problem is complicated or simple? How are we to compare the relative difficulty of two classification problems?
These questions form the central motivation for the emerging subject known as Borel equivalence relation theory (see Greg Hjorth's survey article, greatly missed after his recent death). The main idea is that many of the most natural equivalence relations arising in many parts of mathematics turn out to be Borel relations on a standard Borel space. To give one example, the isomorphism problem on finitely generated groups, but of course, there are hundreds of other examples. A classification problem for an equivalence relation E is really the problem of finding a way to describe the E-equivalence classes, of finding an E-invariant function that distinguishes the classes.
Harvey Friedman defined that one equivalence relation E is Borel-reducible to another relation F if there is a Borel function f such that x E y if and only if f(x) F f(y). That is, the function f maps E classes to F classes in such a way that different E classes get mapped to different F classes. This provides a classification of the E classes by using the F classes. The concept of reducibility provides a precise, robust way to say that one relation F is at least as complex as another E. Two relations are Borel equivalent if they reduce to each other, and we are led to the hierarchy of equivalence relations under Borel reducibility. By placing an equivalence relation into this hierarchy, we come to understand how complex it is in comparision with other equivalence relations. In particular, we say that one equivalence relation E is strictly simpler than F, if E reduces to F but not conversely.
It sometimes happens that one has a classification problem E and is able to provide a classification by assigning to each structure a countable list of data, such that two structures are equivalent iff they have the same data. This amounts to a reduction of E to the equality relation =, for two structures are E equivalent iff their data is equal. Such relations that reduce to equality are called smooth, and lay near the bottom of the hierarchy of Borel equivalence relations. These are the simplest equivalence relations. Thus, one way of showing that a relation is comparatively simple, is to show that it is smooth, and to show it is comparatively hard, show that it is not smooth.
The subject of Borel equivalence relation theory, as now developed by A. Kechris, G. Hjorth, S. Thomas and many others, is focused on placing many of the natural classification problems of mathematics into this hierarchy. Some of the main early results are the following interesting dichotomies:
Theorem.(Silver dichotomy) Every Borel equivalence relation E either has only countably many equivalence classes or = reduces to E.
The relation E0 says that two binary sequences are equivalent iff they agree from some point onward. It is easy to see that = reduces to E0, and an elementary argument shows that E0 does not reduce to =. Thus, E0 is strictly harder than equality. Moreover, it is a kind of next-step up in the hierarchy, in light of the following.
Theorem.(Glimm-Effros dichotomy) Every Borel equivalence relation E either reduces to = or E0 reduces to E.
The subject continues with many interesting results that gradually illuminate more and more of the hierarchy of Borel equivalence relations. For example, the Feldman-Moore theorem shows that every Borel equivalence relation E having every equivalence class countable is the orbit equivalence of a countable group of Borel bijections of the space. The relation Eoo is the orbit equivalence of the left-translation action of the free group F2 on its power set. This relation is complete for the countable Borel equivalence relations, in the sense that every countable Borel equivalence relation reduces to it. It's great stuff!
-
http://physics.stackexchange.com/questions/535/why-does-kinetic-energy-increase-quadratically-not-linearly-with-speed/538
# Why does kinetic energy increase quadratically, not linearly, with speed?
As Wikipedia says:
[...] the kinetic energy of a non-rotating object of mass $m$ traveling at a speed $v$ is $mv^2/2$.
Why does this not increase linearly with speed? Why does it take so much more energy to go from 1 m/s to 2 m/s than it does to go from 0 m/s to 1 m/s?
-
It doesn't increase exponentially (a^x); it increases quadratically (x^2). Can you correct the title? – nibot Nov 11 '10 at 0:09
What I found most counter-intuitive about this is that kinetic energy is a coordinate-dependent quantity. In a different inertial frame, the object will have a different kinetic energy. This contradicts the intuitive notion that kinetic energy should be independent of the coordinate system. – nibot Nov 11 '10 at 0:12
Did you mean quadratically ($v^2$)? Exponentially means something different ($~\exp(v)$). I believe there is a derivation in every standard high school level textbook ($dE = F dx = m a dx = m \frac{dv}{dt} dx = m \frac{dx}{dt} dv = m v dv$). – Piotr Migdal Nov 11 '10 at 0:14
I took care of fixing the title for you, @Generic Error (hope you don't mind). – David Zaslavsky♦ Nov 11 '10 at 0:36
I'm tempted to answer...because it just is? Really, it's the way the universe works. – Noldorin Nov 11 '10 at 0:47
## 13 Answers
The question is especially relevant from a didactical point of view because one has to learn to distinguish between energy (work) and momentum (quantity of motion).
The kinematic property that is proportional to $v$ is nowadays called momentum; it is the "quantity of motion" residing in a moving object, and its definition is $p := mv$.
The change of momentum is proportional to the impulse: impulse is the product of a force $F$ and the timespan $\Delta t$ over which it is applied. This relation is also known as the second law of Newton: $F \Delta t = \Delta p$ or $F dt = dp$. When one substitutes $mv$ for $p$ one gets its more common form: $F= m \frac{\Delta v}{\Delta t} = ma$.
Now for an intuitive explanation that an object with double velocity has four times as much kinetic energy.
Say A has velocity $v$ and B is an identical object with velocity $2v$.
B has a double quantity of motion (momentum) - that's where your intuition is correct!
Now we apply a constant force $F$ to slow both objects down to standstill. From $F \Delta t = \Delta p$ it follows that the time $\Delta t$ needed for B to slow down is twice as much (we apply the same force to A and B). Therefore the braking distance of B will be a factor of 4 bigger than the braking distance of A (its starting velocity, and therefore also its mean velocity, being twice as much, and its time $\Delta t$ being twice as much, so the distance, $s = \bar{v}\Delta t$, increases 2x2=4 times).
The work $W$ needed to slow down A and B is calculated as the product of the force and the braking distance $W=Fs$, so this is also four times as much. The kinetic energy is defined as this amount of work, so there we are.
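A minimal numerical sketch of this braking argument (the mass, force, and speeds below are arbitrary illustration values, not taken from the answer):

```python
# Stop two identical objects with the same constant braking force F.
# B starts at twice the speed of A; compare stopping time, stopping
# distance, and the work F*d absorbed in each case.
m, F = 2.0, 10.0           # kg, N (arbitrary values)
for v0 in (5.0, 10.0):     # A at v, B at 2v
    t_stop = m * v0 / F            # from F*t = m*v0 (impulse = momentum change)
    d_stop = 0.5 * v0 * t_stop     # distance = mean speed * time
    work = F * d_stop              # work done against the motion
    print(f"v0={v0:4.1f} m/s  t={t_stop:.1f} s  d={d_stop:.1f} m  "
          f"W={work:.0f} J  (0.5*m*v0**2 = {0.5*m*v0**2:.0f} J)")
```

Doubling the starting speed doubles the stopping time, quadruples the stopping distance, and quadruples the work, matching $\frac{1}{2}mv^2$.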
-
Thank you for a nice clear explanation. – Generic Error Nov 12 '10 at 6:53
for years I've had this question about what momentum is that makes it so special in the second law. Thank you for explaining it so simply. – Preet Sangha Jun 21 '11 at 1:42
This answer is vacuous, work is force times distance is synonymous with, and equally mysterious as, energy is half momentum times velocity, but at least it isn't wrong. – Ron Maimon Oct 28 '11 at 22:36
Agree with the comment above. – Diego Jan 26 '12 at 8:14
@maimon: your own answer is indeed very interesting; the answer above has its own merit. – Gerard May 4 '12 at 12:42
The previous answers all restate the problem as "Work is force dot/times distance". But this is not really satisfying, because you could then ask "Why is work force dot distance?" and the mystery is the same.
The only way to answer questions like this is to rely on symmetry principles, since these are more fundamental than the laws of motion. Using Galilean invariance, the symmetry that says that the laws of physics look the same to you on a moving train, you can explain why energy must be proportional to the mass times the velocity squared.
First, you need to define kinetic energy. I will define it as follows: the kinetic energy E(m,v) of a ball of clay of mass m moving with velocity v is the amount of calories of heat that it makes when it smacks into a wall. This definition does not make reference to any mechanical quantity, and it can be determined using thermometers. I will show that, assuming Galilean invariance, E(v) must be proportional to the square of the velocity.
E(m,v), if it is invariant, must be proportional to the mass, because you can smack two clay balls side by side and get twice the heating, so
$$E(m,v) = m E(v)$$
Further, if you smack two identical clay balls of mass m moving with velocity v head-on into each other, both balls stop, by symmetry. The result is that each acts as a wall for the other, and you must get an amount of heating equal to 2m E(v).
But now look at this in a train which is moving along with one of the balls before the collision. In this frame of reference, the first ball starts out stopped, the second ball hits it at 2v, and the two-ball stuck system ends up moving with velocity v.
The kinetic energy of the first ball is mE(2v) at the start, and after the collision, you have 2mE(v) kinetic energy. But the heating is the same, so
$$mE(2v) = 2mE(v) + 2mE(v)$$
$$E(2v) = 4 E(v)$$
which implies that E is quadratic.
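A quick check of this two-frame bookkeeping, assuming the quadratic form $E(m,v)=\tfrac{1}{2}mv^2$ (a sketch with arbitrary numbers; the point is only that the generated heat agrees in both frames):

```python
# Two identical clay balls collide head-on and stick; the heat produced
# is the kinetic energy lost.  Check it is frame-independent when
# E(m, v) = m*v**2/2.
m, v = 1.0, 3.0
E = lambda mass, vel: 0.5 * mass * vel**2

# Centre-of-mass frame: +v and -v collide and stop.
heat_cm = E(m, v) + E(m, -v) - E(2 * m, 0.0)

# Frame moving with one ball: 0 and 2v collide, the lump moves at v.
heat_train = E(m, 0.0) + E(m, 2 * v) - E(2 * m, v)

print(heat_cm, heat_train)   # both 9.0, as the argument requires
```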
### Noncircular force-times-distance
Here is the noncircular version of the force-times-distance argument that everyone seems to love so much, but is never done correctly. In order to argue that energy is quadratic in velocity, it is enough to establish two things:
• Potential energy on the Earth's surface is linear in height
• Objects falling on the Earth's surface have constant acceleration
The result then follows.
That the energy in a constant gravitational field is proportional to the height is established by statics. If you believe the law of the lever, an object will be in equilibrium with another object on a lever when the distances are inversely proportional to the masses (there are simple geometric demonstrations of this that require nothing more than the fact that equal mass objects balance at equal center-of-mass distances). Then if you tilt the lever a little bit, the mass-times-height gained by one is equal to the mass-times-height lost by the other. This allows you to lift objects and lower them with very little effort, so long as the mass-times-height added over all the objects is constant before and after. This is Archimedes' principle.
Another way of saying the same thing uses an elevator, consisting of two platforms connected by a chain through a pulley, so that when one goes up, the other goes down. You can lift an object up, if you lower an equal amount of mass down the same amount. You can lift two objects a certain distance in two steps, if you drop an object twice as far.
This establishes that for all reversible motions of the elevator, the ones that do not require you to do any work (in both the colloquial sense and the physics sense--- the two notions coincide here), the mass-times-height summed over all the objects is conserved. The "energy" can now be defined as that quantity of motion which is conserved when these objects are allowed to move with a non-infinitesimal velocity. This is Feynman's version of Archimedes.
So the mass-times-height is a measure of the effort required to lift something, and it is a conserved quantity in statics. This quantity should be conserved even if there is dynamics in intermediate stages. By this I mean that if you let two weights drop while suspended on a string, let them do an elastic collision, and catch the two objects when they stop moving again, you did no work. The objects should then go up to the same total mass-times-height.
This is the original demonstration of the laws of elastic collisions by Christiaan Huygens, who argued that if you drop two masses on pendulums, and let them collide, their center of mass has to go up to the same height, if you catch the balls at their maximum point. From this, Huygens generalized the law of conservation of potential energy implicit in Archimedes to derive the law of conservation of square-velocity in elastic collisions. His principle that the center of mass cannot be raised by dynamic collisions is the first statement of conservation of energy.
For completeness, the fact that an object accelerates in a constant gravitational field with uniform acceleration is a consequence of Galilean invariance, and the assumption that a gravitational field is frame invariant under uniform motions up and down with a steady velocity. Once you know that motion in constant gravity is constant acceleration, you know that
$$mv^2/2 + mgh = C$$
so that Huygens' dynamical quantity, which is additively conserved along with Archimedes' mass-times-height, is the velocity squared.
-
By far the best answer. Note: the assumption $E(0) = 0$ should be made explicit. – Johannes Oct 28 '11 at 22:52
That E(0)=0 follows from the fact that a stationary clay ball will not heat up the wall. – Ron Maimon Oct 29 '11 at 6:28
+1 This is the answer. – becko Oct 29 '11 at 19:01
@RonMaimon If you base the definition of kinetic energy on the concept of heat, you have to define heat first. How do you do that? – student Jan 29 '12 at 8:50
@student: you can define it by the temperature change (as measured by volume expansion) of some reference material (mercury, for instance) for a large enough volume so that the change in temperature is infinitesimal. This is not an ideal practical definition--- you could use ideal gasses--- but for large volume the amount of heat flowing into a mercury thermometer can be accurately measured, and then you can calibrate your thermometers and measure specific heats. It's not a conceptually difficult thing, but it was historically difficult. – Ron Maimon Jan 29 '12 at 14:27
The only real physical reason (which is not really a fully satisfying answer) is that $E \sim v^2$ is what experiments tell us. For example, gravitational potential energy on the Earth's surface is proportional to height, and if you drop an object, you can measure that the height it falls is proportional to the square of its speed. Thus, if energy is to be conserved, the kinetic energy has to be proportional to $v^2$.
Of course, you could question why gravitational potential energy is proportional to height, and once that was resolved, question why some other kind of energy is proportional to something else, and so on. At some point it becomes a philosophical question. The bottom line is, defining kinetic energy to be proportional to the square of the speed has turned out to make a useful theory. That's why we do it.
On the other hand, you could always say that if it were linear in velocity, it would be called momentum ;-)
P.S. It may be worth mentioning that kinetic energy is not exactly proportional to $v^2$. Special relativity gives us the following formula:
$K = mc^2\left(\frac{1}{\sqrt{1 - v^2/c^2}} - 1\right)$
For low speeds, this is essentially equal to $mv^2/2$.
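A hedged numerical check of that last statement, for a 1 kg mass at a few arbitrary speeds:

```python
# Relativistic kinetic energy K = m c^2 (gamma - 1) versus the
# Newtonian m v^2 / 2.
import math

m, c = 1.0, 299_792_458.0
for v in (3.0e5, 3.0e6, 3.0e7, 1.0e8):        # m/s
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    k_rel = m * c**2 * (gamma - 1.0)
    k_newt = 0.5 * m * v**2
    print(f"v/c = {v / c:.3f}   K_rel / K_Newton = {k_rel / k_newt:.6f}")
```

The ratio stays within a fraction of a percent of 1 until $v/c$ reaches about 0.1.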
-
The physical reason is, in some twisted way, Noether's Theorem: the Energy is the conserved quantity with respect to time translations. And this can be calculated and shown to be the formula we all know and love. – Daniel Nov 11 '10 at 14:23
As Piotr suggested, accepting the definition of work $W=\mathbf{F}\cdot d\mathbf{x}$, it follows that the kinetic energy increases quadratically. Why? Because the force and the infinitesimal interval depend linearly on the velocity. Therefore, it is natural to think that if you multiply both quantities, you need to end up with something like $K v^{2}$, where $K$ is an 'arbitrary' constant.
A much more interesting question is why the Lagrangian depends on the velocity squared. Given the homogeneity of space, it can not contain explicitly $\mathbf{r}$ and given the homogeneity of time, it can not depend on the time. Also, since space is isotropic, the Lagrangian can not contain the velocity $\mathbf{v}$. Therefore, the next simplest choice should be that the Lagrangian must contain the velocity squared. I do think that the Lagrangian is more fundamental in nature than the other quantities, however, its derivation involves the definition of work or equivalently, energy. So probably you won't buy the idea that this last explanation is the true cause of having the kinetic energy increasing quadratically, although, I think it is much more satisfactory than the first explanation.
-
+1: for going Lagrangian! – rubenvb Nov 11 '10 at 14:07
+1 for arguments of homogeneity and isotropy – P O'Conbhui Apr 21 '12 at 16:18
This can be expanded by demanding that the Lagrangian give rise to invariant equations of motion under Galilean transformations. If you do this you find it must depend on v^2. – Mark Eichenlaub May 8 at 23:30
Let me just throw in an intuitive explanation. You could re-phrase your question as:
Why does velocity only increase as the square root of kinetic energy, not linearly?
Well, drop a ball from a height of 1 meter, and it has velocity v when it hits the ground.
Now, drop it from a height of 2 meters. Will it have a velocity of 2v when it hits the ground?
No, because it travels the second meter in a lot less time (because it's already moving), so it has less time to gain speed.
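A one-line check of this, assuming free fall from rest with $g = 9.81\ \mathrm{m/s^2}$ (a sketch, not part of the original answer):

```python
# Impact speed after falling from height h is v = sqrt(2*g*h),
# so doubling the drop height multiplies the speed by sqrt(2), not 2.
import math

g = 9.81
v1 = math.sqrt(2 * g * 1.0)   # dropped from 1 m
v2 = math.sqrt(2 * g * 2.0)   # dropped from 2 m
print(v1, v2, v2 / v1)        # ratio is sqrt(2) ~ 1.414
```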
-
It comes down to definitions.
Momentum is defined as $p = mv$. Momentum grows linearly with velocity, making momentum a quantity that is intuitive to understand (the more momentum, the harder an object is to stop). Kinetic energy is a less intuitive quantity associated with an object in motion. KE is assigned such that its rate of change with respect to velocity yields the momentum of the object:
$\frac{dKE}{dv} = p$
A separate question one might ask is why do we care about this quantity? The answer is that in a system with no friction, the sum of the kinetic and potential energies of an object is conserved:
$\frac{d(KE + PE)}{dt} = 0$
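A short symbolic check of both displayed statements (a sketch using sympy, with $PE = mgh$ for a freely falling object as the friction-free example):

```python
# d(KE)/dv equals the momentum, and KE + PE is constant in free fall.
import sympy as sp

m, g, v, t, v0, h0 = sp.symbols('m g v t v0 h0', positive=True)

KE = m * v**2 / 2
print(sp.diff(KE, v))                     # m*v, i.e. the momentum

vt = v0 - g * t                           # free fall: v(t)
ht = h0 + v0 * t - g * t**2 / 2           # and h(t)
total = m * vt**2 / 2 + m * g * ht
print(sp.simplify(sp.diff(total, t)))     # 0, so KE + PE is conserved
```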
-
Actually, the momentum isn't equal to the time derivative of kinetic energy. – David Zaslavsky♦ Nov 11 '10 at 1:28
@David, you are obviously correct. Now I have to rethink my answer... – Ami Nov 11 '10 at 1:31
Looks a lot better now ;-) I wouldn't describe $d/dv$ as an instantaneous change, exactly, but that's semantics I suppose. +1 – David Zaslavsky♦ Nov 11 '10 at 1:39
One way to look at this question of yours is as follows:
$$E(v) = \frac{m v^2}{2} \; .$$
So, if we multiply the velocity by a certain quantity, i.e., if we scale the velocity, we get the following,
$$E(\lambda v) = \frac{m (\lambda v)^2}{2} = \lambda^2 \frac{m v^2}{2} = \lambda^2 E(v)\; .$$
That is, if you scale your velocity by a factor of $\lambda$, your Energy is scaled by a factor of $\lambda^2$ — this should answer your question (just plug in the numbers).
-
Yes, but what is the physical reason behind energy increasing quadratically with velocity? – wrongusername Nov 11 '10 at 0:35
What's the physical reason that $F = m a$? Using this and a bit of differential calculus, it's possible to prove that the Energy is conserved (with respect to time derivatives) — and the formula for the Energy is this one you already know. – Daniel Nov 11 '10 at 14:21
For every equal (percentage) increase of the speed, the applied force must act over an increasingly long travel distance, so that the total distance grows quadratically with speed. $F=ma$. At the same time force times distance = work, where work = energy.
-
I think it follows from the first law of Thermodynamics. It turns your definition of work into a conserved property called energy. If you define work in the $Fdx$ style (as James Joule did) then the quadratic expression for kinetic energy will follow with the symmetry arguments.
In his excellent answer, Ron Maimon cleverly suggests using heat to avoid a reference to work. To determine the number of calories he uses a thermometer. A perfect thermometer will measure $\partial{E}/\partial{S}$ so when he's done defining entropy, he still needs a non-mechanical definition of work. (In fact, I believe it is Joule's contribution to show that the calorie is a superfluous measure of energy.) The weakness in Ron's answer is that he also needs the second law of thermodynamics to answer the question.
To see this explicitly, write the first law in terms of the Gibbs equation: $$dE = TdS + vdp + Fdx$$ This equation defines $v = \partial{E}/\partial{p}$. For a conservative system set $dE=0$ and to follow Huygens, set $dS=0$ to get $vdp = - Fdx$ and to follow Maimon we set $dx=0$ to get $vdp = -TdS$. These are two ways of measuring kinetic energy.
Now to integrate. Huygens assumes $p$ is only a function of $v$. For small changes in $v$ we make the linear approximation $p = mv$, where $m \equiv dp/dv$. Plug that in, integrate, and you get the quadratic dependence. In fact, it's not too hard to see that if you use gravity for the force that $F = mg$ which leads to $$\frac{1}{2} m v^2 + mgh = C .$$ Maimon also has to assume the independence of $p$ on $S$. To integrate he will have to evaluate $T$ as a function of $S$ (and possibly $p$) or use the heat capacity.
Now notice that we required the changes in $v$ to be small. In fact, kinetic energy is not always proportional to $v^2$. If you go close to the speed of light the whole thing breaks down and for light itself there is no mass, but photons do have kinetic energy equal to $c p$ where $c$ is the speed of light. Therefore, it's better to think of kinetic energy as $$E_{kin} = \int v dp$$ and just carry out the integration to find the true dependence on $v$.
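A symbolic sketch of that integral for the two momenta mentioned in the answer (the integration-by-parts step $E = vp - \int p\,dv$ is my own rearrangement, used only to keep the computer algebra simple):

```python
# Kinetic energy as E = integral of v dp, for the Newtonian momentum
# p = m*v and for the relativistic p = m*v/sqrt(1 - v**2/c**2).
import sympy as sp

m, c, v, p = sp.symbols('m c v p', positive=True)

# Newtonian: v = p/m, so E = int_0^{m v} (p/m) dp
print(sp.integrate(p / m, (p, 0, m * v)))            # m*v**2/2

# Relativistic: integrate v dp by parts as E = v*p - int_0^v p dv
p_rel = m * v / sp.sqrt(1 - v**2 / c**2)
F = sp.integrate(p_rel, v)                           # antiderivative of p dv
E_rel = v * p_rel - (F - F.subs(v, 0))
gamma = 1 / sp.sqrt(1 - v**2 / c**2)
print(sp.simplify(E_rel - m * c**2 * (gamma - 1)))   # 0: E_rel = (gamma - 1)*m*c**2
```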
So, in summary, I suggest the "why" of the question is the same as the "why" of the first law.
-
Imagine a cannon that fires a bed spring at 10 m/s (22 mph); the momentum of the fired bed spring carries it 25 meters. A second cannon fires the bed spring at 20 m/s (44 mph); the momentum of the spring carries it 50 meters. Doubling the speed, doubles the momentum.
In a second set of tests the first cannon fires the spring into an immovable brick wall from a distance of one meter. Upon impact the spring, which has a length of 20 cm, compresses to a length of 16 cm (20% compressed). When the second cannon fires the spring into the wall, traveling at twice the speed, the 20 cm spring compresses to a length of 12 cm (40% compressed); since the energy stored in a spring grows as the square of its compression, this requires 4 times more energy than the first collision with the wall. Double the speed, quadruple the kinetic energy!
Please correct me if I'm off base. thanks
-
The general form of the kinetic energy includes higher order corrections due to relativity. The quadratic term is only a Newtonian approximation valid when velocities are low in comparison to the speed of light c.
There is another fundamental reason for which kinetic energy cannot depend linearly on the velocity. Kinetic energy is a scalar, velocity is a vector. Moreover, if the dependence were linear, the kinetic energy would change under substituting $\mathbf{v}$ by $-\mathbf{v}$. I.e. the kinetic energy would depend on the orientation, which again makes no sense. The Newtonian quadratic dependence and the relativistic corrections $v^4$, $v^6$... satisfy both requirements: kinetic energy is a scalar and invariant under substituting $\mathbf{v}$ by $-\mathbf{v}$.
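A short series expansion making those correction terms explicit (a sketch with sympy; only even powers of $v$ appear, consistent with the $\mathbf{v}\to-\mathbf{v}$ invariance described above):

```python
# Expand the relativistic kinetic energy in powers of v.
import sympy as sp

m, c, v = sp.symbols('m c v', positive=True)
gamma = 1 / sp.sqrt(1 - v**2 / c**2)
K = m * c**2 * (gamma - 1)
print(sp.series(K, v, 0, 8))
# m*v**2/2 + 3*m*v**4/(8*c**2) + 5*m*v**6/(16*c**4) + O(v**8)
```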
-
Why can't it depend on the speed then? – Larry Harson Oct 13 '12 at 14:08
The kinetic energy for photons depends linearly on the speed. $E = |\mathbf{p}|c$. But this describes particles moving always at constant speed. – juanrga Oct 15 '12 at 10:06
Basically, momentum is related to force times time, and KE is related to force times distance. It is all a matter of frame of reference, either time or distance. The relationship between time and distance for a starting velocity of zero is $d = \frac{at^2}{2}= \frac{tV}{2}$. Plug this into the equations and you get KE $= \frac{pV}{2} = \frac{p^2}{2m}$
Woolah - magic!
-
Your intuition is NOT wrong and I will show you why. Say you have a machine that can accelerate a ball to 5 meters per second consistently. Put that machine on a car that is traveling at 5 meters per second and have it accelerate a ball. How fast is that ball traveling? The answer is 10 meters per second (5+5=10) as any physicist will tell you. If the machine is not moving, it changes the ball's velocity from rest to 5 m/s. When it is moving, it changes the ball's velocity from 5 to 10 m/s. The machine does not magically know that it is moving and so it always uses the same amount of energy, say electrical, whenever it accelerates the ball. If the kinetic energy formula were actually valid, that machine would not be able to cause the ball to accelerate to 10 m/s in this situation.
Those that tell you differently are only parroting the usual physics lessons. They can "prove" the kinetic energy formula is valid but you have to assume that work (the product of force and distance) is a proven fact; it is not. They might show an experiment that equates potential energy with kinetic that seems valid; this also depends on the assumption that work is a valid scientific concept. The example I gave in the first paragraph can be "explained" away by a clever physicist, maybe, but it is a direct test proving your intuition is correct.
http://mathoverflow.net/questions/22910?sort=newest
## Exactness of 2nd-Order Differential Equations via Differential Forms
This (probably very elementary) question came up the last time I taught differential equations, and I've been toying with it for a while with no success:
A 1st-order differential equation $M(x,y)dx+N(x,y)dy=0$ is exact if $$M(x,y)dx+N(x,y)dy=f_x(x,y)dx+f_y(x,y)dy$$ for some differentiable function $f(x,y)$ defined on the domain of the form $\omega = M\,dx + N\,dy$. In this case, we easily arrive at an implicitly-defined solution to the differential equation. Importantly, there is a nice test for exactness stemming from Clairaut's theorem -- for everywhere smooth $M$ and $N$ (for simplicity/laziness...obvious generalizations abound), the differential equation is exact iff $M_y=N_x$. Of course, this procedure is easily re-interpreted as saying that by the triviality of $H^1(\mathbb{R}^2)$, a one-form is closed if and only if it is exact.
Now let's move one degree higher. Boyce and Di Prima define a 2nd-order differential equation $P(x)y''+Q(x)y'+R(x)y=0$ to be exact if there exists a differentiable function $f(x)$ such that the differential equation can be written
$$P(x)y''+Q(x)y'+R(x)y=[P(x)y']'+[f(x)y]'=0.$$
The analogous expression to Clairaut's theorem seems to be that (again, for sufficiently smooth inputs) an equation of that form is exact iff $P''(x)-Q'(x)+R(x)=0.$ Of importance is that such forms can be integrated once to leave us with a 1st-order differential equation. So we've successfully lowered the order of our problem.
This feels to me very much like an analogous $H^2$ calculation. We have a condition on some coefficients that very much looks like an alternating sum coming from a $d$ map on forms, and lets us conclude that the equation "comes from" a one-degree-smaller differential equation.
But! (and here's the question) I can't seem to fit any 2-forms into this picture that would explain this analogy. Presumably there's some big story here linking the two notions of exactness about which I'd love to be enlightened.
Side remark: I once received a partial response that there might be a link with Cartan tableau, which I've been unsuccessful in pursuing, if that helps spark an idea.
-
I think exactness just refers to the fact that you prove it by solving df = (Q - P')dy + Rydx. So it seems to still be exactness of a 1-form you're interested in, not a 2-form. Though I agree the alternating sum is suggestive... – Lucas Culler Apr 29 2010 at 0:17
I vaguely remember a talk by Chern on ideas similar to yours. His collected works should be available. I can't say I recall the exact trick for writing ODE as an exterior differential system. The name most directly linked, these days, with this methodology is Robert Bryant, currently Director of msri. Anyway, look at some of the suggested reading, and authors, in en.wikipedia.org/wiki/Differential_system and for a ton of examples of prolongation without tons of vector bundles, Global Differential Geometry of Surfaces by Alois Svec. – Will Jagy Apr 29 2010 at 5:17
@Lucas: Ah, nice point. I'd been 2-focused on 2-forms to notice the 1-form relation. @Will: Ah, yes, I'd heard that name as a reference as well. Thanks for the references. I'll go check them out. (Or if you or anyone else can explain the tie-in, I'd be happy to accept the answer, even if it doesn't fully finish off the question.) – Cam McLeman May 1 2010 at 2:22
## 1 Answer
What you are looking for nowadays goes by the name of the Rumin complex and is defined on any contact manifold. Moreover, there is a vast generalization of this that sometimes goes by the name of 'the variational bicomplex' and sometimes by the name 'characteristic cohomology'. Here is a brief description that is suited for the question you asked:
On $M = \mathbb{R}^3$ with coordinates $x,y_0,y_1$, consider the differential ideal $\mathcal{I}\subset\Omega^\ast(M)$ generated by the $1$-form $\omega = dy_0 - y_1\ dx$, i.e., $\mathcal{I}$ is the set of linear combinations of all multiples of $\omega$ and $d\omega$. Note that $\mathcal{I}$ is a homogeneous ideal and equals $\Omega^\ast(M)$ in degrees 2 and 3. Because $\mathcal{I}$ is closed under exterior derivative, it is a sub-complex of $\bigl(\Omega^\ast(M),d\bigr)$. Thus, there is a graded quotient complex, call it $\bigl(\mathcal{Q},\bar d\bigr)$, that vanishes in degrees above $1$. Note that $\mathcal{Q}^0 = \Omega^0(M)= C^\infty(M)$, since $\mathcal{I}$ vanishes in degree $0$.
Now, say that an element $\phi \in \mathcal{Q}^1$ is exact if $\phi = \bar d f$ for some $f\in \mathcal{Q}^0 = \Omega^0(M)= C^\infty(M)$. Unfortunately, unlike $\bigl(\Omega^\ast(M),d\bigr)$, the complex $\bigl(\mathcal{Q},\bar d\bigr)$ is not locally exact in positive degree. In fact, $\bar d \phi =0$ for all $\phi\in\mathcal{Q}^1$, even though $\bar d: \mathcal{Q}^0\to \mathcal{Q}^1$ is not onto.
Let me pause just a moment to explain how this fits into your question. Your equation $P(x)y'' + Q(x)y' +R(x)y = 0$ is encoded as the $1$-form $\phi = P(x) dy_1 + (Q(x)y_1 + R(x) y_0) dx$ (which represents the same class as the $1$-form $P(x) dy_1 + Q(x)dy_0 + R(x) y_0 dx$ in $\mathcal{Q}^1$), and you are asking when there is a function $f(x,y_0,y_1)$ such that $\phi = \bar d f$. (You should verify that $f = P(x) y_1 + (Q(x)-P'(x))y_0$ works when your equation is satisfied and that, otherwise, nothing does.)
Now, how can we test for exactness in this sense? This is where the Rumin complex (aka the variational bicomplex, etc.) comes in. It turns out that there is a way to embed the operator $\bar d:\mathcal{Q}^0\to\mathcal{Q}^1$ into a complex that provides a fine resolution of the constant sheaf, the same way that the exterior derivative does for the full complex of exterior differential forms.
What you do is this: Let $\mathcal{E}^2\subset\Omega^2(M)$ be the set of multiples of $\omega$ and let $\mathcal{E}^3=\Omega^3(M)$.
We now want to define a complex $$0\longrightarrow \mathcal{Q}^0 \buildrel{\bar d}\over\longrightarrow \mathcal{Q}^1 \buildrel{D}\over\longrightarrow \mathcal{E}^2 \buildrel{d}\over\longrightarrow \mathcal{E}^3 \longrightarrow 0.$$ The map from $\mathcal{E}^2$ to $\mathcal{E}^3$ is the usual exterior derivative, so the only thing left to define is the map $D:\mathcal{Q}^1\to \mathcal{E}^2$. To do this, we first define a (first-order) operator $\delta: \mathcal{Q}^1\to\Omega^1(M)$, by requiring that $\delta(\phi)$ be a 1-form representing $\phi$ in the quotient complex and that $d\bigl(\delta\phi\bigr)$ lie in $\mathcal{E}^2$, i.e., that it be a multiple of $\omega$. (I'll let you write down the formula for $\delta$ in local coordinates.) Now, define $D\phi$ to be $d\bigl(\delta\phi\bigr)$. (Sounds almost trivial doesn't it?) The operator $D$ is easily verified to be second order and linear.
Now, it is not hard to verify that this complex is locally exact in positive degrees. (It also gives a fine resolution of the constant sheaf, so its cohomology on $M$ is canonically isomorphic to the deRham cohomology of $M$.) In particular, the local condition that $\phi\in\mathcal{Q}^1$ be exact is that $D\phi=0$.
You should verify (after you have defined $D$) that this reproduces your condition precisely in the linear case you asked about.
-
This is fantastic -- thanks very much. – Cam McLeman Aug 11 2011 at 16:31
http://mathhelpforum.com/number-theory/48024-determine-number-terminates-base-m-but-will-not-terminate-base-n.html
Thread:
1. Determine that a number terminates in base m but will not terminate in base n?
Is there a way to determine, for some number $a$ written in base m, whether it will or will not have a terminating equivalent number $b$ in base n?
For example, when converting number 0.315 in base 10 to base 2, we get the base 2 number 0.01010000101000111101011100001010... I assume this will be nonterminating. How can I actually prove that this number will be nonterminating?
Thanks
2. In base $b$, the numbers which "terminate" are of the form $\frac{a}{b^k}$ for some $a,k\in\mathbb{Z}$.
A necessary and sufficient condition for a rational ${p\over q}$, where $p,q$ are relatively prime, to "terminate" in base $b$ is therefore that the prime divisors of $q$ are prime divisors of $b$. (which is equivalent to saying that $q$ divides $b^k$ for some $k$)
For instance, in base 10, ${p\over q}$ "terminates" iff the only prime divisors of $q$ are 2 and 5 (supposing again that $gcd(p,q)=1$). In base 2, $q$ has to be even.
I hope I was clear. If not, just ask.
Laurent.
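A small Python sketch of this criterion, applied to the 0.315 example from the question (the function name is mine):

```python
# p/q in lowest terms terminates in base b exactly when every prime
# factor of q also divides b.
from math import gcd
from fractions import Fraction

def terminates(frac: Fraction, base: int) -> bool:
    q = frac.denominator                 # Fraction is already in lowest terms
    while (g := gcd(q, base)) > 1:       # strip prime factors shared with the base
        while q % g == 0:
            q //= g
    return q == 1

x = Fraction(315, 1000)                  # 0.315 = 63/200
print(terminates(x, 10))                 # True:  200 = 2**3 * 5**2
print(terminates(x, 2))                  # False: the factor 5 does not divide 2
```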
http://physics.stackexchange.com/questions/27785/what-compactifications-of-the-poincare-group-have-been-studied
# What compactifications of the Poincare group have been studied?
As we know, the Poincare group is non-compact. Poincare invariance has been observed at velocities and energies up to $10^{20}$ eV in cosmic rays. The other day I was thinking about how the homomorphism from $SU(2)$ onto $SO(3)$ makes $SU(2)$ a double cover of $SO(3)$, and I keep wondering if something like that could exist for the Poincare group, but of course the main problem is that the group is not compact.
I wonder if it is possible at all to make a compactification of the group that is consistent with low-energy physics and still preserves some form of isotropy of space-time. For instance, I considered identifying the different connected components (either CP or PT inverted) of the group at some boundary consistent with energies of the order of $10^{28}$ eV, with meaningful dimensional analysis, but I have not succeeded in analysing the symmetry properties of the resulting manifold or its algebraic properties (is it still a Lie group after such an identification?)
The physical interpretation of such an identification is up to discussion, but I think that it would basically stand for a duality that maps continuously (in the concrete example compactification I gave) particles with energies above $E_p$ (some arbitrary boundary energy) to particles with energy below $E_p$ and $P$ or $CP$ reversed. This latter would make, for instance, electric charge conservation an approximate symmetry.
Has something like this been attempted? or are there good reasons known why this could not work?
-
Could you explain what you mean by a group compactification? Do you have an example in mind? The connected component of the identity of the Lorentz group $O(1,3)$ is isomorphic to $PSL_2(\mathbf{C})$, which has an obvious double cover $SL_2(\mathbf{C})$. This also extends to the Poincare group. – Pavel Safronov Nov 19 '11 at 18:15
Well, that's the part I'm not sure about, because I don't know if there is a well-defined compactification procedure for groups as there is for manifolds. What I was hoping is to take the Lie group as a manifold, apply the compactification (basically by identifying it with other stuff at a prescribed boundary), and see what "needs to happen" at the boundary so that the resulting manifold is still a Lie group – lurscher Nov 19 '11 at 18:29
The double cover of the identity component of the Poincare group is a standard object. There is no problem with the group being non-compact – Squark Nov 19 '11 at 19:02
## 1 Answer
You cannot embed the Poincare group or the Lorentz group into a compact Lie group $G$. Indeed, denote the Lie algebra of $G$ as $\mathfrak{g}$ and the Lorentz algebra as $\mathfrak{l}=\mathfrak{o}(1,3)\cong\mathfrak{sl}_2(\mathbf{C})$.
The Killing form on $\mathfrak{g}$ is negative semidefinite, but then so is its restriction to $\mathfrak{l}$. The restriction of the Killing form on $\mathfrak{g}$ to $\mathfrak{l}$ is $\mathfrak{l}$-invariant and is therefore proportional to the Killing form on $\mathfrak{l}$, since the latter is a simple real Lie algebra. Finally, the Killing form on $\mathfrak{l}$ has signature $(3,3)$, contradiction.
By the same reasoning, you cannot mod out by a discrete subgroup of the Lorentz group and get a compact group: the Lie algebra does not change, so the Killing form cannot become negative semidefinite.
On the other hand, there are well-known 'compactifications' of the translation group. You can either mod out by $\mathbf{Z}$ to get $\mathbf{R}/\mathbf{Z}=S^1$ or immerse $\mathbf{R}\rightarrow T^2$ as an irrational winding, depending on what kind of compactifications you are interested in.
-
http://mathhelpforum.com/calculus/120374-find-maximum-size-cylinder-fits-sphere.html
# Thread:
1. ## Find maximum size of Cylinder that fits in sphere
I'm kinda stuck on this.
I need to find the maximum size of a Cylinder that fits in sphere of radius 1.
Ive come up with the following, .. am I on the right track so far?
Volume of Cylinder = Pi(r^2)h = Pi(r^2)(sqrt(1^2 - r^2))
Volume of Sphere = 4/3(Pi)(r^3)
Pi(r^2)(sqrt(1^2 - r^2)) = 4/3(Pi)(r^3)
Now I just need to solve for r. Is this correct? Where does the differentiation come into it?
2. Originally Posted by floater
I'm kinda stuck on this.
I need to find the maximum size of a Cylinder that fits in sphere of radius 1.
Ive come up with the following, .. am I on the right track so far?
Volume of Cylinder = Pi(r^2)h = Pi(r^2)(sqrt(1^2 - r^2))
Volume of Sphere = 4/3(Pi)(r^3)
You are good up to here.
Pi(r^2)(sqrt(1^2 - r^2)) = 4/3(Pi)(r^3)
No, the volume of the cylinder is not equal to the volume of a sphere. You have already used "fits inside a sphere of radius 1" to deduce that $h= \sqrt{1- r^2}$. Differentiate $\pi r^2(1- r^2)^{1/2}$ and set the derivative equal to 0 to find the r that gives maximum volume.
Now I just need to solve for r. Is this correct? Where does the differentiation come into it?
3. $\pi r^2(1-r^2)^{1/2}$
Well I make the derivative of the above to be:
$2\pi r(1-r^2)^{1/2} + \pi r^2\cdot\tfrac{1}{2}(1-r^2)^{-1/2}\cdot(-2r)$
I dont know how i would factor that out. Could you give me some hints?
4. Am I right in thinking that I have to factor it first? Into something like
()() = 0
?
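For what it's worth, here is a sympy sketch of the optimisation step being discussed, using the thread's expression for the volume. (If the cylinder's full height spans the unit sphere one would instead write $h = 2\sqrt{1-r^2}$, which doubles the volume but gives the same optimal $r$.)

```python
# Maximise V(r) = pi * r**2 * sqrt(1 - r**2) by solving dV/dr = 0.
import sympy as sp

r = sp.symbols('r', positive=True)
V = sp.pi * r**2 * sp.sqrt(1 - r**2)

critical = sp.solve(sp.Eq(sp.diff(V, r), 0), r)
print(critical)                              # [sqrt(6)/3], i.e. r = sqrt(2/3)
print(sp.simplify(V.subs(r, critical[0])))   # the corresponding maximal volume
```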
http://mathoverflow.net/questions/48411/are-there-any-abelian-2-groups-with-complete-holomorphs-other-than-c2-2-and-c/48431
## Are there any abelian 2 groups with Complete Holomorphs Other than $C^2_2$ and $C^4_2$?
The title says it. I'm reviewing some group theory concepts I haven't touched in quite a while, since I don't teach abstract algebra in my current position, and could not find the answer to this question. I've searched on google and found some papers that discuss other types of groups, but not 2-groups. I know that the holomorph of all non-trivial finite abelian groups of odd order are complete groups, due to a theorem by Miller(1908), where complete means trivial center and all automorphisms are inner. Also, since the holomorph of a finite abelian group is the direct product of the holomorphs of its Sylow subgroups, again Miller(1903?), does that imply the following:
Let G be an even ordered abelian group. If the holomorph of the Sylow 2-subgroup of G is complete then the holomorph of G is complete.
As an added note, the automorphism group of $C^n_2$ is isomorphic to $PSL(n,2)$ since $C^n_2$ can be thought of as an $n$ dimensional vector space over the finite field $Z_{2}$. In the case where $n$ = 4, $PSL(4,2)$ is isomorphic to $A_{8}$. Is this connected to the holomorph being complete? Is the answer known for $C^n_2$, if not for all 2-groups?
Thanks
-
## 2 Answers
The completeness of ${\rm AGL}(n,2)$ for $n \ne 3$ follows easily from the following two facts:
1. ${\rm GL}(n,2)$ is complete for all $n \ge 1$.
CORRECTION: Sorry - that was very careless of me! As Greg and Jack have pointed out, ${\rm GL}(n,2)$ is NOT complete for $n \ge 3$. The inverse-transpose automorphism which, in terms of groups of Lie type is the graph automorphism of groups of Lie-type $A_n$, is an outer automorphism of ${\rm GL}(n,2)$. However, as Jack pointed out, this does not induce an automorphism of ${\rm AGL}(n,2)$, because it interchanges subspaces of $V$ of dimension $r$ with subspaces of dimension $n-r$, and so does not act on $V$ itself.
2. The cohomology group $H^1({\rm GL}(n,2), V)$ (with $V$ the natural module) is trivial for $n \ne 3$. This is equivalent to saying that for $n \ne 3$ all complements of the elementary abelian normal subgroup of order $2^n$ in ${\rm AGL}(n,2)$ are conjugate in ${\rm AGL}(n,2)$.
I believe that the second of these was first proved in:
D.G. Higman, Flag-transitive collineation groups of finite projective spaces. Illinois J. Math. 6 (1962), 434-466.
I don't know when the first statement was first proved, but it follows from the general theory of automorphism groups of finite groups of Lie type.
Note that, for $n=3$, there are two conjugacy classes of complements of the $2^3$ normal subgroup, and they are interchanged by an outer automorphism of ${\rm AGL}(3,2)$.
-
Thanks for the reference! This might help with my older question: math.stackexchange.com/questions/5226/… ? At the very least, this gives a very clear sufficient condition. – Jack Schmidt Dec 6 2010 at 14:36
@Derek Holt Sorry, I'm a little confused. GL(4,2) is isomorphic to $A_8$, which is a non-abelian finite simple group, but is not complete. – Greg Gibson Dec 6 2010 at 17:54
@Greg: Good catch! The general theory shows that Out(GL(n,2)) = 2 for n ≥ 3. The extra automorphism takes a matrix to its inverse-transpose (conjugates by a transposition in the A8 form). I guess this automorphism is induced by conjugation by some element of V. Derek's argument fits really nicely into a decomposition of Aut(G|xV) for any semi-direct product, not just holomorphs, but it does need G complete to clearly work. – Jack Schmidt Dec 6 2010 at 19:48
Oh, I see: the inverse transpose automorphism of GL(n,2) does not survive to AGL(n,2), because it takes V to a non-isomorphic module, V*. It is an automorphism of GL, but it does not commute with the action on V (the C2^n). Let me know if you want me to write up the semi-direct product stuff. Derek probably has a clearer description. – Jack Schmidt Dec 6 2010 at 19:52
@Jack Schmidt Thanks, I'd appreciate some more insight. – Greg Gibson Dec 7 2010 at 2:22
I believe the holomorph of the elementary abelian group of order 2^n is complete unless n=1 or 3. It is called AGL(n,2), the affine general linear group of dimension n over the field of size 2.
In other words, n=4 isn't the exception, but rather the rule. GL(3,2) is all sorts of weird, and I guess GL(1,2) is just too small.
The holomorph of 4×4×2×2 is also complete. There are no other (non-elementary) abelian groups of order dividing 32 whose holomorph is complete.
-
http://math.stackexchange.com/questions/177488/computing-the-volume-of-a-set-efficiently?answertab=oldest
# Computing the volume of a set efficiently
Given a set of vectors $\mathbf v_i$ for $i=1,\dots,k$, $\mathbf v_i \in \{0,1\}^n$, is it possible to efficiently find the volume of the set,
$$\left\{\mathbf x \in [0,1]^n:\mathbf x \le \sum_{i=1}^k \alpha_i\mathbf v_i\ \text{for some $\alpha_i$}\right\},$$ such that $\sum_i \alpha_i=1$ and $\alpha_i \ge 0$. The comparison $\mathbf x \le \sum_{i=1}^k \alpha_i\mathbf v_i$ is taken componentwise.
Note: the above question arose from another question asked by me previously.
-
To all concerned: This is the union of the boxes $[0,x_1]\times[0,x_2]\times\cdots\times[0,x_n]$ for all $\mathbf x=(x_1,x_2,\ldots,x_n)$ in the convex hull of the input vectors. – Rahul Narain Aug 1 '12 at 8:43
@Managu: Thanks for your comment. I don't know how to fix it either. Maybe it's not editable at all? – Mohsen Aug 3 '12 at 20:16
## 1 Answer
I believe we can reduce the stated problem to your initial question about finding the volume of a convex polytope.
First, the set you describe, call it $A$, is convex. In fact, we can specify a generating set for it. For each 0-1 vector $\mathbf b\in\{0,1\}^n$, let $p_\mathbf b(\mathbf x)=(x_1\cdot b_1, x_2\cdot b_2,\ldots,x_n\cdot b_n)$. That is, $p_\mathbf b$ takes a vector $\mathbf x$ and, in each place, either replaces the coordinate with $0$ or leaves it alone, according to whether the corresponding entry of $\mathbf b$ is $0$ or $1$. Then $A$ is the convex hull of $P=\left\{p_\mathbf b(\mathbf{v}_i) : 0<i\leq k, \mathbf{b}\in \{0,1\}^n\right\}$. Note that this set contains at most $k\cdot 2^n$ elements.
So we're back to, given a finite set, how do we calculate the volume of its convex hull? A little reading around suggests that this is a hard problem. See, e.g. http://mathoverflow.net/questions/979/algorithm-for-finding-the-volume-of-a-convex-polytope.
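For small $n$ this reduction can be carried out directly with scipy's QHull wrapper; a sketch (the helper name is mine, and QHull needs the resulting point set to be full-dimensional):

```python
# Volume of the region via the reduction above: build P by zeroing out
# coordinates of each v_i in all possible ways, then take conv(P).
from itertools import product
import numpy as np
from scipy.spatial import ConvexHull

def region_volume(vectors):
    vectors = np.asarray(vectors, dtype=float)
    n = vectors.shape[1]
    pts = {tuple(v * np.array(b)) for v in vectors
           for b in product((0.0, 1.0), repeat=n)}
    return ConvexHull(np.array(sorted(pts))).volume

# toy example: n = 3, k = 2; the region is {x + z <= 1} in the unit cube
print(region_volume([(1, 1, 0), (0, 1, 1)]))   # 0.5
```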
-
Everything I've looked at suggests that really bad behavior tends to happen as the dimension goes up moreso than as the number of points goes up. And a lot of the funky examples are given in the halfspace representation of a polytope, instead of the vertex representation as I'm discussing here. If your dimension is fixed, I suspect the volume can be computed in time `poly(|V|)`. – Managu Aug 7 '12 at 2:22
Going in the other direction, the complex you describe subsumes the idea of the chain polytope of a poset, given in terms of the poset's maximal antichains. This is #P-hard to compute given just a description of the poset (it has the same volume as the order polytope which your link talks about). But this doesn't prove hardness of your question -- a poset may have a number of maximal antichains exponential in its description size. – Managu Aug 7 '12 at 3:00
http://mathhelpforum.com/calculus/129945-need-demonstrate-slope-strictly-positive.html
# Thread:
1. ## Need to demonstrate that the slope is strictly positive
If I have a multivariable equation whose slope is always positive, say $f(x,y,z)=x^2+y^2+z^2$ how do I demonstrate that the slope is always positive?
I imagine this involves partial derivatives but need some guidance.
Thanks
2. Originally Posted by rainer
If I have a multivariable equation whose slope is always positive, say $f(x,y,z)=x^2+y^2+z^2$ how do I demonstrate that the slope is always positive?
I imagine this involves partial derivatives but need some guidance.
Thanks
How do you define the slope of a multivariable function??
Tonio
3. Oh yeah, good point.
Let me give a few more parameters.
First, I reduce the equation to 3 variables:
$f(x,y)=x^2+y^2$
So it's a 3D graph. I am interested in the slope on the 2D x-y plane or "cross-section" of the origin.
Does that narrow it down enough?
4. I don't understand your definition. The definition I am familiar with says a function $f:\mathbb{R}^n\to \mathbb{R}$ has positive slope iff for every injective curve $c=(c^1,\cdots,c^n):(a,b)\to \mathbb{R}^n$ whose coordinate functions $c^i$ are increasing, the derivative of $f \circ c:(a,b)\to \mathbb{R}$ is positive. Is this what you want?
5. Originally Posted by maddas
I don't understand your definition. The definition I am familiar with says a function $f:\mathbb{R}^n\to \mathbb{R}$ has positive slope iff for every injective curve $c=(c^1,\cdots,c^n):(a,b)\to \mathbb{R}^n$ whose coordinate functions $c^i$ are increasing, the derivative of $f \circ c:(a,b)\to \mathbb{R}$ is positive. Is this what you want?
Hmmm, this definition is really cool-looking. But not understanding the half of it I will have to say I don't know if it's what I need or not.
It looks like I need to ruminate and clarify whatever it is I am trying to ask. So let's leave off here and maybe I'll post a clarified version of my question in a new thread.
Thanks a lot
6. The difficulty appears to be that you do not know what you mean by "slope" of a multivariable function. In order to be able to talk about slope being positive, you must mean it to be a number, but what number?
http://physics.stackexchange.com/questions/30366/why-wet-is-dark/30436
# Why wet is dark?
When something gets wet, it usually appears darker. This can be observed with soil, sand, cloth, paper, concrete, bricks ...
What is the reason for this? How does water soaking into the material change its optical properties?
-
## 3 Answers
When you look at a surface like sand, bricks, etc, the light you are seeing is reflected by diffuse reflection.
With a flat surface like a mirror, light falling on the surface is reflected back at the same angle it hit the surface (specular reflection) and you see a mirror image of the light falling on the surface. However a material like sand is basically lots of small grains of glass, and light is reflected at all the surfaces of the grains. The result is that the light falling on the sand gets reflected back in effectively random directions and the reflected light just looks white.
The reflection comes from the refractive index mismatch at the boundary between air (n = 1.0003) and sand (n $\approx$ 1.54). Light is reflected from any refractive index change. So suppose you filled the spaces between the sand grains with a liquid of refractive index 1.54. If you did this there would no longer be a refractive index change when light crossed the boundary between the liquid and the sand, so no light would be reflected. The result would be that the sand/liquid would be transparent.
And this is the reason behind the darkening you see when you add water to sand. The refractive index of water (n = 1.33) is less than sand, so you still get some reflection. However the reflection from a water/sand boundary is a lot less than from an air/sand boundary because the refractive index change is less. The reason that sand gets darker when you add water to it is simply that there is a lot less light reflected.
The same applies to brick, cloth, etc. If you look at a lot of material close up you find they're actually transparent. For example cloth is made from cotton or man made fibres, and if you look at a single fibre under a microscope you'll find you can see through it. The reason the materials are opaque is purely down to reflection at the air/material boundaries.
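A rough single-boundary estimate of the effect, using the normal-incidence Fresnel formula and the refractive indices quoted above (a sketch; real grains involve many boundaries and oblique angles):

```python
# Normal-incidence reflectance R = ((n1 - n2) / (n1 + n2))**2
def reflectance(n1, n2):
    return ((n1 - n2) / (n1 + n2)) ** 2

n_air, n_water, n_sand = 1.0003, 1.33, 1.54
print(reflectance(n_air, n_sand))     # ~0.045 per boundary (dry grains)
print(reflectance(n_water, n_sand))   # ~0.005 per boundary (wet grains)
```

Each grain boundary scatters roughly an order of magnitude less light when the gaps are filled with water, which is why the wet material looks darker.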
-
The answer sounds plausible, however there is one thing which is still unclear to me: why, when the wet surface freezes, does it turn bright again (often even brighter than before)? The refractive index of ice is more or less the same as for water. Perhaps the ice is not filling the space in between the grains as the liquid water did? – Suma Jun 19 '12 at 17:41
I haven't done the experiment, but freezing may form a fine film of ice crystals on the top surface, and you'd then get diffuse scattering from the ice crystals. Maybe I'll try putting a water sand mixture in my freezer to see what happens ... – John Rennie Jun 19 '12 at 17:46
The effect seems to be related to the fuzziness of a surface.
Dry cloth is very fuzzy and therefore reflects light in more directions. If you wet it you bend the small fibers on the surface towards it, so that total reflection occurs in fewer directions.
It is comparable to glass, which also seems white when shattered into small enough pieces but only reflects into a small interval of angles when flat and intact. Another example is soap, which has some color until you make a foam out of it.
-
This explanation cannot possibly be true for sand or soil - there are no fibers to bend on it. – Suma Jun 19 '12 at 17:42
That's right. In sand or soil the liquid fills up the space between particles and thereby smooths the surface. This effect additionally takes place in fibers, of course. – user9886 Jun 20 '12 at 8:37
I was told by a physicist that it is because sand clumps together when it is wet, so there is less surface area within it that can reflect the light.
-
This would not explain the phenomenon for concrete, asphalt ... – Suma Jun 20 '12 at 20:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9460586905479431, "perplexity_flag": "middle"}
|
http://crypto.stackexchange.com/questions/933/when-is-an-asymmetric-scheme-considered-broken
|
# When is an asymmetric scheme considered broken?
Does the following quote imply that valid encrypted data can be created and decrypted by someone other than the owner of a private key:
An asymmetric encryption scheme is considered to be broken if an attacker can decrypt a given ciphertext, even if he can convince you to decrypt arbitrary other ciphertexts
What conditions must exist for this to occur? (Specifics would be appreciated)
-
## 1 Answer
I think you are referring to Colin Percival's Everything you need to know about Cryptography, in one hour in which he observes:
An asymmetric authentication scheme is considered to be broken if an attacker with access to the verification key can generate any valid ciphertext, even if he can convince you to sign arbitrary other plaintexts.
This is called a chosen ciphertext attack, abbreviated CCA in the literature.
I'm going to look at the RSA case. In the case of RSA, decryption given a ciphertext $c$ is $r = c^d \mod n$ and signing is $s = p^d \mod n$. As you have no doubt gathered, these both use $d$, the private key. So for the purposes of RSA, the two operations are related (this is a trapdoor permutation). However, assuming they weren't, an attacker who can convince you to sign arbitrary plaintexts may still be able to compromise the signing key and pass themselves off as you.
Next up, actually exploiting a cryptosystem such as RSA. This paper discusses encrypting RSA properly (the use of padding) but also contains the observation that:
$$\epsilon_{n,e}(m_1)\,\epsilon_{n,e}(m_2) = \epsilon_{n,e}(m_1 m_2) \pmod{n}$$
Re-writing this using slightly more familiar notation:
$$(m_1^e \mod n)(m_2^e \mod n) = m_1^e m_2^e \mod n = (m_1m_2)^e \mod n$$
Now, the paper notes that choosing $C' = C\cdot 2^e$ gives $C' = m^e 2^e$, which is equivalent to $(2m)^e$. Decrypted, this is $\left((2m)^e\right)^d = 2m \bmod n$, and so the attacker, having access to $2m \bmod n$, knows $m$.
From a practical point of view, the attacker still needs to acquire the decrypted output. The attack doesn't make any assumptions as to how they achieve that, but it does demonstrate an exploit of RSA. To counter this, continue reading that paper; specifically, one should use an appropriate padding scheme.
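To make the multiplicative trick concrete, here is a toy sketch of the blinding attack on "textbook" (unpadded) RSA. The tiny primes and the message value are made up purely for illustration; they are not from the paper or the original answer.
````
# Toy blinding attack on textbook RSA (no padding), small numbers for clarity.
p, q, e = 61, 53, 17
n = p * q                              # modulus
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent

m = 123                                # message the attacker wants to recover
c = pow(m, e, n)                       # intercepted ciphertext

c_blinded = (c * pow(2, e, n)) % n     # C' = C * 2^e, looks unrelated to C
m_blinded = pow(c_blinded, d, n)       # victim obligingly decrypts C' -> 2m mod n
recovered = (m_blinded * pow(2, -1, n)) % n   # divide by 2 mod n
print(recovered == m)                  # True
````
Destroying this multiplicative structure is exactly the job of the padding schemes the answer refers to.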
Any crypto-system which has a similar homomorphic property will be vulnerable. For example, these slides explain similar behaviour in ElGamal.
-
Actually, the quote given in the question is also in the same document (page 20). – Paŭlo Ebermann♦ Oct 10 '11 at 10:40
@PaŭloEbermann ah, so it is! – Antony Vennard Oct 10 '11 at 10:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9267410039901733, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/55250/how-to-tell-if-a-complex-exponential-blows-up?answertab=votes
|
# How to tell if a complex exponential blows up
I'm following Griffiths' Introduction to Quantum Mechanics, where he's discussing the general solution to the delta-function potential problem. The solution he refers to is $$\psi(x)=Ae^{ikx}+Be^{-ikx}$$
Now, he says that neither of the terms blows up. The case he is dealing with is $x<0$. I cannot see this, because it seems to me that the second term blows up if $x$ is negative. I suspect he is using some argument that involves the complex nature of the terms. I would like to know if indeed this is the case. If so, what is the argument to show that the terms do not blow up? Thanks.
-
3
One could expand these terms as sines and cosines, and then they would be finite regardless of the value of x. Is this the correct method? – Joebevo Feb 27 at 3:46
## 1 Answer
$$\psi(x)=Ae^{ikx}+Be^{-ikx}$$
Therefore,
$$\psi(x)=A\cos({kx}) + iA\sin(kx)+B\cos({-kx}) + iB\sin(-kx)$$
To arrive at the above equation, I utilised the fact that $e^{\theta i} = \cos\theta + i\sin\theta$.
Now I can further simplify the above expression to give:
$$\psi(x)=A\cos({kx}) + iA\sin(kx)+B\cos({kx}) - iB\sin(kx)$$
Therefore, $$\psi(x)=(A+B)\cos({kx}) + i(A-B)\sin(kx)$$
Therefore $\psi$ is a complex function with an oscillating real part, and an oscillating imaginary part. Also, evidently, the function does not "blow up".
So in conclusion you are correct: it is the $i$ in the exponents that prevents the terms from blowing up; without it, one of the real exponentials would grow without bound as $x \to -\infty$.
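A quick numerical check (my addition, not part of the original answer) makes the same point: the modulus of $e^{-ikx}$ stays equal to $1$ no matter how negative $x$ gets, because $kx$ is real and appears only in the phase.
````
# |exp(-i k x)| = 1 for real k and x, so neither term of psi grows.
import cmath

k = 2.0
for x in (-0.5, -5.0, -50.0):
    print(x, abs(cmath.exp(-1j * k * x)))   # prints 1.0 each time
````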
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.947637677192688, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/72059/matrix-inversion-lemma-with-pseudoinverses
|
## Matrix inversion lemma with pseudoinverses
The utility of the Matrix Inversion Lemma has been well-exploited for several questions on MO. Thus, with some positive hope, I'd like to field a question of my own.
Suppose we pick $n$ values $x_1,\ldots,x_n$, independently sampled from $N(0,1)$ (mean 0, unit variance gaussian). Then, we form the (rank 3 at best) positive semidefinite matrix: $$A = \alpha ee^T + [\cos(x_i-x_j)],$$ where $e$ denotes the vector of all ones, and $\alpha > 0$ is a fixed scalar.
For $n \ge 3$, simple experiments lead one to conjecture that: $$e^TA^\dagger e = \alpha^{-1},$$ where $A^\dagger$ is the Moore-Penrose pseudoinverse of $A$ (obtained in Matlab using the 'pinv' function).
This should be fairly easy to prove with the right tools, such as a Matrix inversion lemma that allows rank deficient matrices or pseudoinverses. So my question is:
How to prove the above conjecture (without too much labor, if possible)?
-
1
If you just let $x_1,\dots,x_n$ be distinct scalars instead of specifying a particular probability distribution, shouldn't you get the same result? – Michael Hardy Aug 4 2011 at 4:13
Actually, I suspected it to be true as long as all the $x$'s were such that their contribution remains independent of $ee^T$ (Mikael makes this explicit in the answer below) – S. Sra Aug 4 2011 at 16:08
Why can the rank never exceed 3? – Michael Hardy Aug 5 2011 at 21:22
1
@Michael: expand $\cos(x-y)=\cos x\cos y + \sin x \sin y$ to notice that $A$ is a sum of three rank-1 matrices. – S. Sra Aug 5 2011 at 23:13
## 1 Answer
In fact more generally for any positive semidefinite matrix $A = \sum_{i=0}^k e_i e_i^T$ with $e_i$'s linearly independent, we have that $e_i^T B e_i = 1$, where $B$ is the Moore-Penrose pseudoinverse of $A$. This applies here since almost surely your matrix $A$ is of this form with $k=3$ and $e_1 = \sqrt \alpha e$.
Proof: Let $E$ be the linear span of the $e_i$'s. If I understood correctly the notion of Moore-Penrose pseudoinverse, $B$ is described in the following way: as a linear map, $B$ is zero on the orthogonal complement of $E$, and on $E$ it is the inverse of the restriction of $A$ to $E$. Let $\beta_{i,j}$ be defined by $B e_i = \sum_j \beta_{i,j} e_j$, so that $e_i^T B e_i = \sum_j\beta_{i,j} e_i^T e_j$. Expressing that $A B e_i = e_i$, we get in particular that $\sum_j\beta_{i,j} e_i^T e_j = 1$, QED.
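For what it's worth, a quick numerical check of the conjecture (my addition, not part of the original answer), using NumPy's `pinv`:
````
# Verify e^T pinv(A) e = 1/alpha for A = alpha*e*e^T + [cos(x_i - x_j)].
import numpy as np

n, alpha = 7, 2.5
x = np.random.randn(n)
e = np.ones(n)
A = alpha * np.outer(e, e) + np.cos(x[:, None] - x[None, :])
print(e @ np.linalg.pinv(A) @ e, 1 / alpha)   # both approximately 0.4
````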
-
Nice clean answer Mikael. It seems so easy once somebody proves it :-) thanks! – S. Sra Aug 4 2011 at 16:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9073982834815979, "perplexity_flag": "head"}
|
http://mathhelpforum.com/advanced-applied-math/175637-game-theory-hotelling-game-3-players.html
|
# Thread:
1. ## Game Theory: Hotelling game with 3 players
Hi,
The problem is relatively well known. In its basic form there are two firms competing either on location or on some product characteristic. They can each choose a number in [0;1] and the consumers are uniformly distributed along [0;1]. Each consumer buys from the firm that is closer to his preferences. The Nash equilibrium is not hard to foresee: both firms will end up at 0.5.
My question is what will happen if there are three firms? Intuitively it seems that there can be no Nash equilibrium.
Can anyone help me to show this algebraically, or at least logically?
Thank you
EDIT: An important assumption needed (especially) for the 3 player case is that if the consumers are indifferent between two or three firms they choose at random. This means that if all 3 firms are in the same location, they all get 1/3 of the profits.
2. i suggest this approach;
1) show that there is no pooling equlibrium (where they all choose the same number). This is easy enough - if all 3 firms are on the same spot they get 1/3; but if one firm moves one $\epsilon$ towards the centre, it will get at least 0.5. In the special case where all 3 firms are at the center already; any firm can increase profits by moving one $\epsilon$ away from the centre .
2) show that where at least one firm is not "on top of" the other two, it can increase its profits by moving slightly. By the same logic above, the firm on the "outside" of the group can increase profits by moving one $\epsilon$ towards the other two. The same logic would apply if all 3 firms were in different positions; either outside firm can increase profits by moving towards the middle one.
Since there is no equlibrium with all firms in the same spot, or with them separated, there is no equilibrium.
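For anyone who wants to see this concretely, here is a small brute-force check of my own (not part of the thread) on a discretised version of the game: consumers uniform on [0;1], ties split equally, and locations restricted to a grid with step 0.05. It finds no pure-strategy Nash equilibrium, matching the argument above (it takes a few seconds to run).
````
# Brute-force search for pure-strategy Nash equilibria in the 3-firm Hotelling
# game on a 0.05 grid.  Assumptions: uniform consumers, ties split equally.
import itertools

GRID = [round(0.05 * k, 2) for k in range(21)]   # 0, 0.05, ..., 1

def payoffs(locs):
    """Market share of each firm when every consumer buys from the nearest firm."""
    distinct = sorted(set(locs))
    share_at = {}
    for j, p in enumerate(distinct):
        left = 0.0 if j == 0 else (distinct[j - 1] + p) / 2
        right = 1.0 if j == len(distinct) - 1 else (p + distinct[j + 1]) / 2
        share_at[p] = right - left
    return [share_at[p] / locs.count(p) for p in locs]   # co-located firms split

def is_equilibrium(locs):
    """True if no firm can strictly gain by moving to another grid point."""
    base = payoffs(locs)
    for i in range(3):
        for x in GRID:
            trial = list(locs)
            trial[i] = x
            if payoffs(trial)[i] > base[i] + 1e-12:
                return False
    return True

equilibria = [locs for locs in itertools.product(GRID, repeat=3) if is_equilibrium(locs)]
print("pure-strategy equilibria found:", equilibria)     # expected: []
````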
3. Thanks, that's what I did!
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9567151069641113, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/41054/pade-approximation-usability-in-iterative-algorithms/41085
|
## Padé approximation - usability in iterative algorithms
Firstly, I have to say that I don't understand Padé approximation well.
But I have discovered that it is more precise than a Taylor series.
I have to create approximations for these functions: Log(x) and Tanh. And I have to create iterative algorithms (I must compute the result with variable precision).
So my questions are:
Is Padé approximation usable (and more efficient than simple Taylor series) for this task?
If no, is there any better way to approximate these functions?
-
## 1 Answer
A qualitative reason for using rational approximations (e.g. Padé) instead of polynomial ones (e.g. Taylor) is that rational approximations can exhibit behavior (e.g. poles and asymptotes) that polynomials are hard-pressed to emulate; they thus tend to be slightly more accurate (there are always exceptions, though).
What you can do that is equivalent to using a Padé approximant (SFAICT, there's no simple method for generating the coefficients of the numerator and denominator polynomials for the two functions you have, except by solving the appropriate Toeplitz system) is to use continued fraction expansions, which $\ln(1+x)$ and $\tanh(x)$ have by virtue of being expressible as hypergeometric functions.
Of course, for proper use, you have to perform appropriate argument reductions (e.g. for $\tanh$, compute $x^{\star}=\frac{x}{2^n}$ where $n$ is an appropriate integer such that $x^{\star}$ is "small" enough, evaluate the continued fraction at $x^{\star}$, and use the double-argument formula for $\tanh$ to undo your previous transformation).
For evaluating continued fractions, a (reasonably) robust way of going about it is to use the "modified Lentz method" due to Lentz, Thompson, and Barnett; the algorithm's details are in Numerical Recipes or Gil/Segura/Temme's Numerical Evaluation of Special Functions.
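As a concrete illustration (my own sketch, not from the answer), here is the modified Lentz iteration applied to the standard Lambert continued fraction $\tanh(x) = \cfrac{x}{1+\cfrac{x^2}{3+\cfrac{x^2}{5+\cdots}}}$; the argument-reduction step described above is omitted, so it is only meant for moderate $x$.
````
# Modified Lentz evaluation of tanh(x) via its Lambert continued fraction.
import math

def tanh_cf(x, eps=1e-15, max_terms=100):
    tiny = 1e-30
    # f = b0 + a1/(b1 + a2/(b2 + ...)) with b0 = 0, a1 = x, b1 = 1,
    # and a_n = x*x, b_n = 2n - 1 for n >= 2.
    f, C, D = tiny, tiny, 0.0
    a, b = x, 1.0
    for n in range(1, max_terms):
        D = b + a * D
        if D == 0.0:
            D = tiny
        C = b + a / C
        if C == 0.0:
            C = tiny
        D = 1.0 / D
        delta = C * D
        f *= delta
        if abs(delta - 1.0) < eps:
            break
        a, b = x * x, 2 * n + 1      # coefficients of the next level
    return f

print(tanh_cf(0.3), math.tanh(0.3))  # agree to about 15 digits
````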
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9153637290000916, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/181023/balanced-tree-number-of-nodes
|
# Balanced tree number of nodes
This should be a simple one but maybe I'm dumb or maybe I'm just tired, but how to prove that
$$n = 1 + 2^1 + 2^2 + \cdots + 2^h$$
is equal to
$$n = 2^{h+1} - 1$$
?
-
2
Find $2n - n$. Solve for $n$. – ladaghini Aug 10 '12 at 13:06
2
multiply the first expression by $(2 - 1)$ and expand. A bunch of stuff cancels and gives you the second expression. – countinghaus Aug 10 '12 at 13:19
– MJD Aug 10 '12 at 13:59
You can try induction. – ᴊ ᴀ s ᴏ ɴ Aug 10 '12 at 14:12
## 2 Answers
-
Thank you, I didn't think of the geometric series! – John Smith Aug 10 '12 at 13:15
It’s a straightforward geometric series, as noted by rbm, but there are other ways to see it.
(1) Write it in binary: $2^n$ in binary is a $1$ followed by $n$ zeroes. Thus, in binary you’re adding $$1+10+100+\ldots+1\underbrace{0\dots0}_h=\underbrace{1\dots1}_{h+1}\;.$$ But clearly $\underbrace{1\dots1}_{h+1}+1=1\underbrace{0\dots0}_{h+1}$, which is the binary representation of $2^{h+1}$. Thus, $$1+2^1+2^2+\ldots+2^h=2^{h+1}-1\;.$$
(2) Prove it by induction on $h$. It’s certainly true for $h=0$: $1=2^1-1$. Suppose that for some $h\ge 0$ we have $$1+2^1+2^2+\ldots+2^h=2^{h+1}-1\;.$$ Then
$$\begin{align*} 1+2^1+2^2+\ldots+2^h+2^{h+1}&=\left(1+2^1+2^2+\ldots+2^h\right)+2^{h+1}\\ &=\left(2^{h+1}-1\right)+2^{h+1}\\ &=2\cdot2^{h+1}-1\\ &=2^{(h+1)+1}-1\;, \end{align*}$$
as desired.
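If you just want to convince yourself numerically, a one-line brute-force check (my addition, not in the original answers) over small $h$:
````
# Check sum_{k=0}^{h} 2^k == 2^(h+1) - 1 for h = 0..19.
print(all(sum(2**k for k in range(h + 1)) == 2**(h + 1) - 1 for h in range(20)))
````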
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.950211226940155, "perplexity_flag": "middle"}
|
http://crypto.stackexchange.com/questions/6171/why-is-feed-forward-mechanism-used-in-hash-functions/6172
|
# Why is feed-forward mechanism used in hash functions?
The compression function of SHA-1 when used in Davies-Meyer mode adds its input to the chaining values at the final step. For the first message block, the IV is used as the input and in the next step, the previous block's output (as specified by Merkle-Damgard) is used. These values are fed to the last round of the compression function.
Why would removing the feed-forward part (i.e.,the final step where the input value is added to the result to get the final output) make the hash function insecure? Apart from the fact that the compression function would not satisfy desired randomness properties (being a permutation for each fixed message).
-
## 1 Answer
If you remove the feed-forward part, then you reduce the strength of the hash against preimage attacks. We expect a hash with an $N$ bit output to require $O(2^N)$ steps to find a preimage (that is, an input that hashes to a specified value). However, without the feed-forward, we can find a preimage with $O(2^{N/2})$ steps.
Here's why: without the feed-forward, the compression operation is reversible; if you know the output of the compression function and the block of data being hashed, you could compute the input to the compression function.
Here is how this works: suppose we have SHA-256 (to take a concrete example), and we want to find a preimage for a specific value $x$. We consider two-block messages (after padding); we construct $2^{128}$ initial blocks $A_i$ and compute the state $a_i$ of the SHA-256 hash after each such initial block.
We also construct $2^{128}$ final blocks (which include the SHA-256 internal padding) $B_j$ and compute the state $b_j$ that the SHA-256 hash would need to have just before processing $B_j$ (assuming that the final state of the SHA-256 hash is $x$).
We scan through the lists $a_i$ and $b_j$ looking for a match; if we find a specific pair with $a_i = b_j$, we then know that the message $A_i B_j$ will hash to the value $x$; that's because the initial block $A_i$ will set the SHA-256 state to $a_i$, and then the final block $B_j$ will convert that state $a_i$, which is the same as $b_j$, to the desired final state $x$. Thus, we have found a preimage of a 256-bit hash with about $2^{128}$ steps.
In essence, we use an internal collision to find the preimage. Adding the feed-forward makes this attack impossible.
(One note: if you try this attack against SHA-384, you find it takes $2^{256}$ steps, not $2^{192}$ as this analysis would seem to imply. It is still easier than the $2^{384}$ steps one would expect from a 384-bit hash.)
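To see the meet-in-the-middle idea in action, here is a toy sketch of my own (not from the answer): a Merkle-Damgard-style construction with a 16-bit state and a made-up invertible compression step standing in for "no feed-forward". The specific permutation, constants and block sizes are arbitrary choices for illustration, not anything used by SHA-2.
````
# Meet-in-the-middle preimage attack on a toy hash whose compression function
# has no feed-forward (and is therefore invertible in the state).
import random

MASK = 0xFFFF
MUL = 40503                       # odd, hence invertible mod 2**16
MUL_INV = pow(MUL, -1, 1 << 16)

def E(state, block):
    """Toy compression step: for each block, a permutation of the 16-bit state."""
    s = (state ^ block) & MASK
    s = ((s << 3) | (s >> 13)) & MASK          # rotate left by 3
    return (s * MUL) & MASK

def E_inv(state, block):
    """No feed-forward means the step can be run backwards."""
    s = (state * MUL_INV) & MASK
    s = ((s >> 3) | (s << 13)) & MASK          # rotate right by 3
    return (s ^ block) & MASK

IV, target = 0x1234, 0xBEEF

# Forward: states reachable after one first block A.
forward = {E(IV, a): a for a in random.sample(range(1 << 16), 512)}

# Backward: run candidate last blocks B in reverse from the target.
for b in random.sample(range(1 << 16), 512):
    mid = E_inv(target, b)
    if mid in forward:
        a = forward[mid]
        assert E(E(IV, a), b) == target
        print(f"preimage found: blocks ({a:#06x}, {b:#06x}) hash to {target:#06x}")
        break
else:
    print("no match in this sample; rerun or enlarge the tables")
````
With tables of roughly $2^{N/2}$ entries on each side a match is expected, which is the $O(2^{N/2})$ cost quoted above; adding a Davies-Meyer-style feed-forward would make the step non-invertible in the state and break the backward half of the search.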
-
btw. SHA-3 has no feed-forward, but its internal state is big enough to prevent this attack, just like it wouldn't be a problem in SHA-512/256. – CodesInChaos Jan 30 at 14:49
@CodesInChaos: the question was "why do they do something a Merkle-Damgard hash"; since SHA-3 isn't a M-D hash, how it works doesn't really apply. And, yes, SHA-512/256 avoids the problem; however, that was designed considerably after the SHA-512 compression function; the original reasoning did apply to how the SHA-512 compression function was envisioned to be used. – poncho Jan 30 at 16:09
ok, to simplify, if we look only at 1-block messages, then, with the feed-forward it is not possible to efficiently find pseudo-preimages, whereas without feed-forward this is easily done. then this extends to a regular preimage for 2-block messages as explained above – wu7 Feb 9 at 14:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9236269593238831, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/discrete-math/211979-question-intervals.html
|
# Thread:
1. ## Question on intervals
My question is:
i) Is the following statement true or false, and explain why: "If a is less than or equal to b, then (-infinity, a) is a subset of (-infinity, b)".
I drew a diagram which showed that this was true, but how would you go about proving it formally?
Any help would be appreciated
2. ## Re: Question on intervals
Assume $a \leq b$. Then if a number $p \in (-\infty, a)$, we have $p < a$, so $p < a \leq b$, thus $p < b$, and hence $p \in (-\infty, b)$.
3. ## Re: Question on intervals
The standard way to prove "A is a subset of B" is to start with "if $x\in A$", then use the definitions of A and B to conclude "$x\in B$".
To prove "If a is less than or equal to b, then (-infinity, a) is a subset of (-infinity, b)":
If $p\in (-\infty, a)$ then $p< a$. Since $a\le b$ and "$<$" is transitive, $p<a\le b$ so $p\in (-\infty, b)$.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9131286144256592, "perplexity_flag": "middle"}
|
http://nrich.maths.org/2280/note
|
# Odd Squares
### Why do this problem?
This problem is a wonderful example of a context in which a proof is accessible to children via an image with algebra not being required. It would be a good choice to try with your pupils once they are familiar with square numbers. As well as encouraging visualisation, it gives learners opportunities to conjecture, justify and generalise.
### Possible approach
You could introduce this problem orally. Ask each learner to think of a number and to go through the operations mentally. Invite everyone to jot down the result and share what they have with a neighbour. It might be a good idea to encourage pairs to check each other's arithmetic too! Ask pairs to talk about anything they notice about their two numbers. You could share some of these observations with the whole group. Go through the process again, asking each child to choose a different starting number and again, to note down the end result. Do all four numbers that each pair now has share any properties?
At this stage, you could collect results on the board for all to see. What do the class notice about all the numbers? Give them time to discuss why they think the result is always even in pairs or small groups. This is a chance for them to offer some suggestions, however 'polished' the explanation might be.
At this point, reveal the diagrams (or draw them on the board as you go through the steps again). Without saying anything else, give the group time to discuss further. Ask each pair or small group to develop an oral explanation which they can share with everyone. As a whole group then, you can create an explanation together which uses the pictures to prove, whatever the starting number, the result is always even. Learners may want to include further images. (It's important that learners understand that this will be the case whatever the starting number. The image given happens to be for a starting number of $5$. We can't draw images for every possible starting number so how do we know the result will always be even? This is the key to generalisation and proof.)
It would be great to try and capture this for a display. You could jot down the steps of the explanation on the board as the children build it up and then the final version could be put up on the wall with the problem itself and the images. It would be good to display any other proofs which the class has come up with alongside the visual proof as well.
### Key questions
What do you notice about the result each time?
Is this always going to be the case? How do you know?
Can you describe what is happening in the images?
What can you say about the pattern of dots on each side of the red line in the third image?
If there was a fourth picture, what could it look like?
What is the starting number in the picture?
Can you draw a similar series of pictures for different starting numbers?
### Possible extension
Picturing Triangle Numbers is another problem which focuses on visual proof. Although it leads into algebra, many children will be able to offer written or oral proofs.
### Possible support
Some learners might find it useful to use counters or cubes to represent the numbers and therefore to build up a picture of what is going on in this way. This will also allow them physically to take away the diagonal line which is crossed out in the third image if that helps.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9499742984771729, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/45854?sort=newest
|
## Explicit Coquasi-Triangular Quantised Coordinate Algebra of a Complex Semi-Simple Lie Group?
Let $SL_q(N)$ be usual quantised coordinate algebra of the special linear group. As is well-known, this is co-quasi-triangular algebra with coquasi-triangular structure given by $$R(u^i_j \otimes u^k_l) = q^{-\frac{1}{2}}.(q^{\delta_{ij}}\delta_{im}\delta_{jn} + (q-q^{-1})\theta (i-j)\delta_{in}\delta_{jm}),$$
Now consider the much more general case of $G_q$, the quantised coordinate algebra of a complex semi-simple Lie group. This is defined dually in terms of the Drinfeld-Jimbo quantised enveloping algebra $U_q(\mathfrak{g})$. As is also well-known, these algebras are co-quasi-triangular. My question is: does there exist a general formula for the co-quasi-triangular structure $R(u^i_j \otimes u^k_l)$ in terms of the Cartan data of $\mathfrak{g}$?
-
## 1 Answer
The coefficients of $R$ are essentially the coefficients of the braiding of the vector representation of $U_q(\mathfrak{g})$. So, more or less, you are asking for a general formula in terms of Cartan data for the braiding on the vector representation.
I've never seen a general formula for these, although the book by Klimyk and Schmudgen does give a general formula for the vector representations of the 4 infinite families of complex simple Lie algebras. The formulas for these are in Chapter 8. The stuff on coquasitriangularity of the corresponding quantized function algebras is in Chapter 10.
There is a pretty nice way to compute these things by hand if necessary. Remark 2.1 in the paper De Rham complex for quantized irreducible flag manifolds says:
Note that the braiding $\hat{R}$ is uniquely determined if one demands that [the braiding] is a $U_q(\mathfrak{g})$-module homomorphism satisfying \begin{equation} \hat{R}_{V,W} (v \otimes w) = q^{(wt(v), wt(w))} w \otimes v + \sum_i w_i \otimes v_i, \end{equation} where $wt(w_i) \prec wt(w)$ and $wt(v_i) \succ wt(v)$.
Here $V$ and $W$ are any finite-dimensional $U_q(\mathfrak{g})$-modules. The point is that this immediately determines the braiding on $v \otimes w$ whenever $v$ is a highest weight vector. These vectors generate $V \otimes W$ as a $U_q(\mathfrak{g})$-module.
Now you can go ahead and act on $v \otimes w$ with the $F_i$'s (lowering the weights of the vectors), and use the fact that $\hat{R}$ has to commute with the action of the $F_i$'s to compute $\hat{R} (v \otimes w)$ for all weight vectors $v$ and $w$.
This is not an explicit formula, but it does determine the braiding in terms of the pairings between the weights of the vector representation, which is not exactly the Cartan data, but close.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8981307744979858, "perplexity_flag": "head"}
|
http://mathhelpforum.com/calculus/145421-finding-limit-finding-maclaurin-series.html
|
# Thread:
1. ## Finding a limit. Finding Maclaurin series.
How can these problems be worked out?
1) lim as x goes to 0 of (1/x - 1/sinx)
2) find the 1st 4 derivatives of y= ln(1+sinx) and the Mclaurin series up to the term containing x^4
I know some calculus, but I need help with this. Thank you.
2. Originally Posted by skydiver1921
How can these problems be worked out?
1) lim as x goes to 0 of (1/x - 1/sinx)
2) find the 1st 4 derivatives of y= ln(1+sinx) and the Mclaurin series up to the term containing x^4
I know some calculus, but I need help with this. Thank you.
$\lim_{x \to 0}\left(\frac{1}{x} - \frac{1}{\sin{x}}\right) = \lim_{x \to 0}\left(\frac{\sin{x} - x}{x\sin{x}}\right)$.
Since this $\to \frac{0}{0}$ you can use L'Hospital's Rule
$\lim_{x \to 0}\left(\frac{\sin{x} - x}{x\sin{x}}\right) = \lim_{x \to 0}\left(\frac{\cos{x} - 1}{x\cos{x} + \sin{x}}\right)$.
This still $\to \frac{0}{0}$, so use L'Hospital's Rule again...
$\lim_{x \to 0}\left(\frac{\cos{x} - 1}{x\cos{x} + \sin{x}}\right) = \lim_{x \to 0}\left(\frac{-\sin{x}}{-x\sin{x} + \cos{x} + \cos{x}}\right)$
$= \lim_{x \to 0}\left(\frac{\sin{x}}{x\sin{x} - 2\cos{x}}\right)$
$= \frac{0}{-2}$
$= 0$.
So $\lim_{x \to 0}\left(\frac{1}{x} - \frac{1}{\sin{x}}\right) = 0$.
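A quick numerical sanity check of this limit (my addition, not part of the thread): near $0$ the bracketed quantity behaves like $-x/6$, so it should shrink towards $0$.
````
# Numerically confirm that 1/x - 1/sin(x) -> 0 as x -> 0.
import math

for x in (0.1, 0.01, 0.001):
    print(x, 1 / x - 1 / math.sin(x))   # roughly -x/6, heading to 0
````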
3. Originally Posted by skydiver1921
How can these problems be worked out?
1) lim as x goes to 0 of (1/x - 1/sinx)
2) find the 1st 4 derivatives of y= ln(1+sinx) and the Mclaurin series up to the term containing x^4
I know some calculus, but I need help with this. Thank you.
$y = \ln{(1 + \sin{x})}$.
Assume that $y$ can be written as a polynomial. Then
$\ln{(1 + \sin{x})} = c_0 + c_1x + c_2x^2 + c_3x^3 + c_4x^4 + \dots$.
By letting $x = 0$, we can see that $c_0 = 0$.
Differentiate both sides:
$\frac{\cos{x}}{1 + \sin{x}} = c_1 + 2c_2x + 3c_3x^2 + 4c_4x^3 + 5c_5x^4 + \dots$.
By letting $x = 0$, we can see that $c_1 = 1$.
Differentiate both sides:
$\frac{-\sin{x}(1 + \sin{x}) - \cos^2{x}}{(1 + \sin{x})^2} = 2c_2 + 3\cdot 2c_3x + 4\cdot 3c_4x^2 + 5\cdot 4c_5x^3 + 6\cdot 5c_6x^4 + \dots$
$\frac{-\sin{x} - \sin^2{x} - \cos^2{x}}{(1 + \sin{x})^2} = 2c_2 + 3\cdot 2c_3x + 4\cdot 3c_4x^2 + 5\cdot 4c_5x^3 + 6\cdot 5c_6x^4 + \dots$
$\frac{-(1 + \sin{x})}{(1 + \sin{x})^2} = 2c_2 + 3\cdot 2c_3x + 4\cdot 3c_4x^2 + 5\cdot 4c_5x^3 + 6\cdot 5c_6x^4 + \dots$
$-(1 + \sin{x})^{-1} = 2c_2 + 3\cdot 2c_3x + 4\cdot 3c_4x^2 + 5\cdot 4c_5x^3 + 6\cdot 5c_6x^4 + \dots$
By letting $x = 0$ we can see that $c_2 = -\frac{1}{2}$.
Differentiate both sides:
$\cos{x}(1 + \sin{x})^{-2} = 3\cdot 2c_3 + 4\cdot 3\cdot 2c_4x + 5\cdot 4\cdot 3c_5x^2 + 6\cdot 5\cdot 4c_6x^3 + 7\cdot 6\cdot 5c_7x^4 + \dots$
By letting $x = 0$, we can see that $c_3 = \frac{1}{3\cdot 2} = \frac{1}{6}$.
Differentiate both sides:
$-2\cos^2{x}(1 + \sin{x})^{-3} - \sin{x}(1 + \sin{x})^{-2} = 4\cdot 3\cdot 2c_4 + 5\cdot 4\cdot 3\cdot 2c_5x + 6\cdot 5\cdot 4\cdot 3c_6x^2 + \dots$
By letting $x = 0$, we can see that $c_4 = -\frac{2}{4\cdot 3\cdot 2} = -\frac{1}{12}$.
So up to the $x^4$ term, we have
$\ln{(1 + \sin{x})} = x - \frac{1}{2}x^2 + \frac{1}{6}x^3 - \frac{1}{12}x^4 + \dots$.
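As a cross-check (my addition, assuming SymPy is available), a computer algebra expansion agrees with the series above:
````
# Expand ln(1 + sin x) about 0 up to (and including) the x^4 term.
import sympy as sp

x = sp.symbols('x')
print(sp.series(sp.ln(1 + sp.sin(x)), x, 0, 5))
# x - x**2/2 + x**3/6 - x**4/12 + O(x**5)
````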
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 31, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9335070252418518, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/287676/stabilizer-of-a-4-by-4-skew-symmetric-matrix-by-orthogonal-matrix?answertab=votes
|
# Stabilizer of a 4 by 4 skew symmetric matrix by orthogonal matrix
Matrices are over the field of complex numbers, and $X^t$ means transpose of a matrix $X$.
Consider the group action of $O(4)=\{P\mid PP^t=I\}$ on $SK(4)=\{M\mid M^t=-M\}$ by $(P,M) \rightarrow PMP^t$. Does anyone know what is the stabilizer of a general $M$?
Thank you!
-
## 1 Answer
This is only a partial answer. The major difficulty of this question is that for every eigenvector $v$ corresponding to a nonzero eigenvalue of $M$, we have $v^Tv=0$. This directly conflicts with the conjugation of the form $PMP^T$, where $PP^T=I$. In other words, unlike the real case or the unitary equivalence case, we cannot directly use the eigenvectors of $M$ as columns of $P$ to simplify the structure of $M$.
Yet, when $M$ is diagonalizable and has nonzero distinct eigenvalues, the problem is easily solvable. As $M$ is skew symmetric, all its nonzero eigenvalues must have their negative counterparts. So, suppose the eigenvalues of $M$ are $\lambda,-\lambda,\mu,-\mu$, where $\lambda,\mu\not=0$ and $\lambda\not=\mu$. Let $v_1, v_2, v_3, v_4$ be the four corresponding unit eigenvectors. Note that when $x,y$ are eigenvectors of $M$ that correspond to some two eigenvalues $p,q$ with $p\not=-q$ and $p,q\not=0$, we have $$px^Ty = (y^Tpx)^T = (y^TMx)^T = -x^TMy = -qx^Ty$$ and hence $x^Ty=0$. Therefore, in our case, for $k\le j$, we have $v_k^Tv_j=0$ whenever $(k,j)\not=(1,2),(3,4)$.
Note that $v_1^Tv_2\not=0$ (and similarly $v_3^Tv_4\not=0$), otherwise we would have $\langle v_j,\bar{v}_1\rangle=0$ for $j=1,\ldots,4$, which is impossible. Therefore, if we define $c^2=1/(2v_1^Tv_2),\,w_1=c(v_1+v_2)$ and $w_2=ic(v_1-v_2)$ (here $i=\sqrt{-1}$) and define $w_3,w_4$ with $v_3,v_4$ analogously, it can be verified that $w_k^Tw_j=\delta_{kj}$, i.e. the matrix $W$ containing the $w_i$s as columns is complex orthogonal ($W^TW=I_4$). Hence $$W^TMW = \begin{pmatrix}i\lambda J_2\\&i\mu J_2\end{pmatrix}; \ J_2 = \begin{pmatrix}0&1\\-1&0\end{pmatrix}.$$ The stabilizers of $J_2$ are "rotation matrices" of the form $R(z)=\begin{pmatrix}\cos z&\sin z\\ -\sin z&\cos z\end{pmatrix}$, where $z\in\mathbb{C}$. Therefore the stabilizers of $M$ are given by $P=W\left(R(z_1)\oplus R(z_2)\right)W^T$.
The other cases, such as $M$ has zero eigenvalues, or some eigenvalues of $M$ have geometric multiplicity $>1$, etc., are more difficult to handle. I will try to cook up something if I can and if I have time.
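A small symbolic check of the final claim (my addition, not part of the answer): the complex "rotations" $R(z)$ are complex orthogonal and fix $J_2$ under the action $P J_2 P^T$.
````
# Verify R(z) R(z)^T = I and R(z) J_2 R(z)^T = J_2 for symbolic complex z.
import sympy as sp

z = sp.symbols('z')
R = sp.Matrix([[sp.cos(z), sp.sin(z)], [-sp.sin(z), sp.cos(z)]])
J2 = sp.Matrix([[0, 1], [-1, 0]])
print(sp.simplify(R * R.T))        # identity matrix
print(sp.simplify(R * J2 * R.T))   # equals J2
````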
-
Thank you! That is very helpful! – Danny Jan 31 at 5:45
The Jordan canonical form of $M$ can only be $\operatorname{diag}(\lambda_1, -\lambda_1, \lambda_2, -\lambda_2)$, or $\operatorname{diag}(M_1, M_2)$ where the $M_i$'s are 2 by 2 Jordan blocks with two $\lambda$'s on the diagonal of $M_1$ and two $-\lambda$'s on the diagonal of $M_2$. I found this by googling "The Jordan Canonical Forms of complex orthogonal and skew-symmetric matrices", one of the top 5 entries with this website: citeseerx.ist.psu.edu – Danny Jan 31 at 6:02
Actually I have some doubts on your result. So you mean there are only finitely many (16 as you mentioned) stabilizers for a general M, right? What I think so far is there may be a two-dimensional family of such stabilizers for a general M. – Danny Jan 31 at 6:10
@Danny And I had read the thesis you mentioned before I answered the question. According to thm 1.2.5, it is also possible that the Jordan form of $M$ is $J_3(0)\oplus J_1(0)$. What it says is that except for odd-order Jordan blocks that correspond to the zero eigenvalue, all other Jordan blocks of $M$ (of odd or even orders) must occur in $(\lambda,-\lambda)$ pairs (and $\lambda$ can be zero). – user1551 Jan 31 at 7:36
So, when $n=4$, there are three possibilities: (i) $\operatorname{diag}(a, -a, b, -b)$, (ii) $J_2(a)\oplus J_2(-a)$ or (iii) $J_3(0)\oplus J_1(0)$, where $a,b$ may be zero and are not necessarily distinct. – user1551 Jan 31 at 7:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 61, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9282072186470032, "perplexity_flag": "head"}
|
http://stats.stackexchange.com/questions/29651/is-it-possible-to-estimate-the-odds-of-winning-a-multi-entry-contest-when-i-don
|
# Is it possible to estimate the odds of winning a multi-entry contest, when I don't know the breakdown of entries?
Suppose I am entered into a contest, with the following rules:
• Every person may get up to 6 entries
• All the entries will be pooled, and 25% of the entries will be selected to be winners, with a maximum of 25.
• Each person can only win once, regardless of the number of their entries. If someone's name gets drawn again, it is discarded and a new name drawn.
• I know how many entries I have (the maximum, 6)
• I know how many total entries there are, broken down by type of entry
• I do not know how many of the entries are repeat entries by the same person.
The count of entries by type is as follows:
• Type 1: 42
• Type 2: 72
• Type 3: 119
• Type 4: 217
• Type 5: 156
• Type 6: 178
Is it possible to estimate my odds of winning in this situation? I'm a bit confused by the fact that I can't predict how the early winners will effect my chances, since I don't know how many entries each winner will remove from the pool.
I'm interested in the solution given the data set, but I'm also interested in the proper procedure/algorithm for calculating it.
-
What do the "types" signify? – Macro Jun 1 '12 at 18:23
@Marco The different types of tickets you can earn. So you can earn lottery ticket types 1 through 6, and 42 people won ticket type 1 – Rachel Jun 1 '12 at 18:24
I have trouble following the sequence of posts because they run from bottom to top. But given the time to the right of the posters name I think i have figured it out. So would someone please tell me if I have this straight. I think whuber's answer can't be right because of the ambiguity between entries and entrants. Rachel's strategy to get worst case and best case scenarios is right but she made a math error by adding percentages when they can be based on different denominators. So if we fix that error we have the right bounds on the solution. – Michael Chernick Jun 1 '12 at 20:54
I thought it was odd for Rachel to refer to the best case scenario as the one that gave the highest winning percentage and the worst case the one that gave the lowest. Winning is good right? The last point that i would like clarified: Cardinal states that in this case case 25% of the entrants exceeds 25, so there will only be 25 winners. He gets this by knowing that at least 178 people are entered and of course once the number of entrants exceeds 100 the winner totals is cut off at 25 based on the rules. – Michael Chernick Jun 1 '12 at 21:06
@cardinal how did you come up with the number 178? I added all the entries by types to get a total of 784. In the worst case for me as a player everyone got 6 entries and 784/6 = 130.7. So I conclude that there must be at least 130 entrances. This still means the cutoff of 25 applies, but how did you arrive at the higher number? – Michael Chernick Jun 1 '12 at 21:09
## 2 Answers
The possible chances lie between 17.7% and 18.7%.
The worst case occurs when everybody but you has exactly one entry in the lottery: this is a configuration consistent with the data (although unlikely!).
Let's count the number of possibilities in which you do not win. This is the number of ways of drawing $25$ tickets out of the $784-6$ remaining tickets, given by the Binomial coefficient $\binom{784-6}{25}$. (It's a huge number). The total number of possibilities--all of them equally likely in a fair drawing--is $\binom{784}{25}$. The ratio simplifies to $(784-25)\cdots(784-30) / [(784)\cdots(784-5)]$, which is about 82.22772%: your chances of not winning. Your chances of winning in this situation therefore equal 1 - 82.22772% = 17.7228%.
The best case occurs when there are as few individuals involved in the lottery as possible and as many as possible have $6$, and then $5$, etc, tickets. Given that the "gem" counts are $(42, 72, 119, 156, 178, 217)$ (in ascending order), this implies
• At most $42 = a_6$ people can have $6$ entries each.
• At most $72-42=30 = a_5$ people can have $5$ entries each.
...
• At most $178-156=22 = a_2$ people can have $2$ entries each.
• $217-178=39 = a_1$ people have $1$ entry each.
Let $p(\mathbf{a}, l, j)$ designate the chance of winning when you hold $j$ (between $1$ and $6$) tickets in a lottery with data $\mathbf{a}=(a_1,a_2,\ldots,a_6)$ and $l=25$ draws. The total number of tickets therefore equals $1 a_1 + 2 a_2 + \cdots + 6 a_6 = n$. Consider the next draw. There are seven possibilities:
1. One of your tickets is drawn; you win. The chance of this equals $j/n$.
2. Somebody else's tickets are drawn. The chance of this equals $(n-j)/n$. If they hold $i$ of them, then all $i$ tickets are removed from the lottery. If $l \ge 1$, drawing continues with the new data: $l$ has been decreased by $1$ and $a_i$ has been decreased by $1$ as well. The chance that some person with $i$ tickets in the lottery is chosen, given that yours are not, equals $ia_i/(n-j)$. This gives six disjoint possibilities for $i=1,2,\ldots,6$.
We add these chances because they partition all outcomes with no overlap.
The calculation continues recursively down this probability tree until all the leaves at $l=0$ are reached. It's a lot of computation (about $25^6$ = 244 million calculations), but it only takes a few minutes (or less, depending on the platform). I obtain 18.6475% chances of winning in this case.
Here's the Mathematica code I used. (It is written to parallel the preceding analysis; it could be made a little more efficient through some algebraic reductions and tests for when $a_i$ is reduced to $0$.) Here, the argument `a` does not count the $j$ tickets you hold: it gives the distribution of counts of tickets everyone else holds.
````
p[a_, l_Integer, j_Integer] /; l >= 1 := p[a, l, j] = Module[{k = Length[a], n},
n = Range[k] . a + j;
j/n + (n - j)/n ParallelSum[
i a[[i]] / (n - j) p[a - UnitVector[k, i], l - 1, j], {i, 1, k}]
];
p[a_, 0, j_Integer] := 0;
(* The data *)
a = Reverse[Differences[Prepend[Sort[{42, 72, 119, 217, 156, 178}], 0]]];
j = 6; l = 25;
(* The solution *)
p[a - UnitVector[Length[a],j], l, j] // N
````
As a reality check, let us compare these answers to two naive approximations (neither of which is quite correct):
1. 25 draws with 6 tickets in play should give you around 6*25 out of 784 chances of winning. This is 19.1%.
2. Each time your chance of not winning is about (784-6)/784. Raise this to the 25th power to find your chance of not winning in the lottery. Subtracting it from 1 gives 17.5%.
It looks like we're in the right ballpark.
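A Monte Carlo cross-check of the worst-case figure (my addition, not part of the answer): you hold 6 of the 784 tickets, every other ticket belongs to a different person, and the drawing follows the stated rules, discarding repeat names until 25 distinct winners are found.
````
# Simulate the worst case: 6 tickets are yours, the other 778 belong to
# 778 distinct people; 25 distinct winners are drawn, repeats discarded.
import random

def one_lottery():
    winners = set()
    for t in random.sample(range(784), 30):   # 30 draws always suffice here
        owner = "me" if t < 6 else t          # tickets 0-5 are yours
        if owner in winners:
            continue                          # repeat name: discard, draw again
        winners.add(owner)
        if len(winners) == 25:
            break
    return "me" in winners

trials = 200_000
print(sum(one_lottery() for _ in range(trials)) / trials)   # about 0.177
````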
-
1
I like this problem because it provides a real example of two kinds of uncertainty: probabilistic uncertainty in the lottery and lack of knowledge about the true distribution of ticket ownership within the lottery. I have effectively treated the latter uncertainty using interval analysis, which simply attempts to bound the possibilities as tightly as possible. Others might go ahead and adopt some prior distribution to describe this epistemic uncertainty, but I can conceive of no valid way to justify any such prior given the information at hand. – whuber♦ Jun 1 '12 at 21:21
But you are assuming that no one can have 2 or more of any particular type ticket ("gem"). As far as I can see this is not specified in the OPs (agent86s) description of the problem. – Michael Chernick Jun 1 '12 at 21:53
@Michael You're right, it's not perfectly clear in the game rules, although it is strongly implied that nobody collects more than one of each type of gem. Vide rule 1 in the original question: "every person may get up to 6 entries." – whuber♦ Jun 1 '12 at 21:55
1
As far as I am aware (and has been demonstrated during the contest), the assumption from the given information is correct - no one person can have more than 6 entries, one of each "type." – agent86 Jun 1 '12 at 21:57
1
Thank you so much for taking the time to answer this! I've been thinking of this problem since yesterday, and woke up this morning determined to figure out this if it killed me, and I'm happy to see a great explanation already posted so now I don't have to :) – Rachel Jun 2 '12 at 16:45
If I did the math right, you have between `19.43%` and `21.15%` chance of winning a prize
The `19.43%` is the best-case scenario, where every entrant has 6 tickets
The `21.15%` is the worst-case scenario, where every entrant has 1 ticket except you
Both scenarios are extremely unlikely, so your actual odds of winning probably fall somewhere in between, however a roughly 1/5 chance at winning seems like a fairly solid number to go by
The details on how those numbers were obtained can be found in this Google spreadsheet, however to summarize how they were obtained:
1. Add up all the entries to get `TotalEntries` (784)
2. Get chance at winning (`6 / 784 = 0.77%`)
3. Subtract 6 for best-case, or 1 for worst-case from `TotalEntries`
4. Get chance of winning (`6/778` for best case `6/783` for worst case)
5. Repeat steps 3-4 until you have 25 percentages
6. Add the 25 percentages together to find out your overall chance at winning something
Here's an alternative way to get the approximate percentage that is simpler, but is not as accurate since you are not removing duplicate entries every time you draw a winner.
````
6 (your tickets) / 784 total tickets = 0.00765
0.00765 chance to win * 25 prizes = 19.14 % chance to win
````
EDIT: I'm fairly sure I'm missing something in my math and that you cannot simply add percentages like this (or multiply percent chance to win by # of prizes), although I think I'm close
whuber's comment gives a 17.7% chance of winning, although I still need to figure out the formula he gave and make sure it's accurate for the contest. Perhaps a weekend project :)
-
I'll just point out that this assumes that you have 6 gems. – murgatroid99 Jun 1 '12 at 18:20
@murgatroid99 Yes, the question stated `I know how many entries I have (the maximum, 6)` :) I can make the spreadsheet editable by anyone who wants to figure out their odds of winning – Rachel Jun 1 '12 at 18:21
2
I think these figures are in the right ballpark generally, but are off by a couple percent. It's difficult to tell since a description of the calculation you've done hasn't been provided in the post itself. – cardinal Jun 1 '12 at 18:51
1
From your description, it appears the discrepancy likely arises from the fact that you haven't incorporated the probability of getting to the $k$th step before getting chosen. For example, in the worst case scenario, the probability of being selected at the third draw is $(778\cdot 777\cdot 6)/(784 \cdot 783 \cdot 782)$. – cardinal Jun 1 '12 at 19:18
2
Rachel, $1-\frac{\binom{n-6}{25}}{\binom{n}{25}}$ gives the chance that a person with $6$ tickets among $n$ will have at least one of them chosen when 25 are drawn. (It is based on counting how many ways that person's tickets could not be drawn, dividing by the total number of possible draws, and subtracting that ratio from $1$.) For $n=784$ the value is 17.7%. I don't know whether this is how the lottery is intended to be run, though. – whuber♦ Jun 1 '12 at 19:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 47, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9508302807807922, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/84435/why-is-the-mollified-process-a-semimartingale
|
# Why is the mollified process a semimartingale?
Let $X_t$ be a continuous adapted stochastic process and let $X_t^n$ be the mollified process defined by $$X_t^n= n \int\limits_{(t-\frac{1}{n})^{+}}^{t} X_s \,ds$$ Prove that $X_t^n$ is a semimartingale.
-
## 1 Answer
At fixed $n$, and for $t>\frac{1}{n}$, it is clear that this process is an adapted (continuous) finite variation process, which entails that it is a (continuous) semi-martingale.
Regards
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9351226687431335, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/124159/find-root-of-non-continuous-function-using-a-numerical-method/124167
|
# Find root of non-continuous function using a numerical method
Given the formula (source) $$M = P \times \frac{J}{1 - (1 + J)^{-N}}$$ Assume $P$ and $J$ remain constant. I want to be able to find $N$ for given $M.$
Could you please give an example of a numerical method (like the secant method) to solve this? My problem is how to find the two initial values for the secant formula.
-
## 1 Answer
If I understood your question correctly, we're given, $M,P,J$ and we're trying to find $N.$
Well, we have $$1 - (1+J)^{-N} = \frac{PJ}{M} \\ (1+J)^{-N} = 1 - \frac{PJ}{M}$$ Take $\log$ both sides: $$-N \log{(1+J)} = \log{(1 - \frac{PJ}{M})} \\ N = - \frac{\log{(1 - \frac{PJ}{M})}}{\log{(1+J)}}$$ Since $M, P, J$ are given, we can very well compute the RHS, and hence compute $N.$
Did I misunderstand your question?
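For completeness, a short sketch of my own (not part of the original answer) showing both routes: the closed form above, and a plain secant iteration on $f(N) = \frac{PJ}{1-(1+J)^{-N}} - M$. The start values $N_0 = P/M$ (the zero-interest payoff time) and $N_1 = 2P/M$ are just a heuristic answer to the initial-guess question, and the whole thing assumes $PJ < M$ and $J > 0$ so that a solution exists.
````
# Closed form vs. secant iteration for N, given M, P, J.
import math

def n_closed(M, P, J):
    return -math.log(1 - P * J / M) / math.log(1 + J)

def n_secant(M, P, J, tol=1e-10, max_iter=100):
    f = lambda N: P * J / (1 - (1 + J) ** (-N)) - M
    n0, n1 = P / M, 2 * P / M          # heuristic start values (see note above)
    for _ in range(max_iter):
        f0, f1 = f(n0), f(n1)
        if f1 == f0:
            break
        n0, n1 = n1, n1 - f1 * (n1 - n0) / (f1 - f0)
        if abs(n1 - n0) < tol:
            break
    return n1

# example: P = 100000, J = 0.005 per period, M = 1000 per period
print(n_closed(100000, 0.005, 1000), n_secant(100000, 0.005, 1000))  # ~138.98 both
````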
-
Of course, assuming $\dfrac{PJ}{M} < 1$ and $J > -1.$ – user2468 Mar 25 '12 at 5:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8716879487037659, "perplexity_flag": "head"}
|
http://cms.math.ca/10.4153/CMB-2011-066-2
|
# Limit Sets of Typical Homeomorphisms
http://dx.doi.org/10.4153/CMB-2011-066-2
Canad. Math. Bull. 55 (2012), 225-232
Published: 2011-04-14
Printed: Jun 2012
• Nilson C. Bernardes,
Departamento de Matemática Aplicada, Instituto de Matemática, Universidade Federal do Rio de Janeiro, Caixa Postal 68530, Rio de Janeiro, RJ, 21945-970, Brasil
## Abstract
Given an integer $n \geq 3$, a metrizable compact topological $n$-manifold $X$ with boundary, and a finite positive Borel measure $\mu$ on $X$, we prove that for the typical homeomorphism $f \colon X \to X$, it is true that for $\mu$-almost every point $x$ in $X$ the limit set $\omega(f,x)$ is a Cantor set of Hausdorff dimension zero, each point of $\omega(f,x)$ has a dense orbit in $\omega(f,x)$, $f$ is non-sensitive at each point of $\omega(f,x)$, and the function $a \to \omega(f,a)$ is continuous at $x$.
Keywords: topological manifolds, homeomorphisms, measures, Baire category, limit sets
MSC Classifications: 37B20 - Notions of recurrence; 54H20 - Topological dynamics [See also 28Dxx, 37Bxx]; 28C15 - Set functions and measures on topological spaces (regularity of measures, etc.); 54C35 - Function spaces [See also 46Exx, 58D15]; 54E52 - Baire category, Baire spaces
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.6071253418922424, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/tagged/relativity+time
|
Tagged Questions
1answer
85 views
How much time has passed for Voyager I since it left the Earth, 34 years ago?
34 years have passed since Voyager I took off and it's just crossing the solar system, being approximately 16.4 light-hours away. How much time has passed for itself, though?
1answer
97 views
How do photons experience time? [duplicate]
I know that as velocity approaches the speed of light the time dilation shoots to infinity as shown below. 1)So I want to know how time is perceived from the point of view of the photon? 2)Since ...
1answer
91 views
Deriving infinitesimal time dilation for arbitrary motion from Lorentz transformations
I'm trying to derive the infinitesimal time dilation relation $dt = \gamma d\tau$, where $\tau$ is the proper time, $t$ the coordinate time, and $\gamma = (1-v(t)^2/c^2)^{-1/2}$ the time dependent ...
3answers
305 views
Why do clocks measure arc-length?
Apologies in advance for the long question. My understanding is that in GR, massive observers move along timelike curves $x^\mu(\lambda)$, and if an observer moves from point $x^\mu(\lambda_a)$ to ...
4answers
553 views
Does the future already exist? If so, which one?
In the NOVA Fabric of the Cosmos program, Brian Greene explains a theory in which there is no "now", or more specifically, now is relative. He describes an alien riding a bicycle on a far off planet ...
3answers
682 views
How to calculate time dilation in approaching speed of light
If a spaceship travels close to the speed of light (say, at 0.9c), how do I calculate the time as the spaceship pilot experiences it? I thought the formula was $$t = \frac{t_0}{\sqrt{1-v^2/c^2}}$$ ...
2answers
586 views
How can time be relative?
I don't understand how time can be relative to different observers, and I think my confusion is around how I understand what time is. I have always been told (and thought) that time is basically a ...
9answers
672 views
Understanding time: Is time simply the rate of change?
Is time simply the rate of change? If this is the case and time was created during the big bang would it be the case that the closer you get to the start of the big bang the "slower" things change ...
1answer
135 views
Is time the property of an object?
I don't know if the title makes much sense, but hopefully it will become clear with the text. Temperature is not a property of a point in the three dimensions, but actually of the object occupying ...
4answers
595 views
Special Relativistic Time Dilation — A computer in a very fast centrifuge
Ok, I've stumbled onto what I think is a bit of a paradox. First off, say you had some computer in a very fast (near light speed) centrifuge. You provide power to this computer via a metal plate on ...
6answers
2k views
Is time travel possible? Is it possible to go back in time?
I read somewhere that according to relativity, black holes and other space related stuff it is possible to jump into past. Is it possible for anything to go back in time either continuously or by ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9483948945999146, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/290939/polar-decomposition-for-non-square-matrices?answertab=active
|
# polar decomposition for non square matrices
I know that for a square complex matrix $A\in\mathbb{C}^{n\times n}$ there exist matrices $O$ and $P$ in $\mathbb{C}^{n\times n}$ with $O$ unitary and $P$ hermitian positive semidefinite such that $A=OP$. The proof starts with applying the spectral theorem to the matrix $AA^*$.
Now I read that similarly for $A\in \mathbb{C}^{n\times m}$, $A=OP$ with $O\in\mathbb{C}^{n\times m}$, $P\in\mathbb{C}^{m\times m}$, $P$ hermitian positive semidefinite and $O$ isometric, if $m<n$. How can this be proved?
-
In English it's called a "square matrix", not "quadratic". – Robert Israel Jan 30 at 23:54
Fixed, thank you. – user35359 Feb 2 at 19:31
## 2 Answers
The usual polar decomposition for complex matrices is $A = O P$ where $O$ is a partial isometry and $P$ is (hermitian) positive semidefinite, not necessarily symmetric. Namely $P$ is the positive semidefinite square root of $A^* A$. Since $\|P x\|^2 = x^* A^* A x = \|A x\|^2$ for any $x$, the map $Ax \mapsto Px$ on $\text{Ran}(A)$ is an isometry (you have to show that this is well-defined and linear, but that's not hard). To complete the definition of the partial isometry $O$, you take any partial isometry from the orthogonal complement of $\text{Ran}(A)$ to the orthogonal complement of $\text{Ran}(P)$.
-
I think it should be $Px\mapsto Ax$ on $ran(P)$. Is that correct? Thanks for the answer. – user35359 Feb 2 at 19:32
Do you know singular value decomposition? http://en.wikipedia.org/wiki/Singular_value_decomposition
Polar decomposition is a trivial consequence of the singular value decomposition, which is defined for any matrix.
-
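To make the SVD route concrete, here is a minimal NumPy sketch (an illustration only, not taken from the answers above): from a thin SVD $A = U\Sigma V^*$ of a tall matrix $A\in\mathbb{C}^{n\times m}$ with $m \le n$, one can take $O = UV^*$ and $P = V\Sigma V^*$, so that $OP = A$, $O^*O = I_m$ (i.e. $O$ is isometric) and $P$ is hermitian positive semidefinite.
````
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3   # a tall matrix, as in the question (m < n)
A = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))

# Thin SVD: U is n x m with U^*U = I_m, s holds the singular values, Vh = V^*.
U, s, Vh = np.linalg.svd(A, full_matrices=False)

O = U @ Vh                          # isometric factor
P = Vh.conj().T @ np.diag(s) @ Vh   # hermitian positive semidefinite factor

print(np.allclose(O @ P, A))                    # A = O P
print(np.allclose(O.conj().T @ O, np.eye(m)))   # O^* O = I_m
print(np.allclose(P, P.conj().T))               # P is hermitian
print(np.min(np.linalg.eigvalsh(P)) >= -1e-12)  # P is positive semidefinite
````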
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8840225338935852, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/76798?sort=votes
|
A question about J.H. Conway’s SURREAL NUMBERS
My question is: What set theory are the mathematicians who are developing the theory of these numbers working in, or are they, in fact, working outside any of the standard set theories? Each surreal number is a mapping of an ordinal number into the pair (+,-) so that the collection S of all these numbers is a proper class. Moreover S is a real closed (ordered) field containing sub-collections which are ordinally similar to the class of ordinal numbers and to the set of real numbers (in their usual order). Since S is densely ordered but not order-complete, there exists an order-complete ordered collection C (constructed from the Dedekind cuts of S), which contains a dense sub-collection that is ordinally similar to S. Now the elements of C are proper classes and if we are going to have theorems about sub-collections of C (such as closed intervals), then the underlying set theory (if any) must be one that allows some proper classes to be elements of collections.
-
These sorts of issues also come up in category theory. I have seen this notion of a hyper class that people speak about when collections of classes are needed. Another thing that is done ( if you are not too squeamish about foundations) is to admit a few inaccessible cardinals. I think 2 or 3 inaccessible cardinals should work for this, but I will leave that to people more knowledgeable about such issues. – Lunasaurus Rex Sep 29 2011 at 20:24
The numbers themselves are sets. While ZFC does not allow proper classes as objects, it certainly allows discussing them. Inner model theory is a lot about it, so is forcing to some extent. Lastly, the NBG set theory is a conservative extension of ZFC which allows classes as objects. – Asaf Karagila Sep 29 2011 at 20:27
Conway discusses this matter in the Appendix to Part Zero of On Numbers and Games. Recommended reading. – Timothy Chow Sep 30 2011 at 0:34
3 Answers
As you said, each surreal number can be regarded as a set, but the collection of all of them is a proper class. The set-theoretic issues involved in "developing the theory of these numbers" are the same as those involved in developing the theory of sets. For most purposes, ZFC suffices, since particular proper classes can be handled as "virtual classes" (essentially, formulas with set parameters). If one really needs to quantify over proper classes (whether of sets or of surreal numbers), then a set-class theory like Morse-Kelley becomes appropriate. If even higher types are needed, then I would be inclined to drop this one-step-at-a-time approach and instead assume that there is an inaccessible cardinal $\kappa$ and that Conway was really working in the universe of sets of rank $<\kappa$.
Note that difficulties with proper classes had to be faced in the foundations of category theory, and several approaches have been developed, including Grothendieck's universes and Feferman's approach based on the reflection principle of ZFC. As far as I can see, these approaches can be adapted to deal with the analogous problems that arise in connection with surreal numbers.
-
Philip Ehrlich at Ohio University has written extensively on Conway's surreal numbers, and somewhere in his work he has the details for formalizing the theory of surreal numbers in NBG set theory. This should qualify as a "standard" set theory despite its use of classes, as it's a conservative extension of ZFC, as mentioned above. His forthcoming paper for the Bulletin of Symbolic Logic gives his paper
Absolutely saturated models. Fund. Math. 133 (1989), no. 1, 39–46.
as a reference for this formalization.
Best,
Todd
-
Link to paper: matwbn.icm.edu.pl/ksiazki/fm/fm133/fm13313.pdf – François G. Dorais♦ Sep 29 2011 at 22:38
Garabed, I believe the class of surreals can be encoded by a class formula in ZFC.
Surreal numbers are particular kinds of Conway games, and each Conway game can be expressed as a well-founded rooted tree of bounded (i.e., non-class) size equipped with a labeling of the edges by symbols $L$ and $R$. (The nodes are positions in the game, with the root at initial position, and an edge from a node to a child is labeled $L$ or $R$ according to whether the child is a left or right option of the node.) So the full structure of a Conway game is fully specified by a set, and the class of Conway games can be given by a ZFC class formula. The relation $\lt$ on games, and the predicate that says a game is a number, are recursive and can be given by formulas in ZFC. Similarly, the equality predicate on numbers is recursive and given by a ZFC formula.
I don't have On Numbers and Games immediately to hand, but my memory is that Conway discusses these issues.
-
Thanks a lot for your answers and suggestions. I think the main problem here is that the largest "large cardinal numbers" are elements of S because S contains a sub-collection ordinally similar to the class of all ordinal numbers. The theory of S has been much developed since Conway published "On Numbers and Games". Perhaps the theory of S can be formalized in Morse-Kelley set theory but only if that set theory allows the existence of continuously ordered collections containing dense sub-collections which are proper classes. – Garabed Gulbenkian Oct 2 2011 at 17:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.949762761592865, "perplexity_flag": "middle"}
|
http://mark.reid.name/blog/prediction-with-expert-advice-as-online-convex-optimisation.html
|
Prediction with Expert Advice as Online Convex Optimisation
I have been working with Bob Williamson and Tim Van Erven recently to better understand the notion of mixability in what is known as the Prediction With Expert Advice (PWEA) setting for online learning. I was curious as to how this setting relates to another one that is commonly studied in learning theory: online convex optimisation (OCO).
It is already known that PWEA is a special case of OCO (see, for example, Peter Bartlett’s summer school course or Kalai and Vempala’s JCSS paper) but I wanted to work out the correspondence explicitly for myself. Since there is one of those obvious-in-hindsight tricks involved I thought it would be worth writing up and sharing it.
Introduction
Prediction With Expert Advice is typically posed as a game where in each round a learner receives advice in the form of predictions from a set of experts about a future outcome and then merges these expert opinions to form its own prediction. The outcome is then revealed and the learner and all of the experts receive a penalty depending on how well their prediction fits with the revealed outcome. This penalty is determined by a fixed loss function that is known to the learner. The aim of the learner in this game is to incur an aggregate penalty over many rounds that is not much worse than the best expert.
You can easily imagine playing such a game yourself: each day you check a dozen different weather forecasts then make up your own mind about the chance of rain tomorrow, e.g., you predict a 75% chance of rain. The next day it will either rain or not and imagine that you and the experts lose points depending on how bad your predictions were: predicting a 75% chance when it is sunny loses you more points than if you predicted a 20% chance of rain. The function that determines exactly how many points you lose for predicting p% chance of rain when the outcome is sunny is called the loss function.
Mixability is a property of a loss function that characterises when learning can occur efficiently in a PWEA game. That is, when it is possible to make the difference between the learner and the best expert—the regret—decrease rapidly (specifically like $1/T$ after $T$ rounds).
In our recent COLT paper we were able to characterise mixability in terms of the curvature of the loss for a natural class of losses known as proper losses. These losses are “sensible” in that if the true probability of an outcome is $p$ then the expected loss is minimised by predicting $p$. This seemingly innocent requirement actually gives rise to a lot of geometric structure that has been well studied in the economics literature, and that we exploit in our paper.
Online Convex Optimisation is a similar type of game to PWEA in that both are competitive online prediction games: a learner repeatedly makes predictions and receives a penalty based on that prediction and its performance is compared to a class of simple alternatives. The main differences between PWEA and OCO are that: the learner does not have access to expert predictions and their penalties; the regret of the learner is relative to a possibly uncountable set of alternatives; and the loss functions involved are assumed to be convex.
Despite these differences, it is possible to present Prediction With Expert Advice as a very special case of Online Convex Optimisation. After formalising the two games, I’ll present the “trick” for turning the former into the latter.
Prediction with Expert Advice
In the general Prediction with Expert Advice (PWEA) game a learner competes against $K$ experts in a game consisting of $T$ rounds. Each round, each expert reveals a prediction from $\Delta^N$, the set of probability distributions over $N$ outcomes. The learner observes and combines these to form its own prediction from $\Delta^N$. The world then reveals one of the $N$ outcomes $y \in [N] = \{1, \ldots, N\}$ and the experts' and learner's predictions are assessed via a loss function $\ell : \Delta^N \to R^N$ so that a prediction $p$ incurs a penalty $\ell_y(p)$ when outcome $y$ occurs.
Expressed in a kind of pseudo-code, the game is:
For $t = 1, …, T$:
1. Experts make predictions $p^{1,t}, … p^{K,t} \in \Delta^N$
2. Learner predicts $p^t$ based on expert predictions
3. World reveals outcome $y^t \in \{ 1, \ldots, N \}$
4. Experts incur penalties $\ell_{y^t}(p^{k,t})$ and the learner incurs $\ell_{y^t}(p^t)$
The aim of the learner in this game is to minimise its total loss $L^T = \sum_t \ell_{y^t}(p^t)$ relative to the smallest expert loss $\min_k L^T(k) = \min_k \sum_t \ell_{y^t}(p^{k,t})$. The difference $L^T - \min_k L^T(k)$ is called the regret.
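To make the protocol and the regret bookkeeping concrete, here is a small Python sketch of the PWEA loop. Everything specific in it is a placeholder of my choosing rather than part of the setting: the expert predictions and outcomes are random, the loss is the squared (Brier) loss, and the learner simply averages the experts.
````
import numpy as np

rng = np.random.default_rng(0)
K, N, T = 3, 2, 1000   # experts, outcomes, rounds

def loss(p, y):
    # Placeholder loss l_y(p): squared (Brier) loss against outcome y.
    return np.sum((p - np.eye(N)[y]) ** 2)

learner_loss, expert_loss = 0.0, np.zeros(K)
for t in range(T):
    experts = rng.dirichlet(np.ones(N), size=K)  # 1. expert predictions in Delta^N
    p = experts.mean(axis=0)                     # 2. learner's merged prediction
    y = rng.integers(N)                          # 3. world reveals an outcome
    learner_loss += loss(p, y)                   # 4. penalties are incurred
    expert_loss += np.array([loss(pk, y) for pk in experts])

regret = learner_loss - expert_loss.min()
print(learner_loss, expert_loss.min(), regret)
````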
Online Convex Optimisation
Online Convex Optimisation (OCO) is similar in that a sequential game is played over $T$ rounds where a learner makes a prediction from some convex set $X \subset R^d$. However, as mentioned above, the OCO game is simpler in that there are no (explicit) experts and more general in that the finite number of outcomes that the world can reveal is replaced by an arbitrary set of convex functions $F$. The function $f\in F$ is chosen by the world and used to assign a penalty $f(x)$ to the learner's prediction $x$.
Expressed in pseudo-code, OCO is the following game:
For $t = 1, …, T$:
1. Learner predicts $x^t \in X \subset R^d$
2. World reveals a convex function $f^t \in F$
3. Learner incurs penalty $f^t(x^t)$
The learner’s aim here is to minimise the regret relative to the best single prediction $x \in X$ in hindsight. That is, the learner wants to minimise the difference between $L^T = \sum_t f^t(x^t)$ and $\min_x L^T(x) = \min_x \sum_t f^t(x)$. Once again, the difference $L^T - \min_x L^T(x)$ is called the regret.
PWEA is a special case of OCO
We can show that PWEA is a special case of OCO by defining an OCO game that mimics the PWEA game.
The main trick is to define the set of functions $F$ for OCO so that step 1 in the PWEA game (where the experts reveal their predictions) can be simulated. Specifically, if for each $t\in[T]$ in the PWEA game expert $k$ makes prediction $p^{k,t}$ and the outcome is $y^t$, we define an OCO game via a sequence of linear (and thus convex) functions $f^t$. These are defined so that $f^t(e^k) = \ell_{y^t}(p^{k,t})$, where the $e^k$ are the vertices of $\Delta^K$, and are linearly extended to all mixtures of the $K$ experts, denoted $x \in X = \Delta^K$, by defining $f^t(x) = \sum_k x_k f^t(e^k)$.
This construction means that the learner in the OCO game can always mimic the performance of a single, fixed expert $k$ in the PWEA game by constantly playing $e^k$. In some sense, this is how step 1 of the PWEA game is recovered in the OCO game.
Now consider what happens when we minimise the total loss for this OCO game. This involves finding a mixture $x \in \Delta^K$ such that $L^T(x) = \sum_t f^t(x)$ is minimised. Since $f^t(x) = \sum_k x_k \ell_{y^t}(p^{k,t})$ we see that $L^T(x) = \sum_k x_k \sum_t \ell_{y^t}(p^{k,t}) = \sum_k x_k L^T(k)$ where $L^T(k)$ is the total loss for expert $k$ in the PWEA game. This weighted sum is clearly minimised by choosing the mixture $x \in \Delta^K$ that puts all its mass on the single expert $k$ corresponding to the smallest $L^T(k)$ term. Furthermore, for that choice of $x = e^k$ we have $L^T(x) = L^T(k)$ and so $\min_x L^T(x) = \min_k L^T(k)$.
The above argument shows that any PWEA game can be presented as an OCO game and that the best single expert in the PWEA game corresponds to the best single prediction in the corresponding OCO game.
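The bookkeeping in this reduction is easy to check numerically. In the sketch below (purely illustrative: random numbers stand in for the losses $\ell_{y^t}(p^{k,t})$), each linear $f^t$ is represented by its vector of expert losses, and one verifies that the cumulative loss of any mixture $x \in \Delta^K$ equals the $x$-weighted combination of the experts' cumulative losses, so no mixture can do better than the best vertex $e^k$.
````
import numpy as np

rng = np.random.default_rng(1)
K, T = 4, 50
loss = rng.uniform(size=(T, K))   # loss[t, k] stands in for l_{y^t}(p^{k,t})

L_expert = loss.sum(axis=0)       # L^T(k) for each expert k
best = L_expert.min()

for _ in range(5):
    x = rng.dirichlet(np.ones(K))             # a mixture in Delta^K
    L_x = sum(loss[t] @ x for t in range(T))  # sum_t f^t(x)
    assert np.isclose(L_x, L_expert @ x)      # linearity: L^T(x) = sum_k x_k L^T(k)
    assert L_x >= best - 1e-12                # no mixture beats the best expert

print(best, L_expert)
````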
Regret Bounds
Since the minimal total losses in the PWEA and OCO games are equal, we can look at the regrets for both games by just considering the total loss for each. If a learner playing the OCO game predicts $x^t$ at round $t$, the loss it incurs is $f^t(x^t) = \sum_k x_k^t \ell_{y^t}(p^{k,t})$. If all of the partial losses $\ell_y$ are convex then we see that predicting $p^t = \sum_k x_k^t p^{k,t}$ in the PWEA game will incur a penalty $\ell_{y^t}(p^t) \le f^t(x^t)$ in that round.
Therefore, any regret bound that holds for OCO will also hold for an OCO-simulated PWEA game with convex losses since the OCO regret dominates the PWEA regret achieved by just playing convex combinations of the expert predictions. For a recent summary of lower and upper bounds for various types of online optimisation games, I point the reader to the COLT 2008 paper by Jake Abernethy and co-authors.
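The inequality $\ell_{y}\big(\sum_k x_k p^{k}\big) \le \sum_k x_k \ell_{y}(p^{k})$ used above is just convexity (Jensen's inequality) applied to each partial loss, and it is easy to sanity-check for a particular convex loss. The sketch below does so for the squared (Brier) loss on randomly drawn expert predictions and mixtures; the choice of loss is mine, purely for illustration.
````
import numpy as np

rng = np.random.default_rng(2)
K, N = 5, 3

def sq_loss(p, y):
    # Squared (Brier) loss: convex in p, so Jensen's inequality applies.
    return np.sum((p - np.eye(N)[y]) ** 2)

for _ in range(1000):
    experts = rng.dirichlet(np.ones(N), size=K)  # expert predictions p^k
    x = rng.dirichlet(np.ones(K))                # mixture weights over the experts
    p_mix = x @ experts                          # merged prediction sum_k x_k p^k
    for y in range(N):
        mixed_losses = x @ np.array([sq_loss(pk, y) for pk in experts])
        assert sq_loss(p_mix, y) <= mixed_losses + 1e-12

print("convexity inequality held on all random draws")
````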
What happens if the PWEA losses $\ell_y$ are not convex? The same reduction argument can be run only if for every mixture $x \in \Delta^K$ and every tuple of expert predictions $p^1, \ldots, p^K \in \Delta^N$ there exists a prediction $p(x) \in \Delta^N$ such that for all outcomes $y$ we have $\ell_y(p(x)) \le \sum_k x_k \ell_y(p^k)$. This is similar to the condition required of the substitution function needed in the Weak Aggregating Algorithm, so I suspect this condition is related to mixability but will leave the details for another time.
Mark Reid September 15, 2011 Canberra, Australia
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 74, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.944887101650238, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/64249?sort=votes
|
## Automorphism groups and etale topological stacks
Recall that an etale topological stack is a stack $\mathscr{X}$ over the category of topological spaces (and open covers) which admits a representable local homeomorphism $X \to \mathscr{X}$ from a topological space. Equivalently, it is a topological stack arising from an etale topological groupoid. It is well known that a differentiable stack is etale if and only if all of its automorphism groups are discrete, but the proof involves foliation theory. It seems this proof cannot be extended to the topological setting. However, clearly every etale topological stack has discrete isotropy groups. This begs the question:
If a topological stack has all of its isotropy groups discrete, is it necessarily etale?
EDIT: By a topological stack, I mean a stack $\mathscr{X}$ over the category of topological spaces (and open covers) which admits a representable epimorphism $X \to \mathscr{X}$ (not necessarily a local homeomorphism). This is equivalent to saying $\mathscr{X}$ is the stack of torsors for a topological groupoid.
Remark: This question is equivalent to asking if a topological groupoid all of whose isotropy groups are discrete must be Morita equivalent to an etale topological groupoid.
-
What do you mean by a topological stack? Is it in the sense of Noohi? – Angelo May 8 2011 at 4:21
Yes, depending on which article of his. I mean what he refers to as a "pretopological stack" in "Foundations of Topological Stacks I". I'll add this to the question. – David Carchedi May 8 2011 at 10:43
Surely you want the representable map $X \to \mathcal{X}$ to have some additional property like being surjective? – Chris Schommer-Pries May 8 2011 at 14:46
@Chris: Yes, I meant to say representable epimorphism. – David Carchedi May 8 2011 at 21:31
This is not particularly important, but it might be good to change "begs the question" to "suggests the question", since the former is often used to mean something else. – S. Carnahan♦ May 10 2011 at 13:14
## 2 Answers
I don't think this is true. Let $X$ be the quotient of the action of $\mathbb Q$ on $\mathbb R$ by translation. This is a sheaf, and its automorphism groups are trivial. Suppose that there exists a local homeomorphism $U \to X$, where $U$ is a non-empty topological space. Let $V \to U$ be the pullback to $U$ of the $\mathbb Q$-torsor $\mathbb R \to X$; then $V\to \mathbb R$ is a local homeomorphism. By restricting $U$ we may assume that $V = U \times \mathbb Q$; but then $V$ can't be locally connected, and this is a contradiction.
-
Angelo: Your stack certainly arises from an etale topological groupoid. So I don't think that it's a counterexample to what Dave is asking. – André Henriques May 8 2011 at 12:24
Dear André, I am probably confused, since you do understand these matters much better than I do; but I still think that if I give $\mathbb Q$ the topology coming from the euclidean topology, the stack does not come from an étale groupoid (this is what the argument should show). – Angelo May 8 2011 at 13:00
To clarify further, I am thinking of the groupoid $\mathbb Q \times \mathbb R \to \mathbb R$, where the maps come from the action by translation. Its associated stack is a non-representable sheaf, and it is what I had in mind. – Angelo May 8 2011 at 13:02
Ahh. So there are two versions of this groupoid, corresponding to two different sheaves. Both are presented by action groupoids of $\mathbb{Q}$ acting on $\mathbb{R}$. In one $\mathbb{Q}$ has the discrete topology, and the result is an etale groupoid. In the other $\mathbb{Q}$ has the usual topology and the result is not an etale groupoid. Both stacks (sheaves) are covered by $\mathbb{R}$. It seems like the later is indeed a counter-example. – Chris Schommer-Pries May 8 2011 at 14:38
To Chris: yes, that's what I was trying to say. – Angelo May 8 2011 at 14:46
Here's a counterexample: the stack associated to the relative pair groupoid of the map $$([0,1]\times\{0\}) \cup (\{1\}\times[0,1]) \cup ([1,2]\times\{1\}) \to [0,2], \qquad (x,y) \mapsto x.$$
Equivalently, this stack can be described as the pushout in the 2-category of stacks of the diagram $[0,1]\leftarrow \{1\} \rightarrow [1,2]$ (where we identify a space with the stack it represents).
-
As a pushout, this stack admits a map to the space [0,2]. How exactly does it differ from this space? – Chris Schommer-Pries May 8 2011 at 14:43
@Chris Schommer-Pries: Let $X$ be the above pushout stack. As you noted, there is a map from $X$ to $[0,2]$, but that map is not an isomorphism. For any topological space $T$, the induced map $\hom(T,X)\to \hom(T,[0,2])$ is injective, and the subset $\hom(T,X)\subset \hom(T,[0,2])$ can be characterized. A map $f:T\to [0,2]$ comes from a map $T\to X$ iff $T$ has an open cover $T=T_1\cup T_2$ such that $f(T_1)\subset [0,1]$ and $f(T_2)\subset [1,2]$. – André Henriques May 8 2011 at 19:23
Maybe I'm being dense. Doesn't any map f satisfy that property? Take $T_1$ to be the inverse image of the complement of [1,2], and likewise $T_2$ to be the inverse image of the complement of [0,1]? – Chris Schommer-Pries May 8 2011 at 20:09
The two sets that you describe do not form a cover of $T$. You're missing the inverse image of {1}. – André Henriques May 8 2011 at 20:20
Ooops! You're right. Okay, this also looks like a counter example to me. – Chris Schommer-Pries May 8 2011 at 23:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 42, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9324885606765747, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/87106/when-finitely-generated-free-algebras-are-finite
|
## When finitely generated free algebras are finite
The variety (in the sense of universal algebra) of Boolean algebras, for example, has the property that finitely generated free algebras have finite cardinality; in that case specifically $|F_n|=2^{2^n}$, in the obvious notation.
Can one usefully characterize varieties whose finitely generated free algebras have finite cardinality?
Can one characterize natural number sequences arising as $|F_n|$ in association with such varieties?
-
I don't understand the title. – Mariano Suárez-Alvarez Jan 31 2012 at 4:13
The key words to use are "locally finite" and "free spectra". When you have more specific questions to ask after searching, we will be here awaiting those questions. Gerhard "Ask Me About System Design" Paseman, 2012.01.30 – Gerhard Paseman Jan 31 2012 at 5:14
For example, a good question would start with "What other (locally finite) varieties have free spectra whose growth rate is like that of the free spectra of (the variety of) Boolean algebras?", and include appropriate motivation and background. Gerhard "Ask Me About System Design" Paseman, 2012.01.30 – Gerhard Paseman Jan 31 2012 at 5:21
I agree with Mariano and Gerhard, but this is an interesting topic (imho) and I hope you will reformulate your question and ask a new one with a better title. Something your question leads me to wonder about (though I'm not sure this would make a "good" question): for which varieties are there alternative characterizations of "locally finite?" See, for example, jstor.org/pss/2040508 – William DeMeo Jan 31 2012 at 7:41
I replaced "free" by "finite" in the title, because I'm sure it was meant this way. I hope that this clears up some confusion. – Martin Brandenburg Jan 31 2012 at 10:11
## 3 Answers
The Burnside problem for groups asks whether the variety defined by $x^n=1$ is locally finite. By work of Adian and Novikov these varieties are not locally finite for $n$ odd and large enough (I think at least 667), and in the even case the results are by Ivanov and Lysenok. For $n=2,3,4,6$ local finiteness is known. For $n=5$ it is unknown. Mark Sapir classified locally finite semigroup varieties modulo the group case.
Varieties generated by a finite algebra are locally finite by a result of Birkhoff.
Added. By Zelmanov's solution to the restricted Burnside problem a variety of groups is locally finite iff it is generated by a set of finite groups with uniformly bounded exponent. The analogue is false for semigroups.
-
I asked George McNulty, and here is his answer. It partially coincides with my answer here, but is much more complete.
========
I think the first result of this kind is an immediate if unstated consequence of a result in Peter Perkins' dissertation
P. Perkins, "Decision Problems of Equational Theories of Semigroups and General Algebras", University of California, Berkeley, 1966.
In a signature with two binary operation symbols and two constant symbols, Perkins proves (his Theorem 36) that the collection of finite sets of equations that are bases of finite algebras is undecidable. He does this by reducing the word problem on a particular finitely presented semigroup to this question. Loosely, if the word w is a consequence of the semigroup presentation, then the associated finite set of equations will be a base of a finite algebra, whereas if w is not a consequence then the free algebra on one generator in the variety based on the set of equations will be infinite. Of course, this also shows that the local finiteness problem is undecidable. This part of Perkins' dissertation was published as
P. Perkins, "Unsolvable problems for equational theories", Notre Dame Journal of Formal Logic, vol. 8 (1967) 175--185.
One of the things I did in my 1972 dissertation was to establish various extensions of Perkins work on this topic. In particular, I showed that the above result holds for any finite signature that has an operation symbol of rank at least two. I had a rather long list of properties of finite sets of equations or of the varieties based on finite sets of equations that I could prove to be undecidable, but I didn't put all the proofs even in my dissertation. By the time I came to write it up for publication, I had figured out a handful of results from which most of the undecidability results I knew would follow. I published these in
G. McNulty, "Undecidable properties of finite sets of equations". Journal of Symbolic Logic, vol. 41 (1976) 589-604.
You can find in that paper a long list of such properties, but local finiteness is not on the list, while being the base of a finite algebra is. The local finiteness business follows in the same way as it did from Perkins result.
There are only a handful of other papers that address undecidable properties of finite sets of equations (mostly, I think, because undecidability seems to prevail---although some result like the Adjan-Rabin Theorem is unknown). Here they are:
V.L. Murskii, "Nondiscernible properties of finite system of identity relations", Doklady Akademii Nauk SSSR vol. 196 (1971) 520--522.
This paper is independent of my work or Perkins' work. There is a large overlap between Murskii's findings and what is in my dissertation, although this 3-page account of Murskii's work is, of course, very terse. I don't think Murskii's work covers either being the base of a finite algebra or being the base of a locally finite variety, but it is very interesting. Murskii was the first to frame a general condition on collections of finite sets of equations that would ensure undecidability. It was Murskii's paper that spurred me to frame other general conditions that you can find in the paper of mine above. (I also include there a second proof of Murskii's general condition.)
Douglas Smith in his 1972 Penn State dissertation found another undecidability result. It is in
D. Smith, "The non-recursiveness of the set of finite sets of equations whose theories are one-based." Notre Dame Journal of Formal Logic, vol. 13 (1972) 135--138.
Ralph McKenzie wrote
R. McKenzie, "On spectra and the negative solution of the decision problem for identities having a nontrivial finite model." Journal of Symbolic Logic, vol. 40 (1975) 186--196.
Don Pigozzi wrote
D. Pigozzi, "Base-undecidable properties of universal varieties." Algebra Universalis, vol. 6 (1976), no. 2, 193–223.
Among other things, Pigozzi shows that it is undecidable whether the variety based on a finite set of equations has the amalgamation property or the Schreier property (subalgebras of free algebras are free).
C. Kalfa, Decision problems concerning properties of finite sets of equations. Journal of Symbolic Logic, vol. 51 (1986) 79--87.
Here Cornelia Kalfa shows that the joint embedding property is undecidable, as is whether the elementary theory of the infinite models is model complete.
The latest paper I know about is
C. O'Dunlaing, "Undecidable questions related to Church-Rosser Thue systems." Theoretical Computer Science, vol. 23 (1983) 339--345.
Here it is shown that it is undecidable whether a finite set of equations is logically equivalent to a finite confluent set of equations.
Recently Ralph Freese and some collaborators have found fairly quick algorithms for a lot of the kind of properties above when the equations examined have certain restricted forms.
=================
George also wrote:
=================
Also I noticed the interest expressed about free spectra in the original posting. There are a lot of papers on this (and related topics). Perhaps Joel Berman is the person who knows all about it.
-
Thank you. Somewhat off topic, I think it was Berman who wrote an article which was a catalog of the (isomorphism types of) three-element binary groupoids, which Burris and Berman later analyzed and grouped by a variety of properties, e.g. Abelian. Is there an online version of the catalog that is not behind a paywall? I was hoping to refer another poster to such a copy regarding a commutative idempotent groupoid they posted. Gerhard "Wants It For Himself, Too" Paseman, 2012.02.16 – Gerhard Paseman Feb 16 2012 at 21:18
In general a variety can be given in two different ways. First - by a finite (or recursive) set of identities and second - by a generating algebra. In the first case, the local finiteness of a variety is undecidable in general. But in some cases (for example, for semigroups with "nice" subgroups) the algorithm exists. In the second case, the generating algebra should be "uniformly locally finite" (say, finite, as in the case of Boolean algebras). See the survey "Algorithmic problems in varieties" here http://www.math.vanderbilt.edu/~msapir/ftp/pub/survey/survey.pdf .
Edit. The undecidability result is not exactly in my survey but can be deduced from it. Here is a correct reference: Perkins, Peter, Unsolvable problems for equational theories. Notre Dame J. Formal Logic 8 (1967) 175–185. Perkins proves that there is no algorithm that, given a finite system of identities, says whether it is a basis of identities of a finite algebra (Theorem 13). In fact he proves more. He constructs an algebra $E$ with undecidable word problem, and for every two terms $u,v$ of $E$ he constructs a finite set of identities $I(u,v)$ such that if $u=v$ in $E$ then the set $I(u,v)$ is the set of identities of a finite algebra, and if $u\ne v$, $I(u,v)$ holds on an infinite 1-generated algebra. Since we cannot decide whether $u=v$, we cannot decide, given a finite set of identities, whether it is a basis of a locally finite variety.
If you are interested in just one free object, the situation is even easier. It is known (Markov) that the finiteness of a 2-generated semigroup is undecidable. Now consider the signature consisting of the semigroup operation plus two 0-ary operations giving the generators. Then any finitely presented semigroup becomes a relatively free object in a variety given by a finite number of identities (involving the 0-ary operations). Thus it is undecidable, given a finite number of identities in that signature whether the 2-generated free algebra in the variety given by these identities is finite (that is almost exact quote from the survey).
-
I am interested in the first case, and will peruse your survey to find more about undecidability of local finiteness given a recursive equational theory. Would you please mention an author or a reference where this result appeared? (Also, to make sure I do not get confused, the first case is NOT, or not closely related to, Tarski's problem which McKenzie solved by 1994. Right?) Gerhard "Still Unsure About Undecidability Methods" Paseman, 2012.02.09 – Gerhard Paseman Feb 9 2012 at 15:46
Thank you. I will follow up. Slightly off topic, I am looking at some decidability issues related to hyperidentities. I have not seen anything in my searches. If you even half remember something about this, I would be grateful for a name besides Denecke or Wismath who worked on it as well. Thanks again. Gerhard "Appreciates The Kindness Of Others" Paseman, 2012.02.09 – Gerhard Paseman Feb 9 2012 at 18:19
@Gerhard: I modified the answer, including a precise reference. I know very little about hyperidentities. Only the papers by Movsisyan. For example: Movsisyan, Yu. Hyperidentities and hypervarieties. Sci. Math. Jpn. 54 (2001), no. 3, 595–640. – Mark Sapir Feb 9 2012 at 20:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 9, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9320672750473022, "perplexity_flag": "middle"}
|
http://mathematica.stackexchange.com/questions/2317/changing-variables-algebraically?answertab=oldest
|
# Changing variables algebraically
Suppose one has two functions, $y(x)$ and $z(x)$, and one seeks to obtain $y(z)$ by substituting $x(z)$ into $y(x)$. Can this be done in a single step? Or must $z(x)$ first be inverted independently? For the sake of illustration, suppose the functions consist of transcendental functions combined by the elementary operations of addition, subtraction, multiplication, division, and exponentiation. For instance,
$y(x) = a\ln x + \displaystyle\frac{\exp(bx)}{c+x}$
$z(x) = \ln x \cdot \left(1 - \cos x \exp(a^2x) \right)^{r}$
-
What are your $y(x)$ and $z(x)$? – rm -rf♦ Feb 25 '12 at 8:18
– Simon Feb 25 '12 at 8:35
No, I just wanted to know how to take two functions and re-express one in terms of the inverse of the other in Mathematica. I can invert my functions by hand (I just made the two up that are in the question, so I did not try to invert those), but wanted to verify my results by using Mathematica. – user001 Feb 25 '12 at 8:38
I am aware of how this is done with simple functions (e.g., `Solve[Eliminate[{y == x + a, z == x - b}, x], y]`), but kept getting error when I did things with transcendental functions. – user001 Feb 25 '12 at 8:47
## 3 Answers
Well, I guess you could try using InverseFunction. However, this will only give explicit algebraic expressions when the function that is to be inverted is fairly simple.
````y[x_, a_, b_, c_] := a Log[x] + Exp[b x]/(c + x)
z[x_, a_, r_] := Log[x] (1 - Cos[x] Exp[a^2 x])^r
````
And you can evaluate and plot $y(x(z))$ where $z \mapsto x(z)$ is the inverse function of $x \mapsto z(x)$
````y[InverseFunction[z[#, a, r]&][x], a, b, c]
(* a Log[InverseFunction[z[#1, a, r] &][x]]
+ E^(b InverseFunction[z[#1, a, r]&][x])/
(c + InverseFunction[z[#1, a, r] &][x]) *)
Plot[Evaluate[% /. {a -> 1, b -> 1, c -> 1, r -> 1}], {x, 1, 2}]
````
Note that `InverseFunction` does not always behave like you think it might.
-
To visualize `y[z]` without having to go through inversion, you can use `ParametricPlot`:
```` Manipulate[
ParametricPlot[{y[x, a, b, c], z[x, a, r]}, {x, 0, 1},
AxesLabel -> {z, y}, AspectRatio -> 1,
PlotRange -> {{-10, 10}, Automatic}],
{{a, 1}, 0, 5, .1}, {{b, 1}, 0, 5, .1}, {{c, 1}, 0, 5, .1},
Delimiter, {{r, 1}, .5, 2, .1},
ControlPlacement -> Left]
````
-
I am aware of two functions that can eliminate variables algebraically: `Eliminate` and `Solve`. Both are described in the Wolfram documentation.
These functions work with polynomial equations (or equations that can be reduced to polynomials in some way).
`Reduce` is more generic, but it doesn't give any means of eliminating $x$ without solving for it first. The syntax to use would be
````Reduce[y == f[x] && z == g[x], {y, x}]
````
It can be guided by giving additional assumptions about the variables in the original set of equations/inequalities (for example $x \in \mathbb{R}$ or $x > 0$) or by specifying a domain.
That said, a more convenient way to verify the $y = f(z)$ solution you get is to just substitute the original expressions for $y$ and $z$ in terms of $x$ into $y = f(z)$ and verify that the relation holds.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8918293714523315, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/37070?sort=votes
|
## Why can’t proofs have infinitely many steps?
I recently saw the proof of the finite axiom of choice from the ZF axioms. The basic idea of the proof is as follows (I'll cover the case where we're choosing from three sets, but the general idea is obvious): Suppose we have $A,B,C$ non-empty, and we would like to show that the Cartesian product $A \times B \times C$ is non-empty. Then $\exists a \in A$, $\exists b \in B$, $\exists c \in C$, all because each set is non-empty. Then $(a, b, c)$ is a desired element of $A \times B \times C$, and we are done.
In the case where we have infinitely (in this case, countably) many sets, say $A_1 \times A_2 \times A_3 \times \cdots$, we can try the same proof. But in order to use only the ZF axioms, the proof requires the infinitely many steps $\exists a_1 \in A_1$, $\exists a_2 \in A_2$, $\exists a_3 \in A_3$, $\cdots$
My question is, why can't we do this? Or a better phrasing, since I know that mathematicians normally work in logical systems in which only finite proofs are allowed, is: Is there some sort of way of doing logic in which infinitely-long proofs like these are allowed?
One valid objection to such a system would be that it would allow us to prove Fermat's Last Theorem as follows: Consider each tuple $(a,b,c,n)$ as a step in the proof, and then we use countably many steps to show that the theorem is true.
I might argue that this really is a valid proof - it just isn't possible in our universe where we can only do finitely-many calculations. So we could suggest a system of logic in which a proof like this is valid.
On the other hand, I think the "proof" of Fermat's Last Theorem which uses infinitely many steps is very different from the "proof" of AC from ZF which uses infinitely many steps. In the proof of AC, we know how each step works, and we know that it will succeed, even without considering that step individually. In other words, we know what we mean by the concatenation of steps $(\exists a_i \in A_i)_{i \in \mathbb{N}}$. On the other hand, we can't, before doing all the infinitely many steps of the proof of FLT, know that each step is going to work out. What I'm suggesting in this paragraph is a system of logic in which the proof of AC above is an acceptable proof, whereas the proof of FLT outlined above is not acceptable.
So I'm wondering whether such a system of logic has been considered or whether any experts here have an idea for how it might work, if it has not been considered. And, of course, there might be a better term to use than "system of logic," and others can give suggestions for that.
-
I think something has gone wrong with copy/paste here. – Eric Tressler Aug 29 2010 at 17:56
I object to your "proof" of FLT: you have not convinced me that every step in your proof is valid (or rather, I know it to be valid because I trust the peer-review system enough to believe that FLT is true, but if this is what your proof rests on, then it does not illustrate your discussion). – Theo Johnson-Freyd Aug 29 2010 at 20:15
In principle why not; but the problems will start when you try to submit it. I tell you, nobody will publish infinitely many pages. – Pietro Majer Aug 29 2010 at 20:26
Another thing that would be "fixed" by proofs with infinitely many steps: The existence of polynomials in several variables such that it is undecidable whether they have an integer root (cf mathoverflow.net/questions/32892/…), which seems absurd. It "should" be possible to just check all points in $\mathbf{Z}^n$, and either find a root or don't, but logic says no. – Mike Hall Aug 29 2010 at 23:04
To Pietro: no problem, you split the proof in infinitely many papers. – Angelo Jun 23 at 11:40
## 6 Answers
Even if logic were extended to allow infinitely long proofs, your attempted proof of the countable axiom of choice would still have a gap or two. After the infinitely many steps asserting that there exists an $a_i$ in $A_i$ (one step for each $i$), you still need to justify the claim that there's a function assigning, to each $i$, the corresponding $a_i$. The immediate problem is that your infinitely many steps haven't exactly specified which (of the many possible) $a_i$'s are the corresponding ones; the $a_i$'s in your formulas are just bound variables. Worse, even if the meaning of "the corresponding $a_i$" were perfectly clear, so that there's no doubt about which ordered pairs $(i,a_i)$ you want to have in your choice function, you'd still need to prove that there is a set consisting of just those ordered pairs. No ZF axiom does that job. I think you'd need an infinitely long axiom saying "for all $x_1,x_2,\dots$, there exists a set whose members are exactly $x_1,x_2,\dots$."
If you're willing to accept not only proofs consisting of infinitely many statements but also single statements of infinite length, and if you're willing to add some such infinite statements as new axioms, then I think you can "prove" the countable axiom of choice (and fancier choice principles if you allow even longer new axioms). But, as long as you need to add some axioms to ZF for this purpose, it seems simpler to just add the countable axiom of choice. It's a finite statement, so you can reason with it using the usual rules of logic.
One could view the axiom of choice as a sort of finitary (and therefore usable) surrogate for the infinitely long axioms and proofs that would come up in your approach. In fact, some of Zermelo's later work (he introduced the axiom of choice in 1904, and the work I'm thinking of dates from the late 20's or early 30's) takes an "infinitary logic" approach to the foundations of set theory (and is, in my opinion, not entirely clear).
-
+1 for an infinitely long proof having "a gap or two" :-) – ex falso quodlibet Jan 31 2011 at 7:03
http://en.wikipedia.org/wiki/Infinitary_logic
-
I think this was better have been written as a comment, as answers should be slightly more verbal than a mere link to wikipedia. – Asaf Karagila Aug 29 2010 at 18:13
+1 : This is a perfectly good answer. – Andy Putman Aug 29 2010 at 19:15
When there is a good short answer that could be written, then just giving a link to Wikipedia is indeed lazy, and not as good as an answer that gives the link and then extracts the relevant parts so that someone can get the gist of the answer quickly on the spot. But in an open-ended question like this, there’s not really any full answer shorter than the Wikipedia article — and this answer gets that point across very, very well :-) – Peter LeFanu Lumsdaine Jan 7 2011 at 17:50
Andreas Blass has nicely explained why it is not helpful to use infinitary logic in an attempt to prove the axiom of choice.
It may be worth adding that the seemingly similar idea, of considering countably infinite proofs in number theory, is helpful in a way (though not to prove FLT!). The so-called $\omega$-rule -- from proofs of $\varphi(0),\varphi(1), \varphi(2),\ldots$ to infer $\forall n\varphi(n)$ -- was used by Schütte around 1950 to simplify Gentzen's 1936 consistency proof for Peano arithmetic, PA.
Assuming $\varepsilon_0$-induction, it turns out to be easier to prove the consistency of a system PA$_\omega$ with the $\omega$-rule than to prove the consistency of the PA system it contains.
-
And I think it's worth adding that this idea is still finding application in proof theory and computer science, where an "infinite proof" can be seen as just a way of speaking about a lazily computed object. See for example [linta.de/~aehlig/university/pub/… On the Computational Complexity of Cut-Reduction] by Aehlig and Beckmann, from LICS 2008. – Noam Zeilberger Aug 30 2010 at 12:24
Your example concerning Fermat's theorem shows why we cannot do infinite proofs: your proof strategy is to check infinitely many instances, which a human being cannot do in a finite amount of time. Hence we only accept proofs dealing with infinitely many instances if we have some finite (finitistic) way of reasoning about all these instances, for instance by induction.
The fact that AC does not follow from the Zermelo-Fraenkel axioms, even AC for countable families of sets, shows that in first order logic we cannot do infinitary proofs as you suggest. But first order logic is pretty unique with respect to its nice properties.
Richard Borcherds mentions infinitary logic, which definitely has its uses, but I don't see how you can use infinitary logic to prove instances of AC, if your base theory is, say, ZF.
(Every first order sentence is also a sentence in infinitary logic.)
Infinitary logic could talk, for example, about infinitely many classes at the same time, but if you start from ZF, the infinitary logic (say $L_{\omega_1,\omega_1}$, to be specific) generated from it will have the same models as usual first order logic. If you add infinitary axioms that allow one to prove more instances of AC, this means that you have just added instances of AC to your axioms (possibly instances that you could not formulate by finitary first order axioms).
-
I should mention that I'm not trying in any way to use this to prove number-theoretic statements like FLT, at least for a standard definition of "use." The point is a theoretical idea about the foundations of mathematics. – David Corwin Jun 21 at 21:27
James Brotherston and Alex Simpson worked on non-well-founded proofs, see
J. Brotherston and A. Simpson: Sequent calculi for induction and infinite descent. J Logic Computation (2010)
as well as the talk "On Proof by Infinite Descent" by Alex Simpson at "Algebra & Coalgebra meet Proof Theory" in Bern, April 2012.
-
Somewhat semi-related, but I think also fitting here since it is connected to infinitary logic: oracle Turing machines. In the end, an oracle can solve the finite Entscheidungsproblem. For your countable axiom of choice, or for Fermat's theorem, you could just write an algorithm that searches for a counterexample and pass it to your oracle, which will tell you whether it terminates. It's not that this solves the problem by any means; it's just a theoretical concept for doing research on that kind of problem.
-
http://mathoverflow.net/questions/tagged/lie-algebroids
## Tagged Questions
1answer
362 views
### What’s an example of a commutative algebra over $\mathbb Q$ that fails to satisfy this version of the “PBW theorem”
In a recent question, I recalled the notion of differential operator, polyderivation, and principal symbol for a commutative algebra $A$ over some fixed commutative ring $k$. (I w …
1answer
378 views
### For which algebras does \{Differential Operators\} satisfy a PBW-like theorem?
Let $k$ be a commutative ring, $A$ a commutative $k$-algebra, and for some other part of why I'm asking this question I only care about the case when $k \supseteq \mathbb Q$. Reca …
0answers
128 views
### Continuous and smooth Lie groupoid cohomology
In the paper by Weinstein and Xu: Extensions of symplectic groupoids and quantization, J. Reine Angew. Math. 417 (1991), there are two versions of Lie groupoid cohomology. The same …
3answers
605 views
### Is there any relation between deformation and extension of Lie algebras?
In a paper of A. Weinstein on the geometry of Poisson manifolds, he relates the formal linearization around a zero, p, of the Poisson bivector to extensions of the Lie algebra indu …
3answers
583 views
### Examples of Lie Algebroids
The concept of a Lie Algebroid is given an important geometric meaning in the framework of Generalized Complex Geometry. For reference, the (barebones) definition of a Lie Algebroi …
3answers
519 views
### Geometry and Integrability in Other Bundles
Background: Suppose $E=TM$ is the tangent bundle to some differentiable manifold $M^n$. If we specify some subbundle $D\subset TM$ (distribution of $k$-planes) then there are two n …
2answers
196 views
### Is the cohomology of the corresponding Lie algebroid an invariant under equivalence of source-simply-connected Lie groupoids?
Recall the related notions of Lie groupoid, Lie algebroid, generalized morphism of Lie groupoids, and cohomology of Lie algebroid. Henceforth, I will drop the word "Lie" for all t …
1answer
191 views
### When does a VBLA induce an isomorphism on Lie algebroid cohomology?
This question is geared towards the experts, so I will only briefly gloss the definitions. Everything I say is in the category of finite-dimensional smooth manifolds, and whenever …
3answers
275 views
### What is an obviously coordinate-independent description of the Chevalley-Eilenberg complex for a Lie algebroid?
I've read in many places, including the n-Lab page, that a Lie algebroid (which I think of as in the first definition on the n-Lab page) is the same as a vector bundle $A \to X$ an …
1answer
291 views
### Do Lie algebroids pull back (along submersions)?
There are more general definitions, but for my purposes a Lie algebroid on a smooth manifold $X$ is a vector bundle $A \to X$, a map $\rho: A \to {\rm T}X$ of vector bundles over \$ …
http://mathoverflow.net/questions/81079/primes-of-the-form-x2ny2mz2-and-congruences/81124
## Primes of the form $x^2+ny^2+mz^2$ and congruences.
This is a sequel to this question where I asked for which positive integers $n$ the set of primes of the form $x^2+ny^2$ is defined by congruences (a set of primes $P$ is defined by congruences if there is a positive integer $d$ and a subset $A$ of $\mathbb{Z}/d\mathbb{Z}$ such that a prime $p$ is in $P$ if and only if $p$ mod $d$ is in $A$, up to a finite number of exceptions). I was taught there that the answer was "exactly when $n$ is idoneal", that there are finitely many idoneal numbers, and that all are known except perhaps one.
When is the set of primes of the form $x^2+ny^2+mz^2$ ($x,y,z \in \mathbb{Z})$ defined by congruences?
My motivation is not just an idle ternary generalization of the binary case. I really met this question while working on a problem concerning modular forms, and also the slightly more general question, given a fixed positive integer $a$: when is the set of primes $p$ such that $ap$ has the form $x^2+ny^2+mz^2$ ($x,y,z \in \mathbb{Z})$ defined by congruences?
I am well aware that since the set of integers represented by a ternary quadratic form is not stable by multiplication, it is much less natural to ask the question for prime numbers instead of all positive integers than in the case of a binary quadratic form. Yet this is really the question for primes that appears in my study (for about a dozen specific ternary forms, actually).
I have found a very interesting paper by Dickson (Ternary quadratic forms and congruences. Ann. of Math. (2) 28 (1926/27), no. 1-4, 333–341.) which solves the question for the integers represented by $x^2+ny^2+mz^2$: there are only finitely many pairs $(n,m)$, given explicitly, such that this set of integers is defined by congruences (in the obvious sense). But the proof does not seem (to me) to be easily generalizable to primes. Other MathSciNet searches did not give me any more information.
When I try to think about the question, I meet an even more basic (if perhaps slightly more sophisticated) question that I can't answer:
When is the set of primes of the form $x^2+ny^2+mz^2$ ($x,y,z \in \mathbb{Z})$ Frobenian? (Is it "always"?)
A set of primes $P$ is called Frobenian (a terminology probably introduced by Serre) if there is a finite Galois extension $K/\mathbb{Q}$, and a subset $A$ of Gal$(K/\mathbb{Q})$ stable by conjugacy, such that a prime $p$ is in $P$ if and only if Frob${}_p \in A$, except for a finite number of exceptions. A set determined by congruences is a Frobenian set for which we can take $K$ cyclotomic over $\mathbb{Q}$, which is the same by Kronecker-Weber as abelian over $\mathbb{Q}$. For a binary quadratic form (for example $x^2+ny^2$), the set of represented primes is always Frobenian ($K$ can be taken as the ring class field of $\mathbb{Z}[\sqrt{-n}]$, and $A=\{1\}$, as explained in Cox's book). But I fail to see the reason (which may nevertheless be trivial) for which the same result would be true for a general ternary quadratic form. I should add that for my specific ternary forms, I can show that the set is Frobenian, but I am not sure how to extend the argument to all ternary quadratic forms.
Finally, let me say that I would be interested in any book, survey or reference on this kind of question (which surely must have been studied), and that I am also interested in analogous questions for quaternary quadratic forms (which might be easier, because of multiplicative properties related to quaternions).
-
1
Bhargava's article on the 15 theorem contains information on the numbers represented by ternary forms that should be useful for making your questions more precise. – Franz Lemmermeyer Nov 16 2011 at 18:20
Dear Jo$\ddot{e}$l, you will not have been notified of edits I made. Any positive ternary form misses only a finite set of prime numbers in comparison with its genus, the same statement holding true for squarefree numbers. Furthermore, Duke and Schulze-Pillot (1990) is indeed the correct reference. Back with binary forms, as in the Cox book page 186, a good example is $x^2 + 27 y^2,$ which represents only those primes $p \equiv 1 \pmod 3$ for which $2$ is a cubic residue. See also Cox, Theorem 9.12, page 188. The other primes $q \equiv 1 \pmod 3$ are $q = 4 x^2 \pm 2 x y + 7 y^2.$ – Will Jagy Nov 18 2011 at 20:57
## 3 Answers
Mostly, you should look at a number of items at
http://zakuski.math.utsa.edu/~kap/forms.html
including Dickson_Diagonal_1939.pdf and Kap_Jagy_Schiemann_1997.pdf to begin with.
Now that I think of it, you also need to read Kap_All_Odd_1995.pdf at the same place, also a new preprint by Jeremy Rouse on his 451 Theorem(s), as it is readily decided whether a form represents the single number 2. In a different direction, you need Wai Kiu Chan and Byeong-Kweon Oh, "Positive Ternary Quadratic Forms with Finitely Many Exceptions," Proceedings of the A.M.S., Volume 132, Number 6, Pages 1567-1573 (2004).
The overriding fact is that ternary forms, like binary forms, are collected together in genera. Unlike binary forms, these vary in size (class number) for a fixed discriminant. The good thing is the result of Jones, every number given by congruence conditions (a finite number of "progressions") is, in fact, represented by at least one form in the genus.
So, an example, $x^2 + 4 y^2 + 9 z^2$ is not regular, so it is not in Dickson's list. The genus of this form represents all numbers not of shape $9 n \pm 3, \; 8 n + 3, \; 4^k (8n+7).$ The other class in the genus is $x^2 + y^2 + 36 z^2.$ Between the two forms, all eligible numbers are represented. It is not difficult to prove that $x^2 + 4 y^2 + 9 z^2$ misses only the single number 2 out of the eligible numbers. So, if you were so minded, you could say that $x^2 + 4 y^2 + 9 z^2$ represents all primes that pass the above restrictions as well as not being $0 \pmod 2.$ As the restriction $9n \pm 3$ does not affect larger primes, just larger composite numbers, one could also say that $x^2 + 4 y^2 + 9 z^2$ represents all primes $p \equiv 1 \pmod 4.$
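The claim that $x^2 + 4y^2 + 9z^2$ misses only the single eligible number 2 is easy to check by brute force; here is a rough sketch of my own (not part of the answer; the cutoff of 2000 and the helper names are arbitrary):

```python
# Brute-force check (illustration only): which numbers that satisfy the
# congruence conditions for the genus of x^2 + 4y^2 + 9z^2 are missed by
# that single form?  Per the answer above, the expected output is just [2].
def represented(limit):
    """All n <= limit of the form x^2 + 4y^2 + 9z^2 with x, y, z integers."""
    reps, bound = set(), int(limit ** 0.5) + 1
    for x in range(bound):
        for y in range(bound):
            if x * x + 4 * y * y > limit:
                break
            for z in range(bound):
                n = x * x + 4 * y * y + 9 * z * z
                if n > limit:
                    break
                reps.add(n)
    return reps

def eligible(n):
    """Genus conditions: n is not 9k +/- 3, not 8k + 3, not 4^a(8k + 7)."""
    if n % 9 in (3, 6) or n % 8 == 3:
        return False
    while n % 4 == 0:
        n //= 4
    return n % 8 != 7

LIMIT = 2000
reps = represented(LIMIT)
print([n for n in range(1, LIMIT + 1) if eligible(n) and n not in reps])
```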
So there is a built-in problem with your formulation. If one of your positive ternaries $x^2 + m y^2 + n z^2$ has finitely many exceptions, in particular finitely many prime exceptions $p_1,p_2,\ldots,p_k,$ the form can be said to represent every prime $p$ fitting the original restrictions and the new restrictions $p \not\equiv 0 \pmod {p_i}$ for $i = 1, \ldots, k.$ As a result, your list of ternary forms is infinite and unprovable. As Franz says, you need a tighter formulation.
If you like, email me your list of forms, we can discuss it.
A better example of the possible horror: Ono and Soundararajan (1997) showed that Ramanujan's form $x^2 + y^2 + 10 z^2$ has only squarefree numbers as sporadics (numbers represented by some form in the genus but not by this form, "exceptions" for Chan and Oh). They also showed that GRH implies that the known list is complete. So, GRH implies that $x^2 + y^2 + 10 z^2$ represents all primes not divisible by any of 3, 7, 31, 43, 67, 79, 223, 307, 2719. The other sporadics are composite. At the same time, it is easy to show that the form represents all numbers $n \equiv 5 \pmod 6,$ first pointed out in a letter from J.S.Hsia to Kaplansky, later a cheap proof by me, and one by Oh.
EDIT: it occurs to me that an alternate property could be: a positive ternary form will be defined to be fungible if its sporadics are all composite. Or perhaps funicular. I looked it up, the best would be frangible.
EDIT TOOOOO: I thought I might find examples of forms $x^2 + m y^2 + n z^2$ that seem to be fungible, or perhaps funicular, or frangible, despite lacking proof. The first example is $$x^2 + y^2 + 48 z^2 \neq 21 \cdot 9^k$$ compared with the other form in that genus, $2 x^2 + 2 y^2 + 13 z^2 + 2 y z + 2 z x,$ checked on numbers up to 1,250,000. Very similar, $$x^2 + 4y^2 + 20 z^2 \neq 77$$ compared with the other form in that genus, $4 x^2 + 4y^2 + 5 z^2,$ also checked on numbers up to 1,250,000. In this second case it is easy to show that each form of the genus represents 4 times any number represented by the other form, and no numbers $2 \pmod 4$ are represented anyway, so only odd numbers come up. Anyway, 21 and 77 are composite. I have not proved these completely, just checked on computer.
EDIT TOOTOOTOO: I got an opinion from Jeremy Rouse. He points out that any positive ternary has two possible causes for having infinitely many numbers missed (compared to its genus), those being high divisibility by anisotropic primes or spinor exceptional classes. These two phenomena affect only finitely many squareclasses. In both cases, we do not increase the set of primes missed, with the result that a positive ternary fails to represent only a finite number of eligible (by congruence conditions) primes. This also explains, to some degree, the reference to Duke and Schulze-Pillot (1990). The final corollary says that any sufficiently large number that is primitively represented by some form in the same spinor genus is represented by the form of interest. There are only a few spinor exceptional squareclasses, so, even in an irregular spinor genus, we can only miss finitely many squarefree numbers, as those other than the spinor exceptional integers are represented by something in the same spinor genus, and primes are squarefree and therefore represented primitively if at all. I think I've caught up now. Note the D_S-P results give no effective bound, so we cannot identify the primes missed without some fortunate accident such as regularity, spinor regularity, regularity with regard to all odd numbers, and so forth.
-
Thanks a lot, Will. I need some time to digest your answer and make the necessary readings and I'll comment more. – Joël Nov 17 2011 at 17:29
Well, Thanks very much. I realized that there is a world of very beautiful mathematics in my close neighborhood of which I had no idea. Reading the papers you mentioned help me see why my question was not really natural, and this realization helped me abandon a wrong track I was following on my modular forms problem. Just a last thing: is there a textbook or introductory survey on the theory of representations of integers by quadratic forms in three (or more) variables? (If not, someone should write it). – Joël Dec 1 2011 at 14:51
Dear Joel,
I noticed your request for texts. The most informative chapter on positive ternaries, with the intent of predicting the represented integers, is in Dickson M.E.N.T. (1939). I have typed up a list of my books. For the moment, my websites are down, the host computer fried a power supply. So, I am including the link to my preprints on the arXiv, the papers with Alex Berkovich may be just the thing, overlap of modular forms and quadratic forms. The fundamental result is the weighted representation measure of Siegel. Again, as far as numbers integrally represented, the books of Jones, Watson, and Cassels are most helpful. I'm also including the Lattice website, although the emphasis there is classifying interesting lattices (positive forms) rather than finding the numbers represented (squared norms, often just called norms). I've included SPLAG and Ebeling, again I do not mainly use the lattice viewpoint, but there you go.
Carl Ludwig Siegel
Lectures on the Analytical Theory of Quadratic Forms (Second Term 1934/35)
Leonard Eugene Dickson
Studies in the Theory of Numbers (1930)
Modern Elementary Theory of Numbers (1939)
Burton Wadsworth Jones
The Arithmetic Theory of Quadratic Forms (1950)
George Leo Watson
Integral Quadratic Forms (1960)
John William Scott Cassels
Rational Quadratic Forms (1978)
Jean-Pierre Serre
A Course in Arithmetic (English translation 1973)
John Horton Conway
The Sensual Quadratic Form (1997)
Sphere Packings, Lattices and Groups (1988, with Neil J.A. Sloane)
Wolfgang Ebeling
Lattices and Codes (2nd, 2002)
Gordon L. Nipp
Quaternary Quadratic Forms (1991)
O. Timothy O'Meara
Introduction to Quadratic Forms (1963)
Yoshiyuki Kitaoka
Arithmetic of Quadratic Forms (1993)
Larry J. Gerstein
Basic Quadratic Forms (2008)
http://arxiv.org/find/math/1/au:+Jagy_W/0/1/0/all/0/1
http://www.math.rwth-aachen.de/~Gabriele.Nebe/LATTICES/
There are also some excellent, influential books by Lam. I sometimes ask him questions, the response is typically that he does not do forms over rings. So, among many threads that might be called quadratic forms, I put Lam in with the name Pfister. Again, I recently got involved with the lattice viewpoint, see SPLAG and Ebeling. The trick there is that it is possible to relate ideas such as covering radius to class number. This relationship is so easy that there really ought to be a short article on "here is how you do this, which you would never know by surveying the literature." But the entire matter is dismissed in a single paragraph on page 378 of SPLAG. When asking for help, I told Richard Borcherds that I sometimes wanted to write a book Here's how YOU can do quadratic forms, and he agreed that one can do a good deal with very little machinery.
-
Yes. See "Representation of integers by positive ternary quadratic forms and equidistribution of lattice points on ellipsoids", Duke and Schulze-Pillot.
-
1
Could you please give some more detail? I'm looking at Duke and Schulze-Pillot and trying to see where the Frobenian property comes up... – Will Jagy Nov 17 2011 at 5:23
http://mathoverflow.net/questions/32149/fatous-lemma-and-the-bounded-convergence-theorem
## Fatou’s Lemma and the bounded convergence theorem. [closed]
I have been studying measure theory of late, and I was stuck on these two things.
BOUNDED CONVERGENCE THEOREM: It states that if $f_n$ is a sequence of measurable functions defined on a measurable set $E$, $f_{n} \to f$ pointwise on $E$, and the $f_{n}$ are uniformly bounded (that is, $|f_{n}(x)| \leq M$ for each $x$ and every $n \in \mathbb{N}$), then $$\lim \int f_{n} = \int f$$
Could anyone tell me why this can fail if we don't assume the uniform boundedness criterion? Please elaborate.
-
4
This isn't true as stated: one needs $E$ to have finite measure. When $E$ has finite measure a standard sort of counterexample with $f_n$ not uniformly bounded is to take say $E=(0,1)$ with Lebesgue measure and $f_n$ supported on $(0,1/n)$ with integral whatever you like. – Robin Chapman Jul 16 2010 at 11:05
1
Just to add one observation: the usual proofs of the bounded convergence theorem (or the dominated convergence theorem, as Franklin alluded to in his answer) are fairly straightforward. It is quite easy to identify where each assumption is used in the proof. A good exercise to help in learning the subject is to look at that and try to see how it can fail. In your case the boundedness is used to control the difference of the integrals on sets where the difference of the functions is large. By pointwise convergence the measure of the set is small. So if the functions are bounded, the integrals... – Willie Wong Jul 16 2010 at 11:31
...will be small. (Assuming that E has finite measure.) Flip that around and you have that the errors are not controllable if the integral on those smaller and smaller sets remain approximately constant. Which will then require the function to be larger and larger on those smaller sets. – Willie Wong Jul 16 2010 at 11:35
2
I note that this question has been downvoted twice so far (not by me!) without a corresponding comment having been left. My feeling here is that this is because the answer to your question is not very difficult to look up. For example, your question is answered on the Wikipedia pages for Fatou's lemma and the Dominated Convergence Theorem. Indeed, the latter of these two pages is the first Google hit I get upon searching for "bounded convergence theorem". – Ian Morris Jul 16 2010 at 13:26
4
I would classify this as an (easy) exercise in measure theory, and thus not appropriate for MO. – Keenan Kidwell Jul 16 2010 at 13:46
## 2 Answers
I am not sure if the question fits MO standards as it is an elementary measure theory question (and if it's not, expect to be tazered by the MO police). Here goes an answer, anyway.
To expand on Robin Chapman's comment: first, the theorem as stated is false without the assumption that $E$ has finite measure. The correct generalization is the Lebesgue dominated convergence theorem, where the sequence $f_{n}$ is such that there is an integrable $g$ with $|f_n(x)|\leq g(x)$.
To see why it fails without the boundedness condition, consider the sequence of intervals $E_n= [0, 1/n]$ and take the sequence $(n\chi(E_n))$, where $\chi(E_n)$ is the characteristic function (or indicator function) of $E_n$. This sequence converges pointwise to $0$ (except at the single point $0$), but
$$\int n\chi(E_n) = 1,$$
so the sequence of integrals does not converge to $0$. What is happening is that you are shrinking the support of the functions but at the same time increasing their "amplitude", so that the two cancel each other out and the integral stays constant while the functions themselves converge to zero. The uniform bound on the sequence prevents their "amplitudes" from running off to infinity and screwing up the integrals.
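A quick numerical illustration of this counterexample (a sketch of my own, not part of the answer; it just approximates the integrals with Riemann sums on a grid over $[0,1]$):

```python
import numpy as np

# f_n = n * indicator((0, 1/n]): the pointwise limit is 0 (off a null set),
# yet the integral over [0, 1] equals 1 for every n, so lim ∫ f_n ≠ ∫ lim f_n.
x = np.linspace(0.0, 1.0, 1_000_001)[1:]   # grid on (0, 1], step 1e-6
dx = x[1] - x[0]

for n in [1, 10, 100, 1000]:
    f_n = np.where(x <= 1.0 / n, float(n), 0.0)
    print(n, f_n.sum() * dx)               # ~ 1.0 for every n
```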
Examples can be concocted where the convergence is uniform instead of just pointwise. The idea is to do the reverse of the previous example: shrink the amplitude of the functions (to guarantee their uniform convergence) while enlarging their support. This will need a measure space of infinite measure. I will leave that as an exercise.
Regards, G. Rodrigues
-
1
+1 just for the remark about tasers ;o) – Ian Morris Jul 16 2010 at 12:05
If $f_n$ is the characteristic function of the interval $[n,2n]$ then $f_n\rightarrow0$ but $\int f_n = n$, which does not tend to zero. You need even more than the boundedness. Adding that your space is of finite measure will help if you want to bound by a constant. Alternatively you can bound by an integrable function. Without a bound the example above can be even worse.
-
@Ben: Dear Ben, since you are a moderator I request you to remove this question. – Chandrasekhar May 10 2011 at 15:25
http://crypto.stackexchange.com/questions/647/digital-signature-algorithm-signature-creation
# Digital Signature Algorithm signature creation
I was studying DSS from "Cryptography and Network Security" by William Stallings. What puzzled me was the DSS approach figure described in the text. It says it uses Public and Private Keys for creating signature.
But the algorithm for creating a signature in DSA doesn't use it.
$$r = (g^{k}\mod p)\mod q$$ $$s = (k^{-1}(H(M)+ x\cdot r)) \mod q$$
Could someone please tell me the importance of the public key here? Is it needed only for verification?
-
## 1 Answer
It depends how you define what a "public key" is.
Typically it is the value of the key itself ($y$), plus information about the group (safe prime $p$, subgroup size $q$, generator of subgroup $g$). For signing a message, you do not need the value of the public key itself. So if you are strict in defining a "public key" to only be $y$, it is not needed to sign (only to verify).
On the other hand, you do need the group description ($p,q,g$) to sign a message, as well as the secret key ($x$). If you are liberal and define "public key" to include the group description, then the public key (or parts of it) is needed to sign.
I am not sure what is meant by PU$_\mathrm{G}$ in the diagram, but I suspect it is the group description? In that case, Stallings is being liberal in his definition of a public key in that it includes the group information ($p,q,g$).
As an aside, even though you do not need the public key, you do need $g$ and $x$ and since $y=g^x$, you do "know" the public key at signing time even if you do not use the value $y$ explicitly.
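To make the point concrete, here is a toy sketch (my own illustration, with artificially small and completely insecure parameters I chose for readability; real DSA uses large primes and strict requirements on the per-message nonce $k$). Signing only touches $p, q, g, x$ and $k$; the value $y$ is only used when verifying.

```python
import hashlib
import random

# Toy parameters: q = 11 divides p - 1 = 22, and g = 2 generates the
# order-q subgroup mod p.  Do not use anything like this in practice.
p, q, g = 23, 11, 2
x = 7                       # private key
y = pow(g, x, p)            # public key value (never used while signing)

def H(message: bytes) -> int:
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % q

def sign(message: bytes):
    while True:
        k = random.randrange(1, q)
        r = pow(g, k, p) % q
        if r == 0:
            continue
        s = (pow(k, -1, q) * (H(message) + x * r)) % q
        if s != 0:
            return r, s     # computed from p, q, g, x, k only

def verify(message: bytes, r: int, s: int) -> bool:
    w = pow(s, -1, q)
    u1, u2 = (H(message) * w) % q, (r * w) % q
    return (pow(g, u1, p) * pow(y, u2, p) % p) % q == r   # y is needed here

sig = sign(b"hello")
print(sig, verify(b"hello", *sig))   # should print the signature and True
```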
-
http://math.stackexchange.com/questions/118383/convergence-of-sum-limits-n-1-infty-left-dfrac-1-cdot3-cdots-2n-1
# convergence of $\sum \limits_{n=1}^{\infty }\left\{ \dfrac {1\cdot3 \cdots (2n-1)} {2\cdot 4\cdots (2n)}\cdot \dfrac {4n+3} {2n+2}\right\} ^{2}$
I am investigating the convergence of $$\begin{split}\sum _{n=1}^{\infty }\left\{ \dfrac {1\cdot 3\cdots (2n-1)} {2\cdot 4\cdots (2n)}\cdot \dfrac {4n+3} {2n+2}\right\} ^{2} &= \sum _{n=1}^{\infty }\left\{ \dfrac {\prod _{t=1}^n (2t-1)} {\prod _{t=1}^n (2t)}\cdot \dfrac {4n+3} {2n+2}\right\} ^{2} \\ &=\sum _{n=1}^{\infty }\left\{ \prod _{t=1}^n\left( 1-\dfrac {1} {2t}\right) \dfrac {4n+3} {2n+2}\right\} ^{2} \end{split}$$ which after some manipulations I have reduced to $$\sum _{n=1}^{\infty }e^ \left\{ 2\ln \left(2 -\dfrac {1} {2n+2}\right) +2\cdot \sum _{t=1}^{n}\ln \left( 1-\dfrac {1}{2t}\right) \right\}$$ and from an alternative approach I was able to reduce it to $$\sum _{n=1}^{\infty } \dfrac{\left( 4n+3\right) ^{2}}{4\left(n+1\right)^{2}} \prod _{t=1}^n\left( 2+\dfrac{1}{2t^{2}}-\dfrac{2}{t}\right)$$ I am unsure how to proceed from here in either of the two cases. Any help would be much appreciated.
-
One thing I just realized while revisiting my notes, which I had missed, is that I haven't given any thought to the ratio test. – Hardy Mar 9 '12 at 22:23
1
It seems to me that ratio test is inconclusive, because $\displaystyle \lim_{n\to \infty} \frac{a_{n+1}}{a_n} =1$. – Pacciu Mar 9 '12 at 22:28
Actually I recall a result: if $\left| \dfrac {u_{n+1}} {u_{n}}\right| =1+\dfrac {A_{1}} {n }+O\left( \dfrac {1} {n^{2}}\right)$, where $A_{1}$ is independent of $n$, then the series is absolutely convergent if $A_{1} < -1$. Is that helpful here? – Hardy Mar 9 '12 at 22:34
1
You might want to take a look at Runge's or Gauss' criterion for convergence. – Peter Tamaroff Mar 9 '12 at 23:22
## 2 Answers
We can prove by induction that
$$\dfrac {1\cdot 3\cdots (2n-1)} {2\cdot 4\cdots (2n)} \ge \frac{1}{\sqrt{4n}}$$
You can also notice that
$$\dfrac {1\cdot 3\cdots (2n-1)} {2\cdot 4\cdots (2n)} = \dfrac{\binom{2n}{n}}{4^n}$$
and try using the approximation
$$\dfrac{\binom{2n}{n}}{4^n} = \frac{1}{\sqrt{\pi n}} \left(1 + \mathcal{O}\left(\frac{1}{n}\right)\right)$$
-
a very powerful inequality (+1) – Chris's wise sister Aug 17 '12 at 19:15
Denote by $a_n$ the general term, which is positive. We can rewrite it as $\left(\frac{(2n)!}{4^nn!n!}\right)^2\left(\frac{4n+3}{2n+2}\right)^2$, which is equivalent to $b_n:=4\left(\frac{(2n)!}{4^nn!n!}\right)^2$. Now we use Stirling's formula, which states that $n!\overset{+\infty}{\sim}\left(\frac ne\right)^n\sqrt{2\pi n}$. We get \begin{align*} b_n&\overset{+\infty}{\sim} 4\left(\frac{\left(\frac{2n}e\right)^{2n}\sqrt{4n\pi}}{4^n\left(\frac ne\right)^{2n}2\pi n}\right)^2\\ &=\frac 4{n\pi}, \end{align*} and using the fact that the harmonic series diverges, we get that the series $\sum_n a_n$ is divergent.
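As a numerical sanity check of this asymptotic (my own quick sketch, not part of the answer), $n\,a_n$ should approach $4/\pi \approx 1.273$, which confirms the comparison with the harmonic series:

```python
from math import pi

# a_n = ((1*3*...*(2n-1)) / (2*4*...*(2n)) * (4n+3)/(2n+2))^2; the answer
# above says a_n ~ 4/(pi*n), so n * a(n) should tend to 4/pi.
def a(n):
    ratio = 1.0
    for t in range(1, n + 1):
        ratio *= (2 * t - 1) / (2 * t)
    return (ratio * (4 * n + 3) / (2 * n + 2)) ** 2

for n in (10, 100, 1000, 10000):
    print(n, n * a(n))
print("4/pi =", 4 / pi)
```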
-
Sorry buddy, I could only pick one answer, but I found your answer very slick and educational too. – Hardy Mar 9 '12 at 22:44
http://physics.stackexchange.com/questions/tagged/antimatter
# Tagged Questions
The antimatter tag has no wiki summary.
1answer
33 views
### Would an antimatter beam create multiple matter-antimatter explosions?
To expand upon the question - in an atmosphere, would a beam of pure antimatter (disregarding technical difficulties creating such a beam or passing it through a medium) interact with the matter in ...
2answers
87 views
### What if the antimatter in the center of our galaxy could annihilate and cause a chain reaction?
Being said that the antimatter - matter reaction is faster than that of a fission and fusion, what if the antimatter cloud found at the center of our galaxy could really able to react with matter from ...
1answer
43 views
### Could the outer structure of the Universe be made from Antimatter?
Cern recently stated that antimatter may be repelled by matter, much like the opposite effect of Gravity. So is it possible that antimatter is actually repelled to the edges of the Universe to create ...
1answer
110 views
### Strings and QFT: particles moving backward in time?
New question: In string theory and QFT, do particles travel back in time? Not related to antimatter: Do they travel back and forth in time in reality or are these just interpretations of mathematical ...
1answer
72 views
### Antimatter in the universe [duplicate]
Is it possible that some parts of the universe contain antimatter whose atoms have nuclei made of antiprotons and antineutrons, sorrounded by antiprotons? If yes what can be the ways to detect this ...
1answer
124 views
### How to guarantee that a kilogram of antimatter will quickly annihilate another kilogram of matter?
What I mean is, suppose we could somehow get a kilogram of matter and contain it safely. Now lets say we want to make a bomb using this kilogram, now, we have two ways, either store another kilogram ...
2answers
102 views
### How much energy is carried away by neutrinos in matter-antimatter annihilation?
Some people say that neutrinos carry away most of the energy, some others say just a fraction. So what is the truth ? what is the percentage of energy lost due to neutrinos ?
1answer
122 views
### How fast is the matter and antimatter reaction compared to nuclear chain reaction?
What I mean is, the nuclear chain reactions take microseconds for every generation and that is the reason that nuclear weapons exist. Because in nuclear reactors the reaction rate is much slower thus ...
1answer
41 views
### Gravitational potential energy with regards to annihilation
Given particles A, B, C and D, where: A and B have an equivalent mass C and D have an equivalent mass, both larger than A (or B) D is the antiparticle of C. A and B start close to C, but with ...
2answers
82 views
### Effect of mass matter-antimatter annihilation? [duplicate]
Possible Duplicate: What actually happens when an anti-matter projectile collides with matter? If a large amount of antimatter is suddenly released or launched in the open and expands, ...
1answer
53 views
### Could much of the “missing” antimatter make up neutrons?
A neutron can decay into a proton, an electron, and neutrino. Could an antiproton, a positron, and a neutrino combine into a neutron? Could this be where much of the "missing" antimatter is?
3answers
141 views
### Do particle pairs avoid each other? Please end my musings
Can you explain what happens when a particle and its antiparticle are created. Do they whiz away from each other at the speed of light or what? I suppose that they don't because otherwise they would ...
1answer
48 views
### When is the FAIR accelerator supposed to be finished? [closed]
The FAIR accelerator is a planned facility for antiproton and ion research. Ground water wells are being put in, the forest is being cleared... But when is it supposed to be completed? Does anyone ...
2answers
133 views
### how do we know that the base of entire universe is the proton (hydrogen) and not the antiproton?
It may be that the base of a part of the world is anti-proton, We've always been on the planet Earth and the Milky Way. how do we know that the base of entire universe is proton (hydrogen atom)?
2answers
110 views
### matter anti-matter world
let us suppose the gedankenexperiment: a man isolated in a room asks if he is made of matter or of antimatter; could he set up some experiments to see if he is made of matter or if he is made of ...
2answers
336 views
### Schrödinger's equation, time reversal, negative energy and antimatter
You know how there are no antiparticles for the Schrödinger equation, I've been pushing around the equation and have found a solution that seems to indicate there are - I've probably missed something ...
1answer
818 views
### Anti-matter black hole and time
I have recently read some hard science-fiction story based on an assumption that if time stops (from external observer's perspective) on the event horizon of black hole, then in an anti-matter black ...
0answers
52 views
### Complete annihilation of matter-antimatter [duplicate]
Possible Duplicate: More on matter and anti-matter Everyone knows that when matter and anti-matter come in contact, they result in pure energy. I want to know that if atoms and anti-atoms ...
2answers
313 views
### Creation of particle anti-particle pairs
I was reading some QFT notes and there is one point that I don't understand, they are justifying why we need QFT saying that the number of particles is not preserved once we consider special ...
1answer
100 views
### On non-local physics
Recently I've encountered work by prof. B.V Alekseev, in which he claims that some physical problems can be easily solved if we consider non-local interactions in kinetic theory (interactions of ...
2answers
199 views
### Can a neutron be created from pure energy
Is it possible to create a neutron out of pure energy, i.e. not by bringing a bunch of already-existing quarks together? (A quick calculation using E = mc2 shows the energy required would be about 1.5 ...
1answer
833 views
### Exotic Matter — What is it?
I have a conceptual understanding of some theoretical physics concepts, so without getting into the math, what is exotic matter? I have read some articles that say it is anything not normal, while ...
0answers
62 views
### positronium BEC stability
After reading this article regarding Positronium BEC formation (for lasing purposes), there is a mention in there regarding Ps "up" atoms not annihilating with "down" atoms, the article is pretty ...
0answers
67 views
### scaling laws for density and temperature of high-energy explosions
I'm wondering if there are heuristic ways to derive how the peak density and temperature of nuclear explosions scale with the amount of fissile/fusible material. Does it matter what the explosion ...
2answers
209 views
### Antimatter bomb
I stumbled upon this wikipedia article on antimatter weaponry. Being greatly appalled by the sad fact that large sums of money are being wasted on this, I could not stop myself from thinking for a ...
3answers
175 views
### supressing certain decay paths and enhancing others with interference
In a scattering reaction, there are many possible final states for the products, each with different production rates. Question: Is there a way in which we could in general supress certain rates ...
1answer
102 views
### How many anti-particles hit the ground?
I am curious to know the amount of flux of anti-particles that arrive to the ground in the cosmic rays. The reason is that I thought it should be very improbable that an anti-particle traveling ...
1answer
323 views
### Anti particles: What exactly is inverted?
http://en.wikipedia.org/wiki/Antiparticle says "Corresponding to most kinds of particles, there is an associated antiparticle with the same mass and opposite electric charge." and What is anti-matter? ...
0answers
146 views
### What makes *electric* charge special (wrt. CPT theorem)?
I'm wondering why the 'C' in CPT - charge conjugation - refers specifically to electric charge. Of course you could say that C is just defined as $e^+ \leftrightarrow e^-$... but there has to be ...
1answer
45 views
### Bigger anti-matter particles
I have learned about the existence of positrons as a decay product from uranium fission - if I'm not mistaken. Is there any evidence for higher 'mass' anti-matter, or is that mere speculation or ...
4answers
158 views
### Can cosmic inflation be explained by matter antimatter reactions?
The big bang theory proposes that equal amounts of matter and antimatter were created in the beginning. Shortly afterwards most of it annihilated. Could that have produced enough energy to drive ...
2answers
228 views
### How does Annihilation work?
I'm wondering why matter and antimatter actually annihilates if they come into contact. What exactly happens? Is that a known process? Is it just because of their different charges? Then what about ...
2answers
1k views
### What would happen after the collision matter and the anti-matter [duplicate]
Possible Duplicate: What actually happens when an anti-matter projectile collides with matter? Suppose 1 kg of a stray meteorite anti-matter moves to the earth. What would happen after ...
0answers
54 views
### Energy efficiency of antimatter producion [duplicate]
Possible Duplicate: Matter - Antimatter Reactory Practicality Certain reactions in particle accelerators lead to production of antimatter, which can then be collected in noticeable ...
1answer
191 views
### Anti-Matter for Neutrons
The anti-particle corresponding to a proton or an electron is a particle with an equal mass, but an opposite charge. So what is the anti-particle corresponding to a neutron (which does not possess a ...
2answers
97 views
### Massless particle as a result of annihilation of “heavy” particles
How can a massless particle such a photon be the result of electron-positron annihilation? What about the law of conservation of energy? Is a valid explanation that the pair's energy transforms itself ...
2answers
116 views
### Matter - Antimatter Reactory Practicality
With current technology, would the energy released by a matter-antimatter annihilation be more than the energy needed to created the antimatter in the first place? Would it be worth it? Just curious, ...
3answers
154 views
### mechanism of annihilation
Can the annihilation of matter and antimatter be explained by the electro-weak interaction? Can pair-production be explained in the same way?
2answers
215 views
### Baryon asymmetry
Baryon asymmetry refers to the observation that apparently there is matter in the Universe but not much antimatter. We don't see galaxies made of antimatter or observe gamma rays that would be ...
4answers
610 views
### What is anti-matter?
Matter-- I guess I know what it is ;) somehow, at least intuitively. So, I can feel it in terms of the weight when picking something up. It may be explained by gravity which is itself is defined by ...
3answers
147 views
### What barriers exist to prevent us from turning a baryon into a anti-baryon?
At present the only way we can produce anti-matter is through high powered collisions. New matter is created from the energy produced in these collisions and some of them are anti-matter particles ...
2answers
495 views
### Do particles and anti-particles attract each other?
Do particles and anti-particles attract each other? From the very basic understanding that they are created out of nothing mutually and collide to annihilate each other seems to indicate this happens ...
4answers
861 views
### If an anti-matter singularity and a normal matter singularity, of equal masses, collided would we (outside the event horizon) see an explosion?
If an anti-matter singularity and a normal matter singularity, of equal masses, collided would we (outside the event horizon) see an explosion?
2answers
105 views
### Can the charge of particles spontaneously flip from positive to negative or vice versa?
I'm thinking of matter antimatter annihilation, are there reactions where normal matter converts to antimatter?
2answers
207 views
### Can different species of particles annihilate with other species
Obviously electrons annihilate with positrons, but can a muon annihilate with an positron, or can an anti-taon cancel with a muon? similarly for quarks of different species, e.g. u and anti-strange. ...
3answers
449 views
### What was missing in Dirac's argument to come up with the modern interpretation of the positron?
When Dirac found his equation for the electron $(-i\gamma^\mu\partial_\mu+m)\psi=0$ he famously discovered that it had negative energy solutions. In order to solve the problem of the stability of the ...
2answers
438 views
### What happens if we put together a proton and an antineutron?
A hydrogen nucleus consists of a single proton. A 2-hydrogen (deuterium) nucleus consists of a proton and a neutron. A tritium nucleus consists of a proton and two neutrons. This makes me wonder how ...
2answers
335 views
### the causality and the anti-particles
How can I quantitatively and qualitatively understand the fact that there is a relevence between the existence of anti-particles and the causality?
3answers
635 views
### Why do or don't neutrinos have antiparticles?
This was inspired by this question. According to Wikipedia, a Majorana neutrino must be its own antiparticle, while a Dirac neutrino cannot be its own antiparticle. Why is this true?
1answer
118 views
### Is it possible that portions of the universe are made of antimatter? [duplicate]
Possible Duplicate: Experimental observation of matter/antimatter in the universe I've heard a bit about the antimatter, matter inbalance. But I don't understand how it has been decided ...
http://mathoverflow.net/questions/3525/when-are-probability-distributions-completely-determined-by-their-moments/4787
## When are probability distributions completely determined by their moments?
If two different probability distributions have identical moments, are they equal? I suspect not, but I would guess they are "mostly" equal, for example, on everything but a set of measure zero. Does anyone know an example of two different probability distributions with identical moments? The less pathological the better. Edit: Is it unconditionally true if I specialize to discrete distributions?
And a related question: Suppose I ask the same question about Renyi entropies. Recall that the Renyi entropy is defined for all $a \geq 0$ by
$$H_a(p) = \frac{\log\left(\sum_j p_j^a\right)}{1-a}.$$
You can define $a = 0, 1, \infty$ by taking suitable limits of this formula. Are two distributions with identical Renyi entropies (for all values of the parameter $a$) actually equal? How "rigid" is this result? If I allow two Renyi entropies of distributions $p$ and $q$ to differ by at most some small $\varepsilon$ independent of $a$, then can I put an upper bound on, say, $\|p - q\|_1$ in terms of $\varepsilon$? What can be said in the case of discrete distributions?
-
## 8 Answers
Roughly speaking, if the sequence of moments doesn't grow too quickly, then the distribution is determined by its moments. One sufficient condition is that if the moment generating function of a random variable has positive radius of convergence, then that random variable is determined by its moments. See Billingsley, Probability and Measure, chapter 30.
A standard example of two distinct distributions with the same moments is based on the lognormal distribution:
$$f_0(x) = (2\pi)^{-1/2}\, x^{-1} \exp\!\left(-(\log x)^2/2\right), \qquad x > 0,$$
which is the density of the lognormal, and the perturbed version
$$f_a(x) = f_0(x)\,\bigl(1 + a \sin(2\pi \log x)\bigr), \qquad -1 \le a \le 1.$$
These have the same moments; namely, the $n$th moment of each of these is $\exp(n^2/2)$.
A condition for a distribution over the reals to be determined by its moments is that $\limsup_{k \to \infty} \mu_{2k}^{1/(2k)}/(2k)$ is finite, where $\mu_{2k}$ is the $(2k)$th moment of the distribution. For a distribution supported on the positive reals, $\limsup_{k \to \infty} \mu_{k}^{1/(2k)}/(2k)$ being finite suffices.
This example is from Rick Durrett, Probability: Theory and Examples, 3rd edition, pp. 106-107; as the original source for the lognormal Durrett cites C. C. Heyde (1963) On a property of the lognormal distribution, J. Royal. Stat. Soc. B. 29, 392-393.
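The equal-moments claim is easy to check numerically (a rough sketch of my own, not from the answer; it substitutes $t = \log x$ so that the $n$th moment of $f_0$ becomes $\int e^{nt}\varphi(t)\,dt$ with $\varphi$ the standard normal density, and the perturbed density contributes an extra factor $1 + a\sin(2\pi t)$; the choice $a = 0.5$ is arbitrary):

```python
import numpy as np
from scipy.integrate import quad

# Compare the n-th moments of the lognormal f0 and the perturbed density fa
# after the substitution t = log x.  Both columns should match exp(n^2 / 2).
phi = lambda t: np.exp(-t ** 2 / 2) / np.sqrt(2 * np.pi)   # standard normal pdf
a = 0.5

for n in range(4):
    m0, _ = quad(lambda t: np.exp(n * t) * phi(t), -np.inf, np.inf)
    ma, _ = quad(lambda t: np.exp(n * t) * phi(t) * (1 + a * np.sin(2 * np.pi * t)),
                 -np.inf, np.inf)
    print(n, round(m0, 6), round(ma, 6), round(float(np.exp(n ** 2 / 2)), 6))
```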
-
1
Interesting that you get a weaker condition for distributions supported on the positive reals, but I can see why that should be so. If X is a non-negative random variable then you can set Y=ε√X, where ε=+1,-1 each with probability 1/2 and independent of X. Then, the odd moments of Y are zero, the 2n'th moment of Y equals the n'th moment of X, and determining the distribution of Y is enough to find the distribution of X. – George Lowther Oct 31 2009 at 17:15
Thanks Michael! Do you know anything about the "Renyi" part of the question? – Steve Flammia Oct 31 2009 at 20:08
Steve, I don't know anything about the Renyi part. I wish I did. – Michael Lugo Nov 3 2009 at 1:42
This sounds like one of the classical "moment problems" that have been much studied, although I'm afraid I don't know the literature. Wikipedia suggests that the term to look for is Hamburger moment problem
A quick Google also throws up an article by Stoyanov which ought to have some examples of non-uniqueness and pointers to the literature.
As you might know, if we know in advance that the density is confined to some bounded interval (say [-1,1] for sake of argument), then the moments do indeed determine the density. (This basically follows because the density is determined by its values when integrated against continuous functions, and continuous functions on a closed bounded interval can be approximated to arbitrary accuracy by polynomials)
-
As has been mentioned in previous answers, the moments do not uniquely determine the distribution unless certain conditions are satisfied, such as bounded support. One thing you can say is that the distribution of a random variable $X$ is uniquely determined by the characteristic function $\varphi_X(a)=E[\exp(iaX)]$. Letting $m_n=E[X^n]$ be the $n$th moment, this can be expanded as
$$\varphi_X(a) = \sum_n \frac{i^n a^n m_n}{n!},$$
which is valid within its radius of convergence. So the moments will uniquely determine the distribution as long as this has infinite radius of convergence, which is the case as long as $\limsup_{n\to\infty}|m_n/n!|^{1/n}=0$. Stirling's formula simplifies it a bit to $\limsup_{n\to\infty}|m_n|^{1/n}/n=0$. This can be proven using the dominated convergence theorem.
For example, a distribution bounded by $K$ has $|m_n|\leq K^n$, which satisfies this condition.
On the other hand, it is possible to construct distinct distributions supported in the positive integers and with the same moments. To do this, you need to find a sequence of real numbers $c_n$ satisfying $\sum_n c_n n^r=0$ for all $r$ (and converging absolutely). This doesn't involve anything more than solving some linear equations for any finite set of powers $r$. Then, by adding more terms to extend to all positive integers $r$, you get the infinite sequence $c_n$. The two distributions can then be obtained by taking the positive and negative parts of $c_n$.
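The finite version of that linear-algebra step is easy to carry out explicitly (a sketch of my own; it only matches the first $R$ moments, whereas the construction above extends this to all powers by adding more and more terms):

```python
import numpy as np

# Find weights c_1..c_N on {1,...,N} with sum_n c_n * n^r = 0 for r = 0..R,
# split them into positive and negative parts, and renormalize: the two
# resulting distributions are distinct but share their first R moments.
N, R = 8, 5                                     # arbitrary small sizes
n = np.arange(1, N + 1, dtype=float)
V = np.vstack([n ** r for r in range(R + 1)])   # (R+1) x N constraint matrix

c = np.linalg.svd(V)[2][-1]                     # a vector in the null space of V

pos, neg = np.clip(c, 0, None), np.clip(-c, 0, None)
P, Q = pos / pos.sum(), neg / neg.sum()         # sums agree since r = 0 is a constraint

for r in range(R + 1):
    print(r, np.dot(P, n ** r), np.dot(Q, n ** r))   # equal up to rounding
print("distinct:", not np.allclose(P, Q))
```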
-
Suppose all moments exist for X and Y.
1) If X and Y have bounded support, the CDFs of X and Y are equal if and only if all moments are equal.
2) If the moment generating functions exist and M_X(t) = M_Y(t) for all t in an open neighborhood of 0, then the CDFs of X and Y are equal.
-
Can you give a reference or an explanation for why the second result is true? I believe Yemon explained why the first one is true. – Anton Geraschenko♦ Oct 31 2009 at 14:58
Here's a reference: Statistical Inference by Casella and Berger (bit.ly/2ZMNV0) 2nd edition, page 65. – John D. Cook Oct 31 2009 at 16:11
The Renyi entropy depends only on the probabilities, and not on the values the RV takes; any 1-1 function of the RV has the same entropy.
If you're asking whether the Renyi entropies determine the sequence of probabilities $p_i$, then the answer is yes. Assume WLOG that the $p_i$ are in descending order. Then the limit of $H_a$ as $a$ tends to infinity is $-\log p_0$, which determines $p_0$. Once you know $p_0$, it is easy to calculate the entropies for the remaining sequence $p_1, p_2, \ldots$, which then allows us to find $p_1$, etc.
-
Thinking about the Renyi part of this question again today, I realized that there is a simple and elegant way to show the equivalence of knowing the Renyi entropies and knowing the probabilities (in principle) without taking limits. See Ori's comments, also.
Suppose we have just a finite number of outcomes. Then we can place all of the probabilities for each outcome on the diagonal of a large matrix. The Renyi entropies are basically just the traces of the powers of this matrix for integer values of $\alpha$. We would like to show that knowing these trace powers is equivalent to knowing the probabilities themselves. Intuitively, this seems clear, since it is just an overdetermined system of polynomial equations, but a priori it isn't clear that there isn't some weird degeneracy hidden somewhere that would preclude a unique solution. So, we have the trace powers, and as a function of the probabilities, these are just the power sums. We can use the Newton-Girard identities to transform these into the elementary symmetric polynomials. Then we can express the characteristic polynomial of our large matrix as a sum over these. The roots of this polynomial are of course the eigenvalues, which are just the probabilities in question.
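For a small example this recovery can be carried out explicitly (a sketch of my own, not from the answer): starting from the power sums $t_k = \sum_i p_i^k$, which is exactly the information in the integer-$\alpha$ Renyi entropies, Newton's identities give the elementary symmetric polynomials, and the probabilities come back as the roots of the characteristic polynomial.

```python
import numpy as np

# Recover a probability vector from its power sums t_k = sum_i p_i^k,
# k = 1..n, via the Newton-Girard identities and a polynomial root-find.
p = np.array([0.5, 0.3, 0.2])                   # the "unknown" distribution
n = len(p)
t = [np.sum(p ** k) for k in range(1, n + 1)]   # power sums t_1..t_n

e = [1.0]                                       # elementary symmetric polynomials e_0..e_n
for k in range(1, n + 1):
    e.append(sum((-1) ** (i - 1) * e[k - i] * t[i - 1] for i in range(1, k + 1)) / k)

# characteristic polynomial: x^n - e_1 x^(n-1) + e_2 x^(n-2) - ... + (-1)^n e_n
coeffs = [(-1) ** k * e[k] for k in range(n + 1)]
print(np.sort(np.roots(coeffs))[::-1])          # ~ [0.5, 0.3, 0.2]
```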
-
Very nice. I'm not sure how does this method work for the infinite support case. Notice also that when taking limits you only need to know the (exact) tail behavior of the Renyi entropy. – Ori Gurel-Gurevich Nov 10 2009 at 1:53
Yes, it is not clear how to make this work in the infinite support case. For the finite case, I just think it is nice because you only use as many values of the Renyi entropy as there are outcomes in the probability vector. So it feels somehow very economical and more satisfying than taking limits. However I agree with you that, for any actual computation, this is much harder to work with than taking limits. – Steve Flammia Nov 10 2009 at 2:34
I've heard (from my undergraduate stats profs) that the answer is that two distributions can have the same moments and yet be different. I either don't remember or never had an actual example though. I'd guess you could (maybe) look for an example by comparing a discrete distribution and a continuous one.
-
I don't have it on hand, but Billingsley's book "Probability and Measure" has a nice section on this issue, including the classic example of a distribution not uniquely determined by its moments: the log-normal distribution (i.e., the distribution of e^Z, where Z~N(0,1)).
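For concreteness, here is a hedged numerical sketch of that classic example: perturbing the log-normal density by a factor $1 + a\sin(2\pi\ln x)$ with $|a|\le 1$ leaves every integer moment unchanged (the amplitude $0.5$ below is an arbitrary choice; the check is done in the variable $u=\ln x$):

```python
import numpy as np
from scipy.integrate import quad

# Sketch of the classical counterexample: the log-normal density and its
# perturbation by a factor (1 + a*sin(2*pi*log x)), |a| <= 1, have the same
# integer moments.  Working in u = log x, the k-th moment is the integral of
# exp(k*u) * phi(u) * (1 + a*sin(2*pi*u)), and the sine term integrates to zero.
phi = lambda u: np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)
a = 0.5                                   # any |a| <= 1 keeps the density nonnegative

for k in range(5):
    plain = quad(lambda u: np.exp(k*u) * phi(u), -np.inf, np.inf)[0]
    pert  = quad(lambda u: np.exp(k*u) * phi(u) * (1 + a*np.sin(2*np.pi*u)),
                 -np.inf, np.inf)[0]
    print(k, plain, pert)                 # both approximately exp(k**2 / 2)
```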
There are known (but not to me off the top of my head) necessary and sufficient conditions for a distribution to be determined by its moments, in terms of the rate of growth of the moments; I think but I'm not sure those are in Billingsley. If not, I'd check Feller next. In any case, I expect that the situation is not better for discrete distributions - you can discretize the log-normal distribution in a way that increases the size of the moments to get a discrete distribution. Then you get a discrete probability distribution with the same moments as some other probability distribution. I don't know a proof that you can arrange for the second distribution also to be discrete, but I'd guess you can.
As for your second question, unless I'm misunderstanding something then I think a discrete counterexample to the first question also provides a counterexample to the second.
-
Billingsley says that a sufficient condition for a distribution to be determined by its moments is that the moment generating function converges. That means that the moments grow at most exponentially fast. – Michael Lugo Oct 31 2009 at 15:39
http://mathhelpforum.com/trigonometry/33472-trig-equations-multiple-trig-functions-cont.html
Thread:
1. Trig Equations with Multiple Trig Functions cont.
Find, to the nearest tenth of a degree, all values of x in the interval 0≤x≤360 that satisfy the equation.
tan x=cos x
also solve for x in the interval 0≤x≤2π
cos ½x= cos x
2. Originally Posted by ~berserk
tan x=cos x
$\frac{\sin x}{\cos x} = \cos x$
$\sin x = \cos^2 x$
$\sin x = 1 - \sin^2 x$
$\sin^2 x + \sin x - 1 = 0$
Solve for $x$. (Let $\sin x = k$ and solve using the quadratic formula.)
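A quick numeric check of this route (a sketch; the positive root of $k^2+k-1=0$ is the only admissible value of $\sin x$, since the other root is less than $-1$):

```python
import numpy as np

# Numeric check for tan x = cos x on [0, 360) degrees: with k = sin x, the quadratic
# k^2 + k - 1 = 0 gives k = (-1 + sqrt(5))/2; the other root is rejected.
k = (-1 + np.sqrt(5)) / 2
x1 = np.arcsin(k)                 # first-quadrant solution
x2 = np.pi - x1                   # second-quadrant solution
print(np.degrees([x1, x2]))       # approximately [38.2, 141.8]
print(np.tan(x1), np.cos(x1))     # equal
print(np.tan(x2), np.cos(x2))     # equal
```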
Originally Posted by ~berserk
cos ½x= cos x
$cos x - cos \frac{x}{2} = 0$
This is true where x is 0 and there also seems to be a solution between 45 and 60 degrees, as well as between 15 and 30 degrees. I suggest you draw an accurate graph to find the points.
3. for the second one I suggest doing it this way
$\cos\bigg(\frac{x}{2}\bigg)=\cos(x)$...now using the half-angle identity for $\cos\bigg(\frac{x}{2}\bigg)$ you obtain $\sqrt{\frac{1+\cos(x)}{2}}=\cos(x)$...now by squaring each side and simplifying you get $2\cos^2(x)-\cos(x)-1=0$. Define $u=\cos(x)$ and solve by the quadratic formula.
4. i think i messed up the second problem yesterday because I did it and solved it like this
5. Originally Posted by ~berserk
i think i messed up the second problem yesterday because I did it and solved it like this
IDk...but here you go, you factored it right, so $\cos(x)=1$ gives $x=\cos^{-1}(1)=0$, and indeed $\cos\bigg(\frac{1}{2}\cdot 0\bigg)=\cos(0)$. The other factor gives $\cos(x)=-\frac{1}{2}$, i.e. $x=\frac{2\pi}{3}$ or $x=\frac{4\pi}{3}$; checking against the original equation (squaring can introduce extraneous roots), only $x=\frac{4\pi}{3}$ is a valid solution.
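A short check of the candidate roots against the original equation (sketch):

```python
import numpy as np

# Checking the candidate roots of 2*cos(x)^2 - cos(x) - 1 = 0 against the
# original equation cos(x/2) = cos(x); squaring introduced an extraneous root.
for x in (0.0, 2*np.pi/3, 4*np.pi/3):
    print(round(x, 3), np.cos(x/2), np.cos(x))   # only x = 0 and x = 4*pi/3 match
```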
http://mathhelpforum.com/math-topics/49330-define-rational-function.html
# Thread:
1. ## Define rational function
From graph of the rational function, you must define $a, b, c, d$ in $f(x)=\frac{ax+b}{cx+d}$.
(I'm assuming both x-axis and y-axis are crossed at $-\frac{1}{2}$)
I start with this set-up: $\frac{b}{d}=-\frac{1}{2}$ (because of $f(0)=-\frac{1}{2}$)
$\frac{a}{c}=2$ (asymptote)
$-\frac{1}{2}a+b=0$ (because at $x=-\frac{1}{2}$ the function cross the x-axis and is equal to zero)
$2c+d=0$ (because pole of the function is at $x=2$).
So it comes down to 4 equations with 4 unknown variables, but I've tried many times solving, and I constantly get trapped in loop (you know, where your "new" equation is actually some old ones mixed up?)!!
Where is the flaw in my logic?
2. Hello,
Your logic is perfect! I think I see what's going wrong.
I get these equations from yours :
$\begin{aligned} d=-2b \\ a=2c \\ a=2b \\ d=-2c \end{aligned}$
From the second and the third (or the first and the fourth), we have $\boxed{b=c}$.
So it reduces to :
$\boxed{d=-2b}$
$\boxed{a=2b}$
The equation is now $y=\frac{2bx+b}{bx-2b}$
Simplify by $b$ to get $y=\frac{2x+1}{x-2}$.
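A quick check of the resulting function with a CAS (taking the representative $b=1$, so $f(x)=\frac{2x+1}{x-2}$; this is just a sanity check, not part of the original thread):

```python
import sympy as sp

# Sketch: with the representative choice b = 1, f(x) = (2x + 1)/(x - 2); this just
# confirms the graph data that produced the four equations.
x = sp.symbols('x')
f = (2*x + 1) / (x - 2)
print(f.subs(x, 0))               # -1/2, the y-intercept
print(sp.solve(f, x))             # [-1/2], the x-intercept
print(sp.limit(f, x, sp.oo))      # 2, the horizontal asymptote
print(sp.solve(sp.denom(f), x))   # [2], the vertical asymptote (pole)
```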
3. Makes it a lot easier to insert into $f(x)$... but, shouldn't there be a solution "directly" from the 4 original equations (as there are 4 unknowns)?
How would you solve it without substituting one equation into another and getting trapped in a "loop"? Is that even possible?
4. Originally Posted by courteous
Makes it a lot easier to insert into $f(x)$... but, shouldn't there be a solution "directly" from the 4 original equations (as there are 4 unknowns)?
How would you solve it without substituting one equation into another and getting trapped in a "loop"? Is that even possible?
It depends.
Here, for example, the trap is that $f(x)$ only depends on ratios of the coefficients, so a linear combination of $a$ and $b$ was bound to be proportional to a linear combination of $c$ and $d$: any common factor cancels when the coefficients are put back into $f(x)$.
Here, it turned out that all four coefficients were proportionally related.
You can have exercises where you have to find $a,b,c,d$ and where there won't be any loop, but then the function won't be a quotient like the one you have here.
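One way to see the "loop" quantitatively: written as a homogeneous linear system, the four conditions have rank 3, so they determine $(a,b,c,d)$ only up to an overall scale. A minimal sketch (the matrix rows below are my own rewriting of the four equations):

```python
import numpy as np

# The four conditions are homogeneous linear equations in (a, b, c, d).  Writing
# them as M @ [a, b, c, d] = 0 shows why the substitutions "loop": M has rank 3,
# so the solutions form a one-parameter family (the overall scale of a, b, c, d).
M = np.array([
    [0.0,  1.0,  0.0, 0.5],   # b/d = -1/2   ->  b + d/2 = 0
    [1.0,  0.0, -2.0, 0.0],   # a/c = 2      ->  a - 2c = 0
    [-0.5, 1.0,  0.0, 0.0],   # -a/2 + b = 0
    [0.0,  0.0,  2.0, 1.0],   # 2c + d = 0
])
print(np.linalg.matrix_rank(M))   # 3, not 4
print(np.linalg.svd(M)[2][-1])    # a null vector, proportional to (2, 1, 1, -2)
```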
http://unapologetic.wordpress.com/2009/10/08/cauchys-invariant-rule/?like=1&_wpnonce=401286028f
# The Unapologetic Mathematician
## Cauchy’s Invariant Rule
An immediate corollary of the chain rule is another piece of “syntactic sugar”.
If we have functions $g:X\rightarrow\mathbb{R}^n$ and $f:Y\rightarrow\mathbb{R}^p$ for some open regions $X\subseteq\mathbb{R}^m$ and $Y\subseteq\mathbb{R}^n$ so that the image $g(X)$ is contained in $Y$, we can compose the two functions to get a new function $f\circ g:X\rightarrow\mathbb{R}^p$. In terms of formulas, we can choose coordinates $y^i$ on $\mathbb{R}^n$ and write out both the function $f(y^1,\dots,y^n)$ and the component functions $g^1(x),\dots,g^n(x)$. We get a formula for $\left[f\circ g\right](x)$ by substituting $g^i(x)$ for $y^i$ in the formula for $f$ and write $y^i=g^i(x)$.
The language there seems a little convoluted, so I’d like to give an example. We might define a function $f(x,y)=e^{x^2+y^2}$ for all points $(x,y)$ in the plane $\mathbb{R}^2$. This is all well and good, but we might want to talk about the function in polar coordinates. To this end, we may define $x=r\cos(\theta)$ and $y=r\sin(\theta)$. These are the component functions describing a transformation $g$ from the region $(r,\theta)\in(0,\infty)\times(-\pi,\pi)\subseteq\mathbb{R}^2$ to the region where $(x,y)\neq(0,0)$. We can substitute $r\cos(\theta)$ for $x$ and $r\sin(\theta)$ for $y$ in our formula for $f$ to get a new function $f\circ g$ with formula
$\displaystyle f(g(r,\theta))=e^{r^2\cos(\theta)^2+r^2\sin(\theta)^2}=e^{r^2}$
This much is straightforward. The thing is, now we want to take differentials. What Cauchy’s invariant rule tells us is that we can calculate the differential of $f\circ g$ by not only substituting $g^i(x)$ for $y^i$, but also substituting $dg^i(x;t)$ for $s^i$ in the formula for $df(y;s)$. That is, if $h=f\circ g$ then we have the equivalence
$\displaystyle dh(x;t)=df(g^1(x),\dots,g^n(x);dg^1(x;t),\dots,dg^n(x;t))$
In our particular example, we can easily calculate the differential of $f$ using our first formula:
$df(x,y)=2xe^{x^2+y^2}dx+2ye^{x^2+y^2}dy$
or using our second formula:
$df(r,\theta)=2re^{r^2}dr$
We want to call both of these simply $df$. But can we do so unambiguously? Indeed, if $x=r\cos(\theta)$ then we find
$\displaystyle dx=\cos(\theta)dr-r\sin(\theta)d\theta$
and if $y=r\sin(\theta)$ then we find
$\displaystyle dy=\sin(\theta)dr+r\cos(\theta)d\theta$
We substitute these into our formula for $df(x,y)$ to find
$\displaystyle\begin{aligned}df(r,\theta)&=2r\cos(\theta)e^{r^2\cos(\theta)^2+r^2\sin(\theta)^2}\left(\cos(\theta)dr-r\sin(\theta)d\theta\right)+2r\sin(\theta)e^{r^2\cos(\theta)^2+r^2\sin(\theta)^2}\left(\sin(\theta)dr+r\cos(\theta)d\theta\right)\\&=2r\cos(\theta)e^{r^2}\left(\cos(\theta)dr-r\sin(\theta)d\theta\right)+2r\sin(\theta)e^{r^2}\left(\sin(\theta)dr+r\cos(\theta)d\theta\right)\\&=2r\cos(\theta)e^{r^2}\cos(\theta)dr+2r\sin(\theta)e^{r^2}\sin(\theta)dr-2r\cos(\theta)e^{r^2}r\sin(\theta)d\theta+2r\sin(\theta)e^{r^2}r\cos(\theta)d\theta\\&=\left(2r\cos(\theta)^2e^{r^2}+2r\sin^2(\theta)e^{r^2}\right)dr+\left(2r^2\cos(\theta)\sin(\theta)e^{r^2}-2r^2\cos(\theta)\sin(\theta)e^{r^2}\right)d\theta\\&=2re^{r^2}dr\end{aligned}$
just the same as if we calculated directly from the formula in terms of $r$ and $\theta$.
That is, we can substitute our formulæ for the coordinate functions $y^i=g^i(x)$ before taking the differential in terms of $x$, or we can take the differential in terms of $y$ and then substitute our formulæ for the coordinate functions $y^i=g^i(x)$ and their differentials $dy^i=dg^i(x)$ into the result. Either way, we end up in the same place, so we don’t have to worry about ending up with two (or more!) “different” differentials of $f$.
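For readers who like to see this mechanically, here is a small symbolic check of the example above (a sketch using SymPy; the variable names are arbitrary):

```python
import sympy as sp

# Sketch: verify Cauchy's invariant rule on the example f(x, y) = exp(x^2 + y^2)
# with x = r*cos(theta), y = r*sin(theta).  Route 1 substitutes first and then
# takes the differential; route 2 takes df in (x, y) and substitutes dx, dy.
x, y, r, th = sp.symbols('x y r theta', positive=True)
dr, dth = sp.symbols('dr dtheta')

f = sp.exp(x**2 + y**2)
sub = {x: r*sp.cos(th), y: r*sp.sin(th)}

f_polar = f.subs(sub)
df_route1 = sp.diff(f_polar, r)*dr + sp.diff(f_polar, th)*dth

dx = sp.cos(th)*dr - r*sp.sin(th)*dth
dy = sp.sin(th)*dr + r*sp.cos(th)*dth
df_route2 = (sp.diff(f, x)*dx + sp.diff(f, y)*dy).subs(sub)

print(sp.simplify(df_route1 - df_route2))   # 0: both routes give 2*r*exp(r**2)*dr
```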
So, how do we verify this using the chain rule? Just write out the differentials out using partial derivatives. For example, we know that
$\displaystyle df(y;s^1,\dots,s^n)=\frac{\partial f}{\partial y^i}\biggr\vert_ys^i$
and so on. So, performing our substitutions we can find:
$\displaystyle\begin{aligned}df(g(x);dg^1(x;t),\dots,dg^n(x;t))&=\frac{\partial f}{\partial y^i}\biggr\vert_{y=g(x)}dg^i(x;t)\\&=\frac{\partial f}{\partial y^i}\biggr\vert_{y=g(x)}\frac{\partial g^i}{\partial x^j}\biggr\vert_xt^j\\&=\frac{\partial\left[f\circ g\right]}{\partial x^j}\biggr\vert_xt^j\\&=d\left[f\circ g\right](x;t)\end{aligned}$
The important part here is the passage from products of two partial derivatives to single partial derivatives of $f\circ g$. This works out because when we consider differentials as linear transformations, the matrix entries are the partial derivatives. The composition of the linear transformations $df(g(x))$ and $dg(x)$ is given by the product of these matrices, and the entries of the resulting matrix must (by uniqueness) be the partial derivatives of the composite function.
Posted by John Armstrong | Analysis, Calculus
## 5 Comments »
1. [...] Product and Quotient rules As I said before, there’s generally no product of higher-dimensional vectors, and so there’s no generalization of the product rule. But we can multiply and divide real-valued functions of more than one variable. Finding the differential of such a product or quotient function is a nice little exercise in using Cauchy’s invariant rule. [...]
Pingback by | October 9, 2009 | Reply
2. [...] Differential Operators Because of the chain rule and Cauchy’s invariant rule, we know that we can transform differentials along with functions. For example, if we [...]
Pingback by | October 12, 2009 | Reply
3. [...] complicated than our first-order derivatives. In particular, they don’t obey anything like Cauchy’s invariant rule, meaning they don’t transform well when we compose functions. As an example, let’s go [...]
Pingback by | October 16, 2009 | Reply
4. [...] first term here is the second differential in terms of the . If there were an analogue of Cauchy’s invariant rule, this would be all there is to the formula. But we’ve got another term — one due to the [...]
Pingback by | October 19, 2009 | Reply
5. [...] with the tools from the last couple days, being careful about when we can and can’t trust Cauchy’s invariant rule, since the second differential can transform [...]
Pingback by | November 25, 2009 | Reply
http://physics.stackexchange.com/questions/30338/what-is-the-relation-between-physicists-functional-derivatives-and-frechet-der/30339
# What is the relation between (physicists) functional derivatives and Fréchet derivatives
I'm wondering how one can get to the definition of the functional derivative found in most Quantum Field Theory books:
$$\frac{\delta F[f(x)]}{\delta f(y) } = \lim_{\epsilon \rightarrow 0} \frac{F[f(x)+\epsilon \delta(x-y)]-F[f(x)]}{\epsilon}$$
from the definitions of functional derivatives used by mathematicians (I've seen many claims that it is, in effect, the Fréchet derivative, but no proofs). The Wikipedia article says it's just a matter of using the delta function as a "test function" but then goes on to say that it is nonsense.
Where does this $\delta(x-y)$ come from?
-
Waiting for an actual answer, but note that Dirac delta comes from the fact that $F[x]$ is the functional applied to the function $x$ and the functional derivative is with respect to the function $y$ – Nivalth Jun 18 '12 at 23:07
## 3 Answers
Whenever I have troubles with functional derivative things, I just do the replacement of a continuous variable $x$ into a discrete index $i$. If I'm not mistaken this is what they call a "DeWitt notation".
The hand-waving idea is that you can think of a functional $F[f(x)]$ as an "ordinary function" of many variables $F(f_{-N},\cdots,f_0,f_1,f_2,\cdots,f_N) = F(\vec{f})$ with $N$ going to "continuous infinity".
In that language your functional derivative turns into a partial derivative with respect to one of the variables: $$\frac{\delta F}{\delta f(x)} \to \frac{\partial F}{\partial f_i}$$ And the delta-function is just an ordinary Kronecker delta: $$\delta(x-y) \to \delta_{ij}$$
So, gathering this up, we get for your expression: $$\frac{\delta F}{\delta f(y)} = \lim_{\epsilon\to 0}\frac{F[f(x)+\epsilon\delta(x-y)]-F[f(x)]}{\epsilon} \to$$ $$\frac{\partial F}{\partial f_j} = \lim_{\epsilon\to 0}\frac{F[f_i+\epsilon\delta_{ij}]-F[f_i]}{\epsilon}$$ Which is, to my taste, a bit redundant. But true.
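As a hedged numerical illustration of this discretization picture: take the simple functional $F[f]=\int f(x)^2\,dx$, whose functional derivative is $2f(x)$, discretize it on a grid, and compare a finite-difference partial derivative against $2f(x_j)$. The functional, grid, and test point are arbitrary choices, not anything from the question:

```python
import numpy as np

# Discretized illustration with F[f] = integral of f(x)^2, whose functional
# derivative is 2*f(x).  On a grid, F(f_1,...,f_N) = sum_i f_i^2 * dx, the discrete
# delta function is a spike of height 1/dx, and the difference quotient below
# approximates 2*f(x_j).
N, L = 1000, 1.0
dx = L / N
xs = np.linspace(0.0, L, N, endpoint=False)
f = np.sin(2 * np.pi * xs)

F = lambda g: np.sum(g**2) * dx
j, eps = 300, 1e-6
bump = np.zeros(N)
bump[j] = 1.0 / dx                              # discrete delta at x_j
print((F(f + eps * bump) - F(f)) / eps)         # ~ 2*f(x_j), up to O(eps/dx)
print(2 * f[j])
```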
-
This is a formal notation for the following general thing:
$$F(f+\delta f) = F(f) + \int A(x) \delta f(x)$$
Where $\delta f$ is the infinitesimal change in f, and it is a smooth test function, and then on the right hand side, $A(x)$ is just a linear operator on the space of functions. The notation for the $A(x)$ is then
$$A(x) = {\delta F\over \delta f(x)}$$
Because if you formally substitute $\delta f(x) = \delta(x-y)$, you find $A(y)$ as the value of the integral. This is just a notational trick: $\delta f$ is an everywhere-small variation, which is impossible if it is infinite at one point. Another way of saying this is that the point-delta-function limit has to be taken after the small-epsilon limit in the definition you give, so that the variation becomes small before it becomes infinitely concentrated.
-
The physicist's derivative notation denotes the components of a Frechet derivative in the direction of the delta-function supported at $y$.
This is one of those places where the habit of denoting the function $f$ by its value $f(x)$ gets confusing. It's somewhat clearer if you write $\delta_y$ for the delta function at $y$, and
$$\frac{\delta F}{\delta (\delta_y)}[f] = \lim_{\epsilon \to 0} \frac{1}{\epsilon} ( F[f + \epsilon \delta_y] - F[f]).$$
Obviously, the delta function isn't actually a function. But this use of it makes exactly as much sense as the position basis (and the latter can be made perfectly rigorous using rigged Hilbert spaces).
-
That's exactly my problem with this definition (Dirac delta subtleties apart). It seems you are defining the partial derivative in the specific direction of $\delta_y$, but this very definition is used everywhere as the partial derivative in any direction (are they all the same?). Another way of expressing my problem: is the first equality below valid? Why? (I tried to follow your notation, $f_y$ means $f$ evaluated at the point $y$): $$\frac{\delta F}{\delta (f_y)}[f] = \frac{\delta F}{\delta (\delta_y)}[f] = \lim_{\epsilon \to 0} \frac{1}{\epsilon} ( F[f + \epsilon \delta_y] - F[f]).$$ – Forever_a_Newcomer Jun 19 '12 at 18:10
The definition works for any function. You define the derivative of $F$ at $f$ in the direction of $f+g$ for any function $g$ to be $\frac{\delta F}{\delta g} = \lim_{\epsilon \to 0} \frac{1}{\epsilon} (F[f+\epsilon g] - F[f])$. This is just like in multivariable calculus; it tells you how $F$ changes to first order if you move from $f$ towards $f+g$. The special degenerate case $g = \delta_y$ tells you how $F$ changes if you modify $f$ only by changing its value at $y$. This is the source of the crazy physics notation. – user1504 Jun 19 '12 at 22:49
Also, please don't use the notation $f_y$ for $f(y)$; this does not make the world a better place. $f(y)$ is the value of a function. $\delta_y$ (which seems to have inspired you) is a distribution which is non-zero only at $y$. If it helps, you can imagine that when physicists write $f(y)$ in the denominator of a functional derivative, they are actually taking the derivative along the "coordinate" $f \mapsto f(y)$. – user1504 Jun 19 '12 at 22:54
http://jdh.hamkins.org/about/
# About
My main research interest lies in mathematical and philosophical logic, particularly set theory, with a focus on the mathematics and philosophy of the infinite. I have worked particularly with forcing and large cardinals, those strong axioms of infinity, and have been particularly interested in the interaction of these two central set-theoretic concepts. I have worked in the theory of infinitary computability, introducing (with A. Lewis and J. Kidder) the theory of infinite time Turing machines, as well as in the theory of infinitary utilitarianism and, more recently, infinite chess. My work on the automorphism tower problem lies at the intersection of group theory and set theory. Recently, I am preoccupied with various mathematical and philosophical issues surrounding the set-theoretic multiverse, engaging with the emerging debate on pluralism in the philosophy of set theory, as well as the mathematical questions to which they lead, such as my work on the modal logic of forcing and set-theoretic geology. I was recently interviewed by Richard Marshall at 3:AM Magazine about my work.
My permanent position is Professor at The City University of New York, at the Graduate Center of CUNY and the College of Staten Island of CUNY. I have held academic faculty positions at various other universities and institutions around the world:
Appointments
• The City University of New York, since 1995
• College of Staten Island, CUNY
• Professor of Mathematics
• The Graduate Center of CUNY
• Professor of Mathematics, doctoral faculty
• Professor of Philosophy, doctoral faculty
• Professor of Computer Science, doctoral faculty
• Fields Institute, Toronto, Scientific Researcher, August 2012
• Isaac Newton Institute for Mathematical Sciences, Cambridge, U.K., Visiting Fellow, March–April, June, 2012.
• New York University, Visiting Professor of Philosophy, July-December, 2011.
• University of Vienna, Kurt Gödel Research Center, Guest Professor, June, 2009.
• Universiteit van Amsterdam, Institute for Logic, Language & Computation
• NWO Bezoekersbeurs Visiting Researcher, June–August 2005, June 2006.
• Visiting Professor, April–August 2007.
• Universität Münster, Institut für mathematische Logik, Germany, Mercator-Gastprofessor, DFG, May–August 2004.
• Georgia State University, Associate Professor of Mathematics and Statistics, 2002–2003.
• Carnegie Mellon University, Visiting Associate Professor of Mathematics, 2000–2001.
• Kobe University, Japan, JSPS Research Fellow, Jan–Dec 1998.
• Univ. California at Berkeley, Visiting Assistant Professor of Mathematics, 1994–1995.
Education
• Ph.D. in mathematics, 1994, University of California at Berkeley
• C.Phil., 1991, University of California at Berkeley
• B.S. in mathematics (with honor), 1988, California Institute of Technology
## 4 thoughts on “About”
1. Ellena Caudwell on November 5, 2011 at 1:08 pm said:
Dear Prof Dr Joel David Hamkins,
My name is Ellena Caudwell and I am a 3rd year Undergraduate Mathematics student from the University of Exeter, UK.
I am currently involved in a group project based around D.E.Knuth’s book ‘Surreal Numbers’, and we were hoping to find examples of where this book has been used as a tool for teaching mathematics. I found an old page on the University of Amsterdam’s website explaining a course you ran in the 2nd Semester 2004/05: Surreal Numbers (http://www.illc.uva.nl/MScLogic/courses/Projects-0405-IIc/Hamkins.html) and it appears this is exactly what we were looking for.
I understand this was a long time ago, and you must be very busy, but we would be very grateful if you could answer a couple of questions to help us in our work.
Firstly, what level were the students who took part in the course, and in what way were they assessed?
Second, D.E. Knuth states in the postscript of his book that he wasn’t really trying to teach the theory of surreal numbers, but to ‘provide some material that would help to overcome… the lack of training for research work’. Therefore it is questionable how well this book helps teach surreal numbers.
1. Do you feel this course showed this book can be used successfully to teach surreal numbers? Was this your aim?
2. Do you feel it effectively shows the process of exploration and discovery of mathematical proof?
Any other thoughts or ideas on the topic would be most useful.
Thank you for your time and I look forward to hearing from you.
Ellena Caudwell
• Joel David Hamkins on November 6, 2011 at 11:08 am said:
The course was a lot of fun for all of us, and I count it as a success. I would definitely be interested in running such a course again, and I think the book works very well for this kind of course. Shorter than a regular semester course, the course was filled mostly with masters degree students, with a few PhD students, but there would be no problem running a similar course for much longer. It was a small class, with about 8 students, which I think was relevant for its success. Following the idea of Knuth that you mention, we used the book not only to learn about the surreals but also to illustrate the practice of mathematical research. In particular, the students themselves presented much of the material on a rotating schedule, filling out the ideas of the book. Thus, the practice was that they would read the next part of the book, master the material, including whatever proofs needed filling in (which is the nature of the book), and make a presentation on that topic, at which we all would ask further questions and figure things out together. In addition, as is my general practice, the students wrote term papers, on a topic chosen in discussion with me. Assessment was based on the presentations and the paper.
To answer your specific questions: (1) The book can definitely be used successfully to teach surreal numbers, and yes, this was a major part of my aim. But this book requires more work from the students than a regular textbook, laying out all the material, since it is more open-ended. (2) As a result, yes, I do feel that the book and the way we used it in that course gave a good introduction to the process of mathematical research, particularly at the masters degree level. I think it could also work well with advanced undergraduates, if they were very motivated.
2. Ali Bleybel on October 1, 2012 at 10:59 am said:
Dear professor Hamkins,
I am looking for a reviewer to my paper in progress “$\mathcal{L}_{\omega_1\omega}$ -transfer principle in algebraic geometry”. Are you or someone you know interested in reviewing my work?
Sincerely yours,
Ali Bleybel
• Joel David Hamkins on October 1, 2012 at 2:16 pm said:
Sure, I’d be happy to take a look at it, but not sure whether I might find anything useful to say. Kindly send it to me at jhamkins@gc.cuny.edu.
http://mathematica.stackexchange.com/questions/tagged/polynomials+algebraic-manipulation
Tagged Questions
0answers
57 views
Apart may use Padé method: what's that?
How does Apart work? The page tutorial/SomeNotesOnInternalImplementation#7441 says, "Apart ...
7answers
215 views
Defining a function that completes the square given a quadratic polynomial expression
How can I write a function that would complete the square in a quadratic polynomial expression such that, for example, CompleteTheSquare[5 x^2 + 27 x - 5, x] ...
1answer
92 views
Is there any way to force Mathematica to collect a symbol in a polynomial?
Let's say that I have a polynomial like this: a + b + c Is there any way that I can get Mathematica to transform it to: ...
4answers
511 views
How to get exact roots of this polynomial?
The equation $$64x^7 -112x^5 -8x^4 +56x^3 +8x^2 -7x - 1 = 0$$ has seven solutions $x = 1$, $x = -\dfrac{1}{2}$ and $x = \cos \dfrac{2n\pi}{11}$, where $n$ runs from $1$ to $5$. With ...
3answers
211 views
Is it possible to use Composition for polynomial composition?
I want to do this: $P = (x^3+x)$ $Q = (x^2+1)$ $P \circ Q = P \circ (x^2+1) = (x^2+1)^3+(x^2+1) = x^6+3x^4+4x^2+2$ I used Composition for testing if that could ...
4answers
299 views
“Evaluating” polynomials of functions (Symbols)
I want to implement the following type evaluation symbolically $$(f^2g + fg + g)(x) \to f(x)^2 g(x) + f(x) g(x) + g(x)$$ In general, on left hand side there is a polynomial in an arbitrary number of ...
3answers
460 views
What function can I use to evaluate $(x+y)^2$ to $x^2 + 2xy + y^2$?
What function can I use to evaluate $(x+y)^2$ to $x^2 + 2xy + y^2$? I want to evaluate It and I've tried to use the most obvious way: simply typing and evaluating $(x+y)^2$, But it gives me only ...
3answers
2k views
Factoring polynomials to factors involving complex coefficients
I've run into some problems using Factor on polynomials with complex coefficient factors. Reading the documentation it looks like it only factors over the ...
2answers
277 views
expanding a polynomial and collecting coefficients
I'm trying to expand the following polynomial ...
6answers
2k views
Finding real roots of negative numbers (for example, $\sqrt[3]{-8}$)
Say I want to quickly calculate $\sqrt[3]{-8}$, to which the most obvious solution is $-2$. When I input $\sqrt[3]{-8}$ or Power[-8, 3^-1], Mathematica gives the ...
6answers
761 views
How do I replace a variable in a polynomial?
How do I substitue z^2->x in the following polynomial z^4+z^2+4? z^4+z^2+4 /. z^2->x ...
4answers
407 views
Is there a way to Collect[] for more than one symbol?
Oftentimes you find yourself looking for polynomials in multiple variables. Consider the following expression: a(x - y)^3 + b(x - y) + c(x - y) + d as you can ...
http://physics.stackexchange.com/questions/8709/does-mass-affect-speed-of-orbit-at-a-certain-distance?answertab=oldest
Does mass affect speed of orbit at a certain distance?
Does the mass of both the parent object, and the child object affect the speed at which the child object orbits the parent object?
I thought it didn't (something like T^2 = R^3) until I saw a planet on the iPhone exoplanet app that is closer to its star than a planet in another system, yet takes longer to complete one orbit. Both of the planets were a similar mass, as were the stars.
-
4 Answers
In the limit where $m_2 \ll m_1$, only the mass of the heavy body matters (along with the semi-major axis of the orbit, of course).
Where that limit does not apply, varying the mass of either body changes the reduced mass:
$$\mu = \frac{m_1 m_2}{m_1 + m_2} .$$
Since the system acts as if a negligibly massive object was moving in the field of one having the reduced mass, this does alter the period.
Notice that in the limit above $\mu \to m_2$ and we recover the expected behavior.
Marion and Thornton give the full expression for the period $\tau$ in the form
$$\tau^2 = \frac{4 \pi^2}{G} \frac{a^3}{m_1 + m_2}$$
where $a$ is the length of the semi-major axis of the orbit and $G$ is the gravitational constant. It should be obvious that in the limit of a heavy primary this reduces to $\tau^2 = \frac{4 \pi^2}{G} \frac{a^3}{m_1}$.
Side comment: The rule you recall is the one Kepler found for planets in our Solar System. In this case the mass of the sun dominates in every case. Jupiter is about 0.001 solar masses, so the largest correction is at the tenth-of-a-percent level. Observable, but not at all large.
-
1
I think you've got some things mixed up here. It is not true that "the system acts as if a negligibly massive object was moving in the field of one having the reduced mass." In the case $m_2\ll m_1$, the reduced mass $\mu\approx m_2$, so this would say that the system acts like one in which the gravitational field was that of the planet, not the star. Reduced mass isn't really important here. As the Marion-Thornton formula makes clear, all that matters is the total mass $m_1+m_2$. The small-planet approximation arises because $m_1+m_2\approx m_1$, not because $\mu\approx m_2$. – Ted Bunn Apr 15 '11 at 21:39
As @dmckee's answer says, in the limit where the mass of the planet is much less than the mass of the star, the mass of the planet does not have a significant effect on the period. I just want to add a more explicit comment on this part of your question:
I saw an planet on the iphone exoplanet app, that is closer to it's star than a planet in another system, yet takes longer to complete one orbit.
The reason for this is almost certainly not the masses of the planets but rather the masses of the two stars. The systems you're looking at in the app almost certainly satisfy the rule $m_{\rm planet}\ll m_{\rm star}$, so that the planet's mass is unimportant. You say that the masses of the stars are "similar," but I bet they're different enough that that's the explanation for what you're seeing.
One way of writing Kepler's third law, as it applies to planets orbiting other stars, is $$T^2={R^3\over M},$$ which is valid only in a certain choice of units: periods in years, radii in astronomical units, masses in solar masses. dmckee gives the more general formula. This version corresponds to a choice of units that makes the combination of constants $4\pi^2/G$ come out to 1.
Does the app give you specific information about the numerical values of the various quantities? If so, you could check this. If not, are you sure about your statement that the masses are "similar"?
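A tiny numeric illustration of that formula (a sketch; the masses and radii below are made-up values):

```python
# Kepler's third law in the units above (years, AU, solar masses): T^2 = R^3 / M.
def period_years(R_au, M_solar):
    return (R_au**3 / M_solar) ** 0.5

print(period_years(1.0, 1.0))   # 1.0 year: Earth around the Sun
print(period_years(0.9, 0.5))   # ~1.21 years: closer in than 1 AU, but a lighter star,
                                # so a longer period -- the effect described above
```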
-
You can always think of it like this: Start with one planet of mass m orbiting the star at a certain speed. Now add a 2nd planet of mass m in the same orbit. Same speed, right? Now let them be touching each other in the orbit. Same speed, right? Now spot weld them together. You've got a single planet of mass 2m. Same speed.
-
Crudely, if you make the force of gravity equal to the centripetal force (which it is for steady, circular orbits), $\frac{MmG}{r^2}=\frac{mv^2}{r}$, and so $\frac{MG}{r}=v^2=\frac{4 \pi^2r^2}{T^2}$, which contains Kepler's 3rd law.
-
http://math.stackexchange.com/questions/280079/sum-of-two-squares-modulo
# Sum of two squares modulo
In the following all variables are non-negative integers.
It is well known (Fermat) that a number $a$ is a sum of two squares $a = x^2 + y^2$ if and only if every prime factor of $a$ of the form $p = 4k+3$ occurs to an even power. Example: $$7^2 \times 5 = 245 = 7^2 + 14^2 \ .$$
I want to count how many solutions there are to the equation $$x^2 + y^2 \equiv 23 \pmod {93}$$ (it is enough to take $x$ and $y$ from the set $A = \{ 0,1,2,...,92 \}$).
In this case Fermat's Theorem does not hold as it is. For example, we have $23=4 \times 5 + 3$ but $23 + 93 = 116 = 10^2 + 4^2$ is indeed a sum of two squares modulo 93. This also shows that this equation has at least one solution.
There are several strategies to this question:
1. Check all the possible values of $(x,y) \in A \times A$ explicitly.
2. Check all the possible values of $\{ b = 23 + 93k \mid k \in \mathbb{Z} \}$ such that $b$ does not exceed $2 \times 93^2$, and see if they are decomposable into sums of two squares.
Both options require a computer program to do all the calculations in a reasonable time. I want a way based on theory with calculations which can be done by hand on the blackboard.
-
First minor observation: you can skip the case where k is a multiple of 4 (for strategy 2). – Mike Jan 16 at 15:30
## 1 Answer
First, you can factor $93$ and solve $x^2+y^2=2 \pmod 3$ and $a^2+b^2=23 \pmod {31}$, then combine the results using the Chinese Remainder Theorem. The first is easy: we must have $x,y=\pm 1 \pmod 3$. For the second, you only have to try $0$ through $15$, as $a^2=(-a)^2$ and the first few you know. The squares $\pmod {31}$ are $0,1,4,9,16,25,5,18,2,19,7,28,20,14,10,8$. Now look through this list for pairs that sum to $23$ or $54$. It is not too hard to see that the solutions are $4+19, 16+7, 5+18, 9+14$. So we can have $a=\pm 2, b=\pm 9 \pmod {31}$, giving four solutions that have to be combined with being $\pm 1 \pmod 3$ by CRT. $a=2, b=9$ gives $(2,40), (64,40), (2,71), (64,71)$, which you can probably do by inspection: just add $31$'s until you don't have a multiple of $3$. The three other choices of sign will each give four more. The three other choices for sums will each give sixteen more answers, so there are $64$ in all. I think this is within the range of blackboard work.
-
Thank you very much. – LinAlgMan Jan 16 at 16:20
1
I think you forgot $14+9=23$ and that explains why there are more solutions. In general, $$|(x,y)| \times |(a,b)| = \mbox{total number of solutions}$$ and one doesn't actually need to combine all the solutions to check for duplicates, since the CRT ensures a unique solution for each pair. – LinAlgMan Jan 16 at 16:54
@LinAlgMan: you are right. I got confused and thought I was working $\pmod 23$ at that point and could stop at $11^2$, but really needed to go up to $15^2$. I have fixed. That accounts for $64$ as there are four pairs and four choices of sign, so $2^4 \cdot 4=64$. – Ross Millikan Jan 16 at 16:59
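Since strategy 1 in the question is an explicit computer check anyway, here is a brute-force sketch; note that it counts ordered pairs $(x,y)$, so each of the four sums above gets used in both orders and the count comes out at twice the $64$ obtained when one fixes which square supplies $a$:

```python
# Brute-force count of x^2 + y^2 = 23 (mod 93) over A x A, A = {0, ..., 92}.
sols = [(x, y) for x in range(93) for y in range(93) if (x*x + y*y) % 93 == 23]
print(len(sols))                                  # 128 ordered pairs = 4 (mod 3) * 32 (mod 31)
print(len([(x, y) for x, y in sols if x <= y]))   # 64 if (x, y) and (y, x) are identified
```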
http://mathhelpforum.com/geometry/159021-perpendicular-bisector-print.html
# Perpendicular bisector
• October 10th 2010, 04:47 AM
Greener
Perpendicular bisector
The main help i need is with this question:
1)****The perpendicular bisector of a straight line joining the points (3,2) and (5,6) meets the x axis at A and the y at B. Prove that the distance AB is equal to 6root5.
I just need to have these checked to make sure they're right; I would love it if you could just skim them and check :D
1)What is the gradient of the line perpendicular to y=3x-5. hence find the equation of the line which goes through (3,1) and is perpendicular to y=3x-5
a) to find the gradient I did, m2=-1/m1. m1=3 so it was -1/3 = -1/3
b) We know the coordinates (3,1) which is the y and X, so 1=(-1/3*3)+c
1=-1+c, so
2=C all together the answer is y=-1/3x+2
2)Find the equation of line B which is perpendicular to line A and goes though the mid-point.
I am given the coordinates for line A, (2,6) and (4,8). Using that i can find both the midpoint and the gradient of line A = midpoint = (3,7) Gradient = 1
So M2 = -1/M1, so -1/1 = -1
we know from midpoint, y=7 and x=3 so
7=(-1*3)+c
7+3=c
10=c so the equation is Y=-x+10
• October 10th 2010, 05:32 AM
Plato
Quote:
Originally Posted by Greener
The main help i need is with this question:
1)****The perpendicular bisector of a straight line joining the points (3,2) and (5,6) meets the x axis at A and the y at B. Prove that the distance AB is equal to 6root5.
There is a lot wrong with that.
Start with the midpoint between $(3,2)~\&~(5,6)$, which is $(4,4)$.
• October 11th 2010, 04:29 PM
bjhopper
perpendicular bisector
Hi greener,
Did Plato's hint help you solve this problem? My guess is that you may still be struggling with it. If so, here are a few suggestions. Plot the two given points and connect them. Draw a slope diagram, and from that calculate the slope of the line between the points.
What, then, is the slope of the perpendicular bisector? It passes through the midpoint $(4,4)$. Use the point-slope formula to find the equation. I hope you can handle the rest.
bjh
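For completeness, a small numeric sketch of the original claim (the bisector has midpoint $(4,4)$ and slope $-\frac12$, hence equation $y=-\frac{x}{2}+6$, with intercepts $A=(12,0)$ and $B=(0,6)$):

```python
import numpy as np

# The perpendicular bisector y = -x/2 + 6 meets the axes at A = (12, 0) and B = (0, 6).
A, B = np.array([12.0, 0.0]), np.array([0.0, 6.0])
print(np.linalg.norm(A - B), 6 * np.sqrt(5))   # both are 6*sqrt(5), about 13.416
```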
http://mathoverflow.net/questions/63898/isolated-conics-on-a-del-pezzo-surface/63901
## Isolated conics on a del Pezzo surface
Is there anything known about isolated conics in a del Pezzo surface: their number, arrangement, and the corresponding elements of the class group of surface's minimal desingularization? (Isolated means not belonging to a continuous family of conics in the surface.)
A description similar to the one for isolated lines would be of most interest: "A del Pezzo surface has only finitely many lines. They correspond to curves E such that E^2 = E·K = −1 (so-called -1-curves) on the desingularization."
More specifically, the question is about del Pezzo surfaces of degrees 5 and 6. References not requiring much background in algebraic geometry are greatly appreciated.
And if we have a surface in C^3, whose linear normalization is a degree 5 or 6 Del Pezzo surface, can we say anything about isolated conics in this situation?
[Edit2] I have found the following related result in the literature:
"Any surface is a projection from its linear normalization. The projection is birational, and it preserves the degree of the surface and the degree of any curve not contained in the singular locus."
Notice that the conics contained in the singular locus are also interesting for me.
Additional question about surfaces in C^3 still unanswered.
-
## 2 Answers
While the number of lines on a del Pezzo surface is finite, the number of conics is infinite. More precisely, there are finitely many families $X\to P^1$ whose fibers are plane conics. Let me explain this in more detail.
As you probably know, a degree $d$ Del Pezzo surface $X$ can be realized as the blow-up of $P^2$ in $r=9-d$ points in general position. The Picard group of $X$ has rank $r+1$ and is generated by the classes of the exceptional divisors $E_1,\ldots, E_r$ and $L$ which is the pullback of a general line in $P^2$ via the blow-up morphism $\pi:X\to P^2$. The intersection form on $N^1(X)=\mbox{Pic }X$ is given by $$E_i\cdot E_j=-\delta_{ij}, \qquad E_i\cdot L=1, \qquad L^2=1.$$Also, the anticanonical class equals $-K=3L-E_1-\ldots-E_r$ in this basis.
If $X$ has degree $\ge 4$, then $-K$ is very ample, and the conics on $X$ correspond precisely to the effective divisor classes such that $$-K.D=2 \mbox{ and } D^2=0$$Examples are $L-E_i$ (pullback of a line through the point $p_i$) and $2L-E_1-E_2-E_3-E_4$ (pullback of a conic avoiding $p_5$). Using the AM-GM inequality, one can show that the number of such classes is finite.
In fact it is easy to see that any conic can be written as the sum of two exceptional curves (which form the generators for the effective cone $\overline{NE}(X)$). So $D=E+F$ for some $E,F$ with $E.F=1$. Moreover, using this description, it is not hard to verify that the conic divisors $D$ are even base-point free and so by Riemann-Roch, define morphisms $X\to \mathbb{P}^1$. These morphisms are conic bundles, i.e., every fiber is isomorphic to a plane conic in $X$.
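A hedged brute-force illustration of this count in the degree-5 case ($r=4$): enumerating small classes $D=dL-\sum m_iE_i$ with $D^2=0$ and $-K\cdot D=2$, restricted to the nonnegative range that contains the effective classes described above, recovers exactly five conic classes (for degree 6, i.e. $r=3$, the same search finds only the three classes $L-E_i$):

```python
from itertools import product

# Hypothetical enumeration for a degree-5 del Pezzo surface (r = 4 blown-up points):
# look for classes D = d*L - sum_i m_i*E_i with D^2 = 0 and -K.D = 2, i.e.
# d^2 - sum m_i^2 = 0 and 3*d - sum m_i = 2, restricting to the nonnegative
# range 0 <= d, m_i <= 3 that contains the effective classes described above.
r = 4
conics = [(d, m)
          for d in range(4)
          for m in product(range(4), repeat=r)
          if d * d == sum(mi * mi for mi in m) and 3 * d - sum(m) == 2]
print(len(conics))   # 5: the four classes L - E_i and the class 2L - E_1 - E_2 - E_3 - E_4
for d, m in conics:
    print(d, m)
```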
On the other hand, the lines on $X$ correspond to classes satisfying $-K.E=1 \mbox{ and } E^2=-1$ so they don't 'move' in linear systems like the conics do, which explains why their number is finite.
EDIT: There can not be any isolated conics on $X$, since if $D$ is any isolated rational curve, then $D^2<0$ and the adjunction formula implies that $D^2=-1$, so $D$ is an exceptional curve, i.e., a line.
-
1
Thank you very much for your detailed answer. Can there be any ISOLATED conics on the surface in addition to families of conics you mention? Does equation −K.D=2 and D^2=0 hold for effective divisor classes of ALL conics, or only of those belonging to continuous families? Sorry if asking something obvious. – mikhail skopenkov May 4 2011 at 10:45
1
There can be no isolated conics on the Del Pezzo surface. I added an explanation in the answer above. – J.C. Ottem May 4 2011 at 10:50
1
Well, if you embed X by -2K instead of -K, then isolated lines will embed as isolated conics. Measuring the degree of your curves is relative to the embedding. – mdeland May 4 2011 at 12:58
1
If you embed X by -2K, and then project down to P^3 - you can see isolated conics on the (singular) image, right? Do you have a bound on the degree of your surface in 3-space? – mdeland May 8 2011 at 22:42
1
again, if your surface has N "-1 curves", and then you embed by -2K, project to P^3, then the result will have N isolated conics. Since they came from -1 curves, you would have already computed their divisor classes, etc. What is it that you want to know? – mdeland May 9 2011 at 18:56
If by a del Pezzo surface you mean what is written here: http://en.wikipedia.org/wiki/Del_Pezzo_surface, i.e. a surface such that $-K$ is ample, then there are no isolated conics on such surfaces at all. Indeed a smooth rational curve $C$ is isolated on a surface iff $C^2<0$. On the other hand, by the adjunction formula we have $(K+C)C=-2$, i.e., $-KC= 2+C^2$. Hence $-KC$ is positive on a rational curve $C$ only if $C^2\ge -1$. But if $C^2=-1$, then $C$ is a line, and if $C^2\ge 0$ it is not isolated.
Sometimes by a del Pezzo surface people mean a rational surface with $-K$ semi-ample and with $K^2>0$ (thanks to Artie for making this precise), as in the following article http://www.staff.science.uu.nl/~looij101/coble6.pdf . The more standard terminology for such surfaces is weak del Pezzo surfaces. They indeed have exceptional curves $C$ with $C^2=-2$. The number of such curves is finite too. This is described, for example, in Dolgachev's book Topics in Classical Algebraic Geometry, beginning of Chapter 8, http://www.math.lsa.umich.edu/~idolga/topics.pdf
-
1
Just one comment, by a conic , in algebraic geometry people often mean a rational curve, also, often one speaks of "conic bundles". – Dmitri May 4 2011 at 11:04
Aah... And your answer covers this as well! – mikhail skopenkov May 4 2011 at 11:40
2
Hi Dmitri, a very minor nitpick: the first sentence of your second paragraph "Sometimes by del-Pezzo surface people mean rational surface with −K semi-ample, as..." is a bit misleading. If you look at the link, they require the extra condition that (-K)^2 is between 1 and 9, in particular not 0. That rules out things like rational elliptic surfaces, which certainly should not count as del Pezzo. (Of course you know all this stuff, but maybe a casual reader could get the wrong idea by skimming your answer without looking at the link.) – Artie Prendergast-Smith May 11 2011 at 8:17
http://mathhelpforum.com/calculus/29577-concavity-points-inflection.html
# Thread:
1. ## Concavity / Points of Inflection
I am stuck on this problem, and I am thinking that I am not getting the derivatives (1st and 2nd) correct. Here is the problem:
F(X)= 2X(X+4)^3
I know I have to get the first and second derivatives, but don't I need to cube the inside function first and then distribute the 2x to it? Or does the power rule come into effect here? I have tried cubing the inside terms and then distributing the 2x, but it doesn't seem to give me the correct zeros of the graph, so if someone could illustrate. Please Help, Thanks.
2. Originally Posted by kdogg121
I am stuck on this problem, and I am thinking that I am not getting the derivatives (1st and 2nd) correct. Here is the problem:
F(X)= 2X(X+4)^3
I know I have to get the first and second derivatives, but don't I need to cube the inside function first and then distribute the 2x to it? Or does the power rule come into effect here? I have tried cubing the inside terms and then distributing the 2x, but it doesn't seem to give me the correct zeros of the graph, so if someone could illustrate. Please Help, Thanks.
Let's go through the derivatives together...
$f(x)=2x(x+4)^3$
Apply the product rule
$f'(x)=2(x+4)^3+6x(x+4)^2$
Let's do it again!
$f''(x)=6(x+4)^2 + 6(x+4)^2+12x(x+4) \Rightarrow 12(x+4)[(x+4)+x] = (12x+48)(2x+4)$
3. I'm sorry for the first derivative how are you getting the 6X? Is this the power rule being used within the product rule?
How would I determine where the function is increasing and decreasing, where it's concave up and down, and the inflection points? I know you have to know the zeros of the derivative(s).
4. Originally Posted by kdogg121
I'm sorry for the first derivative how are you getting the 6X? Is this the power rule being used within the product rule?
The derivative of (x + 4)^3 is 3(x + 4)^2.
Personally, since you need to solve f''(x) = 0 (and, of course, test the resulting solutions), I'd expand f(x) first and then differentiate (twice) term-by-term. I think that's the easier road to hoe.
5. So are you saying it would be easier to use the product rule? I still don't see where the 6X(X+4)^2 comes from, but how would I determine my zero's from here which will result in the x points on the graph?
My book says the function is increasing for x > -1, decreasing for x < -1, concave upward for x < -4 and x > -2, concave downward for -4 < x < -2, with a minimum at x = -1 and inflection points at (-4, 0) and (-2, -32).
Could someone show how I find these?
6. Originally Posted by kdogg121
So are you saying it would be easier to use the product rule? I still don't see where the 6X(X+4)^2 comes from, but how would I determine my zero's from here which will result in the x points on the graph?
I've just re-read my reply and it's pretty plain what I said: "Personally ... I'd expand f(x) first ... I think that's the easier road to hoe."
I cannot see how you could possibly think I said it would be easier to use the product rule.
On the topic of the product rule, do you understand how it works in this question? In particular, do you understand that the second term is 2x times 3(x + 4)^2? Is it becoming clearer now where the 6X(X+4)^2 comes from?
Edit: Your potential x-coordinates for the stationary points will come form solving a quadratic equation. I assume solving a quadratic is money for jam for you ....
7. Originally Posted by kdogg121
[snip]
My book says the function is increasing for x> -1 decreasing for x< -1
Mr F says: Increasing when f'(x) > 0. Decreasing when f'(x) < 0. Your derivative function (which is a cubic) factorises into two simple linear factors (one of which is repeated) ......
concave upward for x<-4 and x> -2 concave downward at -4< x < -2
Mr F says: Concave up when f''(x) > 0. Concave down when f''(x) < 0. In each case you'll be solving a quadratic inequality. Note: The quadratic has two simple linear factors.
minimum at (-1)
Mr F says: Stationary points are found by solving f'(x) = 0. Note: The derivative (which is a cubic) factorises into two simple linear factors (one of which is repeated). Nature is found using either the sign test or the double derivative test.
and inflection at (-4,0) and (-2,-32)
Mr F says: Potential inflection points are found by solving f''(x) = 0. You'll be solving a quadratic equation. Note: The quadratic has two simple linear factors. The nature of these solutions must then be tested. Inflection points correspond to turning points of the derivative .....
Note: One of the inflection points is a stationary point of inflection. The other one is non-stationary.
Could someone show how I find these? Mr F says: Done. Please post if there are details you still can't manage.
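To tie the thread together, here is a minimal SymPy sketch (assuming SymPy is available) that reproduces the factored derivatives and the points quoted from the book:

```python
import sympy as sp

x = sp.symbols('x')
f = 2*x*(x + 4)**3

f1 = sp.factor(sp.diff(f, x))       # 8*(x + 1)*(x + 4)**2
f2 = sp.factor(sp.diff(f, x, 2))    # 24*(x + 2)*(x + 4)

critical = sp.solve(f1, x)                                  # x = -4 and x = -1 (the minimum is at x = -1)
inflection = [(c, f.subs(x, c)) for c in sp.solve(f2, x)]   # inflection points (-4, 0) and (-2, -32)
print(f1, f2, critical, inflection)
```

Since f'(x) = 8(x+1)(x+4)^2 and (x+4)^2 is never negative, the sign of f' is the sign of x + 1, which gives the increasing/decreasing intervals quoted from the book; likewise f''(x) = 24(x+2)(x+4) gives the concavity intervals and the inflection points (-4, 0) and (-2, -32).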
http://math.stackexchange.com/questions/182428/proving-that-22n-5-is-always-composite-by-working-modulo-3?answertab=votes
# Proving that $2^{2^n} + 5$ is always composite by working modulo 3
By working modulo 3, prove that $2^{2^n} + 5$ is always composite for every positive integer n.
No need for a formal proof by induction, just the basic idea will be great.
-
## 5 Answers
Obviously $2^2 \equiv 1 \pmod 3$.
If you take the above congruence to the power of $k$ you get $$(2^2)^k=2^{2k} \equiv 1^k=1 \pmod 3$$ which means that $2$ raised to any even power is congruent to $1$ modulo $3$.
What can you say about $2^{2k}+5$ then modulo 3?
It is good to keep in mind that you can take powers of congruences, multiply them and add them together.
If you have finished the above, you have shown that $3\mid 2^{2k}+5$. Does this imply that $2^{2k}+5$ is composite?
-
Then $2^{2k} + 5 \equiv 0 \pmod 3$, so $3\mid 2^{2k}+5$ and thus $2^{2k}+5$ can't be prime and is composite. Correct? – MinaHany Aug 14 '12 at 13:43
What you wrote is correct. Maybe you should also mention that $2^{2k}+5>3$. (If you know that $3<s$ and $3\mid s$ then $s$ is composite. If you omit the condition that $3<s$, then this is not true, since you can take $s=3$, which is prime.) – Martin Sleziak Aug 14 '12 at 13:46
Got it! Thank you sir. – MinaHany Aug 14 '12 at 13:49
Thanks @RossMillikan, I've corrected that. – Martin Sleziak Aug 14 '12 at 14:24
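As a quick numerical sanity check of the argument above, a few lines of Python confirm that $3$ divides $2^{2^n}+5$ (and that the number exceeds $3$) for small $n$:

```python
# 2**(2**n) + 5 is divisible by 3 and greater than 3, hence composite.
for n in range(1, 9):
    value = 2**(2**n) + 5
    assert value % 3 == 0 and value > 3
    print(n, "-> divisible by 3, composite")
```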
The basic idea is, work modulo 3. What happens, modulo 3, when you raise 2 to an even power?
-
I have no idea actually, I'm pretty new to mod arithmetic – MinaHany Aug 14 '12 at 13:09
You're not supposed to have an idea, you're supposed to actually do it! Raise 2 to an even power! See what happens modulo 3! Raise 2 to another even power! See what happens modulo 3! Repeat, until you do have an idea! Then try to prove it! – Gerry Myerson Aug 14 '12 at 13:16
Hint: Rewrite the base using the fact that $2\equiv -1 \bmod 3$.
-
+1 This is even shorter and simpler. – DonAntonio Aug 14 '12 at 18:05
In order to work out this problem, we start by noticing $2^2\equiv 1 ~~~(\text{mod } 3)$.
What this means is $2^2$ (which is $4$) is $1$ greater than some multiple of $3$ (that multiple, in this case is obviously, $3$)
So if $2^2\equiv 1 ~~~(\text{mod } 3)~~\implies (2^2)^n\equiv (1)^n ~~~(\text{mod } 3)\implies 2^{2n}\equiv 1~~(\text{mod } 3)$,
Again, this means that any even power of $2$ is $1$ greater than some multiple of $3$.
So if $2^{2n}$ is $1$ greater than some multiple of $3$ (another way to say this is that $2^{2n}$ leaves a remainder of $1$ when divided by $3$), then what can you say about $2^{2n}+5$?.
To answer this, forget about $2^{2n}$ and just think about the remainder (i.e. $1$). This is how modular arithmetic makes our lives a lot easier. If $2^{2n}$ is $1$ greater than some multiple of $3$, then $2^{2n}+5$ should be $1+5=6$ greater than that multiple of $3$, right? But you know that $6$ is, by itself, a multiple of $3$. So, this should mean that $3|2^{2n}+5$.
A more formal (and a neat) argument could be:
$3|2^{2n}-1\implies 3|2^{2n}+5$, since $2^{2n}+5=(2^{2n}-1+6)$
Now since it has been shown that $2^{2n}+5$ is in fact a multiple of $3$, it should be apparent that $2^{2n}+5$ is, after all, composite.
I hope it helps!
-
Weird explanation and weird disclaimer: what "technical jargon" did Gerry's answer use? Idea, work, raise...? The only technicality there is modulo 3, which is both (1) boringly elementary, and (2) even the OP uses it...and of course, so do you. So what technical jargon did you avoid using that the other two answers did? – DonAntonio Aug 14 '12 at 18:04
First of all, please do note that I am not comparing my answers with others. Now that I read my answer, I realize that I badly phrased my little disclaimer :) What I mean to say is I have included parts like "...this means that any even power of 2 is 1 greater than some multiple of 3..." in my answer which are obvious. So I guess I need to edit/ delete the disclaimer, eh? – Bidit Acharya Aug 14 '12 at 18:11
I didn't say anything about you "comparing", but about you implying that your answer won't be "technical" as the other ones, whereas it actually was. That's all. Nothing to be too touchy about. – DonAntonio Aug 14 '12 at 18:35
$2^{2n} +5$ is the same as $4^{n} +5 =(3+1)^n +5$.
If you expand the $(3+1)^n$ part, every term is divisible by $3$ except the last term, $1$. The entire expression is therefore of the form $(3m +1) +5 = 3m +6$, which is divisible by $3$.
-
Heavily recommended: to take a peek at the LaTeX section to properly write mathematics in this forum – DonAntonio Aug 14 '12 at 18:38
To expand a little on DonAntonio's comment: For some basic information about writing math at this site see e.g. here, here and here. – Martin Sleziak Aug 15 '12 at 5:31
http://mathoverflow.net/questions/35600/what-are-your-favorite-puzzles-toys-for-introducing-new-mathematical-concepts-to/35638
## What are your favorite puzzles/toys for introducing new mathematical concepts to students?
We all know that the Rubik's Cube provides a nice concrete introduction to group theory. I'm wondering what other similar gadgets are out there that you've found useful for introducing new math to undergraduates and/or advanced high school students.
-
community wiki? – Kaveh Aug 15 2010 at 1:21
Sounds good. The question is now CW. – Neal Harris Aug 15 2010 at 18:05
Zometool is an amazing toy, though I'd like to leave it to a mathematician who knows more about zometool to post (there are some interesting connections with higher-dimensional polytopes). – David Corwin Aug 15 2010 at 23:07
## 13 Answers
I made these ropes with rare earth magnets in the ends for demonstrating knots. The materials (rope, magnets, PVC pipe and glue) are inexpensive. I've used Tangle before to play with knots, but it doesn't tend to move over itself very easily and it can be hard to see at a distance which strand is on top at a crossing.
-
A few months back we taught a course on curves and surfaces to undergraduates and asked them to slice a bagel into two linked halves as in here. Of course, you need at least two bagels per student since inevitably most of them end up cutting the first bagel into two unlinked pieces.
The 15-tile sliding puzzle (may be a bit outdated by now) is also a good way to introduce permutation groups and even permutations in particular.
And lastly, the game of Sim (not to be confused with sim city) where two players take turns in drawing edges in red and blue on set of 6 vertices. The rule is that if an edge already exists between a and b then one cannot draw another one. The aim is to avoid a triangle in your own colour. It is known that this game always has a winner. Obvious generalizations to more colours and more vertices lead to Ramsey theory. I actually took this route while lecturing to high school kids and they get into it if you start your talk by playing a few games on the blackboard.
-
The 15 puzzle is a good way of introducing groupoids. – Alfonso Gracia-Saz Aug 17 2010 at 3:23
The Lights Out game, for the utility of linear algebra over nonobvious fields (here, ${\mathbb F}_2$). It's easy to find on-line versions.
-
My first undergrad research was extending the well known results on this game! Completely unimportant, but helped me realize I wanted to do math! – B. Bischof Aug 18 2010 at 16:58
Slightly off the mark, however you can build a toy model to accomplish the same objective.
I took my section out of the classroom to a spot not far away when I was a TA for calculus. There was a domed window with panels on it that curved. It was a bright day, and we could see the shadow below. I used this as a model to demonstrate the need for a Jacobian in doing multivariate change of variables in integration. I probably could have drawn on a balloon for a pedagogical equivalent, but I thought it was good for the section to walk to an example.
-
Very low-tech - I cut a square out of a piece of cardboard and use it to illustrate the group of symmetries of a square, first day of a group theory course.
-
Is there anyone here who does not have cardboard models of the Platonic solids in their offices?! :) – José Figueroa-O'Farrill Aug 14 2010 at 23:53
I only have a buckyball, or rather, its 1-skeleton, but it's made of plastic. – Victor Protsak Aug 15 2010 at 1:58
@José: Much more fun than making cardboard models: build them with NeoCube! :) theneocube.com (And those who don't want to build anything themselves can just buy a set of roleplaying dice.) – Hans Lundmark Aug 16 2010 at 14:07
Please forgive me for tooting my own horn, but you might be interested in this link and this link (scroll down to the section "sporadic simple puzzles"). I worked on these puzzles around the same time that I learned about basic group theory, and thinking about them really helped clarify certain ideas (such as group actions, conjugation, and the utility of studying the orders of groups and their elements). I think they could be quite valuable as teaching aids.
-
(A remark now that I figured out how to display the picture: I was not involved in the design or construction of the gadget depicted, only the computer programs discussed in the links.) – Paul Siegel Aug 18 2010 at 23:12
The Tangle is a plastic manipulative toy that can be used to introduce students to knot theory. This is what the Tangle looks like:
Colin Adams has published a book entitled Why Knot: An Introduction to the Mathematical Theory of Knots with Tangle.
The publisher's blurb says: "Each copy of Why Knot? is packaged with a plastic manipulative called the Tangle®. Adams uses the Tangle because 'you can open it up, tie it in a knot and then close it up again.' The Tangle is the ultimate tool for knot theory because knots are defined in mathematics as being closed on a loop. Readers use the Tangle to complete the experiments throughout the brief volume."
The Tangle that is included with the book is much longer than the one shown in the photograph above, so it can be bent to create fairly complicated knots.
-
I don't happen to own a planimeter, but there are simulated ones on the internet. These use Green's theorem to compute area of a traced curve - you trace out the curve and a gadget mechanically collects the dot products of a fixed vector field and your tangent vector. If you take your vector field to have constant curl, then the gadget has computed for you the area of a region. I have demonstrated this to calculus students after teaching them Green's theorem, and they find it impressive, generally.
Here is a link to one: http://www.hpmuseum.org/planim/planimtr.htm#the_applet . There is a link to a guide which describes how it works, but you may have to work it out yourself, since it's rather terse.
Another classic is to ask them to construct a Möbius strip out of duct tape, and to cut it down the middle, three levels deep (so 1 + 1 + 2 = 4 cuts in all), recording their observations. This serves as an enticement for vector calculus students to learn topology.
-
"but there are simulated ones on the internet" - Could you please provide a link? – David Corwin Aug 15 2010 at 22:59
okay, link is posted – David Jordan Aug 16 2010 at 13:07
I enjoyed playing the card game Set. http://www.setgame.com/set/index.html
-
In what mathematical context? – Dan Ramras Aug 15 2010 at 0:35
I don't really think this is relevant to the question. – Micah Milinovich Aug 15 2010 at 2:32
The game of Set does not have much to do with set theory. The game asks you to recognize lines in affine 4-space over the field with 3 elements. The symmetry group is larger and less obvious than the symmetries of a rigid physical object. I think it would be reasonable to use Set as an example in group theory or combinatorics. – Douglas Zare Aug 15 2010 at 5:43
There is also projective set, a version designed by a graduate student at Waterloo, which requires recognizing lines in five-dimensional projective space over the field with 2 elements. – Alfonso Gracia-Saz Aug 17 2010 at 3:25
The game is played by initially dealing out 12 of the 81 cards face up, with players competing to see who first identifies a grouping of 3 cards equivalent to a line in the 4-dimensional lattice of size $3^4$. If all players concur that the 12 cards dealt do not contain a so-called "set" (a line as Douglas Zare described above), then 3 more cards are dealt. If, again, all people concur, 3 more cards are dealt. I wrote a program back in 1993 using a backtracking algorithm to search for the largest size set of the "Set" cards which are not collinear in affine 4-space. – sleepless in beantown Aug 29 2010 at 13:24
David Bachman has been experimenting with using 3d printing to make models for multivariable calculus. For example, models of the graph of $z = x^2 - y^2$, showing the vertical slices and the level curves.
-
I have used Polydrons (triangles,squares, etc. that snap together) to illustrate why there are only 5 platonic solids, to describe their symmetry groups. They also are useful to describe Euler characteristic.
I have also used Set (as others have mentioned) to give an application of modular arithmetic and ask interesting probability/combinatorics questions, such as: "what is the largest possible number of Set cards that contains no set?"
You can also use two jump ropes to illustrate the group PSL(2,Z) as in Conway's "rational tangles." See Conway's lecture on it.
-
I have not yet experimented this, but I plan to use a hat and a skirt (with bottom larger than top) to illustrate curvature.
-
You can get people to eat slices of pizza and ask why we always fold a slice of pizza radially while eating it. The answer is Gauss Theorema Egregium! – Somnath Basu Aug 15 2010 at 19:10
Indeed! The same works with those little stands used to keep sheet music upright. – Benoît Kloeckner Aug 16 2010 at 11:26
Replying to Dan Brumleve's recommendation of the card game "Set" as an answer because my comment was exceeding the maximum allowed size.
The card game "Set" is most analogous to playing tic-tac-toe on a 4-dimensional lattice of size $3 \times 3\times 3\times 3$, allowing for lines to wrap around in that 4-space.
As Douglas Zare described it, the "game asks you to recognize lines in affine 4-space over the field with 3 elements." There are $3^4=81$ cards, with each card showing a design that can be described by 4 attributes, with each attribute containing 3 elements: 3 colors, 3 shapes, 3 levels of shading (outlined, striped, solid), and 3 cardinalities (1,2, or 3).
Twelve cards are initially dealt, with players competing to find a grouping of 3 cards such that their attributes are collinear in the 4-d space: in each attribute, either all the same or all different. Thus each line in affine space can also be described by a vector $v\in\{-1, 0,+1\}^4$ (excluding the zero vector) and a representative member of that grouping. Each card can also be seen as describing a permutation on the group of the 81 cards.
One quick question that comes out of this is
• Are 12 cards sufficient to guarantee that the cards dealt contains such a collinear grouping?
The answer to that is no. If the people playing concur that the 12 cards initially dealt do not contain 3 points in the 4-d lattice that are collinear, then 3 more cards are dealt, etc.
• What is the minimum number of cards that must be selected to guarantee that they contain a collinear group of three? The answer to that (21) must be one more than the answer to:
• What is the maximum number of cards that can be played which do not contain a collinear group of three? (answer: 20)
The card game misuses the mathematical term "set" in its name and in its directions for playing the game, since it asks the players to yell out "set!" when they find such a collinear grouping in the 4-d lattice. It should be rightfully called "line", or perhaps most correctly "4-dimensional affine space line over the field with 3 elements". But yelling that out each time would certainly slow down playing the game. :)
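For readers who want to experiment, here is a small Python sketch (encoding each card as a point of $\{0,1,2\}^4$): three distinct cards are collinear exactly when every coordinate sums to $0$ modulo $3$, i.e. when each attribute is all-same or all-different, and a brute-force count recovers the $1080 = 81\cdot 80/6$ lines of this affine space.

```python
from itertools import product, combinations

cards = list(product(range(3), repeat=4))   # 81 cards, one per point of AG(4, 3)

def is_line(a, b, c):
    # All-same or all-different in each attribute <=> each coordinate sums to 0 mod 3.
    return all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c))

lines = [t for t in combinations(cards, 3) if is_line(*t)]
print(len(lines))   # 1080
```

Finding the largest collection of cards with no line (the answer 20 quoted above) needs a real search, such as the backtracking mentioned earlier; the collinearity test itself is this simple.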
http://physics.stackexchange.com/questions/21925/what-is-the-proper-way-to-connect-two-light-bulbs-in-a-circuit-in-series-or-par?answertab=votes
# What is the proper way to connect two light bulbs in a circuit? In series or parallel?
What is the proper way to connect two light bulbs in a circuit, in series or in parallel, and why?
My thought is that it's better to hook them in parallel, since if we take into account Ohm's Law, the sum of their resistance will be less than when connected in series therefore the battery will last longer.
What do you think?
-
What is your criterion for "better"? Power usage or brightness, or something else? Also, since I imagine you're studying this kind of thing, have you tried figuring it out yourself? Is there some particular aspect of solving the problem that gives you trouble? – David Zaslavsky♦ Mar 5 '12 at 20:08
By the way, "What do you think?" is not really a good way to end a question around here. We strive to collect answers, not opinions. – David Zaslavsky♦ Mar 5 '12 at 20:09
I messed this up too didn't I :) – dom Mar 5 '12 at 20:12
Well it could be asked better, but it's a better question for this site than your other one... and in any case, don't worry about it, you're not offending anyone or anything like that! As long as you're willing to take some constructive advice on how best to participate in the site, we're happy to have you and your questions here. We understand that people don't come in automatically knowing how we do things around here. – David Zaslavsky♦ Mar 5 '12 at 20:16
Thank you so much David! I'll try to update my question so it makes more sense. – dom Mar 5 '12 at 20:18
## 1 Answer
If you want the battery to last longer, you want the resistance to be HIGHER. $P = V^2/R$. Connect in parallel for bright bulbs, connect in series for dim bulbs over longer time.
-
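A quick numerical comparison makes the point; the voltage and resistance below are made-up illustrative values for two identical bulbs across the same battery:

```python
V = 6.0    # battery voltage, volts (illustrative value)
R = 12.0   # resistance of each bulb, ohms (illustrative value)

R_series = 2 * R        # 24 ohms
R_parallel = R / 2      # 6 ohms

P_series = V**2 / R_series       # 1.5 W -> dim bulbs, battery lasts longer
P_parallel = V**2 / R_parallel   # 6.0 W -> bright bulbs, battery drains 4x faster
print(P_series, P_parallel)
```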
http://mathhelpforum.com/number-theory/40924-simple-congruence.html
Thread:
1. Simple congruence
Does $\frac{2^x-1}{3}=n$, n natural, have odd solutions in the naturals for x? How do you know (does Euler's theorem tell me this)?
2. Originally Posted by sleepingcat
Does $\frac{2^n-1}{3}$ have odd solutions in the naturals? How do you know?
All odd natural numbers $n$ can be written $n=2k+1$, $k \in \mathbb{N} \cup \{0\}$.
For the above to have solutions in the odd naturals, the following must hold for some $k$:
$(2^{n}-1 \equiv 0 \pmod 3) \iff (2^{2k+1}-1 \equiv 0 \pmod 3) \iff (2\cdot 4^{k}-1 \equiv 0 \pmod 3)$
but $[4^k]=[4]^k=[1]^k$ mod 3, so
$(2\cdot 4^{k}-1 \equiv 0 \pmod 3)\implies (2\cdot 1^{k}-1 \equiv 0 \pmod 3) \implies (2-1 \equiv 0 \pmod 3)$
This is false, so there is no solution with odd exponent.
3. Originally Posted by sleepingcat
Does $\frac{2^x-1}{3}=n$, n natural, have odd solutions in the naturals for x? How do you know (does Euler's theorem tell me this)?
Note that $2^x - 1 = 1+2+2^2+...+2^{x-1}$ (this identity appears in The Elements).
Next, $2 \equiv -1 \pmod 3$.
Thus, $1+2+2^2+...+2^{x-1} \equiv 0 \text{ or } 1 \pmod 3$ ($0$ when $x$ is even, $1$ when $x$ is odd).
Thus, for odd $x$ the sum is congruent to $1$, never to $0$ mod 3, so there are no odd solutions.
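Both arguments are easy to confirm numerically: for every odd exponent the remainder of $2^x-1$ on division by $3$ is $1$, never $0$ (a quick Python check):

```python
# For odd x, 2**x - 1 leaves remainder 1 mod 3, so (2**x - 1)/3 is never a natural number.
for x in range(1, 40, 2):
    assert (2**x - 1) % 3 == 1
print("no odd x up to 39 makes 2**x - 1 divisible by 3")
```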
http://mathoverflow.net/questions/102696?sort=votes
## Realizing not-quite-barycentric subdivision of a polytope
Given a poset $S$, one can form a new poset $I(S)$ whose elements are intervals in $S$ (i.e. either $\emptyset$ or $[a,b]$ for some $a\leq b\in S$) with ordering by (set) inclusion. If $S$ is ranked, then $I(S)$ will also be ranked (by $r([a,b])=r(b)-r(a)$).
If $S$ is the face lattice of a $d$-dimensional polytope $P$, is there a canonical way to construct a $d+1$-dimensional polytope $I(P)$ with face lattice $I(S)$? Is there a name for this construction?
Notes:
1) The 2-faces will always be quadrilaterals.
2) The underlying cellular complex is not the barycentric subdivision, whose faces are the chains of $S$, not the intervals.
3) If you apply the construction to a simplex, you should get a cube (of one higher dimension); see the small computational check after these notes.
4) Of course the best construction should preserve symmetries and intertwine the inclusion of a face $F$ into $P$ with that of $I(F)$ into $I(P)$.
5) The only polytope I actually need an answer for right now is the regular 3-dimensional cube. If this construction only works for, say, simple polytopes, I'm fine with that.
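To make Note 3 concrete, here is a small Python check (a sketch only): take the face lattice of a triangle, with the empty face included, as the Boolean lattice of subsets of $\{0,1,2\}$; counting its nonempty intervals by rank $r(b)-r(a)$ reproduces the face counts $8, 12, 6, 1$ of the $3$-cube.

```python
from itertools import combinations

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# Face lattice of a triangle (empty face included) = Boolean lattice on {0, 1, 2}.
faces = subsets({0, 1, 2})

# Count nonempty intervals [a, b] with a <= b, graded by rank r(b) - r(a).
counts = {}
for a in faces:
    for b in faces:
        if a <= b:
            rank = len(b) - len(a)
            counts[rank] = counts.get(rank, 0) + 1

print(counts)   # {0: 8, 1: 12, 2: 6, 3: 1} -- vertices, edges, squares, 3-cell of a cube
```

Together with the empty interval, these $27+1$ elements match the $28$-element face lattice of the $3$-cube (empty face included).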
-
Note that posets are enriched in themselves and that $I(S)$ is representable; it is just $\text{Hom}(0 \to 1, S)$. I don't think a similar construction will work for polytopes though... – Qiaochu Yuan Jul 19 at 20:12
## 1 Answer
Edit: I was including the empty set as a face in my earlier answer, which gives the wrong poset I(S). I have now corrected my answer below so as not to include the empty set, significantly changing the answer.
The poset I(S) cannot be the face poset of any polytope, because it will have multiple maximal elements of the form $[v,P]$ for the various vertices of P. I(S) should be the face poset of a subdivision of P. This subdivision is less refined than the barycentric subdivision of the polytope. Its vertices are the barycenters of faces, but its edges only connect barycenters of faces of consecutive dimensions, in contrast to the barycentric subdivision of the original polytope where they would come from all face inclusion pairs. One can continue upward in dimension, likewise describing the faces of the subdivison by progressively filling in its lower skeleta.
You also ask about the cube specifically. Your subdivision in that case is cubical, breaking each i-dimensional cubical face of the original cube into 2^i cubical faces of dimension i.
-
I meant for the empty set to be a face - precisely to avoid the problem you mentioned. – Alexander Woo Jul 20 at 17:14
In that case you have a different problem, which I addressed in my original answer -- if the empty set is a face, then the face poset for the boundary of your desired polytope will be the poset I discussed above. Hence the face poset of the boundary of your desired polytope will be the face poset of a regular CW complex which is homeomorphic to a ball. This is impossible, since the boundary of a polytope is homeomorphic to a sphere, and you cannot have two non-homeomorphic regular CW complexes with the same face poset. – Patricia Hersh Jul 20 at 22:05
Sorry for the confusion. Are you assuming that the original relations in the face poset of $P$ are supposed to also hold in the face poset of $I(P)$? My intention is that they do NOT. I know there is such a construction that turns simplices into cubes - each face of the simplex becomes a vertex; each incidence relation between a $d$-face and a $(d+1)$-face becomes an edge, and so on. You can visualize that the Hasse diagram for the Boolean lattice (the face lattice of a simplex) becomes the edge graph of a cube. The problem is that I don't know how this works in general, if it does at all. – Alexander Woo Jul 21 at 1:18
No, I'm not. I am using the order you describe by interval containment. This means (1) that each element of your original poset gives a minimal element in $I(S)$, since that gives a closed interval having a single element, (2) that each cover relation $a\prec b$ of the original poset gives an element of $I(S)$ which covers two of the minimal elements, namely covers $[a,a]$ and $[b,b]$ since $[a,b]$ is the set of size two with these two elements, (3) each $[a,c]$ where there exists $b$ with $a\prec b \prec c$ in the original poset will be at rank 2 with $[a,b]$ and $[b,c]$ just below it, etc. – Patricia Hersh Jul 22 at 18:07
Ah - reading more closely... there is a second set of maximal faces in the boundary, namely $[\emptyset, F]$ where $F$ is a facet of the original polytope. I'm not confident that makes the boundary into a sphere always, but it does if we start with a simplex or a 2-dimensional polygon. – Alexander Woo Jul 22 at 22:14
http://math.stackexchange.com/questions/243899/any-forest-on-5-or-more-vertices-contains-an-independent-set-of-size-3?answertab=oldest
# Any forest on 5 or more vertices contains an independent set of size 3.
I am looking for a short proof of this fact. This is clearly true by drawing these trees, but I am having trouble putting it into writing. Somehow I need to select 3 of the 5 vertices and show that there must be a $3$-cycle or an independent set of size $3$. (Is this a Ramsey number or something like that?)
Is there an obvious/short argument here, or perhaps a lemma I could use to knock it out quickly?
-
I might be missing something here but this seems too simple. Take a bipartite coloring of your forest and take the larger color class. – EuYu Nov 24 '12 at 19:47
How is it possible to have cycles in a forest ? – Amr Nov 24 '12 at 19:56
@EuYu All you're missing is that I'm bad at graph theory. :) That is a great short proof. – Alexander Gruber Nov 24 '12 at 20:23
@EuYu Can your method with color coverings be generalized to graphs of finite girth? For example, could it be used to show that the only graphs with girth $4$ without an independent set of size $3$ is the $4$-cycle? – Alexander Gruber Nov 24 '12 at 20:41
The argument is dependent more on the fact that forests are bipartite than on their girth. I'm not sure it will generalize nicely for girths. The most general result I could give is that if a graph of order $n$ is $k$-chromatic, then there is an independent set of size at least $\frac{n}{k}$. This is a well-known trivial bound on the independence number. – EuYu Nov 24 '12 at 20:53
## 1 Answer
In general let $F$ be a forest of order $n$ with $k$ connected components. Every connected component is a tree, hence bipartite (trees have no odd cycles), so the $i$-th connected component contains an independent set of size at least $\lceil n_i/2\rceil$ (where $n_i$ is the order of the $i$-th connected component). Since the union of all these independent sets is again independent (because they come from different connected components), we have:
$$\sum_{i=1}^k \left\lceil \frac{n_i}{2}\right\rceil \leq \alpha(F)$$
A weaker version of this inequality is: $$\frac{n}{2}=\sum_{i=1}^k \frac{n_i}{2}\leq \alpha(F)$$
Since the order of $F$ in your question is $5$, we know that $5/2\leq \alpha(F)$, and since $\alpha(F)$ is an integer, $3\leq \alpha(F)$.
-
Perfect, thank you. – Alexander Gruber Nov 24 '12 at 20:08
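A small Python sketch of the colouring argument (greedy 2-colouring of each tree by breadth-first search, then taking the larger of the two colour classes, which has at least $\lceil n/2\rceil$ vertices; the example forest below is made up):

```python
from collections import deque

def large_independent_set(n, edges):
    """2-colour each tree of a forest and return the larger colour class."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    colour = {}
    for start in range(n):
        if start in colour:
            continue
        colour[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in colour:
                    colour[w] = 1 - colour[u]
                    queue.append(w)

    classes = [[v for v in range(n) if colour[v] == c] for c in (0, 1)]
    return max(classes, key=len)

# A forest on 5 vertices (here a path); the returned class has 3 vertices: [0, 2, 4].
print(large_independent_set(5, [(0, 1), (1, 2), (2, 3), (3, 4)]))
```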
http://math.stackexchange.com/questions/165402/solving-y-frac1x-y-1-frac-cot-xx-y-0-by-rank-reduction
# Solving $y'' - \frac{1}{x} y' + (1+\frac{\cot x}{x}) y = 0$ by rank reduction
With the substitution $$y(x) = \sin x \int u(x) \, dx\tag{*}$$ I managed to get to$$u'(x) = \left(\frac{1}{x}-2\cot x\right)u(x)$$ Solving which gave me $$u(x) = C_1 \frac{x}{\sin^2 x}$$ Inserting that back into $(*)$ $$y(x) = (\sin x) C_1\int \frac{x}{\sin^2 x} \, dx = (\sin x) C_1 \left(\log(\sin x) - \frac{x}{\cot x} + C_2\right)$$
Which doesn't seem to be the correct solution(s). I don't know where I went wrong though.
-
## 2 Answers
We have a theorem:
There exists a fundamental set of solutions for a homogeneous linear second-order differential equation $a_2(x)y''+a_1(x)y'+a_0(x)y=0$ on an interval $I$.
You have a second-order homogeneous differential equation which is linear as well. This is essential for having an independent set of solutions $\{y_1(x),y_2(x)\}$. As you noted, there is a method by which we can construct a second solution from a known solution so that the resulting pair is a fundamental set (called reduction of order). It can be proved that if $y_1(x)$ is a known solution, then the second one satisfying the theorem is $$y_2(x)=y_1(x)\int\frac{e^{-\int\frac{a_1(x)}{a_2(x)}dx}}{y_1^2(x)}dx$$
So in your equation we have $$y_2=\sin(x)\int\frac{e^{\int\frac{1}{x}dx}}{\sin^2(x)}dx=\sin(x)\int\frac{x}{\sin^2(x)}dx$$ $$y_2=\sin(x)(-x\cot(x)+\ln(\sin(x)))=-x\cos(x)+\sin(x)\ln(\sin(x))$$ Now, your general solution is as $y(x)=C_1y_1(x)+C_2y_2(x)$.
-
Thanks. As it turns out, that was my solution, I just made an error copying it over. – Cubic Jul 1 '12 at 20:16
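The reduction-of-order formula and the antiderivative used above are easy to verify with SymPy (a sketch; `simplify` should return $0$ in both cases):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y2 = -x*sp.cos(x) + sp.sin(x)*sp.log(sp.sin(x))

# y2 satisfies the original equation y'' - y'/x + (1 + cot(x)/x) y = 0.
ode = sp.diff(y2, x, 2) - sp.diff(y2, x)/x + (1 + sp.cot(x)/x)*y2
print(sp.simplify(ode))          # 0

# d/dx [log(sin x) - x cot x] = x / sin^2 x, confirming the antiderivative.
antiderivative = sp.log(sp.sin(x)) - x*sp.cot(x)
print(sp.simplify(sp.diff(antiderivative, x) - x/sp.sin(x)**2))   # 0
```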
$$y(x) = \sin(x) \int_0^x u(t) dt \implies y'(x) = \cos(x) \int_0^x u(t) dt + \sin(x) u(x)$$ $$y''(x) = -\sin(x) \int_0^x u(t) dt + \cos(x) u(x) + \cos(x) u(x) + \sin(x) u'(x)\\ = \sin(x) \left( u'(x)-\int_0^x u(t) dt\right) +2 \cos(x) u(x)$$ Hence, $$y''(x) + y(x) = \sin(x) u'(x) +2 \cos(x) u(x)$$ $$-y' + \cot(x) y = -\sin(x) u(x)$$ \begin{align} y''(x) - \dfrac{y'(x)}x + \left(1 + \dfrac{\cot(x)}x \right)y(x) & = \sin(x) u'(x)- \sin(x) \dfrac{u(x)}x +2 \cos(x) u(x)\\ & = \sin(x) \left( u'(x) - \dfrac{u(x)}x\right) +2 \cos(x) u(x) \end{align} Hence, we get that $$u'(x) + \left( 2 \cot(x) - \dfrac1x\right) u(x) = 0$$ Now you should be able to finish it off. Hence, we get that $u(x) = c\dfrac{x}{\sin^2(x)}$. Hence, $$y(x) = c \sin(x) \int_0^x \dfrac{t}{\sin^2(t)} dt$$ The error is in performing the integral. $$\int \dfrac{t}{\sin^2(t)}dt = \log(\sin(t)) - t \cot(t) + k$$ Now you should get the right solution.
-
I get $y''(x) = -\sin(x) \int_0^x u(t) dt + \cos(x) u(x) + \cos(x) u(x) + \sin(x) u'(x)$ - I have no idea where you got the '-' in front of the second cos from. $cx^2\sin x$ can't be the solution because $\sin x$ is a solution. – Cubic Jul 1 '12 at 20:12
@Cubic Sorry. I was wrong. The error is in the integral $$\int \dfrac{t}{\sin^2(t)} dt = \log(\sin(t)) - t \cot(t) + k$$ – user17762 Jul 1 '12 at 20:30
http://mathhelpforum.com/trigonometry/111224-solving-x-trig-function.html
# Thread:
1. ## Solving for x with a trig function
How do I solve for x when f = 4(-cos^2(x) + sin^2(x) + sin(x))?
I can regroup the function in various ways, and I factored out the 4 already, but I don't know how to rewrite the function so that I can solve for x... If you get me started on the right path, or hint at an identity that I can use, that would help. Thanks.
2. Originally Posted by Maziana
How do I solve for x when f = 4(-cos^2(x) + sin^2(x) + sin(x))?
I can regroup the function in various ways, and I factored out the 4 already, but I don't know how get the function so that I can solve for x... If you get me started on the right path, or hint at an identity that I can use, that would help. Thanks.
$-cos^2(x) = -(1-sin^2(x)) = sin^2(x)-1$
$4(sin^2(x)-1+sin^2(x)+sin(x)) = 4(2sin^2(x)+sin(x)-1)$
This is now a quadratic in sin(x)
3. f(x) = 4(-cos^2(x) + sin^2(x) + sin(x)) = 4(2sin^2(x)+sin(x)-1),
y = f(x) = 4(2sin^2(x)+sin(x)-1), or 4(2sin^2(x)+sin(x)-1) = 0 if you want to solve for the roots.
Use the quadratic formula (or factor) to solve for sin(x), then solve for x; see the attached graph of the function.
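For completeness (a worked finish, since the quadratic in sin(x) factors):
$2\sin^2(x)+\sin(x)-1 = (2\sin(x)-1)(\sin(x)+1) = 0$,
so $\sin(x) = \tfrac{1}{2}$ or $\sin(x) = -1$, i.e. $x = \frac{\pi}{6}+2k\pi$, $x = \frac{5\pi}{6}+2k\pi$, or $x = \frac{3\pi}{2}+2k\pi$ for any integer $k$.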
Attached Thumbnails
http://m-phi.blogspot.co.uk/2013/01/are-there-causally-active-mixed.html
# M-Phi
A blog dedicated to mathematical philosophy.
## Saturday, 5 January 2013
### Are there causally active mixed mathematical objects?
At the heart of the indispensability arguments against nominalism lie mixed mathematical objects - henceforth MMOs. A simple example of an MMO is the set of US presidents. Or the set of eggs in a fridge. The object counts as mathematical because it is a set, and it counts as "mixed" because its elements are concrete entities. Another example would be any function $f$ from the set of US presidents to $\{0,1\}$. This function $f$ would map a president $p$ to a number $f(p) \in \{0,1\}$. Another example would be a quantity: a quantity maps concrete things to abstract values. (This is a puzzling case though. Must there be a concretum for every value? Must there be a concretum for each value of mass-in-kilograms? Possibly one should say that a quantity is really some kind of structure built of properties.) A final example would be a physical field, such as the electric and magnetic fields, $\bf{E}$ and $\bf{B}$, usually unified into the electromagnetic field (written $F_{ab}$ in tensor notation). The fields $\bf{E}$ and $\bf{B}$ are vector fields on spacetime: they are (mixed) functions which assign abstract values to points in spacetime.
The question I am interested in is whether any of these MMOs ever counts as being causally active. For it is usually (and presumably rightly) assumed that pure mathematical entities---the set of natural numbers, the sine function, $\pi$, $\aleph_{57}$, etc.---are not causally active. But it seems to me that, according to physics itself, the electromagnetic field (which is, remember, an MMO, a mixed mathematical entity) is causally active.
For example, the Lorentz force law says, for a point particle of mass $m$ and charge $q$ and position vector $\mathbf{r}(t)$,
$m \frac{d^2 \mathbf{r}}{dt^2} = q(\mathbf{E} + \frac{d \mathbf{r}}{dt} \times \mathbf{B})$
So, the motion of the particle (its acceleration) is determined by the fields, $\mathbf{E}$ and $\mathbf{B}$, which are MMOs. Consequently, it seems that there are causally active mixed mathematical objects --- namely, physical fields.
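As a purely illustrative numerical sketch (the field values and particle data below are invented), the law determines the particle's acceleration once $\mathbf{E}$ and $\mathbf{B}$ are evaluated at its location:

```python
import numpy as np

q, m = 1.6e-19, 9.1e-31            # charge (C) and mass (kg), electron-like values
E = np.array([0.0, 0.0, 1.0e3])    # electric field at the particle's position, V/m
B = np.array([0.0, 1.0e-2, 0.0])   # magnetic field at the particle's position, T
v = np.array([2.0e5, 0.0, 0.0])    # particle velocity, m/s

# Lorentz force law: m * d^2r/dt^2 = q * (E + v x B)
a = (q / m) * (E + np.cross(v, B))
print(a)   # acceleration vector, m/s^2
```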
#### 24 comments:
1. This comment has been removed by the author.
2. Hi Jeff,
As far as I can see, this sort of indispensability arguments is based on conflating two distinct (putative) entities--i.e. (a) the electromagnetic field itself (if there is any such thing) and (b) the mathematical object(s) we use to represent it (if there are any such things). So, I think that the solution to the puzzle is that, while the EM field is causally active, the piece of math we use to represent it is not. Am I missing something?
(Could you please expand your comment about the quantity case? I'm not sure I see what you are trying to say)
3. Hi Gabriele,
According to physics itself, the magnetic field is a vector field, a function on spacetime. If the magnetic field is not a vector field, what is it?
Similarly, quantities are functions too.
Cheers,
Jeff
1. hi jeff,
i think we should carefully distinguish between theories and our interpretations of them. strictly speaking there are no interesting metaphysical claims that are true "according to physics itself". as far as i can see, our best interpretation of classical em is that the magnetic field is best represented mathematically as a vector field not that it is a vector field. (although some physicists sometimes talk of it as if it actually is a vector field we shouldn't take that talk liiterally). i hope you wouldn't want to claim that the compass needle points towards the north pole because vectors are making it do so! surely the vectors are there to represent the influence the earth's magnetic field has on the needle and other objects but they di not constitute the magnetic fiels!
2. sorry for all the typos--i'm writing this from my phone!
3. Hi Gabriele,
"i hope you wouldn't want to claim that the compass needle points towards the north pole because vectors are making it do so!"
Yes!!
That's precisely what I'm saying.
(The dipole's vector field aligns with the magnetic field's vectors to minimise the energy.)
Physical laws normally express relationships between physical quantities, which are usually mixed functions. I don't think the magnetic field $\mathbf{B}$ is "represented" by a vector field. I think $\mathbf{B}$ *is* a vector field. And I think this is the way physics thinks of it too (though physics could be mistaken, of course). A physical field is a mixed function that maps spacetime points to values in some space (usually a linear space).
I think the idea you're advocating depends on *nominalizing* the physics, a la Hartry Field, and thereby replacing the mixed physical functions with primitive spatiotemporal predicates, and then showing that a (standard) model for the resulting theory can be represented by reintroducing the mixed functions. But this simply assumes nominalism and the success of a certain kind of nominalization.
Cheers,
Jeff
4. This comment has been removed by the author.
5. This is damn interesting guys. Let's see if I manage to post something brief.
Obviously, if Jeff is right, then mixed mathematical entities (as he calls them) can be causally efficacious, since they are instantiated as concrete spatiotemporally located objects, however de-localised they may be (e.g. fields). However, I am inclined to take this as a reductio of the view Jeff is exploring and, in the spirit of Gabriele's comment, as pointing to the need to distinguish a mathematical and a physical concept. This is typically easy enough to do for any branch of experimental physics, although admittedly harder for em theory, or more generally mathematical physics, perhaps because the maths is so entrenched there. But the idea is that there is a mathematical concept of a "field", which is indeed displayed by pointed vectors in an abstract mathematical space. Then there would be a physical concept of a field which is best thought of in purely physicalistic or even operationalist terms. Something like Faraday's lines of force, or more generally force fields. Appropriately, Maxwell's work contains extended discussions of both, and how to relate them.
6. Hi Mauricio,
The thing is, the usual field quantities in physics, such as $\mathbf{B}$, $g_{ab}$, $\Psi(t)$, etc., are mixed, and so relate the two domains that one wants to keep separate (i.e., spacetime points & concreta on the one hand, and the mathematical value ranges on the other).
So, when you say the problem is to make sense of "a physical concept of a field ... thought of in physicalistic terms", I think this has to mean "thought of in nominalistic terms". ("Operational" would be going too far; usually those replying to indispensability arguments, such as Field, Melia, Leng, Sober, and others, are scientific realists.)
Hartry Field does try to achieve this in his Science Without Numbers. He agrees that a physical field quantity $\Phi$ is a function on spacetime to some abstract range; however, the usual theory T of the field $\Phi$ is nominalized, and replaced by a theory T* which uses more complicated primitive spatiotemporal predicates (e.g., "the $\Phi$-value at p1 lies between the $\Phi$ values at p2 and p3"). The success of such a view hinges on the viability of nominalized physics.
Cheers,
Jeff
4. E.g., here's Hartry Field discussing the indispensability argument:
"After all, the theories that we use in explaining various facts about the
world not only involve a commitment to electrons and neutrinos, they
involve a commitment to numbers and functions and the like. (For
instance, they say things like ‘there is a bilinear differentiable function,
the electromagnetic field, that assigns a number to each triple consisting
of a space-time point and two vectors located at that point, and it obeys
Maxwell’s equations and the Lorentz force law’.) I think that this sort
of argument for the existence of mathematical entities (the Quine-
Putnam argument, I’ll call it) is an extremely powerful one, at least
prima facie." (Field 1989, Realism, Mathematics and Modality, Introduction.)
To stress: "there is a bilinear differentiable function,
the electromagnetic field, that assigns a number to each triple consisting
of a space-time point and two vectors located at that point, and it obeys
Maxwell’s equations and the Lorentz force law"
There's no conflation here, as far as I can see.
And this is one of the premises of the indispensability argument. Consequently, Field tries to reformulate the laws without referring to such functions, just using primitive spatiotemporal predicates.
Cheers,
Jeff
1. is this an argument exi auctoritate? :-)
im sorry but i can't take field's talk literally here. the magnetic field is no more a function than hartry field is (and no hf is not a function). this is a typical case of conflation of the thing reprsented with the thing used to represent it and the fact that the mf is unobservable males the conflation harder to see.
2. i meant 'ex' and 'makes'--damn phone!
3. Hi Gabriele,
"the magnetic field is no more a function than hartry field is"
I'm quite happy to say that physical systems are complicated tensor functions. :)
"this is a typical case of conflation of the thing reprsented with the thing used to represent"
But this view will only work if you can nominalize physics. What is the thing represented? What is the thing used to represent?
Suppose we accept your view: then
(i) there is a function F;
(ii) there is a magnetic field M;
(iii) F represents M.
(And then it is required that the reference to the function F can be nominalized away.) But what is this function F a function on? Spacetime? What is its range? A vector space?
What is this entity M? What are its properties? Is M somehow "like" a vector field on spacetime?
What does "represents" mean?
Is the usual magnetic field $\mathbf{B}$ the same as F or M?
Cheers,
Jeff
4. Damn good questions. Of course, if one is by instinct or default a nominalist, one would start from the other end of Jeff's starting point - and what is surprising there is the effectiveness of maths in representing physics. From Jeff's non-nominalist perspective, there can hardly be a surprise there - the maths is the physics. Maybe the nominalist view makes more sense for areas of physics, such as quantum mechanics, or experimental physics, where there is de facto underdetermination of maths by physics. In em theory, or even worse, spacetime theory, it is much harder to see how such nominalist distinctions play a role
5. "But this view will only work if you can nominalize physics. [(a)] What is the thing represented? [(b)] What is the thing used to represent?"
I have no idea of what people mean when they talk of "nominalizing" physics because I find that both the rules and the object of the game are unclear. As far as I can see, physical theories are not committed to the existence of anything other than concrete objects. The fact that they employ (putative) mathematical objects to describe (the behaviour of) those concrete objects is beside the point.
Anyway, here are my quick answers to your questions:
(a) the target of the representation (the thing that is represented) is a concrete entity (e.g. the EM field).
(b) the vehicle of the representation (the thing that is used to represent the target) is (typically) a fictional object (e.g. a vector field).
Of course, I would need to give an account of fictional objects that does not appeal to abstract entities in order for this to work but I think this can be done (along the lines of Kendall Walton's pretense account of fiction).
If I have just nominalized physics, clearly it wasn't that hard. If I haven't, I just can't possibly understand what would count as "nominalizing" physics.
"I think the idea you're advocating depends on *nominalizing* the physics, a la Hartry Field, and thereby replacing the mixed physical functions with primitive spatiotemporal predicates, and then showing that a (standard) model for the resulting theory can be represented by reintroducing the mixed functions. But this simply assumes nominalism and the success of a certain kind of nominalization."
With regards to the alleged question-begging, as far as I can see this is (an admittedly oversimplified sketch of) the dynamic of the dialectic. Pre-theoretically we all accept the existence of concrete objects such as chairs, trees, and magnetic needles. But we don't all accept the existence of "abstract entities" such as numbers, vectors, etc. However, it turns out that talking and thinking as if there are abstract entities is very useful. Is that a reason to believe that those entities exist? I don't see why, if we can make do with the fictionalist story I just sketched. Is this begging the question in favour of nominalism? I don't think so, for it seems that the burden of proof is on those who think that the fictionalist story I sketched is not enough and that we should add abstract entities to our ontology beside the concrete ones we used to believe in (if we still believe in those once we start philosophizing, that is).
1. Hi Gabriele,
Do you think that, e.g.,"$\nabla \cdot \mathbf{B} = 0$" does not imply that there is a function mapping spacetime points to vectors, whose divergence is 0. But this is surely just a logical implication. Or do you deny the existence of $\mathbf{B}$? But this then implies that Maxwell's equations are false, which scientific realists wish to resist.
"(a) the target of the representation (the thing that is represented) is a concrete entity (e.g. the EM field).
(b) the vehicle of the representation (the thing that is used to represent the target) is (typically) a fictional object (e.g. a vector field)."
But now I'm confused about what you think the EM field is. You deny the existence of $\mathbf{B}$, the function from spacetime to a vector space. But you say that the target is the EM field, so let's call it M. Is M not $\mathbf{B}$? Isn't M a vector-like field on spacetime? If not, then I can't make sense of what your entity M is ... it seems strangely noumenal (but I think it preserves some operational properties).
Also, your vehicle of representation also is a certain abstract object, and yet you deny that there are abstract objects. Hence, there is no vehicle of representation.
The only way I can see anything like this working is to take Maxwell's theory T and nominalize it, as T*, also giving certain representation theorems relating models of T* to models of T. Additionally, one gives certain conservativeness theorems guaranteeing that one can introduce mixed functions and sets without disturbing what one says about concreta.
Cheers,
Jeff
2. This comment has been removed by the author.
3. Hi Gabriele,
I forgot to answer your question about quantities, "(Could you please expand your comment about the quantity case? I'm not sure I see what you are trying to say)"
Take the case of the quantity Mass.
The thing is, the range of this quantity is, in some sense (up to isomorphism), all positive reals, $\mathbb{R}_{\ge 0}$. And given some mass scale $m$ (say, mass-in-kg), then, for certain kinds of physical system c, we have $m(c) = r$.
But there is not always a physical system $c$ for each $r \in \mathbb{R}_{\ge 0}$. So, there are some mass properties---having mass of $r$ kg---for which there is no actual instantiation.
This suggests that the mass properties are basic.
Another reason is connected to acceptable scale ("gauge") transformations. The property of having mass $1$ kg is the same property as the property of having mass $1000$ g. So, we want to identify these properties, even though they're indexed by different real numbers.
This allows us to understand quantity values as what Carnap and Quine called "impure numbers", such as $100 \text{ g}$, $25^{\circ}C$, etc. And we can write equations between them, like
$150 \text{ g} = 0.15 \text{ kg}$
Cheers,
Jeff
6. "Do you think that, e.g.,"∇⋅B=0" does not imply that there is a function mapping spacetime points to vectors, whose divergence is 0. But this is surely just a logical implication."
"∇⋅B=0" is a statement about a fictional object along the lines "Sherlock Holmes lives on Baker Street" and neither implies the existence of the object it is supposedly about. However, both are true within the relevant fiction and entail that there are such-and-such objects in the relevant fiction.
"Or do you deny the existence of B? But this then implies that Maxwell's equations are false, which scientific realists wish to resist.
Maxwell's equations are a piece of math and, as such, in and of themselves are neither true nor false of the world because they don't say anything about the world--they just describe a fictional object. However, those very equations can be used to describe certain dependency relations between physical quantities in the real world. Those descriptions, unlike the equations, are capable of being true or false, but they are purely about concrete physical stuff. To take a simpler example, the equation "F = G(Mm/r^2)" is just a piece of math (and, as an equation, it is neither true nor false). It is only when the equation is taken to express something about the relation between the magnitude of the gravitational force between two concrete objects, the masses of those two objects and the distance between them that it becomes part of a physical description of the world and can be true or false. But the math is simply providing us with a language that allows us to express stuff that we cannot express in ordinary language.
7. This comment has been removed by the author.
8. "But now I'm confused about what you think the EM field is. You deny the existence of B, the function from spacetime to a vector space. But you say that the target is the EM field, so let's call it M. Is M not B? Isn't M a vector-like field on spacetime? If not, then I can't make sense of what your entity M is ... it seems strangely noumenal (but I think it preserves some operational properties)."
No, M is not B. Take this magnet on my desk (I always keep one there to make realistic philosophical examples ;-)). The magnet is surrounded by a magnetic field, which is as concrete as the magnet itself. The fact that the magnetic field is unobservable makes it harder for us to distinguish the field itself from its mathematical representation, but the field is no less of a concrete entity than the magnet; in fact, if you sprinkle some iron dust around the magnet, you'll see the effects of the magnetic field. You claim that my magnetic field (M) is noumenal. I have to say I think this is just philosophical name-calling, but why is it so? What makes your magnetic field (B) less noumenal? If I understand your position correctly, you seem to think that what makes the iron dust form those patterns is the vectors that make up the magnetic field; I think it's the magnetic force the magnet exerts on the iron dust. (Sorry this is very rough but I don't have the time to get into the details at the moment.) Why would your position be less noumenal? You are postulating the existence of causally efficacious abstract entities!!! What could be more noumenal than that?!?
Furthermore, you call whatever causes the pattern of the iron dust an abstract entity (B); I say no, it's just a concrete entity (M), which can be represented mathematically by B. But what arguments do you have for the cause of the pattern being an abstract object? Both nominalists and platonists seem to agree that causal efficacy is the hallmark of the concrete and that abstract entities are causally inefficacious, so what would make your causally efficacious abstract entities abstract in the first place? And what would make them vectors?
"Also, your vehicle of representation also is a certain abstract object, and yet you deny that there are abstract objects. Hence, there is no vehicle of representation."
And Santa's got a beard that's long and white and yet I deny that he exists. Hence, there is no person who puts presents under the tree. :-)
No, seriously, I don't think I can answer this question satisfactorily within the limits of a blog comment, but if you are interested in my take on these issues, I can send you a copy of my book on models and representation as soon as it's out sometime this year (sorry for the little ad!) and then we can talk about it over a pint of real ale next time I'm in your neck of the woods. Deal? ;-)
9. Nice one, Gabriele. I too agree that the magnetic field is not a mathematical entity, but a physical one, which is in turn appropriately represented by a mathematical entity, namely the vector field B. This disposes of Jeff's worry regarding the causal efficacy of mathematical entities. On my view they are abstract, do not live in spacetime and have no causal powers. Jeff observes that the field B is a function that ascribes vectorial quantities to each spacetime point, which suggests to him that B lives in spacetime after all, and may be causally efficacious. But this seems to me to assume that the space and time parameters that appear in the equations that define B (implicit in the use of time and space derivatives, dt, dx, etc) are the real concrete physical points of spacetime, while I think they are just one more mathematical representation of it.
Where I part company with you is in your insistence on reading Maxwell's equations as fictional. In fact you seem to want to defend the view that the source of any representation (the 'vehicle' as you call it) is fictional. Admittedly this is a popular view nowadays, but I think it is both mistaken and not particularly enlightening (it seems to me to raise more questions than it answers). My view, by contrast, is that in most mathematical sciences the maths represents the physics directly, without any detour via any further entity, whether fictional or not. There are no genuine questions regarding "the fiction that Maxwell's equations are true within". This is just philosophical gobbledegook that physicists have no time for. The equations are true in the sense that they appropriately represent the em field - and there are no further issues of truth or semantics involved here.
My fictionalism rather emphasizes how in scientific representation, typically targets (not sources!) are fictional or highly idealized constructs. Maxwell provides a beautiful example since throughout his life he thought he was representing the ether through his equations. (I have a forthcoming paper on this topic, if you'd like to look at it ....) Cheers, Mauricio.
1. Hi Mauricio,
"But this seems to me to assume that the space and time parameters that appear in the equations that define B (implicit in the use of time and space derivatives, dt, dx, etc) are the real concrete physical points of spacetime, while I think they are just one more mathematical representation of it."
Ok, but then there are two objects here!
(i) the magnetic field $\mathbf{B}: M \to \mathbb{R}^3$, a mixed (axial) vector field, defined on spacetime $M$;
(ii) the co-ordinate representation $\mathbf{B} \circ \phi^{-1}$ (where $\phi : M \to \mathbb{R}^4$ is a co-ordinate chart), which is a purely mathematical function from $\mathbb{R}^4$ to $\mathbb{R}^3$.
So, I think the magnetic field just is the spacetime field $\mathbf{B}$, which is a mixed vector field, whereas $\mathbf{B} \circ \phi^{-1}$ is indeed a purely mathematical entity. Then I suggest that the mixed vector field $\mathbf{B}$ is causally efficacious. I don't mean that the co-ordinate representation $\mathbf{B} \circ \phi^{-1}$ is causally efficacious, as that would conflate two separate entities, one mixed and the other pure.
But, if I understand it right, Gabriele's view involves actually denying the existence of the magnetic field $\mathbf{B}$ on spacetime, as well as its co-ordinate representation $\mathbf{B} \circ \phi^{-1}$. What there is, "physically", is some other entity, M, whose properties seem to me ineffable: for example, M isn't a vector field, with zero divergence, whose vector product with the velocity of a point particle gives the local force vector. All we know about this entity M is
(R) $\mathbf{B}$ represents M.
But then, given that "represents" is defined in terms of the existence of isomorphic mappings on Gabriele's theory, I now suspect that this view will face a severe Newman problem, so that Maxwell's theory, thus construed, collapses to its empirical consequences.
So, provisionally, it seems that we have a form of scientific anti-realism with a Newman problem.
Cheers,
Jeff
2. Hi Gabriele and Mauricio,
Thanks chaps for a very good discussion of these topics.
Gabriele - I'll look forward to the book and definitely have a chat if you're nearby!
Cheers,
Jeff
http://physics.stackexchange.com/questions/15/what-experiment-would-disprove-string-theory/19086
# What experiment would disprove string theory?
I know that there's a big controversy between two groups of physicists:
1. those who support string theory (most of them, I think)
2. and those who oppose it.
One of the arguments of the second group is that there's no way to disprove the correctness of string theory.
So my question is: is there any well-defined experiment that would disprove string theory?
-
2
None as far as I know. It's also darn hard to disprove it. There may be some ancillary evidence however, but nothing direct (certainly at present). I'll leave someone more knowledgeable than me to answer this however. – Noldorin Nov 2 '10 at 19:57
27
It's not as big a controversy as you might think. The vast majority of physicists don't work on anything close to string theory at all. – j.c. Nov 2 '10 at 20:25
7
There is no experiment to prove the correctness of any theory. So the argument you're referring to is that it's extremely difficult (maybe impossible?) to do an experiment which would disprove string theory. – dF_ Nov 2 '10 at 20:25
2
have you thought about giving the correct answer to Lubos? – John McVirgo Nov 9 '11 at 15:16
3
A person (particularly a layman) can easily be drawn into the idea of 'camps' in science. The OP suggests that there is a 'for string theory' camp and an 'against string theory' camp. The whole point of science is not to emotionally attach oneself to a hypothesis (positively or negatively) but rather to search for the truth, no matter what it is. I imagine it would be a bit disheartening to string-theorists if it were proven wrong, but its ruling out would provide guidance for competing theories. – user12345 Sep 9 '12 at 19:58
show 3 more comments
## 14 Answers
String theory should come with a proposal for an experiment, and make some predictions about the results of the experiment; then we could check against the real results.
If a theory cannot come with any predictions, then it will disprove itself little by little ...
The problem is that, with string theory, this is extremely difficult to do, and string theorists have years in front of them to go in that direction; but if in 100 years we are still at the same status, then that would be a proof that string theory is unfruitful...
-
17
While a fair point, I don't think this really answers the question, as it will not be a disproof. :/ – BBischof Nov 2 '10 at 22:40
12
Certain ancient Greek philosophers theorized that matter came in chunks - the word 'atom' is Greek if I recall. They were right though it wasn't until only 100 years ago anyone could be sure. So a long time passing without verification should not discredit an idea. – DarenW Nov 16 '10 at 3:28
7
It is very much the case that if a theory cannot make any verifiable predictions, it is a useless theory, whether it is correct or not, and should therefore not be pursued. – Noldorin Nov 20 '10 at 20:32
4
I loved DarenW's cogent first comment, though the second seems based purely on speculation. (What would be the mechanism for failure of a disprovable theory, anyway?) I got involved in string theory because we don't know how two electrons interact with each other, gravitationally. We still don't know for sure. It doesn't matter, since we can't measure the gravitational effects anyway... but still. Most people are pragmatists and don't care. Some care. None of this is controversial. The root of the controversy probably boils down to resentment in the allocation of hiring resources. – Eric Zaslow Nov 20 '10 at 21:14
12
-1 There is no content in this answer. – Erick Robertson Jan 26 '11 at 13:34
show 8 more comments
One can disprove string theory by many observations that will almost certainly not occur, for example:
1. By detecting Lorentz violation at high energies: string theory predicts that the Lorentz symmetry is exact at any energy scale; recent experiments by the Fermi satellite and others have shown that the Lorentz symmetry works even at the Planck scale with a precision much better than 100% and the accuracy may improve in the near future; for example, if an experiment ever claimed that a particle is moving faster than light, string theory predicts that an error will be found in that experiment
2. By detecting a violation of the equivalence principle; it's been tested with the relative accuracy of $10^{-16}$ and it's unlikely that a violation will occur; string theory predicts that the law is exact
3. By detecting a mathematical inconsistency in our world, for example that $2+2$ can be equal both to $4$ as well as $5$; such an observation would make the existing alternatives of string theory conceivable alternatives because all of them are mathematically inconsistent as theories of gravity; clearly, nothing of the sort will occur; also, one could find out a previously unknown mathematical inconsistency of string theory - even this seems extremely unlikely after the neverending successful tests
4. By experimentally proving that the information is lost in the black holes, or anything else that contradicts general properties of quantum gravity as predicted by string theory, e.g. that the high center-of-mass-energy regime is dominated by black hole production and/or that the black holes have the right entropy; string theory implies that the information is preserved in any processes in the asymptotical Minkowski space, including the Hawking radiation, and confirms the Hawking-Bekenstein claims as the right semiclassical approximation; obviously, you also disprove string theory by proving that gravitons don't exist; if you could prove that gravity is an entropic force, it would therefore rule out string theory as well
5. By experimentally proving that the world doesn't contain gravity, fermions, or isn't described by quantum field theories at low energies; or that the general postulates of quantum mechanics don't work; string theory predicts that these approximations work and the postulates of quantum mechanics are exactly valid while the alternatives of string theory predict that nothing like the Standard Model etc. is possible
6. By experimentally showing that the real world contradicts some of the general features predicted by all string vacua which are not satisfied by the "Swampland" QFTs as explained by Cumrun Vafa; if we lived in the swampland, our world couldn't be described by anything inside the landscape of string theory; the generic predictions of string theory probably include the fact that gravity is the weakest force, moduli spaces have finite volume, and similar predictions that seem to be satisfied so far
7. By mapping the whole landscape, calculating the accurate predictions of each vacuum for the particle physics (masses, couplings, mixings), and by showing that none of them is compatible with the experimentally measured parameters of particle physics within the known error margins; this route to disprove string theory is hard but possible in principle, too (although the full mathematical machinery to calculate the properties of any vacuum at any accuracy isn't quite available today, even in principle)
8. By analyzing physics experimentally up to the Planck scale and showing that our world contains neither supersymmetry nor extra dimensions at any scale. If you check that there is no SUSY up to a certain higher scale, you will increase the probability that string theory is not relevant for our Universe but it won't be a full proof
9. A convincing observation of varying fundamental constants such as the fine-structure constant would disprove string theory unless some other unlikely predictions of some string models that allow such a variability would be observed at the same time
The reason why it's hard if not impossible to disprove string theory in practice is that string theory - as a qualitative framework that must replace quantum field theory if one wants to include both successes of QFT as well as GR - has already been established. There's nothing wrong with it; the fact that a theory is hard to exclude in practice is just another way of saying that it is already shown to be "probably true" according to the observations that have shaped our expectations of future observations. Science requires that hypotheses have to be disprovable in principle, and the list above surely shows that string theory is. The "criticism" is usually directed against string theory but not quantum field theory; but this is a reflection of a deep misunderstanding of what string theory predicts; or a deep misunderstanding of the processes of the scientific method; or both.
In science, one can only exclude a theory that contradicts the observations. However, the landscape of string theory predicts the same set of possible observations at low energies as quantum field theories. At long distances, string theory and QFT as the frameworks are indistinguishable; they just have different methods to parameterize the detailed possibilities. In QFT, one chooses the particle content and determines the continuous values of the couplings and masses; in string theory, one only chooses some discrete information about the topology of the compact manifold and the discrete fluxes and branes. Although the number of discrete possibilities is large, all the continuous numbers follow from these discrete choices, at any accuracy.
So the validity of QFT and string theory is equivalent from the viewpoint of doable experiments at low energies. The difference is that QFT can't include consistent gravity, in a quantum framework, while string theory also automatically predicts a consistent quantum gravity. That's an advantage of string theory, not a disadvantage. There is no known disadvantage of string theory relatively to QFT. For this reason, it is at least as established as QFT. It can't realistically go away.
In particular, it has been shown in the AdS/CFT correspondence that string theory is automatically the full framework describing the dynamics of theories such as gauge theories; it's equivalent to their behavior in the limit when the number of colors is large, and in related limits. This proof can't be "unproved" again: string theory has attached itself to the gauge theories as the more complete description. The latter, older theory - gauge theory - has been experimentally established, so string theory can never be removed from physics anymore. It's a part of physics that will stay with us, much like QCD or anything else in physics. The question is only what is the right vacuum or background to describe the world around us. Of course, this remains a question with a lot of unknowns. But that doesn't mean that everything, including the need for string theory, remains unknown.
What could happen - although it is extremely, extremely unlikely - is that a consistent, non-stringy competitor to string theory that is also able to predict the same features of the Universe as string theory could emerge in the future. (I am carefully watching all new ideas.) If this competitor began to look even more consistent with the observed details of the Universe, it could supersede or even replace string theory. It seems almost obvious that there exists no "competing" theory because the landscape of possible unifying theories has been pretty much mapped, it is very diverse, and whenever all consistency conditions are carefully imposed, one finds that one returns to the full-fledged string/M-theory in one of its diverse descriptions.
Even in the absence of string theory, it could hypothetically happen that new experiments will discover new phenomena that are impossible - at least unnatural - according to string theory. Obviously, people would have to find a proper description of these phenomena. For example, if there were preons inside electrons, they would need some explanation. They seem incompatible with the string model building as we know it today.
But even if such a new surprising observation were made, a significant fraction of the theorists would obviously try to find an explanation within the framework of string theory, and that's obviously the right strategy. Others could try to find an explanation elsewhere. But neverending attempts to "get rid of string theory" are almost as unreasonable as attempts to "get rid of relativity" or "get rid of quantum mechanics" or "get rid of mathematics" within physics. You simply can't do it because those things have already been shown to work at some level. Physics hasn't yet reached the very final end point - the complete understanding of everything - but that doesn't mean that it's plausible that physics may easily return to the pre-string, pre-quantum, pre-relativistic, or pre-mathematical era again. It almost certainly won't.
-
19
By saying this: "By mapping the whole landscape, calculating the predictions of each vacuum for the particle physics, and by showing that none of them is compatible with the experimentally measured parameters of particle physics; this route to disprove string theory is hard but possible in principle, too". You're basically saying, "we've created a model with a huge number of parameters, and we've found a great fit!". And string theory is qualitatively different from Relativity and QM, as the latter two theories have made predictions that were tested, not just satisfied consistency tests. – Jerry Schirmer Jan 17 '11 at 18:47
30
So basically, if the universe behaves in a way that contradicts the predictions of QFT with weak gravity (extremely unlikely), or if information is destroyed by black holes (almost inconceivably difficult to test, even with Planck energy accelerators) or by mapping the landscape, and showing it doesn't match our universe (incredibly computationally infeasible). – Peter Shor Jan 17 '11 at 19:51
11
Dear Peter, you have just apparently omitted 90% of the methods, not sure why, but OK. At any rate, the falsification is hard but this is true, almost by definition, for any conceivable mathematically consistent hypothesis that addresses the Planck scale - or any other very high-energy - physics. This difficulty is here because of the very problems we're trying to answer - and it has no impact on the validity of string theory whatsoever. – Luboš Motl Jan 18 '11 at 9:01
28
Items 1-5 and 9 seem like they would be pretty shocking whether or not string theory is a true description of the world. Saying "string theory is disproven if QM, or the equivalence principle, or the soundness of mathematics itself is disproven" seems a bit like saying "GR would be falsified if it turns out there's no such thing as gravity." Technically true, but not nearly as useful as being able to say "such-and-such star will have such-and-such apparent position during the next solar eclipse." – Tim Goodman Feb 21 '11 at 6:35
7
In other words, I'd think the predictions of a theory which aren't part of previously accepted physics are the ones we'd really like to test. Items 6-8 seem more in this vein, although 8 sure isn't coming any time soon. – Tim Goodman Feb 21 '11 at 6:40
show 15 more comments
Since many people seem to have very odd ideas about this, let's address this from a much simpler point of view.
Let's suppose you have a friend who only knows math at the level of arithmetic of positive integers. You try to tell him about the existence of negative numbers, and he tells you,
That's stupid, there's obviously no such thing as "negative" numbers, how can I possibly measure something so stupid? Can you have negative one apple? No, you can't. I can owe you positive one apple, but there's clearly no such thing as negative apples.
How can you start to argue that there is such a thing as negative numbers?
A very powerful first step is mathematical consistency. You can list all of the abstract properties you believe to characterize everything about positive integer arithmetic:
• For all a,b,c, a(b+c) = ab+ac
• For all a,b, a+b=b+a, ab=ba
• There exists a number, called 0, such that, for all a, a+0=0+a=a, a0=0a=0
• There exists a number, called 1, such that, for all a, 1a=a1=a
(note that, in sharp contrast to the case of real numbers, the first property can be proved with induction, and need not be an axiom. Similarly, other listed properties can be proved from other ones designated to be more basic if one wishes, which can not be done in the case of the reals.)
So, once you both agree that these axioms characterize the positive integers completely, you can show that these hypothetical negative numbers, based on their formal properties, are consistent with the above axioms. What does this show?
The positive integers, with the addition of the negative integers, can do at least as much as the positive integers by themselves.
(STOP At this point, pause to realize how powerful this constraint is!! How many other ways could one generalize arithmetic, at this level, to something else that is consistent with the properties you want? Zero. There is absolutely no other way to do it. This is incredibly suggestive, and you should keep this in mind for the rest of the cartoon argument, and see how every argument that follows is secretly an aspect of this one!)
Sure, you can write down toy models like that, and they may be consistent, but they don't correspond to reality.
Now, what else do you need to demonstrate to your friend to convince him of the validity of the negative numbers?
You find something else they can do that you can't do with the positive numbers alone. Simply, you can point out that not every algebraic equation over the positive integers has a solution:
x + 1 = 0
does not have a solution.
But, it is a trivial fact that extending to the negative numbers allows you to solve such equations. Then, all that's left to convince your friend of the validity of negative numbers is to show that this is equivalent to solving an ("a priori") different problem which only involved arithmetic of positive integers:
x+1=0 <=> y + 1 = 1
So, y = 0, and y=x+1 is equivalent to the other problem.
To be complete, we also have to consider problems that are "unique" to the negatives, such as (-1)(-1) = 1, but in the realm of integers, these are trivial matters that are reducible to the above. Even in the case of reals, given the other things we've shown, these consequences are almost "guaranteed" to work out intuitively obviously.
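(An editorial aside, not part of the original answer: the consistency claim above can at least be sanity-checked mechanically. The sketch below brute-forces the listed properties over a finite window of integers, positive and negative alike, and confirms that x + 1 = 0 now has a solution. A finite check is of course only an illustration, not a proof.)

```python
# Brute-force sanity check over a small window of integers; an illustration
# of the "consistency with the old axioms" point, not a proof.
from itertools import product

ns = range(-10, 11)                                    # integers from -10 to 10

# The properties listed above still hold once negatives are included:
assert all(a * (b + c) == a * b + a * c for a, b, c in product(ns, repeat=3))
assert all(a + b == b + a and a * b == b * a for a, b in product(ns, repeat=2))
assert all(a + 0 == a and a * 1 == a for a in ns)

# ...and the previously unsolvable equation now has a solution:
assert any(x + 1 == 0 for x in ns)                     # namely x = -1
assert (-1) * (-1) == 1                                # the "new" fact is consistent too
print("all checks passed")
```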
Now, assuming your friend is a reasonable, logical, person, he must now believe in the validity of negative numbers.
What have we shown?
• Consistency, both with previous models and with itself
• The ability to solve new problems
• The reduction of some problems in the new language to problems in the old language
Now, to decide if this is a good model for a particular system, you must look at the subset of problems that did not have a solution before, and see if the new properties characterize that system. In this case, that's trivial, because the properties of negative numbers are so obvious. In the case of applying more complicated things to describe the details of physical situations, it's less obvious, because the structure of the theory, and the experiments, is not so simple.
How does this apply to string theory? What must we show to convince a reasonable person of its validity? Following the above argument, I claim:
• String theory reproduces (by construction) general relativity
• String theory reproduces (by construction) quantum mechanics (and by the above, quantum field theory)
So string theory is at least as good as the rest of the foundations of physics. Stop again to marvel at how powerful this statement is! Realistically, how many ways are there to consistently and non-trivially write a theory that reduces to GR and QFT? Maybe more than one, but surely not many!
Now the question is--what new do we learn? What additional constraints do we get out of string theory? What problems in GR and QFT can be usefully written as equivalent problems in string theory? What problems can string theory solve that are totally outside of the realm of GR and QFT?
Only the last of these is beyond the reach of current experiments. The "natural" realm where string theory dominates the behavior of an experiment is at very high energies, or equivalently, very short distances. Simple calculations show that these naive regions are well outside of direct detection by current experiments. (Note that in the above example of negative numbers, the validity of the "theory" strictly in the corresponding realm didn't need to be directly addressed to make a very convincing argument; pause to reflect on why!)
However, theoretical "problems" with the previous theories, such as black hole information loss, can be solved with string theory. Though these can't be experimentally verified, it's very suggestive that they admit the expected solution in addition to reproducing the right theories in the right limits.
There are two major successes of string theory that satisfy the other two requirements.
AdS/CFT allows us to solve purely field theory problems in terms of string theory. In other words, we have solved a problem in the new language that we could already solve in the old language. A bonus here is that it allows us to solve the problem precisely in a domain where the old language was difficult to deal with.
String theory also constrains, and specifies, the spectrum and properties of particles at low energies. In principle (and in toy calculations), it tells us all of the couplings, generations of particles, species of particles, etc. We don't yet know a description in string theory that gives us exactly the Standard Model, but the fact that it does constrain the low-energy phenomenology is a pretty powerful statement.
Really, all that's left to consider to convince a very skeptical reader is that one of the following things is true:
• It is possible for string theory to reproduce the Standard Model (e.g., it admits solutions with the correct gauge groups, chiral fermions, etc.)
• It is not possible for string theory to reproduce the Standard Model (e.g., there is no way to write down chiral theories, it does not admit the correct gauge groups, etc. This is the case in, e.g., Kaluza-Klein models.)
I claim, and it is generally believed (for very good reasons), that the first of these is true. There is no formal, complete, mathematical proof that this is the case, but there is absolutely no hint of anything going wrong, and we can get models very similar to the standard model. Additionally, one can show that all of the basic features of the Standard Model, such as chiral fermions, the right number of generations, etc, are consistent with string theory.
We can also ask, what would it mean if string theory was wrong? Really, this would signal that,
• The theory was mathematically inconsistent (there is no reason to believe this)
• At a fundamental level, either quantum mechanics or relativity failed in some fairly pathological way, such as a violation of Lorentz invariance, or unitarity. This would indicate that a theory of everything would look radically different than anything written down so far; this is a very precarious claim--consider what would happen in the example of arithmetic in the above if there were something "wrong" with addition.
• The theory is consistent, and a generalization of GR and QFT, but is somehow not a generalization in the right "limit" in some sense. This happens in, e.g., Kaluza-Klein theory, where chiral fermions can't be properly written down. In that case, a solution is also suggested by a sufficiently careful analysis (and is one potential way to get to string theory).
Of these three possibilities, the first two are extremely unlikely. The third is more likely, but given that it is known that all the basic features can show up, it would seem very strange if we could almost reproduce what we want, but not quite. This would be like, in the arithmetic example, being able to reproduce all the properties we want, except for 1+ (-1) = 0.
If you're careful, you can phrase my argument in a more formal way, in terms of what it precisely means to have a consistent generalization, in the sense of formal symbolic logic, if you like, and see what must "fail" in order for the contrapositive of the argument to be true. (That is, (stuff) => strings are true, so ~strings => ~(stuff), and then unpack the possibilities for what ~(stuff) could mean in terms of its components!)
-
5
Your answer is a tour de force. I give it +1, with the caveat that I disagree with your statement that "String theory reproduces (by construction) quantum mechanics". I do not know of any aspect of string theory or of any claims (speculative or otherwise) which could substantiate such a statement. We start off with the action (Nambu-Goto/Polyakov) for a string. This action describes a classical object. We go on to "quantize" this action following the standard prescriptions of quantum mechanics. In no way does QM arise from a stringy basis, IMHO. Please correct me if you think I'm mistaken. – user346 Jan 18 '11 at 3:36
Well, it doesn't, from an operator-theory point of view, fix the quantization procedure or anything, so you can't really quite construct an "axiomatic QFT" based off of it. But it is still designed to be appropriately quantum mechanical in the right limits. But knowing, and using, quantum mechanics in advance is not an impediment to constructing it or to my statement ;). – Mr X Jan 20 '11 at 20:42
4
I'm not accepting string theory as a consistent well-defined theory of physics until string theorists agree on the answer to the question: How does information escape from a black hole? Right now, I'm not even sure string theorists can correctly formulate the conditions that a string theory state is a black hole. – Peter Shor Jan 21 '11 at 2:34
There are many papers addressing this. And there are interesting statements you can make in, e.g., AdS/CFT by thinking about how the field theory corresponds to black holes in the gravity dual. I believe Lubos's blog has a few discussions of this in fairly simple terms. Of course, many details are not yet understood, but the general idea certainly is. – Mr X Jan 22 '11 at 17:16
4
I like the metaphor of the extension of the positive integers to the signed integers as a model for the extension of relativity+quantum theory to string theory. But I think a better metaphor is the extension of Euclid's axioms. It turned out there were multiple consistent (even "correct") extensions of the axioms. But of course, only one extension is "true" (or "real") for any given universe. This is rather the question being asked here: is there any experimental way to know which extension reflects reality? – Mark Beadles Jan 31 '12 at 2:20
String theory was constructed with the idea that at low energies it should reduce to the quantum mechanical and particle world that we see every day. This is analogous to the correspondence principle in quantum mechanics.
In some sense, any experiment that discredits quantum mechanics will cause a serious re-examination of string theory. However, as many experimentalists will joke, a theorist will always find a way to fix his theory to match the observations. In any case, quantum theory seems very well supported, and it is unlikely that it will be discredited any time soon.
Direct tests of string theory, however, will have to wait until we can probe much higher energies.
-
4
...or someone finds a much more powerful and clever "magnifier" for string theoretical effects than is currently known. – dmckee♦ Nov 18 '10 at 3:51
1
And what are those effects? What string theory predicts for high energies that other theories do not? – Anixx Mar 17 '12 at 19:01
The only way to 'test' string theory is to actually figure out what it predicts first, which is ambiguous at the moment.
Contrary to most claims, String theory is actually a unique theory in that there are no adjustable free parameters. However it has a large or possibly infinite number of classical solutions or 'vacua'. Most of these vacua look nothing like the real world, some (by some I mean it could be ~ 10^500) are very similar (in that they seem to correspond to low energy physics like the standard model), and at most one corresponds to the real world.
When you have specified a vacuum, you then have fixed the predictions identically -- for everything (that's what it means to be a theory of everything). So for instance, you could compute the mass of the electron to however many decimal places and then test it in the lab, and also the vacuum might make an unambiguous prediction for cosmological objects (e.g. the existence of cosmic strings or domain walls). We don't know exactly.
Of course, absent a way to figure out which vacuum corresponds to reality, we are left with the same problem that quantum field theory has. Namely, you actually have to go out and measure certain things first in order to make predictions (so for instance in the standard model we need to pin down the 26 adjustable free parameters, like the Yukawa couplings, masses of elementary particles and so forth). But once you have done that, you can then predict infinitely many other things, like scattering cross sections.
So in string theory, that would mean some way of whittling down the 10^500 vacua to a more human number, like 50, 10, or better 1 or 0 (0 means the theory is falsified), which requires doing scattering experiments at the Planck scale, which means a particle accelerator roughly on the scale of the Milky Way.
Of course, it may turn out that string theory makes unambiguous falsifiable predictions, but that really requires a great deal of theoretical work b/c it implies knowing something about not just the vacuum that corresponds to the real world, but also the 10^500 other ones if you get my meaning. Still, we know that it does make some predictions. For instance, the existence of the quanta of gravity is universal across all solutions. Likewise, the fact that quantum mechanics and special relativity must hold is another robust prediction.
-
1
Nice answer @Columbia. My question is about your last sentence: the fact that quantum mechanics and special relativity must hold [in string theory] is another robust prediction. I've heard this said before and I don't see how these are "predictions" of string theory. The string action is covariant so relativity is obeyed. Quantization follows the rules of quantum mechanics. These ingredients are part of the setup from the get go. Perhaps I am misinterpreting the statement. – user346 Jan 27 '11 at 7:25
@SpaceCadet. It is true that if you start learning st from GSW in the usual way, world-sheet LI and Dirac QM are automatically there from the beginning. So in a sense it is not surprising that they stay that way, although you could in principle imagine Lorentz breaking by anomalies. However, if you start from equally valid formulations of st where LI is not manifest (e.g. light-cone gauge), then it is always the case that it is recovered. These are non-trivial consistency checks. Similarly with quantum mechanics: you often recover the Dirac rules whenever you might have forgotten them – Columbia Jan 27 '11 at 22:02
3
The point I think is that String theory is very tightly constrained, perhaps more so than any other physical theory invented, and it's a bit of a culture shock to pause for a minute and appreciate just how much we took for granted in other theories. There is no wiggle room really for theorists to come in and insert something by hand, b/c if one thing breaks then everything breaks, and it usually happens in a very obvious and violent way. – Columbia Jan 27 '11 at 22:05
Where does this number of variants, 10^500, come from? Why do you not know whether the number of possibilities is finite or not? – Anixx Mar 17 '12 at 19:20
It is very hard to probe the scales of quantum gravity experimentally so there are very few possibilities for testing if string theory is incorrect as a unified theory encompassing quantum gravity. One of the few observations that has probed the Planck scale was the Fermi gamma ray telescope observation which showed that photons of different energy travel very close to the same speed over cosmic distances. If the result had shown a dispersion of speed it would have disproved string theory and encouraged physicist to look at other ideas.
Of course any observation that disproves quantum theory or general relativity will also disprove string theory, but the real interest is in observations that relate directly to quantum gravity because the separate theories are already well established in their own regimes.
Although it is hard to get direct experimental input for any theory of quantum gravity the constraints on any theory from the logical requirement to combine quantum theory and gravity into one consistent theory for physics are already tremendously strong. In particular there should be some perturbative low energy limit that describes gravity in terms of gravitons which interact with matter. Despite much effort string theory is the only approach that can accomplish this and it is very hard to conceive of a second way. In fact it is very surprising that there is this one way because it requires almost miraculous cancellations of anomalies to make it work. This gives many people the confidence that string theory is the correct path to follow.
Ultimately there needs to be some definitive observation of a quantum gravitational effect that supports string theory. As I said, there are not many possibilities for such observations at present, but this would be a problem for any alternative theory of quantum gravity too.
It is possible that we might get lucky and observe large extra dimensions at the LHC, but there is no reason to expect that. Another possibility that I think is a little more plausible is that supersymmetry would be observed and found to take a form that supports a supergravity origin. Still we have no moral right to expect the universe to hand us such an easy clue and string theory does not promise one.
Such difficulties do not mean that string theory is wrong as some opponents say. It just means that it is going to be difficult to explore quantum gravity empirically.
There has still been constant progress in understanding string theory from the theoretical side and that is continuing. More work needs to be done on the non-perturbative side of string theory so that we can better understand its implications for cosmology. There is some hope that an observation of relic gravitational waves or even low frequency radio waves left over from the big bang may have a characteristic signature dependent on quantum gravitational effects. Again we have no moral rights to demand that such an observation will be forthcoming but it might if we are lucky.
-
Your indication of the importance of quantum gravity to this question is a great point. – Mark Beadles Jan 31 '12 at 1:43
String excitations, or respectively the lack thereof. The problem, however, is that unless you believe in a theory with a lowered Planck scale, you'll have to go to energies we'll never be able to reach in the lab to test this regime. But at least in principle it's falsifiable.
-
As suggested by other posters, the key question is energy.
At very high energy levels, approaching some of the quantum gravity 'limits', the string-like nature of fundamental particles would become increasingly apparent. (In terms of an experiment, at a high enough energy level, for instance, there would likely be new specific 'resonances' of the material that could be identified.)
You might also find this article interesting.
-
Disproving string theory is, I would say, not very likely; but there are two ways that would cause trouble for string theory: one theoretical and one experimental. Both are unlikely to happen soon (and both are unlikely to ever disprove it, because string theory seems to be on the right track, at least so far). On the theoretical side, to "disprove" it, at least as a first approximation that would cause some scepticism, one should find a theory which solves at least as many problems as string theory solves, and deals with at least some of the remaining problems of string theory. (Even if this other theory exists, it will take several decades after its discovery to evolve anyway.)
Experimentally, as everyone else said, we need to achieve higher energies on Earth, or possibly find some better way to locate and detect these higher energies in the universe. To achieve, or to locate somewhere, the energies where string-theory effects are detectable will take many, many years from now - at least that is the common belief.
The "argument" that I sometimes hear, that string theory is not a theory or is wrong because it cannot be verified or disproved, is completely wrong - simply because it is not an argument. If we have to reach higher energies to see string theory, then we just have to; it is very possible that this is the physics. In any case, even simpler physics stories, like that of the neutrino - which was criticized at the time (the critics called it 'not even wrong', or, at their most optimistic, said "there is no practical way of observing the neutrino"), with Pauli and Fermi making the theoretical prediction around 1930 and Cowan and Reines making the discovery in 1956 - can teach us some things...
In one sentence, the answer to your last line is: until now there is no experiment or detector that can verify string theory. The belief is that in the future this will happen, when we are able to achieve, or to locate somewhere, the higher energies at which the stringy effects become detectable.
-
If I understood correctly (?) something Lubos once said, string theory requires torsion (in GR) to be zero. There are experiments underway/planned right now to measure torsion. Therefore not only is there a potential experimental disproof of string theory, but we should get the data soon.
-
My experience with the predictability of string/brane theory is as follows.
(1) Anything that is observed can be retrodicted by string/brane theory.
(2) Anything that is not observed can be explained via one of the 10^500 versions of string/brane theory.
Sources: various Boltzmann brains, e.g., Lenny Susskind, Brian Greene, M. Kaku, Gordon Kane, L. Motl, etc.
Love that postmodern pseudo-science!
-
1
This is not true--- SU(800) gauge group for the Higgs can't be retrodicted, neither can a new charge on the proton of tiny magnitude to have escaped detection until now (or any other ultra-weak charges without ultra-light particles). It predicts a lot of relations between black hole and particle physics, and the mathematical consistency requirements are essentially forbidding of any other quantum gravity construction. – Ron Maimon Aug 13 '12 at 4:27
String theory takes Lorentz symmetry and the existence of extra dimensions among its other postulates. The problem is that these extra dimensions would manifest themselves only through a violation of Lorentz symmetry, which makes string theory a fringe theory at the level of rudimentary logic - until it finds a way to demonstrate the existence of extra dimensions without a violation of Lorentz symmetry.
You can understand this problem, for example, with a water-surface model of space-time. As long as the spreading of waves is not affected by the underwater bulk, it remains perfectly background independent, in accordance with relativity theory. The only way to observe the additional extra dimension from the 2D surface is the dispersion of surface ripples in the 3D underwater bulk. Unfortunately, just this dispersion would break the background invariance of surface-ripple spreading. This explains why string theory can never arrive at a definite solution, being stuck in the landscape of 10^500 possible solutions.
This means string theory cannot be falsified; in a strict scientific sense it is disproved already, being based on an inconsistent set of postulates.
-
5
This is just plainly wrong. Extra dimensions in string theory do not violate 3+1 dimensional Lorentz symmetry and are perfectly compatible with observation. – WIMP Jan 17 '12 at 11:32
The answer is simply - NO. There is no way of disproving String Theory without first disproving Quantum Field Theory (with QM inside) and/or General Relativity. This is because String Theory does not predict the outcome of any performable experiment that cannot already be explained by those two theories. Therefore, the question of string theory being right or wrong is just plain philosophy until we devise accelerators that entangle the whole galaxy. Very costly philosophy, as String Research costs a lot of money.
-
2
This is just nonconstructive trolling. The purpose of physics SE is to discuss physics and answer questions in a CONSTRUCTIVE manner such that everybody can learn something from it. It is not about ranting against research you personally dislike for some reason: -1 – Dilaton Jan 31 '12 at 8:44
3
I just answered the question. Furthermore, if you compare my answer with the answer which has the most +1's, you'll find that they do not differ in essence. It is because people from strings take such discussions really emotionally. There is simply no experiment available now (or in the foreseeable future) for humanity to disprove ST, whether you like it or not. The fact that ST agrees with current physics so well is its greatest strength and greatest weakness at the same time. Remember - all great theories of the past have shown measurable invalidity of the theories they replaced. – Terminus Feb 1 '12 at 12:20
1
And if you think I am being arrogant and a troll - then what can you say about this: "By detecting a mathematical inconsistency in our world, for example that 2+2 can be equal both to 4 as well as 5; such an observation would make the existing alternatives of string theory conceivable alternatives because all of them are mathematically inconsistent as theories of gravity; clearly, nothing of the sort will occur; also, one could find out a previously unknown mathematical inconsistency of string theory - even this seems extremely unlikely after the neverending successful tests" – Terminus Feb 1 '12 at 12:25
2
String theory is not philosophy--- it predicts extremely precise relations between Planckian scattering that reveals the internal geometry of the little dimensions, and ordinary low-energy physics. Unfortunately, we are in a position where we can't afford to probe the Planck scale directly, that's all. – Ron Maimon Aug 13 '12 at 4:28
As noted by other posters, string theory was CONSTRUCTED in order to reproduce general relativity and quantum physics "all in one". But to do that, it required a number of new "ideas" that were used to make "enough room" for relativity and quanta. One of those ideas is so-called "extra dimensions". Those "extra dimensions" have never been observed by anyone, but they were introduced into string theory because string theorists were unable to "unify" relativity and quantum physics in the 4 known dimensions. They were unable to do it not because it is not possible at all, but mainly because they restricted themselves to the use of a limited number of instruments (such as group theory etc.).
Theoretical physicists like to play with models that are beyond reality. It is well known that it is much easier to develop a model of a horse with 26 (or 10) legs than of a horse with 4 legs. But string theorists can argue that the remaining 22 (or 6) legs are "so tiny" that we do not see them.
-
1
Totally fails to understand the origin of the extra dimensions---they were not "introduced"; they fall out of the theory as an absolute necessity, and then a patch has to be devised to explain their non-observation. – dmckee♦ Nov 30 '12 at 16:24
The string theory is inconsistent in 4 dimensions. Is this the only reason to claim "absolute necessity" of extra dimensions? – Murod Abdukhakimov Dec 1 '12 at 14:03
@MurodAbdukhakimov The theory is constructed of mathematical groups (a technical term). Groups can have representations in various dimensions, but most groups have some minimum number of dimensions at which representations exist. The groups of string theory (the ones that combine all the symmetries of known physics) don't have representations with only four space-time dimensions. The extra dimensions are a slightly embarrassing requirement of the theory, not something that was added by hand. – dmckee♦ Jan 8 at 23:07
1
No doubt it is embarrassing. Taking into account that the world IS 4-dimensional, the only natural conclusion is that there is NO NEED to combine all the symmetries of known physics into one mathematical group (technical term). Regretfully, too many people think that this is the only option. Unification is not just combining the groups into one group. – Murod Abdukhakimov Jan 9 at 7:13
http://physics.stackexchange.com/questions/20142/find-the-unit-vector-of-a-three-dimensional-vector?answertab=oldest
# Find the Unit Vector of a Three Dimensional Vector
How can I find the unit vector of a three dimensional vector? For example, I have a problem that I am working on that tells me that I have a vector $\hat{r}$ that is a unit vector, and I am told to prove this fact:
$\hat{r} = \frac{2}{3}\hat{i} - \frac{1}{3}\hat{j} - \frac{2}{3}\hat{k}$
I know that with a two-dimensional unit vector you can split it up into components, treat it as a right triangle, and find the hypotenuse. Following that idea, I tried something like this, where I found the combined magnitude of the $\hat{i}$ and $\hat{j}$ components, then, using that vector, found the magnitude between ${\hat{v}}_{ij}$ and $\hat{k}$:
$\left|\hat{r}\right| = \sqrt{\sqrt{{\left(\frac{2}{3}\right)}^{2} + {\left(\frac{-1}{3}\right)}^{2}} + {\left(\frac{-2}{3}\right)}^{2}}$
However, this does not prove that I was working with a unit vector, as the answer did not evaluate to one. How can I find the unit vector of a three-dimensional vector?
-
1
The Pythagorean theorem works for any number of dimensions, not just two. – Mike Dunlavey Jan 28 '12 at 20:15
## 2 Answers
Since this is homework, we are not supposed to give you the answer. But one mistake you made is in your formula for the magnitude of $r$ - the inner square root needed to be squared. So the length of $r$ is simply the square root of the sum of the squares of the $i$, $j$ and $k$ lengths.
Good luck...
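(An editorial aside, not part of the original answer: once you have worked the formula out yourself, a one-line numerical check is easy. The sketch below simply applies the 3D length formula $|\hat{r}| = \sqrt{x^2 + y^2 + z^2}$ to the components given in the question.)

```python
# Minimal numerical check of the 3D length formula |r| = sqrt(x^2 + y^2 + z^2).
import math

r = (2/3, -1/3, -2/3)                      # components of r-hat from the question
length = math.sqrt(sum(c * c for c in r))
print(length)                              # 1.0 (up to floating-point rounding)
```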
-
1
You should end up with $\sqrt{(2^2+1^2+2^2)/3^2}$ - isn't that $1$? – FrankH Jan 28 '12 at 20:41
Rats, I made a typo in WolframAlpha, I ended up cubing the $\hat{i}$ direction in my calculation. Thank you, Frank! – spryno724 Jan 28 '12 at 20:44
No problem...... – FrankH Jan 28 '12 at 20:46
Although you already have an answer, I want to show you a visualization. The dark black vector is $\hat{r}$ and in green is the projection on the `XY` plane (ignoring the `z`-axis). In blue is only the `z` axis component vector. These form a right triangle, and if you want to know the length of the hypotenuse ($\hat{r}$) you will need the lengths of the other two vectors. Now, the length of the green vector you said you know how to get, and the length of the blue vector is trivial. If you work it out, you will arrive at the 3D formula for vector lengths.
PS. Sketches were done in GeoGebra 5.0 beta (which has some 3D capabilities now).
-
http://crypto.stackexchange.com/questions/1963/how-large-should-a-diffie-hellman-p-be/1964
# How large should a Diffie-Hellman p be?
In a Diffie-Hellman exchange, the parties need to agree on a prime `p` and a base `g` in order to continue. Assume an application that will initiate handshakes with a large portion of its users, where each handshake only needs to remain realistically secure for a few hours:
• Approximately how large should `p` be?
• How often should `p` be changed, if ever? Every n handshakes, every m hours/days/weeks?
• Is there a trade-off between dynamic generation and the size of `p`? That is, is it better to find a single ~120 digit prime and constantly reuse it, or to generate a huge batch of ~28-38 digit primes and randomly pick one per handshake?
• Am I even asking something approaching the right questions (and if not, could you point me in a better direction)?
Intuitively, it seems that the size of the chosen secret integers has more to do with the security of the channel than the uniqueness of `p`, but I'm still asking since I'm no mathematician.
## 1 Answer
Well, to answer your questions in order:
• How big should $p$ be? Well, it should be large enough to resist the known attacks against it. The most efficient attack is NFS; it has been used against numbers on the order of $2^{768}$ (a 232-digit number). It would appear wise to pick a $p$ that's considerably bigger than that: around 1024 bits at a minimum, and more realistically at least 1536 bits. Notes:
• What was actually done was to use NFS to factor a 232-digit number; NFS can be adjusted to compute discrete logs modulo primes of the same size without an undue increase in complexity.
• You said that the connections need be secure for only a few hours. NFS on numbers of that size is a large effort, almost certain to take far longer than a few hours, so if you really don't care whether someone can recover the keys after the connection has ended, it might seem safe to use a smaller modulus; I would personally recommend against it.
In addition, there's another important point about $p$: $p-1$ should have a large prime factor $q$, and you should know the factorization of $p-1$, so that you can pick a value $g$ of order $q$ (that is, the smallest $x>0$ with $g^x \equiv 1 \pmod p$ is $x=q$); see the sketch after this answer for a quick check of this. If you pick a random prime $p$ and a random generator $g$, you're probably secure, but you won't be certain, and you might leak a few bits of the private exponent if the order of your random $g$ happens to have some small factors.
• How often should $p$ be changed? Well, if you pick good values for $p$ and $g$, they don't need to be changed.
• Is there a trade-off between dynamic generation and the size of $p$? Well, you're far better off picking one large (and well-chosen) prime $p$ and generator $g$, and sticking with them. From the NFS analysis, a 120-digit prime is of questionable security; a 28-38 digit prime is far from adequate.
Now, as you might be able to tell from the above discussion, picking good $p$ and $g$ values is not straightforward (at least, if you don't understand the mathematics). The good news is that people have already done the work and published good values: the MODP groups originally intended for use in the IKE protocol give well-chosen $p$ and $g$ pairs, and they can be used for other purposes as well. I would personally recommend the 2048-bit value.
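To make the mechanics concrete, here is a minimal Python sketch of the exchange described above, using deliberately tiny toy parameters (the values of `p`, `q`, and `g` below are illustrative only and nowhere near secure); a real deployment would drop in one of the published 2048-bit MODP groups instead:

```python
import secrets

# Toy parameters -- NOT secure. p = 2q + 1 is a safe prime with q = 11 prime,
# mirroring the structure recommended above; real parameters would come from
# a published 2048-bit MODP group such as the IKE groups mentioned in the answer.
p = 23
q = (p - 1) // 2
g = 2  # 2 has order q = 11 modulo 23, i.e. it generates the prime-order subgroup

# Sanity check that g really has order q (the point about knowing the
# factorization of p - 1): g is not 1 and g^q = 1 (mod p).
assert g % p != 1 and pow(g, q, p) == 1

def keypair():
    """Pick a private exponent in [1, q-1] and return (private, public)."""
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

a_priv, a_pub = keypair()  # one side of the handshake
b_priv, b_pub = keypair()  # the other side

# Each side raises the peer's public value to its own private exponent.
shared_a = pow(b_pub, a_priv, p)
shared_b = pow(a_pub, b_priv, p)
assert shared_a == shared_b
print("shared secret:", shared_a)
```

The only change needed for realistic parameters is to substitute the published `p`, `q`, and `g`; the modular exponentiations via `pow` stay exactly the same.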