url
stringlengths 17
172
| text
stringlengths 44
1.14M
| metadata
stringlengths 820
832
|
---|---|---|
http://mathhelpforum.com/math-topics/65164-projectile-motion-question.html
|
# Thread:
1. ## Projectile Motion Question
Alright, so I have the following projectile motion problem:
A daredevil jumps a canyon 12 m wide, to do so, he drives a car up a 15 degree incline. What minimum speed must he achieve to clear the canyon, and if he jumps at this minimum speed, what will his speed be when he reaches the other side?
I actually have the answer to the first question, I got it to be 15.46 using the two projectile launched at an angle formulas, subbing in 12.42/vi for t in the horizontal formula. What I'm looking for is how to solve the second part of the problem. Thanks for the help =)
2. The speed will be exactly the same as the starting speed. An easy way to see this is by looking at the kinetic energy of the car: as the car rises, its kinetic energy is converted to gravitational potential energy, which is changed back into kinetic energy as the car goes down again. Since the car rises and falls the same distance, the same amount of energy is converted so the car ends up at the speed it started at.
3. You could try using to formula $v^{2}_{x}=v^{2}_{0_{x}}+2a_{x}\Delta x$
where $v^{2}_{x}$ is the final horizontal velocity, $x^{2}_{0_{x}}$ is the initial horizontal velocity (plug in what you found for the first part here), $a_x$ is the horizontal acceleration (im assuming this is zero?), and $\Delta x$ is the horizontal displacement, or 12m in this case.
4. You're right, there is no horizontal acceleration since there are no horizontal forces mentioned.
$v_{0x}^2 = v_{fx}^2 + 2a_x \Delta x$
$v_{0x}^2 = v_{fx}^2 + 2(0)\Delta x$
$v_{0x}^2 = v_{fx}^2$
$v_{0x} = v_{fx}$
But this is only in the x direction, the y-direction's change in velocity DOES affect the final velocity.
5. Originally Posted by Crysolice
Alright, so I have the following projectile motion problem:
A daredevil jumps a canyon 12 m wide, to do so, he drives a car up a 15 degree incline. What minimum speed must he achieve to clear the canyon, and if he jumps at this minimum speed, what will his speed be when he reaches the other side?
I actually have the answer to the first question, I got it to be 15.46 using the two projectile launched at an angle formulas, subbing in 12.42/vi for t in the horizontal formula. What I'm looking for is how to solve the second part of the problem. Thanks for the help =)
v sin15=y
v cos15= x
x y
delta x=1/2at^2+vt delta y= 1/2at^t + vt
delta x=vt 0=1/2(-9.8)t^2=vt
12=v*cos15*t 0=-4.9t^2+v sin15 t
12.37/v=t 0=-4.9(12.37/v)^2 + v sin15 (12.37/v)
749.8/v^2 = v sin15 (12.37/v)
749.8/v^2 = 3.20v/v
749.8 = 3.20v^2
239.3125=v^2
15.46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9091385006904602, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/tagged/interpolation+real-analysis
|
# Tagged Questions
1answer
159 views
### Interpolation inequality
Lef $u$ be at least a $C^2$ function on $\mathbb{R}^n$. Let's denote the gradient by $D$. Also, (using the multiindex notation), define the seminorm ||D^ku|| = ...
1answer
38 views
### Unique solution on subspaces whose union is dense implies unique solution globally?
Let $V$ denote the space of all $f : [0,1] \to {\mathbb R}$ such that the second derivative $f''$ is continuous except on a finite set, equipped with the norm $N(f)=|f(0)|+|f’(0)|+||f''||_{\infty}$ ...
1answer
128 views
### Why is this a linear interpolation?
Let $J_{k,n}$ be the dyadic partition of $[0,1]$, i.e. $n\in \mathbb{N}_0,k=1,\dots,2^n$, $J_{k,n}:=((k-1)2^{-n},k2^{-n}]$ and we denote with $\phi_{n,k}$ the Schauder functions over $J_{k,n}$, i.e. ...
0answers
76 views
### explicit error bounds for Multivariate interpolation
I want to interpolate a function of $d$ variables over a Cartesian grid, using multivariate interpolation, while characterizing interpolation error in terms of bounds on partial derivatives of the ...
1answer
230 views
### A problem on Lagrange interpolation polynomials
Based on a previous question, I had the following conjecture and was wondering if anyone knew how to prove it or find a counterexample. Consider the polynomial ...
1answer
114 views
### Short argument/reference for uniform continuity of piecewise linear interpolation
I have a piecewise linear interpolation: $$B(t) = \frac{t_{l+1}-t}{t_{l+1}-{t_{l}}} B_l + \frac{t-t_l}{t_{l+1}-{t_{l}}} B_{l+1} \quad \text{ if $t \in (t_l, t_{l+1})$;}$$ $B(t_l)=B_{l}$ and \$B(t) = ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8569876551628113, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/66745/on-engel-anticommutative-algebras/67430
|
## On Engel-anticommutative algebras
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Let $\mu:\mathbb{R}^n \times \mathbb{R}^n \longrightarrow \mathbb{R}^n$ be a alternating bilinear map, i.e. $\mu(X,Y)=-\mu(Y,X)$ (anticommutativity) and so, let $\mathfrak{a}=(\mathbb{R}^n,\mu)$ be a skew-symmetric algebra (this one is not necessarily a Lie algebra).
Questions:
1. Is there a "famous" example of a skew-symmetric algebra that is not a Lie algebra?
2. We assume that for any $X \in \mathfrak{a}$, $\mu(X,\cdot):\mathbb{R}^n \longrightarrow \mathbb{R}^n$ is a nilpotent linear transformation. Is it true that $\mathfrak{a}$ is a nilpotent algebra? (recall that $\mathfrak{a}$ is not necessarily a Lie algebra)
-
If you don't assume associativity, then the multiplication of imaginary octonions gives you a famous example of 1. – José Figueroa-O'Farrill Jun 3 2011 at 12:04
## 1 Answer
1. To rephrase José Figueroa-O'Farrill's comment above - a 7-dimensional simple exceptional Malcev algebra (which is a quotient of octonions under the commutator by 1-dimensional center). But, really, I find the question is formulated badly: just skew-commutativity is a very mild restriction to say something meaningful about an algebra in general, so it's almost the same as to ask for "famous" examples of a (nonassociative) algebra.
2. No, this is not true. I was able to construct counterexamples on computer, as finite-dimensional quotients of free algebras in respective "Engel varieties", but all they are large and cumbersome. Shorther examples can be found in the following paper by Koreshkov and Kharitonov, which, apparently, was published twice:
About nilpotency of Engel algebras, Formal Power Series and Algebraic Combinatorics (ed. D. Krob et al.), Springer, 2000, 461-467; ZBL: 0983.17003 [available on google books and amazon]
Nilpotency of the Engel algebras, Russ. Math. (Izv. VUZ) 45 (2001), No. 11, 15-18; ZBL: 1103.17300
They prove also that this is true for algebras of dimension $\le 4$. It is also easy to see (and is recorded, for example, in: V.T. Filippov, Binary Lie algebras satisfying the third Engel condition, Siber. Math. J. 49 (2008), N4, 744-748; DOI: 10.1007/s11202-008-0071-3) that the second Engel condition implies nilpotency of degree $4$.
On the other hand, Engel(-like) theorems were established for many particular ("famous"?) classes of anticommutative algebras considered in the literature: binary-Lie, Malcev, and some other generalizations of Lie algebras, and it is an open question, as far as I know, for Sagle algebras.
-
For some reason I like your "which, apparently, was published twice" remark a lot. – Vladimir Dotsenko Jun 11 2011 at 10:56
@Vladimir Dotsenko: No offense or sarcasm meant, really. Just a mere (bibliographical) fact. I like this paper and have used it on some other occasion. – Pasha Zusmanovich Jun 11 2011 at 11:11
Oh I did understand that. It's the option of a tongue-in-cheek interpretation that makes it lovely. – Vladimir Dotsenko Jun 11 2011 at 22:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8967957496643066, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/differential-geometry/135166-complex-power-series.html
|
# Thread:
1. ## Complex Power Series
hello all,.. I need help with this problem
Suppose that $f$ is holomorphic in an open set containing the closed unit disc, except for a pole at $z_0$ on the unit circle. Show that if
$\sum_{n=0}^{\infty} a_nz^n$
denotes the power series expansion of $f$ in the open unit disc, then
$\lim_{n \to \infty} \frac{a_n}{a_{n+1}} = z_0$.
I don't have any idea where should I start. thank for any comment and suggestion
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9264906048774719, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/43922?sort=votes
|
## Examples where the analogy between number theory and geometry fails
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
The analogy between $O_K$ ($K$ a number field) and affine curves over a field has been very fruitful. It also knows many variations: the field over which the curve is defined may have positive or zero characterstic; it may be algebraically closed or not; it may be viewed locally (by various notions of "locally"); it may be viewed through the "field with one element" (if I understand that program) and so forth.
Often when I've dealt with this analogy the case is that the geometric analog of a question is easier to deal with than the arithmetic one, and is strongly suggestive for the veracity of the arithmetic statement.
My question is: what are some examples of where this analogy fails? For example, when something holds in the geometric case, and it is tempting to conjecture it's true in the arithmetic case, but it turns out to be false. If you can attach an opinion for as to why the analogy doesn't go through in your example that would be extra nice, but not necessary.
-
6
The abc-conjecture without the epsilon in it? – KConrad Oct 28 2010 at 8:46
Oh, for a reference, in Lang's Algebra he gives examples showing the epsilon is needed for the abc-conjecture over Z. – KConrad Oct 28 2010 at 8:48
Periodicity of zeta functions (may be cheating). – S. Carnahan♦ Oct 28 2010 at 8:53
Scott, what do you have in mind: that it has periodic in s? That wouldn't seem tempting to conjecture for the arithmetic case. – KConrad Oct 28 2010 at 12:49
6
Example 1: vanishing of ${\rm{H}}^1(k,G)$ for global function fields $k$ and connected semisimple $k$-groups $G$ that are simply connected. (Over number fields one has to account for effect of real places.) Example 2: if a nonzero element of a global function field is an $n$th power in all completions then it is globally an $n$th power, but this is false for certain number fields and certain $n$. (Over number fields one has to account for the effect of 2-adic places.) In both cases, the phenomena underlying failure over number fields were known before the proofs were found over fn fields. – BCnrd Oct 28 2010 at 20:27
show 2 more comments
## 2 Answers
Of the commenters on this question, two are authors (with Harald Helfgott) of the very nice paper "Root numbers and ranks in positive characteristic", which gives an example (under parity conjecture) of a non-isotrivial 1-parameter family of elliptic curves over a global function field $K = \mathbf{F}_q(u)$ (any odd $q$) such that each fiber $E_t$ for $t \in K$ has rank strictly greater than that of the generic fiber. This is conjectured to be impossible in the number field case.
-
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
If $A/K$ is an abelian variety, $v$ is a place of $K$, $h$ is the global height function and $\lambda_v$ is the local height function at $v$, then comparing $h(P)$ and $\lambda_v(P)$ for $P \in A(K)$ varies a lot depending on the situation. $\lambda_v(P) = O(1)$ if $K$ is a function field of characteristic zero, $\lambda_v(P) = O(h(P)^{1/2})$ (usually) in positive characteristic and this cannot be improved, and $\lambda_v(P) = O(\log(h(P)))$ conjecturally for number fields (and is definitely not $O(1)$).
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9401040077209473, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/tagged/soft-question+category-theory
|
# Tagged Questions
2answers
112 views
### Path Algebra for Categories
For a while I had been thinking that the path algebra of a quiver $Q$ over a commutative ring $R$ is the same as the "category ring" $R[P]$ (analogous to "group ring", "monoid ring", "semigroup ring", ...
3answers
75 views
### When is it important to distinguish between an object in a category and that object's identity morphism?
When is it important to distinguish between an object in a category and that object's identity morphism? I am wondering if the only reason that we consider objects at all is to avoid infinitely ...
1answer
68 views
### Can we think of an adjunction as a homotopy equivalence of categories?
There is a way in which we can think about a natural transformation $\eta: F \rightarrow G$ as a homotopy between functors $F,G:\mathcal{C}\rightarrow \mathcal{D}$. Now, an adjunction $F \dashv G$ ...
0answers
67 views
### Accessible introduction to category theory from the point of view of preorders. [duplicate]
Are there books renowned for introducing category theory in a very accessible way? An emphasis on the point of view that categories generalize preorders would be especially appreciated. My goal is to ...
2answers
101 views
### The category Set seems more prominent/important than the category Rel. Why is this?
There's a lot of talk about Set, but less about Rel. As an outsider to category theory, this surprises me, because Rel seems "more closed." In particular, The converse of a function needn't be a ...
3answers
147 views
### What is the difference between analytic combinatorics and the theory of combinatorial species?
Yesterday I asked the question Why should a combinatorialist know category theory?, where Chris Taylor suggested me to have a look at combinatorial species. I had heard the term before but I haven't ...
1answer
482 views
### Why should a combinatorialist know category theory?
I know almost nothing about category theory (I have just skimmed the first chapters of Aluffi's algebra book), reading this question got me thinking... why should someone mostly interested in ...
0answers
98 views
### What are the “correct” modules over locally ringed spaces?
\begin{array}{ccccc} \text{schemes} & \longrightarrow & \text{locally ringed spaces} & \longrightarrow & \text{ringed spaces} \\ | && | && | \\ \text{quasi-coherent ...
3answers
289 views
### What is category theory useful for?
Okay so I understand what calculus, linear algebra, combinatorics and even topology try to answer, but why invent category theory? In wikipedia it says it is to formalize. As far as I can tell it sort ...
2answers
57 views
### Is there a “partial function” approach to subobjects in category theory?
Given a relation $f : X \rightarrow Y$, lets define that the source of $f$ is $X$, and that the domain of $f$ is the set of all $x$ such that there exists $y \in Y$ satisfying $(x,y) \in f$. Thus the ...
3answers
153 views
### In what sense is the forgetful functor $Ab \to Grp$ forgetful?
One sometimes hears about "the forgetful functor $Ab \to Grp$." Given that the image of an object under this functor is still abelian, in what sense is this "forgetful"?
1answer
138 views
### A pedantic question about defining new structures in a path-independent way.
Sometimes there are multiple equivalent ways of defining the same structure; for example, topological spaces are determined by their open sets, but also by their closed sets. I'm looking for a way of ...
1answer
79 views
### What is the minimum required background to understand articles in the nLab?
I am interested in learning more about the nLab categorical perspective on several mathematical subjects such as topology and logic, but found that my understanding of category theory was not ...
3answers
147 views
### Reference request - being rigorous about a common abuse of notation.
I've completely rewritten this question, in accordance with this advice. As a motivating example, suppose we're working in ETCS. Let $\bar{1}$ denote the canonical singleton set, and assert that by ...
2answers
197 views
### Does there exist another way of obtaining a topological space from a metric space equally deserving of the term “canonical”?
Every metric space is associated with a topological space in a canonical way. According to this source, this amounts to a full functor from the category of metric spaces with continuous maps to the ...
1answer
45 views
### Interesting verification of functoriality
Functors and morphisms of functors (aka natural transformations) have become powerful tools in all areas of pure (and meanwhile also applied) mathematics. There are lots of nontrivial constructions of ...
2answers
348 views
### A structural proof that $ax=xa$ forms a monoid
During the discussion on this problem I found the following simple observation: If $M$ is a monoid and $a \in M$ then $\{x: ax = xa\}$ is a submonoid. This is trivial to prove by checking ...
2answers
196 views
### Learning category theory before abstract algebra
I'm reading this excellent pdf http://www.mimuw.edu.pl/~jarekw/pdf/Algebra0TextboookAluffi.pdf which is an algebra book, beginning with category theory and then use it for groups, rings,... My ...
3answers
147 views
### How to do diagram chasing effectively?
I am trying to teach myself some homological algebra, and the book I am using is Aluffi's wonderful Algebra: Chapter 0, which introduces homology at the end of chapter 3. I have spent a lot of time ...
2answers
106 views
### The importance of parallel arrows in a commutative square
I noticed that whenever there is a commutative square, the relation it imposes on parallel morphisms is usually very important (e.g. natural transformations, pullbacks). In contrast, there's usually ...
3answers
389 views
### Category Theory usage in Algebraic Topology
First my question: How much category theory should someone studying algebraic topology generally know? Motivation: I am taking my first graduate course in algebraic topology next semester, and, ...
6answers
1k views
### Why don't analysts do category theory?
I'm a mathematics student in abstract algebra and algebraic geometry. Most of my books cover a great deal of category theory and it is an essential tool in understanding these two subjects. Recently, ...
2answers
192 views
### Why is full- & faithful- functor defined in terms of Set properties?
Wikipedia entry or Roman's "Lattices and Ordered Sets" p.286, or Bergman's General Algebra and Universal Constructions, p.177 and in fact every definition of full and/or faithful functor is defined in ...
2answers
132 views
### What things can be defined in terms of universal properties?
We can define some mathematical objects using universal properties, for example the tensor product, the free group over a set or the Stone–Čech compactification. I'm wondering about how to develop my ...
3answers
529 views
### Why do we look at morphisms?
I am reading some lecture notes and in one paragraph there is the following motivation: "The best way to study spaces with a structure is usually to look at the maps between them preserving structure ...
1answer
336 views
### How to find exponential objects and subobject classifiers in a given category
In a course I'm learning about Topos theory, there are a lot of exercises which require you to prove explicitly some category is an elementary topos: i.e. to construct exponentials and a subobject ...
2answers
330 views
### What should be the next step?
This is a soft/educational question and I'll flag it to be made community wiki. A little bit of background, first. I am in my last undergraduate year, and I took a graduate course in category theory; ...
1answer
514 views
### Is category theory useful in higher level Analysis?
What I mean by higher level before this gets closed is functional analysis, complex analysis and harmonic analysis? I've read looked at the examples in most category theory books and it normally has ...
5answers
446 views
### Algebraic topology, etc. for Mac Lane's “Categories for the Working Mathematician”
[NOTE: For reasons that I hope the question below will make clear, I am interested only in answers from those who have read Mac Lane's Categories for the working mathematician [CWM], or at least have ...
3answers
515 views
### A Concrete Approach to Category Theory
Is there a way to learn Category Theory without learning so many concepts of which you have never seen examples?
1answer
269 views
### Mathematical structures
Preamble: My previous education was focused either on classical analysis (which was given in quite old traditions, I guess) or on applied Mathematics. Since I was feeling lack of knowledge in 'modern' ...
1answer
209 views
### Concrete Categories Where Epis are Just Surjections
Before I begin let me provide some background to fix notation/make the post more readable to interested outsiders. In a category $\mathscr{C}$ we say that a morphism $X\xrightarrow{f}Y$ is an ...
1answer
288 views
### Mathematics needed for higher dimensional category theory?
I'm a undergrad(third year, Manchester uni) that is thinking of doing a PhD in this area or category theory in general. Just wondering, what branches of Maths should I focus on? As I've been told ...
1answer
225 views
### Looking for an “arrows-only” intro to category theory
I have often seen it remarked in passing that the "collection of objects" that appears in the standard definition of a category is, strictly speaking, superfluous, and that it is possible to give an ...
3answers
453 views
### Looking for student's guide to diagram chasing
I'm teaching myself some category theory, and I find that I'm very slow with diagram chasing. It takes me some times a very long time to decide whether adding an arrow to a diagram preserves the ...
3answers
104 views
### Maps that assign points to maps
Consider a set $X$ and a set $Y$. Once can the define a map from $X$ to $Y$ that assigns to each point in $X$ a point in $Y$. On the other hand, if $F(X,Y)$ denotes the set of all functions from $X$ ...
2answers
768 views
### motivation and use for category theory?
From reading the answers to different questions on category theory, it seems that category theory is useful as a framework for thinking about mathematics. Also, from the book Algebra by Saunders Mac ...
1answer
165 views
### Introductory texts for weak $\omega$-categories
As I'm constantly running across higher categories these days, I'm wondering what is a good starting point to get into the theory? While I am aware of nLab and the n-Category Café, I am having a real ...
3answers
485 views
### Introduction to Bourbaki structures, and their relation to category theory
I just opened vol.1 of the Bourbaki treatise to take a look at how they define mathematical structure. I was amazed by its sheer complexity. Can you recommend an introductory text that wouldn't ...
1answer
304 views
### Reading commutative diagrams?
Sorry for this whole bunch of questions. Please note, that I know what a commutative diagram is, and that I can somehow read them, at least the simpler ones. But often enough the diagrams are labelled ...
2answers
422 views
### Category Theory with and without Objects
Slight Motivation: In Mac Lane and Freyd's books (the latter being a reprint of an older book called "Abelian Categories") they note that instead of defining any Objects in a category we may define an ...
3answers
982 views
### What are the prerequisites for learning category theory?
Is category theory worth learning for the sake of learning it? Can it be used in applied mathematics/probability? I am currently perusing Categories for the Working Mathematician by Mac Lane.
19answers
5k views
### Good book/lecture notes about category theory
what are the best books or lecture notes on category theory?
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9296168684959412, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/155089/are-there-methods-to-well-order-a-finite-group-in-a-meaningful-way
|
# Are there methods to well order a finite group in a meaningful way?
Can some finite groups be well ordered in a "meaningful" way? I mean, it is clear that we can trivially find a bijection between $\{1,...,n\}$ and a finite group $G$ with $n$ elements, but I am interested in well ordering that are based on some scheme or pattern with respect to some group property (for example, it is immediate to well-order a cyclic group).
-
5
How is it immediate to well-order a cyclic group? If you mean the ordering "identity, generator, square of generator, ...", then which of the many generators will you choose? – Chris Eagle Jun 7 '12 at 11:52
You can obtain $n-1$ different wos, isn't it? – Oo3 Jun 7 '12 at 11:54
3
Other than the trivial group, there is no well-ordering on any finite group in a way that makes the group operation order-preserving. (Obvious.) – Zhen Lin Jun 7 '12 at 12:10
## 5 Answers
A finite group has a finite number of generators, which you can name $a,b,c,\dots$ (or $a_1,a_2,\dots,a_r$ for some $r$). Then you can write every group element as a word in the generators; among the many expressions for any given group element, consider only those of minimal length, and among those, pick the one that's lexicographically first. Then order the $n$ resulting expressions lexicographically.
A lot of choices to make along the way, so there's nothing canonical here, but that didn't seem to bother you for cyclic groups, so perhaps you'll accept it here, too.
-
This is crazy!! Well, +1 for the moment, waiting for further contributions... – Oo3 Jun 7 '12 at 13:13
1
This is a lot less crazy than you think - this is one of the key ideas behind algorithms for (infinite) automatic groups, where there's a fast (linear in the size of the words) algorithm for computing the lexicographically first representation of the product of two words of the group. This can be useful, for instance, in doing calculations in some symmetry groups. – Steven Stadnicki Jun 7 '12 at 15:15
I'm not sure what you want but you cannot have an order that respects the group operation in the sense that $a<b$ implies $ac < bc$.
Indeed, suppose $1<a$ and $a$ has order $m$. Then $1 < a < a^2 < \cdots < a^{m-1} < a^m=1$, a contradiction. The same holds if $1>a$.
-
I am interested in more exotic ways to yield an order. – Oo3 Jun 7 '12 at 13:09
If you are prepared to represent the group as a permutation group on $\{1,...,n\}$, then the stabilizer chain produced by the Schreier-Sims algorithm leads to a fairly natural order (though of course the permutation representation itself is not canonical).
That is, given $\Delta\subseteq \{1,...,n\}$, let $G_{\Delta}$ be the pointwise stabilizer of $\Delta$ in $G$. Then there is a stabilizer chain:
$G = G_{\{\}} \supseteq G_{\{1\}} \supseteq G_{\{1,2\}} \supseteq ... \supseteq G_{\{1,...,n-1\}} \supseteq G_{\{1,...,n\}} = 1$.
Given some link $G_{\{1,...,i\}} \supseteq G_{\{1,...,i+1\}}$ in the chain, the cosets of $G_{\{1,...,i+1\}}$ in $G_{\{1,...,i\}}$ can be ordered by their action on i. This extends to an order on the whole group.
For example, $S_{3}$ would be ordered as: 1, (23), (12), (123), (13), (132).
First, the elements that send 1 to 1, then the elements that send 1 to 2, finally the elements that send 1 to 3. So these are the three cosets of $G_{\{1\}}$. Then, within the coset $(13)G_{\{1\}}$ (for example), we order recursively using the same order relation.
-
What do you mean exactly when you say: "if you are prepared to represent the group as permutation group on $\{1, \dots ,n\}$? – Oo3 Jun 7 '12 at 18:08
@Oo3 Well, many finite groups arise as permutation groups on {1,...,n} (eg the symmetric, alternating, and dihedral groups). Still others have natural permutation actions on small sets (eg PGL(n,Fq)), so we just have to decide on a way to number the small set. Every finite group can be represented as a permutation group - it acts on itself by multiplication on the right. However, in this case, we would already need to have decided on a numbering of the group elements, which defeats the point. – DavidA Jun 7 '12 at 19:46
One nice thing about this answer (other than it is actually used by gap) is that it replaces a large enumeration problem (the whole group) to a series of smaller ones (cosets of k+1 point stabilizer inside a k point stabilizer). In other words, you are just labeling the cosets with {1...n} instead of the group elements. There are many problems that become simpler when considered over a chain of subgroups than when just looking at the entire group. – Jack Schmidt Jun 8 '12 at 3:47
The most meaningful way to well-order a finite group would be in such way that the addition coheres with the order. However no finite group can be ordered, since $$1<a<a^2<a^3<\ldots<a^{\mathrm{ord}(a)}=1$$
The above shows that you cannot take an order which respects the group operation on a finite group. To the question "more exotic" one has to think about, and note that every finite set can be given several (possibly interesting) different group structures. In fact, if I have a set $\{a,b,c\}$ then there are several ways in which it can be made into $\mathbb Z/3\mathbb Z$.
We do not have a canonical way of choosing a group structure on a set. Once there are very arbitrary choices there is no real way to generate interesting well orders. If you agree that a canonical way would be to take a group of a finite cyclic group then you still have to ask yourself which element of your set is $1$.
Once you agreed which element is $1$ then there is one reasonable way of choosing an enumeration of the group, namely $n\cdot 1$. However this is still very arbitrary.
-
I beat you to it... :-) – lhf Jun 7 '12 at 12:53
Yes, and I have to go teach now... so you have time to beat me again! – Asaf Karagila Jun 7 '12 at 12:54
Who's counting? – lhf Jun 7 '12 at 12:54
– Asaf Karagila Jun 7 '12 at 12:55
I believe that if you just take into account addition and order that go together, then my question would have had not make any sense. – Oo3 Jun 7 '12 at 13:06
show 7 more comments
The comment was too short, so I'm posting this as an answer.
Well order implies some linear structure so I don't see any meaningful orderings for groups other than cyclic (see Cayley graphs). For example you could take a group in which there are two (or even more) different elements that behave in the same way, e.g. $(0,1)$ and $(1,0)$ in $\mathbb{Z}_2\times\mathbb{Z}_2$, or generators of non-cyclic free group.
Note, that this is also the case also for cyclic groups, e.g. if $a$ is the generator, then is $a^{-1}$ too. In a way, they are symmetric, and any well order would destroy that.
On the other hand, I think that there are many well-founded partial orders that could apply here, and if you wish, you could make them total (by something like topological sorting). Moreover, this would actually correspond to the fact that you choose one element over the other (and thus destroy the symmetry).
Hope that helps ;-)
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9493463635444641, "perplexity_flag": "head"}
|
http://meshfreemethods.blogspot.com/
|
Meshfree Methods
Tuesday, December 12, 2006
EFG Matlab Routines
These used to be hosted at Northwestern, but the files were taken down some time ago. The original 1d and 2d Matlab routines for the element-free Galerkin method are now located at
http://www.duke.edu/~jdolbow/EFG/programs.html
These routines are described in detail in the paper
J. Dolbow and T. Belytschko (1998), "An Introduction to Programming the Meshless Element Free Galerkin Method," Archives of Computational Methods in Engineering, vol. 5, no. 3, pp. 207--242.
posted by John D at 6:05 AM | 0 comments
Tuesday, November 21, 2006
Frequently Asked Questions
posted by John D at 8:28 PM | 0 comments
Friday, September 01, 2006
Where Do We Stand on Meshfree Approximation Schemes?
In a previous posting, Timon provided a nice overview of meshfree methods— starting from SPH and leading up to some of the key developments over the past decade (diffuse element method, element-free Galerkin, reproducing kernel particle method/RKPM). Rather than present details on any particular method per se, here I focus on the most common approximations that are used in meshfree methods. In doing so, my goal is to bring out the commonalities, distinctions, and some recent perspectives and improved understanding that has come about in the realm of data approximation and its ties to meshfree methods. Where appropriate, I will try to point out how the properties of the approximant lead to positive or negative consequences when used within a Galerkin method. The important issues of imposing essential boundary conditions and numerical integration in Galerkin meshfree methods are also discussed. In the interest of space, equations are inlined and no figures are included. Links to cited references (journal articles, web resources, or author's web page) are provided; the full citation of the references is available here. The reader will notice that the title of this post is an adaptation of Jaynes's (1979) article—Where Do We Stand on Maximum Entropy?
Given a set of nodes {xa} ($a = 1$ to $n$) in Rd, we construct an approximation for a scalar-valued function u(x) in the form: uh(x) = φa(x)ua (Einstein summation convention implied). Most, if not all meshfree methods are based on some variant of either radial basis functions (RBFs), moving least squares (MLS) approximants (in computational geometry, Levin's (1998) MLS approximant is adopted), or natural neighbor-based (nn) interpolants such as Sibson coordinates and Laplace interpolant. Recently, maximum-entropy approximants have also come to the forefront. The mathematical analysis of meshfree approximants has been carried out by Babuska et al. (2003). In meshfree methods, the construction of the nodal basis functions φa(x) is independent of the background element structure (unlike finite elements), and different approaches are used to construct a linearly independent set of basis functions. In this sense, these approximations are referred to as meshfree. A brief description of the above schemes follows.
Radial Basis Functions
In the radial basis function approximation, φa is constructed from translates of a fixed radial function φ with centers at xa. If global polynomial reproducibility is desired, a polynomial term is added to the approximation, which engenders an additional side condition. For certain choices of φ(.), for example, Gaussian radial basis function [exp(-r2/c2)], multiquadrics [(r2 + c2)½], or thin-plate splines, the matrix Kab = φa(||xb-xa||) is positive-definite and invertible, and hence the data interpolation problem has a unique solution for the coefficients ua. Note that uh interpolates, but the radial functions φa do not satisfy the Kronecker-delta property, φa(xb) = δab. In approximation theory, basis functions with the property φa(xb) = δab are known as a cardinal basis. For a cardinal basis set, we immediately see that the basis functions are linearly independent. When a cardinal basis is used, the coefficients ua are more commonly referred to as nodal values (finite element terminology). The use of RBFs in collocation-based meshfree methods was initiated by Kansa (1990), and collocation methods that are based on global (full matrix with exponential convergence) as well as compactly-supported RBFs are an active area of current research.
Moving Least Squares Approximants
In the standard least squares approach, given a polynomial basis with m terms (a quadratic basis in one dimension is p(x) = {1, x, x2}T), the best fit to nodal data ua is sought. To this end, we let uh(x) = pTa, where the constant parameter vector a is found such that the error vector PTa - u is minimized. Here, P is a constant m x n matrix with the a-th column consisting of p(xa). As the objective function, the square of the L2 norm of the error is chosen: I(a) = 1/2(PTa - u)T(PTa- u). This leads to a quadratic minimization problem, and hence a linear system of equations (normal equations) is obtained for the unknown vector a.
In the moving least squares approximation, a local weighted least squares fit at each point x is carried out. A non-negative compactly-supported weight function (derived from a Gaussian or polynomial/spline function) is associated with each node: wa(x) ≡ w(||x -xa||/da), where da is the radius of support (circular or tensor-product supports are typically used) of the nodal weight function. Instead of the standard least squares objective, a weighted quadratic least squares minimization problem is solved to determine a(x) (parameters are now functions of x): I(a) = 1/2(PTa- u)TW(PTa - u). Here, W is a n x n matrix with non-zero entries wa(x) only on the diagonal of the a-th row. On carrying out the minimization, the basis function vector is: φ(x) = BT(x)A-1(x)p(x) = BT(x)α(x), with A(x)α(x) = p(x), and A(x) = PW(x)PT (moment matrix) and B(x) = PW(x). The intermediate steps in the derivation, computer implementation of MLS basis functions, and reviews on its applications for partial differential equations (PDEs) can be found in the literature (see Belytschko et al. (1996), Dolbow and Belytschko (1999), and Fries and Mathies (2004) for details).
Two particular attractions of the MLS approach are: first, the approximation uh reproduces all functions that are contained in the basis vector p, which can include polynomials as well as non-polynomial functions (this has been used as a means for intrinsic enrichment, for e.g., by incorporating crack-tip functions within the basis vector); and secondly, if the weight function is Ck and the basis vector p is smooth, then φa are also Ck, which is pertinent for higher-order gradient continua, phase transformations, and thin-plate and thin-shell analyses that place C1 continuity requirements on the trial and test approximations. Construction of C1 finite element bases on arbitrary meshes is in general a non-trivial task; use of subdivision finite elements (Cirak et al., 1999)) is a promising alternative. The positive attributes of MLS are counter-balanced by the fact that nodal interpolation is lost, and furthermore on the boundary of the domain interior nodal basis functions have in general a non-zero contribution. So, in a standard Galerkin variational formulation, the condition that MLS test functions must vanish on an essential boundary is not met. Hence, modifications in the test function or in the variational form are required to impose essential boundary conditions. The weight function and its support size (must be above a lower bound to ensure that A is invertible for all x in the domain) are free parameters in an analysis—this parallels the choice of the `shape parameter' c in RBF methods for the solution of PDEs (see Wertz et al. (2006)).
In lieu of what is to follow, we also mention an alternative formulation of MLS. The unconstrained minimization problem that we posed earlier for MLS can be recast in the so-called primal-dual framework. The vector α(x) is the solution of the primal problem (P): maxα -M(α) = minα M(α), with M(α) = 1/2αTAα - αTp (pardon the abuse of notation), and the dual problem (D) is: min D(φ) = 1/2φTW-1φ subject to the under-determined linear constraints Pφ = p. The variables φ (basis function vector) and α (Lagrange multiplier vector) are related to each other via duality. The curious reader can write out the Lagrangian of the dual problem, set its first variation to zero and back-substitute the basis function vector in the Lagrangian functional to verify that the primal problem is obtained. From the dual problem (D), we clearly see that the reproducing conditions appear as equality constraints in the MLS approximation. Of course, the reproducibility of the basis vector p(x) is easily verified from the previous derivation: Pφ = PBTA-1p = PWPTA-1p = AA-1p = p. On a related note, if W = I (identity matrix), then the minimum norm approximant (Morse-Penrose or pseudo- or generalized-inverse, φ = P+p) is obtained. The MLS approximation can be viewed as a weighted minimum norm approximant, or equivalently the minimum Euclidean norm of the transformed vector W - ½φ.
Natural Neighbor-Based Interpolants
For a set of nodes in Rd, the Delaunay and Voronoi tessellation are dual geometric structures. Classical finite element bases are constructed on the Delaunay triangulation. On using the Voronoi diagram, Sibson (1980) introduced the concept of natural neighbors and natural neighbor (Sibson) interpolation. The Delaunay triangulation satisfies the empty circumcircle criterion (besides the vertex nodes of triangle T, no other nodes are located within the circumcircle of T). This property is used to define the natural neighbors of a point x that is inserted within the convex hull of the nodal set. If x lies within the circumcircle of a triangle T, then the vertex nodes of T are natural neighbors of x. Let x have n natural neighbors. Defining the area of overlap of the original Voronoi cell of node a with the Voronoi cell of point x as Aa(x) and the area of the Voronoi cell of point x as A(x), φa(x) = Aa(x)/A(x), and the basis functions sum to unity by construction. A different natural neighbor interpolant was proposed by Christ et al. (1982), which was re-discovered in applied mathematics and computational geometry. This interpolant (coined as Laplace since it is a discrete solution to the Laplace equation) is constructed by using measures that are solely based on the Voronoi cell associated with x. These interpolants are also linearly precise, and hence they are suitable for use within a Galerkin implementation for second-order PDEs. The appealing aspect of nn-interpolation is that it is well-defined and robust for very irregular distribution of nodes since the Voronoi diagram (and ergo natural neighbors) for a nodal set is unique. This is unlike the Delaunay triangulation, which is non-unique (four co-circular nodes in two dimensions leads to two possible triangulations and hence two different interpolants—data-dependent triangulation is well-known). The basis function supports automatically adapt (anisotropic supports) with changes in the nodal distribution, and hence no user-defined parameters are required to define nodal basis function supports. Further details on the construction of nn-interpolants are available here. Braun and Sambridge (1995) introduced the use of the Sibson interpolant in a Galerkin method (natural element method), and many new and emerging applications of the method can be found here.
Natural neighbor interpolation schemes share many common properties with the Delaunay finite element interpolant. They are linearly precise, strictly non-negative, and on convex domains they are piece-wise linear on the boundary. These permit the imposition of essential boundary conditions as in finite elements. Cueto et al. (2000) combined Sibson interpolation with the concept of α-shapes to describe a domain discretized by a cloud of nodes and to track its evolution in large deformation analysis. The Sibson interpolant is C1\xa (derivatives are discontinuous at the nodes). Unlike MLS approximations, the development of higher-order continuous nn-interpolants is not straight-forward. In this direction, Farin (1990) proposed a C1 Sibson interpolant using the Bernstein-Bézier representation, and higher-order generalizations of nn-interpolants have also appeared (see Hiyoshi and Sugihara (2004)). An interesting advance due to Boissonnat and Flötotto (2004) is the extension of the Sibson interpolant to smooth approximations on a surface ((d-1)-manifold in Rd). An implementation of natural neighbor interpolation is available in the Computational Geometry Algorithms Library (CGAL).
Maximum-Entropy Approximants
In tracing the roots of data approximation, a common theme that emerges is that many approximants have a variational basis and are posed via an unconstrained or constrained optimization formulation. Cubic splines and thin-plate splines are prime examples, with MLS, RBFs, Laplace, discrete harmonic weight (see Pinkall and Polthier (1993)), and Kriging being a few notables that are linked to meshfree approximations. The reproducing conditions, Pφ = p, have been the guiding principle behind the developments in meshfree (notably, RKPM of Liu and co-workers) and partition of unity methods. In the RKPM, the basis function vector of the form φ(x) = WPT(x)α(x) is considered; in the literature, often an additional multiplicative term (nodal volume) is included in the basis function definition. If the same nodal volume is assigned to each node, this approximation is identical to MLS. In general, the reproducing conditions can be seen as constraints, with the choice of the objective function being left open. In MLS, as was indicated earlier, a particular choice of the objective function was made. On imposing the requirement of linear precision, the problem is ill-posed in d dimensions if $n > d+1$. This is so since there are only $d+1$ equality constraints. As a means for regularization, an objective functional that is least-biased is desired. The principle of maximum entropy is a suitable candidate—initially used to demonstrate that Gibbs-Boltzmann statistics can be derived through inference and information theory, and in years thereafter has been successfully applied in many areas of pure and applied sciences where rationale inductive inference (Bayesian theory of probability) is required. In the presence of testable information (constraints) and when faced with epistemic (ignorance) uncertainty, the maximum entropy (MAXENT) formulation using the Shannon entropy functional (Shannon (1948), Jaynes (1957)) provides the least-biased statistical inference solution for the assignment of probabilities—Wallis's combinatorial derivation as well as the maximum entropy concentration theorem provide justification.
The Shannon entropy of a discrete probability distribution is: H(φ) = -φa ln φa. Historically, discrete probability measures have been seen as weights and hence their association with the construction of non-negative basis functions is natural. This led to the use of the maximum-entropy formalism to construct non-negative basis functions (S, 2004, Arroyo and Ortiz [AO], 2006). These developments share common elements with the work of Gupta (2003) in supervised learning. In S (2004), the Shannon entropy is used within the maximum entropy variational principle to construct basis functions on polygonal domains, whereas in AO (2006), a modified entropy functional is adopted to construct local MAXENT approximation schemes for meshfree methods. The latter researchers noted its links to convex analysis, and coined such approximants with the non-negative constraint, φa ≥ 0, as convex approximation schemes. Natural neighbor-based interpolants as well as barycentric constructions on convex polygons are convex approximation schemes. The Delaunay interpolant is also the solution of an optimization problem, which was shown by Rajan (1991). The modified entropy functional is a linear combination (in the sense of pareto optimality) of Rajan's functional and the Shannon entropy functional, and the solution of the variational problem provides a smooth transition from Delaunay interpolation as a limiting case at one end to global MAXENT approximation at the other end of the spectrum. Geometry has a lot to offer in computations, and once again, it is pleasing to see yet another connection emerge between geometry-and-approximation. Non-negative basis functions have many positive attributes (variation diminishing, convex hull property, positive- definite mass matrices, optimal conditioning), and their merits in computational mechanics has been recently demonstrated by Hughes et al. (2005) who used NURBS basis functions in isogeometric analysis.
A general prescription for locally- and globally-supported convex approximation schemes can be derived using the Kullback-Leibler (KL) distance or directed divergence. This was introduced in S (2005) and is further elaborated in a forthcoming article. It was recognized (see Jaynes (2003)) that for the differential (continuous) entropy to be invariant under a transformation it must be of the form $\int - \phi ln \phi /m dx$, which in the discrete case is: H(φ,m) = -φa ln φa/ma, where m is a known prior distribution (weight function) for φ. The KL-distance, which is the negative of H, is non-negative (established using Jensen's inequality), and minimization of the relative entropy is the corresponding variational principle. We determine the non-negative basis functions, φa ≥ 0, by maximizing H, subject to the $d+1$ linear constraints, Pφ = p. This is the primal problem for entropy maximization, which has a unique solution for any point x within the convex hull of the nodal set. Outside the convex hull, the equality constraints and the non-negative restriction on the basis functions constitute an infeasible constraint set. To see this fact via a simple example, consider one dimensional approximation with n nodes located in [0,1] and let $x = -\delta $, where δ is positive. The first-order reproducing condition is: $\phi $axa = -δ, and since all xa are non-negative and δ > 0, there does not exist any non-negative basis function vector φ that can satisfy this constraint. The proof for the case when x > 1 proceeds along similar lines. The prior is a weight function that is chosen a priori (e.g., globally- or compactly-supported radial basis functions, weights used in MLS, R-functions, etc.), and the above formulation provides a correction on the prior so that the basis functions satisfy the reproducing conditions. If a Gaussian radial basis function is used as a prior, then the modified entropy functional considered in AO (2006) is recovered.
On using the method of Lagrange multipliers, the MAXENT basis functions are obtained (exponential form): φa(x) = Za/Z, Za = ma(x) exp(-λα(x) pα(xa)), where λα (α = 1,2,...,d) are the Lagrange multipliers associated with the d first-order reproducing conditions, and Z is the partition function. For a smooth prior, the basis functions are also smooth within the convex hull of the nodal set. For a constant prior (state of complete ignorance), H is identical to the Shannon entropy (modulo a constant). From the above expression, the satisfaction of the partition of unity property or the zeroth-order moment constraint (∑a φa = 1) is evident. On considering the dual problem (λ* = argminλ ln Z(λ)), well-established numerical algorithms (steepest descent, Newton's method) are utilized to solve the unconstrained convex minimization problem. Once the Lagrange multipliers are determined, the basis functions are computed using the above equation. As with the appeal of radial basis function approximations, here also the spatial dimension does not pose a limitation, since the maximum-entropy formulation and its numerical implementation readily extends to any space-dimension.
Essential Boundary Conditions
Imposition of essential boundary conditions and numerical integration of the Galerkin weak form are the two main chinks in the armor of meshfree methods. In AO (2006), the key properties of convex approximants are established, among which, the facet-reducing-property is pertinent. On any facet (point x belongs to the facet) of the boundary of the convex hull, only nodes that are located on the facet have non-zero basis function values at x. This immediately implies that essential boundary conditions can be imposed as in finite elements—note that on weakly convex polygons (polygons with mid-side nodes), interpolation is not met at the middle node. For imposition of essential boundary conditions, cardinality is not a necessary condition. This has not been well-recognized in the meshfree literature, where nodal interpolation through singular weight functions, use of transformations, or other approaches have been pursued. Among the existing techniques to impose essential boundary conditions, Nitsche's method and the blending technique of Huerta and Fernández-Méndez (2000) are promising; use of Lagrange multipliers, modified variational principles, or techniques that directly couple finite elements to MLS approximations are less appealing within a standard Galerkin method. Imposing linear essential boundary conditions in maximum- entropy meshfree methods can be done as in finite elements for any weight function as a prior (globally- or compactly-supported). This appears to be a simple and elegant means to impose essential boundary conditions in meshfree methods.
Numerical Integration
The issue of essential boundary condition has been discussed, and now the topic of numerical integration is briefly touched upon. If background cells are used within a Galerkin implementation, all the approximation schemes that we have discussed would induce numerical integration errors (with Gauss quadrature) since the intersection of the supports of the basis functions do not coincide with the background cells. Rather than integrating over the precise supports of the basis functions or develop more sophisticated integration rules (both are not very viable alternatives), the development of nodal integration (collocation) schemes is a potentially fruitful direction. Research in stabilized nodal integration techniques for meshfree methods emanated from the work of Chen et al. (2001). In a Lagrangian formulation, on using nodal integration no remapping is required since all quantities are stored at the nodal locations. Large deformation analysis is one of the main application areas where meshfree methods can potentially replace finite elements. The caveat on nodal integration techniques is that ensuring exactness on the patch test alone is insufficient. A better understanding of its relationship with assumed strain methods, stabilization techniques to prevent pressure oscillations, and robust performance in the incompressible limit are needed. Some of these issues are discussed in greater depth for the four-node tetrahedron by Puso and Solberg (2005). Ultimately, for meshfree methods to gain prominence and to reach the mainstream, the conception of nodally integrated stable meshfree (particle) methods is deemed to be critical. All comments and feedback are most welcome.
posted by N. Sukumar at 2:20 PM | 2 comments
Friday, August 11, 2006
MESHFREE METHODS
Meshfree methods go back to the seventies. The major difference to finite element methods is that the domain of interest is discretized only with nodes, often called particles. These particles interact via meshfree shape functions in a continuum framework similar as finite elements do although particle “connectivities” can change over the course of a simulation. This flexibility of meshfree methods was exploited in applications with large deformations in fluid and solid mechanics, e.g. free-surface flow, metal forming, fracture and fragmentation, to name a few. Most meshfree methods are pure Lagrangian in character though there are a few publications on meshfree methods formulated in an Eulerian (or ALE) description, e.g. Fries 2005. The most important advantages of meshfree methods compared to finite elements are: their higher order continuous shape functions that can be exploited e.g. for thin shells or gradient-enhanced constitutive models; higher smoothness; simpler incorporation of h- and p-adaptivity and certain advantages in crack problems (no mesh alignment sensitivity; some methods do not need to enforce crack path continuity). The most important drawback of meshfree methods is probably their higher computational cost, regardless of some instabilities that certain meshfree methods have.
One of the oldest meshfree methods is the Smooth Particle Hydrodynamics (SPH) developed by Lucy and Gingold and Monaghan in 1971. SPH was first applied in astrophysics to model phenomena such as supernova and was later employed in fluid dynamics. In 1993, Petschek and Libersky extended SPH to solid mechanics. Early SPH formulations suffered from spurious instabilites and inconsistencies that were a hot topic of investigations, especially in the 90s. Many corrected SPH versions were developed that improved either the stability behavior of SPH or its consistency. Consistency, often referred to as completeness in a Galerkin framework, means the ability to reproduce exactly a polynomial of certain order. A method is called n-th order consistent (or complete) if it is able to reproduce a polynomial of order n exactly. While most SPH methods are based on the strong form, a wide class of methods was developed based on the weak form.
Based on an idea of Lancaster and Salkauskas and probably motivated by the purpose to model arbitrary crack propagation without computational expensive remeshing, the group of Prof. Ted Belytschko developed the elementfree Galerkin (EFG) method in 1994. The EFG method is based on an MLS approximation and avoids inconsistencies inherent of some SPH formulations. In 1995, the group of Prof. W.K. Liu proposed a similar method, the Reproducing Kernel Particle Method (RKPM). Though the method is very similar to the EFG method, it originates from wavelets rather than from curve-fitting. The first method that employed an extrinsic basis was the hp-cloud method of Duarte and Oden. In contrast to the EFG and RKPM method, the hp-cloud method increases the order of consistency (or completeness) by an extrinsic basis. In other words, additional unknowns were introduced into the variational formulation to increase the order of completeness. This idea was later adopted (and modified) in the XFEM context though the extrinsic basis (or extrinsic enrichment) was used to describe the crack kinematics rather than to increase the order of completeness in a p-refinement sense. The group of Prof. Ivo Babuska discovered certain similarities between finite element and meshfree methods and formulated a general framework, the Partition of Unity Finite Element Method (PUFEM), that is similar to the generalized Finite Element Method (GFEM) of Strouboulis and colleagues. Another very popular meshfree method worth mentioning is the Meshless Local Petrov Galerkin (MLPG) method developed by the group of Prof. S.N. Atluri in 1998. The main difference of the MLPG method to all other methods mentioned above is that local weak forms are generated over overlapping sub-domains rather than using global weak forms. The integration of the weak form is then carried out in these local sub-domains. In this context, Atluri introduced the notion “truly” meshfree methods since truly meshfree methods do not need a construction of a background mesh that is needed for integration.
The issue of integration in meshfree methods was a topic of investigations since its early times. Methods that are based on a global weak form may use three different types of integration schemes: nodal integration, stress-point integration and integration (usually Gauss quadrature) based on a background mesh that does not necessarily need to be aligned with the particles. Nodal integration is from the computational point of view the easiest and cheapest way to build the discrete equations but similar to reduced finite elements, meshfree methods based on nodal integration suffer from an instability due to rank deficiency. Adding stress points to the nodes can eliminate (or at least alleviate) this instability. The term stress-point integration comes from the fact that additional nodes were added to the particles where only stresses are evaluated. All kinematic values are obtained from the "original" particles. The concept of stress points was actually first introduced in one dimension in an SPH setting by Dyka. This concept was introduced into higher order dimensions by Randles and Libersky and the group of Prof. Belytschko. There is a subtle difference between the stress point integration of Belytschko and Randles and Libersky. While Randles and Libersky evaluate stresses only at the stress points, Belytschko and colleagues evaluate stresses also at the nodes. Meanwhile, many different versions of stress point integration were developed. The most accurate way to obtain the governing equations is Gauss quadrature. In contrast to finite elements, integration in meshfree methods is not exact. A background mesh has to be constructed and usually a larger number of quadrature points as in finite elements are used. For example, while usually 4 quadrature points are used in linear quadrilateral finite elements, Belytschko recommend the use of 16 quadrature points in the EFG method.
Another important issue regarding the stability of meshfree methods is related to the kernel function, often called window or weighting function. The kernel function is somehow related to the meshfree shape function (depending on the method). The kernel function can be expressed in terms of material coordinates or spatial coordinates. We then refer to Lagrangian or Eulerian kernels, respectively. Early meshfree methods such as SPH use an Eulerian kernel. Many meshfree methods that are based on Eulerian kernels have a so-called tensile instability, meaning the method gets unstable when tensile stresses occur. In a sequence of papers by Belytschko, it was shown that the tensile instability is caused by the use of an Eulerian kernel. Meshfree methods based on Lagrangian kernels do not show this type of instability. Moreover, it was demonstrated that for some given strain softening constitutive models, methods based on Eulerian kernels were not able to detect the onset of material instability correctly while methods that use Lagrangian kernels were able to detect the onset of material instability correctly. This is a striking drawback of Eulerian kernels when one wishes to model fracture. However, a general stability analysis is difficult to perform and will of course also depend on the underlying constitutive model. Note also, that Libersky proposed a method based on Eulerian kernels and showed stability in the tension region though he did not consider strain softening materials. For too large deformations, methods based on Lagrangian kernels tend to get unstable as well since the domain of influence in the current configuration can become extremely distorted. Some recent methods to model fracture try to combine Lagrangian and Eulerian kernels though certain aspects still have to be studied, e.g. what happens in the transition area or how are additional unknowns treated (in case an enrichment is used).
In meshfree methods, we talk about approximation rather than interpolation since the meshfree shape functions do not satisfy the Kronecker-delta property. This entails certain difficulties in imposing essential boundary conditions. Probably the simplest way to impose essential boundary conditions is by boundary collocations. Another opportunity is to use the penalty method, Lagrange multipliers or Nitsche’s method. Coupling to finite elements is one more alternative that was extensively pursued in the literature-in this case, the essential boundary conditions are imposed in the finite element domain. In the first coupling method by Belytschko, the meshfree nodes have to be located at the finite element nodes and a blending domain is constructed such that the shape functions are zero at the finite element boundary. In this first approach, discontinuous strains were obtained at the meshfree-finite element interface. Many improvements were made and methods were developed that exploit the advantage of both meshfree methods and finite elements, e.g. the Moving Particle Finite Element Method (MPFEM) by Su Hao et al. or the Reproducing Kernel Element Method (RKEM) developed by the group of Prof. W.K. Liu. Meanwhile, several textbooks on meshfree methods have been published, W.K.Liu and S. Li, T. Belytschko, S.N.Atluri and some books by Prof. G.R. Liu.
Many meshfree methods were developed and applied in fracture mechanics to model arbitrary crack growth. The crack was initially modeled with the visibility criterion, i.e. the crack was considered to be opaque and the meshfree shape functions were cut at the crack surface. Later, the diffraction and transparency method was used instead of the visibility criterion since they remove certain inconsistencies of the visibility criterion. With the development of the extended finite element method (XFEM) in 1999, meshfree methods got a very strong competitor. The major drawback of meshfree methods with respect to XFEM is their higher computational cost. It is also less complex to incorporate XFEM into existing FE codes. There are still some efforts to modify meshfree methods with respect to material failure and fracture. However, it seems that much less attention is paid to the development of meshfree methods these days compared to the 90s. Nevertheless, meshfree methods still are applied frequently in many different areas, from molecular dynamics, biomechanics to fluid dynamics.
posted by TimonRabczuk at 12:10 PM | 2 comments
Thursday, August 03, 2006
Wikipedia Entry for Meshfree Methods
Wikipedia is the free, on-line encyclopedia.
The philosophy behind Wikipedia is an interesting one. Anyone with access to the internet and a web browser can edit an entry. Over time, entries will develop as more and more people find them on the web and make changes. While it is entirely possible that incorrect information can be posted, the notion is that, with time, it will subsequently be removed.
Along these lines it may make sense for the community to (collectively) edit the Wikipedia entry for Meshfree Methods.
posted by John D at 8:15 AM | 0 comments
Wednesday, August 02, 2006
Welcome to the Meshfree Methods Blog
This blog was established in August of 2006, by the USACM Specialty Committee on Meshfree Methods. The goal is to provide a central resource for researchers working with meshfree and related methods.
Suggestions as to links, content, or anything else from the community are welcome in the comments section to this post (or any other).
If you would like to post on this blog and are a member of the USACM or IACM, please send an email to John Dolbow at jdolbow@duke.edu to be added to the member list.
posted by John D at 12:53 PM | 2 comments
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9242250323295593, "perplexity_flag": "middle"}
|
http://quant.stackexchange.com/questions/tagged/option-pricing+differential-equations
|
Tagged Questions
3answers
712 views
What tools are used to numerically solve differential equations in Quantitative Finance?
There are a lot of Quantitative Finance models (e.g. Black-Scholes) which are formulated in terms of partial differential equations. What is a standard approach in Quantitative Finance to solve these ...
1answer
262 views
An equation for European options
So, any European type option we can characterize with a payoff function $P(S)$ where $S$ is a price of an underlying at the maturity. Let us consider some model $M$ such that within this model ...
10answers
1k views
Using Black-Scholes equations to “buy” stocks
From what I understand, Black-Scholes equation in finance is used to price options which are a contract between a potential buyer and a seller. Can I use this mathematical framework to "buy" a stock? ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9176531434059143, "perplexity_flag": "middle"}
|
http://en.m.wikibooks.org/wiki/Robotics/Computer_Control/Control_Architectures/Swarm_Robotics
|
# Robotics/Computer Control/Control Architectures/Swarm Robotics
## Overview:
Swarm robotics is a relatively new study in the world of autonomous robotics. Swarm optimizations however have been around for over a decade for some functions. These techniques have recently been adapted and applied to autonomous robotic applications. On the topic of swarm robotics the word swarm is defined with several variations. On first hearing the phrase “swarm robotics” many simply think of basic interactions of robots such as follow the leader or simply data relayed between robots all traveling towards a common goal. However swarm robotics has taken further advancement in complex problem solving. Robots may now be given a goal and a basic suggestion on how to meet that goal. This suggestion may not be the most efficient way to achieve the goal much less even make it to the goal. So there must be a way to improve upon or optimize that suggestion. Optimization functions are then introduced. Several techniques covered here include Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO). These optimization functions will be covered in depth in the following.
↑Jump back a section
## Basic Swarm Robotics:
As stated earlier basic swarm robotics includes simple functions such as follow the leader and robot-to-robot interactions such as tag. In these scenarios the robots do not actively improve upon their current goal. They simply interact with one another in basic principles to achieve a predefined goal.
### Examples:
• EU Funded: I-SWARM
300px
Swarm robotics is seen many times on a microrobotic scale as pictured above due to a number of factors. One of the main contributing factors to size is size in itself. A robot swarm composed of large robots can be bulky, expensive, and inefficient in data gathering. Obviously size does limit onboard equipment but that is where the principle of swarm shines. All the robots combined contribute a little more detail to the situation offering a complete picture. The swarm above funded by the EU is designed to combine all electrical aspects onto a single flexible board. The swarm has no major pending goals at this time and their behaviors are modeled after biological insects.
• Rice University: James McLurkin
5px
James McLurkin, assistant professor at Rice University has developed his own DARPA funded swarm. His experimentation has explored physical data structures such as swarm arrangement according to ID number. Along with this he has also written a uniform dispersion algorithm. Additional swarm research includes his work on Robot Speed Ratios which is the study of message propagation versus the physical speed of the robot. This study was to uncover issues with the robots moving faster than they can physically interpret relayed messages.
• Carnegie Mellon: Magnetic Swarms
Research at Carnegie Mellon has been underway with a magnetic swarm of robots. The goal of this project is to develop a swarm of robots that could magnetically shape shift without any plane limitations. This could be used on the microscopic level as a three dimensional physical modeling aid. The robots pictured above use an array of electromagnets to control interactions between adjacent robots.
ACO is one of the simplest swarm optimization techniques proven to work reliably. This optimization technique was drawn directly from nature itself, showing that it was already a viable solution. The concept requires a brief understanding of how ants function in nature. Keep in mind, this definition has been tailored to our “perfect” example. In nature, ants wander at random. If an ant is to discover food it begins emitting pheromones and starts to wander back to its nest. At some point another ant discovers food and wanders back to the nest. This cycle happens many times. If an ant wandering back to the colony is to discover one of these pheromone trails it is more likely to travel it. If a fork is encountered biasing occurs via pheromone intensity. This means, ants are more likely to travel a trail with a stronger intensity of pheromones. Pheromones being volatile will evaporate over time. This means that the more ants traveling a trail the more pheromones are emitted. However if the same number of ants travel a long trail and a short trail over the same period of time the intensity of pheromones will be greater on the shorter trail. This means bias will always be placed on the shorter, more efficient trails thus naturally selecting the best solution.
This same process can be simulated in our electrical world. A proposed solution is established and the “ants” navigate to it. The initial solution is then modified to create the most efficient solution. Below is a basic depiction of ACO. As seen there are initially two trails established which happen to overlap in the middle. These two trails offer up four possible solutions to the optimized problem. Ants then randomly travel all four possibilities. The possibilities are narrowed down via the pheromones. All four solutions are traveled by the same number of ants but the shorter solutions have a greater intensity of pheromones biasing more ants towards the shorter trails thus eliminating all other solutions and converging on the best fit solution.
PSO steps further away from stochastic functions towards an even more intelligent solution to a problem. Much of the PSO functionality is modeled after natural associations such as a flock of birds or a school of fish. In a PSO function, a generic solution is first given. Then a swarm of “particles” are initialized. They begin to attempt to make it to their goal. As they move towards their goal, they monitor certain other particles within a user-defined range of their own location. The particles around them contribute to a local best (lbest). This is the most fit solution discovered so far. Each particle also monitors its personal best (pbest) along with the global best (gbest) so far. Keep in mind that in some simulations lbest=gbest and the particles around each other do not have a local effect on each other. The particles are then biased towards a compilation of the lbest and pbest along with gbest. Their new velocity vector also includes a random scaling factor to prevent too early of a convergence on an inefficient solution. An equation is provided below to provide basic understanding of a swarm monitoring their pbest and gbest.
```General PSO Equation
$v[] = v[] + c1 * rand() * (pbest[] - present[]) + c2 * rand() * (gbest[] - present[]) (a) present[] = persent[] + v[] (b)$
```
This equation also provides learning factors which in this case are scaled to “2”. From this equation we can see that the new velocity is equal to the scaled value of the sum of the difference of pbest-present and gbest-present. As it can be seen this optimization techniques offers a highly accurate convergence on the best fit solution to the problem.
450px
• Provided above is a link to a java applet developed to help understand the operation of PSO. The animation applet can be found here.
↑Jump back a section
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9429823160171509, "perplexity_flag": "middle"}
|
http://www.haskell.org/haskellwiki/index.php?title=User:Michiexile/MATH198/Lecture_8&diff=31700&oldid=31531
|
# User:Michiexile/MATH198/Lecture 8
### From HaskellWiki
(Difference between revisions)
| | | | |
|----------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| | | Current revision (15:45, 22 November 2009) (edit) (undo)m | |
| (One intermediate revision not shown.) | | | |
| Line 91: | | Line 91: | |
| | Now, unwrapping the definitions in place, we notice that we will have elements <math>o, n(o), n(n(o), \dots</math> in <math>I</math>, and the initiality will force us to not have any ''other'' elements floating around. Also, intiality will prevent us from having any elements not in this minimally forced list. | | Now, unwrapping the definitions in place, we notice that we will have elements <math>o, n(o), n(n(o), \dots</math> in <math>I</math>, and the initiality will force us to not have any ''other'' elements floating around. Also, intiality will prevent us from having any elements not in this minimally forced list. |
| | | | |
| - | We can rename the elements to form something more recognizable - by equating an element in <math>I</math> with the number of applications of <math>n</math> to <math>o</math>. This yields, for us, elements <math>0, 1, 2, \dots</math> with one function that picks out the <math>0</math>, and another that gives us the cussessor. | + | We can rename the elements to form something more recognizable - by equating an element in <math>I</math> with the number of applications of <math>n</math> to <math>o</math>. This yields, for us, elements <math>0, 1, 2, \dots</math> with one function that picks out the <math>0</math>, and another that gives us the successor. |
| | | | |
| | This should be recognizable as exactly the natural numbers; with just enough structure on them to make the principle of mathematical induction work: suppose we can prove some statement <math>P(0)</math>, and we can extend a proof of <math>P(n)</math> to <math>P(n+1)</math>. Then induction tells us that the statement holds for all <math>P(n)</math>. | | This should be recognizable as exactly the natural numbers; with just enough structure on them to make the principle of mathematical induction work: suppose we can prove some statement <math>P(0)</math>, and we can extend a proof of <math>P(n)</math> to <math>P(n+1)</math>. Then induction tells us that the statement holds for all <math>P(n)</math>. |
| Line 172: | | Line 172: | |
| | to form the diagram | | to form the diagram |
| | :[[Image:LambekDiagram.png]] | | :[[Image:LambekDiagram.png]] |
| - | where <math>f</math> is induced by initiality, since <math>Pa:PPa\to Pa</math> is also a <math>P</math>-algebra. | + | where <math>f</math> is induced by initiality, since <math>Pa \colon PPA \to PA</math> is also a <math>P</math>-algebra. |
| | | | |
| | The diagram above commutes, and thus <math>af = 1_{PA}</math> and <math>fa = 1_A</math>. Thus <math>f</math> is an inverse to <math>a</math>. QED. | | The diagram above commutes, and thus <math>af = 1_{PA}</math> and <math>fa = 1_A</math>. Thus <math>f</math> is an inverse to <math>a</math>. QED. |
## Contents
### 1 Algebras over monads
We recall from the last lecture the definition of an Eilenberg-Moore algebra over a monad T = (T,η,μ):
Definition An algebra over a monad T in a category C (a T-algebra) is a morphism $\alpha\in C(TA, A)$, such that the diagrams below both commute:
While a monad corresponds to the imposition of some structure on the objects in a category, an algebra over that monad corresponds to some evaluation of that structure.
#### 1.1 Example: monoids
Let T be the Kleene star monad - the one we get from the adjunction of free and forgetful functors between Monoids and Sets. Then a T-algebra on a set A is equivalent to a monoid structure on A.
Indeed, if we have a monoid structure on A, given by $m:A^2\to A$ and $u:1\to A$, we can construct a T-algebra by
α([]) = u
$\alpha([a_1,a_2,\dots,a_n]) = m(a_1,\alpha([a_2,\dots,a_n]))$
This gives us, indeed, a T-algebra structure on A. Associativity and unity follows from the corresponding properties in the monoid.
On the other hand, if we have a T-algebra structure on A, we can construct a monoid structure by setting
u = α([])
m(a,b) = α([a,b])
It is clear that associativity of m follows from the associativity of α, and unitality of u follows from the unitality of α.
#### 1.2 Example: Vector spaces
We have free and forgetful functors
$Set \to^{free} k-Vect \to^{forgetful} Set$
forming an adjoint pair; where the free functor takes a set S and returns the vector space with basis S; while the forgetful functor takes a vector space and returns the set of all its elements.
The composition of these yields a monad T in Set taking a set S to the set of all formal linear combinations of elements in S. The monad multiplication takes formal linear combinations of formal linear combinations and multiplies them out:
3(2v + 5w) − 5(3v + 2w) = 6v + 15w − 15v − 10w = − 9v + 5w
A T-algebra is a map $\alpha: TA\to A$ that acts like a vector space in the sense that $\alpha(\sum\alpha_i(\sum\beta_jv_j)) = \alpha(\sum\alpha_i\beta_jv_j)$.
We can define $\lambda\cdot v = \alpha(\lambda v)$ and v + w = α(v + w). The operations thus defined are associative, distributive, commutative, and everything else we could wish for in order to define a vector space - precisely because the operations inside TA are, and α is associative.
The moral behind these examples is that using monads and monad algebras, we have significant power in defining and studying algebraic structures with categorical and algebraic tools. This paradigm ties in closely with the theory of operads - which has its origins in topology, but has come to good use within certain branches of universal algebra.
An (non-symmetric) operad is a graded set $O = \bigoplus_i O_i$ equipped with composition operations $\circ_i: O_n\oplus O_m\to O_{n+m-1}$ that obey certain unity and associativity conditions. As it turns out, non-symmetric operads correspond to the summands in a monad with polynomial underlying functor, and from a non-symmetric operad we can construct a corresponding monad.
The designator non-symmetric floats in this text o avoid dealing with the slightly more general theory of symmetric operads - which allow us to resort the input arguments, thus including the symmetrizer of a symmetric monoidal category in the entire definition.
To read more about these correspondences, I can recommend you start with: the blog posts Monads in Mathematics here: [1]
### 2 Algebras over endofunctors
Suppose we started out with an endofunctor that is not the underlying functor of a monad - or an endofunctor for which we don't want to settle on a monadic structure. We can still do a lot of the Eilenberg-Moore machinery on this endofunctor - but we don't get quite the power of algebraic specification that monads offer us. At the core, here, lies the lack of associativity for a generic endofunctor - and algebras over endofunctors, once defined, will correspond to non-associative correspondences to their monadic counterparts.
Definition For an endofunctor $P:C\to C$, we define a P-algebra to be an arrow $\alpha\in C(PA,A)$.
A homomorphism of P-algebras $\alpha\to\beta$ is some arrow $f:A\to B$ such that the diagram below commutes:
This homomorphism definition does not need much work to apply to the monadic case as well.
#### 2.1 Example: Groups
A group is a set G with operations $u: 1\to G, i: G\to G, m: G\times G\to G$, such that u is a unit for m, m is associative, and i is an inverse.
Ignoring for a moment the properties, the theory of groups is captured by these three maps, or by a diagram
We can summarize the diagram as
$1+G+G\times G \mapsto^{[u,i,m]} G$
and thus recognize that groups are some equationally defined subcategory of the category of T-algebras for the polynomial functor $T(X) = 1 + X + X\times X$. The subcategory is full, since if we have two algebras $\gamma: T(G)\to G$ and $\eta: T(H)\to H$, that both lie within the subcategory that fulfills all the additional axioms, then certainly any morphism $\gamma\to\eta$ will be compatible with the structure maps, and thus will be a group homomorphism.
We shall denote the category of P-algebras in a category C by P − Alg(C), or just P − Alg if the category is implicitly understood.
This category is wider than the corresponding concept for a monad. We don't require the kind of associativity we would for a monad - we just lock down the underlying structure. This distinction is best understood with an example:
The free monoids monad has monoids for its algebras. On the other hand, we can pick out the underlying functor of that monad, forgetting about the unit and multiplication. An algebra over this structure is a slightly more general object: we no longer require $(a\cdot b)\cdot c = a\cdot (b\cdot c)$, and thus, the theory we get is that of a magma. We have concatenation, but we can't drop the brackets, and so we get something more reminiscent of a binary tree.
### 3 Initial P-algebras and recursion
Consider the polynomical functor P(X) = 1 + X on the category of sets. It's algebras form a category, by the definitions above - and an algebra needs to pick out one special element 0, and one endomorphism T, for a given set.
What would an initial object in this category of P-algebras look like? It would be an object I equipped with maps $1 \to^o I \leftarrow^n I$. For any other pair of maps $a: 1\to X, s: X\to X$, we'd have a unique arrow $u: I\to X$ such that
commutes, or in equations such that
u(o) = a
u(n(x)) = s(u(x))
Now, unwrapping the definitions in place, we notice that we will have elements $o, n(o), n(n(o), \dots$ in I, and the initiality will force us to not have any other elements floating around. Also, intiality will prevent us from having any elements not in this minimally forced list.
We can rename the elements to form something more recognizable - by equating an element in I with the number of applications of n to o. This yields, for us, elements $0, 1, 2, \dots$ with one function that picks out the 0, and another that gives us the successor.
This should be recognizable as exactly the natural numbers; with just enough structure on them to make the principle of mathematical induction work: suppose we can prove some statement P(0), and we can extend a proof of P(n) to P(n + 1). Then induction tells us that the statement holds for all P(n).
More importantly, recursive definitions of functions from natural numbers can be performed here by choosing an appropriate algebra mapping to.
This correspondence between the initial object of P(X) = 1 + X is the reason such an initial object in a category with coproducts and terminal objects is called a natural numbers object.
For another example, we consider the functor $P(X) = 1 + X\times X$.
Pop Quiz Can you think of a structure with this as underlying defining functor?
An initial $1+X\times X$-algebra would be some diagram
$1 \to^o I \leftarrow^m I\times I$
such that for any other such diagram
$1 \to^a X \leftarrow^* X\times X$
we have a unique arrow $u:I\to X$ such that
commutes.
Unwrapping the definition, working over Sets again, we find we are forced to have some element * , the image of o. Any two elements S,T in the set give rise to some (S,T), which we can view as being the binary tree
The same way that we could construct induction as an algebra map from a natural numbers object, we can use this object to construct a tree-shaped induction; and similarily, we can develop what amounts to the theory of structural induction using these more general approaches to induction.
#### 3.1 Example of structural induction
Using the structure of $1+X\times X$-algebras we shall prove the following statement:
Proposition The number of leaves in a binary tree is one more than the number of internal nodes.
Proof We write down the actual Haskell data type for the binary tree initial algebra.
```data Tree = Leaf | Node Tree Tree
nLeaves Leaf = 1
nLeaves (Node s t) = nLeaves s + nLeaves t
nNodes Leaf = 0
nNodes (Node s t) = 1 + nNodes s + nNodes t```
Now, it is clear, as a base case, that for the no-nodes tree
Leaf
:
`nLeaves Leaf = 1 + nNodes Leaf`
For the structural induction, now, we consider some binary tree, where we assume the statement to be known for each of the two subtrees. Hence, we have
```tree = Node s t
nLeaves s = 1 + nNodes s
nLeaves t = 1 + nNodes t```
and we may compute
```nLeaves tree = nLeaves s + nLeaves t
= 1 + nNodes s + 1 + nNodes t
= 2 + nNodes s + nNodes t
nNodes tree = 1 + nNodes s + nNodes t```
Now, since the statement is proven for each of the cases in the structural description of the data, it follows form the principle of structural induction that the proof is finished.
In order to really nail down what we are doing here, we need to define what we mean by predicates in a strict manner. There is a way to do this using fibrations, but this reaches far outside the scope of this course. For the really interested reader, I'll refer to [2].
Another way to do this is to introduce a topos, and work it all out in terms of its internal logic, but again, this reaches outside the scope of this course.
#### 3.2 Lambek's lemma
What we do when we write a recursive data type definition in Haskell really to some extent is to define a data type as the initial algebra of the corresponding functor. This intuitive equivalence is vindicated by the following
Lemma Lambek If $P: C\to C$ has an initial algebra I, then P(I) = I.
Proof Let $a: PA\to A$ be an initial P-algebra. We can apply P again, and get a chain
$PPA \to^{Pa} PA \to^a A$
We can fill out the diagram
to form the diagram
where f is induced by initiality, since $Pa \colon PPA \to PA$ is also a P-algebra.
The diagram above commutes, and thus af = 1PA and fa = 1A. Thus f is an inverse to a. QED.
Thus, by Lambek's lemma we know that if $P_A(X) = 1 + A\times X$ then for that PA, the initial algebra - should it exist - will fulfill$I = 1 + A\times I$, which in turn is exactly what we write, defining this, in Haskell code:
`List a = Nil | Cons a List`
#### 3.3 Recursive definitions with the unique maps from the initial algebra
Consider the following PA(X)-algebra structure $l: P_A(\mathbb N)\to\mathbb N$ on the natural numbers:
```l(*) = 0
l(a,n) = 1 + n```
We get a unique map f from the initial algebra for PA(X) (lists of elements of type A) to $\mathbb N$ from this definition. This map will fulfill:
```f(Nil) = l(*) = 0
f(Cons a xs) = l(a,f(xs)) = 1 + f(xs)```
which starts taking on the shape of the usual definition of the length of a list:
```length(Nil) = 0
length(Cons a xs) = 1 + length(xs)```
And thus, the machinery of endofunctor algebras gives us a strong method for doing recursive definitions in a theoretically sound manner.
### 4 Homework
Complete credit will be given for 8 of the 13 questions.
1. Find a monad whose algebras are associative algebras: vector spaces with a binary, associative, unitary operation (multiplication) defined on them. Factorize the monad into a free/forgetful adjoint pair.
2. Find an endofunctor of Hask whose initial object describes trees that are either binary of ternary at each point, carrying values from some A in the leaves.
3. Write an implementation of the monad of vector spaces in Haskell. If this is tricky, restrict the domain of the monad to, say, a 3-element set, and implement the specific example of a 3-dimensional vector space as a monad. Hint: [3] has written about this approach.
4. Find a $X\mapsto 1+A\times X$-algebra L such that the unique map from the initial algebra I to L results in the function that will reverse a given list.
5. Find a $X\mapsto 1+A\times X$-algebra structure on the object 1 + A that will pick out the first element of a list, if possible.
6. Find a $X\mapsto \mathbb N+X\times X$-algebra structure on the object $\mathbb N$ that will pick out the sum of the leaf values for the binary tree in the initial object.
7. Complete the proof of Lambek's lemma by proving the diagram commutes.
8. * We define a coalgebra for an endofunctor T to be some arrow $\gamma: A \to TA$. If T is a comonad - i.e. equipped with a counit $\epsilon: T\to 1$ and a cocomposition $\Delta: T\to T\times T$, then we define a coalgebra for the comonad T to additionally fulfill $\gamma\circ T\gamma = \gamma\circ\Delta$ (compatibility) and $\epsilon_A\circ\gamma = 1_A$ (counitality).
1. (2pt) Prove that if T is an endofunctor, then if T has an initial algebra, then it is a coalgebra. Does T necessarily have a final coalgebra?
2. (2pt) Prove that if U,F are an adjoint pair, then FU forms a comonad.
3. (2pt) Describe a final coalgebra over the comonad formed from the free/forgetful adjunction between the categories of Monoids and Sets.
4. (2pt) Describe a final coalgebra over the endofunctor P(X) = 1 + X.
5. (2pt) Describe a final coalgebra over the endofunctor $P(X) = 1 + A\times X$.
6. (2pt) Prove that if $c: C\to PC$ is a final coalgebra for an endofunctor $P:C\to C$, then c is an isomorphism.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 52, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8837273120880127, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/250416/probability-density-function-of-sigma-x-mu?answertab=votes
|
# Probability density function of $\sigma X + \mu$.
I need a head check on this one. Suppose $\sigma,\mu \in \mathbb{R}$ and $\sigma \neq 0$. Let $X$ be a random variable with density $f_X(x)$. I think that the random variable $Z:= \sigma X + \mu$ has density $$f_Z(x) = \frac{1}{|\sigma|}f_X\left(\frac{x-\mu}{\sigma} \right).$$
Splitting the cases $\sigma > 0$ and $\sigma<0$ my proof comes down to a simple change of variables. However, I can't find any mention of a formula like this on wikipedia or google. Is it so simple that no one thought to mention it, or am I misunderstanding?
-
For one thing, you expect that $\int_{\mathbb R} f_Z(x) \, dx = 1$, but if you use the formula for $f_Z(x)$ that you have, you don't get 1. – echoone Dec 4 '12 at 2:57
@echoone And why not? – Sasha Dec 4 '12 at 3:08
@echoone $\int_{\mathbb{R}} f_Z(x)dx = \int_{\mathbb{R}} f_X(x)dx = 1$. Don't forget the Jacobian when you change variables. – nullUser Dec 4 '12 at 3:09
Oops. Yeah, you guys are right. What I am wondering now is how you are dealing with the $\sigma < 0$ case. – echoone Dec 4 '12 at 3:37
1
– Learner Dec 4 '12 at 3:44
show 1 more comment
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9176804423332214, "perplexity_flag": "head"}
|
http://mathhelpforum.com/differential-equations/107844-behavior-solutions.html
|
# Thread:
1. ## behavior of solutions
If a>0, b>0, c=0, show that all solutions of ay''+by'+cy=0 approach a constant that depends on the initial conditions as t approaches to infinity. Determine this constant for the initial conditons $y(0)=y_0, y'(0)=y_0'$
-------------------------------------------------------
my approach:
$y=c_1e^{r_1 t}+c_2e^{r_2 t}$
$b^2-4ac=b^2$
$r_1=0, r_2= \frac{-b-b}{2a}$
$y=c_1e^0+c_2e^{\frac{-bt}{a}}$
$y(0)=c_1=y_0$
$y'(t)=c_2(-{\frac{b}{a})e^{\frac{-bt}{a}}}$
$t=0, c_2=y_0'(-{\frac{a}{b}})$
$y=c_1e^0+c_2e^{\frac{-bt}{a}}$
$=c_1+c_2e^{\frac{-bt}{a}}$
$=y_0+{y_0'}(-{\frac{a}{b}})e^{\frac{-bt}{a}}$
which approaches to $y_0$ as t approaches to infinity.
(according to the solution manual, the constant should be $y_0+{\frac{a}{b}}y_0'$ )
2. Hello elmo
Originally Posted by elmo
If a>0, b>0, c=0, show that all solutions of ay''+by'+cy=0 approach a constant that depends on the initial conditions as t approaches to infinity. Determine this constant for the initial conditons $y(0)=y_0, y'(0)=y_0'$
-------------------------------------------------------
my approach:
$y=c_1e^{r_1 t}+c_2e^{r_2 t}$
$b^2-4ac=b^2$
$r_1=0, r_2= \frac{-b-b}{2a}$
$y=c_1e^0+c_2e^{\frac{-bt}{a}}$
$\color{red}y(0)=c_1=y_0$
$y'(t)=c_2(-{\frac{b}{a})e^{\frac{-bt}{a}}}$
$t=0, c_2=y_0'(-{\frac{a}{b}})$
$y=c_1e^0+c_2e^{\frac{-bt}{a}}$
$=c_1+c_2e^{\frac{-bt}{a}}$
$=y_0+{y_0'}(-{\frac{a}{b}})e^{\frac{-bt}{a}}$
which approaches to $y_0$ as t approaches to infinity.
(according to the solution manual, the constant should be $y_0+{\frac{a}{b}}y_0'$ )
See the line I've highlighted in red. Can you see your mistake? $e^0=1$, not $0$.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 28, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9018083810806274, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/tagged/special-relativity?page=8&sort=newest&pagesize=15
|
# Tagged Questions
The special theory of relativity describes the motion and dynamics of objects moving at significant fractions of the speed of light.
1answer
71 views
### Is it best to look at light as a particle when trying to understand special relativity?
So my course about special relativity explains time dilation using a moving train, where one sends up (i.e. perpendicular to the direction of movement) a light pulse which gets reflected etc. (a ...
2answers
648 views
### Definitions: 'locality' vs 'causality'
I'm having trouble unambiguously interpreting many answers here due to the fact that the terms locality and causality are sometimes used interchangeably, while other times seem to mean very different ...
1answer
327 views
### Two trains at the speed of light [closed]
Okay. Two trains travelling towards each other at the speed of light. So, from one train (let's call it train A), the other is moving towards it at the speed of light. The other train shines a torch. ...
1answer
109 views
### Galileo's dictum and how light cannot violate it
Okay. So I've been told that the speed of light is constant and cannot violate Galileo's dictum, but even if it weren't constant (in a vacuum), how would it violate it anyway? Say you are on a train ...
2answers
274 views
### Special Relativity: What differential equation describes an accelerated object from a non-inertial reference frame?
I am looking for a set of differential equations (to be solved numerically for an educational program) that would describe the position and apparent time of an accelerated clock relative to a ...
2answers
217 views
### A relative time dilation paradox.
Let us assume that there are two astronauts A and B who are floating in space. A sees B passing by and vice versa. A sends signals to B every minute. According to A since B is moving his clock will be ...
2answers
284 views
### relativistic acceleration equation
A Starship is going to accelerate from 0 to some final four-velocity, but it cannot accelerate faster than $g_M$, otherwise it will crush the astronauts. what is the appropiate equation to constraint ...
11answers
2k views
### Could the Heisenberg Uncertainty Principle turn out to be false?
While investigating the EPR Paradox, it seems like only two options are given, when there could be a third that is not mentioned - Heisenberg's Uncertainty Principle being given up. The setup is this ...
1answer
162 views
### Expression for the (relativistic) mass of the photon [closed]
I started learning a bit ahead from an old physics book, and they were discussing the photoelectric effect and after that Planck's hypotheses and energy quantas. The book said that the mass of a ...
5answers
420 views
### Special Relativity Second Postulate
That the speed of light is constant for all inertial frames is the second postulate of special relativity but this does not means that nothing can travel faster than light. so is it possible the ...
2answers
161 views
### Why does the (relativistic) mass change & why?
I studied that when an object moves with a velocity comparable to the velocity of light the (relativistic) mass changes...but I am really eager to know how does this alteration take place....If anyone ...
2answers
222 views
### Using Lorentz Invariance of Charge To Calculate Current Density
I'm attempting a problem from Zwiebach: A First Course in String Theory and am completely stuck. Could anyone give me a hint? The problem is as follows. Consider $S$, $S'$ two Lorentz frames with ...
4answers
333 views
### Why are objects at rest in motion through spacetime at the speed of light?
I read that an object at rest has such a stupendous amount of energy, $E=mc^2$ because it's effectively in motion through space-time at the speed of light and it's traveling through the time dimension ...
2answers
610 views
### Does the speed of light vary in noninertial frames?
The speed of light is the same in all inertial frames. Does it change from a non-inertial frame to another? Can it be zero? If it is not constant in non-inertial frames, is it still bounded from ...
3answers
191 views
### First Postulate of Special Relativity: What does it mean?
Wikipedia has this quote: Special principle of relativity: If a system of coordinates K is chosen so that, in relation to it, physical laws hold good in their simplest form, the same laws hold ...
2answers
116 views
### Relativity - time dilation
I'm learning about relativity and I'm having some issues with it and the twin paradox. I found many questions and answers on this subject but they did not answer my specific problem. In my thought ...
1answer
248 views
### Time dilation - why the observers see each other the slow one but then one of them is older or younger?
I'm in trouble with time dilation: Suppose that there's two people on the Earth (A,B), they are twins and each other has a clock. (So they are at the same reference frame). B travels in a spaceship ...
2answers
204 views
### Does the potential energy related to a particle determines its rest mass?
Would it be possible to determine the rest mass of a particle by computing the potential energy related to the presence (existence) of the particle, if this potential energy could be determined ...
2answers
178 views
### Inner product of four-vectors in special relativity
Reference) "Feynman lectures on Physics Vol.3 , p.7-4 ." With four vectors $x_{\mu} = (t,x,y,z)\ , \ p_{\mu} = (E,p_{x},p_{y},p_{z})$ the inner product of these two four vectors is scalar invariant ...
3answers
134 views
### Having trouble seeing the similarity between these two energy-momentum tensors
Leonard Suskind gives the following formulation of the energy-momentum tensor in his Stanford lectures on GR (#10, I believe): T_{\mu \nu}=\partial_{\mu}\phi \partial_{\nu}\phi-\frac{1}{2}g_{\mu ...
0answers
114 views
### How do I extend the Lorentz transformation metric to dimensions>4?
How do I extend the general Lorentz transformation matrix (not just a boost along an axis, but in directions where the dx1/dt, dx2/dt, dx3/dt, components are all not zero. For eg. as on the Wikipedia ...
2answers
111 views
### What should I call an n>4 dimensional Minkowski metric?
I am manipulating an $nxn$ metric where $n$ is often $> 4$, depending on the model. The $00$ component is always tau*constant, as in the Minkowski metric, but the signs on all components might be ...
1answer
119 views
### quantum curvature
If a state can be a superposition of energy states, and mass equals energy (special relativity), and mass curves space-time (general relativity), then could we say that space-time around a quantum ...
2answers
257 views
### Hamiltonian mechanics and special relativity?
Is there a relativistic version of Hamiltonian mechanics? If so, how is it formulated (what are the main equations and the form of Hamiltonian)? Is it a common framework, if not then why? It would be ...
0answers
86 views
### What is the proper time used in relativistic non equilibrium statistical physics?
In the literature one often finds covariant relativistic generalizations of classical non equilibrium statistical equations (Boltzmann, Vlasov, Landau, fokker-planck, etc...) but I wonder what is the ...
0answers
88 views
### maximum distance between accelerating objects started at different times [closed]
Let there be two objects that have zero relative velocity with respect to each other in an inertial frame. If they both undergo identical accelerations, but one starts the acceleration at t = T1 and ...
1answer
282 views
### Is 4-volume element a scalar or a pseudoscalar in special relativity?
In general relativity 4-volume element $\mathrm{d}^4 x = \mathrm{d} x^0\mathrm{d} x^1 \mathrm{d} x^2\mathrm{d} x^3$ is clearly a pseudoscalar (or scalar density) of weight 1 since it transforms as ...
1answer
207 views
### Relative Speed vs speed of light [duplicate]
Possible Duplicate: Travelling faster than the speed of light Someting almost faster than light traveling on something else almost faster than light I've got two questions which are ...
2answers
358 views
### Is the potential energy in a compressed spring a Lorentz invariant?
The total energy of an object comes from the time part of the four-momentum, and so isn't a Lorentz invariant. On the other hand, is the potential energy of a compressed spring a Lorentz invariant?
3answers
166 views
### Does the Lorentz transformation not apply to light?
Since you would know that light always travels at the constant velocity with respect to all frame of reference ....according to relativity whenever we are traveling at speed of light our time with ...
1answer
171 views
### Designing a plausible faster than light drive: the Space Skip Drive [duplicate]
Possible Duplicate: Is the Portal feasible in real life? I'm designing a plausible faster-than-light (FTL) drive for a SF universe. Here's what I have so far. I'm aware of existing ...
1answer
195 views
### Speed of light is not fixed?
In my research, I found that the speed of light is not fixed. IS it true? Namely, We know that light refracts when the medium it travels through changes. Actually, light travels in the same medium ...
3answers
668 views
### “Relativistic Baseball”
On Randall Munroe’s new blog “what if”, he answers the question: “What would happen if you tried to hit a baseball pitched at 90% the speed of light?” http://what-if.xkcd.com/1/ He concludes: ...
6answers
820 views
### Simple Experiment to Demonstrate Special Relativity
I am trying to think of a good experiment that can be done for under \$250 or so that would demonstrate some aspect of Special Relativity. Ideally this will be done in a few years with my kids when ...
1answer
541 views
### Phase space volume and relativity
Much of statistical mechanics is derived from Liouville's theorem, which can be stated as "the phase space volume occupied by an ensemble of isolated systems is conserved over time." (I'm mostly ...
2answers
283 views
### Why is ${\partial^i}{\partial_i\phi}$ = ${\partial^i {\phi}}{\partial_i{\phi}}$?
This notation can be found on page 254 of Victor Stenger's Comprehensible Cosmos and in David Tong's Lectures on QFT (Equation 2.4 http://www.damtp.cam.ac.uk/user/tong/qft/two.pdf), and in EDIT: on ...
5answers
1k views
### How do photons travel at a speed that should be impossible to attain?
If it requires infinite amount of energy to travel at the speed of light then how photon attains this speed? Its source is never infinitely sourced.
4answers
530 views
### Why did we need relativity to derive $E=mc^2$?
Okay, so the way I understand one of the "derivations" of $E=mc^2$ is roughly as follows: We observe a light bulb floating in space. It appears motionless. It gives off a brief flash of light. We ...
0answers
154 views
### Lorentz transformations of the polarization vector
Let $\bf{n}'$ be a unit vector in the direction of a wavevector in the plasma rest frame and $\bf{B'}$ be a unit vector along the magnetic field in the plasma rest frame. The electric field of a ...
1answer
62 views
### Tower redshift paradox
If photons are emitted at intervals a, from the top of a tower of height $h$, down to earth, is this formula correct for the intervals b in which they are received at earth? $b=a(1-gh/c^2)$ If so, how ...
2answers
477 views
### How to calculate speed difference between objects close to the speed of light?
If two different objects (for example two rockets) move in opposite direction at close to the speed of light (for example 0.8c and 0.9c), how do I calculate the difference in speed between the two ...
1answer
75 views
### In what subfields and how fare can the “naive limit” of special relativity be carried?
Even if many interesting similarities between the classical and the quantum mechanical framework have been worked out, e.g. in the subject of deformation quantization, in general, there are some ...
3answers
227 views
### How to connect Einstein's Special Relativity(SR) with General Relativity(GR)?
How Einstein's SR becomes GR? $$ds^2=dr^2-c^2dt^2,$$ $$ds^2=g_{\mu\nu}dx^{\mu}dx^{\nu}.$$ When the $s$ is constant $ds^2=0$, isn't it true? How to connect Einstein's SR with GR? What is the ...
1answer
104 views
### What kinds of inconsistencies would one get if one starts with Lorentz noninvariant Lagrangian of QFT?
What kinds of inconsistencies would one get if one starts with Lorentz noninvariant Lagrangian of QFT? The question is motivated by this preprint arXiv:1203.0609 by Murayama and Watanabe. Also, what ...
2answers
161 views
### Are there any known potentially useful nontrivial irreducible representations of the Lorentz Group $O(3,1)$ of dimension bigger than 4? Examples?
Are there any known potentially useful, nontrivial, irreducible representations of the Lorentz Group $O(3,1)$ of dimension more than $4$? Examples? A $5$-dimensional representation? EDIT: Is there ...
4answers
346 views
### Does $p=mc$ hold for photons?
Known that $E=hf$, $p=hf/c=h/\lambda$, then if $p=mc$, where $m$ is the (relativistic) mass, then $E=mc^2$ follows directly as an algebraic fact. Is this the case?
2answers
168 views
### Are the higher-order terms in the series for energy really negligible?
To show that energy in special relativity reduces to $E=m+mv^2/2$ for low velocities, if we make a Taylor expansion of $m\gamma$ around $v=0$ we get $$E=m+mv^2/2+3mv^4/8+\cdots$$ But why can we cutoff ...
1answer
204 views
### Here's a way to transmit data faster than the speed of light [duplicate]
Possible Duplicate: Is it possible for information to be transmitted faster than light by using a rigid pole? Assume there is a long rod or a string connecting two points separated by a ...
2answers
194 views
### why is there only one inertial frame that $ct$ and $x$ are orthogonal?
It is very long time ago that I took a physics lesson, so I want to refresh my memory. I think I learned that there is only one inertial frame in Minkowski spacetime (or special relativity time) that ...
1answer
194 views
### Why can't this speed be measured?
Superman and Supergirl were playing catch. When Superman is moving with a speed of 0.800c relative to Supergirl, he threw a ball to Supergirl with a speed of 0.600c relative to him. a. ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9221042394638062, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-algebra/87535-noetherian-ring-finitely-generated-module-print.html
|
# Noetherian ring, finitely generated module
Printable View
• May 4th 2009, 10:19 PM
xianghu324
Noetherian ring, finitely generated module
I need help with these two questions:
1. Let $R$ be a Noetherian ring, $I$ an ideal, and $N \subseteq M$ be $R$-modules. Suppose $R$ is reduced and $P_1, \ldots, P_n$ are the minimal primes of $R$. Prove that $M$ is a finitely generated $R$-module iff $\frac{M}{P_iM}$ is a finitely generated $\frac{R}{P_i}$-module for each $i=1, \ldots, n$.
2. Now suppose $R$ is Artinian (need not be reduced). Prove that $M$ is Noetherian iff $M$ is Artinian.
I know how to do #1 $(\Rightarrow)$. But, I don't see how to do #1 $(\Leftarrow)$. Also, for #2, I am stuck on both implications right now. Thanks in advance.
• May 5th 2009, 12:13 AM
NonCommAlg
Quote:
Originally Posted by xianghu324
I need help with these two questions:
1. Let $R$ be a Noetherian ring, $I$ an ideal, and $N \subseteq M$ be $R$-modules. Suppose $R$ is reduced and $P_1, \ldots, P_n$ are the minimal primes of $R$. Prove that $M$ is a finitely generated $R$-module iff $\frac{M}{P_iM}$ is a finitely generated $\frac{R}{P_i}$-module for each $i=1, \ldots, n$.
since $R$ is reduced, the nilradical of $R$ is (0) and thus $P_1 P_2 \cdots P_n=(0).$ now see my solution to part 2) of the problem in this thread: http://www.mathhelpforum.com/math-he...-r-module.html
Quote:
2. Now suppose $R$ is Artinian (need not be reduced). Prove that $M$ is Noetherian iff $M$ is Artinian.
it'd help if instead of just posting your problem, you'd also tell us what you know! for example, do you know that every Artinian ring is Noetherian? (Hopkins-Levitzki theorem) or do you know
about semisimple rings or composition series?
• May 5th 2009, 09:56 AM
xianghu324
Quote:
Originally Posted by NonCommAlg
since $R$ is reduced, the nilradical of $R$ is (0) and thus $P_1 P_2 \cdots P_n=(0).$ now see my solution to part 2) of the problem in this thread: http://www.mathhelpforum.com/math-he...-r-module.html
it'd help if instead of just posting your problem, you'd also tell us what you know! for example, do you know that every Artinian ring is Noetherian? (Hopkins-Levitzki theorem) or do you know
about semisimple rings or composition series?
Hi NonCommAlg,
We have not covered semi-simple rings yet. However, I do know that:
$M$ has a comp series $\Leftrightarrow$ $M$ is Artinian and Noetherian.
$R$ is Artin ring $\Leftrightarrow$ $R$ is Noetherian and $\text{dim} (R)=0$.
$R$ is Artin ring $\Leftrightarrow$ $R$ is Noetherian and each prime ideal is maximal.
Using comp series seems the best way to go. But I am not seeing how to use a comp series on $M$, as we have much info on M right now.
• May 5th 2009, 12:09 PM
NonCommAlg
Quote:
Originally Posted by xianghu324
2. Now suppose $R$ is Artinian (need not be reduced). Prove that $M$ is Noetherian iff $M$ is Artinian.
if $M$ is Noetherian, then it's finitely generated and we know that a finitely generated module over an Artinian ring is Artinian. conversely, suppose $M$ is Artinian. let $\overline{R}=\frac{R}{\text{Nil}(R)}.$
then $\overline{R}$ is a reduced Noetherian ring (because every Artinian ring is Noetherian). we know that $R$ has finitely many primes and every prime is maximal (so all primes are minimal).
let $P_1, \cdots , P_n$ be the prime ideals of $R.$ let $\overline{P_j}=\frac{P_j}{\text{Nil}(R)}.$ then $\overline{P_j}$ are the (minimal) primes of $\overline{R}.$ let $\overline{M}=\frac{M}{\text{Nil}(R)M}.$ then $\overline{M}$ is an $\overline{R}$ module and $\overline{M_j}=\frac{\overline{M}}{\overline{P_j} \ \overline{M}}$ is a Noetherian
$R_j=\frac{\overline{R}}{\overline{P_j}}$ module because $R_j$ is a field and $\overline{M_j}$ is an Artinian $R_j$ module and we know that over a field "Artinian" and "Noetherian" are equivalent. thus $\overline{M_j}$ is finitely generated $R_j$
module and so by part 1) of your problem, $\overline{M}$ is finitely generated $\overline{R}$ module. so by the link i already gave you, $M$ is finitely generated $R$ module and hence Noetherian, since $R$
is Noetherian.
All times are GMT -8. The time now is 09:19 AM.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 77, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.925948441028595, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/5697/coloring-the-faces-of-a-hypercube
|
# Coloring the faces of a hypercube
I will restate the 3-D version of the problem. In how many ways can you color a regular cube with 2 colors up to a rotational isometry. The answer is of course a special case of Burnsides Lemma which can be used to show that the number of distinct face permutations is $\frac{1}{24}(N^6 + 3N^4 + 12N^3 + 8N^2)$ where $N$ is the number of colors used, $2$ in this case, which gives us an answer of $10$ distinct permutations.
My question is how can you expand this to a tesseract, and then more generally, to any hypercube. The rotational isometries of a cube are somewhat simple to comprehend, but the rotational isometries o hypercube are difficult to grasp (even after an hour of playing with the 4d rubiks cube app)
My initial thought was to consider the expansion of a tesseract as 8 interconnected cubelets. For the two color case each one of these cubelets has 10 distinct states. Which gives us $10^8$ non-distinct permutations of the hypercube. Or more generally, this reduces to how many distinct ways one can color a 8 faced 3-dimensional figure using 10 colors. So the complicated question of 4-dimensional isometries reduces to 3 dimensional rotational isometries of a octahedron.
But I'm a physicist so what the hell do I know :D
-
For d-dimensional polytopes the faces of dimension d-1 are usually referred to as facets of the polytope. For a 4-polytope there are two-dimensional faces (2-faces) and the 3-faces or facets. – Joseph Malkevitch Sep 29 '10 at 18:05
## 3 Answers
Once someone else has done the hard work and come up with the answer, as Qiaochu has done, then it's easy to prove it on an ad hoc basis as this interloper (me) will do.
Consider an $n$-cube with faces coloured red and blue. Consider a set of $n$ balls. Colour the $j$-th ball red, blue or green according to the colours of the $j$-th pair of faces in the cube, that is if they both have the same colour give the ball that colour, else if they have different colours colour the ball green. Now a symmetry operation on the cube will shuffle the balls so the number of each colour of balls will be the same. Conversely given the $n$ coloured balls there will be various coloured cubes giving rise to the colour distribution of the balls, but these will all be equivalent under the full symmetry group of the cube. However for $n\ge 2$ each $2$-coloured cube has a non-rotational symmetry, so any two coloured cubes giving rise to the same colour distribution of balls are related by a rotation. Hence the number of $2$-coloured $n$-cubes up rotation is the same as the number of triples $(r,b,g)$ of nonnegative integers adding to $n$ and there are ${n+2\choose 2}$ of these.
One can repeat this with $k$-colours for the cube. This time one needs ${k+1\choose 2}$ colours for the balls, and the argument ensuring that rotations and the full symmetry group give the same answer requires $n>{k\choose 2}$ (why?).
Added (30/9/2010) One can get the general result using these considerations. For $n>{k\choose 2}$ one gets $${n+(k^2+k)/2-1\choose (k^2+k)/2-1}$$ colourings up to rotations. For $n\le {k\choose 2}$ there are $k$-colourings having no improper (determinant $-1$) symmetries. But any such colouring has no nontrivial symmetries at all. If two opposite faces have the same colour, one can reflect through a hyperplane parallel to them. So assume that no colour is opposite itself. Then if two pairs of opposite faces have the same two colours between them, one can reflect in a plane $x_i=x_j$or $x_i=-x_j$. Hence in these "special" colourings each pair of opposite faces has a distinct pair of distinct colours. Up to symmetry there are $${(k^2-k)/2\choose n}$$ of these special colourings. So for $n\le{k\choose 2}$ there are $${(k^2-k)/2\choose n}+{n+(k^2+k)/2-1\choose (k^2+k)/2-1}$$ up to rotational symmetries.
-
I am sure this works, but could you explain more carefully the step "However for n≥2 each 2-coloured cube has a non-rotational symmetry, so any two coloured cubes giving rise to the same colour distribution of balls are related by a rotation"? – Qiaochu Yuan Sep 30 '10 at 8:38
First let's state the special case of Burnside's lemma that is relevant here.
Lemma: Let $G$ be a finite group acting on a finite set $X$. The number of ways to color the elements of $X$ with $z$ different colors, up to the action of $G$, is
$$\frac{1}{|G|} \sum_{g \in G} z^{c(g)}$$
where $c(g)$ is the number of cycles in the cycle decomposition of $g$ acting on $X$. (Proof.)
Here $X$ is the set of faces of a hypercube. In $n$ dimensions there are $2n$ such faces. $G$ is the subgroup of index $2$ in the hyperoctahedral group with determinant $1$, also known as the Coxeter group $D_n$. So our job now is to count, for each $k$, the number of elements of $D_n$ with $k$ cycles in the action on $X$.
Now, note that to analyze the action of $D_n$ on the faces it suffices to analyze the action of $D_n$ on the midpoints of the faces. But these are precisely the $2n$ points $(0, 0, ... \pm 1, ..., 0, 0)$, so writing the elements of $D_n$ as signed permutation matrices is very well-suited to analyzing their action on these points; in particular, it suffices to figure out the answer for a single signed cycle. But this turns out to be very simple: there are either one or two cycles depending on whether the product of the signs is $-1$ or $+1$.
(It might be helpful here to play with a specific example. Consider $\left[ \begin{array}{cc} 0 & 1 & 0 \\ 0 & 0 & -1 \\ -1 & 0 & 0 \end{array} \right]$ acting on the six points $(\pm 1, 0, 0), (0, \pm 1, 0), (0, 0, \pm 1)$ to get a feel for what's going on in the general case.)
From here I think it's easiest to work with generating functions because the combinatorics get a little messy. Begin with the identity
$$\sum_{m \ge 0} Z(S_n) t^n = \exp \left( z_1 t + \frac{z_2 t^2}{2} + \frac{z_3 t^3}{3} + ... \right)$$
where $Z(S_n)$ is the cycle index polynomial for the action of $S_n$ on $\{ 1, 2, ... n \}$. Each $z_i$ is the term that controls cycles of length $i$. We want to modify this generating function so that it tells us how the cycles in $D_n$ work. There are $2^i$ signed cycles of length $i$ which come in two flavors: half of them have positive sign product (two unsigned cycles) and half of them have negative sign product (one unsigned cycle), so to keep track of the total number of unsigned cycles we should replace $z_i$ with $2^{i-1} z^2 + 2^{i-1} z$. We also have to keep in mind that the determinant of a signed cycle is its sign product multiplied by $(-1)^{i+1}$, and we only want permutations with determinant $1$. So the generating function we want is
$$\sum_{n \ge 0} f_n(z) \frac{t^n}{n!} = \frac{1}{2} \left( \exp \left( \sum_{i \ge 1} \frac{2^{i-1} z^2 + 2^{i-1} z) t^i}{i} \right) + \exp \left( \sum_{i \ge 1} (-1)^{i+1} \frac{2^{i-1} z^2 - 2^{i-1} z) t^i}{i} \right) \right)$$
where $f_n(z) = \sum_{g \in D_n} z^{c(g)}$. After some simplification the above becomes
$$\sum_{n \ge 0} \frac{1}{|D_n|} f_n(z) t^n = \frac{1}{(1 - t)^{(z^2+z)/2}} + (1+t)^{(z^2-z)/2}.$$
Substituting $z = 2$ gives, at last, the answer
$$\sum_{n \ge 0} \frac{1}{|D_n|} f_n(2) t^n = \frac{1}{(1 - t)^3} + 1 + t.$$
In other words, for $n \ge 2$ we simply have $\frac{1}{|D_n|} f_n(2) = {n+2 \choose 2}$. This is such a simple answer that there should be a direct proof of it. I'll keep working on it. (There is a rather straightforward proof of the corresponding result with "hypercube" replaced by "simplex," so my guess is something along those lines is possible here.)
-
I am having trouble describing the direct proof. It goes something like this: for the analogous problem for a simplex we basically sort the colors according to some order by permutation. Here we instead sort opposite pairs of colors, but there is a low-dimensional restriction on how we can do this because of the determinant thing. – Qiaochu Yuan Sep 29 '10 at 20:16
I presume you want to colour the faces of the cube? If so there are $2^6$ colourings as there are six faces. But do you want two colourings which are equivalent under rotations (or rotations and reflections) to count as as the same? If so then this is a classic application of Burnside's lemma.
Added If you want to do the same in higher dimensions then you need a good handle on the symmetries of an $n$-cube. Its convenient to centre the cube at the origin and take its vertices to be all points $(\pm1,\ldots,\pm1)$. Then all symmetries of the cube fix the origin and their matrices are "signed" permutation matrices: each row and column is all zero save for one entry which is $\pm1$. The rotations correspond to signed permutation matrices of determinant 1. The faces are the sets defined by $x_i=1$ and $x_i=-1$. Again use Burnside's lemma but the book-keeping gets fiddlier.
-
1
The question was really about extending Burnside's lemma to a higher dimensional figure. How would one answer the same question, but about a hypercube. – crasic Sep 29 '10 at 9:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 77, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9385862350463867, "perplexity_flag": "head"}
|
http://mathhelpforum.com/calculus/114568-finding-values.html
|
# Thread:
1. ## Finding Values
For the function f(x)= x^(6)(Aln(x)-1), where A is a constant.
How would I find the value(s) of A if e^(5) is a critical point of f(x)?
2. Originally Posted by ctran
For the function f(x)= x^(6)(Aln(x)-1), where A is a constant.
How would I find the value(s) of A if e^(5) is a critical point of f(x)?
If $e^5$ is a critical point of $f(x)$, then
$f'\left(e^5\right) = 0$.
Solve for A.
3. Hello, ctran!
Given the function: $f(x)\:=\: x^6(A\ln x-1)$, where $A$ is a constant.
How would I find the value(s) of $A$ if $e^5$ is a critical point of $f(x)$ ?
I assume you know what a critical point is . . .
If $x = e^5$ is a critical point of $f(x)$, then: . $f'(e^5) \:=\:0$
Find $f'(x)\!:\;\;f'(x) \;=\;x^6\cdot\frac{A}{x} + 6x^5(A\ln x - 1) \;=\;x^5(A + 6A\ln x - 6)$
Since $f'(e^5) \,=\,0$, we have:
. $f'(e^5) \;=\;(e^5)^5\bigg[A + 6A\ln(e^5) - 6\bigg] \:=\:0 \quad\Rightarrow\quad e^{25}\bigg[A + 6A\!\cdot\!5 -6\bigg] \:=\:0$
. . $e^{25}\bigg[31A - 6\bigg] \:=\:0 \quad\Rightarrow\quad 31A - 6 \:=\:0 \quad\Rightarrow\quad 31A \:=\:6$
Therefore: . $A \;=\;\frac{6}{31}$
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8207865953445435, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/calculus/65253-integral-heaviside.html
|
# Thread:
1. ## Integral and Heaviside
How to calculate integral if Heaviside sign is under integral? What does that Heaviside sign change? For example what does happen if I substitute Heaviside sign with plain number 1?
Attached Thumbnails
2. Hello,
Originally Posted by totalnewbie
How to calculate integral if Heaviside sign is under integral? What does that Heaviside sign change? For example what does happen if I substitute Heaviside sign with plain number 1?
Actually, it helps to restrain the limits of integration.
We know that the Heaviside function is 1 iff its argument is positive.
Hence :
$\bold{1}(1-x^2-y^2)=\left\{\begin{array}{ll} 1 \quad if ~ 1-x^2-y^2>0 \\<br /> 0 \quad if ~ 1-x^2-y^2 \leq 0 \end{array}\right.$
So find the new boundaries of x and y by "solving" $1-x^2-y^2>0$
That is $x^2+y^2<1$
Do you know how to deal with that ?
3. Originally Posted by Moo
Hello,
Actually, it helps to restrain the limits of integration.
We know that the Heaviside function is 1 iff its argument is positive.
Hence :
$\bold{1}(1-x^2-y^2)=\left\{\begin{array}{ll} 1 \quad if ~ 1-x^2-y^2>0 \\<br /> 0 \quad if ~ 1-x^2-y^2 \leq 0 \end{array}\right.$
So find the new boundaries of x and y by "solving" $1-x^2-y^2>0$
That is $x^2+y^2<1$
Do you know how to deal with that ?
Not sure.
4. Originally Posted by totalnewbie
Not sure.
It's like dealing with double integrals.
$x^2+y^2<1$
So $x^2<1-y^2$
Since $x^2 \geq 0$, $1-y^2>0$, so that makes y between -1 and 1.
And $x^2<1-y^2 \implies |x|< \sqrt{1-y^2}$ (we can write it because 1-y²>0)
So x is between $-\sqrt{1-y^2}$ and $\sqrt{1-y^2}$
The integral is now :
$\int_{-1}^1 \int_{-\sqrt{1-y^2}}^{\sqrt{1-y^2}} a*1 ~ dxdy$
5. Originally Posted by Moo
It's like dealing with double integrals.
$x^2+y^2<1$
So $x^2<1-y^2$
Since $x^2 \geq 0$, $1-y^2>0$, so that makes y between -1 and 1.
And $x^2<1-y^2 \implies |x|< \sqrt{1-y^2}$ (we can write it because 1-y²>0)
So x is between $-\sqrt{1-y^2}$ and $\sqrt{1-y^2}$
The integral is now :
$\int_{-1}^1 \int_{-\sqrt{1-y^2}}^{\sqrt{1-y^2}} a*1 ~ dxdy$
Moo, I am not trying to insult you, but there seems to be an easier way.
When you integrate, $\iint_D 1 ~ dA$ you just need to compute the area of $D$.
6. Originally Posted by ThePerfectHacker
Moo, I am not trying to insult you, but there seems to be an easier way.
When you integrate, $\iint_D 1 ~ dA$ you just need to compute the area of $D$.
I don't know the formula
And furthermore when I see what you've written, I cannot stop thiking about measures ><
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 26, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9349558353424072, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/1064/perfect-set-without-rationals/1067
|
# Perfect set without rationals
Give an example of a perfect set in $\mathbb R^n$ that does not contain any of the rationals.
(Or prove that it does not exist).
-
5
What is a perfect set? Also, this looks like a homework problem. – Kevin Lin Jul 28 '10 at 21:36
2
Why are you asking if you apparently know the answer? – Mariano Suárez-Alvarez♦ Jul 28 '10 at 21:37
3
@Kevin: Line Bundle is not asking homework questions - no-one would have homework on so many different areas at once – Casebash Jul 28 '10 at 21:37
1
– Mariano Suárez-Alvarez♦ Jul 28 '10 at 21:38
2
– Kevin Lin Jul 28 '10 at 21:40
show 3 more comments
## 5 Answers
An easy example comes from the fact that a number with an infinite continued fraction expansion is irrational (and conversely). The set of all irrationals with continued fractions consisting only of 1's and 2's in any arrangement is a perfect set of irrational numbers.
-
4
This is a really nice answer! – Akhil Mathew Jul 29 '10 at 4:52
2
Beautiful answer! – BBischof Jul 30 '10 at 7:16
Consider the set of reals x whose binary expansion, if you look only at the even digit places, is some fixed non-eventually-repeating pattern z. This is perfect, since we have branching at the odd digits, but they are all irrational, since z is not eventually repeating.
You can draw a picture of this set, and it looks something like the Cantor middle third set, except that you divide into four pieces, and take either first+third or second+fourth, depending on the digits of z.
Another solution: Begin with an interval having irrational endpoints, and perform the usual Cantor middle-third construction, except that at stage n, be sure to exclude the n-th rational number (with respect to some fixed enumeration), using a subinterval having irrational endpoints. By systematically excluding all rational numbers, you have the desired perfect set of irrationals.
(Hi François!)
-
2
Hi Joel! Nice answer! – François G. Dorais Jul 30 '10 at 18:30
It is well-known that $C$ is homeomorphic to $C \times C$, where $C$ is the Cantor set, as both are zero-dimensional compact metric spaces without isolated points. So $C$ contains uncountably many disjoint homeomorphic copies of $C$ and all but countably many of them can contain rationals...
-
It can be proven that the Cantor set is perfect. Certainly, this contains infinitely many rationals. How about modifying the construction of the Cantor set by defining: $I_1 = [\sqrt{2},\sqrt{2}+1/3] \cup [\sqrt{2}+2/3,\sqrt{2}+1]$, $I_2 = [\sqrt{2},\sqrt{2}+1/9] \cup [\sqrt{2}+2/9,\sqrt{2}+1/3]\cup[\sqrt{2}+2/3,\sqrt{2}+7/9]\cup[\sqrt{2}+8/9,\sqrt{2}+1]$, etc and setting $P = \cap_{i=1}^\infty I_i$? Each of end points of any interval that appears in the construction is a member of $P$ and is irrational. However, is it true that all the members of $P$ must be an end point of a certain interval? I am tempted to think so because we can prove that $P$ does not contain any interval.
-
There are only countably many endpoints, but a nontrivial perfect set is uncountable. – JDH Jul 30 '10 at 1:45
Let $C$ represent the Cantor set. Consider the set $C+\alpha$, where $\alpha=\sum_{n=1}^{\infty}10^{2^n}$.
-
5
And this works because...? – Andres Caicedo Feb 4 '11 at 6:22
– Andres Caicedo May 5 at 19:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9299696087837219, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/3400/probabilistic-knot-theory/4806
|
## probabilistic knot theory
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Take a smooth closed curve in the plane. At each self-intersection, randomly choose one of the two pieces and lift it up just out of the plane. (Perturb the curve so there are no triple intersections.) I don't really know anything about knot theory, so I don't even know if I'm asking the right questions here, but I'm wondering: What is the probability that this is the trivial knot? What can we say about how knotted this knot might be, and with what probabilities? (Measure "knottedness" in whatever way you like.) More generally, can we say anything about the probability of the various possible values in the usual invariants that people use to study knots?
I only have an idea of how to approach the first question, and even then it's only by brute force. I was just playing around with the easiest cases, and I think that with 0, 1, or 2 intersections, all knots are trivial, and with 3 intersections the knot is trivial with probability 75%.
A general analysis should presumably involve calculating the probability that we can simplify using various Reidemeister moves, but I don't know how to incorporate this. I'd imagine a computer could brute-force the first few cases pretty easily (I'm not so bold as to venture an order-of-magnitude guess on whether it's the first few hundred or the first few million)...
-
## 7 Answers
One possible route to a model of random knots would be through the braid group. Every knot can be expressed (non-uniquely) as the closure of a braid. So, for example, you could apply the braid generators uniformly $n$ times across $k$ strands, close the braid using your favorite closure, and then ask this question sensibly. I don't think you can directly ask about the $n \to \infty$ limit for the braid group, though, because I don't think there is a notion of uniform measure for that group. Actually, perhaps I will post this as a separate question, but is the braid group amenable? I would wager that in this model, the probability of having the unknot decreases very quickly with $n$ and $k$.
To test if you have the unknot, it is conjectured that you just have to check the Jones polynomial. But even this is still hard in general, unless even if you happen to have a quantum computer. :)
(Edit: Thanks Greg Kuperberg, below, for the correction.)
-
1
Even if you have a quantum computer. arxiv.org/abs/0908.0512 – Greg Kuperberg Nov 4 2009 at 4:37
1
Consider the subgroup consisting of braids where all but the last strand stand still, and the last strand winds around them. This subgroup is clearly free, being the fundamental group of an (n-1)-punctured disk, and so braid groups are not amenable. – Tom Church Nov 6 2009 at 23:44
As an aside, the process of "combing" a braid is given by filtering by the cosets of such subgroups. This exhibits the pure braid group as an iterated extension of free groups; this was used by Arnol'd in his beautiful computation of the cohomology ring of the pure braid group [MR242196]. (Arnold's paper is very readable, but the translation can be a bit hard to track down; for anyone who is interested, I have some handwritten notes on the cohomology of braid groups, including Arnold's proof, on my website.) – Tom Church Nov 6 2009 at 23:45
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
I suspect that answering this question would be very difficult. A more reasonable question would be to try to understand the distribution of the various numerical knot invariants. I don't know any references off hand, but I know I've heard talks on the subject.
If you want to try to make conjectures about this kind of thing, then I highly recommend Livingston's table of knot invariants, which contains an amazing amount of data.
-
The model you propose for random knots obviously depends on the curve you draw initially, so I'm not sure this is the most natural model to consider. People have certainly looked at various probability distributions of (various classes of) knots (or knot projections). One of the immediate problems is that even just doing computer simulations is hard since determining the knot type - or just unkottedness - of a given knot diagram is highly non-trivial.
A paper which does this with Vassiliev Invariants (a certain important class of polynomial-like invariants of knots) appears in the volume "Random Knotting and Linking", edited by Millett and Summers (look at the paper by Deguchi and Tsurusaki). Other papers in this volume may interest you, too.
To the best of my knowledge, there's no really model of random knots for which the question "what is the probability that the knot is trivial" has a known answer, except that as the number of crossing tends to infinity this probability likely approaches 0 (as anyone who left a set of mobile headphones in his pocket for more than five minutes knows).
-
Is it not clear which Reidemeister moves one should apply to simplify a knot? I guess I'd imagine it could be that it needs to get more complicated before it can get any simpler. – Aaron Mazel-Gee Oct 30 2009 at 5:59
Yes, that is correct: it is not clear which move to apply to a given not diagram in order to simplify it (whatever "simplify" means), and some trivial knot diagrams have the property that they cannot be reduced to the unknot without introducing some additional intersections first. If this hadn't been the case, this whole beautiful theory would have been reduced to a simple algorithm... – Alon Amit Oct 30 2009 at 6:09
In general, deciding which Reidemeister moves to do is a very difficult problem. While algorithms have been known for a long time (at least since the work of Haken in the '60's), they are very complicated and not at all practical. In particular, there are examples where you have to introduce a huge number of new crossings before your knot can start to be simplified. An accessible source of examples is Kauffman-Lambropoulou's paper "Hard Unknots and Collapsing Tangles", available on the arXiv here : arxiv.org/abs/math/0601525 – Andy Putman Oct 30 2009 at 6:09
1
One approach that might be interesting and which avoids hard unknots is to represent a knot as a grid diagram and see whether it admits any series of commutation moves followed by a destabilization (these are some of the grid diagram analogues of Reidemeister moves). It's not clear what such a series of moves would be or how to figure this out efficiently, but a knot is the unknot iff you can repeat this until you get the trivial 2x2 diagram. See Dynnikov's paper "Arc-presentations of links. Monotonic simplification", arXiv:0208153. – Steven Sivek Oct 30 2009 at 13:17
You should look at the Knot Atlas, which contains lots of tabulated knot invariants, although often not in as convenient form as Livingston's site.
Really, though, you want to download the KnotTheory` package (presupposing you have access to Mathematica), available at the Knot Atlas. With a bit of fiddling, you can easily run experiments of the type you describe. It can calculate many invariants from the presentation of a knot.
Best of all, you should go and think about "physically realistic" models of random knots, and then try to implement such a model using one of the many knot notations the KnotTheory` package understands. There are some good papers written about this subject, and even some real life experiments with strings in boxes being shaken up and down! :-)
-
People studying the topology of DNA use various models of random knots. Most of them have some geometric input as DNA has an actual length and doesn't want to bend too much.
-
I believe there are a few known "random knotting" type results out there. Not the kind of results the original poster requested, but related. Take n points in R^3 generated by a random walk, join them up (cyclicly) by straight lines. That's generically a knot. And with probability 1 (as n gets large) it's non-trivial and has a trefoil knot summand. The paper by Deguchi and Tsurusaki in "Lectures at Knots '96" provides references for these results although I've never read them in detail.
-
Something isn't right there! You must be thinking about some limit as n goes to the infinity? – Scott Morrison♦ Nov 4 2009 at 4:10
You're too fast. Edit made before I read your comment. – Ryan Budney Nov 4 2009 at 4:12
Just to reply to comments above: if you stick to "random" diagrams with at most say 30 crossings, I am confident that SnapPea will give you answers essentially immediately.
Also, to second suggestions already made, the probabilities you get will depend very sensitively on the model you choose. (Which is why this question is not going to get a real answer!)
-
SnapPea won't recognise torus knots, nor non-prime knots, or knots whose complements have incompressible tori. So if your random knots have a lot of prime summands (which is common to a lot of random knot generators), SnapPea will choke most of the time. – Ryan Budney Nov 10 2009 at 3:16
Burton, Rubinstein, Jaco and Tillmann are getting pretty close to having efficient algorithms for recognising such knots. – Ryan Budney Nov 10 2009 at 3:19
Sorry to disagree, but if the JSJ decomposition has a hyperbolic piece then SnapPea will present the splitting torus in the "splitting window". It does this by tracking the degeneration of the tetrahedra, and so finds the quad type (the speed of degeneration tells you the number of quads!) SnapPea can also sometimes guess at SL(2,R) representations (ie detect Seifert fibred spaces). – Sam Nead Nov 10 2009 at 3:24
Re: Burton, Rubinstein, Jaco, Tillmann. I assume that their techniques will still be at least exponential time. SnapPea is not an algorithm, but it has the virtue of being fast! There are ways to kill SnapPea (eg feed it surface bundles where the monodromy is a high power and then ask it for a Dirichlet domain), but it is pretty hard to kill SnapPea with a hand-drawn knot... – Sam Nead Nov 10 2009 at 3:30
I'm not sure how you're disagreeing with me. If your knot is a connect-sum of n trefoils, n=0,1,2,3,... does SnapPea ever say anything informative? – Ryan Budney Nov 10 2009 at 3:41
show 9 more comments
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9513573050498962, "perplexity_flag": "middle"}
|
http://en.wikipedia.org/wiki/Multiple_comparisons
|
# Multiple comparisons
In statistics, the multiple comparisons, multiplicity or multiple testing problem occurs when one considers a set of statistical inferences simultaneously[1] or infers a subset of parameters selected based on the observed values.[2] Errors in inference, including confidence intervals that fail to include their corresponding population parameters or hypothesis tests that incorrectly reject the null hypothesis are more likely to occur when one considers the set as a whole. Several statistical techniques have been developed to prevent this from happening, allowing significance levels for single and multiple comparisons to be directly compared. These techniques generally require a stronger level of evidence to be observed in order for an individual comparison to be deemed "significant", so as to compensate for the number of inferences being made.
## History
The interest in problem of multiple comparisons began in the 1950s with the work of Tukey and Scheffé. The interest increased for about two decades and then came a decline. Some even thought that the field was dead. However, the field was actually alive and more and more ideas were presented as the answer to the needs of medical statistics. The new methods and procedures came out: Closed testing procedure (Marcus et al., 1976), Holm–Bonferroni method (1979). Later, in the 1980s, the issue of multiple comparisons came back. Books were published Hochberg and Tamhane (1987), Westfall and Young (1993), and Hsu (1996). At 1995 the work on False discovery rate and other new ideas had begun. In 1996 the first conference on multiple comparisons took place in Israel. This meeting of researchers was followed by such conferences around the world: Berlin (2000), Bethesda (2002), Shanghai (2005), Vienna (2007), and Tokyo (2009). All these reflect an acceleration of increase of interest in multiple comparisons.[3]
## The problem
The term "comparisons" in multiple comparisons typically refers to comparisons of two groups, such as a treatment group and a control group. "Multiple comparisons" arise when a statistical analysis encompasses a number of formal comparisons, with the presumption that attention will focus on the strongest differences among all comparisons that are made. Failure to compensate for multiple comparisons can have important real-world consequences, as illustrated by the following examples.
• Suppose the treatment is a new way of teaching writing to students, and the control is the standard way of teaching writing. Students in the two groups can be compared in terms of grammar, spelling, organization, content, and so on. As more attributes are compared, it becomes more likely that the treatment and control groups will appear to differ on at least one attribute by random chance alone.
• Suppose we consider the efficacy of a drug in terms of the reduction of any one of a number of disease symptoms. As more symptoms are considered, it becomes more likely that the drug will appear to be an improvement over existing drugs in terms of at least one symptom.
• Suppose we consider the safety of a drug in terms of the occurrences of different types of side effects. As more types of side effects are considered, it becomes more likely that the new drug will appear to be less safe than existing drugs in terms of at least one side effect.
In all three examples, as the number of comparisons increases, it becomes more likely that the groups being compared will appear to differ in terms of at least one attribute. Our confidence that a result will generalize to independent data should generally be weaker if it is observed as part of an analysis that involves multiple comparisons, rather than an analysis that involves only a single comparison.
For example, if one test is performed at the 5% level, there is only a 5% chance of incorrectly rejecting the null hypothesis if the null hypothesis is true. However, for 100 tests where all null hypotheses are true, the expected number of incorrect rejections is 5. If the tests are independent, the probability of at least one incorrect rejection is 99.4%. These errors are called false positives or Type I errors.
The problem also occurs for confidence intervals, note that a single confidence interval with 95% coverage probability level will likely contain the population parameter it is meant to contain, i.e. in the long run 95% of confidence intervals built in that way will contain the true population parameter. However, if one considers 100 confidence intervals simultaneously, with coverage probability 0.95 each, it is highly likely that at least one interval will not contain its population parameter. The expected number of such non-covering intervals is 5, and if the intervals are independent, the probability that at least one interval does not contain the population parameter is 99.4%.
Techniques have been developed to control the false positive error rate associated with performing multiple statistical tests. Similarly, techniques have been developed to adjust confidence intervals so that the probability of at least one of the intervals not covering its target value is controlled.
### Classification of m hypothesis tests
It has been suggested that portions of be moved or incorporated into this section. (Discuss)
It has been suggested that portions of be moved or incorporated into this section. (Discuss)
It has been suggested that portions of be moved or incorporated into this section. (Discuss)
The following table gives a number of errors committed when testing $m$ null hypotheses. It defines some random variables that are related to the $m$ hypothesis tests.
Null hypothesis is True (H0) Alternative hypothesis is True (H1) Total
Declared significant $V$ $S$ $R$
Declared non-significant $U$ $T$ $m - R$
Total $m_0$ $m - m_0$ $m$
• $m$ is the total number hypotheses tested
• $m_0$ is the number of true null hypotheses
• $m - m_0$ is the number of true alternative hypotheses
• $V$ is the number of false positives (Type I error) (also called "false discoveries")
• $S$ is the number of true positives (also called "true discoveries")
• $T$ is the number of false negatives (Type II error)
• $U$ is the number of true negatives
• $R$ is the number of rejected null hypotheses (also called "discoveries")
• In $m$ hypothesis tests of which $m_0$ are true null hypotheses, $R$ is an observable random variable, and $S$, $T$, $U$, and $V$ are unobservable random variables.
## Example: Flipping coins
For example, one might declare that a coin was biased if in 10 flips it landed heads at least 9 times. Indeed, if one assumes as a null hypothesis that the coin is fair, then the probability that a fair coin would come up heads at least 9 out of 10 times is (10 + 1) × (1/2)10 = 0.0107. This is relatively unlikely, and under statistical criteria such as p-value < 0.05, one would declare that the null hypothesis should be rejected — i.e., the coin is unfair.
A multiple-comparisons problem arises if one wanted to use this test (which is appropriate for testing the fairness of a single coin), to test the fairness of many coins. Imagine if one was to test 100 fair coins by this method. Given that the probability of a fair coin coming up 9 or 10 heads in 10 flips is 0.0107, one would expect that in flipping 100 fair coins ten times each, to see a particular (i.e., pre-selected) coin come up heads 9 or 10 times would still be very unlikely, but seeing any coin behave that way, without concern for which one, would be more likely than not. Precisely, the likelihood that all 100 fair coins are identified as fair by this criterion is (1 − 0.0107)100 ≈ 0.34. Therefore the application of our single-test coin-fairness criterion to multiple comparisons would be more likely to falsely identify at least one fair coin as unfair.
## What can be done
For hypothesis testing, the problem of multiple comparisons (also known as the multiple testing problem) results from the increase in type I error that occurs when statistical tests are used repeatedly. If n independent comparisons are performed, the experiment-wide significance level $\bar{\alpha}$, also termed FWER for familywise error rate, is given by
$\bar{\alpha} = 1-\left( 1-\alpha_\mathrm{\{per\ comparison\}} \right)^n$.
Hence, unless the tests are perfectly dependent, α increases as the number of comparisons increases. If we do not assume that the comparisons are independent, then we can still say:
$\bar{\alpha} \le n \cdot \alpha_\mathrm{\{per\ comparison\}},$
which follows from Boole's inequality. Example: $0.2649=1-\left( 1-.05 \right)^6 \le .05 \times 6=0.3$
There are different ways to assure that the familywise error rate is at most $\bar{\alpha}$. The most conservative, but free of independency and distribution assumptions method, is known as the Bonferroni correction $\alpha_\mathrm{\{per\ comparison\}}=\bar{\alpha}/n$. A more sensitive correction can be obtained by solving the equation for the familywise error rate of $n$ independent comparisons for $\alpha_\mathrm{\{per\ comparison\}}$. This yields $\alpha_\mathrm{\{per\ comparison\}}=1-{\left(1-\bar{\alpha}\right)}^{\frac{1}{n}}$, which is known as the Šidák correction. Another procedure is the Holm–Bonferroni method which uniformly delivers more power than the simple Bonferroni correction, by testing only the most extreme p value ($i=1$) against the strictest criterion, and the others ($i>1$) against progressively less strict criteria.[4] $\alpha_\mathrm{\{per\ comparison\}}=\bar{\alpha}/(n-i+1)$.
## Methods
Multiple testing correction refers to re-calculating probabilities obtained from a statistical test which was repeated multiple times. In order to retain a prescribed familywise error rate α in an analysis involving more than one comparison, the error rate for each comparison must be more stringent than α. Boole's inequality implies that if each test is performed to have type I error rate α/n, the total error rate will not exceed α. This is called the Bonferroni correction, and is one of the most commonly used approaches for multiple comparisons.
In some situations, the Bonferroni correction is substantially conservative, i.e., the actual familywise error rate is much less than the prescribed level α. This occurs when the test statistics are highly dependent (in the extreme case where the tests are perfectly dependent, the familywise error rate with no multiple comparisons adjustment and the per-test error rates are identical). For example, in fMRI analysis,[5][6] tests are done on over 100000 voxels in the brain. The Bonferroni method would require p-values to be smaller than .05/100000 to declare significance. Since adjacent voxels tend to be highly correlated, this threshold is generally too stringent.
Because simple techniques such as the Bonferroni method can be too conservative, there has been a great deal of attention paid to developing better techniques, such that the overall rate of false positives can be maintained without inflating the rate of false negatives unnecessarily. Such methods can be divided into general categories:
• Methods where total alpha can be proved to never exceed 0.05 (or some other chosen value) under any conditions. These methods provide "strong" control against Type I error, in all conditions including a partially correct null hypothesis.
• Methods where total alpha can be proved not to exceed 0.05 except under certain defined conditions.
• Methods which rely on an omnibus test before proceeding to multiple comparisons. Typically these methods require a significant ANOVA/Tukey's range test before proceeding to multiple comparisons. These methods have "weak" control of Type I error.
• Empirical methods, which control the proportion of Type I errors adaptively, utilizing correlation and distribution characteristics of the observed data.
The advent of computerized resampling methods, such as bootstrapping and Monte Carlo simulations, has given rise to many techniques in the latter category. In some cases where exhaustive permutation resampling is performed, these tests provide exact, strong control of Type I error rates; in other cases, such as bootstrap sampling, they provide only approximate control.
## Post-hoc testing of ANOVAs
Multiple comparison procedures are commonly used in an analysis of variance after obtaining a significant omnibus test result, like the ANOVA F-test. The significant ANOVA result suggests rejecting the global null hypothesis H0 that the means are the same across the groups being compared. Multiple comparison procedures are then used to determine which means differ. In a one-way ANOVA involving K group means, there are K(K − 1)/2 pairwise comparisons.
A number of methods have been proposed for this problem, some of which are:
Single-step procedures
• Tukey–Kramer method (Tukey's HSD) (1951)
• Scheffe method (1953)
Multi-step procedures based on Studentized range statistic
• Duncan's new multiple range test (1955)
• The Nemenyi test is similar to Tukey's range test in ANOVA.
• The Bonferroni–Dunn test allows comparisons, controlling the familywise error rate.[vague]
• Student Newman-Keuls post-hoc analysis
• Dunnett's test (1955) for comparison of number of treatments to a single control group.
Choosing the most appropriate multiple-comparison procedure for your specific situation is not easy. Many tests are available, and they differ in a number of ways.[7]
For example,if the variances of the groups being compared are similar, the Tukey–Kramer method is generally viewed as performing optimally or near-optimally in a broad variety of circumstances.[8] The situation where the variance of the groups being compared differ is more complex, and different methods perform well in different circumstances.
The Kruskal–Wallis test is the non-parametric alternative to ANOVA. Multiple comparisons can be done using pairwise comparisons (for example using Wilcoxon rank sum tests) and using a correction to determine if the post-hoc tests are significant (for example a Bonferroni correction).
## Large-scale multiple testing
Traditional methods for multiple comparisons adjustments focus on correcting for modest numbers of comparisons, often in an analysis of variance. A different set of techniques have been developed for "large-scale multiple testing", in which thousands or even greater numbers of tests are performed. For example, in genomics, when using technologies such as microarrays, expression levels of tens of thousands of genes can be measured, and genotypes for millions of genetic markers can be measured. Particularly in the field of genetic association studies, there has been a serious problem with non-replication — a result being strongly statistically significant in one study but failing to be replicated in a follow-up study. Such non-replication can have many causes, but it is widely considered that failure to fully account for the consequences of making multiple comparisons is one of the causes.
In different branches of science, multiple testing is handled in different ways. It has been argued that if statistical tests are only performed when there is a strong basis for expecting the result to be true, multiple comparisons adjustments are not necessary.[9] It has also been argued that use of multiple testing corrections is an inefficient way to perform empirical research, since multiple testing adjustments control false positives at the potential expense of many more false negatives. On the other hand, it has been argued that advances in measurement and information technology have made it far easier to generate large datasets for exploratory analysis, often leading to the testing of large numbers of hypotheses with no prior basis for expecting many of the hypotheses to be true. In this situation, very high false positive rates are expected unless multiple comparisons adjustments are made.[10]
For large-scale testing problems where the goal is to provide definitive results, the familywise error rate remains the most accepted parameter for ascribing significance levels to statistical tests. Alternatively, if a study is viewed as exploratory, or if significant results can be easily re-tested in an independent study, control of the false discovery rate (FDR)[11][12][13] is often preferred. The FDR, defined as the expected proportion of false positives among all significant tests, allows researchers to identify a set of "candidate positives", of which a high proportion are likely to be true. The false positives within the candidate set can then be identified in a follow-up study.
### Assessing whether any alternative hypotheses are true
A normal quantile plot for a simulated set of test statistics that have been standardized to be Z-scores under the null hypothesis. The departure of the upper tail of the distribution from the expected trend along the diagonal is due to the presence of substantially more large test statistic values than would be expected if all null hypotheses were true. The red point corresponds to the fourth largest observed test statistic, which is 3.13, versus an expected value of 2.06. The blue point corresponds to the fifth smallest test statistic, which is -1.75, versus an expected value of -1.96. The graph suggests that it is unlikely that all the null hypotheses are true, and that most or all instances of a true alternative hypothesis result from deviations in the positive direction.
A basic question faced at the outset of analyzing a large set of testing results is whether there is evidence that any of the alternative hypotheses are true. One simple meta-test that can be applied when it is assumed that the tests are independent of each other is to use the Poisson distribution as a model for the number of significant results at a given level α that would be found when all null hypotheses are true. If the observed number of positives is substantially greater than what should be expected, this suggests that there are likely to be some true positives among the significant results. For example, if 1000 independent tests are performed, each at level α = 0.05, we expect 50 significant tests to occur when all null hypotheses are true. Based on the Poisson distribution with mean 50, the probability of observing more than 61 significant tests is less than 0.05, so if we observe more than 61 significant results, it is very likely that some of them correspond to situations where the alternative hypothesis holds. A drawback of this approach is that it over-states the evidence that some of the alternative hypotheses are true when the test statistics are positively correlated, which commonly occurs in practice.[citation needed]
Another common approach that can be used in situations where the test statistics can be standardized to Z-scores is to make a normal quantile plot of the test statistics. If the observed quantiles are markedly more dispersed than the normal quantiles, this suggests that some of the significant results may be true positives.[citation needed]
## See also
Key concepts
General methods of alpha adjustment for multiple comparisons
## References
1. Miller, R.G. (1981). Simultaneous Statistical Inference 2nd Ed. Springer Verlag New York. ISBN ISBN 0-387-90548-0.
2. Benjamini, Y. (2010). "Simultaneous and selective inference: Current successes and future challenges". Biometrical Journal 52 (6): 708–721. doi:10.1002/bimj.200900299. PMID 21154895.
3. Benjamini, Y. (2010). "Simultaneous and selective inference: Current successes and future challenges". Biom. J. 52: 708–721. doi:10.1002/bimj.200900299. PMID 21154895.
4. Aickin, M; Gensler, H. "Adjusting for multiple testing when reporting research results: the Bonferroni vs Holm methods". Am J Public Health 1996 (86): 726–728.
5. Logan, B. R.; Rowe, D. B. (2004). "An evaluation of thresholding techniques in fMRI analysis". NeuroImage 22 (1): 95–108. doi:10.1016/j.neuroimage.2003.12.047. PMID 15110000.
6. Logan, B. R.; Geliazkova, M. P.; Rowe, D. B. (2008). "An evaluation of spatial thresholding techniques in fMRI analysis". Human Brain Mapping 29 (12): 1379–1389. doi:10.1002/hbm.20471. PMID 18064589.
7. Howell (2002, Chapter 12: Multiple comparisons among treatment means)
8. Stoline, Michael R. (1981). "The Status of Multiple Comparisons: Simultaneous Estimation of All Pairwise Comparisons in One-Way ANOVA Designs". The American Statistician (American Statistical Association) 35 (3): 134–141. doi:10.2307/2683979. JSTOR 2683979. More than one of `|number=` and `|issue=` specified (help)
9. Rothman, Kenneth J. (1990). "No Adjustments Are Needed for Multiple Comparisons". Epidemiology (Lippincott Williams & Wilkins) 1 (1): 43–46. doi:10.1097/00001648-199001000-00010. JSTOR 20065622. PMID 2081237. More than one of `|number=` and `|issue=` specified (help)
10. Ioannidis, JPA (2005). "Why Most Published Research Findings Are False". PLoS Med 2 (8): e124. doi:10.1371/journal.pmed.0020124. PMC 1182327. PMID 16060722. More than one of `|number=` and `|issue=` specified (help)
11. Benjamini, Yoav; Hochberg, Yosef (1995). "Controlling the false discovery rate: a practical and powerful approach to multiple testing". 57 (1): 125–133. JSTOR 2346101. More than one of `|number=` and `|issue=` specified (help)
12. Storey, JD; Tibshirani, Robert (2003). "Statistical significance for genome-wide studies". PNAS 100 (16): 9440–9445. doi:10.1073/pnas.1530509100. JSTOR 3144228. PMC 170937. PMID 12883005. More than one of `|number=` and `|issue=` specified (help)
13. Efron, Bradley; Tibshirani, Robert; Storey, John D.; Tusher,Virginia (2001). "Empirical Bayes analysis of a microarray experiment". 96 (456): 1151–1160. doi:10.1198/016214501753382129. JSTOR 3085878.
## Further reading
• F. Betz, T. Hothorn, P. Westfall (2010), Multiple Comparisons Using R, CRC Press
• S. Dudoit and M. J. van der Laan (2008), Multiple Testing Procedures with Application to Genomics, Springer
• P. H. Westfall and S. S. Young (1993), Resampling-based Multiple Testing: Examples and Methods for p-Value Adjustment, Wiley
• B. Phipson and G. K. Smyth (2010), Permutation P-values Should Never Be Zero: Calculating Exact P-values when Permutations are Randomly Drawn, Statistical Applications in Genetics and Molecular Biology Vol.. 9 Iss. 1, Article 39, doi:10.2202/1544-6155.1585
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 38, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8983132839202881, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/calculus/126159-evaluate-limit.html
|
# Thread:
1. ## Evaluate the limit
Evaluate the limit of 4-x/2-√x as x approaches 4
I believe that this function is in indeterminate form. I just cant seem to rewrite the function so that i can find the limit ( if it exists).
If anyone can explain how to evaluate this limit, I would be very grateful.
2. You should use parentheses to write this expression. It could mean at least 4 different things, but as stated it means $4-\frac x 2 - \sqrt x$.
3. Originally Posted by mj226
Evaluate the limit of 4-x/2-√x as x approaches 4
I believe that this function is in indeterminate form. I just cant seem to rewrite the function so that i can find the limit ( if it exists).
If anyone can explain how to evaluate this limit, I would be very grateful.
Multiply by $\frac{2+\sqrt{x}}{2+\sqrt{x}}$
$\lim_{x->4}\frac{4-x}{2-\sqrt{x}}\left(\frac{2+\sqrt{x}}{2+\sqrt{x}}\right )=\lim_{x->4}\frac{(4-x)(2+\sqrt{x})}{4-x}$
$=\lim_{x->4}(2+\sqrt{x})$
$=4$
4. Sorry about the parentheses. It should have been (4-x)/(2-x^1/2)
Oh. I see. Thank You.
5. Originally Posted by mj226
Sorry about the parentheses. It should have been (4-x)/(2-x^1/2)
Oh. I see. Thank You.
There is another one.
$4-x=(2)^2-(\sqrt{x})^2=(2-\sqrt{x})(2+\sqrt{x})$
Bad guys cancel, and you are done.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9328647255897522, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-applied-math/145356-getting-around-interacting-sequences-print.html
|
# Getting around interacting sequences
Printable View
• May 18th 2010, 11:46 AM
Geo877
Getting around interacting sequences
The problem I'm currently facing is how to produce and expression to describe the path of a particle in order that it's position can be calculated with a given time. So similar to your suvat equations of motion, this is all fine and good until you start trying to derrive equations where the next x coordinate, for example, is dependent somehow on the last y an . The same is true for y. One instance of this is gravity between two bodies, the displacement on x and y of a body is dependent on the distance between the two, which sure enough can only be calculated from the previous x,y displacement and if i try to rearrange this I end up with an infinite cycle of substitution. Mandelbrot and attractors springs to mind.
This seems to shows up everywhere in physics and I've not yet got my head round it. Effectively you have two interacting sequences, I've diluted this idea into a simple sequence where p and q are constants and a and b begin at 0:
$<br /> a_{n+1} = a_{n} * b_{n} +p<br />$
$<br /> b_{n+1} = 2*a_{n} * (b_{n} - 3) +q<br />$
I've graphed this in quickly flash, excuse the scaleless graph! Fork it and add scales if you into as3 flash on 2010-5-19 | wonderfl build flash online
so if you follow that link you'll see a spiral slowly changing before tightening up and apparently exploding. What happening is I'm iterating over that sequence 500 times and plotting each point (a = x, b=y), then once it's been plotted it's displayed, q is increased by 0.001 and it is calculated and replotted. This is happening 30 times a second so you can see a gradual change. Is this an attractor? I'm certainly new to all this stuff
So my question is how can work around this? Say I wanted to derive and expression for the x & y of an paritcle if its is being attracted by a static body? Is this possible without iterating over each value?
Thanks, apologies for the long post!
• May 22nd 2010, 10:10 PM
CaptainBlack
Quote:
Originally Posted by Geo877
The problem I'm currently facing is how to produce and expression to describe the path of a particle in order that it's position can be calculated with a given time. So similar to your suvat equations of motion, this is all fine and good until you start trying to derrive equations where the next x coordinate, for example, is dependent somehow on the last y an . The same is true for y. One instance of this is gravity between two bodies, the displacement on x and y of a body is dependent on the distance between the two, which sure enough can only be calculated from the previous x,y displacement and if i try to rearrange this I end up with an infinite cycle of substitution. Mandelbrot and attractors springs to mind.
This seems to shows up everywhere in physics and I've not yet got my head round it. Effectively you have two interacting sequences, I've diluted this idea into a simple sequence where p and q are constants and a and b begin at 0:
$<br /> a_{n+1} = a_{n} * b_{n} +p<br />$
$<br /> b_{n+1} = 2*a_{n} * (b_{n} - 3) +q<br />$
I've graphed this in quickly flash, excuse the scaleless graph! Fork it and add scales if you into as3 flash on 2010-5-19 | wonderfl build flash online
so if you follow that link you'll see a spiral slowly changing before tightening up and apparently exploding. What happening is I'm iterating over that sequence 500 times and plotting each point (a = x, b=y), then once it's been plotted it's displayed, q is increased by 0.001 and it is calculated and replotted. This is happening 30 times a second so you can see a gradual change. Is this an attractor? I'm certainly new to all this stuff
So my question is how can work around this? Say I wanted to derive and expression for the x & y of an paritcle if its is being attracted by a static body? Is this possible without iterating over each value?
Thanks, apologies for the long post!
With a bit of manipulation you can find a second order non-linear recurrence for $a_n$, if my algebra is error free it is:
$a_{n+1}=2a_n^2-6a_na_{n-1}+a_a(q-2p)+p$
I am not familiar with solving such recurrences, but there may be a method of doing so.
CB
All times are GMT -8. The time now is 02:44 PM.
Copyright © 2005-2013 Math Help Forum. All rights reserved.
Copyright © 2005-2013 Math Help Forum. All rights reserved.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9486462473869324, "perplexity_flag": "middle"}
|
http://telescoper.wordpress.com/2011/06/22/cosmic-clumpiness-conundra/
|
# In the Dark
A blog about the Universe, and all that surrounds it
## Cosmic Clumpiness Conundra
Well there’s a coincidence. I was just thinking of doing a post about cosmological homogeneity, spurred on by a discussion at the workshop I attended in Copenhagen a couple of weeks ago, when suddenly I’m presented with a topical hook to hang it on.
New Scientist has just carried a report about a paper by Shaun Thomas and colleagues from University College London the abstract of which reads
We observe a large excess of power in the statistical clustering of luminous red galaxies in the photometric SDSS galaxy sample called MegaZ DR7. This is seen over the lowest multipoles in the angular power spectra Cℓ in four equally spaced redshift bins between $0.4 \leq z \leq 0.65$. However, it is most prominent in the highest redshift band at $z\sim 4\sigma$ and it emerges at an effective scale $k \sim 0.01 h{\rm Mpc}^{-1}$. Given that MegaZ DR7 is the largest cosmic volume galaxy survey to date ($3.3({\rm Gpc} h^{-1})^3$) this implies an anomaly on the largest physical scales probed by galaxies. Alternatively, this signature could be a consequence of it appearing at the most systematically susceptible redshift. There are several explanations for this excess power that range from systematics to new physics. We test the survey, data, and excess power, as well as possible origins.
To paraphrase, it means that the distribution of galaxies in the survey they study is clumpier than expected on very large scales. In fact the level of fluctuation is about a factor two higher than expected on the basis of the standard cosmological model. This shows that either there’s something wrong with the standard cosmological model or there’s something wrong with the survey. Being a skeptic at heart, I’d bet on the latter if I had to put my money somewhere, because this survey involves photometric determinations of redshifts rather than the more accurate and reliable spectroscopic variety. I won’t be getting too excited about this result unless and until it is confirmed with a full spectroscopic survey. But that’s not to say it isn’t an interesting result.
For one thing it keeps alive a debate about whether, and at what scale, the Universe is homogeneous. The standard cosmological model is based on the Cosmological Principle, which asserts that the Universe is, in a broad-brush sense, homogeneous (is the same in every place) and isotropic (looks the same in all directions). But the question that has troubled cosmologists for many years is what is meant by large scales? How broad does the broad brush have to be?
At our meeting a few weeks ago, Subir Sarkar from Oxford pointed out that the evidence for cosmological homogeneity isn’t as compelling as most people assume. I blogged some time ago about an alternative idea, that the Universe might have structure on all scales, as would be the case if it were described in terms of a fractal set characterized by a fractal dimension $D$. In a fractal set, the mean number of neighbours of a given galaxy within a spherical volume of radius $R$ is proportional to $R^D$. If galaxies are distributed uniformly (homogeneously) then $D = 3$, as the number of neighbours simply depends on the volume of the sphere, i.e. as $R^3$, and the average number-density of galaxies. A value of $D < 3$ indicates that the galaxies do not fill space in a homogeneous fashion: $D = 1$, for example, would indicate that galaxies were distributed in roughly linear structures (filaments); the mass of material distributed along a filament enclosed within a sphere grows linear with the radius of the sphere, i.e. as $R^1$, not as its volume; galaxies distributed in sheets would have $D=2$, and so on.
The discussion of a fractal universe is one I’m overdue to return to. In my previous post I left the story as it stood about 15 years ago, and there have been numerous developments since then. I will do a “Part 2″ to that post before long, but I’m waiting for some results I’ve heard about informally, but which aren’t yet published, before filling in the more recent developments.
We know that $D \simeq 1.2$ on small scales (in cosmological terms, still several Megaparsecs), but the evidence for a turnover to $D=3$ is not so strong. The point is, however, at what scale would we say that homogeneity is reached. Not when $D=3$ exactly, because there will always be statistical fluctuations; see below. What scale, then? Where $D=2.9$? $D=2.99$?
What I’m trying to say is that much of the discussion of this issue involves the phrase “scale of homogeneity” when that is a poorly defined concept. There is no such thing as “the scale of homogeneity”, just a whole host of quantities that vary with scale in a way that may or may not approach the value expected in a homogeneous universe.
It’s even more complicated than that, actually. When we cosmologists adopt the Cosmological Principle we apply it not to the distribution of galaxies in space, but to space itself. We assume that space is homogeneous so that its geometry can be described by the Friedmann-Lemaitre-Robertson-Walker metric.
According to Einstein’s theory of general relativity, clumps in the matter distribution would cause distortions in the metric which are roughly related to fluctuations in the Newtonian gravitational potential $\delta\Phi$ by $\delta\Phi/c^2 \sim \left(\lambda/ct \right)^{2} \left(\delta \rho/\rho\right)$, give or take a factor of a few, so that a large fluctuation in the density of matter wouldn’t necessarily cause a large fluctuation of the metric unless it were on a scale $\lambda$ reasonably large relative to the cosmological horizon $\sim ct$. Galaxies correspond to a large $\delta \rho/\rho \sim 10^6$ but don’t violate the Cosmological Principle because they are too small to perturb the background metric significantly. Even the big clumps found by the UCL team only correspond to a small variation in the metric. The issue with these, therefore, is not so much that they threaten the applicability of the Cosmological Principle, but that they seem to suggest structure might have grown in a different way to that usually supposed.
The problem is that we can’t measure the gravitational potential on these scales directly so our tests are indirect. Counting galaxies is relatively crude because we don’t even know how well galaxies trace the underlying mass distribution.
An alternative way of doing this is to use not the positions of galaxies, but their velocities (usually called peculiar motions). These deviations from a pure Hubble flow are caused by lumps of matter pulling on the galaxies; the more lumpy the Universe is, the larger the velocities are and the larger the lumps are the more coherent the flow becomes. On small scales galaxies whizz around at speeds of hundreds of kilometres per second relative to each other, but averaged over larger and larger volumes the bulk flow should get smaller and smaller, eventually coming to zero in a frame in which the Universe is exactly homogeneous and isotropic.
Roughly speaking the bulk flow $v$ should relate to the metric fluctuation as approximately $\delta \Phi/c^2 \sim \left(\lambda/ct \right) \left(v/c\right)$.
It has been claimed that some observations suggest the existence of a dark flow which, if true, would challenge the reliability of the standard cosmological framework, but these results are controversial and are yet to be independently confirmed.
But suppose you could measure the net flow of matter in spheres of increasing size. At what scale would you claim homogeneity is reached? Not when the flow is exactly zero, as there will always be fluctuations, but exactly how small?
The same goes for all the other possible criteria we have for judging cosmological homogeneity. We are free to choose the point where we say the level of inhomogeneity is sufficiently small to be satisfactory.
In fact, the standard cosmology (or at least the simplest version of it) has the peculiar property that it doesn’t ever reach homogeneity anyway! If the spectrum of primordial perturbations is scale-free, as is usually supposed, then the metric fluctuations don’t vary with scale at all. In fact, they’re fixed at a level of $\delta \Phi/c^2 \sim 10^{-5}$.
The fluctuations are small, so the FLRW metric is pretty accurate, but don’t get smaller with increasing scale, so there is no point when it’s exactly true. So lets have no more of “the scale of homogeneity” as if that were a meaningful phrase. Let’s keep the discussion to the behaviour of suitably defined measurable quantities and how they vary with scale. You know, like real scientists do.
### Like this:
This entry was posted on June 22, 2011 at 4:23 am and is filed under The Universe and Stuff with tags cosmological principle, Cosmology, Dark Flow, Filipe Abdala, fractal universe, galaxies, galaxy clustering, Harrison-Zel'dovich spectrum, homogeneity, MegaZ, Ofer Lahav, Robertson-Walker metric, SDSS, Shaun Thomas, Sloan Digital Sky Survey. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site.
### 35 Responses to “Cosmic Clumpiness Conundra”
1. Anton Garrett Says:
June 22, 2011 at 11:00 am
Yes, an estimate of the scale of homogeneity is a sort-of probability of a probability – a standard no-no in the only sensible view of probabilistics.
Fascinating! Are there enough galaxies in the universe for talk of fractal distributions to be meaningful?
• Anton Garrett Says:
June 22, 2011 at 7:49 pm
Wikipedia reckons that there are upward of 170 * (10**9) galaxies. In 3D this means that, if the universe were a cube, there are 5500 galaxies along a side. Given that fractals are in geometric progression of diminishing size of the repeating structure, it’s not clear to me that there are enough galaxies to meaningfully test a fractal vs a nonfractal distribution; if the geometric progression has common ratio = 10 then you can get only 3 levels of repetition, which I would not say is enough to meet the definition of a fractal.
• telescoper Says:
June 22, 2011 at 8:43 pm
I tried to post a reply earlier but my connection failed. See the comment below.
Also, we don’t see all the galaxies, at large distances only the brighter ones, introducing a selection bias, and survey volumes usually have a complicated geometry, highly flattened or long like a pencil. These make things even harder.
I agree that it’s hard to defend an argument that the Universe is fractal, but that’s a useful model we can use to ask the question whether what we see is consistent with large-scale homogeneity or with an inhomogeneous alternative (i.e. the fractal).
2. “The issue with these, therefore, is not so much that they threaten the applicability of the Cosmological Principle, but that they seem to suggest structure might have grown in a different way to that usually supposed.”
This is an important distinction. Even if this result (or some other result) convinces people that the standard model of structure formation is wrong, it doesn’t necessarily imply that other things about the standard model are wrong (age and composition of the universe, accelerated expansion now (but not at the beginning), formation of the elements etc.
Of course, the homogeneity of the CMB casts some light (no pun intended) on the question of homogeneity. In fact, one interesting problem in cosmology is the horizon problem: why is the universe (e.g. as reflected in the CMB) the same in every direction, implying that regions which were not causally connected are similar? Inflation can provide an answer, but inflation has not been proved. In other words, in general the problem is not the observed inhomogeneity, but rather the observed homogeneity.
3. telescoper Says:
June 22, 2011 at 8:39 pm
The Universe may well be infinite, but if it began a finite time in the past we can’t observe more than a bit of it. We do now have surveys of millions of galaxies and they suggest that clustering is roughly self-similar over a certain range of scales, but it’s only approximate and the dynamical range is relatively small in logarithmic terms, i.e. about two orders of magnitude.
• Anton Garrett Says:
June 22, 2011 at 9:58 pm
If the universe began a finite time T ago then how can it be infinite? Surely it can’t be larger than cT?
I suspected that my fractal reasoning above would run onto the rocks of astrophysical reality, as I am not an astrophysicist, but it’s always interesting to see how and learn a liltle more. I am expecting the same here…
4. >If the universe began a finite time T ago then how can it be infinite? Surely it can’t be larger than cT?
The Universe is infinite, but the observable Universe is finite (if the cosmological model is correct and we believe we have established it to be spatially flat)
• telescoper Says:
June 22, 2011 at 10:34 pm
We don’t know if the Universe is infinite or not. Even if the Universe is open or flat it could still be finite, but with a strange topology.
Anton: it’s quite possible for the Universe to be infinite in spatial extent but finite in past duration. The part we can see grows with time as ct but there could be an infinite universe beyond our horizon. We can’t see it, though.
• Anton Garrett Says:
June 22, 2011 at 11:41 pm
Peter: in that case 2 questions, if you are willing:
1. If the universe is spatially infinite yet not older than epoch T then it can’t have started from a point, is that right? (Since \infty – cT > 0)
2. Can we in principle infer any details of the parts we cannot see?
5. >1. If the universe is spatially infinite yet not older than epoch T then it can’t have started from a point, is that right? (Since \infty – cT > 0)
If the current cosmological model is correct and it is spatially flat, it was always infinite, it never was a “point”.
• Anton Garrett Says:
June 23, 2011 at 12:23 am
Big Bang is dead then? I’m way behind the times.
6. >Big Bang is dead then? I’m way behind the times.
No – it isn’t – what I said is the standard interpretation of the Big Bang. The BB is seriously mis-portrayed in the public arena.
• Anton Garrett Says:
June 23, 2011 at 1:10 am
Cusp: I’ve been a postdoc in theoretical physics depts but am not an astrophysicist; that’s my level. If the Big Bang is a goer then surely the universe starts from a point and can be no larger at a time T after that than cT?
7. If the universe is spatially flat, it is infinite in extent and always has been – there was no point in its history did it go from finite to infinite. It was born infinite.
The observable universe, the part from which we could have received light from, is a finite part of an infinite universe. It started “as a point”.
The Universe and the Observable Universe often get lumped into one, when they are quite different things.
I had to explain this to a quantum physicist the other day – he thought it was cool when he finally got it.
8. telescoper Says:
June 23, 2011 at 8:06 am
Anton,
It might help – I find it does in such things – to think of it all back-to-front. Think about an infinite sheet of graph paper representing a flat space, with the grid representing the distribution of galaxies. Now gradually reduce the scale: the squares get smaller and smaller. Eventually the scale becomes arbitrarily small so that everything is densely packed. But the sheet is still infinite.
It’s probably also worth saying that the Universe could be finite, like the 3D anaologue of the surface of a sphere, but its radius could be much larger than ct.
You ask whether one can make inferences about what happens outside our horizon – well, people do. However, if the Cosmological Principle holds true then what’s outside our horizon is pretty much the same as what’s inside!
• Anton Garrett Says:
June 23, 2011 at 9:32 am
Thanks Peter, that’s a step forward, but if the graph paper is sometihng physical rather than merely a coordinate system of our choosing, what IS it? And what caused abandonment of the simple Big Bang? (And how much would Fred Hoyle be laughing?)
• telescoper Says:
June 23, 2011 at 10:19 am
Also, it’s not quite “a coordinate system of our choosing”. It’s chosen so that the distribution of matter looks homogeneous and isotropic in the coordinate frame. This also gives a preferred time coordinate – we can slice space-time in such a way that surfaces of constant density are synchronous.
• telescoper Says:
June 23, 2011 at 9:35 am
There’s no “abandonment of the simple Big Bang”. The picture we’re discussing is precisely the same as that presented by Friedmann and Lemaitre in the 20s.
• I agree with Peter – What we have been saying *is* the standard Big Bang. What needs to be abandoned is the misconceptions about it. I’m sure Fred Hoyle knew this.
You might want to start by reading;
Expanding Confusion: common misconceptions of cosmological horizons and the superluminal expansion of the Universe
Tamara M. Davis, Charles H. Lineweaver
http://xxx.lanl.gov/abs/astro-ph/0310808
and if you want to, follow it up with
Expanding Space: the Root of all Evil?
Matthew J. Francis, Luke A. Barnes, J. Berian James, Geraint F. Lewis
http://xxx.lanl.gov/abs/0707.0380
I very much recommend the second one
• The old links don’t work anymore (at least for me right now). Try http://arxiv.org/abs/astro-ph/0310808 . Note that Tamara’s entire thesis is online; I just re-read parts of it a minute ago (I got there by a route having nothing to do with this blog, though I did notice along the way that Peter will be in Copenhagen again soon): http://arxiv.org/abs/astro-ph/0402278 . It’s a good introduction to cosmology and some of its confusing concepts. I particularly recommend the acknowledgments.
• telescoper Says:
June 23, 2011 at 11:59 am
I’m in Copenhagen now, and will be back again in August.
• Anton Garrett Says:
June 23, 2011 at 11:02 am
But I (thought I) understood that! Let’s discuss at Lords…
9. Cusp Says:
June 23, 2011 at 1:15 pm
> Let’s discuss at Lords…
Alas, I won’t be at Lords – and, to quote Dreadlock Holiday, “I don’t like cricket”
• Anton Garrett Says:
June 23, 2011 at 2:30 pm
That comment of mine was posted so as to show here as a response to something said by Peter, who WILL be at Lords with me.
Glad you *love* cricket!
10. These hyperclusters stretching over 3 billion light years would require over 100 billion years to form. Like the sloan great wall, a vast cosmic filament is associated. The dark flow is believed to be 150 billion light years away, and would indicate that plasma structures are fractal out to larger scales. Alfven proposed 26 fractal plasma mediums, which includes the galactic corona magnetic bubbles surrounding the milky way, and galaxy clusters having the hottest densest medium known. The IGM is believed to contain most of the baryonic matter of the universe, and the WHIM filaments about half the mass of the universe. Jets extend the lengths of galaxies, and most galaxies are nearby relative to their sizes apart. Only by seeing more of the sky with the SDSS, were they able to detect these hyperclusters. Like atoms, stars, galaxies, superclusters… it seems that there is no law or rule where the smallest plasma particle nor largest structure exists. There are always smaller particles and larger structures, each having their own relative time. Imperance, change, transitoriness takes place with everything. Size is relative to other objects.
http://holographicgalaxy.blogspot.com
http://hologramuniverse.wordpress.com
• telescoper Says:
July 4, 2011 at 6:15 pm
Sigh.
11. [...] redshifts. These most distant and oldest known galaxies are forming Hyperclusters ! cosmic clumpiness conundra 12 billion light year scale view by BOSS Filamentary Emission by a Rat Cell Milky Way Satellite [...]
12. [...] redshifts. These most distant and oldest known galaxies are forming Hyperclusters ! cosmic clumpiness conundra 12 billion light year scale view by BOSS Filamentary Emission by a Rat Cell Milky Way Satellite [...]
13. [...] is cosmic inhomogeneity on even larger scales, of course, but in such cases the “peculiar velocities” generated by the lumpiness can [...]
14. bobby Says:
February 23, 2012 at 9:18 am
The scalelength R at which the Universe is homogenous (10, 100, 1000 Mpc h^-1 ?) is a comoving length, right?
Does it mean that this scale is (1+z) smaller at a redshift z?
Thanks
• telescoper Says:
February 23, 2012 at 9:49 am
The physical scale will be smaller at higher redshift, yes.
15. bobby Says:
February 24, 2012 at 1:24 am
Thanks for the (quick!) reply.
R varies like 1 / (1+z) right?
I am also curious about the latest estimates of this scalelength. Has a consensus been reach as far as you know?
cheers
• Yes; R_0/R = 1 + z where R_0 is the scale factor today.
I’m sure we can say that the scale length is smaller than R_0.
16. [...] me that I never completed the story I started with a couple of earlier posts (here and there), so while I wait for the rain to stop I thought I’d make myself useful by posting something [...]
17. [...] do we find strong evidence against leftover relics and topological defects, but we measured this Harrison-Zel’dovich spectrum very accurately back in the 1990s, which was predicted by inflation more than a decade before it was observed! In [...]
%d bloggers like this:
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 26, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9364129304885864, "perplexity_flag": "middle"}
|
http://www.citizendia.org/Pseudo-Riemannian_manifold
|
In differential geometry, a pseudo-Riemannian manifold (also called a semi-Riemannian manifold) is a generalization of a Riemannian manifold. Differential geometry is a mathematical discipline that uses the methods of differential and integral Calculus to study problems in Geometry In Riemannian geometry, a Riemannian manifold ( M, g) (with Riemannian metric g) is a real Differentiable manifold M It is one of many things named after Bernhard Riemann. The key difference between the two is that on a pseudo-Riemannian manifold the metric tensor need not be positive-definite. In the mathematical field of Differential geometry, a metric tensor is a type of function defined on a Manifold (such as a Surface in space In Mathematics, a definite bilinear form is a Bilinear form B such that B ( x, x) has a fixed Instead a weaker condition of nondegeneracy is imposed. In Mathematics, specifically Linear algebra, a degenerate Bilinear form f(xy on a Vector space V is one such that
## Introduction
### Manifolds
Main articles: Manifold, differentiable manifolds
In differential geometry a differentiable manifold is a space which is locally similar to a Euclidean space. A manifold is a mathematical space in which every point has a neighborhood which resembles Euclidean space, but in which the global structure may be A differentiable manifold is a type of Manifold that is locally similar enough to Euclidean space to allow one to do Calculus. Differential geometry is a mathematical discipline that uses the methods of differential and integral Calculus to study problems in Geometry A differentiable manifold is a type of Manifold that is locally similar enough to Euclidean space to allow one to do Calculus. In an n-dimensional Euclidean space any point can be specified by n real numbers. These are called the coordinates of the point. In Mathematics and its applications a coordinate system is a system for assigning an n - Tuple of Numbers or scalars to each point
An n-dimensional differentiable manifold is a generalisation of n-dimensional Euclidean space. In a manifold it may only be possible to define coordinates locally. This is achieved by defining coordinate patches: subsets of the manifold which can be mapped into n-dimensional Euclidean space. For other uses of "atlas" see Atlas (disambiguation. In Mathematics, particularly topology an atlas describes how
See Manifold, differentiable manifold, coordinate patch for more details. A manifold is a mathematical space in which every point has a neighborhood which resembles Euclidean space, but in which the global structure may be A differentiable manifold is a type of Manifold that is locally similar enough to Euclidean space to allow one to do Calculus. For other uses of "atlas" see Atlas (disambiguation. In Mathematics, particularly topology an atlas describes how
### Tangent spaces and metric tensors
Main articles: Tangent space, metric tensor
Associated with each point p in an n-dimensional differentiable manifold M is a tangent space (denoted $\,T_pM$). In Mathematics, the tangent space of a Manifold is a concept which facilitates the generalization of vectors from Affine spaces to general manifolds since In the mathematical field of Differential geometry, a metric tensor is a type of function defined on a Manifold (such as a Surface in space In Mathematics, the tangent space of a Manifold is a concept which facilitates the generalization of vectors from Affine spaces to general manifolds since This is an n-dimensional vector space whose elements can be thought of as equivalence classes of curves passing through the point p. In Mathematics, a vector space (or linear space) is a collection of objects (called vectors) that informally speaking may be scaled and added In Mathematics, given a set X and an Equivalence relation ~ on X, the equivalence class of an element a in X
A metric tensor is a non-degenerate, smooth, symmetric, bilinear map which assigns a real number to pairs of tangent vectors at each tangent space of the manifold. In the mathematical field of Differential geometry, a metric tensor is a type of function defined on a Manifold (such as a Surface in space In Mathematics, specifically Linear algebra, a degenerate Bilinear form f(xy on a Vector space V is one such that In Mathematics, a bilinear map is a function of two arguments that is linear in each In Mathematics, the real numbers may be described informally in several different ways Denoting the metric tensor by g we can express this as $g : T_pM \times T_pM \to \mathbb{R}$.
The map is symmetric and bilinear so if $X, Y, Z \in T_pM$ are tangent vectors at a point p in the manifold M then we have
• $\,g(X,Y) = g(Y,X)$
• $\,g(aX + Y, Z) = a g(X,Z) + g(Y,Z)$
for some real number a.
That g is non-degenerate means there are no non-zero $X \in T_pM$ such that $\,g(X,Y) = 0$ for all $Y \in T_pM$. In Mathematics, specifically Linear algebra, a degenerate Bilinear form f(xy on a Vector space V is one such that
### Metric signatures
Main article: Metric signature
For an n-dimensional manifold the metric tensor (in a fixed coordinate system) has n eigenvalues. The signature of a Metric tensor (or more generally a nondegenerate Symmetric bilinear form, thought of as Quadratic form) is the number of positive In Mathematics, given a Linear transformation, an of that linear transformation is a nonzero vector which when that transformation is applied to it changes If the metric is non-degenerate then none of these eigenvalues are zero. The signature of the metric denotes the number of positive and negative eigenvalues, this quantity is independent of the chosen coordinate system by Sylvester's rigidity theorem and locally non-decreasing. The signature of a Metric tensor (or more generally a nondegenerate Symmetric bilinear form, thought of as Quadratic form) is the number of positive If the metric has p positive eigenvalues and q negative eigenvalues then the metric signature is (p,q). For a non-degenerate metric p + q = n.
## Definition
A pseudo-Riemannian manifold $\,(M,g)$ is a differentiable manifold $\,M$ equipped with a non-degenerate, smooth, symmetric metric tensor $\,g$ which, unlike a Riemannian metric, need not be positive-definite, but must be non-degenerate. A differentiable manifold is a type of Manifold that is locally similar enough to Euclidean space to allow one to do Calculus. In the mathematical field of Differential geometry, a metric tensor is a type of function defined on a Manifold (such as a Surface in space In Riemannian geometry, a Riemannian manifold ( M, g) (with Riemannian metric g) is a real Differentiable manifold M In Mathematics, a definite bilinear form is a Bilinear form B such that B ( x, x) has a fixed Such a metric is called a pseudo-Riemannian metric and its values can be positive, negative or zero.
The signature of a pseudo-Riemannian metric is (p,q) where both p and q are non-zero.
## Lorentzian manifold
A Lorentzian manifold is an important special case of a pseudo-Riemannian manifold in which the signature of the metric is (1,n − 1) (or sometimes (n − 1,1), see sign convention). In Physics, a sign convention is a choice of the signs (plus or minus of a set of quantities in a case where the choice of sign is arbitrary Such metrics are called Lorentzian metrics. They are named after the physicist Hendrik Lorentz. Hendrik Antoon Lorentz ( July 18, 1853 &ndash February 4, 1928) was a Dutch Physicist who shared the 1902 Nobel
### Applications in physics
After Riemannian manifolds, Lorentzian manifolds form the most important subclass of pseudo-Riemannian manifolds. They are important because of their physical applications to the theory of general relativity. General relativity or the general theory of relativity is the geometric theory of Gravitation published by Albert Einstein in 1916
A principal assumption of general relativity is that spacetime can be modeled as a 4-dimensional Lorentzian manifold of signature (3,1) (or equivalently (1,3)). General relativity or the general theory of relativity is the geometric theory of Gravitation published by Albert Einstein in 1916 SpaceTime is a patent-pending three dimensional graphical user interface that allows end users to search their content such as Google Google Images Yahoo! YouTube eBay Amazon and RSS Unlike Riemannian manifolds with positive-definite metrics, a signature of (p,1) or (1,q) allows tangent vectors to be classified into timelike, null or spacelike (see Causal structure). The causal structure of a Lorentzian manifold describes the causal relationships between points in the manifold
## Properties of pseudo-Riemannian manifolds
Just as Euclidean space $\mathbb{R}^n$ can be thought of as the model Riemannian manifold, Minkowski space $\mathbb{R}^{n-1,1}$ with the flat Minkowski metric is the model Lorentzian manifold. In Riemannian geometry, a Riemannian manifold ( M, g) (with Riemannian metric g) is a real Differentiable manifold M In Physics and Mathematics, Minkowski space (or Minkowski spacetime) is the mathematical setting in which Einstein's theory of Special relativity In Physics and Mathematics, Minkowski space (or Minkowski spacetime) is the mathematical setting in which Einstein's theory of Special relativity Likewise, the model space for a pseudo-Riemannian manifold of signature (p,q) is $\mathbb{R}^{p,q}$ with the metric: $g = dx_1^2 + \cdots + dx_p^2 - dx_{p+1}^2 - \cdots - dx_n^2$
Some basic theorems of Riemannian geometry can be generalized to the pseudo-Riemannian case. In particular, the fundamental theorem of Riemannian geometry is true of pseudo-Riemannian manifolds as well. In Riemannian geometry, the fundamental theorem of Riemannian geometry states that on any Riemannian manifold (or Pseudo-Riemannian manifold) there is a This allows one to speak of the Levi-Civita connection on a pseudo-Riemannian manifold along with the associated curvature tensor. In Riemannian geometry, the Levi-Civita connection is the torsion -free Riemannian connection, i On the other hand, there are many theorems in Riemannian geometry which do not hold in the generalized case. For example, it is not true that every smooth manifold admits a pseudo-Riemannian metric of a given signature; there are certain topological obstructions. Topology ( Greek topos, "place" and logos, "study" is the branch of Mathematics that studies the properties of Furthermore, a submanifold of a pseudo-Riemannian manifold need not be a pseudo-Riemannian manifold. In Mathematics, a submanifold of a Manifold M is a Subset S which itself has the structure of a manifold and for which the Inclusion
## See also
In Riemannian geometry, a Riemannian manifold ( M, g) (with Riemannian metric g) is a real Differentiable manifold M The causal structure of a Lorentzian manifold describes the causal relationships between points in the manifold In Mathematics, a metric or distance function is a function which defines a Distance between elements of a set. The signature of a Metric tensor (or more generally a nondegenerate Symmetric bilinear form, thought of as Quadratic form) is the number of positive
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 15, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8941669464111328, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/37230/hausdorff-dimension-of-higher-powers-of-the-mandebrot-set/37261
|
## Hausdorff dimension of higher powers of the Mandebrot set ?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
My third question about Shishikura's result :
Shishikura (1991) proved that the Hausdorff Dimension of the boundary of the Mandelbrot set equals 2, in this paper1. The Mandelbrot set is defined by iterating to infinity the z^2+c map.
Does his result also apply for higher powers, such as z^8 + c ?
Thanks again.
-
## 1 Answer
Yes, it does. See the full statement of Theorem 2 on page 6. The assumptions of the theorem are:
Suppose that a rational map $f_0$ of degree $d\ (> 1)$ has a parabolic fixed point ζ with multiplier exp(2πip/q) ($p, q \in\mathbb{Z}, \mathit{gcd}(p, q) = 1$) and that the immediate parabolic basin of ζ contains only one critical point of $f_0$.
This is the case for $z^d+c$.
-
Thanks for your answer. – Alexis Monnerot-Dumaine Sep 3 2010 at 8:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8888207077980042, "perplexity_flag": "middle"}
|
http://www.reference.com/browse/be+characteristic+of
|
Definitions
# Modulus and characteristic of convexity
In mathematics, the modulus and characteristic of convexity are measures of "how convex" the unit ball in a Banach space is. In some sense, the modulus of convexity has the same relationship to the ε-δ definition of uniform convexity as the modulus of continuity does to the ε-δ definition of continuity.
## Definitions
The modulus of convexity of a Banach space (X, || ||) is the function δ : [0, 2] → [0, 1] defined by
$delta \left(varepsilon\right) = inf left\left\{ left. 1 - left| frac\left\{x + y\right\}\left\{2\right\} right| , right| x, y in B, | x - y | geq varepsilon right\right\},$
where B denotes the closed unit ball of (X, || ||). The characteristic of convexity of the space (X, || ||) is the number ε0 defined by
$varepsilon_\left\{0\right\} = sup \left\{ varepsilon | delta\left(varepsilon\right) = 0 \right\}.$
## Properties
• The modulus of convexity, δ(ε), is a non-decreasing function of ε. Goebel claims the modulus of convexity is itself convex, while Lindenstrauss and Tzafriri claim that the modulus of convexity need not itself be a convex function of ε.
• (X, || ||) is a uniformly convex space if and only if its characteristic of convexity ε0 = 0.
• (X, || ||) is a strictly convex space (i.e., the boundary of the unit ball B contains no line segments) if and only if δ(2) = 1.
## References
• Goebel, Kazimierz (1970). "Convexity of balls and fixed-point theorems for mappings with nonexpansive square". Compositio Mathematica 22 (3): 269–274.
Wikipedia, the free encyclopedia © 2001-2006 Wikipedia contributors (Disclaimer)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8259844183921814, "perplexity_flag": "head"}
|
http://mathhelpforum.com/calculus/2224-surface-integral.html
|
# Thread:
1. ## Surface Integral
I need to compute the surface integral of (curl A).n dS where:
A= (xyz+2)i + xj + ((x^2)z + (y^2))k
and S is defined by z>=0, z=16-(x^2)-(y^2).
(n is the unit normal field).
Thanks.
2. First, I calculated this using Stokes' theorem but since it's been a while since I did these, I wasn't sure of the outcome. Today, I did it without Stokes (just the surface integral) and I found the same, I'll show both here
Stokes states that the surface integral of the curl of the vector field (scalar multiplication with the exterior normal vector) is equal to the line integral of the vector field over the boundary of the surface.
$\iint\limits_S curl \vec v \cdot \vec n} \, \, dS = \oint\limits_C {\vec v\left( {\vec r} \right) \cdot d\vec r}$
In this case we have:
$\vec v = \left( {xyz + 2,x,x^2 z + y^2 } \right)$
$\vec r = \left( {x,y,16 - x^2 - y^2 } \right)$
S is the upper part of a paraboloid, the boundary in the xy-plane is the circle $C:x^2 + y^2 = 4^2$
I'll first do the line integral, we parametrize into polar coordinates:
$<br /> \left\{ \begin{gathered}<br /> x = 4\cos t \Rightarrow dx = - 4\sin t \hfill \\<br /> y = 4\sin t \Rightarrow dy = 4\cos t \hfill \\ <br /> \end{gathered} \right.<br />$
Since we're working in the xy-plane, we have that z = 0.
$<br /> \oint\limits_C {\vec v\left( {\vec r} \right) \cdot d\vec r} = \int\limits_0^{2\pi } {\left( {2,4\cos t,16\sin ^2 t} \right)} \frac{{d\left( {4\cos t,4\sin t,0} \right)}}<br /> {{dt}}dt<br />$
$<br /> 4\int\limits_0^{2\pi } {\left( {\frac{1}<br /> {2},\cos t} \right)} \left( { - \sin t,\cos t} \right)dt = 8\int\limits_0^{2\pi } {2\cos ^2 t - \sin tdt} <br />$
Elementary integration then yields a result of $16\pi$
---
Now, without Stokes. Since the surface is given as z = f(x,y), we can parametrize in x and y. This simplifies the calculation of the surface integral. We can calculate the flux as follows:
$<br /> \iint\limits_S {curl \vec v \cdot \vec n}\,dS = \iint\limits_g {\left( {curl \vec v \cdot \vec n} \right)}\,\left\| {\frac{{\partial \vec r}}<br /> {{\partial x}} \times \frac{{\partial \vec r}}<br /> {{\partial y}}} \right\|dS<br />$
Now we integrate over the projection of the paraboloid on the xy-plane which gives the same circle as integration area again. By the definition of the normal vector, we can simplify this to
$<br /> \iint\limits_g {curl \vec v \cdot \left( {\frac{{\partial \vec r}}<br /> {{\partial x}} \times \frac{{\partial \vec r}}<br /> {{\partial y}}} \right)}\,dS<br />$
With r and v as above, we have
$<br /> \frac{{\partial \vec r}}<br /> {{\partial x}} \times \frac{{\partial \vec r}}<br /> {{\partial y}} = \left( {1,0, - 2x} \right) \times \left( {0,1, - 2y} \right) = \left( {2x,2y,1} \right)<br />$
$<br /> curl \vec v = \nabla \times \left( {xyz + 2,x,x^2 z + y^2 } \right) = \left( {2y,xy - 2xz,1 - xz} \right)<br />$
But since we're integrating over the circle in the xy-plane, we have again that z = 0. So the scalar product gives
$<br /> \left( {2x,2y,1} \right) \cdot \left( {2y,xy,1} \right) = 2xy^2 + 4xy + 1<br />$
Now it's possible to coninue cartesian or to convert to polar coordinates. I'll give the integrals and leave the integration to you, both give the same result.
$<br /> \int\limits_{ - 4}^4 {\int\limits_{ - \sqrt {16 - y^2 } }^{\sqrt {16 - y^2 } } {2xy^2 + 4xy + 1\,dx} dy} = 16\pi <br />$
$<br /> \int\limits_0^{2\pi } {\int\limits_0^4 {\left( {128\sin ^2 t\cos t + 32\sin 2t + 1} \right)rdr} dt} = 16\pi <br />$
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9164018630981445, "perplexity_flag": "head"}
|
http://ergodicity.net/2012/09/02/the-card-cyclic-to-random-shuffle/
|
# An Ergodic Walk
a process whose average over time converges to the true average
September 2, 2012
## The card-cyclic to random shuffle
Posted by Anand Sarwate under Uncategorized | Tags: arXiv, probability |
Mixing time of the card-cyclic to random shuffle
Ben Morris
We analyze the following method for shuffling $n$ cards. First, remove card 1 (i.e., the card with label 1) and then re-insert it randomly into the deck. Then repeat with cards 2, 3,…, $n$. Call this a round. R. Pinsky showed, somewhat surprisingly, that the mixing time is greater than one round. We show that in fact the mixing time is on the order of $\log n$ rounds.
The talk is based on a paper with Weiyang Ning (a student at UW) and Yuval Peres. The description the results is somewhat different because it’s $\log n$ rounds of $n$ moves, or $n \log n$ moves. From the intro to the paper:
To prove the lower bound we introduce the concept of a barrier between two parts of the deck that moves along with the cards as the shuffling is performed. Then we show that the trajectory of this barrier can be well-approximated by a deterministic function $f$… we relate the mixing rate of the chain to the rate at which $f$ converges to a constant.
The key is to use path coupling, a technique pioneered by Bubley and Dyer. It’s a cute result and would be a fun paper to read for a class project or something, I bet.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 8, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9375758171081543, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/43762/why-are-topological-superconductors-hard-to-make
|
Why are Topological Superconductors hard to make?
Topological insulators (TI) have already been made in lab. Topological superconductors (TSC), being close cousins of TI, seem harder to make. Why is that?
It seems that materials in connection with majorana zero modes all have bad fate in lab. Are there any deep reasons behind?
-
I just thought of mentioning that at least one topological superconductor (TSC) is already known to exist in the lab, which is, He-3B. It belongs to DIII Class TSC in the conventional notation. – Tarun Jan 28 at 3:05
2 Answers
It's not the making as opposed to verifying of topological superconductors that is difficult experimentally. One of the most useful techniques in identifying topological properties of a material is Angle-Resolved Photoemission Spectroscopy (ARPES). ARPES can independently image the bulk and surface modes of a 3-D solid with very good energy and momentum resolution. As a result, it can detect the Dirac-like edge states, which are a signature of its topological nature. As a matter of fact, the first 3-D topological insulator that was identified, i.e. Bi$_x$Sb$_{1-x}$, was using ARPES in 2008. Interestingly, other 3-D topological insulators such as Bi$_2$Se$_3$ and Bi$_2$Te$_3$ have been around since the 1960s as thermoelectric materials. Even their band structures were studied using the pseudopotential technique. They were identified as topological insulators only very recently. The reason has to do partly with the fact that ARPES itself has experienced a renaissance very recently; that's primarily due to the study of high-temperature superconductivity in cuprates. In a way, you can say that the critical factor in the study of topological superconductors has been that of limitations in characterization technology.
One of the proposed topological superconductors is Cu$_x$Bi$_2$Se$_3$. You can check out this paper:
Liang Fu and Erez Berg. Odd-Parity Topological Superconductors: Theory and Application to Cu$_x$Bi$_2$Se$_3$. Phys. Rev. Lett. 105 no. 9, 097001 (2010). doi:10.1103/PhysRevLett.105.097001.
It has a superconducting transition temperature of 3.8 K and an superconducting energy gap of < 1meV. Current state-of-the-art ARPES systems cannot easily access these parameter regimes. Other than characterization difficulties, Cu$_x$Bi$_2$Se$_3$ is extremely hard to work with. The Cu atoms are dopants; i.e. Cu$_x$Bi$_2$Se$_3$ is not stoichiometrically stable. The diffusion of Cu atoms during the measurement process obfuscates the interpretation of data. ARPES already is notorious for difficult data analysis in normal stoichiometrically stable materials.
So, long story short, people are still improving instrument capabilities to identify topological superconductors. I hope that was helpful.
-
Thank you very much! – ChenChao Nov 9 '12 at 8:41
As for the second question (are the Majorana difficult to realize in lab ?) the answer is obviously yes, and for the same reason that we have no idea what to look for ! (NB: of course there are some predictions about the experimental signature of the Majorana, but no smoking gun experiment.
-
Hi Oaoa, and welcome to Physics Stack Exchange! When you post something as an answer, it should be limited to answering the question, so I've removed the part of your post that didn't do that. – David Zaslavsky♦ Dec 9 '12 at 1:49
I am actually curious about what was removed. – ChenChao Dec 9 '12 at 9:26
Can we detect long range entanglement in general? – ChenChao Dec 9 '12 at 9:28
Hi David, Thanks for teaching me. I nevertheless thought I was answering the question. One of the reason the topological superconductor are harder to make is that we have no clear picture on what they are. I even believe we do not have a single undoubted theoretical proof they exist. That's what I've tried to say. Maybe it was unclear, sorry for that. Bets regards David. – Oaoa Dec 9 '12 at 9:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9550579190254211, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/57916/generators-of-the-multiplicative-group-of-gf2
|
# Generators of the multiplicative group of GF(2)
GF(3) can be constructed as follows by polynomial $p(x)=x+1$:
$0=0$
$1=1$
$\alpha=2$
GF(5) can be constructed as follows by polynomial $p(x)=x+2$:
$0=0$
$1=1$
$\alpha=3$
$\alpha^2=\alpha\cdot\alpha=3\cdot3\bmod5=4$
$\alpha^3=\alpha^2\cdot\alpha=4\cdot3\bmod5=2$
Question is, what can be defined as element $\alpha$ in GF(2) and what primitive polynomial can be used to construct this field?
-
1
The possible confusion here is that the trivial group (with one element only) doesn't really need a generator. You may use $x-1$ as the generating polynomial in the same way as usual, but there really is nothing to generate :-) – Jyrki Lahtonen Aug 17 '11 at 4:17
## 2 Answers
A generator of the multiplicative group of a finite field is an element $\alpha$ such that the powers of $\alpha$ include all non-zero elements of the field. The multiplicative group of GF(2) has one element, and thus one generator: $\alpha = 1$.
-
Hint: How many element are there in the multiplicative group of $GF(2)$? Based on that can you point at a generator (if you need one)?
-
2 elements, 0 and 1, but is it possible to define element $\alpha$ in GF(2)? – scdmb Aug 16 '11 at 21:11
@scdmb: Well, because $\alpha$ has to be in the multiplicative group, it cannot be zero. – Jyrki Lahtonen Aug 16 '11 at 21:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9234916567802429, "perplexity_flag": "head"}
|
http://mathhelpforum.com/algebra/41447-am-i-doing-correctly.html
|
# Thread:
1. ## Am I doing This correctly?
5x + 6y = 18 solve for y
5x - 5x + 6y = 18 + 5x
6y = 18 + 5x
6y/6 = 18 + 5x/6
y = 3 + 5x
This has 2 variables, do I need to solve for x or will y=3+5x work out?
2. When the problem says "solve for y," the intent is simply to get y by itself.
You're on the right track, but:
5x + 6y = 18
5x - 5x + 6y = 18 - 5x (here, you added 5x to the right hand side)
6y = 18 - 5x
6y/6 = (18-5x)/6
y = 3 - 5x/6 (When you divide by 6, you have to divide EACH TERM by 6 - this means you need to do 18/6 AND 5x/6)
So, you were on the right track, just a couple small things.
3. Thank you, a simple + or - can break me I now know this.
So when I plug in everything should it look like this?
5x + 6y = 18 solve for y
5x - 5x + 6y = 18 - 5x
6y = 18 - 5x
6y/6 = 18 + 5x/6
y = 3 - 5x/6
5x + 6(3 - 5x/6) = 18
5x + 18 - 30x/36 = 18
5x + 18 - 18 - 30x/36 = 18 -18
5x - 30x/6 = 0
30x/6 - 30x/6 = 0
x = 0
5(0) + 6y = 18
0 + 6y = 18
6y = 18
6y/6 = 18/6
y = 3
(0,3)
I will be so happy if I'm on the right trail.
4. Not quite. When you plug it back in, you get:
5x + 6(3 - 5x/6) = 18
5x + 18 - 5x = 18 (Note that [tex]6 \times \frac{5x}{6} = \frac{30x}{6} = 5x[\math].
18 = 18.
This means you've done it right - if you ended up with something like 12 = 18, you'd know you made a mistake.
The problem is NOT asking you to find numbers for x and y. The only thing you have to do when it says "solve for y" is to get y by itself and everything else on the other side. Your final answer is:
$y = 3 - \frac{5x}{6}$.
That's it!
Note that this does describe a relationship between x and y, and (0,3) does satisfy this relationship. However, note that there are an infinite number of combinations of x and y that work. If you put in, say, 6 for x, you get:
$y = 3 - \frac{30}{6} = 3 - 5 = -2$
so you get the point (6, -2).
If you were to graph ALL the points (there are an infinite number of them) that satisfy your equation, you'd get a straight line.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9324358701705933, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-math-topics/207823-proving-induction.html
|
# Thread:
1. ## Proving by induction
Can anyone help me get started on this math problem? I understand what the problem is purposing and I already know that it is indeed true, but how do I prove it?
Suppose there is a circular auto track that is one mile long. Along the track there are n > 0
gas stations. The combined amount of gas in all gas stations allows a car to travel exactly one
mile. A car has a gas tank that will hold a lot more than needed to get around the track once, but it starts out empty.
Show that no matter how the gas stations are placed, there is a starting point for the car such
that it can go around the track once (clockwise) without running out of gas.
Thanks
2. ## Re: Proving by induction
Without loss of generality, let us label the gas stations $a_1, a_2, ... a_n$, where $a_1$ holds the most amount of gas $a_2$ holds the second most amount, and so on. I'm only focusing on the 'worst' case as well, in other words, that all the gas stations hold exactly the required amount to go around the track once.
We are given that
$\sum_{i=1}^{n} a_i = 1$
We know that the base case, n=1, is true. If there is one gas station on the track, then we place the car right at the gas station.
Then we can invoke the induction step that if we have n gas stations, then we can place the car somewhere on the track and it will go around the track once.
Now we have to prove P(n+1), that is to say that if we have n+1 gas stations then we have to prove that we can place the car somewhere on the track and it will go around the track once. careful, however.
For this step we can only assume
$\sum_{i=1}^{n+1} a_i = 1$ (S1) not $\sum_{i=1}^{n} a_i = 1$
However, we do know that by (S1) $\sum_{i=1}^{n} a_i = 1 - a_{n+1}$. However, since we ordered the gas stations we know that " $a_{n+1}$ holds the least amount of gas amongst all the other gas stations." (S2)
and from here I don't know how to be more rigorous to prove that this is enough to show that the car can go around the track exactly once... but at least it's a start.
EDIT: I want to say something like "No matter where you put gas station $a_{n+1}$, the car will reach $a_{n+1}$ with the amount of gas that it has stored on its trip." with proof invoking (S1), (S2) and our induction step assumption.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9629249572753906, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/19847/finding-number-of-roots-of-a-polynomial-in-the-unit-disk
|
# Finding number of roots of a polynomial in the unit disk
I would like to know how to find the number of (complex) roots of the polynomal $f(z) = z^4+3z^2+z+1$ inside the unit disk. The usual way to solve such a problem, via Rouché's theorem does not work, at least not in an "obvious way".
Any ideas?
Thanks!
edit: here is a rough idea I had: For any $\epsilon >0$, let $f_{\epsilon}(z) = z^4+3z^2+z+1-\epsilon$. By Rouché's theorem, for each such $\epsilon$, $f_{\epsilon}$ has exactly 2 roots inside the unit disc. Hence, by continuity, it follows that $f$ has 2 roots on the closed unit disc, so it remains to determine what happens on the boundary. Is this reasoning correct? what can be said about the boundary?
-
## 1 Answer
This one is slightly tricky, but you can apply Rouché directly.
Let $g(z) = 3z^2 + 1$. Note that $|g(z)| \geq 2$ for $|z| = 1$ with equality only for $z = \pm i$ (because $g$ maps the unit circle onto the circle with radius $3$ centered at $1$).
On the other hand for all $|z| = 1$ we have the estimate $h(z) = |f(z) - g(z)| = |z(z^3 + 1)| \leq 2$ and for $z = \pm i$ we have $h(\pm i) = \sqrt{2} < 2 \leq |g(\pm i)|$. Therefore $|f(z) - g(z)| < |g(z)|$ for all $|z| = 1$ and thus Rouché can be applied.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9261617064476013, "perplexity_flag": "head"}
|
http://mathoverflow.net/revisions/38060/list
|
## Return to Answer
2 slightly expanded last para
I think the construction you're looking for can be seen as a right adjoint, and hence the details of the construction can be seen as coming from general transfinite constructions of adjoints.
$\newcommand{\inl}{\mathrm{inl}} \newcommand{\Coalg}{\mathbf{Coalg}}$ There's a functor $\inl^* : F$-$\Coalg \longrightarrow (F+1)$-$\Coalg$; it embeds $F$-coalgebras asthe full subcategory of "error-free" $F+1$-coalgebras, and is induced by the natural transformation $\inl : F \rightarrow F+1$ in an obvious-once-you-write-down-the-diagram way.
Now, if I'm understanding right, the construction you're looking at, the "error-free core" of an $F+1$ coalgebra, is the right adjoint to this.
Moreover, I think there should be theorems that show automagically why this can be computed by the construction you give, as an $\omega$-long limit of pullbacks — but I'm not sure exactly where, I'm afraid. It's almost certainly deducible from the Kelly "Unified treatment of transfinite constructions" paper, well-described by Tom Leinster here; the constructions of that have a very similar flavour.
Relevant
Possibly relevant well-known constructions to compare (in Kelly and elsewhere): the construction of an algebraically-(co)free (co)monad on an endofunctor, and ; the construction of the free $T$-algebra on a $T$-graph.T$-graph; the free$S$-algebra on a$T$-algebra, given a monad map$S \to T\$.
1
I think the construction you're looking for can be seen as a right adjoint, and hence the details of the construction can be seen as coming from general transfinite constructions of adjoints.
$\newcommand{\inl}{\mathrm{inl}} \newcommand{\Coalg}{\mathbf{Coalg}}$ There's a functor $\inl^* : F$-$\Coalg \longrightarrow (F+1)$-$\Coalg$; it embeds $F$-coalgebras asthe full subcategory of "error-free" $F+1$-coalgebras, and is induced by the natural transformation $\inl : F \rightarrow F+1$ in an obvious-once-you-write-down-the-diagram way.
Now, if I'm understanding right, the construction you're looking at, the "error-free core" of an $F+1$ coalgebra, is the right adjoint to this.
Moreover, I think there should be theorems that show automagically why this can be computed by the construction you give, as an $\omega$-long limit of pullbacks — but I'm not sure exactly where, I'm afraid. It's almost certainly deducible from the Kelly "Unified treatment of transfinite constructions" paper, well-described by Tom Leinster here; the constructions of that have a very similar flavour.
Relevant well-known constructions to compare (in Kelly and elsewhere): the construction of an algebraically-(co)free (co)monad on an endofunctor, and the construction of the free $T$-algebra on a $T$-graph.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.920470654964447, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/81865/wiener-process-db2-dt/256073
|
# Wiener Process $dB^2=dt$
Why is $dB^2=dt$? Every online source I've come across lists this as an exercise or just states it, but why isn't this ever explicitly proved? I know that $dB=\sqrt{dt}Z$, but I don't know what squaring a Gaussian random variable means.
-
1
What do you mean by every book? Could you list a couple? – cardinal Nov 14 '11 at 2:58
My mistake. I meant every online source I've come across from googling. This was stated in a Mathematical Finance class without justification, and I've been spending hours trying to figure out how this comes about. – j.diddland Nov 14 '11 at 6:13
1
Sorry. The point of my question, which may not have been clear, was to get a feel for the level at which you were expecting an answer. What textbook does the course use? Do you know about quadratic variation? At the level of say, S. Shreve, Stochastic Calculus for Finance or, maybe, Karatzas & Shreve, Brownian Motion and Stochastic Calculus? Or, maybe, the course is more at the level of J. C. Hull, Options, Futures, and Other Derivatives? Providing this kind of info will help me or someone else provide an answer at the appropriate level. Cheers. :) – cardinal Nov 14 '11 at 9:43
## 2 Answers
Obviously $dB_t^2 \neq dt$, since $dB_t \sim \mathcal{N} (0, dt)$ is a random variable, while $dt$ is deterministic.
As Michael Hardy said, they really meant to say $\mathbb{E} \left[ dB_t^2 \right] = dt$. To convince yourself, compute $$\mathbb{E} \left[ dB_t^n \right] = \int_{-\infty}^{+\infty} \frac{1}{\sqrt{2 \pi dt}} \exp\left(-\frac{x^2}{2 dt}\right) x^n dx \, .$$
-
Sorry but the nonrigorous shorthand $dB_t^2=dt$ refers to a much deeper result than the (true) fact that $B_{t+s}-B_t$ is centered with variance $s$, which your answer reduces to. As such, one may find said answer rather misleading. Or, since the question is badly formulated and the OP never answered @cardinal's fully justified demand for background, one can also consider all this rather moot... – Did Dec 11 '12 at 6:13
Certainly not. First, because neither $E(dB_t^4)$ nor $dt^2$ are well defined objects. Second, and even more importantly, because what the shorthand $dB_t^2=dt$ refers to is the whole class of Doob's semimartingale decompositions which Itô's formula provides (for example, to stay at the level of a toy example, the fact that $t\mapsto B_t^2-t$ is a martingale, which is not reducible to the fact that $E(B_t^2)=t$). – Did Dec 11 '12 at 6:44
– William S. Wong Dec 11 '12 at 6:47
For independent random variables, the variance of the sum equals the sum of the variances. So $\mathbb{E}((\Delta B)^2)=\Delta t$, i.e. if you increment $t$ a little bit, then the variance of the value of $B$ before that increment plus the variance of the increment equals the variance of the value of $B$ after the increment.
Or you could say $$\frac{\mathbb{E}((\Delta B)^2)}{\Delta t} = 1.$$ That much follows easily from the first things you hear about the Wiener process. I could then say "take limits", but that might be sarcastic, so instead I'll say that for a fully rigorous answer, I'd have to do somewhat more work.
-
Sorry but the nonrigorous shorthand $dB_t^2=dt$ refers to a much deeper result than the (true) fact that $B_{t+s}-B_t$ is centered with variance $s$, which your answer reduces to. As such, one may find said answer rather misleading. Or, since the question is badly formulated and the OP never answered @cardinal's fully justified demand for background, one can also consider all this rather moot... – Did Dec 11 '12 at 6:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9520204663276672, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/310382/how-many-complex-numbers-z-xiy-are-there-such-that-xy-1-and-eix2y2
|
# How many complex numbers $z=x+iy$ are there such that $x+y=1$ and $e^{i(x^2+y^2)}=1.$
I am stuck on the following problem that says:
How many complex numbers $z=x+iy$ are there such that $x+y=1$ and $e^{i(x^2+y^2)}=1.$ The options are as follows:
$1.0$
$2.$Non-zero but finitely many
$3.$Countably infinite
$4.$Uncountably infinite.
My Attempt: From $e^{i(x^2+y^2)}=1= e^{i(2n \pi)}$ which gives $x^2+y^2=2n \pi$ (where $n \in \mathbb N$) that indicates family of concentric circles with center at the origin. The required solution is the intersection of $x^2+y^2=2n \pi$ and the line $x+y=1$.But now I can not draw the conclusion.Am I going in the right direction? Can someone throw light on it.Thanks in advance for your time.
-
That's a great start. You can take 4 out of the list. – julien Feb 21 at 17:43
You're doing very well... I think you must have found the intersection of these circles with the line. What is holding you up from finishing? – rschwieb Feb 21 at 17:45
Like @rschwieb says, you're almost done. Substitute $y=1-x$ in the circle equations and solve the resulting quadratic equation in $x$. You'll find a neat parametrization of your solution set. – julien Feb 21 at 17:48
1
This is not about the complex number $z$ at all! It should be "How many points $(x,y) \in \mathbb R^2$ are there..." – TonyK Feb 21 at 18:02
## 2 Answers
$x+y=1$ describes a line. Any sufficiently big circle around the origin intersects this lline in two points. So we must have countably many solutions.
-
## Did you find this question interesting? Try our newsletter
Sign up for our newsletter and get our top new questions delivered to your inbox (see an example).
email address
Yes, but now consider how many times do the line and circles intersect. Looks like it would be either option 3 or 4 though one is more likely if you consider the domain of n. How many unique values of n exist?
-
I was trying to give a hint rather than finish the answer. – JB King Feb 21 at 17:52
Thanks a lot sir.I have got it. I guess option $3$ will be my friend. – learner Feb 21 at 17:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9434396624565125, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/219565/number-of-matrices-whose-square-is-identity
|
# Number of matrices whose square is identity
How many matrices are such that $A^2 =I$, where $A$ is a $2\times2$ matrix and $I$ is a $2\times2$ identity matrix?
I can only think of the identity and it negative are they more? Is it an application of Cayley-Hamilton theorem. I have seen a similarly post by I cannot follow it. Could someone answer in simple and understandable terms.
-
1
Can $A$ be complex matrix? – Patrick Li Oct 23 '12 at 18:45
2
There are more, e.g., $\begin{pmatrix} 0& 1\\1&0\end{pmatrix}$. – Fabian Oct 23 '12 at 18:46
## 4 Answers
Hint:$A=\left(\begin{array}{cc}1&a\\ 0&-1 \end{array}\right)$ check that $A^2=I$
-
Still not all, see my comment above. – Fabian Oct 23 '12 at 18:47
1
@Fabian I wanted to say the number is infinite – clark Oct 23 '12 at 18:48
Ok, got your point. – Fabian Oct 23 '12 at 18:49
@clark, I think this is the best answer, thanks very much. – Vaolter Oct 24 '12 at 13:00
You can compute this manually if you want:
$$\left[\begin{array}{cc} a & b \\c & d \end{array}\right]^2=\left[\begin{array}{cc} a^2+bc & b(a+d) \\c(a+d) & bc+d^2 \end{array}\right]$$
To get the identity matrix, either $a=-d$, so $a^2+bc=1$ (and these can be picked freely, leaving plenty of options) or $b=0$ and $c=0$, so $a=\pm1, d=\pm 1$.
More conceptually, you're asking this: "What linear transformation, applied twice, brings you back to where you started?" You could swap the $x$ and $y$ axes:
$$\left[\begin{array}{cc} 0 & 1 \\1 & 0 \end{array}\right]$$
flip the space around the $x$ axis:
$$\left[\begin{array}{cc} -1 & 0 \\0 & 1 \end{array}\right]$$
Or a number of other things! Just think of a transformation that is undone by applying it again, and find the matrix that corresponds to it.
-
1
I assumed matrices over $\mathbb{R}$. The connection to other fields is relatively straightforward, though I can elaborate if needed. – Robert Mastragostino Oct 23 '12 at 18:53
The possible eigenvalues are $\pm 1$. With the exception of $I$ and $-I$ the matrix will be a reflection. Therefore in a suitable basis it is given by $$\begin{pmatrix} 1 &0 \cr 0 & -1 \end{pmatrix}\ .$$ All other solutions are conjugate to this matrix. e.g. $$\begin{pmatrix}11 & -20 \cr 6 & -11 \end{pmatrix}$$
-
Let's call $q$ the minimum polynomial such that $q(A)=0$. It's a well known fact that $q$ must divide $t^2-1$. This means that it could be:
1. $t-1$
2. $t+1$
3. $(t-1)(t+1)$
In each one of this cases A is diagonalizable. That's because $q$ is the minimun polynomial which nullify $A$ (So, as example, $A-I$ is enough to nullify the generalized eigenspace for $1$).
So we have just 3 Jordan forms.
$\begin{pmatrix}1 & 0 \\ 0 & 1\end{pmatrix}$ $\begin{pmatrix}-1 & 0 \\ 0 & -1\end{pmatrix}$ $\begin{pmatrix}-1 & 0 \\ 0 & 1\end{pmatrix}$
To answer your question you have those 3 Conjugacy classes.
Notes: This argument can easily be generalized. In $\mathbb{F_2}$ there is just identity.
-
so are you saying there are infinite matrices? – Vaolter Oct 24 '12 at 13:09
I'm saying much more. But yes, in infinite field it's as you said. – Ivan Oct 24 '12 at 17:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9221319556236267, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/geometry/123936-pythagorean-problem.html
|
Thread:
1. Pythagorean problem
Hi, I'm studying geometry and I'm stuck with this problem.
The problem says
"Two points, A and B, are given in the plane (no figure). Describe the set of points for which ${AX}^2-{BX}^2$ is constant.
I am thinking that this triangle is isoceles and $AX^2 - BX^2 = 0$
but it sounds too easy of an answer maybe someone can explain this to me in terms of a geometric theory or whatever. Please explain with some reference to something..
Thanks!
2. Hello, stonedcarli!
Two points, $A$ and $B$, are given in the plane (no figure).
Describe the set of points for which ${AX}^2-{BX}^2$ is constant.
I assume the following:
. . We want the locus of a point $X(x,y)$
. . The given points are: . $A(a_1,a_2)$ and $B(b_1,b_2)$
. . $AX$ and $BX$ are distances.
We have: . $\begin{array}{ccc}AX^2 &=& (x-a_1)^2 + (y-a_2)^2 \\ \\[-4mm]<br /> BX^2 &=& (x-b_1)^2 + (y-b_2)^2 \end{array}$
Then: . $\bigg[(x-a_1)^2 + (y-a^2)^2\bigg] - \bigg[(x-b_1)^2 + (y - b_2)^2\bigg] \;=\;k$
. . $\bigg[x^2 - 2a_1x + a_1^2 + y^2 - 2a_2y + a_2^2\bigg] - \bigg[x^2 - 2b_1x + b_1^2 + y^2 - 2b_2y + b_2^2\bigg] \;=\;k$
. . $x^2 - 2a_1x + a_1^2 + y^2 - 2a_2y + a_2^2 - x^2 + 2b_1x - b_1^2 - y^2 + 2b_2y - b_2^2 \;=\;k$
. . $-2a_1x + 2b_1x - 2a_2y - 2b_2y +a_1^2 - b_1^2 - a_2^2 + b_2^2 \;=\;k$
. . $2(b_1-a_1)x + 2(b_2-a_2)y \;=\;k -(a_1^2 - a_2^2) + (b_1^2 - b_2^2)$
The equation is of the form: . $Ax + By \:=\:C\quad\hdots\;\text{ a straight line}$
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9213322401046753, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/275512/hadamard-regularization-isnt-working-out
|
# Hadamard regularization isn't working out
As part of an exercise in a grad course on "mathematical methods" (always such a helpful name), I've been asked to evaluate $I=\int_0^{1/2}{(x^2-x+c)^{-2}dx}$ as a Hadamard finite part integral for $0 \leq c<1/4$. The integrand is positive, so $I$ should be positive too, right?
HFPIs aren't actually on the syllabus, although it's briefly mentioned for a different example that Hadamard regularization gives the same answer as doing it with generalized functions. Well, I've tried every method I can think of - every substitution, every excuse for which complex value of a logarithm to take (depending on how you tackle it, you get artanh with an argument >1 in the result), some methods based on generalized functions (which my assignment otherwise doesn't work with), some which rely on Wikipedia's account of Hadamard regularization and some which do neither - but whatever I do, the answer isn't a positive real number.
Whatever methods I use, the closest I've gotten is this. When $c > 1/4$, $I = 4(4c-1)^{-3/2}\operatorname{arctan}{(4c-1)^{-1/2}}+1/{c(4c-1)}$, which is real. When $c < 0$, the identity $\operatorname{arctan} iz = i \operatorname{artanh} z$ again gives a real value, namely $I = 4(1-4c)^{-3/2}\operatorname{arctan}{(1-4c)^{-1/2}}+1/{c(4c-1)}$. (Maybe the first term needs a - sign due to me misusing powers of i, but I don't think so.) As I understand it, for $x > 1$ $\operatorname{artanh}x$ has multiple complex values, all with the same real part and in conjugate pairs. So can you "average out" the imaginary part? Well, even if you do, the result is negative for small positive $c$, and tends to $-\infty$ as $c \to 0$, even though we should have $I=\int_0^{1/2}{(x^2-x)^{-2}dx}=+\infty$.
So what's going on here?
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9646375775337219, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/152935/sequence-convergence-and-parentheses-insertion
|
# Sequence convergence and parentheses insertion
find an example for a series $a_{n}$ that satisfies the following:
1. $a_{n}\xrightarrow[n\to\infty]{}0$
2. ${\displaystyle \sum_{n=1}^{\infty}a_{n}}$ does not converges
3. There is a way to insert parentheses so ${\displaystyle \sum_{n=1}^{\infty}a_{n}}$ will converges.
I was thinking about the series:$1-1+\frac{1}{2}+\frac{1}{2}-\frac{1}{2}-\frac{1}{2}+\frac{1}{4}+\frac{1}{4}+\frac{1}{4}+\frac{1}{4}-\frac{1}{4}-\frac{1}{4}-\frac{1}{4}-\frac{1}{4}+...$
But I don't know how to prove 2.
Also will be nice to hear another examples, if any.
-
Any sequence $a_n$ converging to zero such that there exists parenthesis with every term inside the parenthesis will work. You could replace your $2^n$ in the denominator by a $\log(n)$ or a $n^n$, it doesn't matter. As long as you use your trick and put enough brackets. =) – Patrick Da Silva Jun 2 '12 at 16:41
Look at the sequence of partial sums, $(S_n)$, defined by $S_n=\sum_{k=1}^n a_k$. It should be clear how to show that this sequence does not converge (find a subsequence that alternates between $0$ and $1$, e.g.). Recall that an infinite sum converges iff its sequence of partial sums converges. – David Mitra Jun 2 '12 at 16:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9146410822868347, "perplexity_flag": "head"}
|
http://unapologetic.wordpress.com/2009/07/27/orthogonal-transformations/?like=1&source=post_flair&_wpnonce=9916d21231
|
# The Unapologetic Mathematician
## Orthogonal transformations
Given a form on a vector space $V$ represented by the transformation $B$ and a linear map $T:V\rightarrow V$, we’ve seen how to transform $B$ by the action of $T$. That is, the space of all bilinear forms is a vector space which carries a representation of $\mathrm{GL}(V)$. But given a particular form $B$, what is the stabilizer of $B$? That is, what transformations in $\mathrm{GL}(V)$ send $B$ back to itself.
Before we answer this, let’s look at it in a slightly different way. Given a form $B$ we have a way of pairing vectors in $V$ to get scalars. On the other hand, if we have a transformation $T$ we could use it on the vectors before pairing them. We’re looking for those transformations so that for every pair of vectors the result of the pairing by $B$ is the same before and after applying $T$.
So let’s look at the action we described last time: the form $B$ is sent to $T^*BT$. So we’re looking for all $T$ so that
$\displaystyle T^*BT=B$
We say that such a transformation is $B$-orthogonal, and the subgroup of all such transformations is the “orthogonal group” $\mathrm{O}(V,B)\subseteq \mathrm{GL}(V)$. Sometimes, since the vector space $V$ is sort of implicit in the form $B$, we abbreviate the group to $\mathrm{O}(B)$.
Now there’s one particular orthogonal group that’s particularly useful. If we’ve got an inner-product space $V$ (the setup for having our bra-ket notation) then the inner product itself is a form, and it’s described by the identity transformation. That is, the orthogonality condition in this case is that
$\displaystyle T^*T=I_V$
A transformation is orthogonal if its adjoint is the same as its inverse. This is the version of orthogonality that we’re most familiar with. Commonly, when we say that a transformation is “orthogonal” with no qualification about what form we’re using, we just mean that this condition holds.
Let’s take a look at this last condition geometrically. We use the inner product to define a notion of (squared-)length $\langle v\vert v\rangle$ and a notion of (the cosine of) angle $\langle w\vert v\rangle$. So let’s transform the space by $T$ and see what happens to our inner product, and thus to lengths and angles.
$\displaystyle\langle T(w)\vert T(v)\rangle=\langle w\rvert T^*T\lvert v\rangle$
First off, note that no matter what $T$ we use, the transformation in the middle is self-adjoint and positive-definite, and so the new form is symmetric and positive-definite, and thus defines another inner product. But when is it the same inner product? When $T^*T=I_V$, of course! For then we have
$\displaystyle\langle T(w)\vert T(v)\rangle=\langle w\rvert T^*T\lvert v\rangle=\langle w\rvert I_V\lvert v\rangle=\langle w\vert v\rangle$
So orthogonal transformations are exactly those which preserve the notions of length and angle defined by the inner product. Geometrically, they correspond to rotations and reflections that change orientations, but leave lengths of vectors the same, and leave the angle between any pair of vectors the same.
About these ads
### Like this:
Like Loading...
Posted by John Armstrong | Algebra, Linear Algebra
## 9 Comments »
1. [...] Unitary transformations are like orthogonal transformations, except we’re working with a complex inner product space. We’ll focus on just the [...]
Pingback by | July 28, 2009 | Reply
2. [...] and Orthogonal Matrices Let’s see what happens when we take a unitary or orthogonal transformation and turn it into a matrix by picking a basis for our vector [...]
Pingback by | July 29, 2009 | Reply
3. [...] of Unitary and Orthogonal Transformations Okay, we’ve got groups of unitary and orthogonal transformations (and the latter we can generalize to groups of matrices over arbitrary fields. [...]
Pingback by | July 31, 2009 | Reply
4. [...] All the transformations in our analogy — self-adjoint and unitary (or orthogonal), and even anti-self-adjoint (antisymmetric and “skew-Hermitian”) transformations [...]
Pingback by | August 5, 2009 | Reply
5. [...] over whose transpose and inverse are the same, which is related to the orthogonal group of orthogonal transformations of the real vector space preserving a specified bilinear form . Lastly, [...]
Pingback by | September 8, 2009 | Reply
6. [...] We can choose one, but there’s a whole family of other equally valid choices related by orthogonal transformations. Ideally, we should define things which don’t depend on this choice at all. If we must make a [...]
Pingback by | September 22, 2009 | Reply
7. [...] and inner product-preserving transformations, but we can also throw in reflections to get the whole orthogonal group, of all transformations from one orthonormal basis to [...]
Pingback by | November 10, 2009 | Reply
8. [...] Now that we’ve got our playing field down, we need to define a reflection. This will be an orthogonal transformation, which is just a fancy way of saying “preserves lengths and angles”. What makes it a [...]
Pingback by | January 18, 2010 | Reply
9. [...] if the form is invariant for the representation , then the image of is actually contained in the orthogonal group: [...]
Pingback by | September 27, 2010 | Reply
« Previous | Next »
## About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
• ## RSS Feeds
RSS - Posts
RSS - Comments
• ## Feedback
Got something to say? Anonymous questions, comments, and suggestions at Formspring.me!
%d bloggers like this:
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 33, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.906356155872345, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/72791/is-the-endomorphism-algebra-of-a-dualizable-bimodule-necessarily-finite-dimension/73110
|
## Is the endomorphism algebra of a dualizable bimodule necessarily finite dimensional?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Let $k$ be field. Let $A$, $B$ be $k$-algebras, and let ${}_AM_B$ be a dualizable bimodule.
Pre-Question (too naive): Is the algebra of $A$-$B$-bilinear endomorphisms of $M$ necessarily finite dimensional?
Answer: No. Take $A$ some infinite dimensional commutative algebra, and $M={}_AA_A$. Then $End({}_AA_A)=A$ is not finite dimensional.
Question: Assume that $A$ and $B$ have finite dimensional centers. Is it then true that the algebra of $A$-$B$-bilinear endomorphisms of $M$ has to be finite dimensional?
Special case for which I know the answer to be positive:
If $k=\mathbb C$ or $\mathbb R$, and if we're in a C*-algebra context, then I know how to prove that $End({}_AM_B)$ is finite dimensional. But my proof relies on certain inequalities, and it does not generalize.
Definitions:
A bimodule ${}_AM_B$ is called left dualizable if there is an other bimodule ${}_BN_A$ (the left dual) and maps $r:{}_AA_A\to {}_AM\otimes_BN_A$ and $s:{}_BN\otimes_AM_B\to {}_BB_B$ such that $(1\otimes s)\circ(r\otimes 1) = 1_M$ and $(s\otimes 1)\circ(1\otimes r) = 1_N$.
There's a similar definition of right dualizability. I'm guessing that right dualizability is not equivalent to left dualizability, but I don't know a concrete example that illustrates the difference between these two notions.
In my question above, I've just used the term "dualizable", which you should interpret as "both left and right dualizable".
There's also the notion of fully dualizable, which means that the left dual has its own left dual, which should in turn have its own left dual etc., and similarly for right duals. Once again, I'm a bit vague as to whether all these infinitely many conditions are really needed, or whether they are implied by finitely many of them.
PS: I hope that I didn't mix my left and my right.
-
Take A=**C** and B some infinite dimensional algebra. Then B as an A-B-bimodule is left dualizable but not right dualizable. – Dmitri Pavlov Aug 13 2011 at 11:08
Thank you Dmitri. I'll edit the question. – André Henriques Aug 13 2011 at 12:29
3
Example 6.1 in the paper by Ponto and Shulman “Shadows and traces in bicategories” gives a complete characterization of left/right dualizable bimodules: An A-B-bimodule is right dualizable if and only if it is finitely generated and projective as a right B-module and similarly for left dualizability. – Dmitri Pavlov Aug 13 2011 at 21:00
## 2 Answers
This doesn't directly answer your question, but I think that it is relevant. Also, it's ridiculously long, but hopefully that will make for easier reading than a shorter, vaguer version.
Let's think of the center of an algebra $A$ and the space of bimodule maps from $_AA_A$ to itself. Pictorially, we think of this vector space as associated to a circle subdivided into an incoming and outgoing interval and labeled entirely by $A$. (Later we will consider circles labeled by various combinations of $A$, $B$, $M$, and $N$.)
We should (in TQFTish contexts) think of the finite dimensionality of a vector space $V$ as meaning there is another vector space $W$, together with maps $r:k\to V\otimes W$ and $s: V\otimes W \to k$, which satisfy the zig-zag identity. (See the definition of dualizable in André's original question.)
But now we should smell a rat. If the center of $A$ is associated to an (appropriately decorated) circle, then the maps $r$ and $s$ should be associated to annuli (a circle cross a 1-dimensional cup or cap, like the plumbing under a kitchen sink). The "rat" is nonlocality; we are in danger of violating the dogma that everything should be local. Circles and annuli might seem simple, but intervals and disks are simpler still. We should try to replace the condition that $A$ has finite dimensional center with a local criterion.
This can done as follows. Roughly speaking, we want to construct a fully extended 1+1-dimensional TQFT-like structure. The 0- and 1-dimensional parts can built for any algebra (or linear 1-cateogy) $A$. The existence of the 2-dimensional part, generated by (2-dimensional) cups, caps and saddles which satisfy relations corresponding to Morse cancellations, will imply that $A$ has finite dimensional center.
Now for the details. To a point we associate $A$ (or the category of represenations of $A$, if we prefer).
Our 1-manfolds will be split into "incoming" and "outgoing" parts. The boundary points of the 1-manifolds will be either left-facing or right-facing, and these will correspond to left and right actions of $A$.
• To an "outgoing" interval we associate the bimodule $_AA_A$.
• To an "incoming" interval associate the linear dual bimodule $_A(A^*)_A$.
• To an interval subdivided into incoming and outgoing halves, and with the endpoints facing left, we associate the bimodule $Hom_A(A\to A)$. This is isomorphisc as a bimodule to $_AA_A$ via the correspondence $a \mapsto (x \mapsto ax)$.
• To an interval subdivided into incoming and outgoing halves, and with the endpoints facing right, we associate the bimodule $_AHom(A\to A)$. This is isomorphisc as a bimodule to $_AA_A$ via the correspondence $a \mapsto (x \mapsto xa)$.
• To a disjoint union of an incoming and outgoing interval we associate $Hom_k(A \to A)$, thought of as a quadramodule. (The action of $a\otimes b\otimes c\otimes d$ on the function $x\mapsto f(x)$ is $x\mapsto af(cxd)b$.)
We re allowed to glue intervals together if left-facing joins to right-facing and incoming [outgoing] joins to incoming [outgoing]. Gluing outgoing intervals corresponds to taking tensor product over $A$. Gluing incoming intervals corresponds to taking cotensor product over $A$. It follows that
• To a circle composed of an incoming and an outgoing interval (a "bigon"), we associate the space of bimodule maps from $_AA_A$ to itself. This is just the center of $A$ via the correspondence $z \mapsto (x \mapsto zx = xz)$.
• To a circle that is entirely outgoing we associate the "coinvariants" $A/\langle xy \sim yx\rangle$.
• To a circle that is entirely incoming we associate the space of traces on $A$ -- $f\in A^*$ such that $f(xy) = f(yx)$.
We will only need to consider the first sort of circle above (the bigon).
Now for the 2-dimensional part. Rather than describe the full structure one might want, I'll concentrate on the parts of the 2d structure which are relevant to the question about centers. The generators for the 2d part correspond to Morse critical points (index 0 (cup), index 1 (saddle), index 2 (cap)). These come in various flavors, depending on how the top and bottom 1-manfolds are divided into incoming and outgoing. Since we are only interested in bigon-type circles, we will only consider one flavor of cup and cap. But we will need two flavors of saddle.
• The cup is an element of the center $Z(A)$ of $A$. There is a canonical choice, $1\in Z(A) \subset A$.
• The cap is a function $Z(A) \to k$. This is extra data.
• The "easy" saddle is a quadramodule map from $Hom_A(A\to A) \otimes_k {}_AHom(A\to A)$ to $Hom_k(A\to A)$. There is a canonical choice for the easy saddle which sends $(x\mapsto ax)\otimes(x \mapsto xc)$ to $(x \mapsto axc)$.
• The "hard" saddle goes the other way, from $Hom_k(A\to A)$ to $Hom_A(A\to A) \otimes_k {}_AHom(A\to A)$. I don't think there is a canonical choice for this -- it's extra data.
We require the above 2d data (cup, cap, easy saddle, hard saddle) to satisfy identities corresponding to Morse cancelations.
One identity involves the cup and the easy saddle and is automatically satisfied (assuming we make the canonical choices for cup and easy saddle). More specifically we go from $Hom_A(A\to A)$ via the cup to $Hom_A(A\to A) \otimes_k Hom_{A\times A^{op}}(A\to A)$ and then via the easy saddle back to $Hom_A(A\to A)$, and this composition is the identity. (And similarly for $_AHom(A\to A)$.)
The other identity involves the cap and the hard saddle. We go from $Hom_A(A\to A)$ via the hard saddle to $Hom_A(A\to A) \otimes_k Hom_{A\times A^{op}}(A\to A)$ and then via the cap back to $Hom_A(A\to A)$ (and similarly for $_AHom(A\to A)$). This composition is required to be the identity.
The above identities should be thought of as 2d versions of the zig-zag identity.
Definition. An algebra $A$ has Property X is there exists a cap and hard saddle satisfying the above identities. (Recall that the cup and easy saddle come for free.)
Observation 1. If $A$ has Property X, then $A$ has finite dimensional center. Proof: Define $r:k\to Z(A)\otimes Z(A)$ to be the cup followed by the hard saddle induced map from $Hom_{A\times A^{op}}(A\to A)$ to $Hom_{A\times A^{op}}(A\to A) \otimes Hom_{A\times A^{op}}(A\to A)$. Define $s:Z(A)\otimes Z(A)\to k$ to be the similar map composed of the easy saddle followed by the cap. The 1d zig-zag identity follow from the 2d Morse cancellation identities.
In summary, Property X is our local replacement for "finite dimensional center".
Remark. I (weakly) suspect that algbras with finite dimensional centers which do not satisfy Property X are relatively rare. If it turns out that Property X is very strong (e.g. if it implies that $A$ is finite dimensional semisimple), then I'll be a little bit embarrassed (but only weakly surprised).
Now, finally, we get to the punch line.
Observation 2. If $A$ and $B$ have Property X and $_AM_B$ is dualizable, then $End(M)$ is finite dimensional.
Proof. The idea is to construct a 2d TQFT, similar to the one above, in which manifolds are divided into $A$-colored parts and $B$-colored parts, with the interfaces between $A$ and $B$ labeled by $M$ or $N$ as appropriate. (Recall that $N$ is the dual of $M$.) So 1-manifolds in this TQFT can be independently either incoming or outgoing, either $A$-colored or $B$-colored, and (if they are part of the boundary of a 2d bordism) either top or bottom. We will construct maps $$u: k\to End(M) \otimes End(N)$$ and $$n: End(N) \otimes End(M) \to k$$ which satisfy the zig-zag identity. It will follow that $End(M)$ is finite dimensional.
We will need $M$ to be both right and left dualizable, with the right and left duals both $N$. Let $r_l, s_l, r_r, s_l$ be the left and right 1d cups and caps.
The map $u$ corresponds to a bigon cross a 1d cup, half colored by $A$ and half colored by $B$. More specifically, $u$ is the composition from $k$
• to $Hom_{A\times A^{op}}(A\to A)$ via the cup for $A$
• to $Hom_{A\times A^{op}}(A\to M\otimes_B N)$ via $r_l$
• to $Hom_{A\times A^{op}}(M\otimes_B N\to M\otimes_B N)$ via $s_r$
• to $Hom_{B\times A^{op}}(M\to M) \otimes Hom_{A\times B^{op}}(N\to N)$ via the hard saddle for $B$.
Similarly, $n$ is the composition from $Hom_{A\times B^{op}}(N\to N) \otimes Hom_{B\times A^{op}}(M\to M)$
• to $Hom_{B\times B^{op}}(N\otimes_A M\to N\otimes_A M)$ via the easy saddle for $A$
• to $Hom_{B\times B^{op}}(B \to N\otimes_A M)$ via $r_r$
• to $Hom_{B\times B^{op}}(B \to B)$ via $s_l$
• to $k$ via the cap for $B$.
The zig-zag identity for $u$ and $n$ follows from the right and left zig-zag identities for $M$ and the Morse cancellation identities for $A$ and $B$.
Remark. Unless I've made a mistake, we only need one of $A$ or $B$ to have Property X.
Remark. If we reverse the roles of $A$ and $B$ above, we get different maps $u': k\to End(M) \otimes End(N)$ and $n': End(N) \otimes End(M)$. (The zig-zag for $u$ and $n$ requires that $B$ and Property X, while the zig-zag for $u'$ and $n'$ requires that $A$ have Property X.) If the 2d TQFT structure is more complete, with more flavors of cups, caps and saddles, then one can show that $u=u'$ and $n=n'$.
(This would be all be much easier with pictures. I apologize for the lack of them.)
-
1
I added a picture ;-) More seriousely, am I right to assume that what you call "Property X" is the same as what Jacob Lurie calls fully dualizable? – André Henriques Aug 14 2011 at 18:45
I don't think so, but perhaps I'm mistaken. Of course it involves similar ideas. Which 2-category did you want to situate $A$ in in order to apply the definition of fully dualizable? I think that if we require $A$ to be fully dualizable in the 2-category of algebras-bimodules-intertwinors, then the cup would live in the coinvariants of $A$ rather than the center of $A$ (for example). – Kevin Walker Aug 14 2011 at 19:14
Following up on the previous comment, I think it's actually the saddle(s) which would live in a different space if we simply required that $A$ be fully dualizable in Bimod. Perhaps there's some other way of looking at it in which "Property X" would be equivalent to "fully dualizable", but I don't see it at the moment. – Kevin Walker Aug 14 2011 at 19:50
I'm confused about the relation of X and fully dualizable (in particular it's a property not a structure for A to be fully dualizable- the hard saddle and cap are determined functorially as units/counits of adjunctions if they exist in that setting), but I think Kevin's very nice argument can be applied in the fully dualizable context in any case (as a special case of the cobordism hypothesis with singularities, which covers "quilted" surfaces like the above). – David Ben-Zvi Aug 14 2011 at 20:28
I do seem to remember that $A$ is fully dualizable iff $A$ is finite dimensional semi-simple. – André Henriques Aug 15 2011 at 11:41
show 2 more comments
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
This is more or less orthogonal to Kevin's answer; he puts a stronger restriction on $A$ and $B$, while I put a stronger restriction (perhaps too strong!) on ${_A}M_B$. I likewise apologize for the lack of pictures.
If we assume that
1. ${_A}M_B$ has a 2-sided dual ${_B}N_A$, and
2. the duality pairings exhibit ${_B}N \otimes_A M_B$ as a retract of ${_B}B_B$,
then $\operatorname{End}({_A}M_B)$ is dualizable as a $Z(A)$-module, and hence, if $Z(A)$ is finite dimensional, it will follow that $\operatorname{End}({_A}M_B)$ is finite dimensional.
To prove this, we exhibit a duality pairing between ${_{Z(A)}}\operatorname{End}({_A}M_B)$ and $\operatorname{End}({_B}N_A)_{Z(A)}$.
The evaluation map $\epsilon: {_{Z(A)}}\operatorname{End}({_A}M_B) \otimes_k \operatorname{End}({_B}N_A)_{Z(A)} \to {_{Z(A)}}Z(A)_{Z(A)}$ is given by first mapping to $\operatorname{End}({_A}M \otimes_B N_A)$ and then pre- and post-composing by the appropriate unit and counit, respectively, from the duality pairings between $M$ and $N$ to get an endomorphism of ${_A}A_A$, i.e., an element of $Z(A)$.
The coevaluation map $\eta: k \to \operatorname{End}({_B}N_A) \otimes_{Z(A)} \operatorname{End}({_A}M_B)$ is given by identifying the codomain with $\operatorname{End}({_B}N \otimes_A M_B)$, which follows from condition 2, whence $\eta$ is just the unit of this algebra.
That $\epsilon$ and $\eta$ give a duality pairing follows from condition 2. (The nontrivial morphism in one of the triangle diagrams sends a map $f: {_A}M_B \to {_A}M_B$ to the identity on ${_A}M_B$ tensored with the "trace" of $f$ via the duality pairings. Condition 2 tells us that the identity string can absorb the trace bubble, and similarly for the other triangle diagram.)
Unfortunately, condition 2 (or its "mirror," which would show dualizability as a $Z(B)$-module) seems very strong, but it is crucial for the above proof.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 187, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9217740297317505, "perplexity_flag": "head"}
|
http://nrich.maths.org/5525/clue?nomenu=1
|
The first triangle number is $1$
The second triangle number is $1 + 2$
The third triangle number is $1 + 2 + 3$
The fourth triangle number is $1 + 2 + 3 + 4 \ldots$
The $n^{\text{th}}$ triangle number is $\frac{n(n+1)}{2}$
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8866279125213623, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/100029/3-manifold-theorem-reference-request-or-proof
|
# 3-manifold theorem reference request or proof
The following is a theorem of which I have great interest in but cannot find anything about on the internet,
Every 3-manifold of finite volume comes from identifying sides of some polyhedron
I'm fairly certain that "identifying sides of some polyhedron" may be a simplification of the technical terminology. I believe it is just referring to gluing faces of polyhedron to form closed 3-manifolds. Such examples are given by the Seifert-Weber space, the Poincare homology sphere, the 3-dimensional real projective space, the $\frac{1}{2}$ twist cube space, etc. I'm assuming the proof is based off of Moise's theorem and proceeds as follows,
Let $M$ be an arbitrary closed 3-manifold. By Moise's theorem we have that $M$ can be tetrahedralized, so we let $T$ be the tetrahedralization of $M$ consisting of tetrahedrons $t_{1},...,t_{n}$. Pick an arbitrary tetrahedra $t_{1}$ of $T$ and proceed to glue $t_{2}$ to $t_{1}$, forming a new polyhedron $P_{2}$, and then glue $t_{3}$ to $P_{2}$ resulting in $P_{3}$, and so on. After all tetrahedra $t_{1},...,t_{n}$ have been glued, we have some resulting polyhedron $P_{n}$. From here, then somehow show that $P_{n}$ can be glued to $M$?
Any references to papers, expository writing, a proof of, or even the formal statement and name of this theorem would be greatly appreciated!
-
– Ryan Budney Jan 18 '12 at 5:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9456887245178223, "perplexity_flag": "head"}
|
http://mathhelpforum.com/number-theory/214584-proof-using-wop.html
|
# Thread:
1. ## Proof using WOP
Hello everyone,
I want to prove that for postive integers $x, y, x+y \ne y$. I want to do this using the WOP. Here's what I have done so far:
Suppose for some postive integer $x, \exists y$ such that $y=x+y$. By the WOP, there exists a smallest $x_0$ such that $y=x_0+y$. Now I think I may have to apply the WOP again, but am not sure. Any advice?
Thanks a lot,
Kevin
2. ## Re: Proof using WOP
Originally Posted by kmerfeld
P.S. Is there a way to enter LaTex code?
[TEX]x+y\ne y [/TEX] gives $x+y\ne y$
If you click on the “go advanced tab” you should see $\boxed{\Sigma}$ on the tool-bar. That gives the [TEX]..[/TEX] wrap. Your LaTeX code goes between them.
3. ## Re: Proof using WOP
$x+y\ne y$ Got it. Thanks
#### Search Tags
View Tag Cloud
Copyright © 2005-2013 Math Help Forum. All rights reserved.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8807801008224487, "perplexity_flag": "middle"}
|
http://unapologetic.wordpress.com/2011/06/30/integral-submanifolds/?like=1&source=post_flair&_wpnonce=2c354f29b9
|
# The Unapologetic Mathematician
## Integral Submanifolds
Given a $k$-dimensional distribution $\Delta$ on an $n$-dimensional manifold $M$, we say that a $k$-dimensional submanifold $\iota:N\hookrightarrow M$ is an “integral submanifold” of $\Delta$ if $\iota_*\mathcal{T}_pN=\Delta_{\iota(p)}$ for every $p\in N$. That is, if the subspace of $\mathcal{T}_{\iota(p)}M$ spanned by the images of vectors from $\mathcal{T}_pN$ is exactly $\Delta_p$.
This is a lot like an integral curve, with one slight distinction: in the case on an integral curve we also demand that the length of $c'(t)$ match that of $X_{c(t)}$, not just the direction (up to sign).
Now, if for every $p\in M$ there exists an integral submanifold $N(p)$ of $\Delta$ with $p\in N(p)$, then $\Delta$ is integrable. Indeed, let $X$ and $Y$ belong to $\Delta$. Since $\iota_{*q}:N(p)_q\to\Delta_{\iota(q)}$ is an isomorphism of vector spaces at every point, we can find $\tilde{X}$ and $\tilde{Y}$ that are $\iota$-related to $X$ and $Y$, respectively. That is, $X_{\iota(q)}=\iota_*\tilde{X}_q$ for all $q\in N$, and similarly for $Y$ and $\tilde{Y}$. But then we know that $[X,Y]_{\iota(q)}=\iota_*[\tilde{X},\tilde{Y}]_q$, and so $[X,Y]_{\iota(q)}\in\iota_*\mathcal{T}_qN=\Delta_{\iota(q)}$.
### Like this:
Posted by John Armstrong | Differential Topology, Topology
## 2 Comments »
1. [...] around such that on . Then the curve with coordinates and for all other is a one-dimensional integral submanifold of through [...]
Pingback by | June 30, 2011 | Reply
2. [...] a -dimensional distribution on , which we say is “induced” by , and that any connected integral submanifold of should be contained in a leaf of . It makes sense, then, that we should call a leaf of a [...]
Pingback by | July 1, 2011 | Reply
« Previous | Next »
## About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 34, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9225730299949646, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/2328/electrons-faster-than-speed-of-light
|
# Electrons faster than speed of light
While looking at some exercises in my physics textbook, I came across the following problem which I thought was quite interesting:
It is possible for the electron beam in a television picture tube to move across the screen at a speed faster than the speed of light. Why does this not contradict special relativity?
I suspect that it's because the television is in air, and light in air travels slower than light in a vacuum. So I suppose they're saying the the electron could travel faster in air than the speed of light in air, like what causes Cherenkov radiation?
-
7
You could also just consider a person shining a laser pointer at a distant wall. As you spin around, the spot of the laser pointer moves on the wall with a speed dependent on the distance to the wall. In principle, the wall could be so far away that the spot moves faster than the speed of light. But the light is still moving at the speed of light (in air, or whatever). The spot is not really an object - unless you are the inmate trying to escape from the insane asylum on a beam of light! – Greg P Dec 28 '10 at 22:17
@Greg oh! move across the screen... so is it talking about the picture itself? I thought it was saying the beam from the electron gun was moving faster than light – wrongusername Dec 28 '10 at 22:21
1
Yes. It is something I remember from an intro relativity book. It means the actual spot (yes, the image) moving across the screen. Otherwise, I don't get the point of the question. The electrons themselves don't move faster than light. It is just an illusion of something moving faster than the speed of light. – Greg P Dec 28 '10 at 22:31
There were some other 'paradoxes' where objects seem to move at superluminal speeds. Particularly one from astrophysics which seemed interesting...perhaps someone can remember it for me. – Greg P Dec 28 '10 at 22:35
3
– David Zaslavsky♦ Dec 28 '10 at 23:47
show 2 more comments
## 3 Answers
This is an example of what is sometimes called the "Marquee Effect." Think of the light bulbs surrounding an old-fashioned movie theater marquee, where the light bulbs turn on in sequence to produce the illusion, from a distance, of a light source which is moving around the the marquee.
There is no limit on how short the time interval is between one light turning on and the next turning on, so the perceived light source position can move arbitrarily fast, but in fact nothing is actually moving at all.
In the case of the television screen, the phosphors on the screen can be lit in rapid sequence, but the electrons in the beam do not ever need to move at (or even near) the speed of light.
More generally, there are loads of examples of some imaginary or conceptual "object" moving faster than light, but in all these cases there is nothing actually moving at all. A classic example is the intersection point of two nearly parallel lines, which moves very rapidly as the angle between the lines changes. In this case it is obvious that the moving "object" isn't moving at all, but its still a good example of a case where you can discuss something moving faster than light without there being any violation of physical law.
-
This is a conflation of phase velocity, and group velocity. The beam can be seen to move from say left to right at higher than c, but no information or particles are traveling that fast. Information is being transmitted from the electron gun to the phosphor at well under the speed of light.
It has nothing to do with the media it is embedded in. The information is going from the electron gun to the screen, not from one location on the screen to another.
-
How come the beam is a wave? – wrongusername Dec 28 '10 at 22:12
@wrongusername: A beam of electrons behaves like a wave beam, the same is behaviour is verified even in much larger particles (such as small atoms). The reason lies in Quantum Mechanics, and I can't get into it here. – Bruce Connor Dec 28 '10 at 22:47
5
All of this is mostly irrelevant though, as the wave nature of the beam has nothing to do with your question. This could happen with virtually anything that moves. – Bruce Connor Dec 28 '10 at 22:48
1
Replace the electron beam with a marshmallow gun. People think of quantum theory and waves when they hear "electron" but not with "marshmallows". Of course, it may be hard to actually create a series of marshmallow collisions on a distant wall appearing to move faster than c, but heck this is only a thought experiment, so imagine if you have a powerful enough gun... – DarenW Mar 5 '11 at 21:17
Here's another example from Griffith's book "Introduction to Electrodynamics" which illustrates phenomena where what we see is not what we observe. The apparent speed can be much greater than the speed of light. This speed is just what we see, an illusion, and it's the result of our inability sometimes to see the actual direction of movement of an distant object w.r.t. us and the fact that the light needs some finite time to get to our eyes.
Problem 12.6 Every 2 years, more or less, The New York Times publishes
an article in which some astronomer claims to have found an object
traveling faster than the speed of light. Many of these reports
result from a failure to distinguish what is seen from what is
observed--that is, from a failure to account for light travel time.
Here's an example: A star is traveling with speed $v$ at an angle $\theta$ to
the line of sight (Fig. 12.6). What is its apparent speed across the
sky'?
(Suppose the light signal from $b$ reaches the earth at a time At after
the signal from a, and the star has meanwhile advanced a distance $\Delta s$
across the celestial sphere; by "apparent speed" I mean $\Delta s/\Delta t$.) What
angle $\theta$ gives the maximum apparent speed? Show that the apparent
speed can be much greater than $c$, even if $v$ itself is less than $c$.
It can be easily shown that the apparent speed in this example is:
$u_{app}=\frac{v\sin\theta}{1-\frac{v}{c}\cos\theta}$
To find the angle $\theta$ that gives the maximum apparent speed we just differentiate and solve, for $\theta$, the equation:
$\frac{d u_{app}}{d\theta}=0 \Leftrightarrow \theta_{max}=\cos^{-1}(\frac{v}{c})$
At this angle, $u_{app}=\frac{v}{\sqrt{1-v^2/c^2}}=\gamma v$
This result shows that when $v\to c$, $u_{app}\to \infty$, even though $v<c$.
-
1
This is not quite the same issue as in the question, though. – David Zaslavsky♦ Jul 17 '11 at 23:37
I think this is exactly the same issue. The electrons gun changes its direction, lets say by $\theta=\phi$. If we suppose that the electrons are emitted once at $\theta=0$ and once at $\theta=\phi$, we get an triangle. While the beam at $\theta=0$ travels to the screen, the electron gun rotates by $\phi$ and emits the second beam. So the time difference between the arrival of two beams can be very small. – Andyk Jul 17 '11 at 23:55
Yes, but in the case of the electron gun (and the light bulbs, and the laser pointer, etc.), the light is being emitted by two completely different objects. Nothing actually moves even close to the speed of light. In fact, nothing has to move at all, in the case of the light bulbs. But your example with the star involves light being emitted by the same object at two separate points. Without motion, there is no superluminal effect. That's why they're different phenomena. – David Zaslavsky♦ Jul 18 '11 at 0:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9529497623443604, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/265514/linear-independence-in-f-mathbb-r-mathbb-r
|
# linear independence in $F(\mathbb R,\mathbb R)$
For each subset of each vector space, justify whether their vectors are linearly independent or dependent:
$$\{f,g,h\}, f(x)=e^{2x} , g(x)=x^2 , h(x)=x,\mbox{ in }F(\mathbb R,\mathbb R).$$
-
– experimentX Dec 26 '12 at 19:42
## 1 Answer
Assuming you meant $f(x) = e^{2x}, g(x) = x^2, h(x) = x$, you need to show that if there are constants $\alpha_f, \alpha_g, \alpha_h$ such that $\alpha_f f + \alpha_g g + \alpha_h h = 0$ (that is, equal to the the function $t \mapsto 0$), then these constants must be zero.
Some tricks for this sort of problem are evaluating $\alpha_f f + \alpha_g g + \alpha_h h$ at various points, or differentiating an appropriate number of times and evaluating.
Let $\phi(x) = \alpha_f f(x) + \alpha_g g(x) + \alpha_h h(x)$. We have $\phi(x) = 0$ for all $x$. So, $\phi(0) = \alpha_f = 0$. So we know that $\alpha_f = 0$. We note that since $\phi$ is contstant, $\phi' = 0$, and so $\phi'(0) = \alpha_g g'(0) + \alpha_h h'(0) = \alpha_h = 0$. So we know that $\alpha_h = 0$. Finally, since $\phi(1) = \alpha_g g(1) = \alpha_g = 0$, we have $\alpha_g = 0$.
Hence $f,g,h$ are linearly independent.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9332199096679688, "perplexity_flag": "head"}
|
http://unapologetic.wordpress.com/2007/04/28/divisibility/?like=1&source=post_flair&_wpnonce=4ca8fc8c32
|
The Unapologetic Mathematician
Divisibility
There is an interesting preorder we can put on the nonzero elements of any commutative ring with unit. If $r$ and $s$ are nonzero elements of a ring $R$, we say that $r$ divides $s$ — and write $r|s$ — if there is an $x\in R$ so that $rx=s$. The identity $1$ trivially divides every other nonzero element of $R$.
We can easily check that this defines a preorder. Any element divides itself, since $r1=r$. Further, if $r|s$ and $s|t$ then there exist $x$ and $y$ so that $rx=s$ and $sy=t$, so $r(xy)=t$ and we have $r|t$.
On the other hand, this preorder is almost never a partial order. In fact since $r(-1)=-r$ and $-r(-1)=r$ we see that $r|-r$ and $-r|r$, and most of the time $r\neq-r$. In general, when both $r|s$ and $r|s$ we say that $r$ and $s$ are associates. Any unit $u$ comes with an inverse $u^{-1}$, so we have $u|1$ and $1|u$. If $r=su$ for some unit $u$, then $r$ and $s$ are associates because $s=ru^{-1}$.
We can pull a partial order out of this preorder with a little trick that works for any preorder. Given a preorder $(P,\preceq)$ we write $a\sim b$ if both $a\preceq b$ and $b\preceq a$. Then we can check that $\sim$ defines an equivalence relation on $P$, so we can form the set $P/\sim$ of its equivalence classes. Then $\preceq$ descends to an honest partial order on $P/\sim$.
One place that divisibility shows up a lot is in the ring of integers. Clearly $n$ and $-n$ are associate. If $m$ and $n$ are positive integers with $m|n$, then there is another positive integer $x$ so that $mx=n$. If $x=1$ then $m=n$. Otherwise $m\lneq n$. Thus the only way two positive integers can be associate is if they are the same. The preorder of divisibility on $\mathbb{Z}^\times$ induces a partial order of divisibility on $\mathbb{N}^+$.
Like this:
Posted by John Armstrong | Ring theory
6 Comments »
1. Now, there is a neat trick. I don’t run across preorders nearly as much as I run across partial orders, but from your post I gather that they are partial orders where a
Comment by | April 28, 2007 | Reply
2. As my first post on orders said, a preorder is reflexive and transitive, and a partial order adds antisymmetry. What I left out there is that you can build a partial order from a preorder like I do here. It’s actually another example of the same sort of thing as all those “free” constructions I did for groups and rings and such, and I’ll unify all of them a little later.
Comment by | April 28, 2007 | Reply
3. Oh, how embarrassing. There was more to my comment, but I accidentally used > and *poof*.
Anyway, my real question was: if we throw away commutativity, do people still talk about left- and right-divisibility? Maybe they call it something else.
Comment by | April 28, 2007 | Reply
4. It’s perfectly possible, yeah. Technically you don’t even need a unit to write down the condition. It’s not even a preorder then, since an element need not divide itself.
In fact I seriously considered doing it in full generality, but it complicates things to no end. By far the most common application is to divisibility of natural numbers, since for more general rings you can do it all with ideals anyway.
Comment by | April 28, 2007 | Reply
5. [...] Now we know that we can talk about divisibility in terms of ideals, we remember a definition from back in elementary school: a number is [...]
Pingback by | May 18, 2007 | Reply
6. [...] differ by multiplication by a unit, and so each divides the other. This sort of thing happens in the divisibility preorder for any ring. For polynomials, the units are just the nonzero elements of the base field, [...]
Pingback by | August 5, 2008 | Reply
« Previous | Next »
About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 58, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9289728999137878, "perplexity_flag": "head"}
|
http://crypto.stackexchange.com/questions/142/can-one-efficiently-iterate-valid-bcrypt-hash-output-values/179
|
# Can one efficiently iterate valid bcrypt hash output values?
bcrypt is an intentionally slow hash algorithm. In my last protocol idea, I wanted to use it to expand a password and then only transfer the bcrypt-hashed password.
An efficient attack on this would be an ability to iterate all bcrypt hashes (or only these from passwords in a dictionary, maybe $2^{12}$ ones) without actually bcrypting the passwords.
The output of bcrypt consists of the security parameter (an integer), the used salt (128 bits), and the actual hash (192 bits), which is the result of 64 iterated Blowfish ECB encryptions of the ASCII string "OrpheanBeholderScryDoubt", where the key and s-boxes were expensively created by the (costly) EksBlowfishSetup instead of the normal BlowfishSetup.
Assume we have a quite high security parameter, like 12 (meaning $2^{12}$ iterations in the setup phase).
To make it easier, assume that either the salt is fixed, or we want to enumerate all valid actual hashes for all salts (without paying attention to which salt is used for which hash).
Is there any method of enumerating all the hashes without actually running the bcrypt algorithm, or enumerating the whole 192-bit output space?
What I could see is simply enumerating all possible blowfish states (e.g. subkeys + S-boxes) before doing the 64-times encryption with each such state. But if I understand right, this state consists of a $576$ bit P-array (the subkeys), and $4·265·32=32768$ bits of S-boxes. This is a much larger space than the hash output space ($192$ bits), thus we then simply could assume that every possible hash occurs at least for one such state, and simply enumerate the whole $192$ bit space. Not really a win.
-
## 1 Answer
bcrypt uses Blowfish, which is a block cipher (albeit with a much enlarged key schedule). As such, Blowfish implements a permutation of the space of 64-bit blocks; and there should be no way to distinguish Blowfish (using a random key) from a permutation extracted at random, with uniform probability, from the set of permutations over 64-bit blocks (there are 264! such permutations).
The best cryptanalysis result so far, about precisely distinguishing Blowfish from a random permutation, is due to Vaudenay: some Blowfish keys can be detected (and thus making the block cipher distinguished from a random permutation) if the number of rounds is reduced to 14 (from 16 for the "normal" Blowfish). There is no attack on the full Blowfish, and even if there was, it would only be about the weak keys (about one key in 215 is weak).
Therefore we cannot guess anything about the Blowfish instance, that would not apply to a randomly chosen permutation. This still allows a small remark: since this is ECB encryption, i.e. encryption of three distinct 64-bit blocks, we know that the hash result must consist of three distinct 64-bit blocks, concatenated together. This implies that the space of possible hash values has size 264(264-1)(264-2), which is 2192-3×2128+2×264, i.e. very slightly lower than 2192 (but not by a significant amount).
The trickier part of your question is about bypassing the key schedule. The point of bcrypt is to make processing of a single password as slow as possible for the attacker, but not intolerably slow for the honest systems. The use of Blowfish is a good idea here: the key schedule of Blowfish is optimized for systems which have a few Kbytes of fast RAM, and that's a PC. Implementing Blowfish on, say, a FPGA or ASIC, would be a painful experience; and the RAM requirement is likely to impair performance on GPU as well. This still requires that the key schedule itself is not amenable to hidden optimization, that the attacker may use. To my knowledge, there is no known result on the security of the "expanded key schedule" used in bcrypt.
Note that this algorithm is of the kind which uses self-modifying lookup tables, like MD2 and RC4, which are notoriously difficult to analyze: we do not benefit from the usual tools such as differential or linear cryptanalysis. MD2 and RC4 turned out to have weaknesses, but it took an awful lot of time, and nothing currently suggests that Blowfish could have such weaknesses. We still lack a proper mathematical framework to discuss the security of such constructions. So the absence of published results just means that nobody found an actual break; it does not mean that there are known reasons why such a break may or may not exist. My feeling is that the extended key schedule used in bcrypt is robust for what it is used for (i.e. an unavoidable mass of computations for each password guess).
So my answer to your question ("Is there any method...") is: no.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9321824312210083, "perplexity_flag": "middle"}
|
http://quant.stackexchange.com/questions/845/few-questions-on-binomial-lattice-option-valuation?answertab=active
|
# Few questions on Binomial-Lattice Option Valuation
I have just started applying Binomial-Lattice, however I am yet to fully understand few things. My questions are:
1. What is the concept of working backward (left side) from the values in terminal (farthest) nodes at the right side. Why do we need to do backward induction? I started my first node with some value `S`, at say time `t=1`, then all I need to know are the option values anywhere ahead of this time till the option expires. What is the need and meaning of the values obtained by backward induction and how are those different from forward induction?
2. Now if I apply backward induction, then the values I get at the starting node (the left most node) is far greater than the values at any other node. What does that mean?
Let's take this for example: I start with `S=1.5295e+009` at the starting node, and then after using Binomial-Lattice and doing backward induction, I get `9.9708e+10` at the starting node. Why has it increased this much and what does that imply? If I decrease my time step by two times, then I further get very high values like `-1.235e+25`
3. The values that we get in each node as we move ahead of starting node (i.e. the values on the nodes right hand side to left most node) are the value analogues to Present Value (PV) at that time or Net Present Value (NPV)?
EDIT: This is my Matlab code for binomial lattice:
````function [price,BLOV_lattice]=BLOV_general(S0,K,sigma,r,T,nColumn)
% BLOV stands for Binomial Lattice Option Valuation
%% Constant parameters
del_T=T./nColumn; % where n is the number of columns in binomial lattice
u=exp(sigma.*sqrt(del_T));
d=1./u;
p=(exp(r.*del_T)-d)./(u-d);
a=exp(-r.*del_T);
%% Initializing the lattice
Stree=zeros(nColumn+1,nColumn+1);
BLOV_lattice=zeros(nColumn+1,nColumn+1);
%% Developing the lattice
for i=0:nColumn
for j=0:i
Stree(j+1,i+1)=S0.*(u.^j)*(d.^(i-j));
end
end
for i=0:nColumn
BLOV_lattice(i+1,nColumn+1)=max(Stree(i+1,nColumn+1)-K,0);
end
for i=nColumn:-1:1
for j=0:i-1
BLOV_lattice(j+1,i)=a.*(((1-p).*BLOV_lattice(j+1,i+1))+(p.*BLOV_lattice(j+2,i+1)));
end
end
price=BLOV_lattice(1,1);
````
EDIT 2 (an additional question): If the binomial lattice is giving me option PV's, and my PV's are supposed to decrease with time, then why does more than half of values in my terminal nodes show an increase in values than what I start with (`=S0`). See the attached picture for values.
-
## 1 Answer
Let's start with question (2). If you are not obtaining $S=1.5295e+009$ after backwardation, then you have a bug in your binomial tree code. You may wish to find and eliminate that before proceeding.
One simple check is to make all the terminal nodes have value 1.0. You should obtain that the initial node has value $e^{-rT}$. This assumes, of course, that you are using one of the better tree formulations that does not approximate the interest rate term. Also, check your valuations against one of the online american option pricer website.
Now, the reason you backwardate is, colloquially, that the tree is meant to represent option present-values under a particular set of assumptions and scenarios about how stock prices change. Note that during construction you effectively "forwardate" the stock prices $S$ on the tree (albeit in a trivial manner). Considering the knowledge you start out with for all these stock price scenarios, it is only at the terminal nodes of the tree that the option prices are clearly known with certainty.
The backwardation process is allowing you to form speculative values for the option in nodes / scenarios where you previously did not have a solid idea what the option value is. This whole business is hidden from you in the Black-Scholes formulas, but becomes more explicit in trees due to the need to account for early exercise in the scenarios.
There's a bunch of complicated stochastic calculus and dynamical programming theory behind why the trees you construct are a correct technique for handling the problem of option pricing, but the above should give you a basic idea.
-
If the binomial lattice is giving me option PV's, and my PV's are supposed to decrease with time, then why does half of values in my terminal nodes show an increase in values than what I start with (`=S0`). I have attached a snapshot of the values in the question. – S_H Apr 2 '11 at 5:03
– S_H Apr 2 '11 at 8:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8597630262374878, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/158071/is-there-any-closed-form-expression-to-calculate-each-element-of-the-inverse-of/158076
|
Is there any closed-form expression to calculate each element of the inverse of a matrix?
Considering a generic square matrix $A=(a_{i,j})$ we want to compute its inverse $A^{-1}=\left[a^{(-1)}_{i,j}\right]$.
Is there a way to express each $a^{(-1)}_{i,j}$ using a closed form expression?
-
2
– Rahul Narain Jun 14 '12 at 2:23
1
Depends on what you mean by closed form. Cramer's rule involves det. – André Nicolas Jun 14 '12 at 2:29
Yeah by closed-form expression I mean a set of rules that involves elementary operations... For example, cofactors are calculated using minors, If I wanted to replace the cofactor term in the relation with an expression, how would it be... What I'd like to reach is a final formula not involving more steps to calculate the final quantity. Maybe it is not possible, just want a confirmation of this if possible. – Andry Jun 14 '12 at 2:46
1 Answer
The $ij$ entry of $A^{-1}$ is $(-1)^{i+j}$ times the determinant of the matrix $C_{ji}$ obtained by deleting row $j$ and column $i$ from $A$, all divided by the determinant of $A$. I don't know whether you consider that to be a closed form.
-
Since the formula is a rational function in the entries of $A$, it should be good :) – N. S. Jun 14 '12 at 2:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9259312152862549, "perplexity_flag": "head"}
|
http://www.physicsforums.com/showthread.php?p=4006033
|
Physics Forums
## Calculus, Definition of limit, Concept
Hi comrades.
According to spivak, the defition of limit goes as follows:
" For every ε > 0, there is some δ > 0, such that, for every x, if 0 < |x-a| < δ,
then |f(x) - l |< ε. "
After some exercices, I came across with a doubt.
Say that I could prove that | f(x) - l |< 5ε, for some δ$_{1}$ such that 0 < |x-a| < δ$_{1}$.
Since ε > 0, and thus 5ε > 0, could I say that lim$_{x→a}$f(x) = l based on this proof?
Regards,
PhysOrg.com science news on PhysOrg.com >> Front-row seats to climate change>> Attacking MRSA with metals from antibacterial clays>> New formula invented for microscope viewing, substitutes for federally controlled drug
Recognitions: Gold Member Science Advisor Staff Emeritus Yes, if $|f(x)- L|< 5\epsilon$, for any $\epsilon> 0$ then, taking $\epsilon_1= \epsilon/5$ where $\epsilon$ is any given number, we have $|f(x)- L|< 5\epsilon_1= 5(\epsilon/5)= \epsilon$
Quote by HallsofIvy Yes, if $|f(x)- L|< 5\epsilon$, for any $\epsilon> 0$ then, taking $\epsilon_1= \epsilon/5$ where $\epsilon$ is any given number, we have $|f(x)- L|< 5\epsilon_1= 5(\epsilon/5)= \epsilon$
Thanks!
## Calculus, Definition of limit, Concept
Well, two days later and I have another question.
I tougth in creating another post, but since the question is similar and fairly simple, I will jsut continue on this post.
When introducing Integrals, Spivak reaches the following during a proof:
inf{U(f, P`)} - sup{L(f, P´)} < ε , for any ε > 0.
Since this is true for all ε > 0, it follows that sup{L(f, P´)}= inf{U(f, P`)}∴
First Comment : Althoug Spivak doesn't say it, we shoud keep in mind that inf{U(f, P`)}$\geq$ sup{L(f, P´)}, otherwise one would have to use the absolute value, right?
Second comment, during the study of limits, we used |f(x) - L | < ε, for any ε > 0. So according with this part, for a limit to exist that should be a δ > 0, such that | f(x) - L | = 0. But this doesn't seem right.
Could you shed some light in this?
regards,
Recognitions:
Gold Member
Science Advisor
Staff Emeritus
Quote by c.teixeira Well, two days later and I have another question. I tougth in creating another post, but since the question is similar and fairly simple, I will jsut continue on this post. When introducing Integrals, Spivak reaches the following during a proof: inf{U(f, P)} - sup{L(f, P´)} < ε , for any ε > 0. Since this is true for all ε > 0, it follows that sup{L(f, P´)}= inf{U(f, P)}∴ First Comment : Althoug Spivak doesn't say it, we shoud keep in mind that inf{U(f, P`)}$\geq$ sup{L(f, P´)}, otherwise one would have to use the absolute value, right?
Yes, that's the important case but Spivak doesn't have to say it because if it is negative, it is trivially less than any positive number.
Second comment, during the study of limits, we used |f(x) - L | < ε, for any ε > 0. So according with this part, for a limit to exist that should be a δ > 0, such that | f(x) - L | = 0. But this doesn't seem right.
No, that doesn't follow. The set of all positive numbers does not contain 0- no number in it is equal to 0 but the inf (greatest lower bound) is 0.
Could you shed some light in this? regards,
I am not sure If I follow. The set of all positive numbers doesn't coutain zero, however | f(x) - L| = 0, would respect | f(x) - L| < ε for any ε > 0. right? So, altough I am alredy a bit confused, I still don't understand why he can say that if inf{U(f, P)} - sup{L(f, P´)} < ε , for any ε > 0, then it follows that sup{L(f, P´)}= inf{U(f, P)}∴ But we cannot say that if | f(x) - L| < ε for any ε > 0, then |f(x) - L| = 0. thanks,
Quote by c.teixeira I am not sure If I follow. The set of all positive numbers doesn't coutain zero, however | f(x) - L| = 0, would respect | f(x) - L| < ε for any ε > 0. right? So, altough I am alredy a bit confused, I still don't understand why he can say that if inf{U(f, P)} - sup{L(f, P´)} < ε , for any ε > 0, then it follows that sup{L(f, P´)}= inf{U(f, P)}∴ But we cannot say that if | f(x) - L| < ε for any ε > 0, then |f(x) - L| = 0. thanks,
Let me rearrange my line of thought, so I can make my doubt clearer in order for you to help me out.
So, I actually understand that :we cannot say that if | f(x) - L| < ε for any ε > 0, then |f(x) - L| = 0.
I guess this may be explained to the fact that for any ε > 0, the is a natural number n with $\frac{1}{n}$ < ε. Is this a suitable explanation?
Anyway, to my main question:
So, from | f(x) - L| < ε for any ε > 0, it doesn't follow that |f(x) - L| = 0, why can Spivak assure that if inf{U(f, P)} - sup{L(f, P´)} < ε , for any ε > 0, then it follows that sup{L(f, P´)}= inf{U(f, P)}∴
I hope you have understood,
Regards,
Recognitions: Gold Member Science Advisor Staff Emeritus If it were true that $|f(x)- L|< \epsilon$ for every $\epsilon> 0$ then, yes, we would have to have f(x)= L. But that is NOT true. We are only saying that, given some $\epsilon> 0$, there exist $\delta> 0$ so that if $|x- a|< \delta$, then $|f(x)- L|<\epsilon$ for that particular $\epsilon$, not for all $epsilon$. While the general statement "inf{U(f, P)}-sup{L(f,P)}<$\epsilon$" is true for all $\epsilon> 0$", the particular P depends upon the particular $\epsilon$.
Recognitions:
Gold Member
Homework Help
Science Advisor
Quote by HallsofIvy If it were true that $|f(x)- L|< \epsilon$ for every $\epsilon> 0$ then, yes, we would have to have f(x)= L. But that is NOT true. We are only saying that, given some $\epsilon> 0$, there exist $\delta> 0$ so that if $|x- a|< \delta$, then $|f(x)- L|<\epsilon$ for that particular $\epsilon$, not for all $epsilon$. While the general statement "inf{U(f, P)}-sup{L(f,P)}<$\epsilon$" is true for all $\epsilon> 0$", the particular P depends upon the particular $\epsilon$.
In my student days, I made the revolutionary, and highly applicable definition of the hypercontinuous function:
A function f(x) is hypercontinuous, if, for EVERY epsilon>0 and EVERY delta>0, there exists an L so that |f(x)-L| is less than epsilon.
Limits, sure are a dificult concept to grasp completly. Althoug, you seem quite confident about explanation, I don't undersant it. I have tought about it most of my day. The defition of lim: "for every ε > 0, there is some δ > 0 , such that, for all x, if 0 < | x - a |< δ, then | f(x) - L | < ε". So, there should be and δ$_{1}$ > 0, for wich | f(x) - L | < 10$^{-20}$, given that |x -a| < δ$_{1}$, and there should be another δ$_{2}$, for wich | f(x) - L | < 10$^{-1000}$, and in the end, since it is valid for any ε > 0, shouln't it exist an δ$_{3}$ for wich | f(x) - L | = 0, given that |x -a| < δ$_{3}$ ? Because, this is the line of though we use to explain: → if inf{U(f, P)} - sup{L(f, P´)} < ε , for any ε > 0, then it follows that sup{L(f, P´)}=inf{U(f, P)}∴?, or is it not this tipe of reasoning? Thank you for your most valued explanations.
Quote by c.teixeira → if inf{U(f, P)} - sup{L(f, P´)} < ε , for any ε > 0, then it follows that sup{L(f, P´)}=inf{U(f, P)}∴?
I have actually tried to prove this by myself. Not even knowing if such thing of aplicable to the case.
Here it goes.
Let A = { ε : ε > 0}; Then 0 would be the greatest lower bound for the set A.
But, inf{U(f, P)} - sup{L(f, P´)} < ε, for any ε. Thus meaning inf{U(f, P)} - sup{L(f, P´)} is a lower bound for the set A. Consequenly, inf{U(f, P)} - sup{L(f, P´)}≤ 0, since 0 is the greatest lower bound. Hence sup{L(f, P´)} = inf{U(f, P)}, because on the other hand inf{U(f, P`)} - sup{L(f, P´)} ≥ 0. ∴
By how much did I miss the target?
Regards,
Quote by c.teixeira I have actually tried to prove this by myself. Not even knowing if such thing of aplicable to the case. Here it goes. Let A = { ε : ε > 0}; Then 0 would be the greatest lower bound for the set A. But, inf{U(f, P)} - sup{L(f, P´)} < ε, for any ε. Thus meaning inf{U(f, P)} - sup{L(f, P´)} is a lower bound for the set A. Consequenly, inf{U(f, P)} - sup{L(f, P´)}≤ 0, since 0 is the greatest lower bound. Hence sup{L(f, P´)} = inf{U(f, P)}, because on the other hand inf{U(f, P`)} - sup{L(f, P´)} ≥ 0. ∴ By how much did I miss the target? Regards,
That actually seems right to me.
Quote by c.teixeira Limits, sure are a dificult concept to grasp completly. Althoug, you seem quite confident about explanation, I don't undersant it. I have tought about it most of my day. The defition of lim: "for every ε > 0, there is some δ > 0 , such that, for all x, if 0 < | x - a |< δ, then | f(x) - L | < ε". So, there should be and δ$_{1}$ > 0, for wich | f(x) - L | < 10$^{-20}$, given that |x -a| < δ$_{1}$, and there should be another δ$_{2}$, for wich | f(x) - L | < 10$^{-1000}$, and in the end, since it is valid for any ε > 0, shouln't it exist an δ$_{3}$ for wich | f(x) - L | = 0, given that |x -a| < δ$_{3}$ ? Because, this is the line of though we use to explain: → if inf{U(f, P)} - sup{L(f, P´)} < ε , for any ε > 0, then it follows that sup{L(f, P´)}=inf{U(f, P)}∴?, or is it not this tipe of reasoning? Thank you for your most valued explanations.
This is actually the line of reasoning used to explain that sup{L(f, P´)}=inf{U(f, P`)}
Since for any partition P, U(f,P) is greater than or equal to L(f,P') for any partition P', it follows that U(f,P) is an upper bound on {L(f,P)} From this, it follows that inf{U(f,P)} is greater than or equal to sup{L(f,P)} (Otherwise, we would be able to produce some P such that U(f,P) were not an upperbound for {L(f,P)}) Thus, inf{U(f,P)} - sup{L(f,P)} ≥ 0.
Now to the part I think you're confused about. inf{U(f,P)} - sup{L(f,P)} < ε for all ε > 0 because of the following.
Given any ε > 0, there exist two partitions P and P' such that U(f,P) - L(f,P') < ε. Since U(f,P) ≥ inf{U(f,P)}, and L(f,P) ≤ sup{L(f,P)}, we have that
ε > U(f,P) - L(f,P') ≥ inf{U(f,P)} - L(f,P') ≥ inf{U(f,P)} - sup{L(f,P)} ≥ 0
Since we can do this for all ε > 0, we have that
ε > inf{U(f,P)} - sup{L(f,P)} ≥ 0 for all ε > 0, so that inf{U(f,P)} = sup{L(f,P)} holds.
The difference between saying that Lim x-> a of f(x) is L and |a - b| < ε for all ε > 0 is that, in the first case, we are only guaranteed the existence of an interval centered around a so that |f(x) - L| < ε holds for all values in this interval, and it is crucial to note that the interval may or may not depend on ε. While in saying that |a - b| < ε for all ε > 0, we are saying that two constants are arbitrarily close to each other.
I apologize for my lack of LaTex, but I honestly have no clue how to use it. I hope I helped!
Quote by 00Donut ..........The difference between saying that Lim x-> a of f(x) is L and |a - b| < ε for all ε > 0 is that, in the first case, we are only guaranteed the existence of an interval centered around a so that |f(x) - L| < ε holds for all values in this interval, and it is crucial to note that the interval may or may not depend on ε. While in saying that |a - b| < ε for all ε > 0, we are saying that two constants are arbitrarily close to each other. I apologize for my lack of LaTex, but I honestly have no clue how to use it. I hope I helped!
Yes, you have helped me. It is much clearer know.
Thanks,
Thread Tools
| | | |
|-------------------------------------------------------------|----------------------------|---------|
| Similar Threads for: Calculus, Definition of limit, Concept | | |
| Thread | Forum | Replies |
| | General Physics | 2 |
| | Calculus & Beyond Homework | 3 |
| | Calculus | 6 |
| | Calculus | 17 |
| | General Discussion | 57 |
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 52, "mathjax_display_tex": 0, "mathjax_asciimath": 11, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9363399147987366, "perplexity_flag": "middle"}
|
http://scicomp.stackexchange.com/questions/tagged/symbolic-computation
|
# Tagged Questions
The symbolic-computation tag has no wiki summary.
0answers
57 views
### What libraries are available for solving problems at the college physics level?
Are there any libraries available for solving problems at the level of college physics? In particular, I'm interested in libraries for general purpose programming languages that can calculate results ...
1answer
92 views
### How easy is it to combine symbolic and numeric computation in Matlab?
CS Beta people: I have been doing some multiple integrals with a combination of symbolic and numerical integration (because symbolic answers have not always been possible). I have been using ...
1answer
73 views
### Simple substitutions using symbolic computing in MATLAB
Suppose I have the following MATLAB code. syms a b c1 c2 c1 = a + b + pi*b c2 = a + b + 0.5*b Then c1 gets evaluated to ...
5answers
427 views
### Symbolic solution of a system of 7 nonlinear equations
I've got a system of ordinary differential equations - 7 equations, and ~30 parameters governing their behavior as part of a mathematical model of disease transmission. I'd like to find the steady ...
7answers
530 views
### Is there any open-source or easy-to-access software that can simplify algebraic expressions like $x^{2}+2x+3, x=\sqrt{2}t-1$?
I always calculate things by hand, but now my comrades are getting nasty and making a lot of repetitive exercises involving just plugging things in like the expression above. I am particularly ...
2answers
241 views
### Automatic generation of integration points and weights for triangles and tetrahedra
Usually one would consult a paper or book to find integration points and weights for unit triangle and tetrahedra. I am looking for a method to automatically compute such points and weights. The ...
6answers
808 views
### Symbolic software packages for Matrix expressions?
We know that $\mathbf A$ is symmetric and positive-definite. We know that $\mathbf B$ is orthogonal: Question: is $\mathbf B \cdot\mathbf A \cdot\mathbf B^\top$ symmetric and positive-definite? ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9108036756515503, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/6376/why-forgetful-functors-usually-have-left-adjoint/6403
|
Why forgetful functors usually have LEFT adjoint?
Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
for forgetful functors, we can usually find their left adjoint as some "free objects", e.g. the forgetful functor: AbGp -> Set, its left adjoint sends a set to the "free ab. gp gen. by it". This happens even in some non-trivial cases. So my question is, why these happen? i.e. why that a functor forgets some structure (in certain cases) implies that they have a left adjoint? Thanks.
-
7 Answers
Forgetful functors usually have a left adjoint because they usually preserve limits. For example, the underlying set of the direct product of two groups is the direct product of the underlying sets, and similarly for equalizers (that gives you all finite limits).
However, functors that preserve limits don't have to have left adjoints, because once in a while what you want to do to construct a free object results in a proper class. An example is complete lattices. Freyd's Adjoint Functor Theorem gives a necessary and sufficient condition for a limit-preserving functor to have a left adjoint. The proof and related results is discussed in section 1.9 of Toposes, Triples and Theories.
-
1
Forgetful functors usually preserve limits because they are usually representable. – KotelKanim Oct 22 2011 at 16:59
You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
Not every forgetful functor has a left adjoint. If you take comodules over a k-coalgebra A, the forgetful functor to k-modules has a right adjoint, not a left adjoint. So you have cofree comodules, of the form A tensor_k V, but in general no free comodules. So I would argue that forgetful functors will have left adjoints when your structures are defined by maps into your object (like A tensor M into M), but right adjoints when your structures are defined by maps out of your object (like M into A tensor M). This is a universal algebra-style answer, not perhaps so helpful for non-algebraic situations.
But let me point out that the forgetful functor often has both adjoints in non-algebraic situations. For example, the forgetful functor from topological spaces to sets has left adjoint defined by the discrete topology, and right adjoint defined by the indiscrete topology.
-
8
I guess a more opaque way to answer the question along these lines is to say that forgetful functors usually have left adjoints because we (meaning mathematicians in general) usually work with monadic algebraic structures. – S. Carnahan♦ Nov 21 2009 at 20:49
Many standard examples of algebraic "forgetful" functors $U : C \to \mathrm{Set}$ have the following form:
• $C$ is a presentable category, i.e., there is a small category $I$ and a collection $S$ of cones of $I$ such that $C$ is equivalent to the full subcategory of functors $I \to \mathrm{Set}$ consisting of those functors which send the cones of $S$ to limit diagrams in $\mathrm{Set}$;
• $U$ is evaluation at an object $u \in I$.
For example, if $C$ is the category of monoids, take $I = \Delta^{\mathrm{op}}$ so that functors $I \to \mathrm{Set}$ are simplicial sets and choose $S$ so that the objects of $C$ are those simplicial sets $X$ such that $X_0 = \ast$ and $X_{i+j} \to X_{i} \times X_{j}$ is an isomorphism (where this map is induced by the inclusions of the first $i+1$ and last $j+1$ elements of an ordered $i+j+1$ element set). The object $u$ is the two-element set $[1]$. (One actually needs only the full subcategory of $\Delta^\mathrm{op}$ on the objects $[0]$, $[1]$, $[2]$, $[3]$, and the cones involving these objects; expanding this gives a possibly more familiar presentation of the notion of monoid.)
In these cases (which include models of any essentially algebraic theory) the existence of a left adjoint is guaranteed by the theory of presentable categories. Indeed, the inclusion of $C$ into $\mathrm{Set}^I$ has a left adjoint which we compose with the constant diagram functor $\mathrm{Set} \to \mathrm{Set}^I$ to obtain a left adjoint to $U$. See Adamek and Rosicky, Locally presentable and accessible categories, for an excellent introduction to the subject.
-
The term "forgetful functor" is not perfectly well defined. Depending on context, I've seen it defined as "faithful functor with a left adjoint", because most notions of "Forget" should have a corresponding notion of "Free".
Edit: I should emphasize that there are many notions of "forgetful functor", and it is not a canonically-defined word. J. Baez has thought about what the right notions of "forget" are. For him, a functor "forgets at most properties" if it is full and faithful, "forgets at most structure" if it is faithful, and "forgets at most stuff" if it is (empty list). I think it is not necessarily true that a full and faithful functor has a left adjoint.
-
1
agreed.. so far, I haven't seen any reference that really rigorously defines what is category-therotically meant by a "forgetful functor". – Jose Capco Nov 21 2009 at 21:24
1
I don't think I've ever seen it defined as such, but it has surely been strongly hinted at. – alekzander Nov 23 2009 at 1:47
Many forgetful functors in an algebraic setting are representable; for example, the forgetful functor $\text{Ab} \to \text{Set}$ is represented by $(\mathbb{Z}, 1)$. Functors with left adjoints are also representable, and sometimes the converse holds.
-
the answer of reid barton is perfect and general. for categories of $\tau$-algebras, where $\tau$ is a type (thus consisting of function symbols and identities), we also have the following: let $\tau \to \sigma$ be a homomorphism of types, then there is a functor from $\sigma$-algebras to $\tau$-algebras and it has a left adjoint. this can be proved using freyd's adjoint functor theorem. this yields tons of examples:
• the forgetful functor of $\tau$-algebras to sets has a left adjoint. in particular, free monoids, groups, modules, lie algebras etc. exist.
• if $R \to S$ is a ring homomorphism, $S-Alg \to R-Alg$ has a left adjoint.
• every ring has a free unital ring.
• forgetful functors of algebra may be seen as the functors from $\sigma$-algebras to $\tau$-algebras, where $\tau \subseteq \sigma$ is a subtype; these have a left-adjoint. for example, from groups to monoids, from rings to abelian groups, from R-algebras to R-modules, and so on.
basically, it's all about representing functors, and since subobjects of Set are well-behaved in a certain sense, freyd's adjoint functor theorem tells you that you can do everything you want.
-
I think the "left adjoint"/"right adjoint" thing is mostly just terminology, although probably now that I've said that someone's going to point me to a n-Cafe post or a SBS where they explain that it's deeply tied to TQFT or something.
I asked a question a couple weeks back, the answers to which helped me understand why so many functors did have left adjoints.
-
In what way do you mean "mostly just terminology"? As Mark Hovey's topological example (an exercise in MacLane, also) points out, one may have both a left and a right adjoint which are clearly distinct. – alekzander Nov 23 2009 at 1:46
Erm, I meant that, as far as I know, the reason we call left adjoints left and right adjoints right is basically entirely historical and has much more to do with arbitrary mathematical notation than any real difference between them. (I mean, yes, they're different, but not that different.) – Harrison Brown Nov 23 2009 at 3:05
If you're just talking about the adjectives left/right, well .. that's terminology, sure, since you're talking about .. terms. There was some usage historically of "adjoint"/"coadjoint", but it's unnatural AFAIAC to say that one is in the "correct" (I guess I have to avoid the word "right" here) direction. However, we do have "unit"/"counit", so it wouldn't be a stretch to associate "adjoint" with "unit", and co-. I think this isn't done mostly because it was historically not used consistently this way (or that's what I gleaned from a comment in MacLane). – alekzander Nov 23 2009 at 9:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9366965293884277, "perplexity_flag": "head"}
|
http://unapologetic.wordpress.com/2007/03/23/rings/?like=1&_wpnonce=76a2d9cd23
|
# The Unapologetic Mathematician
## Rings
Okay, I know I’ve been doing a lot more high-level stuff this week because of the $E_8$ thing, but it’s getting about time to break some new ground.
A ring is another very well-known kind of mathematical structure, and we’re going to build it from parts we already know about. First we start with an abelian group, writing this group operation as $+$. Of course that means we have an identity element ${}0$, and inverses (negatives).
To this base we’re going to add a semigroup structure. That is, we can also “multiply” elements of the ring by using the semigroup structure, and I’ll write this as we usually write multiplication in algebra. Often the semigroup will actually be a monoid — there will be an identity element $1$. We call this a “ring with unit” or a “unital ring”. Some authors only ever use rings with units, and there are good cases to be made on each side.
Of course, it’s one thing to just have these two structures floating around. It’s another thing entirely to make them interact. So I’ll add one more rule to make them play nicely together:
$(a+b)(c+d) = ac+ad+bc+bd$
This is the familiar distributive law from high school algebra.
Notice that I’m not assuming the multiplication in a ring to be invertible. In fact, a lot of interesting structure comes from elements that have no multiplicative inverse. I’m also not assuming that the multiplication is commutative. If it is, we say the ring is commutative.
The fundamental example of a ring is the integers $\mathbb{Z}$. I’ll soon show its ring structure in my thread of posts directly about them. Actually, the integers have a lot of special properties we’ll talk about in more detail. The whole area of number theory basically grew out of studying this ring, and much of ring theory is an attempt to generalize those properties.
## 3 Comments »
1. [...] travel day. As I head back to New Haven, I think I’ll leave a few basic theorems about rings that can be shown pretty much straight from the definitions. The first three hold in any ring, [...]
Pingback by | March 24, 2007 | Reply
2. [...] an algebra ? This can only make sense if we allow rings without unit, which I only really mentioned back when I first defined a ring. This is because there’s only one endomorphism of the zero-dimensional vector space at all! [...]
Pingback by | December 8, 2008 | Reply
3. [...] A “Boolean ring” is a commutative ring with the additional property that each and every element is idempotent. That is, for any we have [...]
Pingback by | August 4, 2010 | Reply
« Previous | Next »
## About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 6, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9224469661712646, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/10406?sort=oldest
|
## General construction for internal hom in a presheaf category
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
I was reading about the internal hom functor for simplicial sets, and the construction is very "localized" (nothing to do with localization, just the english word). It seems like there should be a general construction for any presheaf category that would be similar to this. That is, an actual construction, not just the existence of the functor provided by the theorem that every Grothendieck topos is a Lawvere topos and therefore cartesian closed. Does such a construction exist, and if so, can you give a reference?
-
## 1 Answer
The formula for the internal hom between presheaves $F\colon C^{op}\to Set$ and $G\colon C^{op}\to Set$ can be derived from the Yoneda lemma. Given $c\in C$, we know that we must have $G^F(c) \cong Hom(y(c), G^F) \cong Hom(y(c) \times F, G)$ so we can simply define $G^F(c) = Hom(y(c) \times F, G)$, which is evidently a presheaf on $C$. The isomorphism $Hom(H,G^F)\cong Hom(H\times F, G)$ for non-representable $H$ then follows from the fact that every presheaf $H$ is canonically a colimit of representables, and $Hom(-,G^F)$ and $Hom(-\times F,G)$ both preserve colimits (the former by definition of colimits, and the latter by that and since limits and colimits in presheaf categories are computed pointwise and products in $Set$ preserve colimits).
This is Proposition I.6.1 in "Sheaves in geometry and logic."
-
Accepted and +1, but is there any way to obtain a more concrete description? – Harry Gindi Jan 1 2010 at 22:28
2
You can spell out the definition of homs in presheaf categories in terms of an end in Set, if you like: $G^F(c) = Hom(y(c)\times F, G) = \int_{c'} Hom(C(c,c')\times F(c'), G(c'))$. You could then invoke the construction of limits in Set, so that $G^F(c)$ is the set of tuples $(h_{c'})_{c'\in C}$, where $h_{c'}\colon C(c,c')\times F(c')\to G(c')$, such that for any $\gamma\colon c'\to c''$ in $C$ we have $G(\gamma) \circ h_{c'} = h_{c''} \circ (C(c,\gamma) \times F(\gamma))$. – Mike Shulman Jan 1 2010 at 22:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9372878074645996, "perplexity_flag": "head"}
|
http://mathhelpforum.com/advanced-algebra/139985-matrix-representation-print.html
|
# Matrix Representation
Printable View
• April 18th 2010, 07:49 PM
ktcyper03
Matrix Representation
Let X = (1, 2x - 1) be a basis for R2. Suppose that T is a linear transformation such that T(x) = x + 1/2 and T(1) = 2x + 1. Find RxxT the matrix representation of T with respect to the basis X.
Does this mean-
(1)(A) = T(1)
(2x - 1)(B) = T(x)
and the resulting matrix representation, RxxT is made up of A and B? Thanks.
• April 19th 2010, 03:52 AM
HallsofIvy
I presume you mean that {1, 2x-1} is a basis for the space of linear polynomials in x. That is NOT usually written as "R2".
But I really don't understand what you mean by
Quote:
(1)(A) = T(1)
(2x - 1)(B) = T(x)
and the resulting matrix representation, RxxT is made up of A and B
A simple way of finding the matrix representation of a linear transformation in a given basis is to apply the transformation to each basis element in turn, writing the result as a linear combination of vectors in that basis. The coefficients in that linear combination give the columns of the basis.
Here, T(1)= 2x+ 1= a(1)+ b(2x-1)= 2bx+ a- b so we must have 2b= 2 and a- b= 1. That is, b= 1, and then a= 2. The first column of the matrix representation is $\begin{bmatrix}2 \\ 1\end{bmatrix}$.
T(2x- 1)= 2T(x)- T(1)= 2(x+ 1/2)- (2x+ 1)= 2x- 1- 2x+ 1= 0. The second column of the matrix representation is $\begin{bmatrix} 0 \\ 0\end{bmatrix}$.
The matrix representation is $\begin{bmatrix}2 & 0 \\ 1 & 0\end{bmatrix}$.
All times are GMT -8. The time now is 02:26 AM.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8974312543869019, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-applied-math/152080-integral-exp-m-sech-1-m-m-dm.html
|
# Thread:
1. ## Integral of exp(-a*m)*sech^{1/m}(a*m) dm
Hi,
I would like at least some hints to compute the following integral :
$\int_0^\infty exp(-ax) \text{cosh}^{-1/x}(ax) dx$
As a known result we have :
$\lim_{x->0} \text{cosh}^{-1/x}(ax) = 1$
So the function should behave well and be integrable but mathematica gives no anwser for it...
Can somebody help ?
Thanks
Alexis
2. Originally Posted by AlexisM
Hi,
I would like at least some hints to compute the following integral :
$\int_0^\infty exp(-a*x) \text{cosh}^{-1/x}(a*x) dx$
As a known result we have :
$\lim_{x->0} \text{cosh}^{-1/x}(a*x) = 1$
So the function should behave well and be integrable but mathematica gives no anwser for it...
Can somebody help ?
Thanks
Alexis
Can you provide some background, why you need this integral (the problem it is part of ...) what if any level/ quality of approximation would be acceptable for the integral, ...
CB
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9119946956634521, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/54835/double-pendulum
|
# Double Pendulum
The equations of motions for the double pendulum is given by
$$\dot{\theta_1} = \frac{6}{ml^2}\frac{2p_{\theta1} - 3\cos(\theta_1 - \theta_2)p_{\theta2}}{16 - 9\cos^2(\theta_1 - \theta_2)}$$
and similarly for the other pendulum. In respect to what does the change in angle for the first pendulum refer to? Is it with respect to time? So that $\dot{\theta_1} = \frac{d\theta}{dt}$?
-
## 2 Answers
Yes. The point always refers to the derivative with respect to time.
-
The dot over a function or variable Isaac Newton's notation for a derivative; in physics it always means a derivative with respect to time.
Variables with two or three dots, like $\ddot{\theta}$ and $\dddot{\theta}$, represent second and third time derivatives respectively.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.922880232334137, "perplexity_flag": "head"}
|
http://motls.blogspot.com/2011/07/theory-vs-phenomenology-in-days-of.html?m=1
|
# The Reference Frame
Our stringy Universe from a conservative viewpoint
## Monday, July 25, 2011
### Theory vs phenomenology in the days of experimental reckoning
Exactly twenty years ago, in August 1991, Paul Ginsparg launched the arXiv - which used to be known under the main URL xxx.lanl.gov for many years.
The first hep-th paper would be a black hole paper by Horne and Horowitz. However, Paul Ginsparg himself and Sheldon Glashow succeeded in constructing a time machine in 1994, returned to 1986 and submitted the essay Desperately Seeking Superstrings in 1986, five years before Ginsparg created the arXiv and 8 years before Al Gore invented the Internet (1994) and Ginsparg invented the time machine.
Congratulations, Paul (both for the birthday and the time machine)!
The only decent celebration of the arXiv in the mainstream media, as far as I know, was published 10 years ago in the New York Times. Your humble correspondent couldn't avoid some humble memories haha. ;-)
Using the modern terminology, Paul's background is one of a clear and serious theorist. Much like others, he was able to see two loosely separated, partly overlapping - but nevertheless distinct - groups of non-experimenters in high-energy physics.
It seemed reasonable for him to separate the archives of preprints dedicated to these groups because many people only wanted to follow one of the groups. He had to invent clever names for the two archives. So he reserved the word "theory" for "more formal theorists" such as himself - even though the other group could also call itself "theorists" if there had been no Ginsparg - while he invented a new funny name for the other group, "phenomenologists".
Phenomenology: terminology
Much like the term "The Big Bang", the word "phenomenology" was meant to be derogatory or humiliating. It was a friendly way to make fun out of the other group. Why was it funny? A science-oriented person may think that "phenomenology" is related to "phenomena" so it is surely focused on observations so the word must be associated with the objective, cold, hard scientific way of looking at the real world.
However, if you check an encyclopedia, you will find out that the term "phenomenology" had been known and widely used in philosophy and psychology. Despite its seemingly "objective" flavor, phenomenology is actually all about the focus on subjective perceptions, consciousness, and all the philosophical mumbo jumbo that the philosophers actively want to be confused about. ;-)
But during the years, the term "phenomenology", when used by particle physicists, has lost the funny connotations, much like "the Big Bang". It's been used as a serious word.
Differences between hep-th and hep-ph
So the theorists such as Ginsparg himself would be given the hep-th archive - "high-energy physics, theory" - while the phenomenologists would be sending papers to the hep-ph archive - "high-energy physics, phenomenology". I still haven't explained what the difference is.
Well, phenomenologists are not experimenters: their main devices are pen, paper, and computer rather than the experimental apparatuses. In this sense, they are "generalized theorists". But they still think that the main purpose of physics is to study the phenomena that were recently observed or that could perhaps be observed in a very near future. So they think of themselves as the people who constantly interact with the experimenters.
One also uses the term "bottom-up approach". High-energy physics is a modern name for particle physics which tries to study the elementary particles that everything in the world is made out of. The main content of the "high-energy" terminology is that the higher energies you're able to squeeze into a particle, the deeper it may penetrate into another particle, and the shorter distance scales you're able to resolve.
So the high energy is an important "currency" in our knowledge. We need the energy of the collisions at the accelerators to be high if we want to learn very new things. The phenomena in which the energy per particle is low belong to the "low-energy physics" which is known from day-to-day life and that is studied by more ordinary disciplines of physical sciences such as the condensed matter physics.
On the other hand, the phenomena in which the energy per particle is "high" - and the boundary between "low" and "high" is flexible, of course: you may think that 1 TeV is the current boundary (but "low" and "high" Higgs mass are separated by 135 GeV and there are other special contexts in which the boundary is elsewhere) - are those potentially new phenomena that we haven't seen so far and where there is some room for adjustments of our theories or completely new discoveries.
The bottom-up approach is the approach that assumes that the optimum strategy (and, if you're a hardcore phenomenologist, the only strategy) to increase our knowledge is to start with the solid low-energy physics we know from many previous experiments and to gradually increase the energy. Experimenters should be looking at experiments with a gradually increasing energy; theorists (well, phenomenologists) should try to guess what will be seen tomorrow. In this way, we may be getting to ever higher energies and ever shorter distances.
On the other hand, the hep-th "formal theorists" are mostly "top-down theorists". The top-down approach has always assumed - and now acknowledges - that there are certain basic things we may learn about the real world that are true even though the corresponding energies are much higher than the "frontier" where the particle colliders have been able to get.
In particular, general relativity shows that if you collide particles whose energy (vastly) exceeds the Planck energy, they will create a black hole. So even though this is a very high-energy scattering, with much higher energies than what the LHC may achieve, we actually do know what happens. So there is another known "island" or "continent" at the top - with so high energies that another description, general relativity with black holes, becomes applicable. Even the classical general relativity is the best approximation in the limit of very high energies.
Now, from this new island, collisions with very high energies, we may try to dig a tunnel towards particle physics that can be studied by the colliders. Because we're going from the (known) regime at high energies down to the (unknown) regime at intermediate and then lower but already inaccessible energies (by the colliders), the approach is called the "top-down approach".
Aspects of hep-th research
Those comments about black holes wouldn't tell us much about particle physics in general. But of course, the main key insight is that people could have figured out that the reconciliation of the black-hole-dominated high-energy scattering with the low-energy physics described by quantum field theories (and the Standard Model as the key quantum field theory for the phenomena we may observe today) is a very hard, constraining problem.
And what is more important is that physics has found a consistent solution to the problem - moreover, one that seems to be unique. It's still called string theory. It makes perfect sense and it is the only "big surviving theme" in the hep-th approach to particle physics that actually offers some new insights about genuinely high-energy physics. Other approaches are either applications of low-energy effective quantum field theory - which includes things such as Hawking's amazing semiclassical analysis of the black hole radiation - or they have been proved wrong or they have been fads that could only look promising but never led to any convincing or conclusive results (and not even interesting mathematics with lots of consistency checks and/or surprising exact pattern and relations).
String theory has been and still is the most important theme underlying the hep-th archive and it is fair to say that a large majority of the valuable results published as hep-th papers during the two decades has depended on string theory in one way or another. The list of successes and fundamental breakthroughs in string theory during the recent 30 or 40 years - and even during the recent 10 or 15 years - is extensive. All those things have radically transformed the ways how we can think - and how we do think - about all kinds of questions and those insights won't be unlearned. These insights have affected not only "unification in physics" but even many other disciplines of physical sciences, including superconductivity, heavy ion physics, fluid dynamics, and others.
Aspects of hep-ph research
On the other hand, the hep-ph research has been motivated by the contact with experiments that are doable in the near future. It wasn't appreciated but it was always the case that this has been an extremely risky strategy, especially if many people invest millions of their man hours to this research - simply because there didn't have to be any new physics "behind the corner".
Even if there were new physics, it could confirm at most 1 model in the literature. But the hep-ph archive is filled with hundreds or thousands of models of what could be seen right behind the corner. It's true that out of 1,000 distinct models of new physics below 1 TeV, at least 999 of them had to be wrong - although many people apparently failed to realize or appreciate this trivial observation which reduces the expectation value of the value of a paper by 3 orders of magnitude. And the newest LHC data are increasingly pointing towards the conclusion that the right number is not 999 but 1,000 so it is more than 3 orders of magnitude. ;-)
Extinction of models
Some people seem to be shocked. (Our commenter "M" is not among them because he was being ironical while he largely agrees with my attitude.) Why? The hep-ph literature is full of papers that study various kinds of fireworks below 1 TeV so "M" and apparently others seem to be convinced that Nature has to agree with the "consensus of the papers" as well. But the LHC increasingly clearly and conservatively says something different: there don't seem to be any new fireworks below 1 TeV. Physics of the Standard Model works pretty much flawlessly and the Higgs sector is the main portion of the physical laws whose existence seems almost inevitable and whose details are still being awaited by the particle physicists.
The extermination of the models is fair and color-blind. Think about any random buzzword - leptoquarks, W' bosons, Z' bosons, preons, very low-mass superpartners (clearly, the most convincing representative for new physics in the past as well as today), light black holes, fourth generation quarks, and so on, and so on. All these things and many others - as long as their proponents linked them to a sub-TeV energy scale - are approaching extinction. People may be unhappy but a scientist should ultimately be happy about any truth he or she learns about Nature.
So Nature begs to differ: it doesn't want to join a "consensus" with the hep-ph arXiv. The only hep-ph papers it agrees with are those that modestly studied the Standard Model which probably looks boring to other phenomenologists - and sometimes to the researchers of the Standard Model itself.
The statement that Nature doesn't give a damn about the random distribution of some papers written by a particular group of humans should be obvious and understandable. But I want to make one more related point. Even if the LHC found and confirmed one of the new sub-TeV models physics, it wouldn't mean that Nature would join the hep-ph consensus. Why? Because there's simply no consensus among the models on the hep-ph arXiv to start with. The models are inequivalent so they disagree with each other.
You may say that many of them agree when it comes to the question whether the Standard Model should be superseded or extended by new physics below 1 TeV. In this particular war, the Standard Model faces a diverse group of foes :-) and the proponents of any of these foes could "unify" to fight against the Standard model.
But this logic is irrational because the question - whether the Standard Model is right up to a TeV - is completely a cherry-picked, contrived, and unnatural one. There's no reason to introduce the polarizations in which the Standard Model stands against everyone else. One could also ask different questions in which the Standard Model would have some allies with the same answer - or it would even be a part of a majority facing a smaller group of foes.
The only special thing about the Standard Model is that it's the "minimal" theory (when it comes to counting of the fields etc.) that is compatible with data we had known before the LHC. And indeed, so far it looks like that this minimal theory is the most accurate one even up to a TeV or so: any qualitative change with a low enough mass seems to make it incompatible with the observations. Trying to extrapolate your known theories as far as you can is the obvious strategy you should try - and you should only give up when you discover some inconsistencies (internal or with the observations). This is pretty much a strategy of the top-down hep-th theorists and indeed, the LHC data seem to support that this strategy is wise.
The "null" sub-TeV data that keep on coming from the LHC kill not only individual models from the hep-ph literature that wanted to offer "spectacular" predictions right behind the corner and to make the authors of these predictions famous in the case that these predictions are confirmed. The "null" sub-TeV LHC data kill and exclude whole philosophies, whole ways of thinking.
For example, particle physics has been talking about the hierarchy problem - why the Higgs is so much lighter than the Planck scale even though it could be heavier and quantum corrections naturally want to make these two masses very similar. And some phenomenologists extended it to the "little hierarchy problem" which is effectively the claim that there can't even be a small gap - like one order of magnitude - on the energy scale between the Higgs mass and the new physics that protects its smallness.
It's becoming increasingly clear that the statement that the "little hierarchy shouldn't be allowed in Nature" is wrong: Nature doesn't respect this law. And of course, as the LHC is continuing to push the lower limit on the energy of new physics towards higher values, it is making not only the little hierarchy problem but even the normal hierarchy problem more questionable.
So the LHC has the capacity to de facto exclude the whole philosophy of the "little hierarchy problem" and many other propagandist paradigms whose purpose was to irrationally justify the phenomenologists' sensationalism, their assumption that Nature was obliged to offer us new physics right behind the corner.
I am convinced - and, unless new breakthroughs will occur, will be convinced - that the naturalness arguments are fundamentally right. But one must be careful and avoid its versions that are not really robust and that resemble black magic or numerology. The little hierarchy problem is the statement that Nature doesn't want to cancel things with the relative accuracy of 1/50 or higher because it's "unlikely" that this would occur by chance.
Well, its odds could be calculated to be 2% in some straightforward way but 2% is extremely far from zero. It's just extremely dangerous to build your world view on such extremely weak arguments of a statistical character, especially if there's no rational justification of the a priori probability distribution that you have used.
Many people would be expecting that the LHC would be producing lots of new data incompatible with the Standard Model and there would be lots of interactions between the hep-ph archive - and hep-ph researchers - on one side and the experimenters on the other side. However, the final outcome seems to be that there is no interaction at all. Despite the phenomenologists' wishful thinking and their sometimes hysterical effort to be as close to the experimental frontier as you can get, their work in the recent decades seems to be irrelevant for the observations at the LHC.
(David Gross's "Oskar Klein and Gauge Theory" is a nice historical example showing that even the big shots of physics of the 1930s such as Heisenberg suffered from the disease to expect that everything we know breaks down behind the corner - including quantum mechanics. They would expect the postulates of QM themselves to break at the Compton wavelength of the electron, and so forth. It's crazy, it's been shown wrong but physicists still suffered from this disease even in the 21st century.)
The bulk of the model building work has been irrelevant for deeper and mathematical questions as well because most of the hep-ph research has been mathematically shallow.
Hep-th is different
The situation is very different in formal theory because formal theory wasn't developed to address the experiments that would be done next year or in the next 5 years. Hep-th theory is a successful effort to extract qualitatively new and sometimes quantitative and accurate insights about Nature from a careful mathematical reconciliation and analysis of the empirical insights that were being accumulated in the centuries and millenniums in which the humans observed Nature.
No hep-th theorist has ever claimed or boasted that the bulk of his work had too much in common with the data produced by the next-generation collider so of course, the hep-th work isn't really affected by the "null" results from the LHC. Everyone who has at least a clue about modern physics - aggressive crackpot fans of their fellow crackpot Peter Woit are surely not among them - knows that the majority of the string-theory phenomena that are being investigated is associated with the Planck scale, $$10^{19} {\rm GeV}$$, which is clearly not directly accessible by doable experiments. Nevertheless, this research is tightly connected with observations (made decades or centuries ago) because the phenomena above this Planck scale are dictated by general relativity.
Many theorists and many string theorists - but not all - would feel more excited if the LHC were generating totally new phenomena and their phenomenological friends would be really thrilled. However, it's still true that the theorists don't care as much as the phenomenologists do.
What I really want to say is that most of the phenomenological work has been a waste of human resources and time. Instead of producing 1,000 models that could be relevant for the sub-TeV observations, those people could have just waited for a few years and let Nature speak. And it seems that Nature has spoken - and it may still speak in an ever clearer language - and so far, the answer is that the right model of these phenomena is called the Standard Model.
If you speculate about future observations, you're always speculating. If you make a guess, it is a guess. You can't force Nature to produce some phenomena of a certain kind just by a wishful thinking. It doesn't work this way and it can't work in this way. Nature does whatever She likes to do. In effect, bottom-up phenomenologists have never had any rational reason to think that they're any closer to the future observable phenomena than the top-down theorists.
And because the "apparent proximity" to experiments was the only reason for them to believe that they knew what they were doing, they really didn't have any rational reason to believe that they were on the right track.
Meanwhile, the top-down theorists realized that they didn't have any qualitatively new data and they didn't know what the next qualitatively new data could have been if any. In fact, a defining feature of the top-down approach is that one does expect that the known observed theory with small additions - such as supersymmetry at a few TeVs - works all the way up to extremely high energy scales such as the GUT scale, near the Planck scale, and this high-energy scale is where most of the interesting things that should be studied takes place. You introduce new physics at the intermediate scales only if you are forced to by the consistency of the high-energy and low-energy phenomena - you're not supposed to do such things just for fun.
Indeed, the GUT scale is unobservable directly but the new direct observations are simply not the only way how to learn new things about Nature. A more mathematically interconnected and accurate analysis of the known observations is relevant, too.
So during those decades, hep-ph model builders have constructed lots of rather shallow models that were meant to be interesting because of the hope that their new phenomena could have been observed soon. If you just place these new phenomena to 50,000 TeV, they're not too interesting. It's really because the sensation doesn't come from some extraordinary features of these models per se - but from the cheap hype connected with the (probably wrong) expectations that these new phenomena could be seen soon.
So if the hep-ph archive had stocks, the price of the stock probably dropped by 80% or more during the recent week and the decline may continue.
On the other hand, the hep-th archive was largely unaffected simply because the interactions with the next collider have never been the driving force of the hep-th research. Hep-th theorists never tried to speculate about things they didn't know and that could have had millions of possible answers - including the most obvious answer (the Standard Model) - and instead, they were working hard on aspects of physics that they actually had a chance to pinpoint by some clever arguments and hard mathematical work.
It's ironic but the most valuable parts of the hep-ph research after the "extinction" are those things that overlap with the hep-th research. In the future, phenomenologists may continue to play and use the tools and new ideas that were inspired by string theory or overlap with it - extra dimensions, gauge theories with complicated quiver diagrams, and a few other major examples. But all the analyses of the detailed models that depended on the new physics' energy being almost equal to the electroweak scale - and indeed, the "new physics" part of the hep-ph archive is pretty much dominated by these things - are becoming worthless at a dramatic rate. The probability that a random physicist would be going to read one of those papers dropped by an order of magnitude between the last week and today.
Meanwhile, string theory has produced fascinating insights that didn't go away and will almost certainly never go away. A continued confirmation of the Standard Model by the LHC will lead many people to rethink what is well-motivated in research and what is not. I sincerely hope that the almost everyone will start to appreciate that making bets on the expectation that a particular new phenomenon will be observed next year - even though you don't have a glimpse of a proof - is not necessarily the wisest way to organize your time and priorities.
We know quite a lot about the Universe but we still need to know how those partial insights fit together. So I hope that instead of shifting the energy scales from 200 GeV to 1,400 GeV and continuing in random guessing, many phenomenologists will buy some string theory textbooks and begin to think about the Universe at a slightly deeper and less sensationalist level.
And that's the memo.
Bus
There is some deliberately propagated confusion if you want me not to use the word "lies" among the hacks on the blogosphere. A notorious, immoral, and professional demagogue named Peter W*it (sorry for the rude language) says that someone is "throwing supersymmetry under the bus". I am surely not throwing supersymmetry under the bus. I am convinced that supersymmetry is a part of Nature and it is broken at a scale that is much lower than the Planck scale. What I have thrown under the bus are models that have been experimentally excluded because that's what scientists do with the results from the experiments. I would do the same with supersymmetry as a principle if there were some powerful evidence it is not valid in Nature.
For example, the LHC has shown that the strongly interacting superpartners (at least gluinos and a majority of squarks) can't be lighter than a TeV. Because I don't see any reason whatsoever to think that these experiments are invalid, it simply implies that there are no gluinos below a TeV. They have to be heavier if they exist.
However, the LHC's "null" findings are not just about supersymmetry - and Peter W*it is just using his usual nasty demagogy when he cherry-picks SUSY.
The LHC has eliminated all major claims that a new physics would appear at this stage, in a color-blind fashion: and be sure that there have been lots of people at various level of competence who have boasted that their theory had "testable predictions at the LHC" and most of those are gone (some of them were gone as soon as they were proposed because they disagreed with things that were known long before the LHC).
W*it himself was among the deeply counterproductive and misguided individuals who were deliberately trying to spread the atmosphere in which people have to claim that they have some testable predictions for the next experiment - a machinery that was shown to produce exactly nothing in the whole world because this is not the right way to do physics at this point. Instead of being ashamed and disappearing from this scientific world where he has no business to verbally oxidate, this despicable man continues to expose his dishonest rants to the Internet.
Some sub-TeV physics may still be seen with a higher integrated luminosity but some sub-TeV physics is already excluded. A scientist doesn't have a problem to immediately learn the lesson. And I have never believed that something has to exist below a TeV to solve the hierarchy problem. 5 TeV is good as well.
My guess that SUSY will ever be found by the LHC are about 50-50 at this point. The recorded luminosity of the LHC is almost exponentially increasing and chances to find new physics are approximately logarithmically spread on the luminosity times energy scale, so the possible discoveries will keep on arriving as a Poisson process. Also, the Higgs sector is yet to be fully analyzed and its structure may provide us with indirect signs that support or disfavor the simplest version of SUSY, the MSSM.
We will see. You may also check What if the LHC doesn't see SUSY.
## Who is Lumo?
Luboš Motl
Pilsen, Czech Republic
View my complete profile
← by date
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9713302850723267, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-algebra/54296-exercise-algebra-thomas-w-hungerford.html
|
# Thread:
1. ## An exercise in Algebra by Thomas W. Hungerford
A commutive ring wih unity has a minmal prime ideal contains all zero divisor and all nonunits are zero divisor
prove
all nonunits are nilpotent
this is an exercise in Algebra by Thomas W. Hungford page 148
2. Originally Posted by chipai
A commutive ring wih unity has a minmal prime ideal which contains all zero divisors and all nonunits are zero divisor. prove that all nonunits are nilpotent.
this is an exercise in Algebra by Thomas W. Hungford page 148 about local ring.
let $R$ be the ring and $P$ the minimal prime. so $R - P$ is exactly the set of units of $R.$ hence any proper ideal of $R$ is contained in $P.$ since $P$ is a minimal prime, it cannot contain any prime ideal
properly. thus $P$ is the unique prime ideal of $R.$ now suppose $x \in R$ is a non-unit. so $x \in P.$ suppose $x$ is not nilpotent. let $\mathcal{S}=\{x^n : \ n = 1, 2, \cdots \}.$ then $0 \notin \mathcal{S},$ because we assumed that $x$ is
not nilpotent. let $\mathcal{C}=\{I \lhd R: \ I \cap \mathcal{S} = \emptyset \},$ which is a non-empty set because $<0> \in \mathcal{C}.$ apply Zorn's lemma to find $Q,$ a maximal element of $\mathcal{C}.$ we claim that $Q$ is prime: suppose $ab \in Q$ but
$a \notin Q, \ b \notin Q.$ then $Q \subset Q + Ra, \ Q \subset Q+Rb.$ therefore $(Q+Ra) \cap \mathcal{S} \neq \emptyset, \ (Q+Rb) \cap \mathcal{S} \neq \emptyset,$ by maximality of $Q.$ so there exists $i,j$ such that $x^i \in Q+Ra, \ x^j \in Q+Rb.$ but then:
$x^{i+j} \in (Q+Ra)(Q+Rb) \subseteq Q+Rab =Q,$ because $ab \in Q.$ thus $Q \cap \mathcal{S} \neq \emptyset,$ which is contradiction! so $Q$ is prime and hence $Q=P.$ thus $P \cap \mathcal{S} = \emptyset,$ which is false because $x \in P \cap \mathcal{S}. \ \ \ \Box$
Remark 1: in general, in a (not even necessarily commutative) ring $R,$ given a multiplicatively closed set $S \subset R$ with $0 \notin S,$ we can always find a prime ideal $Q$ which is contained in $R - S.$
Remark 2: if you already know about the nilradical of a commutative ring, then your problem is trivial: since $P$ is the only prime ideal of $R,$ the nilradical of $R$ is $P.$ but we know that the nilradical
is exactly the set of nilpotent elements of $R.$
Remark 3: the information given in the problem about zero-divisors is only used to conclude that $R-P$ is exactly the set of units of $R.$
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 46, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9314965009689331, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/33861?sort=oldest
|
## Large cardinals
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Looking at the chart of cardinals in Kanamori's book, one realizes that all large cardinals are implied by stronger ones and imply weaker ones. For instance measurable implies Jonsson which implies zero sharp which implies weakly compact which implies Mahlo which implies inaccessible. So it seems as if all these large cardinal assumptions are linearly ordered by consistency strength. Is there a some assumption above ZFC that is not implied by and does not imply any of the linearly ordered large cardinals?
-
5
alephomega -- the large cardinal axioms in that list do not always imply one another, they are just ordered in terms of the consistency strength. See for example the discussion mathoverflow.net/questions/12804/… showing how a non-logician (myself in that case) can get a bit confused about this. Some of those axioms however do imply some other axioms, as explained in Joel Hamkins's answer in that thread. – algori Jul 29 2010 at 23:34
Thanks for the link algori. – alephomega Jul 30 2010 at 0:50
1
In the light of algori's comment, the question should be: Is there some statement A that is consistent with ZFC (which we cannot prove, but believe in) such that Con(ZFC+A) implies Con(ZFC) and Con(ZFC+A) does not imply the consistency of one of the usual large cardinals and is not implied by the consistency of one of the usual large cardinals? – Stefan Geschke Jul 30 2010 at 21:04
@Stefan, thank you for the clarification, this is exactly what I had in mind, I used the wrong formulation. It is possible? Except CH, is there such principles? I don't know if MA is a candidate because I don't know if it follows from some other statement, I only know you can force its truth by iteration with finite support. – alephomega Jul 30 2010 at 23:17
Well, both MA and CH have no consistency strength over ZFC. The consistency of either statement follows from the consistency of ZFC. But you are right, CH and MA are both examples for the kind of statements that you are looking for. That is why I said that you should not look for actual implications but for implications of consistency. – Stefan Geschke Aug 1 2010 at 10:14
## 2 Answers
By the well-known Levy-Solovay theorem, large cardinal properties are preserved under "small" forcing. Therefore CH is an assumption above ZFC which is not settled by large cardinal axioms.
-
I am going to look up this theorem, this sounds great. So large cardinals are absolute for certain types of forcing and for generic extensions, if I understand your answer. – alephomega Jul 30 2010 at 0:48
What exactly do you mean by "above ZFC"? Does not follow from ZFC and is consistent with, or the statement in question is consistent with ZFC and this consistency does not follow from the consistency of CH? – Stefan Geschke Jul 30 2010 at 20:58
@Stefan, yes by "above ZFC", I mean is not provable from ZFC but can be consistent with, which actually prompts another question: are there non trivial statements of cardinal arithmetic which are "in between" CH and ZFC, that is cardinal arithmetic statements provable when you add CH to ZFC but not provable from ZFC alone? Because presumably, CH does not say anything about singular cardinals for instance or even about – alephomega Jul 30 2010 at 23:07
Well, I can't really think of a cardinal arithmetic statement that follows from CH but not from ZFC, except for CH itself, of course. There are interesting theorems like Silver's theorem, though: If $\kappa$ is a singular cardinal of uncountable cofinality and for all $\lambda<\kappa$, $2^\lambda=\lambda^+$ (GCH holds below $\kappa$), then $2^\kappa=\kappa^+$. – Stefan Geschke Aug 1 2010 at 10:04
It is true that Silver's Theorem is an example but it is implied by the GCH below a singular of uncountable cofinality not by CH alone. – alephomega Aug 3 2010 at 7:56
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
Woodin's Ω-conjecture implies that all large cardinal axioms are well ordered under the relation its consistency implies the consistency of''. See his paper in the Notices 2001/7. For this, of course, he does define what a large cardinal is.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9487910270690918, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/statistics/36377-how-large-should-my-survey.html
|
Thread:
1. How large should my survey be?
I need to conduct a 10min long marketing survey with 6 to 7 questions . How large should my sample size be to have results representative of a 1000000-large population group, considering I want to achieve a 95% confidence level?
I have read the formulas:
$n = ({z \sigma \over E })^2$
but how can I reconcile the concept of a questionnaire with the standard deviation of the sample population $\sigma$ and the error E? Is there even a mean or proportion to talk about? The only unambiguous part is z = 1.96 for 95% confidence.
2. Originally Posted by chopet
I need to conduct a 10min long marketing survey with 6 to 7 questions . How large should my sample size be to have results representative of a 1000000-large population group, considering I want to achieve a 95% confidence level?
I have read the formulas:
$n = ({z \sigma \over E })^2$
but how can I reconcile the concept of a questionnaire with the standard deviation of the sample population $\sigma$ and the error E? Is there even a mean or proportion to talk about? The only unambiguous part is z = 1.96 for 95% confidence.
Read this: Determining Sample Size
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8994348645210266, "perplexity_flag": "middle"}
|
http://stats.stackexchange.com/questions/10238/even-more-with-the-kolmogorov-smirnov-test-with-r-software
|
# Even more with the Kolmogorov-Smirnov test with R software
This follows on from the previous question on differences between K-S manual test and K-S test with R.
My frequency sample was
````a=c(0,1,1,4,9).
````
Then the observed sample is
```` obs=c(2,3,4,4,4,4,5,5,5,5,5,5,5,5,5)
````
The expected sample is then
````exp=c(1,1,1,2,2,2,3,3,3,4,4,4,5,5,5)
````
I hope you agree.
First, I use `ks.test`, like another time:
````ks.test(obs,exp)
data: oss and att
D = 0.4667, p-value = 0.07626
````
Then, I use the ks.test the other way:
The expected distribution can be the uniform. Do you agree?
And then:
````ks.test(obs, "punif", 0,5)
data: obs
D = 0.6667, p-value = 3.239e-06
````
### Question
• Why do the two approaches give different results?
-
I added the `multinomial` tag because it will provide useful related links at the right of this page. – whuber♦ May 2 '11 at 16:18
@whuber Thank for this. I agree with you that mine is a discrete distribution but R don't know this and, for that, it is not correct the application of ks.test (for 1 sample) in this situation. I don't sure to undestand the second part.But my question was different and, perhaps, I don't explain very well. In others worlds: – Massimo May 6 '11 at 11:17
... In the first 5 hours I see 15 birds. In the first hour, nothing. In the second and third only 1, in the fourth 9 birds and in the fifth 4 birds. Then the frequency of the birds is (0,1,1,9,4) and the expected is (3,3,3,3,3). In the other question suggested the others samples: obs=c(2,3,4,4,4,4,4,4,4,4,4,5,5,5,5): number 1 there is not because in the first hour no birds. Number 2 and 3 only one time because in the second and third hour there is only one bird.... – Massimo May 6 '11 at 11:38
@whuber Then I need to know (and I ask) if there is a way for use the ks.test for one sample and get the same result. R know the uniform discrete distribution?..... Thank – Massimo May 6 '11 at 11:54
## 1 Answer
The first is a two-sample test; the second is a one-sample test against a continuous distribution. Neither is used correctly:
• The two-sample test views both sets of data as being data, but your "expected sample" is not data, it's a theoretical reference. It is not subject to any variation. The two-sample test thinks that it can vary. That's why the p-value is so large.
• The reference distribution used in the one-sample test is a continuous uniform distribution between 0 and 5. However, these data look discrete: from the way they are given, it appears they can attain only the values 1, 2, ..., 5. Because the one-sample test doesn't know this, its p-value is probably too small.
At least this lets us infer that the correct p-value should lie somewhere between 0.076 and 3.2e-06. Because that doesn't settle the question, let's analyze further.
To get a sense of whether the data (0, 1, 1, 4, 9) differ significantly from the discrete uniform frequencies (3, 3, 3, 3, 3), view the latter as describing a five-sided die. What are the chances that in 0+1+...+9 = 15 tosses of this die that at least one value would appear 9 or more times? The events (1 appears 9 or more times), (2 appears 9 or more times), ..., (5 appears 9 or more times) are mutually exclusive--no two of them can hold at once--so their probabilities add. Because the die is uniform each of these five events has the same probability. We can compute the chance that a 5 comes up 9 or more times by viewing it like tosses of a biased coin: a 5 has a 1/5 chance; a non-5 has a 4/5 chance. The chance of 9 or more 5's therefore equals
$$\binom{15}{9}(1/5)^9(4/5)^6 + \binom{15}{10}(1/5)^{10}(4/5)^5 + \cdots + \binom{15}{15}(1/5)^{15}(1/4)^0.$$
This value is approximately 0.000785. Multiplying by 5 gives .00392 = 0.39%, still quite small. Thus this set of frequencies is unlikely to have arisen through a single experiment in which each of the values has an equal chance of arising.
-
Very elucidating answer for us non-stats guys/gals. – Roman Luštrik May 3 '11 at 7:45
default
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9227630496025085, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/23478?sort=newest
|
## Examples of common false beliefs in mathematics. [closed]
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
The first thing to say is that this is not the same as the question about interesting mathematical mistakes. I am interested about the type of false beliefs that many intelligent people have while they are learning mathematics, but quickly abandon when their mistake is pointed out -- and also in why they have these beliefs. So in a sense I am interested in commonplace mathematical mistakes.
Let me give a couple of examples to show the kind of thing I mean. When teaching complex analysis, I often come across people who do not realize that they have four incompatible beliefs in their heads simultaneously. These are
(i) a bounded entire function is constant; (ii) sin(z) is a bounded function; (iii) sin(z) is defined and analytic everywhere on C; (iv) sin(z) is not a constant function.
Obviously, it is (ii) that is false. I think probably many people visualize the extension of sin(z) to the complex plane as a doubly periodic function, until someone points out that that is complete nonsense.
A second example is the statement that an open dense subset U of R must be the whole of R. The "proof" of this statement is that every point x is arbitrarily close to a point u in U, so when you put a small neighbourhood about u it must contain x.
Since I'm asking for a good list of examples, and since it's more like a psychological question than a mathematical one, I think I'd better make it community wiki. The properties I'd most like from examples are that they are from reasonably advanced mathematics (so I'm less interested in very elementary false statements like $(x+y)^2=x^2+y^2$, even if they are widely believed) and that the reasons they are found plausible are quite varied.
-
46
I have to say this is proving to be one of the more useful CW big-list questions on the site... – Qiaochu Yuan May 6 2010 at 0:55
12
The answers below are truly informative. Big thanks for your question. I have always loved your post here in MO and wordpress. – To be cont'd May 22 2010 at 9:04
13
wouldn't it be great to compile all the nice examples (and some of the most relevant discussion / comments) presented below into a little writeup? that would make for a highly educative and entertaining read. – S. Sra Sep 20 2010 at 12:39
14
It's a thought -- I might consider it. – gowers Oct 4 2010 at 20:13
21
Meta created meta.mathoverflow.net/discussion/1165/… – quid Oct 8 2011 at 14:27
show 8 more comments
## 169 Answers
Just today I came across a mathematician who was under the impression that $\aleph_1$ is defined to be $2^{\aleph_0}$, and therefore that the continuum hypothesis says there is no cardinal between $\aleph_0$ and $\aleph_1$.
In fact, Cantor proved there are no cardinals between $\aleph_0$ and $\aleph_1$. The continuum hypothesis says there are no cardinals between $\aleph_0$ and $2^{\aleph_0}$.
$2^{\aleph_0}$ is the cardinality of the set of all functions from a set of size $\aleph_0$ into a set of size $2$. Equivalently, it is the cardinality of the set of all subsets of a set of size $\aleph_0$, and that is also the cardinality of the set of all real numbers.
$\aleph_1$, on the other hand, is the cardinality of the set of all countable ordinals. (And $\aleph_2$ is the cardinality of the set of all ordinals of cardinality $\le \aleph_1$, and so on, and $\aleph_\omega$ is the next cardinal of well-ordered sets after all $\aleph_n$ for $n$ a finite ordinal, and $\aleph_{\omega+1}$ is the cardinality of the set of all ordinals of cardinality $\le \aleph_\omega$, etc. These definitions go back to Cantor.
-
2
I retract my above question to my suprise it indeed seems to be common. Yet, this answer is a dublicate see an answer of April 16. – quid Oct 6 2011 at 0:50
1
This example already appears on this very page. mathoverflow.net/questions/23478/… – Asaf Karagila Oct 6 2011 at 12:41
1
One of the deficiencies of mathoverflow's software is that there is no easy way to search through the answers already posted. Even knowing that the date was April 16th doesn't help. – Michael Hardy Oct 7 2011 at 20:26
1
@Michael Hardy: You can sort the answers by date by clicking on the "Newest" or "Oldest" tabs instead of the "Votes" tab. – Douglas Zare Oct 19 2011 at 23:03
show 5 more comments
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
In group theory, if `$G_1 \cong G_2$` and `$H_1 \cong H_2$`, then
`$G_1 / H_1 \cong G_2 / H_2$`.
For example, `$\mathbb{Z} / 2\mathbb{Z} \not \cong \mathbb{Z} / \mathbb{Z}$`. The point is that the inclusion of `$H_j$` into `$G_j$` is needed in order to define the quocient.
-
Here's one that bugged me from point set topology: "A subnet of a sequence is a subsequence".
See here for the definitions. Using this one gives a great proof that compactness implies sequential compactness in any topological space:
Let $X$ be a compact space. Let $(x_n)$ be a sequence. Since a sequence is a net and it's a basic theorem of point set topology that in a compact topological space, every net has a convergent subnet (proof in the above link), there is a convergent subnet of the sequence $(x_n)$. Using the above belief, the sequence $(x_n)$ has a convergent subsequence and hence $X$ is sequentially compact.
For a counterexample to this "theorem", consider the compact space $X= \lbrace 0,1 \rbrace ^{[0,1]}$ with $f_n(x)$ the $n$th binary digit of $x$.
-
A possible false belief is that "a maximal Abelian subgroup of a compact connected Lie group is a maximal torus". Think of the $\mathbf Z_2\times\mathbf Z_2$-subgroup of $SO(3)$ given by diagonal matrices with $\pm1$ entries.
-
False belief: Every commuting pair of diagonalizable elements of $PSL(2,\mathbb{C})$ are simultaneously diagonalizable. The truth: I suppose not many people have thought about it, but it surprised me. Look at $$\left(\matrix{i& 0 \cr 0 & -i\cr } \right), \left(\matrix{0& i \cr i & 0\cr } \right).$$
-
A random $k$-coloring of the vertices of a graph $G$ is more likely to be proper than a random $(k-1)$-coloring of the same graph.
(A vertex coloring is proper if no two adjacent vertices are colored identically. In this case, random means uniform among all colorings, or equivalently, that each vertex is i.i.d. colored uniformly from the space of colors.)
-
2
For some graphs $G$ and integers $k$, the opposite. The easiest example is the complete bipartite graph $K_{n,n}$ with $k=3$. The probability a $2$-coloring is proper is about $(1/4)^n$ while the same for a $3$-coloring is about $(2/9)^n$, where I've ignored minor terms like constants. The actual probabilities cross at $n=10$, so as an explicit example, a random $2$-coloring of $K_{10,10}$ is more likely to be proper than a random $3$-coloring. – aorq May 10 2011 at 0:37
6
This seems like a good example of a counterintuitive statement, but to call it a common false belief would mean that there are lots of people who think it's true. The question would probably never have occurred to me it I hadn't seen it here. The false belief that Euclid's proof of the infinitude of primes, on the other hand, actually gets asserted in print by mathematicians---in some cases good ones. – Michael Hardy May 10 2011 at 15:36
show 2 more comments
"Suppose that two features $[x,y]$ from a population $P$ are positively correlated, and we divide $P$ into two subclasses $P_1$, $P_2$. Then, it cannot happen that the respective features ( $[x_1,y1]$ and $[x_2,y_2]$) are negatively correlated in both subclasses
Or more succintly:
"Mixing preserves the correlation sign."
This seems very plausible - almost obvious. But it's false - see Simpon's paradox
-
Coordinates on a manifold do not have an immediate metric meaning. Until becoming familiar with differential geometry one tends to think they do. (Einstein wrote that he took seven years to free himself from this idea.)
For example, linear control theory is for the most part metric with variables in $R^n$. When moving away from linear control theory, variables are represented as coordinates on a manifold. Nevertheless, much of the literature tends to either abandon metric notions altogether, or to keep using an Euclidean metric though it is no longer very useful.
-
Here are mistakes I find surprisingly sharp people make about the weak$^{*}$ topology on the dual of $X,$ where $X$ is a Banach space.
-It is metrizable if $X$ is separable.
-It is locally compact by Banach-Alaoglu.
-The statement $X$ is weak`$^{*}$` dense in the double dual of $X$ proves that the unit ball of $X$ is weak$^{*}$ dense in the unit ball of the double dual of $X.$
The first two are in fact never true if $X$ is infinite dimensional. While both statements in the third claim are true, the second one is significantly stronger, but a lot of people believe you can get it from the first by just "rescaling the elements" to have norm $\leq 1.$ (Although the proof of the statements in the third claim is not hard). The difficulty is that if $X$ is infinite dimensional then for any $\phi$ in the dual of $X,$ there exists a net $\phi_{i}$ in the dual of $X$ with $\|\phi_{i}\|\to \infty$ and $\phi_{i}\to \phi$ weak$^{*},$ so this rescaling trick cannot be uniformly applied. Really these all boil down to the following false belief:
-The dual of $X$ has a non-empty norm bounded weak$^{*}$ open set.
Again when $X$ is infinite dimensional this always fails.
-
1
I think $M(T)$ is not metrizable in the weak$^{*}$ topology, and in fact my claim that this fails for every infinite dimensional Banach space i also think is true. The rough outline of the proof I saw was this: 1. If $X^{*}$ is weak$^{*}$ metrizable, then a first countabliity at the origin argument implies that $X^{*}$ has a translation invariant metric given the weak$^{*}$ topology. 2. One can characterize completeness topologically for translation-invariant metrics, and see directly that if $X^{*}$ had a translation-invariant metric given the weak$^{*}$ topology it would be complete. – Benjamin Hayes Oct 12 2011 at 3:42
show 3 more comments
Hopefully this isn't a repeat answer. False belief: a matrix is positive definite if its determinant is positive.
-
3
Is this really a common(!) false belief? – Martin Brandenburg Oct 3 2011 at 7:23
This is more of a false philosophy than a clear mistake, but nevertheless it is very common:
A compact topological space must be "small" in some sense: it should be second countable or separable or have cardinality $\le 2^{\aleph_0}$, etc.
This is all true for compact metric spaces, but in the general case, Tychonoff's theorem gives plenty of examples of compact spaces which are "huge" in the above sense.
-
"The universal cover of $SL_2(R)$ is a universal central extension" (which I believed until recently...)
-
It took me a bit too long to realize that these two beliefs are contradictory:
• Period 3 $\Rightarrow$ chaos: if a continuous self-map on the interval has a period-3 orbit, then it has orbits of all periods.
• The black dots on each horizontal slice of this picture above $x=a$ show the location of the periodic points of the logistic map $f_a(y) = ay(1-y)$:
You can clearly see a 3-cycle in the light area towards the right; yet we know that if there is a 3-cycle in that slice then there must be a cycle of any period in that slice... so where are they?
(The other cycles are there of course, but they are repelling and hence are not visible. You can see artifacts from these repelling cycles near the period-doubling bifurcations in this picture)
-
If $\alpha>0$ is not an integer, the set of functions $f:[a,b]\rightarrow\mathbb R$ such that $$\sup_{y\ne x}\frac{|f(y)-f(x)|}{|y-x|^\alpha}<+\infty$$ is ${\mathcal C}^\alpha([a,b])$.
False for $\alpha>1$, because this set contains only constant functions.
-
$$2^{\aleph_0} = \aleph_1$$
This is a pet peeve of mine, I'm always surprised at the number of people who think that $\aleph_1$ is defined as $2^{\aleph_0}$ or $|\mathbb{R}|$.
-
Something I was sure about until earlier today:
Suppose $\kappa$ is an $\aleph$ number, then $AC_\kappa$ is equivalent to $W_\kappa$, namely the universe holds that the product of $\kappa$ many sets is non-empty if and only if every cardinality is either of size less than $\kappa$ or has a subset of cardinality $\kappa$.
In fact this is only true if you assume full $AC$, and $(\forall \kappa) AC_\kappa$ doesn't even imply $W_{\aleph_1}$, I was truly shocked.
Furthermore, $W_\kappa$ doesn't even imply $AC_\kappa$ in most cases.
The strongest psychological implication is that most people actually think of the well-ordering principle as a the "correct form" of choice, when it is actually Dependent Choice (limited to $\kappa$, or unbounded) which is the "proper" form, that is $DC_\kappa$ implies both $AC_\kappa$ and $W_\kappa$.
-
6
How common is this misconception? – Thierry Zell Apr 17 2011 at 3:08
1
@Thierry: For the past couple of weeks I spent a lot time considering models without choice, not only I held that misconception but not once anyone corrected me about it - grad students and professors alike. – Asaf Karagila Apr 17 2011 at 6:09
I have heard the following a few times :
"If $f$ is holomorphic on a region $\Omega$ and not one-to-one, then $f'$ must vanish somewhere in $\Omega$."
$f(z)=e^z$ of course is a counterexample.
-
3
No true. Take $f(z)=z^3-3z$ and restrict it to the complement of $\lbrace 1,-1\rbrace$ so that $f'(z)$ is never $0$. It maps this domain onto $\mathbb C$. – Tom Goodwillie May 4 2011 at 0:16
show 3 more comments
• Many students have the false belief that if a topological space is totally disconnected, then it must be discrete (related to examples already given). The rationals are a simple counter-example of course.
• It is common to imagine rotation in an n-dimensional space, as a rotation through an "axis". this is of course true only in 3D, In higher dimensions there is no "axis".
• In calculus, I had some troubles with the following wrong idea. A curve in a plane parametrized by a smooth function is "smooth" in the intuitive sense (having no corners). the curve that is defined by $(t^2,t^2)$ for $t\ge0$ and $(-t^2,t^2)$ for $t<0$ is the graph of the absolute value function with a "corner" at the origin, though the coordinate functions are smooth. the "non-regularity" of the parametrization resolves the conflict.
• When first encountering the concept of a spectrum of a ring, the belief that a continuous function between the spectra of two rings must come from a ring homomorphism between the rings.
-
2
Unfortunately, "smooth" is a word which means whatever its utterer does not want to specify. Differentiable, C^infty, continuous, everything is mixed. – darij grinberg Apr 14 2011 at 15:12
1
I don't think the curve (-t^2,t^2) is the graph of the absolute value function. – Zsbán Ambrus May 2 2011 at 16:36
2
+1 for the discrete $\neq$ totally disconnected example. – Jim Conant May 4 2011 at 15:12
1
Discrete $\ne$ totally disconnected is a good one that I thought of today and just had to check to see if it was posted already. It adds to the confusion that every finite subset of a totally disconnected space must have the discrete topology, and that in most topological spaces encountered "in nature," the connected components are open sets. – Timothy Chow Oct 20 2011 at 14:30
show 4 more comments
A degree $k$ map $S^n\to S^n$ induces multiplication by $k$ on all the homotopy groups $\pi_m(S^n)$.
(Not sure if this is a common error, but I believed it implicitly for a while and it confused me about some things. If you unravel what degree $k$ means and what multiplication by $k$ in $\pi_m$ means, there's no reason at all to expect this to be true, and indeed it is false in general. It is true in the stable range, since $S^n$ looks like $\Omega S^{n+1}$ in the stable range, "degree k" can be defined in terms of the H-space structure on $\Omega S^{n+1}$, and an Eckmann-Hilton argument applies.)
-
2
If $n$ is even and $x \in \pi_{2n-1}(S^n)$ and $f$ a degree $k$ map and $H$ the Hopf invariant, then $H(f_* (x)) = k^2 H(x)$. A related misbelief: if $M$ is a framed manifold and $N\to$M a finite cover, of degree $d$. Then the framed bordism classes satisfy $[N]=d [M]$. Completely wrong. – Johannes Ebert Apr 14 2011 at 9:04
I saw many students using the "fact" that for a subset `$S$` of a group one has `$SS^{-1}=\{e\}$`
-
2
This is an interesting example, because it addresses the mistakes that come from the all-too frequent confusion with notations. But we need our shortcuts, our $f^{-1}(x)$ versus $x^{-1}$, etc. Obtaining concise notations while avoiding confusion: a tricky proposition! – Thierry Zell Apr 14 2011 at 15:50
If every collection of disjoint open sets in a topological space is at most countable, then the space is separable
-
(*) "Let $(I,\leq)$ be a directed ordered set, and $E=(f_{ij}:E_i\to E_j)_{i\geq j}$ be an inverse system of nonempty sets with surjective transition maps. Then the inverse limit `$\varprojlim_I\,E$` is nonempty."
This is true if $I=\mathbb{N}$ ("dependent choices"), and hence more generally if $I$ has a countable cofinal subset. But surprisingly (to me), those are the only sets $I$ for which (*) holds for every system $E$. (This is proved somewhere in Bourbaki's exercises, for instance).
Of course, other useful cases where (*) holds are when the $E_i$'s are finite, or more generally compact spaces with continuous transition maps.
-
For a bounded subset of a metric space the diameter is two times the radius!
Let $S\subset X$ be bounded. The definitions are:
`$\mathrm{diameter}(S):=\sup\{d(x,y)\,|\,x,y\in S\}$`
`$\mathrm{radius}(S):=\inf\{r>0\,|\,\exists x\in X:\,S\subset B(x,r)\}$`
where `$B(x,r)$` denotes the open ball of radius `$r$` around `$x$`.
-
4
Hםw do you define the radius of an arbitrary bounded subset? – Mark Schwarzmann Apr 11 2011 at 15:34
1
Disproved nicely by Reuleaux triangles. – darij grinberg Apr 12 2011 at 8:10
9
Disproved nicely by a two-point metric space. – Tom Goodwillie Apr 17 2011 at 1:36
## From Keith Devlin
"Multiplication is not the same as repeated addition", as put forward in Devlin's MAA column.
I'm not really sure how I feel about this one; I might be one of the unfortunate souls who are still prey to that delusion.
## Caution
In case you missed it, the column ended up spilling a lot of electronic ink (as evidenced in this follow-up column), so I don't believe it would be wise to start yet a new one on MO. Thanks in advance!
-
13
The more I think about this "error", the less I am convinced. It's like saying that you cannot say that $\binom n k$ is the number of $k$-element sets in an $n$-element set because then you will be unable to generalize to complex values of $n$. Or you cannot define the chromatic polynomial as the function counting the colourings and then plug in $-1$ to get the acyclic orientations of the graph. Also, I think it is perfectly understandable what it means to add something halfways. – thei Apr 10 2011 at 20:50
1
It's not a "false belief". It's a false heuristic. And it's actually here: mathoverflow.net/questions/2358/… – darij grinberg Apr 10 2011 at 21:17
2
When I taught elementary teachers the course on arithmetic, they all had been taught that multiplication is repeated addition, but I myself thought it was the cardinality of the cartesian product. We enjoyed discussing this difference in point of view. – roy smith May 9 2011 at 3:06
1
The "repeated addition" characterization has an advantage over the "cardinality of the Cartesian product" characterization (which possibly in some contexts could be considered a disadvantage). That is that it's not self-evident that it's commutative, and so one has a useful exercise for certain kinds of students: figure out why it's commutative. – Michael Hardy May 20 2011 at 2:28
show 5 more comments
Real projective space ${\mathbb{RP}}^3 = (\mathbb R^4 - 0)/\mathbb R^*$ is non-orientable.
-
9
"Non-orientable surfaces do not embed in orientable three-manifolds." is also a classic. – Sam Nead Apr 10 2011 at 19:33
show 1 more comment
I'm not sure that anyone holds this as a conscious belief but I have seen a number of students, asked to check that a linear map $\mathbb{R}^k \to \mathbb{R}^{\ell}$ is injective, just check that each of the $k$ basis elements has nonzero image.
-
8
Higher-level version: $n$ vectors are linearly independent iff no two are proportional. I've seen applied mathematicians do that. – darij grinberg Apr 10 2011 at 18:45
The cost of multiplying two $n$-digit numbers is of order $n^2$ (because each digit of the first number has to be multiplied with each digit of the second number).
A lot of information is found on http://en.wikipedia.org/wiki/Multiplication_algorithm .
The first faster (and easily understandable) algorithm was http://en.wikipedia.org/wiki/Karatsuba_algorithm with complexity $n^{log_2 3} \sim n^{1.585}$.
Basic idea: To multiply $x_1x_2$ and $y_1y_2$ where all letters refer to $n/2$-digit parts of $n$-digit numbers, calculate $x_1 \cdot y_1$, $x_2\cdot y_2$ and $(x_1+x_2)\cdot(y_1+y_2)$ and note that this is sufficient to calculate the result with three such products instead of four.
-
1
It would be better if these misconceptions would come with explanations how things really are... – darij grinberg Apr 10 2011 at 18:28
1
Along these lines: there is a widespread misapprehension that multiplication is the same thing as a multiplication algorithm (whichever one the speaker learned in elementary school). – Thierry Zell Apr 10 2011 at 19:25
4
At least it's better than people thinking multiplication is constant-time. :P – Harry Altman Apr 10 2011 at 19:35
show 3 more comments
Regard a reasonably nice surface in $\mathbb R^3$ that can locally be expressed by each of the functions $x(y,z)$, $y(x,z)$ and $z(x,z)$, then obviously
$\frac {dy} {dx} \cdot \frac {dz} {dy} \cdot \frac {dx} {dz} = 1$
(provided everything exists and is evaluated at the same point).
After all, this kind of reasoning works in $\mathbb R^2$ when calculating the derivative of the inverse function, it works for the chain rule and it works for separation of variables.
Note that this product is in fact $-1$ which can either be seen by just thinking about what happens to the equation $ax+by+cz=d$ of a plane / tangent plane or by looking at the expression coming out of the implicit function theorem.
I recall someone claiming that this example proves that $dx$ should be regarded as linear function rather than infinitesimal, but I cannot reconstruct the argument at the moment as this discussion was 15 years ago.
In particular, it is true under appropriate conditions in $\mathbb R^4$ that $\frac {\partial y} {\partial x} \cdot \frac {\partial z} {\partial y} \cdot \frac {\partial w} {\partial z} \cdot \frac {\partial x} {\partial w} = 1$
-
5
This is an example of the principle that naïve reasoning with Leibniz notation works fine for total derivatives but not for partial derivatives. This is one reason why I would always write the left-hand side as $\frac{\partial{y}}{\partial{x}} \cdot \frac{\partial{z}}{\partial{y}} \cdot \frac{\partial{x}}{\partial{z}}$ if not $\left(\frac{\partial{y}}{\partial{x}}\right)_z \cdot \left(\frac{\partial{z}}{\partial{y}}\right)_x \cdot \left(\frac{\partial{x}}{\partial{z}}\right)_y$ (notation that I learnt from statistical physics, where the independent variables are otherwise not clear). – Toby Bartels Apr 7 2011 at 12:56
2
Can you help us understand it? Or is there no better way than computation? – darij grinberg Apr 10 2011 at 18:27
show 1 more comment
This might not be common, but I once believed the following.
Let $A, B$ be integers, and define a sequence by the linear recurrence $s_n = A s_{n-1} + B s_{n-2}$ with the base case $s_0 = 0$, $s_1 = 1$. Two important special cases are the Fibonacci sequence ($A = B = 1$) and the sequence $s_n = 2^n - 1$ (where $A = 3$, $B = -2$). Then, for any integers $n$ and $k$, $\gcd(s_n, s_k) = s_{\gcd(n,k)}$.
This is true in the two mentioned special cases, so it's tempting to believe it's true in general. But there's a counterexample: $A = B = k = 2$, $n = 3$.
Update: corrected the powers of two minus one example from B = 2 to B = -2. Thanks to Harry Altman.
-
I'm not sure how common it is but I've certainly been able to trick a few people into answering the following question wrong:
Given $n$ identical and independently distributed random variables, $X_k$, what is the limiting distribution of their sum, $S_n = \sum_{k=0}^{n-1} X_k$, as $n \to \infty$?
Most (?) people's answer is the Normal distribution when in actuality the sum is drawn from a Levy-stable distribution. I've cheated a little by making some extra assumptions on the random variables but I think the question is still valid.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 210, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9387867450714111, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/234495/injective-morphisms-monomorphisms-and-left-invertible-morphisms-in-abelian-cate
|
Injective Morphisms, Monomorphisms and Left Invertible Morphisms in Abelian Categories
Let $\mathcal{C}$ be an abelian category. A morphism $f:X \rightarrow Y$ is called injective if its kernel is zero. $f$ is called monomorphism if whenever $f \circ g=0$, for $g:Z \rightarrow X$, then $g=0$. We have the result that a morphism is injective if and only if it is a monomorphism. My question is: what is the correct terminology for the stronger property of existence of a morphism $h: Y \rightarrow X$ such that $h \circ f=id_{X}$? What is the minimal additional assumption that we need to make for $\mathcal{C}$ such that a morphism is injective if and only if the above-mentioned property is true? For example, if the objects are sets, then the equivalence is true.
-
$f$ is then said left invertible. – Berci Nov 10 '12 at 23:10
Your final claim is false for trivial reasons: the inclusion $\emptyset \to X$ is always injective/monic but splits if and only if $X$ is also empty. – Zhen Lin Nov 11 '12 at 0:01
1 Answer
$f$ is left invertible. Equivalently, it is a split monomorphism. I wrote a blog post on the subject that you might find helpful.
The condition that every monomorphism splits is quite strong. For example, in $\text{Top}$, the split monomorphisms are precisely the inclusions of retracts, and most monomorphisms in $\text{Top}$ are not of this form. Similarly, in $\text{Ab}$, the split monomorphisms are precisely the inclusions of direct summands, and most monomorphisms in $\text{Ab}$ are not of this form.
For abelian categories, $\mathcal{A}$ has this property if and only if every short exact sequence splits, hence if and only if $\mathcal{A}$ is semisimple. This is a highly restrictive condition.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9323562383651733, "perplexity_flag": "head"}
|
http://www.reference.com/browse/wiki/Arrangement_of_hyperplanes
|
Definitions
# Arrangement of hyperplanes
In geometry and combinatorics, an arrangement of hyperplanes is a finite set A of hyperplanes in a linear, affine, or projective space S. Questions about a hyperplane arrangement A generally concern geometrical, topological, or other properties of the complement, M(A), which is the set that remains when the hyperplanes are removed from the whole space. One may ask how these properties are related to the arrangement and its intersection semilattice. The of A, written L(A), is the set of all subspaces that are obtained by intersecting some of the hyperplanes; among these subspaces are S itself, all the individual hyperplanes, all intersections of pairs of hyperplanes, etc. (excluding, in the affine case, the empty set). These subspaces are called the flats of A. L(A) is partially ordered by reverse inclusion.
If the whole space S is 2-dimensional, the hyperplanes are lines; such an arrangement is often called an arrangement of lines. Historically, real arrangements of lines were the first arrangements investigated. If S is 3-dimensional one has an arrangement of planes.
## General theory
### The intersection semilattice
The intersection semilattice L(A) is a meet semilattice and more specifically is a geometric semilattice. If the arrangement is linear or projective, or if the intersection of all hyperplanes is nonempty, the intersection lattice is a geometric lattice. (This is why the semilattice must be ordered by reverse inclusion--rather than by inclusion, which might seem more natural but would not yield a geometric (semi)lattice.)
### Polynomials
For a subset B of A, let us define f(B) := the intersection of the hyperplanes in B; this is S if B is empty. The characteristic polynomial of A, written pA(y), can be defined by
$p_A\left(y\right) := sum_B \left(-1\right)^$
y^{dim f(B)},>
summed over all subsets B of A except, in the affine case, subsets whose intersection is empty. (The dimension of the empty set is defined to be −1.) This polynomial helps to solve some basic questions; see below. Another polynomial associated with A is the Whitney-number polynomial wA(x, y), defined by
$w_A\left(x,y\right) := sum_B x^\left\{n-dim f\left(B\right)\right\} sum_C \left(-1\right)^$
>y^{dim f(C)},
summed over B ⊆ C ⊆ A such that f(B) is nonempty.
Being a geometric lattice or semilattice, L(A) has a characteristic polynomial, pL(A)(y), which has an extensive theory (see geometric lattice). Thus it is good to know that pA(y) = yi pL(A)(y), where i is the smallest dimension of any flat, except that in the projective case it equals yi + 1pL(A)(y). The Whitney-number polynomial of A is similarly related to that of L(A). (The empty set is excluded from the semilattice in the affine case specifically so that these relationships will be valid.)
### The Orlik-Solomon algebra
The intersection semilattice determines another combinatorial invariant of the arrangement, the Orlik-Solomon algebra. To define it, fix a commutative subring K of the base field, and form the exterior algebra E of the vector space
$bigoplus_\left\{H in A\right\} K e_H$
generated by the hyperplanes. A chain complex structure is defined on E with the usual boundary operator $partial$. The Orlik-Solomon algebra is then the quotient of E by the ideal generated by elements of the form $e_\left\{H_1\right\} wedge cdots wedge e_\left\{H_p\right\}$ where H_1, ..., H_p have empty intersection, and by boundaries of elements of the same form for which $H_1 cap cdots cap H_p$ has codimension greater than p.
## Real arrangements
In real affine space, the complement is disconnected: it is made up of separate pieces called regions or chambers, each of which is either a bounded region that is a convex polytope, or an unbounded region that is a convex polyhedral region which goes off to infinity. Each flat of A is also divided into pieces by the hyperplanes that do not contain the flat; these pieces are called the faces of A. The regions are faces because the whole space is a flat. The faces of codimension 1 may be called the facets of A. The face semilattice of an arrangement is the set of all faces, ordered by inclusion. Adding an extra top element to the face semilattice gives the face lattice.
In two dimensions (i.e., in the real affine plane) each region is a convex polygon (if it is bounded) or a convex polygonal region which goes off to infinity.
• As an example, if the arrangement consists of three parallel lines, the intersection semilattice consists of the plane and the three lines, but not the empty set. There are four regions, none of them bounded.
• If we add a line crossing the three parallels, then the intersection semilattice consists of the plane, the four lines, and the three points of intersection. There are eight regions, still none of them bounded.
• If we add one more line, parallel to the last, then there are 12 regions, of which two are bounded parallelograms.
A typical problem about an arrangement in n-dimensional real space is to say how many regions there are, or how many faces of dimension 4, or how many bounded regions. These questions can be answered just from the intersection semilattice. For instance, two basic theorems are that the number of regions of an affine arrangement equals (−1)npA(−1) and the number of bounded regions equals (−1)npA(1). Similarly, the number of k-dimensional faces or bounded faces can be read off as the coefficient of xn−k in (−1)n wA (−x, −1) or (−1)nwA(−x, 1).
Another question about an arrangement in real space is to decide how many regions are simplices (the n-dimensional generalization of triangles and tetrahedra). This cannot be answered based solely on the intersection semilattice.
A real linear arrangement has, besides its face semilattice, a poset of regions, a different one for each region (Edelman 1984). This poset is formed by choosing an arbitrary base region, R0, and associating with each region R the set A(R0, R) defined as the set of hyperplanes that separate the two regions. One says R1 ≥ R2 if A(R1, R) contains A(R2, R). This lattice has interesting properties that we will not go into here; notably, it is an Eulerian poset.
Meiser designed a fast algorithm to determine the face of an arrangement of hyperplanes containing an input point.
## Complex arrangements
In complex affine space (which is hard to visualize because even the complex affine plane has four real dimensions), the complement is connected (all one piece) with holes where the hyperplanes were removed.
A typical problem about an arrangement in complex space is to describe the holes.
The basic theorem about complex arrangements is that the cohomology of the complement M(A) is completely determined by the intersection semilattice. To be precise, the cohomology ring of M(A) (with integer coefficients) is isomorphic to the Orlik-Solomon algebra on Z.
The isomorphism can be described rather explicitly, and gives a presentation of the cohomology in terms of generators and relations, where generators are represented (in the de Rham cohomology) as logarithmic differential forms
$frac\left\{1\right\}\left\{2pi i\right\}frac\left\{dalpha\right\}\left\{alpha\right\}.$
with $alpha$ any linear form defining the generic hyperplane of the arrangement.
## Technicalities
Sometimes it is convenient to allow the degenerate hyperplane, which is the whole space S, to belong to an arrangement. If A contains the degenerate hyperplane, then it has no regions because the complement is empty. However, it still has flats, an intersection semilattice, and faces. The preceding discussion assumes the degenerate hyperplane is not in the arrangement.
Sometimes one wants to allow repeated hyperplanes in the arrangement. We did not consider this possibility in the preceding discussion, but it makes no material difference.
## References
| issue = 2 | journal = Transactions of the American Mathematical Society | pages = 617–631 | title = A partial order on the regions of ℝn dissected by hyperplanes | volume = 283 | year = 1984}}.
| issue = 2 | journal = Information and Computation | pages = 286–303 | title = Point location in arrangements of hyperplanes | volume = 106 | year = 1993}}.
| location = Berlin | publisher = Springer-Verlag | series = Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] | title = Arrangements of Hyperplanes | volume = 300 | year = 1992}}.
| location = Providence, R.I. | publisher = American Mathematical Society | journal = Memoirs of the American Mathematical Society | title = Facing up to arrangements: face-count formulas for partitions of space by hyperplanes | year = 1975}}.
Wikipedia, the free encyclopedia © 2001-2006 Wikipedia contributors (Disclaimer)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.915884792804718, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/23885/why-events-in-probability-are-closed-under-countable-union-and-complement?answertab=votes
|
# Why events in probability are closed under countable union and complement?
In probability, events are considered to be closed under countable union and complement, so mathematically they are modeled by $\sigma$-algebra. I was wondering why events are considered to be closed under countably union and complement?
In Nate Eldredge's post, he has done an excellent job on explaining this, by using whether questions are answered or not as an analogy to whether events occur or not, if I understand his post correctly. However, if someone could explain plainly without analogy, it could be clearer to me.
I was particularly curious why events are not considered to be closed under infinite (possibly uncountably) union, but instead just under countably union? So possibly to model events using the power set? I think this is not addressed in Nate Eldredge's post.
My guess would be that the reason might be related to the requirement on the likelihood of any event to occur to be "measurable" in some sense. But how exactly to understand this requirement is unclear to me.
PS: This post is related to my previous one Interpretation of sigma algebra, but the questions asked in these two are not the same.
Thanks and regards!
-
## 2 Answers
As Jonas mentioned, allowing arbitrary unions is not "consistent", in the sense that there is no proper definition of probability. This is also related to the fact that infinite sums make much more sense when countable, since it's not clear how to attach a finite number to an uncountable sum of positive reals.
On the other hand, many desirable events are describable using countable unions and intersections. For example, events like "the random walk returns to the origin" is a union of countably many events "the random walk returns to the origin at time $t$", and any one of those is a finite union of "basic" events.
In general, first order properties always correspond to taking countable unions and intersections; this means that if you have a statement of the form "$\forall x \exists y \cdots P(x,y,\ldots)$", where $x,y,\ldots$ are integers, and the $P$s are basic events (e.g. for a random walk, depend on finitely many times), then the corresponding event is guaranteed to be in the $\sigma$-algebra, i.e. is guaranteed to have assigned to it a "probability".
-
2
Spot on. Re uncountable sums of positive reals, it is clear that there is no way to attach anything else than $+\infty$ to any of these. Namely: let $(x_t)_{t\in T}$ denote any family of positive real numbers $x_t$ indexed by an uncountable set $T$. Then, for every $M$, there exists a finite subset $S$ of $T$ such that the sum of $x_t$ over $t\in S$ is greater than $M$. – Did Feb 26 '11 at 20:36
If events were closed under arbitrary unions, and if singletons are events, then everything would be an event, because every set is the union of the singletons it contains.
Trying to define probabilities or measures on all sets can lead to problems.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9668397307395935, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/112703/deciding-whether-or-not-an-exponentially-distributed-random-variable-exists-in-a
|
## Deciding whether or not an exponentially distributed random variable exists in a set via the use of a “black box” function
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
I have some set of known size but with unknown elements, $(x_1, ..., x_N) \in X$, where the elements of $X$ are exponentially distributed random variables with unknown rate parameters, $(\lambda_1, ..., \lambda_N) \in R$. I also have a "black box" function $f$ that samples an element from $X$ with uniform probability, and then returns a randomly sampled value from the chosen element's exponential distribution (corresponding, perhaps, to the time until the first instance of an event governed by the chosen variable).
I'm looking to use $f$ to discern whether or not an exponentially distributed random variable, $x_q$, with known rate parameter, $\lambda_q$, exists in the set $X$. I also know that $\lambda_q$ is smaller then all other rate parameters in the set $X$ by at least a multiplicative factor $w$. Said another way, $\lambda_q \leq Min[(R-\lambda_q)]*w$, where $w < 1$.
Provided $w$, how many times must I use $f$ to sample from $X$ to decide whether $x_q \in X$ with some threshold confidence?
Note - If this problem is too open ended as things stand, please feel free to suggest additional restrictions or clarifications!
Note 2 - We can specify that $N \leq 100$, where $N$ is a positive integer, and that $w \leq \frac{1}{2}$, though we cannot say that $w << 1$.
-
Surely you have to know something about $N$ also in order for this to have any hope? Maybe you want a bound in terms of $N$? – Anthony Quas Nov 17 at 21:16
@Anthony Quas Fair point. I am looking for a bound in terms of $N$, and I have changed the question to specify that we know $N$. – unknown (yahoo) Nov 18 at 7:20
What are typical values of $N$ and $w$? And what is $R$ in $R-\lambda_q$? – fedja Nov 19 at 1:51
@fedja I have added some specifications for $N$ and $w$ in Note 2. I can tighten them as needed. $R - \lambda_q$ is meant to be the set $R$ without the element $\lambda_q$ (perhaps this notation is incorrect?) – unknown (yahoo) Nov 19 at 1:56
@fedja Ah, $R$ is defined earlier as the set of rate parameters associated with the exponentially distributed random variables in $X$. – unknown (yahoo) Nov 19 at 2:00
show 5 more comments
## 1 Answer
OK, here is what I have. I'll skip some derivations (I'll provide them later if you are interested) and just describe the conclusions. The final tables apply if you have noiseless data. Any noticeable amount of noise will cost you quite a bit here.
The problem of how to distinguish between two fixed densities $p(x)$ and $q(x)$ is classical. Suppose that we want to bound the combined probability of error by some small $\theta>0$. This means that if we are allowed to take $n$ samples, we have to find some set $E\subset\mathbb R^n$ such that $\int_E P+\int_{E^c}Q\le\theta$ where $P(x_1,\dots,x_n)=p(x_1)\dots p(x_n)$ and similarly for $Q$. Here $E$ is the set where we declare $q$ to be actual density. Note that in no way can this sum be better than $\int\min(P,Q)$ and we can achieve that by the standard maximal likelihood decision: we declare the density $Q$ if `$P(X_1,\dots,X_n)<Q(X_1,\dots,X_n)$` and $P$ otherwise. We also can get a fairly clear idea of the necessary sampling size. In fact, we can tell it almost up to a factor of $2$. Note that $\min(P,Q)\le\sqrt{PQ}$, so $$\int\min(P,Q)\le \left(\int \sqrt{pq}\right)^n$$. On the other hand, $$\left(\int \sqrt{pq}\right)^{2n}=\left(\int \sqrt{PQ}\right)^{2}\le \left(\int\min(P,Q)\right)\left(\int\max(P,Q)\right)\le 2\int\min(P,Q)$$ Thus, if $\int\sqrt{pq}=e^{-H}$, then to reach the level $\theta$ of combined error, we need at least $\frac 12 H^{-1}\log\frac 1{2\theta}$ and $H^{-1}\log\frac 1\theta$ samples will suffice.
The problem with your case is that we test not two densities but two families of densities against each other. However, if my computations are correct, we are lucky and the likelihood test that distinguishes the worst pair is actually universal enough to achieve the level of confidence given by the above $\sqrt{pq}$ estimate. So assuming that $\lambda_q=w$ (so every other $\lambda$ is $\ge 1$), we can define $p_L(x)=\frac{N-1}N Le^{-Lx}+\frac 1Nwe^{-wx}$, $q(x)=e^{-x}$ where $L=L(N,w)$ is determined from the maximization problem $\int\sqrt{p_Lq}\to\max$ (which in practice is better to pose as $H=\frac 12\int(\sqrt{p_L}-\sqrt q)^2\to\min$), then the corresponding maximal likelihood text works fine and gives a guaranteed bound $\theta$ for each one-sided error whenever the $\sqrt{pq}$ estimate yields the combined error of $\theta$.
I ran a small program to see what sampling sizes it gives for reasonable $w$ and $N$. The table for the sacramental $\theta=0.05$ is below. The lines are $N,L,n$. \phantom{+} is the artifact of the automatic LaTeX style formatting that I was too lazy to disable. As you can see, with your $10^5$ samples you are just on the edge of "theoretically feasible" for $w=0.5,N=100$ but if you can drop either number, everything gets fairly nice (if no noise is present, of course).
I suggest you run a few simulations and see whether it works for you (the "general theory" should be OK, but I could make some stupid mistakes somewhere). Normally, you are getting something like $$n=8N^{\frac 1{1-w}}\log\frac {1}{\theta}$$ as a rule of thumb for choosing the sample size. This is all "the best performance in the worst case" approach. If you actually have more information than you put in the post, that may help push the numbers down a bit :).
Feel free to ask questions but do not expect a quick answer: life is crazy at this end...
```w=0.5
100 \phantom{+} 1.009397 186378
90 \phantom{+} 1.010406 155814
80 \phantom{+} 1.011662 127611
70 \phantom{+} 1.013269 101830
60 \phantom{+} 1.015398 78546
50 \phantom{+} 1.018358 57847
40 \phantom{+} 1.022762 39849
30 \phantom{+} 1.030037 24705
20 \phantom{+} 1.044454 12637
w=0.45
100 \phantom{+} 1.010954 89813
90 \phantom{+} 1.012108 75790
80 \phantom{+} 1.013540 62719
70 \phantom{+} 1.015367 50637
60 \phantom{+} 1.017779 39584
50 \phantom{+} 1.021120 29613
40 \phantom{+} 1.026065 20790
30 \phantom{+} 1.034179 13204
20 \phantom{+} 1.050103 6985
w=0.4
100 \phantom{+} 1.012550 45454
90 \phantom{+} 1.013842 38711
80 \phantom{+} 1.015442 32363
70 \phantom{+} 1.017476 26429
60 \phantom{+} 1.020152 20932
50 \phantom{+} 1.023843 15900
40 \phantom{+} 1.029277 11371
30 \phantom{+} 1.038131 7392
20 \phantom{+} 1.055337 4039
w=0.35
100 \phantom{+} 1.014103 24058
90 \phantom{+} 1.015519 20670
80 \phantom{+} 1.017266 17449
70 \phantom{+} 1.019481 14406
60 \phantom{+} 1.022385 11553
50 \phantom{+} 1.026372 8904
40 \phantom{+} 1.032211 6480
30 \phantom{+} 1.041659 4307
20 \phantom{+} 1.059842 2427
w=0.3
100 \phantom{+} 1.015495 13254
90 \phantom{+} 1.017009 11481
80 \phantom{+} 1.018872 9781
70 \phantom{+} 1.021226 8158
60 \phantom{+} 1.024301 6619
50 \phantom{+} 1.028504 5172
40 \phantom{+} 1.034628 3826
30 \phantom{+} 1.044470 2597
20 \phantom{+} 1.063235 1506
w=0.25
100 \phantom{+} 1.016562 7561
90 \phantom{+} 1.018136 6600
80 \phantom{+} 1.020067 5670
70 \phantom{+} 1.022499 4775
60 \phantom{+} 1.025664 3917
50 \phantom{+} 1.029973 3099
40 \phantom{+} 1.036219 2328
30 \phantom{+} 1.046192 1611
20 \phantom{+} 1.065043 960
w=0.2
100 \phantom{+} 1.017079 4443
90 \phantom{+} 1.018656 3906
80 \phantom{+} 1.020587 3382
70 \phantom{+} 1.023011 2873
60 \phantom{+} 1.026156 2380
50 \phantom{+} 1.030419 1906
40 \phantom{+} 1.036571 1453
30 \phantom{+} 1.046335 1024
20 \phantom{+} 1.064644 625
w=0.15
100 \phantom{+} 1.016716 2675
90 \phantom{+} 1.018218 2367
80 \phantom{+} 1.020052 2064
70 \phantom{+} 1.022349 1768
60 \phantom{+} 1.025319 1478
50 \phantom{+} 1.029333 1197
40 \phantom{+} 1.035099 924
30 \phantom{+} 1.044204 662
20 \phantom{+} 1.061157 414
w=0.1
100 \phantom{+} 1.014952 1647
90 \phantom{+} 1.016263 1466
80 \phantom{+} 1.017861 1287
70 \phantom{+} 1.019857 1111
60 \phantom{+} 1.022433 937
50 \phantom{+} 1.025903 766
40 \phantom{+} 1.030871 599
30 \phantom{+} 1.038684 436
20 \phantom{+} 1.053150 278
w=0.05
100 \phantom{+} 1.010786 1099
90 \phantom{+} 1.011716 983
80 \phantom{+} 1.012849 868
70 \phantom{+} 1.014262 754
60 \phantom{+} 1.016083 641
50 \phantom{+} 1.018533 528
40 \phantom{+} 1.022034 417
30 \phantom{+} 1.027529 308
20 \phantom{+} 1.037677 200
```
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 74, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9382656216621399, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/103695?sort=oldest
|
## Does the self-product of a $g$-dimensional abelian variety contain an abelian variety of dimension smaller than $g$ at some point
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Let me be more precise than the title. (This will be my last attempt to do something with abelian varieties. Sorry for all the basic questions. The answers have been great!)
Let $A$ be a simple abelian variety over a field $k$. Let $g\geq 2$ be the dimension of $A$.
Does there exist an integer $n\geq 1$ such that $A^n = A\times_k A\ldots\times_k A$ contains an abelian variety of dimension less than $g$?
It suffices to prove that $A^n$ contains a curve of genus strictly smaller than $g$ for some $n\geq 1$.
I'm afraid that this is not true. In fact, if $B\subset A^n$, then $B$ is isogenous to $A^m$ probably. Therefore, $\dim B =mg$. I'm just asking to be sure.
-
1
The answer to your question is no. Poincaré's complete reducibility theorem. As you surmise, any $B$ in $A^n$ is isogenous to $A^m, m \le n$. – Felipe Voloch Aug 1 at 15:38
## 2 Answers
No (I suppose that $k$ is algebraically closed). This is because Poincaré's complete reducibility theorem contains a unicity statement for the intervening factors (up to isogeny). See Mumford, Abelian varieties, p. 173-174.
-
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
In fact, it is no for completely elementary reasons. If $A$ is simple and $B\subset A^n$ is an abelian variety with $\dim B < g$, then $Hom(B,A^n)=Hom(B,A)^n$ is necessarily zero. So $B=0$.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9280476570129395, "perplexity_flag": "head"}
|
http://unapologetic.wordpress.com/2007/01/30/groups/?like=1&_wpnonce=9d779cb1c0
|
# The Unapologetic Mathematician
## Groups
A good ramp up into abstract algebra is the idea of a group. Groups show up everywhere in mathematics, and getting a feel for working with them really helps you learn about other algebraic notions.
There are a number of ways to think about groups, but for now I’ll stick with a very concrete, hands-on approach. This is the sort of thing you’d run into in a first undergraduate course in abstract (or “modern”) algebra.
So, a group is basically a set (a collection of elements) with some notion of composition defined which satisfies certain rules. That is, given two elements $a$ and $b$ of a group, there’s a way to stick them together to give a new element ab of the group. Then there are the
Axioms of Group Theory
1. Composition is associative. That is, if we have three elements $a$, $b$, and $c$, the two elements $(ab)c$ and $a(bc)$ are equal.
2. There is an identity. That is, there is an element (usually denoted $e$) so that $ae=a=ea$.
3. Every element has an inverse. That is, for every element $a$ there is another element $a^{-1}$ so that $aa^{-1}=e=a^{-1}a$.
That’s all well and good, but if this is the first time thinking about an algebraic structure like this it doesn’t really tell you anything. What you need (after the jump) are a few
Examples of groups
• The integers with addition as the operation
• The rational numbers with addition as the operation
• The nonzero rational numbers with multiplication as the operation
• The real numbers with addition as the operation
• The nonzero real numbers with multiplication as the operation
• The numbers on a clock face with addition “modulo 12″ as the operation
• Rearrangements of three distinct items on a line with composition of rearrangements as the operation
• Rotations of three-dimensional space with composition of rotations as the operation
The first five examples come up a lot, and many other systems are based on them. It shouldn’t take much thought to verify the axioms for them.
The sixth example, “clock addition”, is an extremely important one. The term “modulo 12″ could use some explanation, though. All this means is that when I add or subtract numbers I might get something outside the range of from one to twelve that actually show up on a clock. We handle this by just adding or subtracting twelves until we get back into that range. We do this all the time without thinking too much about it: if it’s 11:00 now and I want to go for tea at 4:00 I subtract 11 from 4 to get -7. This is below the proper range, so I add 12 to get 5 — tea is five hours away. It takes a bit more work to pick out the identity and inverses here, but it’s not too difficult. Also, note that there’s nothing really special about 12. We can work “modulo n” for any number n. We call this example Z12, and the general case Zn.
For the seventh example, imagine I have three objects — a, b, and c — and I want to line them up. There are six ways I can do it:
```a b c
a c b
b a c
b c a
c a b
c b a```
How do I get from the first arrangement to the third? I swap the first two objects. That transformation we call a “permutation” of the three objects. Going from the first to the fifth is another permutation by taking the object from the end and sticking it at the beginning of the line. Doing these two permutations one after the other I swap the first two objects, then take the third and move it to the front, taking $(a\,b\,c)$ to $(c\,b\,a)$. If I move the third object to the end first, though, the composition takes $(a\,b\,c)$ to $(a\,c\,b)$.
This illustrates an important point about groups: we never assumed that the operation is “commutative”. The order we do things matters in general. This isn’t familiar from arithmetic, but it’s common in everyday life. If I’m driving with my arm out the window, there’s a big difference between these two procedures
• Pull my arm inside, then roll up the window
• Roll up the window first, then pull my arm inside
The last example is also not commutative. It’s also got a new twist that’s also in the examples involving the real numbers: the group elements form a continuum. For integers with addition we’re just looking at this number or that number, and they’re nicely separated from each other. We can slide our way around the group of rotations from one rotation to another, which adds all sorts of new geometric structure to the group. This is an example of what we call a “Lie group”, after Sophus Lie (pronounced “lee”). Given how important they are I’m sure I’ll mention them more in the future.
I’ll close for now with a few basic statements. I’ll leave the proofs for interested readers. They aren’t too hard from the basic axioms of a group.
• A group has only one identity element
• An element of a group has only one inverse
• For elements $a$ and $b$ of a group, we have $(ab)^{-1}=b^{-1}a^{-1}$
• For an element $a$ of a group, we have $(a^{-1})^{-1}=a$
• For any two elements $a$ and $b$ of a group, the equations $ax=b$ and $xa=b$ have unique solutions in the group
About these ads
Like Loading...
## 24 Comments »
1. for the last thing in the “for the reader to prove” section, it seems that the solution for x in ax=b will not be the same solution for x in xa=b unless a^(-1) and b commute.
Comment by andy | January 31, 2007 | Reply
2. Correct: they dont have the same solutions. What’s to be shown is that the solution for each equation exists and is unique.
Comment by | January 31, 2007 | Reply
3. Interesting blog. Hope it grows in popularity.
About the permutation group, it is perhaps better not to talk both about `swapping’ and `sticking it at the beginning’ as these two are different operations, the latter is made up of several swap operations. Going from { a b c } to { c a b } is two swaps, from { a b c } to { a c b } to { c a b }. Using only swaps is also useful in determining even and odd permutations of n objects (cyclic is odd for even n).
Comment by Amitabha | January 31, 2007 | Reply
4. You’re exactly right, Amitabha, that every permutation can be built up from transpositions. However, I’m trying to give examples of different kinds of permutations for an audience that may not be as accustomed to them. Rest assured that I will have to return to the transpositions picture when I eventually need to talk about the signum representation.
Comment by | January 31, 2007 | Reply
5. most people now start with set, then move up to groupoid, semi-group, then quasigroup, then loop, then monoid, then finally group
Comment by babi | January 31, 2007 | Reply
6. babi: If this were a formal exposition I would start with set theory, yes. This isn’t a formal exposition, though. I’m trying to give the idea of groups for a somewhat general audience.
That said, I’m not sure what “most people” you’re talking about. It’s a lot easier to point to examples of groups than to examples of semigroups, so the notion of a group is easier to communicate at an introductory level. Almost all the basic abstract algebra texts I’ve seen — Hungerford, Judson, and Jacobson for examples — start in with groups. Even Lang’s Algebra only starts with monoids and then moves to groups within a couple pages.
The only major departure I’ve seen is older texts like Birkhoff & MacLane, which start with integral domains to build off of a basic understanding of the integers. I have yet to see a single basic algebra text which even mentions loops, except possibly in passing. Can you provide an example of one? Bourbaki doesn’t count since I don’t know anyone who would seriously try to teach a first course in anything from Bourbaki.
Anyhow, if I wanted to be thorough I’d have to throw in magmas, categories, n-categories, and so on. If I wanted to do that I’d start from categories, steal the exposition from Lawvere, and promptly confuse everyone. Groups are rather easy for a novice to pick up, while being useful enough to lead into quandles, which is what I really want to get to pretty soon.
Comment by | January 31, 2007 | Reply
7. Motivation *before* formalism, I always say. Nice introduction to group theory.
Comment by | January 31, 2007 | Reply
8. Just so that everybody’s on the same page: I’m pretty sure that what babi calls “groupoids” are the same thing as what John (in the next comment) calls “magmas”.
Comment by | January 31, 2007 | Reply
9. Toby, you mean that some people use “groupoid” for a different thing? Then what do they call groupids?
Comment by | January 31, 2007 | Reply
10. Allow me to introduce a discordant note. While this is a well-written introduction to groups, I’m pretty sure virtually no one from “the general audience”, without prior experience with rigorous college-level math, will be able to really understand it.
It may appear to you as though you’re starting from scratch with the basic definitions anyone can understand, but in fact to even follow your train of thought requires prior experience with these ideas, in no particular order:
1. That a set is a collection of entities treated as having no structure, and what that means
2. That an algebraic structure is generally a set + some operations defined axiomatically, and how that connects to familiar objects not previously so considered
3. Why sticking a and b together is called “composition”, what does it mean that it’s another element of the group (it’s just a and b written together!).
4. What’s ‘associative’ and why it’s important.
5. What does it mean to have an operation that’s not given explicitly, but only constrained axiomatically, and how does one do things with it if you don’t know what it means.
6. ugh, rational numbers… I remembered once what those are exactly… oh you mean fractions!
7. How the hell did you get from permutations of letters to some mysterious “objects”?
and so on and so forth.
While I appreciate the spirit in which you’re approaching your new blog (which looks interesting, by the way, thanks), I think you’re vastly underestimating the difficulty of explaining modern math structures to general public. I tried to further speculate on the nature of such underestimating in a post inspired by your effort.
Comment by | February 1, 2007 | Reply
11. Thanks for the comments, Anatoly. You raise some interesting points, and I’ll try to keep them in mind as I continue these basic-concepts posts. At the outset, you’re right that what I’m trying to pull off here is difficult. You’re also dead-on that I have a long row to hoe to model an average reader’s thought processes, especially since I diverge from the norm you point out about learning abstract algebra only after basic linear algebra.
I’m not quite as pessimistic, though. If I were trying to teach people to actually use group theory I’d be with you all the way. Instead, I’m trying to give people some of the flavor of these subjects with as few outright lies as possible. This game is downright popular in theoretical physicis, with such august authors as Greene, Hawking, and Penrose. In fact, Penrose’s The Road to Reality is a great inspiration to my project.
I’ve actually had some pretty good results using this sort of approach in the past, albeit face-to-face. Can my friends past whom I’ve thrown some ideas actually work in group theory, or even read a current paper in the subject? Probably not. Do they have an idea of what group theorists do? Definitely more than before we talked.
I’m also hoping that using many examples will help clarify the abstract axioms I’m laying down. I’m very explicitly trying not to be another Bourbaki here.
My next post in this series will likely be an attempt to clean up some of these questions you mention. Any other points I’m less than clear on in the future, please bring them up and I’ll be glad to go back and run over them again.
Comment by | February 1, 2007 | Reply
12. MetaMath.org has recently added some simple group theory. Since they don’t have a permutation notation, or even a plan for decimal numbers larger than 9, it may be a while before you can use them for introductions to abstract algebra. I found the following related proofs (if you are comfortable with the “metalogic” used by the author).
A group has only one identity element
http://us.metamath.org/mpegif/grpideu.html
An element of a group has only one inverse
http://us.metamath.org/mpegif/grpinveu.html
For elements a and b of a group, we have (ab)^-1=b^-1 a^-1
http://us.metamath.org/mpegif/grpinvop.html
For an element a of a group, we have (a^-1)^-1=a
http://us.metamath.org/mpegif/grp2inv.html
For any two elements a and b of a group, the equations ax=b and xa=b have unique solutions in the group
Aha. This would seem to be some low-hanging fruit. Now if I could only figure out how to enter proofs into their system.
Comment by | February 3, 2007 | Reply
13. As I understand it, MetaMath is an attempt to actually present formal proofs of mathematical theorems? It’s interesting, but I think that even the notion of a formal system and a formal proof would go over the head of most basic readers. It’s going in the opposite direction from Bourbaki that I want to take this discussion.
Thanks for the heads-up that they’re doing this sort of thing now, though.
Comment by | February 3, 2007 | Reply
14. [...] few more groups I want to throw out a few more examples of groups before I move deeper into the [...]
Pingback by | February 25, 2007 | Reply
15. [...] Like groups, rings, modules, and other algebraic constructs, we define a category by laying out what’s in [...]
Pingback by | May 22, 2007 | Reply
16. Though six months late, as a non-mathematician I understand what John has written. I think there are many semi-techies on the net that are:
1. Interested in whats going on in math and science
2. Have some training in them
I work in publishing .
Comment by Michael D. Cassidy | July 12, 2007 | Reply
17. Michael’s remark above seems entirely correct to me.
And, I think a good reason for starting with groups rather than say monoids or semigroups is that groups have some fairly spectacular but accessible results, such as Lagrange’s theorem, whereas I’m not aware of a single surprising fact about semigroups that you could explain to to somebody who knew only the semigroup axiom and sixth grade arithmetic, and for monoids there’s nothing but the uniqueness of the identity, which might as well be presented in the intro to groups.
Comment by MathOutsider | October 13, 2007 | Reply
• For every finite semigroup F which has an identity and an inverse, the order of every subsemigroup S, with an identity and inverse both identical to those of F, of F divides the order of S. Or more generally, for any “surprising fact” about groups and subgroups, you can just restate it as a surprising fact about semigroups which have an identity and inverse and subsemigroups which have the same identity and inverse as the semigroup they are subsets of. After all, a group can get defined “merely” as a semigroup which also has an identity and an inverse. For monoids, you can do something similar also.
Additionally, the uniqueness of the identity element for a set with some binary opeartion holds even in the abscence of *any* axiom.
I also don’t know if you’ll find this sueprising or not, but say we measure the length of expressions by the number of symbols they have (parentheses get counted). For any semigroup expression E written in infix notation you can move parentheses around as you like obtaining E’, and for all E, E’, E=E’, no matter what lengths E and E’ have. For semigroup expressions written in prefix notation (and infix and postfix notation also), no matter what length two given expression have, as long two expressions having the same length come as meaningful expressions, and have the variables appear in the same order, they will come as equal expressions. For instance, for all semigroups with binary operation S, I know SSSabcd=SSabScd just because they both come as meangingful, all variables appear in the same order, and they have the same length. Or perhaps better…
SSSSSabScSdSefghi=SSabSSSScdeSSfghi.
Comment by Doug Spoonwood | May 16, 2011 | Reply
18. [...] we know what a group is and what a subgroup is. Today I want to talk about the cosets of a subgroup in a group [...]
Pingback by | October 26, 2007 | Reply
19. [...] Well, they probably mean “group” as “collection”, rather than group. But what is the layman’s meaning for “coset”? They can’t possibly mean the [...]
Pingback by | July 25, 2008 | Reply
20. [...] Well, they probably mean “group” as “collection”, rather than group. But what is the layman’s meaning for “coset”? They can’t possibly mean the [...]
Pingback by | August 28, 2010 | Reply
21. [...] talking about symmetric groups, which are, of course, groups. We have various ways of writing down an element of , including the two-line notation and the cycle [...]
Pingback by | September 8, 2010 | Reply
22. [...] to read. If you try to start the front page you may find yourself lost—best to return to the beginning and work your way [...]
Pingback by | October 11, 2010 | Reply
23. [...] -dimensional manifold equipped with a multiplication and an inversion which satisfy all the usual group axioms (wow, it’s been a while since I wrote that stuff down) and are also smooth maps between [...]
Pingback by | June 6, 2011 | Reply
« Previous | Next »
## About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
• ## RSS Feeds
RSS - Posts
RSS - Comments
• ## Feedback
Got something to say? Anonymous questions, comments, and suggestions at Formspring.me!
%d bloggers like this:
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 25, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9443029761314392, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/statistics/177290-basic-question-distribution-equations.html
|
# Thread:
1. ## basic question on distribution equations
if i have a exponential distribution equation for example, and i want to know the probability of a certain region between x1 and x2, would all i have to do is get the cdf equation, then plugin lambda, then subtract this equation with a x2 value, with the same equation with a x1 value, to get the probability between x1 and x2?
so
and of course i set lamdba.
cdf(x2) - cdf(x1) = probability between x1 and x2?
Does this work with any distribution equation weather discrete or continuous?
2. You need to be careful with your definition of "between x1 and x2" for the discrete case. The following is true for any discrete or continuous distribution:
$P(a < X \leq b) = cdf(b) - cdf(a)$
for most* continuous variables, its true that
$P(a < X) = P (a \leq X)$ and so you can say
$P(a \leq X \leq b) = cdf(b) - cdf(a)$ for those cases
*some continuous variables are on a mixed distribution has a probability mass at a fixed point, so the last part may not work for those.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9394546151161194, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/number-theory/164816-unique-integers-print.html
|
# unique integers
Printable View
• November 29th 2010, 10:50 PM
mremwo
unique integers
what exactly does it mean for integers to be unique? if i am supposed to prove that there exists UNIQUE positive m and n integers under a condition, can m and n be equal at some conditions? for them to be unique, does it just mean there is only one m and only one n each time the condition is met? as in if xsqrd= m, there would not be a unique solution m?
thanks!!!
• November 29th 2010, 11:15 PM
aman_cc
better you post the problem - it would make the context clear, and a lot of times answer to what you have asked follows from there
• November 30th 2010, 05:05 AM
HallsofIvy
Quote:
Originally Posted by mremwo
what exactly does it mean for integers to be unique? if i am supposed to prove that there exists UNIQUE positive m and n integers under a condition, can m and n be equal at some conditions? for them to be unique, does it just mean there is only one m and only one n each time the condition is met? as in if xsqrd= m, there would not be a unique solution m?
thanks!!!
"Unique m and n" means there is only one pair of numbers, (m, n), that satifies the condition. It is quite possible that m and n are the same.
If you mean a pair, (m, n), such that $m^2= n$ then, no, that would not be unique- but not just because, for example, both of (-2, 4) and (2, 4) is such a pair. It is also not unique because (2, 4) and (3, 9) are such pairs.
All times are GMT -8. The time now is 10:51 PM.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9534529447555542, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-algebra/184640-solution-infinite-dimensional-matrix.html
|
# Thread:
1. ## solution to infinite dimensional matrix
Consider the equations Ax=0 where both the number of columns and rows of A are countably infinite and all entries are either 1, 0 or -1.
Is the following statement true or false?
Ax=0 has a nonnegative bounded solution (i.e., Ax=0 for some x=(x1,x2,...) with xi>=0 for all i and sum_i(xi)<infinity)
iff
Ax=0 has a nonnegative bounded solution with at most finitely many nonzeros (i.e., Ax=0 for some x=(x1,x2,...) with xi>=0 for all i and sum_i(xi)<infinity AND xi=0 for all but finitely many i).
I guess the answer is no.
I greatly appreciate any reference. I have checked a few books but can't find the answer.
2. ## Re: solution to infinite dimensional matrix
Originally Posted by vivian6606
Consider the equations Ax=0 where both the number of columns and rows of A are countably infinite and all entries are either 1, 0 or -1.
Is the following statement true or false?
Ax=0 has a nonnegative bounded solution (i.e., Ax=0 for some x=(x1,x2,...) with xi>=0 for all i and sum_i(xi)<infinity)
iff
Ax=0 has a nonnegative bounded solution with at most finitely many nonzeros (i.e., Ax=0 for some x=(x1,x2,...) with xi>=0 for all i and sum_i(xi)<infinity AND xi=0 for all but finitely many i).
I guess the answer is no.
I greatly appreciate any reference. I have checked a few books but can't find the answer.
Suppose that A looks like this:
$A = \begin{bmatrix}1&-1&-1&-1&\ldots\\ 0&1&-1&-1&\ldots\\ 0&0&1&-1&\ldots\\ \vdots&\vdots&&\ddots&\ddots\end{bmatrix}$,
with 1s on the main diagonal, –1s everywhere above it and 0s everywhere below it. Then Ax=0, where $x = \bigl(\tfrac12,\tfrac14,\tfrac18,\ldots,2^{-n},\ldots\bigr).$
But if Ty = 0 with $y_n=0$ whenever n≥N, then by looking at the (N–1)th coordinate of Ay you can see that $y_{N-1}=0$, and by "backwards induction" $y_n=0$ for all n.
3. ## Re: solution to infinite dimensional matrix
THANK YOU SO MUCH!!!
That helps a lot.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9285057187080383, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/32315/has-the-lie-group-e8-really-been-detected-experimentally
|
Has the Lie group E8 really been detected experimentally?
Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
A few months ago there were several math talks about how the Lie group E8 had been detected in some physics experiment. I recently looked up the original paper where this was announced,
"Quantum Criticality in an Ising Chain: Experimental Evidence for Emergent E8 Symmetry", Science 327 (5962): 177–180, doi:10.1126/science.1180085
and was less than convinced by it. The evidence for the detection of E8 appears to be that they found a couple of peaks in some experiment, at points whose ratio is close to the golden ratio, which is apparently a prediction of some paper that I have not yet tracked down. The peaks are quite fuzzy and all one can really say is that their ratio is somewhere around 1.6. This seems to me to be a rather weak reason for claiming detection of a 248 dimensional Lie group; I would guess that a significant percentage of all experimental physics papers have a pair of peaks looking somewhat like this.
Does anyone know enough about the physics to comment on whether the claim is plausible? Or has anyone heard anything more about this from a reliable source? (Most of what I found with google consisted of uninformed blogs and journalists quoting each other.)
Update added later: I had a look at the paper mentioned by Willie Wong below, where Zamoldchikov predicts the expected masses. In fact he predicts there should be 8 peaks, and while the experimental results are consistent with the first 2 peaks, there are no signs of any of the other peaks. My feeling is that the interpretation of the experimental results as confirmation of an E8 symmetry is somewhat overenthusiastic.
-
13
Why is this getting voted down? This seems like a legitimate question in mathematical physics, albeit with an empirical bent. Not to mention, one being asked by an (I assume) Fields medalist. – Daniel Litt Jul 17 2010 at 21:40
30
It seems to me that "being a Fields medalist" doesn't carry any information one way or the other about the quality of the question, which should be the primary reason that people choose their votes. (That said, it seems like a completely reasonable question to me.) – JBL Jul 17 2010 at 22:11
13
And, at the risk of giving truth to a classical estereotype about mathematicians (nicely captured in the joke about sheep who have one black side) "having user name «bocherds»" does not carry any information one way or the other about the quality of "being a Fields medalist" :) If that is not clear, well, I happen to be a Nigerian prince!... – Mariano Suárez-Alvarez Jul 17 2010 at 22:19
5
My views in the philosophy of mathematics and philosophy of physics make this question nonsense. – Alexander Woo Jul 17 2010 at 23:28
12
But Wadim, surely the goal of an answer on MO is to answer the question posed by the original poster, rather than to guess on what "everybody who votes" wants? The latter seems neither desirable nor practical. – Yemon Choi Jul 18 2010 at 11:31
show 11 more comments
3 Answers
This is a great question, but I don't think a reasonable answer can be given in this short space. So I wrote an expository note jointly with a colleague who was trained as a physicist. You can read it by following the link above -- comments are welcome.
But here are a couple of highlights:
1. It's not true that they measured this one number, and so claimed to have detected E8.* There is a bit more data than that. And there is a lot more history! Back around 1990, there was a series of theoretical "deductions" (in the weak sense of physics) investigating what the appropriate theoretical model should be for the situation in the magnet experiment. This led to a unique candidate for a model, one built out of E8. I would say that the experiment corroborated the series of deductions, with the sensational bonus that the deductions led to E8.
2. Which E8 appears in the theoretical model? The obvious answer is that it is the compact real E8 and not just the root system or root lattice. For example, even though the masses of the 8 particles are given as entries in an eigenvector for the Cartan matrix (which makes it sound like it's just the root system), the proof of this statement is a calculation within the compact Lie algebra.
One can argue about both of these points, of course. But these seem to be what the physicists claim and what they use in their papers.
To address Wadim's concerns about fringe science: Whether or not you find the E8 angle interesting or plausible, it seems that the experiment is interesting for entirely different reasons. The experimenters themselves claim that their main achievement is realizing this 1-dimensional quantum Ising model in the laboratory in a situation where the external field can be tuned to be above, below, or at the critical point. The Physics Today article on the subject paraphrases Subir Sachdev:
only recently could researchers reach the high fields and low temperatures needed to access the critical point and have high enough instrumental resolution to resolve the masses of at least some of the quasiparticles they excited. (Temperatures have to be low enough to suppress any impact of thermal fluctuations.)
• Footnote: To be precise: Also, Coldea, the author of this particular article in Science, uses the more-cautious "detected evidence of E8 symmetry" as opposed to the stronger "detected E8".
-
The arxiv note (linked above) is easy enough to read. But if you prefer to see a talk, I'm giving a talk about it at the AMS National Meeting, Saturday, January 8, 2011, at 2pm in Napoleon D3 on the 3rd floor of the Sheraton. – Skip Jan 4 2011 at 20:26
You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
It should be emphasized that this is not the $E_8$ of heterotic string theory or the $E_8$ gauge group in various grand unified theories. It comes out of something much more down-to-earth, namely solid state physics. The $E_8$ in this story is an unexpected symmetry of the two-dimensional Ising model in a magnetic field that was discovered by Zamolodchikov in 1989.
The Ising model was devised as a simple mathematical model that, it was hoped, would exhibit a ferromagnetic critical point. This turned out to be the case, as Onsager showed in 1944. The model is connected to a lot of beautiful mathematics, including Kac-Moody algebras, the Yang-Baxter equation, q-series, Painlevé equations, and conformal field theory. Zamolodchikov's amazing discovery was that the conformal field theory that describes the model at its critical point can remain integrable when one perturbs away from the critical point in certain directions. One of these perturbations corresponds physically to turning on an external magnetic field. It is in this context that the $E_8$ symmetry emerges.
Since the Ising model is a toy model - real ferromagnets are messier (and 3-dimensional!) - it doesn't really need experimental test. The model is important more for the physical and mathematical insight it gives, than for any quantitative information it might yield. Nevertheless, there is a tradition of trying to find real physical systems that embody the simple microscopic picture of the Ising model. (Such systems have to behave effectively as if they are one or two dimensional.) The recent work lies within this tradition, and they have managed to verify one of the consequences of Zamolodchikov's work, namely that the mass ratio of the two lightest quasiparticles is $\phi$. I have no reason to doubt that what Coldea et al. measured in their experiment is a genuine manifestation of the $E_8$ symmetry of the model.
One consequence of the $E_8$ symmetry is that there are eight species of particles, with definite mass ratios. If I had to guess, I would say that it's going to be very difficult to observe the peaks corresponding to the remaining particles. The third particle has a mass very close to twice that of the lightest particle, which means that that peak will be buried under lots of nearby two-particle states. Furthermore, the particles with masses higher than two will be unstable. (The field theory studied by Zamolodchikov is integrable and therefore does not have unstable particles, but any attempt to realize the model experimentally will surely destroy the integrability and therefore the stability of the higher-mass particles.)
Note: The one-dimensional quantum spin chain model in the Science article is described by a Hamiltonian that has the same eigenvectors as the transfer matrices of the classical two-dimensional square lattice Ising model solved by Onsager. So for the purposes of this discussion they are equivalent.
Addendum: (This an answer to the comment of Victor Prostak that was too large to fit in the comment box.) It is widely believed (but not a theorem as far as I know) that there are only two integrable perturbations: Onsager's thermal perturbation, which has a very simple symmetry, and Zamolodchikov's magnetic perturbation with the $E_8$ symmetry. The model is not expected to be integrable if one does both perturbations simultaneously, but the question is physically just as interesting, and was studied in B.M. McCoy and T.T. Wu, Two-dimensional Ising field theory in a magnetic field: Breakup of the cut in the two-point function, Phys. Rev. D 18, 1259–1267 (1978). I am not aware of any computation of the mass ratios in the general case. I wouldn't expect any exceptional symmetries to show up.
On the other hand, exceptional groups do show up in related conformal field theories. A perturbation of the $\mathcal{M}(4,5)$ minimal model with central charge $c=7/10$ has an $E_7$ symmetry, and a perturbation of the $\mathcal{M}(6,7)$ minimal model with central charge $c=6/7$ has an $E_6$ symmetry. These models describe different kinds of critical points than the $c=1/2$ model does, and so are experimentally distinguishable. In addition, the mass ratios of the quasiparticles should be different.
-
10
+1. A very good answer. – José Figueroa-O'Farrill Jul 18 2010 at 6:04
3
Can you, please, comment on whether it's only $E_8$ that shows up in Zamolodchikov's integrable perturbation, or could it be another simple Lie group? I know how Ising model is related to $c=1/2$ CFT (described holomorphically by BPZ), but the appearance of exceptional Lie groups in various models of this type always struck me as a bit numerological. If other symmetries are indeed possible, how would one experimentally differentiate between the possible groups? – Victor Protsak Jul 18 2010 at 9:26
5
This gives a good explanation of why the existence of an E8 symmetry implies that you get two (or rather 8) peaks with a certain ratio. However my question was about the opposite implication: does the existence of these two peaks imply an E8 symmetry? As Victor Protsak pointed out, there could be lots of other much simpler explanations for these peaks. Experimental detection of E8 symmetry is an extraordinary claim, so I would like to see some extraordinary evidence for it before believing it. – Richard Borcherds Jul 18 2010 at 17:12
5
I have to chime in with a rather pedestrian comment: experiments can support, but not prove a theoretical claim. (On the other hand experiments can certainly disprove a theory.) How convinced one is of a theory based on its supporting evidence is, as far as I can tell, usually a very subjective issue. – Willie Wong Jul 18 2010 at 18:23
2
Will I had a question about your answer, but it was too involved to put in a comment so I asked it as a regular question: mathoverflow.net/questions/32432 – Noah Snyder Jul 19 2010 at 1:08
show 2 more comments
I think the answer is no they didn't detect the E_8 Lie group, though they did detect a symmetry related to the E_8 lattice. (Or rather, they collected some evidence which is consistent with the theoretical prediction that the system have a symmetry related to the E_8 lattice.)
Let me try to make that more precise. When people say that there's and SU(3) symmetry to QCD which explains the eightfold way, what are they saying? They're saying that the particles can be identified with certain vectors in certain representations of the group SU(3). For example, the quarks transform like the standard rep of SU(3), the anti-quarks like the dual rep of SU(3), and the meson octet transforms like the adjoint representation of SU(3).
If you were to discover a physical system with an E_8 symmetry then the particles should correspond to vectors in a certain representation of E_8. For example, you might expect to find particles corresponding to the weight vectors of the adjoint representation of E_8. That would mean 248 different particles!
Zamolodchikov's E_8 symmetry is an entirely different sort of beast. Instead of 248 particles there are only 8 particles. These particles correspond only to the 8 simple roots of E_8. So the system is closely related to the E_8 root system, but it is not the sort of thing one usually thinks of when thinking of a system which exhibits symmetry with respect to the E_8 Lie group.
In some ways this is good. Certainly you would need a lot less evidence to suggest that a system exhibited icosahedral symmetry (which hardly seems surprising since it's a relatively small group) than you would to think a system showed symmetry with respect to the affine Lie algebra E~_8 (which is huge and complicated). Nonetheless the Grothendieck group of the icosahedral group A_5 can be identified with the affine E~_8 root system. Similarly you should not be too surprised to find that a system exhibits Zamolodchikov's type of "E_8 symmetry" which just says that the particles are the objects in a small fusion category whose Grothendieck group can be identified with the E_8 lattice.
All of this should be taken with a grain of salt. The Science paper certainly claims "Remarkably, the simplest of systems, the Ising chain, prom- ises a very complex symmetry, described mathematically by the E8 Lie group." I was unable to find a claim in the theoretical literature that the E_8 Lie group occurs, rather than just the E_8 lattice. Nonetheless either I've seriously misunderstood something, or the authors are being imprecise in their use of mathematical language. I'm not terribly confident in my ability to understand physics, but I'm also not terribly confident that physicists use mathematical language in an extremely precise fashion which agrees exactly with how mathematicians use that language.
-
2
The general viewpoint Noah is explaining here seems quite related to the recent article in the AMS Bulletin (July 2010) by John Baez and John Huerta. As someone who knows little physics, I found their article pretty interesting and understandable (although I haven't finished reading it yet...). The article is available here: ams.org/journals/bull/2010-47-03/… – Dan Ramras Jul 19 2010 at 21:22
Here's another example. I don't think you'd hear people call something an SU(3) symmetry just because there are two particle types whose energy levels correspond to the Frobenius-Perron eigenvector for the Dynkin diagram A_2 (that is two particles with the same energy). (Though I could be wrong, if anyone has seen such examples please share them!) – Noah Snyder Jul 19 2010 at 21:53
You may be right that only root system, and not the algebra itself, is relevant to the Ising model - I can't say for certain at the moment. If there is a connection with the Lie group $E_8$, it would probably be a consequence of the fact that the the minimal CFT that describes the Ising critical point can be generated by a coset construction involving WZW models that are invariant under $E_8$. How this relates to what happens when the system is perturbed away from criticality is something I don't yet understand. – Will Orrick Jul 20 2010 at 2:47
Could you give me a reference for something that says that the E_8 CFT has to do with a coset construction involving E_8? For example, in Kawihashi and Longo's classification of c<1 CFTs (arxiv.org/abs/math-ph/0201015) they quote Boeckenhauer-Evans saying that the (E_8,A_30) modular invariant should come from a coset SU(2)_29 < (G2)_1 x SU(2)_1 while the (A_28,E_8) does not come from any coset construction. – Noah Snyder Jul 20 2010 at 3:13
I may not have made it clear that the Ising CFT is the $A_1$ CFT. It has two distinct coset constructions: $\hat{su}(2)_2<\hat{su}(2)_1\times\hat{su}(2)_1$ and $(\hat{E}_8)_2<(\hat{E}_8)_1\times(\hat{E}_8)_1$. See Table 3b in P. Bowcock and P. Goddard, Virasoro algebras with central charge $c<1$, Nucl. Phys. B 285[FS19] (1987) 651-670, or Section 18.4.1 of Di Francesco, Mathieu, and Senechal's book, or Section 14.2.2 of G. Mussardo's book, Statistical Field Theory, which contains the most detailed information about this topic. – Will Orrick Jul 20 2010 at 4:44
show 1 more comment
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9525761604309082, "perplexity_flag": "middle"}
|
http://nrich.maths.org/2364/note
|
### Rationals Between
What fractions can you find between the square roots of 56 and 58?
### Root to Poly
Find the polynomial p(x) with integer coefficients such that one solution of the equation p(x)=0 is $1+\sqrt 2+\sqrt 3$.
### Consecutive Squares
The squares of any 8 consecutive numbers can be arranged into two sets of four numbers with the same sum. True of false?
# Lost in Space
##### Stage: 4 Challenge Level:
The idea of routes through triangular mazes can be adapted for a range of similar problems.
Is it possible to create different arrangements of numbers to give different families of solutions?
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9161034822463989, "perplexity_flag": "middle"}
|
http://en.wikibooks.org/wiki/F_Sharp_Programming/Recursion
|
# F Sharp Programming/Recursion
F# : Recursion and Recursive Functions
A recursive function is a function which calls itself. Interestingly, in contrast to many other languages, functions in F# are not recursive by default. A programmer needs to explicitly mark a function as recursive using the `rec` keyword:
```let rec someFunction = ...
```
## Examples
### Factorial in F#
The factorial of a non-negative integer n, denoted by n!, is the product of all positive integers less than or equal to n. For example, 6! = 6 * 5 * 4 * 3 * 2 * 1 = 720.
In mathematics, the factorial is defined as follows:
$fact(n) = \begin{cases} 1 & \mbox{if } n = 0 \\ n \times fact(n-1) & \mbox{if } n > 0 \\ \end{cases}$
Naturally, we'd calculate a factorial by hand using the following:
```fact(6) =
= 6 * fact(6 - 1)
= 6 * 5 * fact(5 - 1)
= 6 * 5 * 4 * fact(4 - 1)
= 6 * 5 * 4 * 3 * fact(3 - 1)
= 6 * 5 * 4 * 3 * 2 * fact(2 - 1)
= 6 * 5 * 4 * 3 * 2 * 1 * fact(1 - 1)
= 6 * 5 * 4 * 3 * 2 * 1 * 1
= 720
```
In F#, the factorial function can be written concisely as follows:
```let rec fact x =
if x < 1 then 1
else x * fact (x - 1)
```
Here's a complete program:
```open System
let rec fact x =
if x < 1 then 1
else x * fact (x - 1)
(* // can also be written using pattern matching syntax:
let rec fact = function
| n when n < 1 -> 1
| n -> n * fact (n - 1) *)
Console.WriteLine(fact 6)
```
### Greatest Common Divisor (GCD)
The greatest common divisor, or GCD function, calculates the largest integer number which evenly divides two other integers. For example, largest number that evenly divides 259 and 111 is 37, denoted GCD(259, 111) = 37.
Euclid discovered a remarkably simple recursive algorithm for calculating the GCD of two numbers:
$gcd(x,y) = \begin{cases} x & \mbox{if } y = 0 \\ gcd(y, remainder(x,y)) & \mbox{if } x >= y \mbox{ and } y > 0 \\ \end{cases}$
To calculate this by hand, we'd write:
```gcd(259, 111) = gcd(111, 259 % 111)
= gcd(111, 37)
= gcd(37, 111 % 37)
= gcd(37, 0)
= 37
```
In F#, we can use the `%` (modulus) operator to calculate the remainder of two numbers, so naturally we can define the GCD function in F# as follows:
```open System
let rec gcd x y =
if y = 0 then x
else gcd y (x % y)
Console.WriteLine(gcd 259 111) // prints 37
```
## Tail Recursion
Let's say we have a function `A` which, at some point, calls function `B`. When `B` finishes executing, the CPU must continue executing `A` from the point where it left off. To "remember" where to return, the function `A` passes a return address as an extra argument to `B` on the stack; `B` jumps back to the return address when it finishes executing. This means calling a function, even one that doesn't take any parameters, consumes stack space, and its extremely easy for a recursive function to consume all of the available memory on the stack.
A tail recursive function is a special case of recursion in which the last instruction executed in the method is the recursive call. F# and many other functional languages can optimize tail recursive functions; since no extra work is performed after the recursive call, there is no need for the function to remember where it came from, and hence no reason to allocate additional memory on the stack.
F# optimizes tail-recursive functions by telling the CLR to drop the current stack frame before executing the target function. As a result, tail-recursive functions can recurse indefinitely without consuming stack space.
Here's non-tail recursive function:
```> let rec count n =
if n = 1000000 then
printfn "done"
else
if n % 1000 = 0 then
printfn "n: %i" n
count (n + 1) (* recursive call *)
() (* <-- This function is not tail recursive
because it performs extra work (by
returning unit) after
the recursive call is invoked. *);;
val count : int -> unit
> count 0;;
n: 0
n: 1000
n: 2000
n: 3000
...
n: 58000
n: 59000
Session termination detected. Press Enter to restart.
Process is terminated due to StackOverflowException.
```
Let's see what happens if we make the function properly tail-recursive:
```> let rec count n =
if n = 1000000 then
printfn "done"
else
if n % 1000 = 0 then
printfn "n: %i" n
count (n + 1) (* recursive call *);;
val count : int -> unit
> count 0;;
n: 0
n: 1000
n: 2000
n: 3000
n: 4000
...
n: 995000
n: 996000
n: 997000
n: 998000
n: 999000
done
```
If there was no check for `n = 1000000`, the function would run indefinitely. Its important to ensure that all recursive function have a base case to ensure they terminate eventually.
### How to Write Tail-Recursive Functions
Let's imagine that, for our own amusement, we wanted to implement a multiplication function in terms of the more fundamental function of addition. For example, we know that `6 * 4` is the same as `6 + 6 + 6 + 6`, or more generally we can define multiplication recursively as `M(a, b) = a + M(a, b - 1), b > 1`. In F#, we'd write this function as:
```let rec slowMultiply a b =
if b > 1 then
a + slowMultiply a (b - 1)
else
a
```
It may not be immediately obvious, but this function is not tail recursive. It might be more obvious if we rewrote the function as follows:
```let rec slowMultiply a b =
if b > 1 then
let intermediate = slowMultiply a (b - 1) (* recursion *)
let result = a + intermediate (* <-- additional operations *)
result
else a
```
The reason it is not tail recursive is because after the recursive call to `slowMultiply`, the result of the recursion has to added to `a`. Remember tail recursion needs the last operation to be the recursion.
Since the `slowMultiply` function isn't tail recursive, it throws a `StackOverFlowException` for inputs which result in very deep recursion:
```> let rec slowMultiply a b =
if b > 1 then
a + slowMultiply a (b - 1)
else
a;;
val slowMultiply : int -> int -> int
> slowMultiply 3 9;;
val it : int = 27
> slowMultiply 2 14;;
val it : int = 28
> slowMultiply 1 100000;;
Process is terminated due to StackOverflowException.
Session termination detected. Press Enter to restart.
```
Its possible to re-write most recursive functions into their tail-recursive forms using an accumulating parameter:
```> let slowMultiply a b =
let rec loop acc counter =
if counter > 1 then
loop (acc + a) (counter - 1) (* tail recursive *)
else
acc
loop a b;;
val slowMultiply : int -> int -> int
> slowMultiply 3 9;;
val it : int = 27
> slowMultiply 2 14;;
val it : int = 28
> slowMultiply 1 100000;;
val it : int = 100000
```
The accumulator parameter in the inner loop holds the state our function throughout each recursive iteration.
## Exercises
Solutions.
### Faster Fib Function
The following function calculates the nth number in the Fibonacci sequence:
```let rec fib = function
| n when n=0I -> 0I
| n when n=1I -> 1I
| n -> fib(n - 1I) + fib(n - 2I)
```
Note: The function above has the type `val fib : bigint -> bigint`. Previously, we've been using the `int` or `System.Int32` type to represent numbers, but this type has a maximum value of `2,147,483,647`. The type `bigint` is used for arbitrary size integers such as integers with billions of digits. The maximum value of `bigint` is constrained only by the available memory on a users machine, but for most practical computing purposes we can say this type is boundless.
The function above is neither tail-recursive nor particularly efficient with a computational complexity O(2n). The tail-recursive form of this function has a computational complexity of O(n). Re-write the function above so that it's tail recursive.
You can verify the correctness of your function using the following:
```fib(0I) = 0
fib(1I) = 1
fib(2I) = 1
fib(3I) = 2
fib(4I) = 3
fib(5I) = 5
fib(10I) = 55
fib(100I) = 354224848179261915075
```
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8911892771720886, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/118057/relation-between-singular-values-of-matrices-and-their-products
|
## Relation between singular values of matrices and their products
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Hello everybody, Is there any explicit relation between the singular values $\lambda_X$ and $\lambda_Y$ of two same size matrices $X$ and $Y$, respectively, and the singular values $\lambda_{XY^t}$ of the matrix $XY^t$? Otherwise said, is there a function $f$ such that $\lambda_{XY^t}=f(\lambda_X , \lambda_Y)$?
Thank you Riadh
-
## 1 Answer
No. It matters how the singular vectors interact.
For example, let $X$ be the diagonal matrix with diagonal entries 1 and 2. Let $Y_1=X$, and let $Y_2$ be the diagonal matrix with diagonal entries 2 and 1. Then $XY_1$ and $XY_2$ have different singular values.
However, if you have estimates on the singular vectors, you may get estimates on the singular values of the product.
-
Thank you Riadh – Riadh Jan 4 at 16:00
1
Even though there is no functional relation between these singular values, there is a set of inequalities, called Horn inequalities, which completely describes possible singular values for $XY^t$ if the singular values of $X$ and $Y$ are fixed, see e.g. Fulton's survey ams.org/journals/bull/2000-37-03/… (the result is a corollary of Thompson's conjecture solved by Klyachko and work of Klyachko, Knutson and Tao on the eigenvalue problem). – Misha Jan 4 at 19:33
Thank you a lot for your help Riadh – Riadh Jan 4 at 22:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8454903364181519, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/36353/what-is-the-most-natural-classical-polynomial-complexity-class-that-includes-all
|
# What is the most natural classical polynomial complexity class that includes all of BQP and NP?
Since we know that there are some oracle problems which can be solved on a quantum computer, but not on an NP machine with the same oracle, the idea of nondeterministic (i.e. infinitely parallel) machine is not sufficient to describe what is going on in quantum mechanics.
The question then arises--- what is? What is the natural classical machine which can simulate a quantum computer efficiently in polynomial time? What is the complexity class of this natural machine?
-
## 2 Answers
The smallest 'simple' complexity class which is known to contain BQP (and suspected to do so strictly) is the class PP. As PP is contained in PSPACE, this yields a potentially tighter algorithm in your hypothetical machine model.
Translating from a more traditional description of PP in terms of nondeterministic Turing machines, a generic computation for solving a PP problem (which are 'yes/no' problems, like those in P and in NP) looks like some branching program of the sort you're interested in, and where each of the 'threads' submits a vote for whether the answer is 'yes' or 'no'. If the majority (fifty percent plus one) vote 'yes', then the answer which the machine produces is 'yes'; otherwise it produces a 'no' answer. It is straightforward to show that PP contains NP; and PP was proven to contain BQP by
however, I find that a simpler approach to the proof is presented by
which, like the traditional proof that BQP is contained in PSPACE, uses an approach in terms of a sum-over-paths; but unlike that approach restricts itself to paths with weights $\pm 2^{-n/2}$.
-
Yes, this is the best answer. I still like SHM-P, because it's a nice complexity class that nobody studies because of their codeless upbringing. – Ron Maimon Sep 14 '12 at 0:41
First let us note that if you extend C to infinite memory, and consider running UNIX on the Turing machine, then an NP machine is one which is allowed to the UNIX fork instruction, and produce two independent processes with a duplicated copies of memory, with no time cost, and the program terminates when exactly one of the outputs terminates.
That this is true is easy to prove: Given any nondeterministic automaton, fork on each step according to the number of outcomes. When any fork halts, you kill all the other processes. This simulates a nondeterministic machine with "fork". to go the other way, simulate UNIX on your nondeterministic machine, and have a nondeterministic step at each "fork". They are equivalent concepts.
The natural generalization of this is to use the UNIX threading instruction to produce parallel threads rather than parallel processes. In this case, the processes can share memory with each other, but one has to be careful, because exponentially many processes will be using exponentially much memory, so they can't search all of it. With less risk of mistake, you can allow the processes to send fixed length messages to another process, whose process label they already know. This is equivalent to allowing any pair to share memory, since syncing all the memory you used until time t only takes time polynomial in t.
Observation: A probabilistic version of this machine can simulate any quantum process.
Given a finite size exponentiated-Hamiltonian U matrix on N states, you want to compute the quantum evolution to time T, then reduce the state according to a measurement, then compute the quantum evolution again. To do this, you fork a machine to simulate each path in the path integral, and keep track of it's U-matrix weight. You keep track of the final state of each forked process.
Then you congeal the processes by sending a message to the nearest processor with the same final state, and adding your amplitudes, shutting down the processor with the smaller number. This congeals your state to half the states. Then you congeal again, and in log(T) steps, you know the amplitude for every state. This also allows you to rotate by a Unitary you can construct before making a measurement.
Then you square this amplitude for each state, and you pick another process with a square amplitude, and pick one of the two at random according to the square amplitude. Again, after log steps, you have picked one of the processes according to the square amplitude.
This means that BQP is inside SHM-P. SHM-P includes NP so it is not reasonable that it is BQP. It shouldn't be P-space, since you are still limited to polynomial time computation on any of the threads.
-
This appears to be the classic sum-over-paths argument for why BQP is contained in PSPACE (a result which Scott Aaronson has more than once described as the grounds for Feynmann's Nobel Prize). Whether this same algorithm suffices to show a stronger containment is not clear. – Niel de Beaudrap Sep 13 '12 at 18:34
As an aside, regarding your machine model: how do you ensure that the first process to 'halt' is one which yields an outcome of 'success' (e.g. finding a satisfying assignment to SAT)? Or, if the failing threads do not halt, how do you treat the case where none of the branches will succeed (as for an unsatisfiable formula)? – Niel de Beaudrap Sep 13 '12 at 18:35
@NieldeBeaudrap: 1. You execute halt instruction on a process only when you succeed 2. you have a global process that just counts polynomial time and halts with "fail" if nothing else halts first. Regarding the Feynman sum-over-paths, this is also obvious from matrix quantum mechanics too or old-fasioned time-dependent perturbation theory, Feynman's contributions are deeper--- the relation to imaginary time, the relativistic particle formalism, and the first relativistically invariant regulator. The same argument shows containment in SHM-P, as I showed above, and SHM-P is not PSPACE. – Ron Maimon Sep 13 '12 at 18:39
Fair remarks for the machine model. Regarding Scott's remarks, naturally Feynmann showed more: I should have noted that his statements of that sort are tongue-in-cheek. --- Do I take it that you consider this a proof that P is strictly contained in PSPACE (given that you describe something which can simulate any algorithm in P, and which you believe cannot solve PSPACE complete problems)? – Niel de Beaudrap Sep 13 '12 at 18:52
@NieldeBeaudrap: regarding P<PSPACE, of course this doesn't prove anything of the sort! What kind of nonsense is this--- it is not a proof of anything except that SHM-P simulates BQP in polynomial time. There is no progress in this on the main open problems. – Ron Maimon Sep 13 '12 at 19:03
show 9 more comments
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.920042097568512, "perplexity_flag": "middle"}
|
http://unapologetic.wordpress.com/2008/07/31/roots-of-polynomials-ii/?like=1&source=post_flair&_wpnonce=5d5a17dad5
|
# The Unapologetic Mathematician
## Roots of Polynomials II
This one might have to suffice for two days. The movers are almost done loading the truck here in New Orleans, and I’ll be taking today and tomorrow to drive to Maryland.
We can actually tease out more information from the factorization we constructed yesterday. Bur first we need a little definition.
Remember that when we set up the algebra of polynomials we noted that the coefficients have to be all zero after some finite number of them. Thus there must be a greatest nonzero coefficient $c_n$. The index $n$ corresponding to this coefficient we call the “degree” of the polynomial. If all the coefficients are zero — giving the zero polynomial — then a common convention is to assign it degree $-\infty$. This actually isn’t completely arbitrary, but the details won’t concern us until later.
So, armed with this information, look at how we constructed the factorization $p=(X-x)q$. We replaced each term $c_kX^k$ in $p$ with the term $c_k(X^k-x^k)$. Then we factored out $(X-x)$ from this term, giving $c_k(X^{k-1}+X^{k-2}x+...+Xx^{k-2}+x^{k-1})$. So the highest power of $X$ that shows up in this term (with a nonzero coefficient) is $c_kX^{k-1}$. And the highest power coming from all the terms of $p$ will be $c_nX^{n-1}$. The power $X^{n-1}$ shows up only once in the expression for $q$, so there’s no way for two such terms to add together and make its coefficient turn out to be zero, and no higher power of $X$ ever shows up at all. Thus the degree of $q$ is one less than that of $p$.
So what does this gain us? Well, each time we find a root we can factor out a term like $(X-x)$, which reduces the degree by ${1}$. So if $p$ has degree $n$ there can only be at most $n$ roots!
A nonzero constant polynomial $p=c_0$ has degree ${0}$, but it also has no roots! Perfect.
A linear polynomial $p=c_0+c_1X$ has degree ${1}$, and it has exactly one root: $-\frac{c_0}{c_1}$.
Now let’s assume that our statement is true for all polynomials of degree $n$: they have $n$ or fewer roots. Then given a polynomial $p$ of degree $n+1$ either $p$ has a root $x$ or it doesn’t. If it doesn’t, then we’re already done. If it does, then we can factor $p=(X-x)q$, where $p$ has degree $n$. But then $q$ can have at most $n$ roots, and thus $p$ can have at most $n+1$ roots!
A nice little corollary of this is that if our base field $\mathbb{F}$ is infinite (like it is for the most familiar examples) then only the zero polynomial can give us the zero function when we evaluate it at various field elements. That is, if $p(x)=0$ for all $x\in\mathbb{F}$, then $p=0$. This must be true because $p$ has an infinite number of roots, and so no finite degree polynomial can possibly have that many roots. The only possibility left is the zero polynomial.
Just to be clear, though, let’s look at this one counterexample. Think about the field $\mathbb{Z}_3$ we used when we talked about Set. The polynomial $p=X^3-X$ is not the zero polynomial, but $p(x)$ is the zero function. Indeed $p(0)=0^3-0=0$, $p(1)=1^3-1=0$, and $p(2)=2^3-2=8-2=6$, which is divisible by $3$, and so is the same as ${0}$ in this field.
### Like this:
Posted by John Armstrong | Algebra, Polynomials, Ring theory
## 3 Comments »
1. [...] now. Instead of using the absolute value of an integer to measure its size, we’ll use the degree of a polynomial to measure its [...]
Pingback by | August 4, 2008 | Reply
2. [...] Okay, we saw that roots of polynomials exactly correspond to linear factors, and that a polynomial can have at most as many roots as its degree. In fact, there’s an expectation that a polynomial of degree will have exactly roots. Today [...]
Pingback by | August 6, 2008 | Reply
3. Highest degree of polynomial awesomeness!
Comment by | July 21, 2012 | Reply
« Previous | Next »
## About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
• ## Feedback
Got something to say? Anonymous questions, comments, and suggestions at Formspring.me!
%d bloggers like this:
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 54, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9380277991294861, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/119907/how-to-handle-extremes-in-m-m-c-system-in-the-queue-theory
|
## How to handle “extremes” in M/M/C system in the queue theory?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Hi, I'm beginning to learn queue theory and I have a question. I want try use the queue theory to estimate the indicated server amount to handle operations in a queue. My big problem is that the classical equations for the M/M/C system that I using return "expected" values only to a minimal server range.
For example: If I use $\lambda=15$, $\mu=1$ and $c=10$ in the site http://www.supositorio.com/rcalc/rcalclite.htm, it give me a error, because $c\cdot\mu < \lambda$. But, if I use the site http://www.stat.auckland.ac.nz/~stats255/qsim/qsim.html I can make this evaluation, it give me $W = 19.257$.
My equations implementations give me the same results of the first site, but, I need a implementation that give me results equals from second site. Anybody know if this second site implementation is correct from the queue theory perspective? Where I can find the equations that implements this second approach?
Thanks by help!
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8399043083190918, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/22934/a-question-of-erds-on-equidistribution
|
## A question of Erdős on equidistribution
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
In his book Metric Number Theory, Glyn Harman mentions the following problem he attributes to Erdős:
Let $f(\alpha)$ be a bounded measurable function with period 1. Is it true that
$$\lim_{N\rightarrow\infty} \frac{1}{\log N} \sum_{n=1}^N \frac{1}{n}f(n\alpha) = \int_0^1 f(x) dx$$
for almost all $\alpha$,
writing "so far as the author is aware, this question remains open."
Harman's book is from 1997. Does anyone know the current status of the problem?
Motivation, for the curious
We lose no generality in assuming $f$ has mean $0$. The rough idea is that for almost all $\alpha$, $n\alpha$ will be equidistributed $(\mod 1)$ in a strong enough way to cause a great deal of cancellation in the sum, so in particular we might guess the sum is $o(\log N)$. It is a weaker version of a more classical conjecture of Khintchine that
$$\lim_{N\rightarrow\infty} \frac{1}{N} \sum_{n=1}^N f(n\alpha) = \int_0^1 f(x) dx$$
for almost all $\alpha$, where $f$ is as above. This is known to be false. (Of course, if $f$ is continuous it is true, for all irrational $\alpha$ even.)
-
No idea what the answer is, but thanks for drawing my attention to this nice problem. (I'm hoping someone will say that it's still open.) How easy is the counterexample to Khintchine's conjecture? – gowers Apr 29 2010 at 6:55
The integrand should be f(\alpha x) dx, I think. – TonyK Apr 29 2010 at 8:23
@TonyK: I'm using $\alpha$ as both a dummy variable on the RHS, and an actual variable on the LHS. In fairness, I'm following Harman (which is otherwise a great book). I'll edit this above. @gowers: There is a counterexample due to JM Marstrand of Khintchine's conjecture given in Harman's book, for f an indicator function of some measurable set. It runs a few pages and is mainly arithmetical, rather than analytic. I don't know if it's the only counterexample known though. According to Harman it does not work in Erdős's question. (I haven't yet checked this myself.) – Brad Rodgers Apr 29 2010 at 17:54
Ah, so implicit in the conjecture is that the LHS takes the same value, independent of alpha, for almost all alpha. Is that right? – TonyK Apr 29 2010 at 18:05
This is still referred to as an open question in arxiv.org/abs/math/0312440 – Gjergji Zaimi Apr 29 2010 at 21:37
show 1 more comment
## 1 Answer
The statement was shown to be false by J. Bourgain in a paper published in 1988 (Almost Sure Convergence and Bounded Entropy, doi:10.1007/BF02765022). Well before either Harman's 1997 book and the 2003 paper Gjergji mentioned in the comment, which both say that it is an open problem!
From page 2 of Bourgain's paper
As further application of our method, a problem due to A. Bellow and a question raised by P. Erdős are settled.
And, further down (using ${\bf T}={\bf R}/{\bf Z}$),
The problem of Erdős mentioned above deals with weaker versions of the Khintchine problem. In particular he raised the question whether given a measurable subset $A$ of $\bf T$, then for almost all $x$ the set $\lbrace j\in{\bf Z }_+\mid jx\in A\rbrace$ has a logarithmic density, i.e. $$\frac{1}{\log n}\sum_{\substack{j\le n\\ jx\in A}}\frac1j\to\vert A\vert.$$ We will disprove this fact.
I can't vouch that Bourgain's paper is free of errors, as I have only just found it now and haven't read through it all in detail. However, Bourgain's result is also (very briefly) referred to in this 2004 paper, http://arxiv.org/abs/math/0409001v1, so I assume it is considered to be valid.
-
Bourgain's paper does seem to be quite well known. See also arxiv.org/abs/math/0611621v2 (Bourgain's Entropy Estimates Revisited) which re-proves of Bourgain's main results, although it doesn't treat the example asked for here. – George Lowther Jun 6 2010 at 2:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9342190623283386, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-algebra/160679-skew-symmetric-metrix.html
|
# Thread:
1. ## skew-symmetric metrix
Hi i need some help solving the following problem:
What i know:
The following does not hold for p=2 since an inverse does not hold.
but find it diffecult to show that this holds for primes p> 3.
Thanks
2. Originally Posted by 1234567
Hi i need some help solving the following problem:
What i know:
The following does not hold for p=2 since an inverse does not hold.
but find it diffecult to show that this holds for primes p> 3.
Thanks
For a prime $p\geq 3$ we have that $1\neq -1$ and thus it's easy to show that $dim S_n+dim A_n=dim M_n\,,\,\,M_n=$ all the square nxn matrices over the field...complete the proof now.
Tonio
3. how would one use $dim S_n+dim A_n=dim M_n\,,\,\,M_n=$ all the square nxn matrices over the field,
I thought you hav to show
1. $S_n+A_n= M_n\,,$
2. $S_n n A_n= {0}\,,$
4. Originally Posted by 1234567
how would one use $dim S_n+dim A_n=dim M_n\,,\,\,M_n=$ all the square nxn matrices over the field,
I thought you hav to show
1. $S_n+A_n= M_n\,,$
2. $S_n n A_n= {0}\,,$
The second condition is trivial, so $\dim S_n+\dim A_n=\dim(S_n+A_n)-\dim(S_n\cap A_n)=\dim(S_n+A_n)$ , and thus
proving what I told you we get $S_n+A_n=M_n$
Tonio
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9169449210166931, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/228537/prove-that-function-f-is-bounded-below?answertab=votes
|
# Prove that function f is bounded below
$f:\mathbb{R}^k \rightarrow \mathbb{R}$ is differentiable function such that $\sum_{i=1}^{k} x_i \frac{df}{dx_i}(\mathrm{x} )\ge 0$ for $\mathrm{x}=(x_1,x_2,...,x_k) \in \mathbb{R} ^k$. Prove that function f is bounded below.
All what I've got: $\sum_{i=1}^{k} x_i \frac{df}{dx_i}(\mathrm{x} )= (x_1,x_2,...,x_k) \cdot (\nabla f(x))=\nabla _w f(\mathrm{x})$, hence $\nabla _w f(\mathrm{x}) \ge 0$. I think that it's important remark, but I don't know how to end this solution.
-
## 2 Answers
Let $x \in \mathbb R^k$, consider the function $g \colon [0,1] \to \mathbb R$, given by $g(t) = f(tx)$. Then $g$ is differentiable and we have $g'(t) = f'(tx)x$, so for $t > 0$ we have $g'(t) = \frac 1t f'(tx)tx = \frac 1t\sum_{i=1}^k\frac{\partial f}{\partial x_i}(tx)tx_i \ge 0$. Hence \begin{align*} f(x) &= g(1)\\ &= g(0) + \int_0^1 g'(t)\,dt\\ &\ge g(0)\\ &= f(0) \end{align*} So $f$ is bounded below by $f(0)$.
-
A sketch of solution. You'll have to fill in the gaps.
Try fixing $x\ne 0$ and evaluating $f$ along the half-line $\{tx\ :\ t\ge 0\}$. As you have already noticed, the resulting function of $t$ is nondecreasing. This means that, whatever $y\in \mathbb{R}^k$ you take outside of a fixed closed ball (say, of radius one) centered at the origin, you will have $$f(y)\ge f(\text{some point inside the ball}).$$ Taking infimums, you can conclude that $$\inf_{\mathbb{R}^k} f(y)=\inf_{\{\lvert y \rvert \le 1\}} f(y).$$ Now you only have to use the continuity of $f$.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9538257718086243, "perplexity_flag": "head"}
|
http://mathhelpforum.com/algebra/133871-finding-intersection-two-functions.html
|
# Thread:
1. ## finding intersection of two functions
Here are the functions:
(G of k)(x) = x^2-1
f(x) = x^2/(x^2+1)
how do i find the intersecrions?
2. Originally Posted by Anemori
Here are the functions:
(G of k)(x) = x^2-1
f(x) = x^2/(x^2+1)
how do i find the intersecrions?
Start by solving $x^2 - 1 = \frac{x^2}{x^2 + 1}$.
3. Originally Posted by Anemori
Here are the functions:
(G of k)(x) = x^2-1
f(x) = x^2/(x^2+1)
how do i find the intersecrions?
The values of the functions must be equal at the point of intersection. Thus:
$x^2-1=\dfrac{x^2}{x^2+1}$
Multiply both sides by $x^2+1$ to get rid of the fraction and move all terms to the LHS:
$x^4-x^2-1=0$
This is a quadratic equation in x². Use the substitution $u = x^2~\implies~|x|=\sqrt{u}$. The equation becomes:
$u^2-u-1=0$
Solve for u, afterwards determine x, plug in the value of x in one of the equations of a function to calculate the y-coordinate of the point of intersection.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8530997633934021, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/28761?sort=newest
|
## Minimum Spanning Tree of a Weighted Graph
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
I have a connected graph $G=(V,E)$ in $n$ vertices. The edge weights are non-negative and form a metric space, thus for vertices $u,v,w \in V$ , such that $(u,v), (v,w), (w,u)\in E$ we have $r(u,w) \leq r(u,v)+r(v,w)$. We furthermore have the following condition: $\sum_{u\in V}R(u) \leq n$ where $R(u)$ is the average of the weights of the edges incident on $u$.
My question is, does there exist a minimum spanning tree, that has weight at most $Cn$ where $C$ is some universal constant? In place of a minimum weight spanning tree, a walk (sequence of connected vertices) such that the sum of weights of the walk is $Cn$ for some universal constant.
-
## 2 Answers
You need some variant of the degree-constrained GMST (Generalized Minimum Spanning Tree) with edges satisfying the triangle inequality. These are some pointers to literature.
1. Bruce Boldon, Narsingh Deo and Nishit Kumar.Minimum-weight degree-constrained spanning tree problem
The minimum spanning tree problem with an added constraint that no node in the spanning tree has the degree more than a specified integer, d, is known as the minimum-weight degree-constrained spanning tree (d-MST) problem. Such a constraint arises, for example, in VLSI routing trees, in backplane wiring, or in minimizing single-point failures for communication networks. The d-MST problem is NP-complete. Here, we develop four heuristics for approximate solutions to the problem and implement them on a massivelyparallel SIMD machine, MasPar MP-1. An extensive empirical study shows that for random graphs on up to 5000 nodes (about 12.5 million edges), the heuristics produce solutions close to the optimal in less than 10 seconds. The heuristics were also tested on a number of TSP benchmark problems to compute spanning trees with a degree bound d = 3.
2. MR1469650 (98h:68181) Fekete, Sándor P. ; Khuller, Samir ; Klemmstein, Monika ; Raghavachari, Balaji ; Young, Neal . A network-flow technique for finding low-weight bounded-degree spanning trees. J. Algorithms 24 (1997), no. 2, 310--324.
3. MR1469648 (98d:68165) Guttmann-Beck, Nili ; Hassin, Refael . Approximation algorithms for min-max tree partition. J. Algorithms 24 (1997), no. 2, 266--286.
4. MR2006103 (2004h:68154) Hassin, Refael ; Levin, Asaf . Minimum spanning tree with hop restrictions. Twelfth Annual ACM-SIAM Symposium on Discrete Algorithms (Washington, DC, 2001). J. Algorithms 48 (2003), no. 1, 220--238.
5. MR2480226 (2010f:68072) Srivastav, Anand ; Werth, Sören . Probabilistic analysis of the degree bounded minimum spanning tree problem. FSTTCS 2007: Foundations of software technology and theoretical computer science, 497--507, Lecture Notes in Comput. Sci., 4855, Springer, Berlin, 2007.
-
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
I think the answer is no for both questions. Let $T$ be the unique tree on $2n$ vertices with two adjacent vertices $u$ and $v$ of degree $n$. Let $e=uv$. Let the weight of $e$ be $n^2$ and all other edges to have weight 0. Then the sum of all the average weights is
$n^2/n + n^2/n = 2n = |V(T)|$.
However, $T$ has total weight $n^2$, which is not $O(2n)$.
Comment. I edited my first answer as I misread the condition on the average degrees.
-
That violates the sum condition. – MAKCL Jun 19 2010 at 17:11
Thanks. I made my conditions too liberal. This comes from a problem Ive been working on with more stringent conditions, which your solution would violate. I should probably cease to elaborate, however, I would ask if anyone can point me to the literature where this kind of thing might be dealt with - that is relating the average to the MST. Thanks again. – MAKCL Jun 19 2010 at 18:03
2
If you don't mind elaborating, I'd like to hear the more stringent problem. It might be easier for someone to give you the right pointer if they had more details. The problem seems pretty interesting. – Tony Huynh Jun 19 2010 at 21:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8981441259384155, "perplexity_flag": "middle"}
|
http://nrich.maths.org/6151/note
|
# Bio Graphs
### Why do this problem?
This problem encourages students to get into the real meaning of graphical representation without getting bogged down in algebraic calculations or falling back into blind computation. It will also encourage the students to think about the various differences and similarities between growth processes in the sciences.
### Possible approach
This problem works well in group discussion. For each idea, try to encourage students to explain their reasoning as precisely and clearly as possible. You could split the class into different groups and see who can produce the most valid examples for each graph.
### Key questions
• How many 'growth processes' in science can you think of. Would any of these graphs match those processes?
• How might you label the scales for each example?
### Possible extension
This type of problem is rich with extension possibilities. We suggest two:
Extension 1: Are there other shapes of graph which could be used to model other natural growth processes?
How might you describe these curves algebraically? Can you write down equations, the graphs of which match the shape of the curves in this question?
Extension 2: Look up the profile of a biphasic bacterial growth curve and understand the conditions that produced such a curve. Wikipedia is a useful place to start. Two clear phases of growth are seen due to:
1) The depletion of glucose from the nutrient medium
2) Transcription of $\beta$-galactosidase and associated enzymes to allow lactose metabolism
Is there any similarity to some of the curves given to you in the question?
You might naturally try Real-life equations next.
### Possible support
Let students leaf through a science textbook searching for graphs and charts. Do they notice that the same shapes of charts appear frequently? Can they match any to the graphs in this question?
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9307644367218018, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/102126/leray-spectral-sequence-for-lowest-weight-part-of-a-smooth-morphism/102135
|
## Leray spectral sequence for lowest weight part of a smooth morphism
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Let me assume everything in sight is as nice as possible, probably if the result I want is true then these conditions are too restrictive. All spaces will be smooth algebraic varieties over the complex numbers. We are given maps $j \colon U \to X$, $g \colon X \to S$, $f = g\circ j$. The map $j$ is an open immersion whose complement is a simple normal crossing divisor, $g$ is a smooth projective morphism, and $f$ is topologically a locally trivial fibration.
On one hand, we can restrict cohomology classes on $X$ to $U$ fiberwise, giving us $$\newcommand{\Q}{\mathbf{Q}}R^qg_\ast \Q \twoheadrightarrow W_qR^qf_\ast\Q \hookrightarrow R^qf_\ast\Q.$$ Here $W_\bullet$ denotes the weight filtration on $R^qf_\ast\Q$, considered as a variation of mixed Hodge structure.
On the other hand, one can also consider $$H^\bullet(X,\Q) \twoheadrightarrow \mathrm{Im}(j^\ast) \hookrightarrow H^\bullet(U,\Q)$$ given by restricting cohomology classes globally.
Question 1: Is there a "Leray" spectral sequence $H^p(S,W_qR^qf_\ast\Q) \implies \mathrm{Im}(j^\ast)$, compatible with the maps above and the Leray spectral sequences for $f$ and $g$?
Question 2: If so, does it always degenerate at $E_2$, like the Leray spectral sequence for $g$?
-
## 1 Answer
You can identify `$$W_qR^qf_*\mathbb{Q}=im[R^qg_*\mathbb{Q}\to R^qf_*\mathbb{Q}]$$` It is enough to check this fibrewise, where it's Deligne's Hodge II, cor 3.2.17. Now compare Leray spectral sequences for $g$ and $f$, and take the image $$im ([E_2(g)\Rightarrow H^*(X)]\to [E_2(f)\Rightarrow H^*(U)])$$ This should give your desired answer to Question 1. [Note: there's a subtle strictness question that I overlooked. I'll try to sort it out when I have more time. ]
For Q2, let's first suppose that $S$ is smooth and proper. Then $H^p(S, W_qR^qf_*\mathbb{Q})$ is pure of weight $p+q$, so $d_2,\ldots$ must be zero because it goes between Hodge structures of different weights. This is just the barest outline, but see my paper for some more details. I think the result is true in general, but you would need to use Saito's version of the decomposition theorem in his category of polarizable Hodge modules. I'll see if I can supply some more precise arguments later on.
Added Note: As Dan noted below, Q2 follows easily from the first paragraph, and Deligne's degeneration argument for $g$.
-
1
Thanks! Actually, doesn't this prove Q2, even if $S$ is not smooth/proper? Since $[E_2(g) \implies H^\bullet(X)]$ has zero differentials, $\mathrm{Im}([E_2(g) \implies H^\bullet(X)] \to [E_2(f) \implies H^\bullet(U)])$ should also have vanishing differentials. – Dan Petersen Jul 13 at 12:29
Yes, good point. You've answered your own question, which is always the best way. – Donu Arapura Jul 13 at 12:39
@Donu. Actually, now I am starting to doubt the argument. In general for a map $f \colon A^\bullet \to B^\bullet$ of complexes, there is no reason to have $H^\bullet(\operatorname{Im} f) \cong \operatorname{Im}(H^\bullet(f))$. So now I don't see why it should be automatic that $H^p(S,W_qR^qf_\ast\mathbf Q)$ converges to $\operatorname{Im}(j^\ast)$. – Dan Petersen Jul 13 at 13:07
Dan, right again. I was too quick to type my original answer. Unfortunately, I tend not to think deeply about MO questions. So you if it's something that's really important to you, you can send me email (I'm easy to find). Let me give a quick fix with a stronger hypothesis for now. – Donu Arapura Jul 13 at 13:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9393064379692078, "perplexity_flag": "head"}
|
http://www.physicsforums.com/showthread.php?p=4056474
|
Physics Forums
Blog Entries: 30
Boltzmann and Fourier for biologist
Hello,
Please could someone explain to me about Boltzmann distribution and Fourier transformation? or point me in the direction of some really really easy-to-understand guide?
I need to understand it for biology - to understand NMR and mass spectrometry.
Thanks.
Recognitions: Science Advisor Have you tried the Wikipedia pages? They are usually a good intro., and you can come back here with more specific questions which makes it easier both foryou, and for us to help you.
Blog Entries: 30 I have tried wikipedia, it was confusing and complicated! But I'll try to read it again. I just need a basic understanding of what they are and what they do.
Recognitions:
Homework Help
Boltzmann and Fourier for biologist
Hi.
Is this the sort of thing you're looking for?
http://www.cs.unm.edu/~brayer/vision/fourier.html
It shows how an image can be transformed with fourier to a spectrum.
Then some filtering can be done.
And then it can be converted back into an image.
The link explains more about how it works and what it does.
It's about applying the Fourier transform to images, which may be what you'd do in NMR.
The other typical application of Fourier is to signals and their conversion to a frequency spectrum.
This applies to spectrometry that you also want to know about.
But they don't have such cool pictures.
The Boltzmann distribution says that if you have a system with a well defined temperature and a series of energy states, the probability that the system is in a given energy state is proportional to $\exp(-E/kT)$. Lower energy states are always preferred, but only on a statistical level, and as temperature rises, all states become nearly equally populated. The Fourier transform is a way of breaking down an arbitrary function (signal, wave, whatever) into a continuous set of sine or cosine functions. The transformed function is the amplitude of the sine or cosine at any particular frequency.
Blog Entries: 30
Quote by I like Serena The other typical application of Fourier is to signals and their conversion to a frequency spectrum. This applies to spectrometry that you also want to know about. But they don't have such cool pictures.
yes, this is what I need to know - conversion of a signal into some kind of graph. what are the important aspects of this?
Blog Entries: 30
Quote by Muphrid The Fourier transform is a way of breaking down an arbitrary function (signal, wave, whatever) into a continuous set of sine or cosine functions. The transformed function is the amplitude of the sine or cosine at any particular frequency.
hm, interesting, I vaguely remember sines and cosines from school... how does it do this?
did not understand the Boltzmann explanation at all...
Sines and cosines have a special property that, for two frequencies $\omega_1$ and $\omega_2$, $$\int_{-\infty}^{\infty} \sin(\omega_1 t) \sin(-\omega_2 t) \; dt = 0$$ unless $\omega_1 = \omega_2$. The Fourier transform takes advantage of this "orthogonality" to decompose a signal into a bunch of sines and cosines, each with their own amplitude. As for Boltzmann, here's a simpler idea: you have some system that can be in a bunch of energy states. This could be a molecule, for example, that can have different configurations. Some configurations are more stable (lower energy); others are less stable (higher energy). The Boltzmann distribution says that lower energy states are always more likely than higher energy states, but with higher temperatures, the difference in these likelihoods shrinks.
Recognitions:
Homework Help
Quote by nucleargirl yes, this is what I need to know - conversion of a signal into some kind of graph. what are the important aspects of this?
Didn't you like the pictures?
As for spectrometry, consider the light of the sun - it is white.
But it is not really white, it contains all the colors of the rainbow and many invisible colors all mixed together.
If you put it through a prism it disperses into a spectrum.
Basically a fourier transform does the same thing.
It's a complex mathematical method that converts a signal into a spectrum.
Then you can see how much of each color is in it.
Specifically in NMR and mass spectometry an object is treated in such a way that it emits (or absorbs) radiation.
The radiation is caught in sensors that record it as a signal in a computer.
With a fourier transform the signal is converted to a spectrum so you can see how much of each wavelength of radiation is in there.
Different atoms will emit different radiation at specific wavelengths.
From the spectrum you can deduce how much of each atom is present in the object.
Here's a page that explains in more detail:
http://en.wikipedia.org/wiki/Fourier...m_spectroscopy
Recognitions: Science Advisor Excellent answers by both; better than anything I would have come up with.
Blog Entries: 30 Thanks for trying to help :) I think I will never really understand... but its ok. its like knowing to use the tv without understanding how it works...
Blog Entries: 2 The basic idea behind a fourier transformation is to represent a function f(x) as a sum of other functions b1(x), b2(x), ..., bn(x): f(x) = c1*b1(x) + c2*b2(x) +...+ cn*bn(x) The c's are the coefficients. Let's take a very easy example: Suppose you are given the functions b1(x) = sin(x) b2(x) = sin(2x) b3(x) = sin(3x) Is it possible to build the following function f(x) = 7*sin(3x) + 5*sin(2x) by using b1(x), b2(x), b3(x)? It is possible: f(x) = 0*b1(x) + 5*b2(x) + 7*b3(x) The numbers 0, 5 and 7 are the coefficients. They tell you "how much" of each b-function you need in order to represent f(x). These coefficients are what you see in the Fourier-spectrum. In other words the Fourier transform determines the numbers 0,5 and 7 for you. E.g. you have a signal and observe how it it behaves in time (perhaps from NMR). Suppose for this signal you want to know if it is possible to represent it as a sum of the b-functions. Then you apply the Fourier transform which gives you the numbers 4, 1, 3. These appear as peaks in the Fourier spectrum: Code: ```x
x x
x x
x x x
----------------------------------------------------``` These peaks just tell you how much of each b-function you need to construct your signal.
Thread Tools
| | | |
|----------------------------------------------------------|-------------------------------------|---------|
| Similar Threads for: Boltzmann and Fourier for biologist | | |
| Thread | Forum | Replies |
| | Biology, Chemistry & Other Homework | 0 |
| | Linear & Abstract Algebra | 20 |
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9324228167533875, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/tagged/matrix-elements+symmetry
|
# Tagged Questions
1answer
35 views
### Diagonal matrix in k-space
I'm having some trouble with an integration I hope you guys can help me with. I have that: ${{\mathbf{v}}_{i}}\left( \mathbf{k} \right)=\frac{\hbar {{\mathbf{k}}_{i}}}{m}$ and ...
2answers
185 views
### If the S-matrix has symmetry group G, must the fields be representations of G?
If the fields in QFT are representations of the Poincare group (or generally speaking the symmetry group of interest), then I think it's a straight forward consequence that the matrix elements and ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.931615948677063, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/227332/maximum-possible-area-of-triangle-pqr
|
# Maximum possible area of triangle PQR
A point $P$ is given on the circumference of a circle of radius $r$.Chords $QR$ are drawn parallel to the tangent at $P$.Then how can we determine the largest possible area of triangle $PQR$?
Thanks.
-
## 1 Answer
Here is one way.
We may assume, without loss of generality, that the circle is centered at the origin. Let $P$ be the point $(-r,0)$. Then $QR$ will be vertical. Let $\theta$ be the angle made by the line through $Q$ and the origin, and the positive $x$-axis. Then, $Q$ has coordinates $(r \cos \theta, r \sin \theta)$.
Hence, the area of the triangle can be expressed as $$\mbox{area} = \frac{1}{2}(r \cos \theta + r) 2r \sin \theta = r^2 \sin \theta (1+\cos \theta).$$
You can then use calculus to maximize this function of $\theta$, keeping in mind that $0 \le \theta \le \pi$.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8661895394325256, "perplexity_flag": "head"}
|
http://mathoverflow.net/revisions/120827/list
|
Return to Answer
1 [made Community Wiki]
One basic structural problem about the SW invariants is the question of simple type: suppose that $X$ is a simply connected 4-manifold with $b^+>1$, and $\mathfrak{s}$ a $\mathrm{Spin}^c$-structure such that $SW_X(\mathfrak{s})\neq 0$. Must $\mathfrak{s}$ arise from an almost complex structure? This is true when $X$ is symplectic (Taubes in "$SW\Rightarrow Gr$") but open in general.
The 11/8-conjecture (that for a closed Spin 4-manifold $X$ of signature $\sigma$, one has $b_2(X)\geq 11|\sigma|/8$) is open. SW theory has yielded strong results in this direction (Furuta's 10/8 theorem); proving the conjecture via SW theory is very hard but might be possible.
Essentially all of the fundamental questions about the classification of smooth 4-manifolds, or about the existence and uniqueness of symplectic structures on them, are open. We do not know how much Seiberg-Witten theory sees. For instance:
Suppose $X$ is a closed 4-manifold with an almost complex structure $J$. Let $w\in H^2(X;\mathbb{R})$ be a class with $w^2>0$. Is there a symplectic form $\omega$ with compatible almost complex structure homotopic to $J$ and symplectic class $w$? The "Taubes constraints" are the following necessary conditions, which constrain the SW invariants in terms of $w$ and $c=c_1(TX,J)$ (see e.g. Donaldson's survey on the SW equations): (i) $SW(\mathfrak{s}_{can})=\pm 1$ (the sign can be made precise) where `$\mathfrak{s}_{can}$` is the $\mathrm{Spin}^c$-structure arising from $J$; (ii) $-c\cdot w\geq 0$; and (iii) if $SW(\mathfrak{s})\neq 0$ then $|c_1(\mathfrak{s})\cdot [\omega]| \leq -c \cdot [\omega]$, with equality iff $\mathfrak{s}$ is isomorphic to $\mathfrak{s}_{can}$ or its conjugate. The question is: if $X$ is simply connected, are these sufficient conditions? (Example: Fintushel-Stern knot surgery on an elliptically fibered K3 surface along a knot with monic Alexander polynomial.)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9298199415206909, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/16711?sort=newest
|
## homotopy associative $H$-space and $coH$-space
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Let $[X, Y]_0$ denote base point preserving homotopy classes of maps $X\rightarrow Y$. A multiplication on a pointed space $Y$ is a map $\phi: Y\times Y\rightarrow Y.$ From this map, we can define a continuous map for each pointed space $X$, $\phi_X: [X, Y]_0\times [X, Y]_0\rightarrow [X, Y]_0,$ by the composition $$\phi_X (\alpha, \beta)(x)=\phi(\alpha(x), \beta(x)).$$ If $([X, Y]_0, \phi_X)$ is a group for each $X$, then $(Y, \phi)$ is called a homotopy associative $H$-space.
A $coH$-space is defined from a comultiplication, namely, a map $\psi: X\rightarrow X\vee X.$ Then, for each pointed space $Y$, we can define a function $\psi^Y: [X, Y]_0\times [X, Y]_0\rightarrow [X, Y]_0$ in this way: $$\psi^Y(\alpha, \beta)=(\alpha\vee\beta)\circ\psi.$$ If $([X, Y]_0, \psi^Y)$ is a group for each $Y$, then $(X, \psi)$ is called a homotopy associative $coH$-space.
So, as we can see, if we have a homotopy associative $coH$-space $(X, \psi)$ and a homotopy associative $H$-space $(Y, \phi)$, then we can define two group structures on the space $[X, Y]_0$. My question is: are they "equivalent" in some sense? Obviously, whatever $\phi$ or $\psi$ is, the zero element of the group is the constant map in $[X, Y]_0.$ However, the two group structures do depend on the choice of $\phi$ and $\psi$, which seems have little relationship with each other.
-
## 1 Answer
I looked at my homotopy theory lecture notes and we had the following similar result: $X$ H-CoGroup, $Y$ $H$-Group, then both group structures defined on [X,Y] agree. The proof goes roughly as follows: Call the upper products $\cdot$, resp. $*$. Inserting the definitions of those products, one can show the following "distributivity":
$(a\cdot b)*(c\cdot d)=(a * c)\cdot(b * d)$
Then one shows that both products have the same neutral element and finally
$f*g=(f\cdot 1) * (1\cdot g)=(f * 1)\cdot(1 * g)=f\cdot g$,
gives the result. That's the strategy of the proof in the case of $H$-(co-)groups.
-
Thanks! I really love the "distributivity" equality you gave. It's easy to prove, useful, and elegant. With this formula, we can also prove that both groups are abelian. Thanks! – Megan Feb 28 2010 at 20:35
8
This is called the Eckmann-Hilton Argument: en.wikipedia.org/wiki/Eckmann–Hilton_argument It not only proves that the two products are the same, but also proves that they are commutative and associative (you don't need to assume this!). This argument is also used to prove that $\pi_i$ is an abelian group for $i\geq 2$, since in this case you have at least two multiplications (from gluing spheres in different directions) which satisfy the above distributivity. – Chris Schommer-Pries Feb 28 2010 at 20:44
Hmmm. The above link is broken. Let me try again: <a href="en.wikipedia.org/wiki/Eckmann–Hilton_argument">en.wikipedia.org/wiki/Eckmann–Hilton_argument</a>. – Chris Schommer-Pries Feb 28 2010 at 20:47
I see how to prove the commutativity of $\pi_i$ with $i\geq 2$. For pointed spaces $X$ and $Y$, let's consider the loop space $\Omega Y$ and the suspension $SX$. Define $\phi: \Omega Y\times\Omega Y\rightarrow\Omega Y$ by $\phi(u,v)=u(2t)$ when $t\in [0,\frac{1}{2}]$ and $=v(2t-1)$ when $t\in [\frac{1}{2},1].$ Then, $(\Omega Y,\phi)$ is a $H$-space. And, define $\psi: SX\rightarrow SX\vee SX$ to be $\psi([x,t])=([x,2t],*)$ when $t\in [0,\frac{1}{2}]$ and $=(*, [x,2t-1])$ when $t\in [\frac{1}{2},1]$. Then, $(SX, \psi)$ is a $coH$-space. – Megan Feb 28 2010 at 21:52
2
There's also an nlab page on this: ncatlab.org/nlab/show/Eckmann-Hilton+argument and in a talk recently, I came up with an animation showing the key step: math.ntnu.no/~stacey/Seminars/pearl.html – Andrew Stacey Mar 1 2010 at 8:52
show 1 more comment
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 60, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9085171222686768, "perplexity_flag": "head"}
|
http://mathhelpforum.com/algebra/16697-explorations.html
|
# Thread:
1. ## explorations
A companys profit for 2000 was 5 million. Each year, the profit increases by 2 million. Let p=f(t) represent the profit(in million of dollars) for the year that is t years after 2000.
1. Find an equation for f. Write the equation using f(t) notation. Also, write the equation in terms of p and t.
2. Use the equation for f to predict when the company will have a profit of 21 million.
3. In problem 1 you found the equation p=2t+5. Solve this equation for t.
4. Substitute 21 for p in your equation from problem 3 and solve for t
5. Compare your results for problems 2 and 4
6. Enter the equation you found in problem 3 in graphing calc to help complete the table:
Profit Years
21 t
22
23
24
25
7. In problems 2 and 4 you used two different methods for finding when the company will have a profit of 21 million. If the company wants to know when it might attain 15 different profit levels, which method would be the best to use?
2. Originally Posted by getnaphd
A companys profit for 2000 was 5 million. Each year, the profit increases by 2 million. Let p=f(t) represent the profit(in million of dollars) for the year that is t years after 2000.
1. Find an equation for f. Write the equation using f(t) notation. Also, write the equation in terms of p and t.
2. Use the equation for f to predict when the company will have a profit of 21 million.
3. In problem 1 you found the equation p=2t+5. Solve this equation for t.
4. Substitute 21 for p in your equation from problem 3 and solve for t
5. Compare your results for problems 2 and 4
6. Enter the equation you found in problem 3 in graphing calc to help complete the table:
Profit Years
21 t
22
23
24
25
7. In problems 2 and 4 you used two different methods for finding when the company will have a profit of 21 million. If the company wants to know when it might attain 15 different profit levels, which method would be the best to use?
Exactly what are the difficulties you are having with this?
RonL
3. I don't even know where to begin. I have no idea how to do an equation with a notation.
4. Originally Posted by getnaphd
A companys profit for 2000 was 5 million. Each year, the profit increases by 2 million. Let p=f(t) represent the profit(in million of dollars) for the year that is t years after 2000.
1. Find an equation for f. Write the equation using f(t) notation. Also, write the equation in terms of p and t.
This means write the profit n years after 2000 in the form of an equation
$<br /> p(t)=\mbox{some algebraic extpresion in }t <br />$
Now compute a few years profit from what you are told:
$p(0)=5$
$p(1)=7$
$p(2)=9$
$p(3)=11$
Plot these an some graph paper (or in Excel) and you will see that they fall on a straight line.
You should know that the equation of a straight line takes the form:
$p=m~t+c$
So using the calculated values we can find $m$ and $c$.
As $p(0)=5$ we have $c=5$, and as $p(1)=7$, we also have $m=2$, so the required line is:
$<br /> p(t) = 2~t+5<br />$
RonL
5. Originally Posted by getnaphd
A companys profit for 2000 was 5 million. Each year, the profit increases by 2 million. Let p=f(t) represent the profit(in million of dollars) for the year that is t years after 2000.
1. Find an equation for f. Write the equation using f(t) notation. Also, write the equation in terms of p and t.
CaptainBlack did an excellent job. Here's another way to think about it.
when t = 0 we have:
after every year, we increase our profits by 2 mill, so when t = 1 we have:
p(1) = 5 + 2 (note, there is 1 2 when we have 1 year)
when t = 2
p(2) = 5 + 2 + 2 (note, there are two 2's when we have 2 years)
when t = 3
p(3) = 5 + 2 + 2 + 2 (note we have three 2's when we have 3 years)
so obviously there is a pattern. we make 2 mill with each passing year, so for t years we simply multiply t by 2 to find how much profit we gained since 2000.
thus, p(t) = 5 + 2t .............that is we add 2 million per year, so after t years we would have added 2t million
6. ## still stuck
For part 2. Use the equation for f to predict when the company will have a profit of 21 million?
Do I just substitute 21 for f(t)?
7. Originally Posted by getnaphd
For part 2. Use the equation for f to predict when the company will have a profit of 21 million?
Do I just substitute 21 for f(t)?
yes, and solve for t
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9284144639968872, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/92573?sort=oldest
|
## Gausian distributions in the Frequency domain
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
I have read in many texts that the Fourier Transform of a Gaussian is yet another Gaussian, however how does the mean and standard deviation change?
Also if we convolve a Gaussian with itself then we get a wider Gaussian, this is equivalent to the product with the Fourier Transform of the Gaussian with itself. Will this still be a wider Gaussian?
Thanks
-
5
This is in every elementary textbook (AND you can do the computation yourself) so I would not call this a research level question. Voting to close. – Igor Rivin Mar 29 2012 at 15:23
## 1 Answer
The formula for transforming a 0 mean Gaussian says $F_x[e^{-ax^2}](k)=\sqrt{\frac{\pi}{a}}e^{-\pi^2k^2/a}$ so the standard deviation certainly changes. Indeed the inverse proportionality is an example of the Heisenberg phenomenon.
Changing the mean of the input by translating its graph will multiply the output by a phase factor.
A widening self-convolution in one domain corresponds to a narrowing self-multiplication in the other domain.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.875191867351532, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/pre-calculus/42569-parable.html
|
# Thread:
1. ## parable
Finding the equation of the parable and build it
Axis parallel to y = 0 and goes through A(-2,4) B(-3,2) C(-11,-2)
Answer
$y^2-8y+4x+24=0$
2. I hope you mean "parabola"
There are two forms of parabolas. What are they?
Once you know which of the two you need, how can you determine the specifics of the equation?
3. The equation is:
$(y-k)^2=2p(x-h)$
but with points and I think the equation
4. All right. We have an equation now. That's a step in the right direction.
A parabola is defined by three points. Since we have three points here, we can define it.
In our equation $(y-k)^2=2p(x-h)$, we can use $(x,y)=(-2,4)$ to get an equation in three unknowns: $h,k,p$. Doing the same with the other two points will give us three equations in three unknowns, and from there, we can solve for $h,k,p$.
5. Thank you. Do you have any means of online communication, for example, msn
6. No, I really don't. My computer is so old, I can barely run Itunes and Firefox at the same time.
However, I will be checking this site frequently, so a private message to me would probably be easiest.
I'm glad to have been of assistance.
7. I not find the answer and you?
8. Originally Posted by Apprentice123
Finding the equation of the parable and build it
Axis parallel to y = 0 and goes through A(-2,4) B(-3,2) C(-11,-2)
Answer
$y^2-8y+4x+24=0$
Originally Posted by Apprentice123
The equation is:
$(y-k)^2=2p(x-h)$
but with points and I think the equation
So
$(4 - k)^2 = 2p(-2 - h)$
$(2 - k)^2 = 2p(-3 - h)$
$(-2 - k)^2 = 2p(-11 - h)$
So we get
$k^2 - 8k + 16 = -4p - 2ph$ (1)
$k^2 - 4k + 4 = -6p - 2ph$ (2)
$k^2 + 4k + 4 = -22p - 2ph$ (3)
Subtract equation (2) from equation (1) and then subtract equation (3) from equation (1)
$-4k + 12 = 2p$
$-12k + 12 = 18p$
You can find values of k and p from these and then use one of the original equations to find h.
Another (perhaps simpler in this case) way to approach this is to use the "standard" form for this kind of parabola:
$ay^2 + by + c = x$
and plug the points into that. This gives you a linear system in a, b, and c.
-Dan
9. Originally Posted by topsquark
Another (perhaps simpler in this case) way to approach this is to use the "standard" form for this kind of parabola:
$ay^2 + by + c = x$
and plug the points into that. This gives you a linear system in a, b, and c.
-Dan
$(-2,4), (-3,2), (-11,-2)$
$ay^2+by+c=x$
Substituting each ordered pair into the standard equation, we get:
$1.\ \ a(4)^2+b(4)+c=-2$
$16a+4b+c=-2$
$2. \ \ a(2)^2+b(2)+c=-3$
$4a+2b+c=-3$
$3. \ \ a(-2)^2+b(-2)+c=-11$
$4a^2-2b+c=-11$
Solve the system using matrix equation (or any method you like):
$\left[\begin{array}{cccc}16 & 4 & 1 & \\ 4 & 2 & 1 & \\ 4 & -2 & 1 \\ \end{array}\right] \cdot \left[\begin{array}{c}a \\ b \\ c \end{array}\right] = \left[\begin{array}{c}-2 \\ -3 \\ -11 \end{array}\right]$
$\left[\begin{array}{cccc}16 & 4 & 1 & \\ 4 & 2 & 1 & \\ 4 & -2 & 1 \\ \end{array}\right]^{-1} \cdot \left[\begin{array}{c}-2 \\ -3 \\ -11 \end{array}\right] = \left[\begin{array}{c}a \\ b \\ c \end{array}\right]$
$\left[\begin{array}{c}a \\ b \\ c \end{array}\right]= \left[\begin{array}{c}-\frac{1}{4} \\ 2 \\ -6 \end{array}\right]$
Substituting back into your original equaton:
$ay^2+by+c=x$
$-\frac{1}{4}y^2+2y-6=x$
$\boxed{y^2-8y+4x+24=0}$
10. Thank you very much
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 32, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.927209198474884, "perplexity_flag": "middle"}
|
http://motls.blogspot.com/2011/08/higgs-search-neither-fish-nor-fowl.html?m=1
|
The Reference Frame
Our stringy Universe from a conservative viewpoint
Wednesday, August 24, 2011
Higgs search: neither fish nor fowl
My "live report" from Lepton-Photon 2011 in Bombay ended up being somewhat unreadable, missing the big picture: that's what often happens with texts in which you don't know in advance what the final product will look like.
So let me summarize: the LHC has still found no evidence for any new physics or any Higgs and the previous hints for a 140 GeV Higgs boson that we had in July have actually weakened. If you assume that the Higgs sector is governed by the Standard Model, this graph by Phil Gibbs shows our "neither fish nor fowl" situation pretty clearly.
Click to zoom in.
By the way, the proper saying should be "neither fish nor crayfish" ("ani ryba ani rak" in Czech) but the Englishmen probably failed to distinguish poultry and crayfish.
The red line "0" means "predictions of a theory without any Higgs" while the green line "1" means "predictions of a theory with a SM Higgs boson of the mass given on the $$x$$-axis". The key question is whether the observed black curve is closer to the red line (no Higgs) or the green line (one Higgs). The error margins, plus minus 1 and 2 standard deviations, are given by the dark blue and light blue strips, respectively.
I kind of trust that these graphs are close to the correct ones even if Phil doesn't quite know why his calculation is right. He's a data-driven neural network that's been trained to mimic the official CERN graphs. In fact, Jester has an explanation why they didn't show the combined ATLAS+CMS graphs in Bombay: they could easily turn out to be identical to those previously drawn by Phil and named "complete nonsense" by some official figures at CERN. ;-)
Maybe even an agreement with my graphs would be embarrassing enough for them.
At any rate, see that above the Higgs of 150 GeV or below 115 GeV or so, the black curve is close to the red line - within 2 sigma - so the observations are consistent with the hypothesis "there's no Higgs of this mass". However, for the intermediate masses, you're somewhere in between the red line and the green line.
The observations seem to be so nicely in between that they seem to falsify both "Yes Higgs" and "No Higgs" for the whole interval of masses 135-150 GeV or so at the 2-sigma level. ;-) This would be unlikely to happen if the Standard Model were actually the whole story. So a very sensible explanation is that the Standard Model is actually not the full story.
By the way, the error margins are larger near 119 GeV - my favorite place of the "main" Higgs boson according to the newest graphs - and the existence of a 119 GeV Higgs is consistent with all measurements as of today. While the behavior of the graph near 119 GeV could be consistent with a non-supersymmetric Standard Model Higgs at 119 GeV, the "half-signal" near 140 GeV seems strange.
So I would say that according to similar graphs, it's more likely than not that the Standard Model is not the full story. At the same moment, convincing or "direct" evidence for new physics is still absent. This is a funny intermediate situation that will almost certainly change in a year or a few years.
Higgs and new physics love to hide
There's one point that's worth mentioning. Many people have argued that supersymmetry or some other new physics had to be behind the corner, and so on. I've always emphasized that there was no good evidence for this belief. There's no new physics in the LHC data so far - and in fact, it seems that the LHC has refuted pretty much all anomalies previously claimed by the Tevatron. I don't think that this means that the LHC will not discover SUSY - or anything else which is new.
Because the LHC reach continues to increase nearly exponentially and because new physics may be imagined to be uniformly distributed on the log scale (when it comes to increasing particle masses as well as correspondingly decreasing cross sections), the LHC still has the same "probability per unit time of running" to make a new discovery.
However, it's funny to observe people like Jester who have thought that the "long time" it clearly takes to discover SUSY means that SUSY is very unlikely. Jester himself obviously believes that there is a Higgs boson - and he probably believes that it should be a Standard Model Higgs boson - but he is starting to see that the same arguments he has used against SUSY start to hold for the Higgs boson itself.
Jester says that the God particle is apparently trying to maximize the difficulty of discovering it - so God is rather Loki than Thor. Well, does it mean that something is truly unnatural if the Higgs takes so much effort to be found relatively to all other particles we know? And what about if you replace "Higgs" by "SUSY"?
I don't think that there's any contradiction. We just know that among all the particles we have already observed plus the Higgs, the Higgs is the hardest one to be found. If the hardest (or heaviest or last) particle were anything else than the Higgs, like a lepton (tau?), we could be equally "puzzled" why it's exactly this particle. But one particle has to be the least accessible one! In fact, it's very natural for this "hardest particle" to be the Higgs because other particles' masses are just "traces" of the power of the God particle, so they're naturally smaller (a fact that is obvious for light fermions because of the tiny Yukawa couplings).
What the experiments show is really not the absolute statement that "Higgs is very hard to be found" or "SUSY is very hard to be found". What we see is the relative statement "Higgs is harder to be found than the typical expectations" and similarly for SUSY. So it's necessary to realize that what we're doing is a comparison with the expectations. The experiments simply show that the expectations (and "gut feelings") by many people have been wrong. If you phrase it in this way, it's not too shocking and it's surely no contradiction.
Superpartners may have masses a few TeV or so and SUSY would still play an important role in solving "most" of the hierarchy problem, producing a dark matter particle, and assuring the gauge coupling unification. If SUSY is helpful for stabilizing the Higgs sector, it doesn't mean that its superpartner scale must be "exactly the same" scale as the Higgs scale.
Similarly for the Higgs: more complicated models of the Higgs may easily make the Higgs more secretive than it is in the Standard Model. Just imagine a simple model. Instead of one Higgs doublet, you have $$N$$ Higgs doublets
\[H_1, H_2, \dots , H_N.\] Imagine that all of them get the same vev. The squared masses of W-bosons and Z-bosons, i.e. the coefficients in the $$m^2 A_\mu A^\mu / 2$$ mass terms for the gauge bosons, are arising from $$\sum_i D_\mu H_i D^\mu H_i / 2$$ and are proportional to $$v^2$$ where $$v$$ is the vev of each Higgs doublet.
We see that the required vev scales like
\[ v = \frac{246\,\,{\rm GeV}}{\sqrt{N}}. \] It decreases with the number of the species. I am assuming that the kinetic terms are canonically normalized. What about the Yukawa couplings? The fermion masses arise as
\[ m = \sum_{i=1}^N y_i v_i \] The sum gives you a factor of $$N$$ while $$v_i$$ goes like $$1/\sqrt{N}$$ so you see that each $$y_i$$ - assuming their equal magnitude - must go like $$1/\sqrt{N}$$ as well. The cross sections etc. resulting from diagrams with a single Yukawa vertex will therefore go as $$1/N$$.
This means that if you replace a single Higgs by $$N$$ of them, the probability of the Higgs production is simply "divided equally" between all the Higgses. So all the production cross sections become smaller - the bumps are diluted and distributed over various places. The width of each of them may become a little bit smaller, too: it's because there may be extra Yukawa vertices for the Higgs decay.
So it's easy to get "half-signals" for many Higgses. In more realistic situations, the numerous Higgses don't play the same role. Some of them like to couple to some kinds of fermions; others like to couple to other fermions, and so on. Many particular cross sections may diminish; and some cross sections may also get amplified - e.g. the down-type Higgs boson in MSSM that has a smaller vev (because it wants to produce a smaller bottom quark mass) but a higher Yukawa coupling than the SM Higgs boson. That's the origin of the enhancement of the $$bbb$$-related processes by $$\tan^2\beta$$.
Chances are high that what we have to find is a more complicated model with many Higgses or new physics and of course, we shouldn't be shocked that one needs more data to do find more information. However, if you imagined that we exactly knew what the model is except for one parameter, the value of the parameter could probably be determined from the data we already have (by drawing suggestive combined graphs - and we don't know which graphs they should be) - much like what would be true about the Standard Model Higgs if the Standard Model were right (people kind of expected that this could already be found by now).
The odds are increasing that we have to wait for a longer time - but what we will ultimately find may also be a greater trove of jewels than the minimum we were hoping for. Because nothing new seems to display itself clearly in 2/fb of the data, it's pretty clear that we have to wait for something like 10/fb of data to get some clear 5-sigma discoveries of something.
Even if those 10/fb are obtained by combining 5/fb from each major detector, it's pretty clear that we won't get clear discoveries before the end of 2011. In 2012, it's a totally different question. In 2012, we will either discover the SM Higgs or will have a clear evidence that the SM isn't the right theory or the full story.
Who is Lumo?
Luboš Motl
Pilsen, Czech Republic
View my complete profile
← by date
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 15, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9603051543235779, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/63496/what-can-be-said-about-an-infinite-linear-chain-of-conjugate-prior-distributions
|
## What can be said about an infinite linear chain of conjugate prior distributions?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
We can sample a discrete value from the multinomial distribution.
We can also sample the parameters of the multinomial distribution from its conjugate prior the dirichlet distribution.
Since the dirichlet distribution is part of the exponential family, it too must have a conjugate prior distribution in the exponential family.
I hope you see where I'm going: what happens as this chain of priors is taken to infinity?
For a simpler example, what happens with the self-conjugate Gaussian distribution?
-
## 1 Answer
Let's say that you have a distribution $F$ in the exponential family with density \begin{align} \newcommand{\mbx}{\mathbf x} \newcommand{\btheta}{\boldsymbol{\theta}} f(\mbx \mid \btheta) &= \exp\bigl(\eta(\btheta) \cdot T(\mbx) - g(\btheta) + h(\mbx)\bigr) \end{align}
Given independent realizations ${x_1, x_2, \dotsc, x_n}$ of $F$ (with unknown parameter $\theta$), then the distribution over $\theta$, $F'$, is the conjugate prior of $F$. The density of $F'$ is \begin{align} f(\btheta \mid \boldsymbol\phi) = L(\btheta \mid \mbx_1, \dotsc, \mbx_n) &= f(\mbx_1, \dotsc, \mbx_n \mid \btheta) \\ &\propto \prod_i f(\mbx_i\mid \btheta) \\ &= \textstyle\prod_i\exp\Bigl(\eta(\btheta) \cdot \textstyle T\left(\mbx_i\right) - g(\btheta) + h(\mbx_i)\Bigr) \\ &\propto \textstyle\prod_i\exp\Bigl(\eta(\btheta) \cdot \textstyle T\left(\mbx_i\right) - g(\btheta)\Bigr) \\ &= \textstyle\exp\Bigl(\eta(\btheta) \cdot \bigl(\textstyle\sum_iT\left(\mbx_i\right)\bigr) - ng(\btheta)\Bigr) \\ &= \exp\bigl(\eta'(\boldsymbol \phi) \cdot T'(\btheta)\bigr) \end{align} where \begin{align} \eta'(\boldsymbol\phi) &= \begin{bmatrix} \sum_iT_1(\mbx_i) \\ \vdots \\ \sum_iT_k(\mbx_i) \\ \sum_i1 \end{bmatrix} & T'(\btheta) &= \begin{bmatrix} \eta_1(\btheta) \\ \vdots \\ \eta_k(\btheta) \\ -g(\btheta) \end{bmatrix}. \end{align} Thus, $F'$ is also in the exponential family ($T'$ replaced $\eta$ and $\eta'$ replaced $T$ since this distribution is over $\theta$ the parameter of the distribution over $x$.)
Interestingly, $\boldsymbol\phi$ has exactly one more parameter than $\btheta$ except in the rare case where natural parameter $\phi_{k+1}$ is redundant, but such a distribution would be very weird (it would mean that the number of observations $\mbx$, that is, $n$, tells you nothing about $\btheta$.)
So, to answer your question, with each conjugate prior you get exactly one more hyperparameter.
There are many conjugate priors of the Gaussian distribution depending on how you look at it. In my opinion, the analogy to the Multinomial-Dirichlet example would set things up as follows: assume that $n$ real-valued numbers are generated by a Gaussian with unknown mean and variance. Then, the distribution of the mean and variance given the data points is a three-parameter conjugate prior distribution whose sufficient statistics are the total of the samples, the total of the squares of the samples, and the number of samples.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7506814002990723, "perplexity_flag": "middle"}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.