http://occamstypewriter.org/boboh/2010/04/06/more_on_branch_lengths_and_species_1/
|
Brought to you by Occam's Typewriter
# More on Branch Lengths and Species
Posted on April 6, 2010
Last Monday I wrote about one of those frustrating papers that asks an interesting question, but the more you look at it, the less sure you are of the results. In this case they might be right, but I think there are enough problems with the analysis that I don’t have any confidence in the conclusions.
This is going to get technical, so be warned (or go and read the latest Scientias).
So, you want to stay, then? Well, it would be good to read my previous blog post, so I don’t have to repeat myself. If you’re still interested, read on…
I found a few problems, each of which might make the results dubious, so let’s take them one by one and then I’ll sum up what I think the paper says (and what can be said). I must also thank Graham Jones for his discussion on my original post: it was very useful and clarified some of the points I raise below.
## But they look the same
One problem with looking at the distributions that the authors do is that they’re indistinguishable:
with the exception of the normal, these statistical models can produce almost indistinguishable densities, but imply different modes of causation. For example, the Weibull density simplifies to the exponential density as a special case when α = 1 and β = 1. The variable-rates model is the convolution of multiple exponentials whose individual rates are assumed to follow a gamma probability distribution–in this model α and β describe the shape and scale, respectively [13]. If this gamma distribution is very narrow (small β), then the variable-rates model converges on an exponential.
So if their analysis was fair to all of the models, these models should have roughly equal probabilities. The fact that they don’t (the exponential is hugely favoured) raises some warning flags, but we need something more solid than this.
## How to penalise a model
As I mentioned in the previous post, I think the method used is biased towards the exponential distribution. Oddly, the authors are aware of the problem and try to fix it, but aren’t aware of the consequences.
The problem is that they are comparing models with different numbers of parameters. It’s almost a truism that a model with more parameters will fit better than one with fewer: it simply has more flexibility. So if one just compares the fit of the models to the data, the models with more parameters will be better. To stop this happening, we penalise the complex models. All else being equal, we would prefer simpler models (Occam’s razor and all that). The problem is how to penalise models.
The authors of the paper use a Bayesian approach to the model fitting. For our purposes, they calculate the probability of a model given the data. This means calculating the following integral (θ is the parameters of the model):
$P(\textrm{Data} \mid \textrm{Model}) \propto \int P(\textrm{Data} \mid \theta, \textrm{Model}) \, P(\theta \mid \textrm{Model}) \, \mathrm{d}\theta$
from which the probabilities for the 1st model can be calculated:
$P(\textrm{Model}_1 \mid \textrm{Data}) = \frac{P(\textrm{Data} \mid \textrm{Model}_1) \, P(\textrm{Model}_1)}{\sum_i P(\textrm{Data} \mid \textrm{Model}_i) \, P(\textrm{Model}_i)}$
Taking the second equation first, P(Model_i) is the prior probability of the model (i.e. how likely we think it is before we see the data). We can adjust this, e.g. to make all the models equally likely or to penalise models we don’t like. The other term, P(Data | Model), is calculated in the first equation, and this can also act to penalise models. This is the marginal likelihood: the probability of the data given the model. What the first equation says is that it is averaged over the parameters of the model. In other words, we average the likelihood – P(Data | θ, Model) – over the different possible values of the parameters – P(θ | Model).
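As a toy illustration of the second equation (the numbers here are made up, not taken from the paper), the posterior model probabilities are just the marginal likelihoods weighted by the priors and normalised:

```python
import numpy as np

# Made-up marginal likelihoods P(Data | Model_i) for four competing models
marginal_lik = np.array([2.1e-12, 1.3e-12, 0.9e-12, 1.0e-12])
prior = np.full(4, 0.25)          # equal prior probability for each model

posterior = marginal_lik * prior
posterior /= posterior.sum()      # P(Model_i | Data), summing to 1
print(posterior.round(3))         # [0.396 0.245 0.17  0.189]
```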
How does this penalise models? Well, the likelihood is usually only large for a small portion of the parameter space: for most combinations of parameters it is effectively zero. Say for one parameter it takes up 1/3rd of the parameter space. For two parameters it might take up 1/3rd of each parameter’s range. If these dimensions are independent, the result will be a circle with an area of about π/36: about 9% of the total parameter space. Or, in pictures:
Of course, it’s not quite this simple (because the likelihood changes too), but this gives the general idea. Anyway, what this means is that if I make the parameter space larger, even more of it has effectively zero likelihood, so the average likelihood is lower. Hence a model with more parameters has a lower overall average – it is penalised by this extra parameter space¹.
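To see the penalty in action, here is a minimal Monte Carlo sketch (my own toy example, not the authors’ method, and assuming scipy is available): the marginal likelihood is estimated by averaging the likelihood over draws from a uniform prior, once for a one-parameter exponential model and once for a two-parameter Weibull that contains the exponential as a special case. The prior width of 3 echoes the paper’s ‘uniform scale of width three’ but is otherwise arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.exponential(scale=1.0, size=50)   # toy 'branch lengths', truly exponential

def marginal_likelihood(loglik, n_params, n_draws=20_000, width=3.0):
    """Monte Carlo average of the likelihood over a uniform prior on (0, width)^n_params."""
    theta = rng.uniform(1e-3, width, size=(n_draws, n_params))
    return np.mean([np.exp(loglik(t)) for t in theta])

m_exp = marginal_likelihood(lambda t: stats.expon.logpdf(data, scale=t[0]).sum(), 1)
m_wei = marginal_likelihood(lambda t: stats.weibull_min.logpdf(data, t[0], scale=t[1]).sum(), 2)

print(m_exp, m_wei)             # the one-parameter model usually comes out on top
print(m_exp / (m_exp + m_wei))  # its posterior probability under equal model priors
```

Even though the Weibull can fit the data at least as well at its best parameter values, averaging over the extra dimension drags its marginal likelihood down.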
Getting back to the paper, the authors have one single-parameter model (the exponential), which their results suggest is better than all the two-parameter models. Now, the obvious conclusion, given what I have written, is that this is an artifact of the change in dimensions. Well, it is, but the story is a bit more complicated, because the authors are aware of the problem and try to correct it.
What they do is to run the analysis twice:
To establish the range of these uniform priors we conducted a series of analyes [sic] with each model using priors on a 0-100 interval and then inspected the posterior distributions of the models’ parameters. … For these models, then, we chose our priors to be equal to these posteriors, with most on a uniform scale of width three.
I assume they did this for each model for each data set, but this isn’t clear. It’s also not clear what exactly they did: I assume they picked the extreme values from the posteriors and used those to determine the range of the priors in the “proper” analysis. But this is a terrible way to do it: the extremes are, well, extreme, and hence unstable. The posterior distributions are simulated: i.e. they draw a lot of values from the posterior. But, had they drawn more values, they would tend to get extremes that are further out. So, the amount by which they penalise depends on how many simulations they do. Unless they chose the number of simulations to make the distribution “right”, this is just wrong.
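The instability is easy to see in a couple of lines of code (a sketch with an arbitrary normal ‘posterior’ standing in for the real thing): the sampled extremes keep drifting outwards as the number of draws grows, so any prior range set from them depends on how long the sampler was run.

```python
import numpy as np

rng = np.random.default_rng(42)

for n in (1_000, 10_000, 100_000, 1_000_000):
    draws = rng.normal(0.0, 1.0, size=n)   # stand-in for simulated posterior draws
    print(n, round(draws.min(), 2), round(draws.max(), 2))
# The minimum and maximum move further out with more draws, so a prior range
# (and hence the penalty on the two-parameter models) based on them is not stable.
```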
What effect this has depends on which part of the paper you read. From the legend to Figure 1:
In both approaches, we used Bayesian prior distributions chosen to favour the four two-parameter models over the one-parameter exponential (Supplementary Information).
Which is contradicted by the supplementary information:
Our procedures for choosing the priors ensured that they were as narrow as possible for the two-parameter models, without constraining the parameter space the model could explore, and translates into the two-parameter models having less prior weight to overcome than if we had set priors from the usual position of relative ignorance. The average prior cost we assessed the two-parameter models translates to having to overcome a ‘debt’ of about 1.1 log-units. That is, to perform better than the exponential the two-parameter model would need to improve the log-likelihood by this amount. This is less than a typical likelihood ratio test, which uses a criterion of 1.92 log-units per parameter, or an Akaike Information Criterion test which penalises models 2 log-units per parameter.
I think the supplementary information (which nobody reads) is correct: the 1.1 comes from assuming the parameters are independent (which they look at, at least informally), so that the difference in the likelihoods is log(r) – log(π), where r is the radius. log(π) ≈ 1.14, which is their 1.1 log units.
So, the authors penalise but not enough. What makes this worse is that when they present the results, they only show the best model. The exponential is still favoured (on average: sometimes it will be worse because the sampled extremes are wider apart), so if there is really no difference between the models, it will tend to come out as being better, simply because it isn’t penalised enough.
The upshot of this is that the results that are presented are simply worthless for deciding whether the exponential is better: we would expect to see this even if there were no information in the data (sometimes the other models will do better, because by chance they fit better²). What we need to see is how strongly the exponential is favoured: a histogram of the posterior probabilities for the exponential would be a simple start.
## A Way to Measure Time
For me that would be enough to decide that the paper is uninformative, but it could be rescued. But why stop now?
The authors’ claim is that speciation occurs at a constant rate. They write:
We suppose there are many potential causes of speciation, including environmental and behavioural changes, purely physical factors such as the uplifting of a mountain range that divides two populations, or genetic and genomic changes.
and it is how these causes combine that they are interested in. But what time scale are they thinking of? It’s not the one you would think – the one measured in hours, minutes and seconds. The authors do something different: they use genetic distance, the number of substitutions (i.e. changes) in the DNA sequence – the more changes, the greater the distance. If the rate of substitution is constant, then this is equivalent to calendar time. And presumably this is the scale on which they think the Red Queen operates. After all, the rise of mountains is not connected to the rate of substitution in a species. But they then admit that the rate isn’t constant:
We used genetic branch lengths in preference to branch lengths scaled to time because all temporal-scaling methods introduce uncertain nonlinear transformations
If they think that the temporal-scaling methods are valid, then they are saying that finding an exponential distribution in their distances equates to a non-exponential distribution in real time: that’s what the non-linear transformation means. Now, they’re uncertain what transformation is correct, which is fine, but that ignorance propagates: it means that they don’t know what the real distribution of speciation times is, so they can’t conclude anything about it. Odd.
## Ignoring Extinctions
This is something I first saw raised by “Sooner Emeritus” on Uncommon Descent (an ID blog: Sooner Emeritus is one of the Loyal Opposition who hasn’t been banned from there yet. At least not banned under that name). The phylogeny that is constructed is based on the extant species: the ones that haven’t gone extinct. Do the extinctions matter? Initially I thought that they did unless the distribution was exponential. But after discussing it with Graham on the previous post, I’m now fairly certain that I was wrong, and that they always matter. The problem is illustrated below.
This is a coalescent tree: we look at them backwards, so at each generation the offspring “choose” their parent. Looking at it this way means we don’t have to worry about the extinct lineages. If we have a simple process where each parent has a random number of offspring, and a constant population size (i.e. the Wright-Fisher model), then going forwards we have (approximately) exponential extinctions and ‘speciations’: this is equivalent to the simplest Red Queen model of Venditti et al. So what happens to the branch lengths?
Well, digging into the literature (p333) we see that if we have j lineages, then the time until any pair coalesces is exponentially distributed with mean 2/(j(j-1)). But this means that the branch lengths do not all have the same distribution: if they did, the time to an event would be proportional to 1/j, not 1/(j(j-1)).
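A quick simulation of these waiting times (my sketch of the standard coalescent, with time in units of the population size) makes the point: the mean interval while j lineages remain is 2/(j(j-1)), so intervals near the tips are far shorter than the final one near the root.

```python
import numpy as np

rng = np.random.default_rng(0)
n_reps = 100_000

for j in range(6, 1, -1):                  # number of lineages remaining
    rate = j * (j - 1) / 2                 # coalescence rate with j lineages
    waits = rng.exponential(scale=1 / rate, size=n_reps)
    print(j, round(waits.mean(), 4), round(2 / (j * (j - 1)), 4))
# With j=6 the intervals average ~0.067, while the last interval (j=2) averages ~1.0:
# the waiting times in the observed tree are nowhere near identically distributed.
```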
Now, I should be a bit careful: the evolutionary process I just outlined is extremely simple. For example, it assumes that ‘extinction’ and ‘speciation’ are balanced. If speciation is quicker than extinction, so that the population size grows at a rate e^β, then the rate of coalescence is 2/(e^{βt} j(j-1)) (where t is the time), which still depends on the number of lineages. The point here is that balancing extinction and speciation so that the time to speciation in the observed tree is constant is tricky: there’s no guarantee that it can be done.
The times to each speciation event are still exponentially distributed, but the mean changes. However, this doesn’t mean that the overall distribution is exponential. The authors discuss this a bit, and show that if a branch is made up of n exponentially distributed bits, and n is geometrically distributed, then the total branch length is exponentially distributed. What this means in English is that there are two possible events: Speciation and Nothing. If the branch is made by having a random time to an event, and if there is a constant probability that the event is Speciation, then the distribution is exponential. The problem with their tree is that the probability changes: the closer you are to the root of the tree, the smaller the chance that a speciation appears in the tree: there may be more speciations, so that the overall rate is constant, but these aren’t seen in the tree because those lineages go extinct. So under a random scenario, the overall distribution of branch lengths isn’t exponential: it’s got more variance than that.
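A simulation sketch of that argument (my own toy numbers, not from the paper): if each branch is a Geometric number of exponential pieces with a constant ‘Speciation’ probability p, the total is exponential (variance/mean² = 1); let p vary from branch to branch, as it effectively does when deeper speciations are pruned away by extinction, and the variance overshoots that of an exponential.

```python
import numpy as np

rng = np.random.default_rng(0)
n_branches = 50_000

def branch_lengths(p):
    """Sum a Geometric(p) number of Exp(1) pieces for each branch."""
    pieces = rng.geometric(p)                    # one count per branch
    return np.array([rng.exponential(1.0, k).sum() for k in pieces])

constant_p = branch_lengths(np.full(n_branches, 0.3))          # same p on every branch
varying_p = branch_lengths(rng.uniform(0.1, 0.9, n_branches))  # p changes along the tree

for name, x in [("constant p", constant_p), ("varying p", varying_p)]:
    print(name, round(x.var() / x.mean() ** 2, 2))
# An exponential has variance/mean^2 = 1; the constant-p case matches it,
# while the varying-p case comes out well above 1 (more variance than an exponential).
```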
## The Authors’ Final Demonstration
Finally, the authors manage to show that they can’t tell what the distribution in the data is – possibly. What they do is remove the short branches, i.e. where the times to speciation are short:
The exponential and variable-rates models expect very short branches but the lognormal does not. If very short branches are poorly estimated in the phylogenetic inference, this could bias results in favour of the exponential. To ensure that we did not have a bias towards short branches we used uniform priors on branch lengths at the phylogenetic inference step (it makes no difference to our results if we use the conventional exponential prior on branch lengths). Before fitting the statistical models, we then removed all branches with an expectation of having less than or equal to 0.5 nucleotide substitutions per branch. Table 3S shows the results with these branches removed – they do not qualitatively differ from the complete analysis and only the exponential model achieves more than the null expectation of fitting 20% of the datasets.
The exponential distribution has a mode at zero. Removing the short branches should remove the mode, and the “true” distribution should now have less probability at short times: the other distributions can accommodate this, the exponential cannot. So, either they remove so few lineages that it doesn’t matter, or the analysis is unable to distinguish between a full data set and one with the small values chopped off.
Overall, what does this say? Well, very little. What it does say, I think, is that there isn’t a strong signal that speciation times are non-exponential. But I don’t think we can conclude that speciation times are exponentially distributed. It might be that there is more information that could be given to persuade us. But, to be honest, I doubt it: I wouldn’t expect there to be a lot of information about the distributions in the data.
This is a general issue in fitting complex models to data. If we knew the true speciation times, we could easily say what the distribution is. But we only know them uncertainly. If we try to estimate the distribution, we can get the broad pattern: the mean and variance, and perhaps the skewness away from symmetry. But really, we’ll get little else. The times are uncertain, so they can be shuffled around within their confidence bounds to fit the different distributions. This means that, unless there is a huge amount of data, distinguishing between similar distributions is (at best) difficult, and probably impossible.
Which is a shame.
¹ For those who are interested, this is how BIC, the Bayesian Information Criterion, works.
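(For reference, the usual form is BIC = k ln(n) − 2 ln(L), where k is the number of parameters, n the number of data points and L the maximised likelihood; the k ln(n) term is an approximation to exactly this kind of parameter-volume penalty.)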
² This is a good excuse to plug a paper my group threw together a couple of years ago. The short version is that different data generated with the same mechanism can strongly favour different models.
Venditti, C., Meade, A., & Pagel, M. (2009). Phylogenies reveal new interpretation of speciation and the Red Queen. Nature, 463(7279), 349–352. DOI: 10.1038/nature08630
## About rpg
Scientist, poet, gadfly
This entry was posted in Research Blogging, Science Blogging.
### 9 Responses to More on Branch Lengths and Species
1. Richard Carter, FCD says:
Amazing! That’s exactly what I was going to say!
2. Bob O'Hara says:
Yeah, but my pictures are better.
3. GrrlScientist says:
wow, and i married this smart guy! if you think this is interesting, you should hear what he says when only i am listening!
4. Mike Fowler says:
Typically interesting stuff, Dr Bob.
While we’re self publicising, I may as well point out that I’ve been pushing a similar idea — that distribution shapes matter, even when mean and variances are the same — albeit in terms of cascading extinctions in ecological communities.
So, distributions in extinction ‘time’ and extinction probability are ‘driven’ by ummm, other distributions. I think.
5. Graham Jones says:
A lot more on branch lengths and species, and well worth the wait. I’m glad you found my previous comments useful. I completely agree with your criticisms of the paper (which I still haven’t read!).
You described the coalescent tree, but usually people use a birth-death process for modelling macroevolution. The number of species is then a random variable not a constant or a given function of time. A similar pattern still holds though. If the extinction rate is close to the speciation rate, you get long branches (sometimes very long branches) near the root, and short ones near the present.
http://www.stat.berkeley.edu/~aldous/Research/Phylo/pictures.html
has some pictures of the kind of trees you get when extinction rate = speciation rate. Note that some go off the graph.
I have some code which simulates age-dependent branching processes with extinctions. I could run this and collect branch lengths… if I get round to doing this I’ll report back.
Someone should redo what Venditti did, only better. That is, use a program (eg BEAST) which estimates node times, and then look at the likelihoods of (topology plus node times) under various models. I think that leads to quite powerful tests, despite the uncertainties in the estimates.
6. Tom English says:
Hi, Bob.
I was Googling for “log-likelihood ratio” (the active information measure of Dembski and Marks takes that form), and ended up on a Nature Network page with a comment by you. A couple clicks brought me here for the first time. Imagine my shock at seeing “Sooner Emeritus.” I’m even more shocked to find that he said something non-stupid regarding Venditti et al.
Cheers,
Tom
7. Bob O'Hara says:
Mike – the empirical problem is still estimating the distribution from noisy data. I guess it might be possible with a metacommunity, where there are extinctions and recolonisations to tell you about the extinction pattern.
Graham – it doesn’t surprise me that the results are the same: all the asymptotics drive the coalescent towards a birth-death process. Your suggestion for simultaneously estimating topology and branch length is what the authors did (sorry if I wasn’t clear).
Tom – I hope Clive doesn’t see this: you’ll be banned. Banned, I say. Although you might get banned by Dembski for the latest bit of poking you’ve done at him (I see DiEb has just been threatened with bannination).
8. Graham Jones says:
Bob, it was me that wasn’t clear. I never doubted that they simultaneously estimated topology and branch lengths. But (a) it would be better to estimate topology and node times (in calendar time) and (b) it would be better to compare with models for the joint distribution of topology and node times, ie for the whole tree, not just models for the distribution of branch lengths.
9. Bob O'Hara says:
I wasn’t clear on your (b) – they actually do compare the models with the joint distributions.
|
http://mathhelpforum.com/algebra/63688-can-someone-please-break-down-me.html
|
# Thread:
1. ## Can someone please break this down for me?
X= -4√x+12
2. Originally Posted by HappyFeet
x= -4√x+12
$x-12=-4\sqrt x$
Now square both sides and solve the quadratic equation.
3. Originally Posted by HappyFeet
X= -4√x+12
What are you trying to do? Solve for x? Then:
$x = -4\sqrt{x} + 12 \Rightarrow x - 12 = -4 \sqrt{x} \Rightarrow (x - 12)^2 = 16x \Rightarrow x^2 - 40x + 144 = 0$.
One of the solutions to this equation is the solution to $x = -4\sqrt{x} + 12$, the other is not (it's an extraneous solution and is in fact the solution to $x = 4\sqrt{x} + 12$).
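To spell out the check on the two roots (a worked verification of the last step): $x^2 - 40x + 144 = 0 \Rightarrow x = \frac{40 \pm \sqrt{1600 - 576}}{2} = \frac{40 \pm 32}{2}$, so $x = 4$ or $x = 36$. Substituting back, $x = 4$ gives $-4\sqrt{4} + 12 = -8 + 12 = 4$, so it satisfies the original equation, while $x = 36$ gives $-4\sqrt{36} + 12 = -24 + 12 = -12 \neq 36$ (but $4\sqrt{36} + 12 = 36$), so it is the extraneous root.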
|
http://mathoverflow.net/questions/29413/defining-variable-symbol-indeterminate-and-parameter/29421
|
## Defining variable, symbol, indeterminate and parameter
Are there precise definitions for what a variable, a symbol, a name, an indeterminate, a meta-variable, and a parameter are?
In informal mathematics, they are used in a variety of ways, and often in incompatible ways. But one nevertheless gets the feeling (when reading mathematicians who are very precise) that many of these terms have subtly different semantics.
For example, an 'indeterminate' is almost always a 'dummy' in the sense that the meaning of a sentence in which it occurs is not changed in any way if that indeterminate is replaced by a fresh 'name' ($\alpha$-equivalence). A parameter is usually meant to represent an arbitrary (but fixed) value of a particular 'domain'; in practice, one frequently does case-analysis over parameters when solving a parametric problem. And while a parameter is meant to represent a value, an 'indeterminate' usually does not represent anything -- unlike a variable, which is usually a placeholder for a value. But variables and parameters are nevertheless qualitatively different.
The above 2 paragraphs are meant to make the intent of my question (the first sentence of this post) more precise. I am looking for answers of the form "an X denotes a Y".
-
@Jacques, such philosophical questions are perfect as community wiki. – Wadim Zudilin Jun 25 2010 at 7:24
@Wadim: I was rather hoping that this was not really philosophical (anymore). Isn't this so basic that there should be definitions? – Jacques Carette Jun 25 2010 at 11:36
@Jacques: this is the type of thing that is so basic that there are not mathematical definitions. Unless mathematicians were proving things about variables, why would they need to define them precisely? – Carl Mummert Apr 27 2011 at 10:14
@Carl: This is exactly what foundation studies are all about: take things that were thought basic and define them formally. Both logic and set theory did that, for a part of mathematics. But it appears that there are still parts of mathematics which are not formal. Why is that? – Jacques Carette Apr 27 2011 at 12:30
## 6 Answers
In written English (and of course other languages), we have linguistic constructs which tell the reader how to approach the ideas that are about to be presented. For example, if I begin a sentence with "However, . . .", the reader expects a caution about a previously stated proposition, but if I begin the sentence with "Indeed, . . . ", the reader expects supporting evidence for a position. Of course we could completely discard such language and the same ideas would be communicated, but at much greater effort. I regard the words "variable", "constant", "parameter", and so on, in much the same way I regard "however", "indeed", and "of course"; these words are informing me about potential ways to envision the objects I am learning about. For example, when I read that "$x$ is a variable", I regard $x$ as able to engage in movement; it can float about the set it is defined upon. But if $c$ is an element of the same set, I regard it as nailed down; "for each" is the appropriate quantifier for the letter $c$. And when (say) $\xi$ is a parameter, then I envision an uncountable set of objects generated by $\xi$, but $\xi$ itself cannot engage in movement. Finally, when an object is referred to as a symbol, then I regard its ontological status as in doubt until further proof is given. Such as: "Let the symbol '$Lv$' denote the limit of the sequence $\lbrace L_{n}v \rbrace_{n=1}^{\infty}$ for each $v \in V$. With this definition, we can regard $L$ as a function defined on $V$. . . "
So in short, I regard constructing precise mathematical definitions for these terms as equivalent to getting everyone to have the same mental visions of abstract objects.
-
Excellent! Your discourse above is very reminiscent (to me) of the same discourse present in Leibniz's work, and later Frege as well as Russell, which served to really clarify the mathematical vernacular. The informal use of words led to some formalizations (think set theory and logic) which really help put mathematics on a much more solid foundation than before. This is now established canon. But why do other words in common mathematical usage (with mathematical meaning) escape this treatment? – Jacques Carette Apr 27 2011 at 12:36
I believe that "variable", "constant", and "parameter" have identical set theoretic meaning, as they operate as adjectives describing elements of sets within any given proof, and the validity of proofs depends only on the properties of the elements of the sets under consideration, not the adjectives used to describe the elements. So though we regard "variable" as a noun, it arises from the mental abstraction of an adjective. Objects which seem to be amenable to precise mathematical definitions seem to arise as abstractions of nouns. (That's the best answer I can come up with unfortunately!) – Nick Thompson Apr 28 2011 at 3:02
Regarding the status of variables, you probably want to look at Chung-Kil Hur's PhD thesis "Categorical equational systems: algebraic models and equational reasoning". Roughly speaking, he extends the notion of formal (as in formal polynomials) to signatures with binding structure and equations. He was a student of Fiore's, and I think they've been interested in giving better models (inspired by the nominal sets approach) to things like higher-order abstract syntax. I've been meaning to read his thesis for a while, to see if his treatment of variables can suggest techniques that could be used for writing reflective decision procedures which work over formulas with quantifiers.
For schematic variables or metavariables, there's a formal treatment of them in MJ Gabbay's (excellently-titled) paper "One and a Halfth-Order Logic"
-
I've been reading other of Gabbay's papers, which I have been greatly enjoying. I have tried several times to read Fiore's work, but my understanding of CT is just not strong enough to cope. It appears quite unfortunate, since he does seem to have a lot to say about questions I have been asking myself. – Jacques Carette Jun 24 2010 at 20:09
Intriguing question...
If there are definitions then as far as I know they're pretty much unspoken ones. Maybe someone has actually codified them somewhere, but I'm guessing not – so I'm going to take a few guesses and stick this answer as community wiki in a bid to get some kind of consensus:
I can (fairly confidently) vouch for
Variable: The argument of a function (sometimes a truth function :))
Indeterminate: Dummy variable used to prove statements with universal quantifiers
Parameter: A numerical variable determining an object
I would take guesses at:
Symbol: A function or functional (ie. more complex than simply an object) that is a variable
Name: The argument of a truth function
And I have no idea about:
Metavariable: ?
-
Feel free to mess about with this ^^^^^^^^ – Tom Boardman Jun 24 2010 at 19:46
This is indeed what I was looking for. Some (like indeterminate) seem quite close to the mark. I am less keen on your characterization of 'variable' though. Don't variables sometimes occur outside the scope of a function? [Although another view might be that they don't, and those who use variable in that context are guilty of either sloppiness or misunderstanding of what a variable is.] – Jacques Carette Jun 24 2010 at 20:06
Of the various types of "placeholder", certainly a couple have definite mathematical meanings. In logic, the meaning of free and bound variables is set out in detail. And I take "indeterminate" to be a term used with a precise meaning in algebra; in polynomial rings, for example, the indeterminates are not exactly independent variables in the conventional sense of functional notation.
-
I know this - I could have put all of that in my question (which was long enough as it is). What I am seeking is exactly some of those precise meanings. I do know the precise meanings from logic (I did mention $\alpha$-equivalence, right?). But the meaning of a term in logic is not always the meaning of that term in the rest of mathematics... – Jacques Carette Jun 24 2010 at 19:10
It might be clearer to ask first where the dominant idea of "function", as mathematicians now understand it, is not the only useful one. And then ask for the descriptive terms to be clarified. – Charles Matthews Jun 24 2010 at 21:01
It seems to me that this PhD thesis may well contain answers that I find satisfying. The discussion on p.52 is particularly appealing, but the whole thesis is strewn with similar passages discussing mathematical terms which are frequently left (formally) undefined in the mathematical literature.
Warning: many people who post here would likely call this thesis part mathematical philosophy and part computer science, and find little modern mathematics in it. But then again, as mathematicians seem to be trying to take type theory back for their own, maybe this kind of work will come back in vogue too.
-
I used to worry a lot about the “ontological status of variables”, and I was eventually able to achieve a modicum of ontological security, at least, with respect to the simple domains that interested me, by taking a pattern-theoretic view of variables. In this view, you shift the question from the status of an isolated variable name like “$x$” to the syntactic entity “$S \ldots x \ldots$” in which the variable name occurs. You may now view “$S \ldots x \ldots$” as a name denoting the objects denoted by its various substitution instances.
-
You're telling me that variables are different when viewed intensionally or extensionally. We agree. So, can you formalize not just variable, but each of the terms I gave, with an explicit denotation? – Jacques Carette Jun 24 2010 at 19:13
@ Jacques Carette –– That would require an excursion into details that experience tells me are not likely to be tolerated here. Shrift as shortly as possible, it helps to have the classical notion of general or plural denotation, which classical thinkers deployed to good effect long before they had classes. – Jon Awbrey Jun 24 2010 at 19:28
|
http://math.stackexchange.com/questions/tagged/triangle+circle
|
# Tagged Questions
3answers
57 views
### Geometry - Equilateral triangle covered with five circles
I have to cover an equilateral triangle (whose sides are 1m long) with 5 identical circles: what's the minimum radius of the circles?
2answers
116 views
### Calculating circle radius from two points and arc length
For a simulation I want to convert between different kind of set point profiles with one being set points based on steering angles and one being based on circle radius. I have 2 way points the ...
1answer
35 views
### Similarity of triangles in a circle
The problem: c is a circle with a diameter AB. t is the tangent at the point B. Now C and D are two points on t and at different sides of B. I draw the line segments AC and AD, the point where AC ...
0answers
114 views
### Finding side and angle of isosceles triangle inside two circles
I'm having a problem that I'm not sure how to solve (or if it's even possible). It's not homework, just something i'm struggling with for a project. :) Basically, there are two circles, represented ...
4answers
105 views
### Circle/Triangle math problem
The question asks to find angles $\angle X$ and $\angle Y$, however I don't know how to do this without assuming that lines $\overline {GO}$ and $\overline{OJ}$ are parallel. The only angle given is ...
4answers
137 views
### How to know location of a point?
I have a circle formed with three given points. How can i know whether another given point is inside the circle formed by previous three points. Is it determinant i need to calculate? Then what are ...
1answer
443 views
### Calculating circle radius from two points on circumference (for game movement)
I'm designing a game where objects have to move along a series of waypoints. The object has a speed and a maximum turn rate. When moving between points p1 and p2 it will move in a circular curve ...
3answers
67 views
### Prove that point M is on circle c
It's hard to create question names that make sense. Anyhow, the following is another question from my math assignment. Line-segment AB has a fixed length of 10 units. point A moves on the positive ...
1answer
645 views
### How to calculate radius when I know the tangent line length?
For my math homework, I was asked this question: The tangent lines from O hit a circle with center M and radius r in R and S. Calculate r. -The length of OR and OS is 4 How do I calculate the ...
1answer
311 views
### Area of triangle ABC inside circle
Consider the following diagram: $AB+AD=DE$, $\angle BAD= 60$, and $AE$ is $6$. How do we find the area of the triangle $ABC$?
1answer
85 views
### find distance from point in circle to perimiter
If I have the following circle, with centre in red and a random point in the circle in blue. I know the radius ,r, length of d, and the angle p: I then create a a new green point and I know the ...
3answers
69 views
### Calculate incircle radius.
A circle is inscribed in a right angled triangle ABC where AC is the hypotenuse. The circle touches AC at point P. Length of AP = 2unit and CP = 4 units. What is the radius of the circle?
2answers
298 views
### How to calculate radius of flush arch between two intersecting lines?
I am trying to make a corner of a robot I am designing flush for aesthetic reasons as well as safety reasons but I'm not sure how to make the arch of the corner lay flush with the two lines that make ...
1answer
401 views
### How many triangles can be formed from N points on a circle?
I have a circle with N points on it, and I want to determine how many triangles can be formed using these points. How can I do this? Thanks! Andrew
4answers
954 views
### Perimeter of Triangle inside a circle
If the circle has a radius of 4, what is the perimeter of the inscribed equilateral triangle? Answer: $12\sqrt{3}$
3answers
327 views
### Triangle Inside Circle
If the radius of the circle is equal to the length of the chord $AB$, what is the value of $x$? How would I solve this problem ?
3answers
99 views
### Determining an angle without Trig. ratios
I am trying to solve the current problem If O is the center of a circle with diameter 10 and the perimeter of AOB=16 then which is more x or 60 Now I know the triangle above is an ...
2answers
178 views
### Angles of a triangle inside a circle
In the figure shown if area of circle with center o is 100pi and CA has length of 6 what is length of AB ? I looked around on the web and cant seem to get an idea of what the angles AOC ...
1answer
218 views
### 3D intersection point between circle and triangle
Given a 3D triangle with vertices $(v0, v1, v2)$ and a 3D circle of radius $r$, centered at $c$, and lying in the plane perpendicular to $axis$, how can I test for intersection points between them? ...
3answers
248 views
### angle of an inscribed triangle
I have a scalene triangle inscribed in a circle, one of its sides $a$ is $2\sqrt3$ and the length $r$ from that side to the center is $1$. I need to find the angle $x$ opposite to the side given. ...
2answers
378 views
### Find the radius of the circle?
Three circles of equal radii have been drawn inside an equilateral triangle , of side a , such that each circle touches the other two circles as well as two sides of triangle. Then find the radius ...
1answer
1k views
### How does this equation to find the radius from 3 points actually work?
I had searched online and found an equation that solves the radius of a circle from 3 points that are located on the circumference of that specific circle. Where I had found this formula did not state ...
0answers
75 views
### Get value of angle with 45 degrees as maximum and 0 and 90 degrees as minimum
I want the calculate the "value" of an angle in such a way that: The angle of 45 degrees corresponds with the maximum value of 1 The angles of 0 and 90 degrees correspond with the minimum value of 0 ...
2answers
107 views
### Find the ratio in which the circle divides each of the sides AB and AC?
A circle passes through the vertex A of an equilateral triangle ABC and is tangent to BC at its midpoint . Find the ratio in which the circle divides each of the sides AB and AC? Does the line ...
1answer
145 views
### Similar Right Triangles and Incircles [duplicate]
Possible Duplicate: Triangle and Incircle In a setup of right triangles ABC, BDA, and BDC not unlike this diagram (click on the link, and ignore the written side measures and subtext in ...
1answer
82 views
### Finding a point which is constrained to 3 other points.
Is there an easy way to find the 4th point given 3 fixed points and a different minimum length between the 4th point and each of the 3 points? Similar to this question, but with non-fixed minimum ...
2answers
445 views
### Sangaku: Show line segment is perpendicular to diameter of container circle
"From a 1803 Sangaku found in Gumma Prefecture. The base of an isosceles triangle sits on a diameter of the large circle. This diameter also bisects the circle on the left, which is inscribed so that ...
1answer
254 views
### A plane Geometry Problem
The triangle $ABC$ has $CA=CB$, circumcenter $O$ and incenter $I$. The point $D$ on $BC$ is such that $DO$ is perpendicular $BI$. Show that $DI$ is parallel to $AC$.
1answer
116 views
### What is the ratio of the area?
If the segment A'B' is tangent to the incircle of triangle ABC, and that segment AB = segment CM; then, what is the ratio of the area of the triangle ABC to the area of the small triangle A'B’C? ...
1answer
248 views
### Get the relation between X and Y axes in triangle based on the degree between
I have a given degree (0 - 360), and based on it, I'd like to be able to calculate the length of X and Y axis of a triangle built on that angle , if the third side of that triangle is equal to 1. I ...
1answer
481 views
### Positioning three circles, all of them touching each other
There are three circles, all of them touching each other. The bottom two circles are laying on an imaginary floor, such that they touch the line g=-r as well. Given are all three radii, r1 (A), r2 ...
|
http://mathoverflow.net/questions/100915?sort=newest
|
## Number of generators of $m$-primary ideals in $k[x, y]$
Let $R = k[x, y]$ with $k$ algebraically closed, and $m = (x, y)$. Suppose $I$ is an $m$-primary ideal of $R$, i.e., $(x, y)^n \subset I \subset (x, y)$ for some $n$. If $I_m$ is generated by a regular sequence of length 2, i.e., $I_m = aR_m + bR_m$ where $a, b$ is a regular sequence in $R_m$, what can we say about the number of generators of $I$ in this case?
All my examples show that $I$ is generated by a regular sequence of length 2, yet I have not found a proof.
Thanks,
-
## 1 Answer
This is a standard result using elementary homological algebra. If $I$ is a height two local complete intersection ideal in $R=k[x,y]$, then it is a complete intersection. Under the hypothesis, it follows that $\mathrm{Ext}^1_R(I,R)$ is isomorphic to $R/I$, this being a local calculation and Chinese remainder theorem. The extension corresponding to $1\in R/I$ is $0\to R\to P\to I\to 0$ and one checks that $P$ is $R$-projective of rank two, since by choice $\mathrm{Ext}^1_R(P,R)=0$, and hence free (by Seshadri's theorem).
-
|
http://math.stackexchange.com/questions/240918/tangent-spaces-at-different-points-and-the-concept-of-connection?answertab=votes
|
# Tangent spaces at different points and the concept of connection
If $M$ is a smooth manifold and $TM$ is the tangent bundle clearly $T_pM\cong T_qM$ (as vector spaces) for every $p,q\in M$. Nobody ensures that the previous vector spaces isomorphism is natural (or canonical). In $\mathbb R^n$ we have that $T_p\mathbb R^n$ and $T_q\mathbb R^n$ are naturally isomorphic to $\mathbb R^n$ so we can differentiate a vector field along a direction, in the usual way so taking the directional derivatives of each component.
If the isomorphism between tangent spaces at different points isn't natural, why can't we differentiate a vector field in the usual way? The problem is comparing vectors belonging to different (isomorphic) vector spaces; but we can send the two vectors, via an isomorphism, into a common vector space and then subtract them. What is the importance of a natural isomorphism?
-
If you change what you mean by "with an isomorphism in a common vector space" then your result pulled back to the original tangent spaces will change. That is, what you are proposing will depend on how you make your two tangent spaces isomorphic to a common vector space. There are lots of isomorphisms between two vector spaces of the same dimension. – KCd Nov 19 '12 at 22:35
It is true that there are lots of isomorphisms between vector spaces, but there are also lots of natural isomorphisms. For example canonical automorphisms of $V$ are the elements of $Z(GL(V))$, so homothetic transformations. – Galoisfan Nov 20 '12 at 19:24
So even if I choose a natural isomorphism, the directional derivative will not depends on the basis imposed on tangent spaces, but it depends on the choice of the natural isomorphism. – Galoisfan Nov 20 '12 at 19:28
What Kofi says is not quite right (e.g., a finite-dimensional vector space and its double dual are naturally isomorphic), but I suppose what Kofi means is that two vector spaces of the same dimension are generally not isomorphic in a canonical way. Galoisfan, do you think tangent spaces at different points of a general (abstract) manifold are naturally isomorphic in some way? What does the term "natural isomorphism" mean to you? – KCd Nov 20 '12 at 21:13
There is no such thing as a canonical isomorphism in general between two vector spaces and it is hopeless to prove there are "lots" of them between two fixed vector spaces. As an analogy, there is no such thing as a natural inner product on a (finite-dim.) real vector space. Let $V$ be the vector space of real polynomials $a+bx+cx^2$ that vanish at 45. There are many inner products on $V$, but none is "natural" or "canonical", right? – KCd Nov 21 '12 at 16:01
## 2 Answers
If a manifold has curvature, then parallel transport of vectors depends on the path. In other words, any map between tangent spaces at different points is dependent on an arbitrary choice which is particular for each pair of points and each particular manifold. It is the same problem as when relating a finite-dimensional vector space $V$ with its dual $V^*$. In order to do so, we need to fix a basis for $V$.
In regards to the importance of naturality, informally a transformation is called natural if it is "independent of its source" in a sense. For example, the isomorphism between $V$ and $V^{**}$, the double algebraic dual, is natural. We send $v\in V$ to $v^{**}\in V^{**}$ such that $v^{**}(f)=f(v)$ for all $f\in V^*$. Note that this does not reference a choice of basis, or indeed anything else particular about $V$, and so this isomorphism can be carried out consistently over all finite-dimensional vector spaces over a common field. Quoting Wikipedia, a transformation is not natural if it cannot be extended consistently over the entire category in question.
-
many books say, without explaining the meaning of the adjective "natural", that parallel transport gives a way to naturally identify tangent spaces at points which can be connected by a path. – Galoisfan Nov 25 '12 at 11:54
Let me explain my doubt better: if on a smooth manifold we don't have a natural identification between tangent spaces, then we can't define a derivative of a vector field as "a limit of a quotient". Once we define a parallel transport, then a covariant derivative of a vector field $V$ along a curve $\gamma$ is $$\lim_{t\to t_0}\frac{P^{-1}_{t_0,t}V(t)- V(t_0)}{t-t_0}$$ So I guess that $P_{t_0,t}$ is a natural isomorphism because it allows us to write the covariant derivative as a limit of a quotient. – Galoisfan Nov 25 '12 at 12:11
just quoting the book "Manifold and Differential Geometry - Jeffrey M. Lee" on page 502, talking about the Koszul connection: "...The definition of this connection takes advantage of the natural identification of tangent spaces..." – Galoisfan Nov 25 '12 at 13:11
I haven't read Lee's book, and I am not too familiar with this construction, but this isomorphism seems to depend on the choice of metric for the manifold and so cannot be natural. Maybe he means that for a manifold with this extra structure, the tangent spaces have natural isomorphisms between them (I cannot see why this would be true), but then we are no longer talking about general differentiable manifolds, and there are in general several connections on a given manifold. – espen180 Nov 25 '12 at 14:12
Why don't you let $M = S^2 \subset \mathbb{R}^3$? The equation for the tangent plane is $$T_{(a,b,c)}S^2 = \{ (x-a)a+(y-b)b+(z-c)c = 0 \}$$ This is a collection of planes parameterized by points $(a,b,c): a^2 + b^2 + c^2 = 1$.
We can define an initial vector $\vec{x}=(x,y,z) \in T_{(a,b,c)}S^2$. Then we can ask $$\vec{x}(t + \delta t) = A(t) \vec{x}(t) \in T_{(a,b,c) + \vec{v}\delta t}S^2$$ For any time-parameterized matrix $A$, we should get a connection on the sphere that is not necessarily the Levi-Civita connection, since this flow is not the geodesic flow.
-
|
http://mathhelpforum.com/calculus/136149-taylor-series-cos-proof.html
|
# Thread:
1. ## Taylor series of cos proof
Prove that the Taylor series for y = cos x, centred at 0, converges to cos x
for every x in R.
How would I do this?
2. You can use either the ratio test or the root test to show that the series converges for all $x$.
3. That won't show that it converges to cos x though...wouldn't it just show whether it is convergent or not.
4. Nevermind, I'll assume I don't have to prove the Taylor series expansion of cos x converges to cos x.
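For completeness, the standard argument that it does converge to $\cos x$ uses Taylor's theorem with the Lagrange remainder: every derivative of $\cos$ is $\pm \sin$ or $\pm \cos$, hence bounded by 1 in absolute value, so $|R_n(x)| = \left|\frac{f^{(n+1)}(c)}{(n+1)!}x^{n+1}\right| \leq \frac{|x|^{n+1}}{(n+1)!} \to 0$ as $n \to \infty$ for every fixed $x \in \mathbb{R}$, because the factorial grows faster than any geometric sequence. Hence the Taylor polynomials converge to $\cos x$ everywhere.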
|
http://physics.stackexchange.com/questions/31374/how-does-higgs-boson-get-the-rest-mass?answertab=oldest
|
# How does Higgs Boson get the rest mass?
The Higgs boson detected at the LHC is massive. Its high relativistic mass means it has a non-zero rest mass.
The Higgs boson gives other things rest mass. But how does it get rest mass itself?
-
– Manishearth♦ Jul 5 '12 at 13:16
@Manishearth Put that as answer... – Sachin Shekhar Jul 5 '12 at 13:21
I'm really not sure about it, and I don't know enough to put it as an answer. Ping me again if nobody else answers :) – Manishearth♦ Jul 5 '12 at 13:22
@Manishearth (3 comments up) you could sort of say that's true, but the second part would be the vacuum expectation value, not an oscillation. – David Zaslavsky♦ Jul 5 '12 at 13:48
– Qmechanic♦ Jan 5 at 15:19
## 4 Answers
The Higgs gives the W's, Z's, quarks, leptons, and (probably) the neutrinos a mass, but it has hardly anything to do with the proton mass; it only contributes a tiny amount, roughly equal to the proton–neutron mass difference. Its own mass is a fundamental parameter in the standard model, and the natural value is the Planck mass.
Physicists call the question of why the Higgs is so much lighter than the Planck mass "the hierarchy problem".
-
If it gives quarks mass, how couldn't protons get mass as they are made up of quarks? – Sachin Shekhar Jul 5 '12 at 14:06
@SachinShekhar: It gives about 1% of the proton mass, the rest is confinement processes. The quarks in the proton are not slow, it's not like an H atom. Most of the mass of the proton is glue and quark kinetic energy. The quark rest-mass is a 1% correction. – Ron Maimon Jul 5 '12 at 14:16
Can we take quark kinetic energy as REST mass? Isn't it a relativistic ju-ju? – Sachin Shekhar Jul 5 '12 at 14:54
@SachinShekhar: The quark kinetic energy is a contribution to the rest mass when the total momentum of everything adds up to zero all together. This type of common confusion is why I think it is important to ignore Einstein and follow Tolman and use the term "relativistic mass" as a synonym for energy, and use the term "rest mass" for rest mass. – Ron Maimon Jul 5 '12 at 15:04
@Ron: doesn't the kinetic energy of the quarks always contribute to the 'rest mass' of the hadron? The (invariant) mass of a system is given by $m^2 = E^2 - p^2$ in every frame – Christoph Jul 5 '12 at 16:42
You have to read a bit about the Higgs mechanism.
In particle physics, the Higgs mechanism (also called the Brout–Englert–Higgs mechanism, Englert–Brout–Higgs–Guralnik–Hagen–Kibble mechanism,1 and Anderson–Higgs mechanism) is the process that gives mass to elementary particles. The particles gain mass by interacting with the Higgs field that permeates all space.
..................
The simplest implementation of the mechanism adds an extra Higgs field to the gauge theory. The spontaneous symmetry breaking of the underlying local symmetry triggers conversion of components of this Higgs field to Goldstone bosons which interact with (at least some of) the other fields in the theory, so as to produce mass terms for (at least some of) the gauge bosons. This mechanism may also leave behind elementary scalar (spin-0) particles, known as Higgs bosons.
*This left-over spin-0 particle is the Higgs* that we hope has been discovered, if nature follows the simplest Higgs mechanism implementation. It acquires its mass by the Higgs mechanism too. That is why the scientists are careful to state that more work is needed to establish what type of Higgs particle this is. ............
The Higgs mechanism was incorporated into moder particle physics by Steven Weinberg and Abdus Salam, and is an essential part of the standard model.
In the standard model, at temperatures high enough so that electroweak symmetry is unbroken, all elementary particles are massless. At a critical temperature the Higgs field becomes tachyonic, the symmetry is spontaneously broken by condensation, and the W and Z bosons acquire masses. (EWSB, ElectroWeak Symmetry Breaking, is an abbreviation used for this.)
Fermions, such as the leptons and quarks in the Standard Model, can also acquire mass as a result of their interaction with the Higgs field, but not in the same way as the gauge bosons.
.............
-
The mass of the Higgs boson is a free parameter of the standard model and not (only) due to the interactions with a non-zero Higgs field.
If the Higgs field were zero, the standard model would predict four massive Higgs bosons, which would be the only massive particles. In the case of a non-zero Higgs field, only one of them gains some extra mass via Higgs field interactions and becomes 'the' Higgs boson (assuming that there is only one), while the remaining Higgs bosons mix with the isospin and hypercharge gauge bosons to form the electroweak gauge bosons.
-
Forget about relativistic mass; it's an outdated and, in this case, irrelevant concept. The Higgs boson has a rest mass of about $125\ \mathrm{GeV}/c^2$ assuming it is in fact what the LHC has found.
Anyway, I would say that the Higgs boson does not actually give other particles mass directly; instead, it's a side effect of the mechanism by which those other particles become massive. It just naturally turns out that the particle produced by this mechanism has to be a massive particle itself.
Or to put it another way, the Higgs field would not be able to give other particles mass if it were not itself massive. Take a look at the "Mexican hat" potential shown in this site's logo. The bump in the middle arises because the Higgs field has an associated mass, the mass of the Higgs boson. That bump pushes the "natural" state of the Higgs field off center, which means the field has a nonzero "default" value, called the vacuum expectation value. It's that vacuum expectation value that gives other particles mass. Without the bump, the minimum of the potential would be in the center, which means the vacuum expectation value of the Higgs field would be zero, which in turn would render it incapable of giving other particles mass.
I'll refer you to another answer of mine for some of the mathematical detail.
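As a rough sketch of the standard textbook picture behind that "Mexican hat" description (my addition, not part of the original answer), the potential and the masses it generates can be written as

$$V(\phi) = -\mu^2\,\phi^\dagger\phi + \lambda\,(\phi^\dagger\phi)^2, \qquad \mu^2,\ \lambda > 0,$$

whose minimum sits at a nonzero vacuum expectation value $v = \sqrt{\mu^2/\lambda}$ (about $246\ \mathrm{GeV}$). Expanding around that minimum, the bump at $\phi = 0$ translates into a mass for the Higgs excitation itself, $m_H = \sqrt{2\lambda}\,v$, while a fermion coupled to $\phi$ with Yukawa coupling $y$ picks up $m_f = y v/\sqrt{2}$. The numerical value of $\lambda$ (and hence $m_H$) is an input of the model, which is the sense in which the Higgs mass is a free parameter.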
-
Can you please link me to official CERN source which says the value was "Rest" mass? – Sachin Shekhar Jul 5 '12 at 14:34
@Sachin: as far as I know (not a quantum field-theorist), QFT always assumes 'rest mass'/'invariant mass' – Christoph Jul 5 '12 at 16:29
Yes, the $m$ in the equations of quantum field theory is rest mass. You do not need a source from CERN to tell you that; check a good QFT textbook. – David Zaslavsky♦ Jul 5 '12 at 18:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9372857809066772, "perplexity_flag": "middle"}
|
http://xianblog.wordpress.com/tag/missing-data/
|
# Xi'an's Og
an attempt at bloggin, from scratch…
## more typos in Monte Carlo statistical methods
Posted in Books, Statistics, University life with tags capture-recapture, EM algorithm, frequentist inference, integer set, Jensen's inequality, missing data, Monte Carlo Statistical Methods, optimisation, typos, UNC on October 28, 2011 by xi'an
Jan Hanning kindly sent me this email about several difficulties with Chapters 3, Monte Carlo Integration, and 5, Monte Carlo Optimization, when teaching out of our book Monte Carlo Statistical Methods [my replies in italics between square brackets, apologies for the late reply and posting, as well as for the confusion thus created. Of course, the additional typos will soon be included in the typo lists on my book webpage.]:
1. I seem to be unable to reproduce Table 3.3 on page 88 – especially the chi-square column does not look quite right. [No, they definitely are not right: the true χ² quantiles should be 2.70, 3.84, and 6.63, at the levels 0.1, 0.05, and 0.01, respectively. I actually fail to understand how we got this table that wrong...] (A quick numerical check of these quantiles appears after this list.)
2. The second question I have is the choice of the U(0,1) in this Example 3.6. It feels to me that a choice of Beta(23.5,18.5) for p1 and Beta(36.5,5.5) for p2 might give a better representation based on the data we have. Any comments? [I am plainly uncertain about this... Yours is the choice based on the posterior Beta coefficient distributions associated with Jeffreys prior, hence making the best use of the data. I wonder whether or not we should remove this example altogether... It is certainly "better" than the uniform. However, in my opinion, there is no proper choice for the distribution of the pi's because we are mixing there a likelihood-ratio solution with a Bayesian perspective on the predictive distribution of the likelihood-ratio. If anything, this exposes the shortcomings of a classical approach, but it is likely to confuse the students! Anyway, this is a very interesting problem.]
3. My students discovered that Problem 5.19 has the following typos, copying from their e-mail: “x_x” should be “x_i” [sure!]. There are a few “( )”s missing here and there [yes!]. Most importantly, the likelihood/density seems incorrect. The normalizing constant should be the reciprocal of the one showed in the book [oh dear, indeed, the constant in the exponential density did not get to the denominator...]. As a result, all the formulas would differ except the ones in part (a). [they clearly need to be rewritten, sorry about this mess!]
4. I am unsure about the if and only if part of the Theorem 5.15 [namely that the likelihood sequence is stationary if and only if the Q function in the E step has reached a stationary point]. It appears to me that a condition for the “if part” is missing [the "only if" part is a direct consequence of Jensen's inequality]. Indeed Theorem 1 of Dempster et al 1977 has an extra condition [note that the original proof for convergence of EM has a flaw, as discussed here]. Am I missing something obvious? [maybe: it seems to me that, once Q reaches a fixed point, the likelihood L does not change... It is thus tautological, not a proof of convergence! But the theorem says a wee more, so this needs investigating. As Jan remarked, there is no symmetry in the Q function...]
5. Should there be a (n-m) in the last term of formula (5.17)? [yes, indeed!, multiply the last term by (n-m)]
6. Finally, I am a bit confused about the likelihood in Example 5.22 [which is a capture-recapture model]. Assume that Hij=k [meaning the animal i is in state k at time j]. Do you assume that you observed Xijr [which is the capture indicator for animal i at time j in zone k: it is equal to 1 for at most one k] as a Binomial B(n,pr) even for r≠k? [no, we observe all Xijr's with r≠k equal to zero] The nature of the problem seems to suggest that the answer is no [for other indices, Xijr is always zero, indeed] If that is the case I do not see where the power on top of (1-pk) in the middle of the page 185 comes from [when the capture indices are zero, they do not contribute to the sum, which explains for this condensed formula. Therefore, I do not think there is anything wrong with this over-parameterised representation of the missing variables.]
7. In Section 5.3.4, there seems to be a missing minus sign in the approximation formula for the variance [indeed, shame on us for missing the minus in the observed information matrix!]
8. I could not find the definition of $\mathbb{N}^*$ in Theorem 6.15. Is it all natural numbers or all integers? May be it would help to include it in Appendix B. [Surprising! This is the set of all positive integers, I thought this was a standard math notation...]
9. In Definition 6.27, you probably want to say covering of A and not X. [Yes, we were already thinking of the next theorem, most likely!]
10. In Proposition 6.33 - all x in A instead of all x in X. [Yes, again! As shown in the proof. Even though it also holds for all x in X]
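Referring to item 1 above: as a quick sanity check of those χ² quantiles (my addition, not part of the original exchange), one can compute them directly, for instance in Python with SciPy:

```python
from scipy.stats import chi2

# Upper-tail chi-square(1) critical values at levels 0.1, 0.05, 0.01
for level in (0.10, 0.05, 0.01):
    print(level, chi2.ppf(1 - level, df=1))
# roughly 2.71, 3.84 and 6.63
```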
Thanks a ton to Jan and to his UNC students (and apologies for leading them astray with those typos!!!)
## Typo in Example 5.18
Posted in Books, R, Statistics, University life with tags EM algorithm, missing data, Monte Carlo Statistical Methods, typos on October 3, 2010 by xi'an
Edward Kao is engaged in a detailed parallel reading of Monte Carlo Statistical Methods and of Introducing Monte Carlo Methods with R. He has pointed out several typos in Example 5.18 of Monte Carlo Statistical Methods which studies a missing data phone plan model and its EM resolution. First, the customers in area i should be double-indexed, i.e.
$Z_{ij}\sim\mathcal{M}(1,(p_1,\ldots,p_5))$
which implies in turn that
$T_i=\sum_{j=1}^{n_i}Z_{ij}$.
Then the summary T should be defined as
$\mathbf{T}=(T_1,T_2,\ldots,T_n)$
and $W_5$ as
$W_5=\sum_{i=m+1}^nT_{i5},$
given that the first m customers have the fifth plan missing.
## JSM 2010 [day 1]
Posted in R, Statistics, University life with tags ABC, auxiliary variable, Bayesian non-parametrics, cloud computing, GPU, JSM 2010, missing data, mixtures, multithreading, parallelisation, Vancouver on August 2, 2010 by xi'an
The first day at JSM is always a bit sluggish, as people slowly drip in and get their bearings. Similar to last year in Washington D.C., the meeting takes place in a huge conference centre and thus there is no feeling of overcrowding [so far]. It may also be that the peripheral and foreign location of the meeting put some regular attendees off (not to mention the expensive living costs!).
Nonetheless, the Sunday afternoon sessions started with a highly interesting How Fast Can We Compute? How Fast Will We Compute? session organised by Mike West and featuring Steve Scott, Marc Suchard and Qanli Wang. The topic was parallel processing, either via multiple processors or via GPUs, the latter relating to the exciting talk Chris Holmes gave at the Valencia meeting. Steve showed us some code to explain how feasible the jump to parallel programming is (a point demonstrated by Julien Cornebise and Pierre Jacob after they returned from Valencia), while stressing the fact that a lot of the processing in MCMC runs is open to parallelisation. For instance, data augmentation schemes can allocate the missing data in a parallel way in most problems, and the same goes for independent data likelihood computations. Marc Suchard focussed on GPUs and phylogenetic trees, both of high interest to me!, and he stressed the huge gains (of the order of hundreds in the decrease in computing time) made possible by the exploitation of laptop [Macbook] GPUs. (If I got his example correctly, he seemed to be doing an exact computation of the phylogeny likelihood, not an ABC approximation… Which is quite interesting, if potentially killing one of my main areas of research!) Qanli Wang linked both previous talks with the example of mixtures with a huge number of components. Plenty of food for thought.
I completed the afternoon session with the Student Paper Competition: Bayesian Nonparametric and Semiparametric Methods, which was discouragingly empty of participants, with two of the five speakers missing and fewer than twenty people in the room. (I did not get who was ranking the papers in the competition. Not the participants, apparently!)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 6, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9314485788345337, "perplexity_flag": "middle"}
|
http://windowsontheory.org/2012/05/10/when-did-majority-become-the-stablest-part-2/
|
A Research Blog
The first question we'd like to answer is this: which monotone, balanced, transitive Boolean function is the least sensitive to bit-flips? We know that Majority is the worst possible, with $NS(Maj) = \Theta(1/\sqrt{n})$.
The new champion turns out to be the Tribes function first studied by Ben-Or and Linial. $Tribes_{s,w}$ is a read-once DNF which is the OR of $s$ disjoint terms, each term is the AND of $w$ variables:
$Tribes_{s,w}(x^1_1,...,x^1_w,...,x^s_1,...,x^s_w) = \vee_{i=1}^s\left( \wedge_{j=1}^w x^i_j \right)$
By choosing $s = \Theta(2^w)$ carefully, we get a (near-)balanced, transitive, monotone function where $NS(Tribes) = \Theta(2^{-w}) = \Theta(\log n/n)$. So it beats Majority quite handily. But is this the best possible? Or can we hope to achieve sensitivity $O(1/n)$? This would be optimal, since it is easy to argue an $\Omega(1/n)$ lower bound.
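To get a feel for these rates, here is a small Monte Carlo sketch (my addition, not from the post) estimating the probability that flipping a single uniformly random bit changes the function's value, for Majority and for a crude Tribes instance with $s = 2^w$ (the post tunes $s$ more carefully so that Tribes is nearly balanced):

```python
import random

def majority(x):
    # 1 if strictly more than half the bits are 1 (n is even here, so ties go to 0)
    return 1 if 2 * sum(x) > len(x) else 0

def tribes(x, s, w):
    # OR of s disjoint ANDs over blocks of w bits
    return 1 if any(all(x[i * w:(i + 1) * w]) for i in range(s)) else 0

def bitflip_sensitivity(f, n, trials=100_000):
    # Estimate Pr[f(x) != f(x with one uniformly random bit flipped)]
    changed = 0
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        y = x[:]
        y[random.randrange(n)] ^= 1
        changed += f(x) != f(y)
    return changed / trials

w = 4
s = 2 ** w
n = s * w  # 64 variables
print("Majority:", bitflip_sensitivity(majority, n))
print("Tribes  :", bitflip_sensitivity(lambda x: tribes(x, s, w), n))
```

Already at $n = 64$ the Tribes estimate comes out below the Majority one, in line with $\Theta(\log n/n)$ versus $\Theta(1/\sqrt{n})$.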
It turns out that in fact $O(\log n/n)$ is the best possible. This follows from the celebrated result of Kahn, Kalai and Linial (KKL), which tells us that this is a lower bound on the sensitivity of any balanced Boolean function, where all variables have small influence. The Theorem is usually stated in terms of average sensitivity, which is $AS(f) = n\cdot NS(f)$. In our setting, the monotone and transitive conditions imply that no variable has influence more than $O(1/\sqrt{n})$, hence their result applies.
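For reference (my paraphrase, not the post's wording), the form of the KKL theorem being invoked says that every Boolean function $f$ on $n$ bits has a fairly influential variable,

$\max_i \mathrm{Inf}_i(f) \ge c\,\mathrm{Var}(f)\,\frac{\log n}{n},$

and the standard corollary used here is that if every $\mathrm{Inf}_i(f) \le \tau$, then $AS(f) = \sum_i \mathrm{Inf}_i(f) \ge c\,\mathrm{Var}(f)\log(1/\tau)$. With $\tau = O(1/\sqrt{n})$, as for balanced monotone transitive functions, this gives $AS(f) = \Omega(\log n)$, i.e. $NS(f) = \Omega(\log n/n)$.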
At first, Tribes might not seem as natural a function as Majority. So why does it turn out to be the least sensitive to bit-flips? For KKL to be tight, the Fourier expansion of $NS(f)$ tells us to look for a function which is concentrated on the first $O(\log n)$ levels of the Fourier spectrum. This is equivalent to saying that $f$ is well-approximated by a polynomial of degree $O(\log n)$. We know DNFs have such concentration. But why not something even simpler, like a decision tree that has (exact) degree $O(\log n)$? It turns out that such functions will invariably have some “special” co-ordinates, and hence cannot be transitive. This is a relatively recent result by O’Donnell, Saks, Schramm and Servedio, which says that every degree $d$ Boolean function has a variable with influence $\Omega(1/d)$. So we are back to CNF/DNFs and Tribes is about as simple as DNFs get.
Another natural example of a function that has low sensitivity (which I learnt from Yuval Peres): consider $n$ points on a circle. Assign a random bit to each. Define $g(x)$ to be $1$ if the longest run of $1$s has length at least $c\log n$. For a suitable choice of $c$, this function is near balanced. It is monotone, and further $NS(g) = \Theta(\log n/n)$.
So given that Tribes beats Majority so handily for bit-flips, how does it fare under $\epsilon$-noise for constant $\epsilon$? Not very well: its sensitivity approaches $0.5$. Indeed, if you are looking for a function which is maximally sensitive, Tribes is not a bad choice.
So life is yet again complicated: there is no single function that seems optimal for all settings of $\epsilon$-noise. Indeed, this is why results on hardness amplification within NP which rely on composition with monotone functions need to switch from one function to another at various points. In contrast, if we don't care about staying within NP, we can just use the XOR function.
But at the same time, perhaps a unified explanation is possible to explain the entire spectrum?
1. Is there an intuitive explanation for why the least sensitive function should look like a Hamming ball when $\epsilon$ is large, and look “Tribes-like” (Tribish? Tribal) when $\epsilon$ is small? In some sense, we got into this mess by disallowing co-ordinate functions, which are essentially sub-cubes. Are these the functions that “look” most like sub-cubes (from some strange angle)?
2. Is there an inverse result which interpolates smoothly between KKL and Majority is the Stablest? Given that these are two of the most powerful theorems in all of Boolean function analysis, it would have to be quite a theorem.
Parikshit.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 32, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9492092132568359, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/119208?sort=newest
|
## What is the expected value for this
If there are $8$ random points in the plane whose horizontal coordinate and vertical coordinate are uniformly distributed on the open interval $\left(0,1\right)$, what is the expected largest size of a subset in which the points form the vertices of a convex polygon? Thanks!
-
By the way, "size" only means cardinality here. – user0o Jan 17 at 20:14
Related: en.wikipedia.org/wiki/Happy_ending_problem – Sam Hopkins Jan 17 at 20:22
@Berlusconi: it does not need to be 16. An answer for any number less than or equal to $8$ would be highly appreciated. It may not be trivial to solve for a large number, but one can probably have some probability estimates on the angles, which largely determine the convexity. – user0o Jan 17 at 20:44
@Sam: do you know about the probability that the extremal configurations in generalized happy ending problem occur for $8$ points or less? – user0o Jan 17 at 20:49
Probability distribution not clear: Are the points independent? For a given point, are the horizontal and vertical coordinates independent? – Gerald Edgar Jan 19 at 13:35
## 1 Answer
For a specific and not very small $n$ this would be quite a messy computation.
There are results about the asymptotics for large $n$ in a paper of Ambrus and Barany http://arxiv.org/abs/0906.5452 . They consider a slightly different problem, but their methods work and the result is $c n^{1/3}$ for some computable $c$.
Note also that they compute the typical value, which is only a lower bound for the expectation. Various concentration inequalities which apply since the points are iid can be used to get the expectation as well. (Or you could try to get a large deviation estimate directly.)
-
Dear Omer, thank you for your answer, but do you know if the exponent would change for other convex regions? since they only considered a triangle. – user0o Jan 25 at 3:46
The exponent would not change. For example, take random points in a circular disk. Fit a triangle in the disk. A constant fraction of the n points falls inside the triangle (w.h.p.), and thus you get at least $c′n^{1/3}$ points in convex position; similarly, an ellipse (affine image of a disk) fits into any triangle, showing the other direction of the inequality. Any two convex shapes are related like this (after affine transformation). – Günter Rote Apr 28 at 23:15
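For a small fixed $n$ like the $n=8$ in the question, brute-force simulation is feasible. A rough Monte Carlo sketch (my addition, not from the thread), assuming SciPy is available for the convex hull check:

```python
import itertools
import numpy as np
from scipy.spatial import ConvexHull

def largest_convex_subset(pts):
    # Largest k such that some k-subset of pts is in convex position,
    # i.e. every point of the subset is a vertex of the subset's hull.
    # Brute force over subsets is fine for n = 8.
    n = len(pts)
    for k in range(n, 3, -1):
        for idx in itertools.combinations(range(n), k):
            if len(ConvexHull(pts[list(idx)]).vertices) == k:
                return k
    return 3  # any 3 points in general position are in convex position

rng = np.random.default_rng(0)
sizes = [largest_convex_subset(rng.random((8, 2))) for _ in range(1000)]
print("estimated expected size:", np.mean(sizes))
```

This ignores degenerate (collinear) configurations, which occur with probability zero for continuous coordinates.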
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8923681378364563, "perplexity_flag": "head"}
|
http://dsp.stackexchange.com/questions/8021/iir-filter-design-in-digital-domain-using-magnitude-squared
|
# IIR filter design in digital domain using Magnitude Squared
Does anyone have any good references for deriving parameters of an IIR Low pass/High Pass filter directly in the digital domain using the magnitude squared at the corner frequency?
I have been able to derive the parameters of a first order low/high pass filter with 3 dB attenuation at the corner frequency, i.e. calculating k and alpha in:
````H(Z) = k(1+z^-1)/(1-alpha*z^-1)
````
My issue is that I distinctly remember deriving the parameters using a 6 dB attenuation at the corner frequency in a DSP course I have done previously, but I have forgotten the trigonometric identities used to finish the derivation.
The general procedure is as follows: 1) let w = 0 (low pass) or w = pi (high pass) to calculate the gain term k such that there is 0 dB gain at that frequency; 2) calculate the magnitude squared at the corner frequency to obtain a value for alpha in terms of the corner frequency.
The problem may be that it should be a second order filter or I am recalling the method for a band pass/stop filter but I'm not sure and it appears this method is not used very often except in the case of band pass/stop filters for parametric EQ.
I hope the question is clear and I will try to improve the structure with the responses so it will be useful for others. Any help will be appreciated.
-
## 1 Answer
To solve the case that you mentioned...
You have 2 variables to determine, so you need two relationships to resolve the two variables. I'm going to use $k$ and $a$ as the variables to make this easy to type up.
$H(z) = k\,\frac{1 + z^{-1}}{1 - a z^{-1}}$
Start by considering the passband gain. Use $f = 0$ for this. Assume you want unity gain at $f_0=0$.
Assume: $H(f_0) = 1, f_0 = 0$
Substitute $e^{i2\pi f/fs}$ for z, fs is your sampling rate, set $f = 0$ and solve for $k$ to satisfy $H(f_0) = 1$
From this you get $k = \frac{(1-a)}{2}$
Now work on the gain squared at your desired corner frequency ($f_c$) to determine $a$.
$H(f_c) = -3$dB (magnitude squared will be -6dB as you've stated)
We'll work with the magnitude squared at $f_c$ and set the gain to 1/2 (-6dB).
$|H(f_c)|^2 = \frac{1}{2}$
This time substitute $e^{i2\pi fc/fs}$ for z.
To simplify the arithmetic you can solve this equation:
$$\left(\frac{|H(f_0)|}{|H(f_c)|}\right)^2 = 2$$
This eliminates the factor $k$.
You will end up with a quadratic relationship in $a$. Solving for $a$ yields:
$a = \dfrac{1 - \sqrt{1-\cos^2\left(2\pi\frac{f_c}{f_s}\right)}}{\cos\left(2\pi\frac{f_c}{f_s}\right)}$
Simplify this to:
$a = \dfrac{1 - \sin\left(2\pi\frac{f_c}{f_s}\right)}{\cos\left(2\pi\frac{f_c}{f_s}\right)}$
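A quick numerical check of these formulas (my addition, with made-up example values of $f_c$ and $f_s$), assuming NumPy:

```python
import numpy as np

def first_order_lowpass(fc, fs):
    # Coefficients for H(z) = k (1 + z^-1) / (1 - a z^-1):
    # unity gain at DC and |H|^2 = 1/2 at the corner frequency fc.
    wc = 2 * np.pi * fc / fs
    a = (1 - np.sin(wc)) / np.cos(wc)
    k = (1 - a) / 2
    return k, a

def gain_db(k, a, f, fs):
    z = np.exp(1j * 2 * np.pi * f / fs)
    H = k * (1 + 1 / z) / (1 - a / z)
    return 20 * np.log10(abs(H))

k, a = first_order_lowpass(fc=1000.0, fs=48000.0)
print(gain_db(k, a, 0.0, 48000.0))     # ~0 dB at DC
print(gain_db(k, a, 1000.0, 48000.0))  # ~-3 dB at the corner
```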
-
@Peter k Thanks Peter! – B Z Feb 28 at 18:18
You're welcome! :-) I just have OCD when it comes to math formatting. :-) – Peter K.♦ Feb 28 at 19:11
I have to get my posts in quickly, so I don't always get things the way I want them. My wife was glaring at me last night when she noticed I was posting some geeky thing on the internet instead of watching "The Bachelor" with her. – B Z Feb 28 at 20:26
Thank you for the response and the detailed derivation. I had the same result for the 3dB case, however, when I mentioned the 6dB attenuation I was referring to the design specification such that magnitude squared will have 12 dB attenuation. It is more than likely the lecturer plotted the magnitude squared and this is where the confusion has set in. I'm happy enough that this is the case and will use a -3db corner frequency. Cheers – melinnde Feb 28 at 23:03
I see, I missed the original point. I would agree that the prof was probably using mag squared to simplify the math. 3dB is generally used because it is a corner frequency. 6dB is well into the filter transition region, which is less well defined. – B Z Mar 1 at 12:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9437698721885681, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/28883/what-can-be-tiled-by-t-tetrominoes
|
What can be tiled by T-tetrominoes?
The T-tetromino is a T-shaped figure made of four unit squares. An $m\times n$ rectangle can be tiled by T-tetrominoes if and only if both $m$ and $n$ are multiples of 4. This was proved in a 1965 paper by D.W.Walkup, and the proof was "hands on".
Some "algebraic" tricks like colouring or tiling groups can prove that $mn$ must be a multiple of 8, but they do not seem to rule out the cases like $99\times 200$ and $100\times 102$.
I wonder whether a better proof of D.W.Walkup's theorem is known today. By "better" I mean applicable to non-rectangular regions as well. For example, is there a way to determine what 6-gons (8-gons, ...) admit tiling by T-tetrominoes?
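For experimenting with small regions (not an answer to the structural question, just my addition), a brute-force backtracking tiler is easy to write; the small cases of Walkup's theorem can be checked directly with something like:

```python
def t_placements(m, n):
    # All placements of a T-tetromino (4 rotations) on an m x n board,
    # each given as a frozenset of covered (row, col) cells.
    shapes = [
        [(0, 0), (0, 1), (0, 2), (1, 1)],  # stem pointing down
        [(0, 0), (1, 0), (2, 0), (1, 1)],  # stem pointing right
        [(1, 0), (1, 1), (1, 2), (0, 1)],  # stem pointing up
        [(0, 1), (1, 1), (2, 1), (1, 0)],  # stem pointing left
    ]
    out = []
    for shape in shapes:
        for r in range(m):
            for c in range(n):
                cells = [(r + dr, c + dc) for dr, dc in shape]
                if all(0 <= a < m and 0 <= b < n for a, b in cells):
                    out.append(frozenset(cells))
    return out

def tileable(m, n):
    # Backtracking: always cover the first empty cell in row-major order.
    if (m * n) % 4:
        return False
    placements = t_placements(m, n)
    def solve(empty):
        if not empty:
            return True
        cell = min(empty)
        for p in placements:
            if cell in p and p <= empty and solve(empty - p):
                return True
        return False
    return solve(frozenset((r, c) for r in range(m) for c in range(n)))

print(tileable(4, 4))   # True
print(tileable(4, 6))   # False: 6 is not a multiple of 4
print(tileable(8, 4))   # True
```

For anything beyond small rectangles this is exponential, of course; it is only meant for checking small candidate 6-gons or 8-gons by hand.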
-
The paper "Tiling rectangles with T-tetrominoes" by Korn and Pak seems to present some progress for non-rectangular simply-connected regions, mainly Theorem 11 in section 8. It doesn't seem like they have a complete answer though. citeseerx.ist.psu.edu/viewdoc/… – Alon Amit Jun 20 2010 at 21:57
See also Korn's thesis, "Geometric and algebraic properties of polyomino tilings". – Gjergji Zaimi Jun 21 2010 at 1:19
1 Answer
There are only partial answers to this question. First, one can prove that Walkup's result cannot be proved using coloring arguments (I think I did this in New horizons paper, but the setting is formalized in the Ribbon tile invariants paper). Second, Walkup's proof uses an easy induction argument, and it extends to regions with sides multiples of 4. Third, I am pretty sure you can classify all 6- and 8-gons tileable by T-tetrominoes. This won't be conceptual. Why do it then?
Now, motivated by the quest to find a better proof, I made a "local move connectivity" conjecture saying that every two T-tetromino tilings of a simply connected region are connected by a series of moves involving either two T-tetrominoes or four T-tetrominoes (forming a $4\times 4$ square). Usually, the "conceptual proof" comes from some kind of height function argument which also proves the local move connectivity. Now, Mike Korn in his thesis disproved this by a simple construction. One can ask if the Conway group approach in full generality can prove something like what you are asking. You need to compute $F_2/\langle tile~words\rangle$ (see Conway-Lagarias paper, "New horizons" or Korn's thesis). We did not do that, but I won't be very optimistic - it is a bit of a miracle when this approach works out.
Mike and I were still able to prove the conjecture (by a height function argument) for rectangles and the above mentioned 4-multiple regions, but that proof assumes Walkup's theorem. Independently this was established by Makarychev brothers, using a related but somewhat different argument (in Russian, based on connection to the six-vertex model). In fact, in a followup paper we use Walkup's theorem as a definition of the graphs in which the number of claw partitions is "nice". Anyway, hope this helps.
UPDATE: I just remembered that Michael Reid also did the T-tetromino computation (as well as many other computations) here.
-
Do any of these results provide new obstructions for tilings (as opposed to studying the structure of the set of possible tilings of some regions)? Is there any non-rectangular region which is proved to be non-tilable, but not for colouring reason and not because of some local obstruction (like "there is only one way to cover this square, but then no way to cover that")? – Sergei Ivanov Jun 21 2010 at 22:02
If the question is whether one can extend Walkup's result to "nice" non-rectangular regions, the answer is yes, but only by using his idea again. If the question is whether our local connectivity results imply any non-tileability results, the answer is also yes, but the regions would be rather ugly and actually hard to construct. If the question is whether one should hope for a general algebraic approach, the answer is probably no, or at least that's my intuition. – Igor Pak Jun 22 2010 at 0:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.944365382194519, "perplexity_flag": "middle"}
|
http://nanohub.org/topics/RoleofNormalProcessesinThermalConductivityofSilicon?task=compare&oldid=8&diff=10
|
## Role of Normal Processes in Thermal Conductivity of Silicon
| [[File(JamesAndPrabhakar_ProjectReport.pdf)]] | | | |
| | | | |
| == Role of Normal Processes in Thermal Conductivity of Silicon == | | | |
| | | | |
| James Loy, Prabhakar Marepalli | | | |
| | | | |
| ME 503 Final Project Report | | | |
| | | | |
| | | | |
| '''Abstract''' | | | |
| ''In the past decade, with the miniaturization of electronic circuits and growing interest in microscale heat transfer, Boltzmann transport equation (BTE) emerged as an important model for predicting the thermal behavior at small scales. Recent advancements in numerical methods and computational power enabled to solve this equation rigorously by relaxing several assumptions. But the scattering term of BTE is incredibly complex, still requiring some simplifications to make the solution possible while preserving relevant physics. One approximation is the single mode relaxation time (SMRT) approximation. This approximation assumes that each phonon scatters only to a lattice equilibrium energy. For many materials near equilibrium and at high enough temperatures, this works well. However, for low temperatures, and for low dimensional materials like graphene, this model fails to include phonon scattering due to normal processes, which play an indirect role in thermal resistance. In this project, we include normal phonon scattering in the BTE through the use of a shifted equilibrium distribution proposed by Callaway. We solve the non-gray BTE using finite volume method (FVM) with coupled ordinates method (COMET) to compute the thermal conductivity and temperature distribution of silicon at different temperatures and length scales. We implement full phonon dispersion for all polarizations under isotropic assumption. The effect of including normal processes on the thermal conductivity predictions is rigorously analyzed. Our results show that ignoring the normal process overpredicts the thermal conductivity – which is a physically intuitive result. We also observe that the thermal conductivity increases and trends towards an asymptotic value as the length scale increases. By analyzing the temperature distribution, we also show that inclusion of normal processes diffuses the energy across the material – which is an expected result.'' | | | |
| | | | |
| | | | |
| == 1. Introduction== | | | |
| | | | |
With the increasing miniaturization of integrated electronic circuits (ICs) following Moore's law [1], several challenges pop up in trying to keep up with the trend. One of the major bottlenecks is the high-density, localized heat generation in ICs that impairs device performance. In order to better understand the thermal behavior at those scales, it is essential to create robust models that can accurately predict device failure. As device sizes in ICs these days are less than 32 nm, where ballistic behavior dominates, the Fourier law cannot be used to make accurate predictions. One of the widely used alternatives is to use the Boltzmann transport equation (BTE), which models the phonon distribution – phonons being the major heat carriers in semiconductors – for given macroscopic device conditions. A phonon is a quantum of lattice vibration which is treated as a particle in the BTE.
| | | | |
| Several researchers have developed simplified versions of BTE making intuitive assumptions to make it analytically or numerically solvable. Using these assumptions they were able to predict the thermal behavior of various materials. One of the most celebrated applications of BTE developed in early 1950s is to compute the thermal conductivity of a given material as a function of temperature, composition, and geometry, etc. [2], [3]. These analyses consider various phonon scattering mechanisms responsible for the thermal conductivity of a given material and models the effect of these scattering rates on the phonon distribution. Widely considered scattering mechanisms are isotopic scattering, boundary scattering, and three-phonon scattering (Umklapp (U) and Normal (N) scattering). Isotopic scattering is caused due to the fact that any given material, by nature, has various isotopic compositions in it. Boundary scattering is dominant at low temperatures and length scales where the phonons hit the physical boundary of the material thereby causing resistance to heat flow. Three-phonon scattering takes place when three phonons interact and results in frequency modification. This process is also called intrinsic or inelastic scattering which occurs due to the anharmonic nature of interatomic potential. These mechanisms explain why even perfectly pure crystals do not have infinite thermal conductivity (In most of the modeling procedures the interatomic potential is assumed harmonic which fail to show the three-phonon scattering). | | | |
| | | | |
| Over the last decade, with the improvement in computational power and numerical methods, many assumptions are relaxed and the BTE is solved more rigorously using entire phonon dispersion and relaxation time approximation [4]. Common methods to obtain relaxation rates include fitting the rate expressions to experimental values or perturbation theory [2], [5]. While the influence of all scattering terms on overall thermal conductivity at different temperatures and geometries has been well analyzed, the three-phonon N-processes have been neglected in most of the computations. The reason for neglecting normal processes is the premise that they conserve phonon momentum and hence do not offer thermal resistance. While the above fact is partially true, N-processes populate phonons in that region that can participate in U-processes thereby indirectly contributing to the overall thermal conductivity. Neglecting N-processes still provided reasonable comparison with experiment for materials like Si and Ge because these are 3D materials and the number of phonons that participate in U-processes are comparably higher than N-process phonons. But in case of lower dimensional (2D) materials like graphene, the population of phonons in large-wavevector region is very small and neglecting N-process provides false prediction of diverging thermal conductivity. This is because of the fact that the scarce phonon population makes them travel ballistically without many collisions. On the other hand, it was shown in [6] that on including N-processes the conductivity asymptotes to a constant value. | | | |
| | | | |
| In this paper, we simulate the thermal conductivity of Silicon by including N-processes with full BTE model using relaxation time approximation and full phonon dispersion. N-process scattering formulation developed by Callaway [2] is used by strictly enforcing momentum conservation for N-process phonons and energy conservation for N, and U process phonons. We only consider isotopic and three-phonon scattering mechanisms in this project. The simulation is performed on Silicon owing to easy availability of its dispersion curves and relaxation rates. Isotropic assumption is made in k-space which is reasonable for Si. In our simulation, we solve the energy form of non-gray BTE [7] simultaneously with overall energy conservation (N+U processes) to extract energy distribution and temperatures respectively in a coupled fashion. This provides quick convergence compared to sequential solution of these equations (non-gray BTE and energy conservation). Then we perform a detailed study of the effect of temperature, geometry, and mainly N-process scattering on the overall bulk thermal conductivity of Si. | | | |
| | | | |
| We organize the rest of the paper as follows. In Section 2, we explain the physics of Normal scattering processes and discuss the situations when their effect would be significant. In Section 3, we provide a literature review of different models and assumptions used to simulate and analyze the effect of N-processes. In Section 4, we use Callaway’s thermal conductivity model to make a first order prediction of thermal conductivity of silicon. Here we briefly discuss the assumptions made in the model and their implications. In Section 5, we discuss the numerical method and solution procedure we used for simulating the thermal conductivity. Here we provide a detailed description of equations involved and the formulation used. In Section 6, we present and discuss the results of our simulations. We conclude in Section 7 by providing a direction for possible future work. | | | |
| | | | |
| | | | |
| ==2. Normal process scattering == | | | |
| | | | |
| '''Three-phonon scattering''' | | | |
| | | | |
| A three phonon scattering process results in frequency modification of the resultant phonons. They are also called as inelastic scattering events. These processes can be described by the energy and momentum relations shown in Figure 1 [8]. As shown in the figure, these processes can be classified into Normal and Umklapp processes. A Normal process conserves energy and momentum whereas Umklapp process only conserves energy. Another illustration in Figure 2 shows why U-processes do not conserve momentum. The Brillouin zone of the given material is shown in gray. Incoming phonons of wave vector k1 and k2 combine to form a single phonon of wave vector k3. The left part of the figure shows N-process in which the resultant phonon lies inside the Brillouin zone; whereas the resultant phonon for U-process (figure on the right side) has such a high wave vector that it is knocked out of the Brillouin zone. By mapping it back into Brillouin zone using reciprocal lattice vector G, we can see that the resultant phonon of wave vector k3 is in the direction opposite to that of k1 and k2. This explains why U-processes impede phonon momentum and thereby the heat flow. On the other hand, as N-processes do not impede phonon momentum they do not impede the heat flow directly. But they contribute indirectly by redistributing overall phonon population which can further participate in U-processes. | | | |
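In symbols (my restatement of the selection rules illustrated in Figure 1, for the combination process $1 + 2 \to 3$):

$\hbar\omega_1 + \hbar\omega_2 = \hbar\omega_3, \qquad \mathbf{k}_1 + \mathbf{k}_2 = \mathbf{k}_3 \ \text{(N process)}, \qquad \mathbf{k}_1 + \mathbf{k}_2 = \mathbf{k}_3 + \mathbf{G} \ \text{(U process)},$

where $\mathbf{G}$ is a non-zero reciprocal lattice vector.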
| | | | |
| [[Image(three_phonon_events.bmp, 600px)]] | | | |
| | | | |
| '''Figure 1''': Three-phonon scattering events [8] | | | |
| | | | |
| | | | |
| [[Image(Phonon_nu_process.png, 600px)]] | | | |
| | | | |
| '''Figure 2''': Illustration of momentum conservation by N and U processes [Wikipedia] | | | |
| | | | |
| '''Importance of N-process''' | | | |
| | | | |
As discussed earlier, given the nature of N-processes it is worthwhile to think of situations when the N-process contribution is indeed significant. First, we consider how low-frequency modes interact with high-frequency modes near the Brillouin zone boundary. Considering the selection rules (see Figure 1) of U-processes, only a mode above some minimum frequency ωi can participate in them [9]. This prohibits the direct interaction of low-frequency modes with those at the Brillouin zone boundary. But intuitively we know that these modes should somehow contribute to thermal resistance. This can be explained by the premise that N-processes involving these low-frequency modes generate modes at or above this minimum frequency ωi, which can then participate in U-processes, thereby providing thermal resistance. Another observation on the importance of N-processes is discussed in [6], where it is shown that, without N-processes, graphene's thermal conductivity diverges with increasing flake diameter, thereby exhibiting length dependence, whereas including N-processes makes the conductivity asymptote to a constant value.
| | | | |
| | | | |
== 3. Literature review of Normal processes ==
| | | | |
| In this section, we provide a brief literature review of the N-process analysis and their findings. We begin with Callaway’s phenomenological model for lattice thermal conductivity at low temperatures [2]. In this work, Callaway uses a relaxation time approximation for the scattering term of BTE, and assumes that all momentum destroying processes (isotopic, boundary, and Umklapp scattering) tend toward an equilibrium Planck distribution, whereas N processes lead to a displaced Planck distribution. Using this approximation, the scattering terms can be written as: | | | |
| | | | |
| | | | |
$\left(\frac{\partial N}{\partial t}\right)_{c}=\frac{N(\lambda)-N}{\tau_{N}}+\frac{N_{0}-N}{\tau_{u}} \qquad (1)$
| | | | |
where $N$ is the distribution function, $\tau_N$ is the relaxation time for all normal processes, $\tau_u$ is the relaxation time for all other momentum-destroying processes, $N_0$ is the equilibrium Planck distribution, and $N(\lambda)$ is the displaced Planck distribution defined as
| | | | |
| | | | |
$N(\lambda)=\left[\exp\left(\frac{\hbar\omega-\lambda\cdot\mathbf{k}}{k_{B}T}\right)-1\right]^{-1} \qquad (2)$
| | | | |
The term $\lambda$ is along the direction of the temperature gradient and defines the amount of energy redistributed by N-processes. He computed $\lambda$ by enforcing momentum conservation for all N-process phonons using
| | | | |
| | | | |
$\int\left(\frac{\partial N}{\partial t}\right)_{N}\mathbf{k}\,d^{3}k=\int\frac{N(\lambda)-N}{\tau_{N}}\,\mathbf{k}\,d^{3}k=0 \qquad (3)$
| | | | |
Using the assumptions that only acoustic phonons contribute to the thermal conductivity, that all acoustic modes can be averaged using a single group velocity, and that the relaxation rates for all processes can be expressed as functions of frequency and temperature, he computed a simplified expression for the thermal conductivity as
| | | | |
| | | | |
| | | | |
$k=\frac{k_{B}}{2\pi^{2}c}\left(I_{1}+\beta I_{2}\right) \qquad (4)$

$I_{1}=\int_{0}^{k_{B}\Theta/\hbar}\tau_{c}\,\frac{\hbar^{2}\omega^{2}}{k_{B}^{2}T^{2}}\,\frac{e^{\hbar\omega/k_{B}T}}{\left(e^{\hbar\omega/k_{B}T}-1\right)^{2}}\,\omega^{2}\,d\omega$

$I_{2}=\int_{0}^{k_{B}\Theta/\hbar}\frac{\tau_{c}}{\tau_{N}}\,\frac{\hbar^{2}\omega^{2}}{k_{B}^{2}T^{2}}\,\frac{e^{\hbar\omega/k_{B}T}}{\left(e^{\hbar\omega/k_{B}T}-1\right)^{2}}\,\omega^{2}\,d\omega$
| | | | |
where $\tau_c$ is the combined relaxation time given by $\tau_{c}^{-1}=\tau_{N}^{-1}+\tau_{u}^{-1}$, $\omega$ is the frequency, $T$ is the temperature, and $\hbar$ is the reduced Planck constant.
| | | | |
| Using the above expression, he computed the thermal conductivity of germanium and compared it to various experimental observations to obtain fitting constants for relaxation rates. Once the constants are obtained, he found a striking match between theory and experiments. These constants are then used to examine the effects of individual processes. The second term in the thermal conductivity expression is commonly referred as correction term that accounts for correction due to N-processes. He computed that the value of this term is 10% of the overall conductivity in case of a pure germanium sample. This model for thermal conductivity has been rigorously used to compute thermal conductivities for various materials and found to provide good match at low temperatures [3], [9]. | | | |
| | | | |
One of the papers that made improvements to Callaway's model is Armstrong's paper using a two-fluid model [9]. In this work Armstrong divides phonons into two groups – propagating and reservoir modes. He assumes that both low- and high-frequency phonons participate in N-processes; including the high-frequency phonons accounts for the fact that they can also take part in N-processes by splitting into phonons of lower frequencies. Instead of Callaway's single displaced Planck distribution, he used two displaced distribution functions for N-process phonons, because the high-frequency phonons participating in N-processes cannot equilibrate to the very low value of the distribution function that low-frequency phonons equilibrate to. He also considers the effect of different polarizations. Using these assumptions he solves the modified BTE, called the Boltzmann-Peierls equation for phonons, written as
| | | | |
| | | | |
$-\frac{N_{q}-N(\beta)}{\tau_{NN}}-\frac{N_{q}-N_{0}}{\tau_{R}}=\mathbf{c}_{q}\cdot\nabla T\,\frac{dN_{q}}{dT} \qquad (5)$
| | | | |
where $N_{0}$ is the equilibrium distribution, $N_{q}$ is the phonon occupation number, $N(\beta)$ is the displaced distribution, $\mathbf{c}_{q}$ is the group velocity of a given polarization, $\tau_{NN}$ and $\tau_{R}$ are the characteristic relaxation times, and $\nabla T$ is the temperature gradient across the material.
| | | | |
| Recently, several rigorous computational simulations have been performed by relaxing many assumptions that were made to simplify the equation. For example, Yunfei Chen et al., [10] used Monte Carlo (MC) simulation to solve the BTE to compute the thermal conductivity of silicon nanowire. In this work, N-processes are included using genetic algorithm to generate the phonons that satisfy momentum conservation and tend towards displaced equilibrium distribution defined by Callaway. | | | |
| | | | |
| | | | |
| | | | |
== 4. Thermal conductivity of Silicon using Callaway's analysis ==
| | | | |
| As a first approximation to our simulation, we start with applying Callaway’s model to compute the thermal conductivity of silicon. Both the terms of thermal conductivity expression are retained. We use the same relaxation rate expressions used by Callaway given as | | | |
| | | | |
| | | | |
$\tau_{u}^{-1}=A\omega^{4}+B_{1}T^{3}\omega^{2}+c/LF \qquad (6)$
| | | | |
where $A\omega^{4}$ represents isotopic scattering; $B_{1}T^{3}\omega^{2}$ includes the Umklapp processes, with $B_{1}$ containing the exponential temperature factor $e^{-\Theta/aT}$ where $\Theta$ is the Debye temperature; and $c/LF$ represents the boundary scattering, with $F$ being the correction factor due to both the smoothness of the surface and the finite length-to-thickness ratio of the sample [3]. For N-processes we use $\tau_{N}^{-1}=B_{2}T^{3}\omega^{2}$. The combined relaxation rate is then defined as
| | | | |
| | | | |
$\tau_{c}^{-1}=A\omega^{4}+\left(B_{1}+B_{2}\right)T^{3}\omega^{2}+c/LF \qquad (7)$
| | | | |
The above relaxation rates are used in the thermal conductivity expression to fit the constants $A$, $(B_1 + B_2)$, and $F$. Though ''B,,1,,'' implicitly has a temperature dependence, we neglect that dependence while fitting. Using the experimental results shown in Table 1 [11], [12], we obtain the following values for the fitting constants:

''A'' = 0.22e-44 sec^3^

''B,,1,, + B,,2,,'' = 2.9e-24 sec/deg^3^

''F'' = 0.8

||T (K)||k (W/cmK)||T (K)||k (W/cmK)||
||2||0.44||200||2.66||
||4||3.11||300||1.56||
||6||8.99||400||1.05||
||8||16.4||500||0.80||
||10||24.0||600||0.64||
||20||47.7||700||0.52||
||30||44.2||800||0.43||
||40||36.6||900||0.36||
||50||28.0||1000||0.31||
||100||9.13||1100||0.28||
||150||4.10|| || ||

'''Table 1''': Thermal conductivity measurements of silicon [11], [12].

We compare the thermal conductivity obtained by substituting the fitted constants into equation (1.4) and observe good agreement with experiment at low temperatures, see Figure 3. The fitted equation is helpful for quickly investigating the effects of individual scattering events. For example, we can fix all the other constants and vary $A$ alone to examine how the thermal conductivity changes with isotope scattering (a rough numerical sketch of this use is given after Figure 3). The value of $A$ for a given isotopic concentration can be found from Klemens' expression [13] for isotopic scattering. This is how several researchers exploit this equation to better understand the thermal conductivity behavior of different materials.

[[Image(Callaways_model_fitting.png)]]

'''Figure 3''': Callaway’s model for thermal conductivity. The constants of the relaxation rates are fitted to match the model with experiment at low temperatures.

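As a rough illustration of this use of the fitted expression, the sketch below evaluates the first (dominant) term of Callaway's conductivity integral with the fitted constants quoted above. The sound velocity, effective sample dimension, and Debye temperature are assumed values that are not stated in this section, so the output should be read as an order-of-magnitude check only.

```python
import numpy as np
from scipy.integrate import quad

kB = 1.380649e-23        # J/K
hbar = 1.054571817e-34   # J*s

# Fitted constants quoted above
A = 0.22e-44             # s^3, isotope scattering
B = 2.9e-24              # s/K^3, combined (B1 + B2) phonon-phonon term
F = 0.8                  # boundary correction factor
# Assumed parameters (not given in the text)
v = 6.4e3                # m/s, average acoustic velocity of Si
L = 7.16e-3              # m, effective sample dimension
theta_D = 645.0          # K, Debye temperature of Si

def tau_c_inv(omega, T):
    """Combined relaxation rate of Eq. (7): isotope + (N+U) + boundary."""
    return A * omega**4 + B * T**3 * omega**2 + v / (L * F)

def kappa1(T):
    """First term of Callaway's conductivity (single Debye-type branch)."""
    def integrand(x):                     # x = hbar*omega / (kB*T)
        omega = x * kB * T / hbar
        return x**4 * np.exp(x) / np.expm1(x)**2 / tau_c_inv(omega, T)
    val, _ = quad(integrand, 1e-8, theta_D / T)
    return kB / (2 * np.pi**2 * v) * (kB * T / hbar)**3 * val

for T in (10.0, 100.0, 300.0):
    print(T, kappa1(T))                   # W/(m K), order of magnitude only
```

Varying $A$ while holding the other constants fixed then mimics a change in isotopic concentration.
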
While the above means of using Callaway’s expression would be extremely helpful, our focus in this project is to better understand the model itself. A short discussion of the correction term and the justification for neglecting it might shed some light in this direction. As we discussed earlier, Callaway used a combined relaxation rate $\tau_c^{-1} = \tau_N^{-1} + \tau_u^{-1}$ in his derivation. But since N-processes do not actually contribute to thermal resistance, using $\tau_c$ alone underpredicts the thermal conductivity. Most of the time the underprediction is very small and can be safely neglected, but in some cases the prediction may differ by around 20%. An example is when isotopic scattering is absent: in that case the thermal conductivities predicted with and without the correction term differ by more than 20% of the overall value. This can be seen qualitatively in Figure 4 (note that the y-axis is logarithmic). The correction term thus compensates for the fact that using $\tau_c$ alone attributes too much resistance to the conductivity.

[[Image(Callaways_model_with_and_wo_correction.png)]]

'''Figure 4''': Meaning of the correction term in Callaway’s model for thermal conductivity. The correction term compensates for the fact that the overall relaxation rate used in Callaway’s model underpredicts the thermal conductivity.

'''5. Numerical Method'''

All of the governing equations are solved using the finite volume method [14]. The BTE is a partial differential equation involving two vector spaces (physical space and wave vector space), so we must discretize both spaces accordingly. This leads to a linear system with at least one equation per physical-space control volume for each wave-vector-space control volume.

The physical domain is discretized into arbitrary convex polyhedra. Here we have chosen a square domain, which can easily be discretized using a non-uniform structured grid, shown below:

[[Image(80x80grid.jpg,400px)]]

'''Figure 5''': 80x80 grid used as the physical domain. The side length is unity and is scaled to the appropriate domain size.

To discretize the BTE, we integrate over a finite control volume in physical space and wave vector space and then apply the divergence theorem to the convective operator, as shown below:

$\int\limits_{\Delta \mathbf{K}}\int\limits_{\Delta V} \nabla \cdot \left( \mathbf{v} e'' \right) dV\, d^3\mathbf{K} = \int\limits_{\Delta \mathbf{K}}\int\limits_{\Delta A} e'' \, \mathbf{v} \cdot d\mathbf{A}\, d^3\mathbf{K} = \int\limits_{\Delta \mathbf{K}}\int\limits_{\Delta V} \left( \frac{e^0 - e''}{\tau_U} + \frac{e_\lambda^0 - e''}{\tau_N} \right) dV\, d^3\mathbf{K}$ (8)

We now apply our discretization. In the wave vector space, a central difference approximation is made on both sides of the equation, thus the finite volume of k-space appears on both sides and is dropped. We arrive at the following discrete equation set:

$\sum\limits_{f} e_f'' \, \mathbf{v} \cdot \Delta \mathbf{A}_f + e'' \Delta V \left( \tau_N^{-1} + \tau_U^{-1} \right) - e^0 \frac{\Delta V}{\tau_U} - e_\lambda^0 \frac{\Delta V}{\tau_N} = 0$ (9)

The energy conservation equation can be rearranged as follows:

$\int \frac{e^0}{\tau_U} \, d^3\mathbf{K} = \int \frac{e''}{\tau_U} \, d^3\mathbf{K} - \int \frac{e_\lambda^0 - e''}{\tau_N} \, d^3\mathbf{K}$ (10)

When we apply a second-order finite volume discretization we arrive at the following:

$\sum e^0 \frac{\Delta \mathbf{K}}{\tau_U} = \sum e'' \left( \tau_N^{-1} + \tau_U^{-1} \right) \Delta \mathbf{K} - \sum e_\lambda^0 \frac{\Delta \mathbf{K}}{\tau_N}$ (11)

Because of the tight inter-equation coupling caused by the in-scattering term of the BTE, it is advantageous to visit each physical cell and solve all points in wave vector space together with the energy conservation equation in a coupled fashion. This procedure is very similar to that shown in [15], so we have adopted the same name, the Coupled Ordinates METhod (COMET). In COMET, success hinges on the ability to solve all the BTEs and the energy conservation equation simultaneously. To do this, we must identify the common variable which couples all of these equations. The lattice temperature appears in the equilibrium distribution function, albeit nonlinearly. To extract a lattice temperature equation, we simply linearize the equilibrium function using a Taylor series expansion:

$e^{0,new} = e^{0,old} + \left( T^{new} - T^{old} \right) \left( \frac{\partial e^0}{\partial T} \right)^{old}$ (12)

The derivative used here is a very familiar quantity: the specific heat of the given phonon frequency at the previous iteration's temperature. For this procedure, we construct a matrix which solves for the corrections to the previous values of $e''$ and ''T''. In this way, the residuals of the previous iteration act as the source for the correction equations. With this, the BTE and the energy conservation equations become:

$\begin{align}
& \sum\limits_{f} \Delta e_f'' \, \mathbf{v} \cdot \Delta \mathbf{A}_f + \Delta e'' \Delta V \left( \tau_N^{-1} + \tau_U^{-1} \right) - \Delta T \frac{\Delta V}{\tau_U} \left( \frac{\partial e^0}{\partial T} \right)^{old} = R_{BTE} \\
& -\Upsilon \sum \Delta e'' \left( \tau_N^{-1} + \tau_U^{-1} \right) \Delta \mathbf{K} + \Delta T = R_{energy} \\
& \Upsilon^{-1} = \sum \left( \frac{\partial e^0}{\partial T} \right)^{old} \frac{\Delta \mathbf{K}}{\tau_U}
\end{align}$ (13)

where $\Delta e$ is $e^{new} - e^{old}$, with ''old'' and ''new'' representing the previous and current iteration values of the simulation. ''R,,BTE,,'' is the residual of the BTE, while ''R,,energy,,'' is the residual of the energy equation. These equations form a linear system of order $N_K + 1$, where $N_K$ is the number of points in wave vector space. The shape of the matrix is especially convenient, forming an arrowhead pointing to the lower right (all diagonals populated, last row populated, last column populated), which can be solved directly in ''O(N)'' operations.
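The O(N) solve is essentially elimination of the single dense row and column. As an illustration only (this is a minimal sketch, not the solver used in this work), an arrowhead system can be solved via the Schur complement of the diagonal block:

```python
import numpy as np

def solve_arrowhead(d, col, row, corner, b):
    """Solve an arrowhead system in O(N):
        [ diag(d)   col    ] [x]   [ b[:-1] ]
        [ row^T     corner ] [y] = [ b[-1]  ]
    i.e. all diagonals, last row and last column populated."""
    dinv_b = b[:-1] / d                 # diagonal solve against the RHS
    dinv_c = col / d                    # diagonal solve against the last column
    y = (b[-1] - row @ dinv_b) / (corner - row @ dinv_c)   # Schur complement
    x = dinv_b - dinv_c * y
    return np.append(x, y)

# quick check against a dense solve
rng = np.random.default_rng(0)
n = 6
d = rng.uniform(1.0, 2.0, n)
col = rng.normal(size=n)
row = rng.normal(size=n)
corner = 5.0
b = rng.normal(size=n + 1)
A = np.block([[np.diag(d), col[:, None]],
              [row[None, :], np.array([[corner]])]])
assert np.allclose(A @ solve_arrowhead(d, col, row, corner, b), b)
```

Because only the diagonal, the last row, and the last column need to be stored, both memory and work stay linear in the number of wave-vector points.
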
To keep the advantageous shape of the arrowhead matrix, we do not include the momentum conservation equation in the coupled solve; the shifted equilibrium value used is the prevailing one. After the BTE and the energy conservation equation are solved, we use the new value of the lattice temperature to calculate the lambda vector, which we then use to update the shifted equilibrium energy.

The solution procedure for COMET begins with the initialization of all values. The first cell is visited, where we solve the coupled BTE and energy conservation equations. We then use the new lattice temperature to solve the momentum conservation equation, giving a new lambda vector with which we update the shifted equilibrium energy. Each cell is visited in turn. The residuals are collected for each cell, and the heat balance is assessed. If the residual has reached the prescribed tolerance and the heat balance is sufficiently satisfied, the procedure exits; otherwise it begins again. A visual representation of the solution procedure is shown below.

[[Image(flowchartCOMET.jpg)]]

'''Figure 6''': Flow chart for the COMET solution procedure.

'''Simulation Details'''

We use an isotropic Brillouin zone, with the dispersion of silicon in the [100] direction. The dispersion is taken from the environment-dependent interatomic potential [16]; a plot of the dispersion is shown in Figure 7. To discretize the Brillouin zone, we use a spherical coordinate system and divide the Brillouin-zone sphere into $N_\theta N_\phi N_K$ control volumes, where $N_\theta$ is the number of discretizations in the polar angle, $N_\phi$ is the number of discretizations in the azimuthal angle, and $N_K$ is the number of discretizations in the wave vector magnitude. See Figure 8 for a visual representation.
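As a small illustration of this discretization (with assumed counts, since the actual $N_\theta$, $N_\phi$, $N_K$ used in the runs are not quoted here), the control-volume weights of such a spherical k-space grid can be generated as follows:

```python
import numpy as np

# Assumed counts for the spherical k-space discretization described above
N_theta, N_phi, N_K = 4, 8, 10
K_max = 1.0   # zone-boundary wave-vector magnitude (scaled units)

theta = np.linspace(0.0, np.pi, N_theta + 1)        # polar-angle edges
phi   = np.linspace(0.0, 2.0 * np.pi, N_phi + 1)    # azimuthal-angle edges
K     = np.linspace(0.0, K_max, N_K + 1)            # magnitude edges

# Each control-volume weight: (K2^3 - K1^3)/3 * (cos t1 - cos t2) * dphi
dK3  = np.diff(K**3) / 3.0
dmu  = -np.diff(np.cos(theta))
dphi = np.diff(phi)
dV = dK3[:, None, None] * dmu[None, :, None] * dphi[None, None, :]

# Sanity check: the cells tile the whole Brillouin-zone sphere
assert np.isclose(dV.sum(), 4.0 / 3.0 * np.pi * K_max**3)
```
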
[[Image(Si_scatt.jpg, 600px)]]

'''Figure 7''': Dispersion relation for silicon in the [100] direction [16].

The Umklapp scattering rates take the following form: $\tau_U^{-1} = B T \omega^2 e^{-C/T}$. Here the constants ''B'' and ''C'' are 1.73x10^-19^ ''s/K'' and 137.39 K [Mingo et al.]. The Normal scattering rates take the following form: $\tau_N^{-1} = B_l \omega^2 T^3$. Here the constant ''B,,l,,'' is 2x10^-24^ ''s/K^3^''.
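For reference, these two rate expressions are straightforward to evaluate; a small sketch follows (the example frequency and temperature are arbitrary choices, not values from the simulations):

```python
import numpy as np

# Scattering-rate constants quoted above
B  = 1.73e-19   # s/K, Umklapp prefactor
C  = 137.39     # K, Umklapp exponential constant
Bl = 2.0e-24    # s/K^3, Normal prefactor

def tau_U_inv(omega, T):
    """Umklapp scattering rate, 1/s."""
    return B * T * omega**2 * np.exp(-C / T)

def tau_N_inv(omega, T):
    """Normal scattering rate, 1/s."""
    return Bl * omega**2 * T**3

# e.g. a 2 THz phonon at 300 K
omega = 2.0 * np.pi * 2.0e12
print(tau_U_inv(omega, 300.0), tau_N_inv(omega, 300.0))
```
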
The domain is shown in Figure 9. We will have temperature boundary conditions on all sides of the domain. Three of the boundaries will have the same temperature, while the fourth boundary will be 1 Kelvin higher. A schematic of the domain is shown below.

[[Image(spherical.jpg,300px)]]

'''Figure 8''': Schematic of the k-space discretization.

[[Image(domain.jpg,300px)]]

'''Figure 9''': Sketch of the simulation domain. $T_2 = T_1 + 1\,\mathrm{K}$.

To extract the thermal conductivity, we use the analytical solution for the heat rate through the bottom wall:

$\begin{align}
& k = \frac{q\pi}{2\Delta T} \left\{ \sum_n \frac{\left[ (-1)^{n+1} + 1 \right]^2}{n \, \sinh(n\pi)} \right\}^{-1} \\
& k \approx 4.53236\, q
\end{align}$ (14)

| + | | | |
| + | | | |
| + | | | |
'''6. Results and Discussion'''

Our predictions of thermal conductivity yield good qualitative results; however, the quantitative results do not match experiment well. We attribute this to the crude functional representation of the scattering rates. Because the functional forms we used are curve fits to experimental data, they do not represent the difference between Umklapp and Normal scattering well: in the curve fits, Normal scattering was ignored and the Umklapp scattering was fit to thermal conductivity data. To get an accurate representation of the actual scattering rates, a more fundamental method, i.e., Fermi's golden rule, should be used to distinguish Normal from Umklapp scattering. Another reason is that the experimental data are measured for bulk thermal conductivity, whereas we are predicting thermal conductivity in a confined (2D) domain. We therefore focus on the qualitative trends observed when Normal scattering is included in the solution of the BTE.

[[Image(0.1um.jpg,600px)]]

'''Figure 10''': Predicted thermal conductivity for silicon at varying temperature for L=0.1um.

[[Image(0.5um.jpg,600px)]]

'''Figure 11''': Predicted thermal conductivity for silicon at varying temperature for L=0.5um.

[[Image(1umplot.jpg,600px)]]

'''Figure 12''': Predicted thermal conductivity for silicon at varying temperature for L=1.0um.

Figures 10-12 show the predicted thermal conductivity at varying temperatures for three different domain sizes. As expected, the conductivity increases, reaches a peak, and then drops. At low temperatures only low-energy phonon modes are active, leading to low thermal transport for a given temperature difference, and the temperature is not high enough to cause appreciable inter-phonon scattering. In this regime boundary scattering, which has no temperature dependence, is the dominant scattering mechanism. As we increase the temperature, higher-energy modes become active, giving rise to a higher heat flux; moreover, at low enough temperatures the phonons still travel relatively unimpeded because of the low scattering rates. Eventually, inter-phonon scattering begins to take over, the increase in thermal conductivity slows, and a maximum is reached. Beyond this point the specific heat begins to asymptote while inter-phonon scattering continues to increase, causing a decrease in the thermal conductivity.

As expected, the inclusion of Normal processes shows up as an earlier onset of the degradation in thermal conductivity, as well as a lower value at the peak. This is because of the indirect role Normal processes play in the thermal resistance: Normal processes aid in the creation of wave vectors large enough to participate in Umklapp scattering. Callaway's model accomplishes this by redistributing phonons to higher wave vectors in the direction of the lambda vector such that momentum is conserved. Adding the lambda vector allows the Normal scattering term to redistribute phonons more selectively.

As the domain size increases, there is a shift in the temperature where the maximum thermal conductivity occurs. This can be explained intuitively by domain size effects. As we increase the temperature, smaller domains start with a higher average Knudsen number (the Knudsen number is defined as the phonon mean free path divided by the characteristic length of the domain) than larger domains. By starting with a larger Knudsen number, smaller domains experience ballistic effects over a much larger temperature range, so the scattering rates must become larger (i.e., higher temperatures are needed) before a less-than-ballistic regime is reached.

Figures 13 and 14 show the change in thermal conductivity with respect to the domain size with and without Normal processes, respectively. As we increase the domain size, the thermal conductivity increases. We can explain this, again, by domain size effects: to attain the bulk value of thermal conductivity, a certain size must be reached at which boundary effects are no longer felt. At small domain sizes, the boundaries play too large a role for the results to be compared with a bulk material; as the domain size increases, the thermal conductivity asymptotes toward the bulk value.

[[Image(withN.jpg,600px)]]

'''Figure 13''': Predicted thermal conductivity for varying domain size with the inclusion of Normal processes.

[[Image(withoutN.jpg,600px)]]

'''Figure 14''': Predicted thermal conductivity for varying domain size without the inclusion of Normal processes.

[[Image(temp_profile_with_N.jpg,600px)]]

'''Figure 15''': Dimensionless temperature profile along the centerline for varying temperatures, with Normal processes included.

[[Image(temp_profile_without_N.jpg,600px)]]

'''Figure 16''': Dimensionless temperature profile along the centerline for varying temperatures, without Normal processes.

Figures 15 and 16 show the temperature profile along the line $x/L = 0.5$ for L=1um, with and without Normal processes, respectively. The addition of Normal processes has a slight effect on the shape of the higher-temperature curves. This is most easily seen by looking at the temperature at x=L and x=0: when Normal processes are included, these temperatures are closer to the boundary values than when they are not. A decrease in the temperature jump is characteristic of a more diffusive transport regime; the addition of Normal processes contributes to the redistribution of energy, and hence to a lower-temperature onset of near-equilibrium transport.

Another, perhaps more interesting, phenomenon in Figures 15 and 16 is the inverse temperature gradient as you move away from the top wall. This "hump" disappears with increasing temperature, suggesting it is a ballistic effect due to the geometry. The authors are not entirely sure of the explanation for this and leave it to future work for more insight.

'''7. Conclusion'''

The effect of Normal processes on the thermal conductivity of silicon was explored by solving the phonon Boltzmann transport equation using the relaxation time approximation for both Umklapp and Normal scattering. To our knowledge, this is the first time Callaway's approximation for Normal scattering processes has been implemented in a deterministic solution of the BTE accounting for full dispersion. For Normal scattering, phonons scatter to a shifted equilibrium distribution that was calculated by enforcing momentum conservation over all Normal processes. Several consequences followed from the inclusion of Normal scattering. The decrease in thermal conductivity occurred at lower temperatures, because Normal scattering creates a larger population of high-wave-vector phonons which directly contribute to the thermal resistance. The effect of Normal scattering was stronger at larger domain lengths, due to the higher probability of phonon-phonon scattering at smaller Knudsen numbers. Finally, the inclusion of Normal processes caused a greater redistribution of phonon energy among phonon modes, which hastens the onset of a diffusive transport regime.

'''Future Work'''

The scattering rates used here were taken from curve fits to experimental bulk thermal conductivity data. Using rates extracted by fitting to thermal conductivity in order to predict thermal conductivity again is somewhat self-fulfilling and does not give much insight. Future work should use scattering rates taken from a more fundamental source in order to truly understand the effect of Callaway's approximation for Normal scattering processes.

'''References'''

[1] G. E. Moore, "Cramming more components onto integrated circuits", Electronics, vol. 38, p. 114, 1965.
[2] Callaway, J., Model for Lattice Thermal Conductivity at Low Temperatures, Physical Review, Vol. 113, No. 4, pp. 1046-1051 (1959).
[3] Holland, M. G., Analysis of Lattice Thermal Conductivity, Physical Review, Vol. 132, No. 6, pp. 2461-2471 (1963).
[4] G. Chen, Particularities of heat conduction in nanostructures, J. Nanoparticle Res., 2 (2000), pp. 199-204.
[5] R. Berman, Thermal Conduction in Solids.
[6] Dhruv Singh, Jayathi Y. Murthy, and Timothy S. Fisher, On the accuracy of classical and long wavelength approximations for phonon transport in graphene, J. Appl. Phys. 110, 113510 (2011).
[7] S. V. J. Narumanchi, J. Y. Murthy and C. H. Amon, "Submicron heat transport model in silicon accounting for phonon dispersion and polarization", J. Heat Transf., vol. 126, p. 946, 2004.
[8] ME 503 class notes.
[9] Armstrong, B. H., 1985, "N Processes, the Relaxation-Time Approximation, and Lattice Thermal Conductivity," Phys. Rev. B, 32(6), pp. 3381-3390.
[10] Chen, Y., Li, D., Lukes, J. R., and Majumdar, A., 2005, "Monte Carlo Simulation of Silicon Nanowire Thermal Conductivity," ASME J. Heat Transfer, 127, pp. 1129-1137.
[11] C. J. Glassbrenner and G. A. Slack, Phys. Rev. (USA), vol. 134 (1964), p. A1058.
[12] M. G. Holland and L. G. Nueringer, Proc. Int. Conf. Physics of Semiconductors, Exeter, England, 1962 (Inst. Phys., Bristol, 1962), p. 474.
[13] P. G. Klemens, Proc. Phys. Soc. (London) A68, 1113 (1955).
[14] Patankar, S. V., 1980, Numerical Heat Transfer and Fluid Flow, Taylor & Francis, London.
[15] S. R. Mathur and J. Y. Murthy, Journal of Thermophysics and Heat Transfer, 1999, Vol. 12, No. 4, pp. 467-473.
[16] Bazant, M. Z., Kaxiras, E., and Justo, J. F., Environment-Dependent Interatomic Potential for Bulk Silicon, Physical Review B, Vol. 56, pp. 8542-8552 (1997).
http://simple.wikipedia.org/wiki/Velocity
# Velocity
Velocity is a measure of how fast something has moved in a particular direction.[1] In physics, velocity is the rate at which an object changes its position (its displacement divided by the time taken), together with the direction of movement. This makes it a vector quantity. For example, an object could travel at 7 metres per second in a direction 30 degrees south of east. This is a velocity.[2]
$velocity = \frac{displacement}{time}$ plus direction.[1]
So, for example, something that moves in a square and finishes back where it started has not been displaced. The object's displacement is zero, so its average velocity is zero.[1] This is different from the speed at which it moved around the square. People often use velocity and speed to mean the same thing, but they are different: velocity must have a direction.
## References
1. ↑ "Physics Homework Help: Speed, Velocity, Acceleration". physics247.com. Retrieved 25 March 2010.
2. "Vectors, Introduction". id.mind.net. Retrieved 25 March 2010.
http://math.stackexchange.com/questions/198694/minimal-polynomial-of-the-form-zeta-p-frac1-zeta-p-zeta-q-frac1-zeta
# Minimal polynomial of the form $\zeta_p+\frac{1}{\zeta_p}+\zeta_q+\frac{1}{\zeta_q }$?
We can calculate the minimal polynomial of $2\cos(\frac{2\pi}{7})=\zeta_7+\frac{1}{\zeta_7}$ over $\mathbb{Q}$ as $x^3+x^2-2x-1$, and similarly for $2\cos(\frac{2\pi}{5})=\zeta_5+\frac{1}{\zeta_5}$.
Now my question is: is there any way to calculate the minimal polynomial of $\zeta_7+\frac{1}{\zeta_7}+\zeta_5+\frac{1}{\zeta_5}$? Thanks in advance
-
## 1 Answer
The key is to use resultants. If $P$ is the minimal polynomial of $x$, and $Q$ is the minimal polynomial of $y$, then the resultant of $P(x)$ and $Q(z-x)$ as a polynomial in $z$ vanishes when $z=x+y$. Factoring gives you the minimal polynomial of $x+y$.
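As an illustration of this, here is a small sympy sketch of the resultant computation, using the two minimal polynomials quoted in the question:

```python
from sympy import symbols, resultant, factor_list, minpoly, cos, pi

x, z = symbols('x z')

p7 = x**3 + x**2 - 2*x - 1      # minimal polynomial of 2*cos(2*pi/7)
p5 = x**2 + x - 1               # minimal polynomial of 2*cos(2*pi/5)

# resultant in x of p7(x) and p5(z - x) vanishes when z = (root of p7) + (root of p5)
R = resultant(p7, p5.subs(x, z - x), x)
print(factor_list(R))

# cross-check with sympy's built-in minimal polynomial routine
print(minpoly(2*cos(2*pi/7) + 2*cos(2*pi/5), z))
```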
-
http://mathhelpforum.com/differential-equations/138559-general-solution-de.html
# Thread:
1. ## General Solution of DE
Find the general solution of $y' + 2y = 1$, where y is a function of t.
Could someone help me out with this? I'm not sure how to go about solving for the general solution. All I've got is $dy/dt = 1-2y$.
Thanks!
2. It is a linear non-homogeneous equation.
Solve for the integrating factor.
3. Let $\mu = e^{ \int 2 dt} = e^{2t}$. This will be our integrating factor.
Multiplying $\mu$ through to the whole equation we get that
$e^{2t} y' + 2e^{2t} y = e^{2t}$
You can work it from here.
4. Originally Posted by cdlegendary
Could someone help me out with this? I'm not sure how to go about solving for the general solution. All I've got is $dy/dt = 1-2y$.
good call, now, can you see your equation is separable?
5. alright so this is what I ended up trying, but I don't think it's correct. any pointers on where I went wrong?
$e^{2t}y'+2e^{2t}y = e^{2t}$
$(e^{2t}y)' = e^{2t}$
$e^{2t}y = \int e^{2t} dt$
$e^{2t}y = (e^{2t})/2 +c$
$y= 1/2 +c$
6. Dy/dt=1-2y
dy/(1-2y)=1 dt
integrate both sides
You should be able to get it from here
7. Originally Posted by cdlegendary
alright so this is what I ended up trying, but I don't think it's correct. any pointers on where I went wrong?
$e^{2t}y'+2e^{2t}y = e^{2t}$
$(e^{2t}y)' = e^{2t}$
$e^{2t}y = \int e^{2t} dt$
$e^{2t}y = (e^{2t})/2 +c$
$y= 1/2 +c$
Look at what Krizalid said. Since your equation is separable, how about:
$\int \frac{dy}{1-2y} = \int dt$
8. Originally Posted by harish21
Look at what Krizalid said. Since your equation is separable, how about:
$\int \frac{dy}{1-2y} = \int dt$
Alright, so now I've got:
$\int \frac{dy}{1-2y} = \int dt$
$(-1/2)ln(1-2y) = t + c$
$ln(1-2y) = -2t - 2c$
$1-2y = e^{-2t-2c}$
$y=(e^{-2t-2c}-1)/-2$
9. Originally Posted by cdlegendary
Alright, so now I've got:
$\int \frac{dy}{1-2y} = \int dt$
$(-1/2)ln(1-2y) = t + c$
$ln(1-2y) = -2t - 2c$
$1-2y = e^{-2t-2c}$
$y=(e^{-2t-2c}-1)/-2$
That looks good.I would put it this way:
$1-2y = e^{-2t-2c} = e^{-2t} e^{-2c}$
$y = \frac{1}{2}-\frac{e^{-2t} e^{-2c}}{2}$
$y = \frac{1}{2} + {e^{-2t}. C}$
Note : $C = \frac{-e^{-2c}}{2}$ is a constant
10. y' + 2y = 1
First solve the homogeneous equation
y' + 2y = 0
The characteristic equation is l + 2 = 0, so l = -2 and
yg = C e^(-2t)
Then find a particular solution: try y = A, so y' = 0.
Plug into the equation:
2A = 1
A = 1/2
Now yp = 1/2
y = yg + yp
Cheers
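As a quick symbolic check of the general solution, using sympy:

```python
from sympy import Function, Eq, dsolve, symbols

t = symbols('t')
y = Function('y')

print(dsolve(Eq(y(t).diff(t), 1 - 2*y(t)), y(t)))
# Eq(y(t), C1*exp(-2*t) + 1/2)
```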
http://physics.stackexchange.com/questions/16255/are-specific-heat-and-thermal-conductivity-related?answertab=oldest
Are specific heat and thermal conductivity related?
Are there any logical relationship between specific heat capacity and thermal conductivity ?
I was wondering about this when I was reading an article on whether to choose cast iron or aluminium vessels for kitchen.
Aluminium has more thermal conductivity and specific heat than iron ( source ).
This must mean more energy is required to raise the temperature of a unit mass of aluminium than of iron, yet aluminium conducts heat better than cast iron.
Does it mean that aluminium also retains heat better ?
How does mass of the vessel affect the heat retention?
-
Your conclusions are wrong, contrary to reality. In practice, you have to compare the masses of the two pans and their specific heats. Conduction is irrelevant for your question. To your headline: no, at least not a simple one. – Georg Oct 27 '11 at 9:49
1
@Georg how is conduction irrelevant to a question about the relationship between conduction and specific heat? – Nathaniel Apr 6 at 17:31
3 Answers
For metals there is a connection between the thermal conductivity and electric conductivity (Wiedemann–Franz law).
However specific heat is not directly related. This is because electric and thermal conductivity are due to the electrons, however the specific heat is mostly due to the ion vibrations (phonons).
Despite "classical" intuition, electrons contribute almost nothing to the specific heat in metals. Electrons in a typical metal behave close to an ideal fermion gas, deep in the quantum (degenerate) regime: a typical Fermi temperature is about 40,000 K.
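As a rough numerical companion to the Wiedemann–Franz remark above, the Lorenz number and the electronic thermal conductivity it predicts from a metal's electrical conductivity can be estimated as follows (the copper conductivity is an assumed typical value):

```python
import numpy as np

kB = 1.380649e-23       # J/K
e  = 1.602176634e-19    # C

L = np.pi**2 / 3 * (kB / e)**2    # Lorenz number, ~2.44e-8 W*Ohm/K^2

sigma_Cu = 5.96e7       # S/m, electrical conductivity of copper (assumed)
T = 300.0               # K
print(L, L * sigma_Cu * T)        # predicted kappa ~4e2 W/(m K), close to measured ~400
```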
-
1st this is named Wiedemann-Franz-Lorenz and 2nd this is a rule, not a law! and 3rd this isn't an answer. – Georg Oct 27 '11 at 21:28
@Georg: (1) Wiedemann–Franz law is a common name for this phenomena, see here: en.wikipedia.org/wiki/Wiedemann%E2%80%93Franz_law – valdo Oct 28 '11 at 6:11
@Georg: (2) Why isn't it "an answer"? If specific heat is contributed mostly by phonons, whereas thermal/electric conductivity is due to the electrons? – valdo Oct 28 '11 at 6:13
Wiki and american "science" isn't reliable for such questions. It is called a rule, because it is not really universal and precise. And not an answer: reread the question! – Georg Oct 28 '11 at 10:54
@Georg: IMHO in physics, unlike math, there are no "universal and precise" laws. Virtually every law has its scope of application. The very fundamental laws eventually get reformulated upon new discoveries. – valdo Oct 28 '11 at 16:41
There is not really a general answer to your question because both, the specific heat capacity and the thermal conductivity are not due to a single process in the material.
Both are in general terms a "sum" over the individual components in the material that can store thermal energy or transport thermal energy.
For metals at room temperature the most important terms of these sums are the electrons and phonons (vibrations of the lattice). Both can store and transport thermal energy. Their exact values, temperature dependence, etc. is highly material specific.
The specific heat part that is due to the electrons is mainly governed by electrons within a certain energy range (the Fermi energy). Exactly the same electrons transport heat in the material. So more electrons in that range means both, more specific heat and a higher thermal conductivity.
This get complicated if you look at a real material. A little bit of impurities or defects will influence the thermal conductivity quite a bit but the specific heat will not be influenced significantly.
1. Yes, Aluminium will be able to store more thermal energy than Iron (http://www.engineeringtoolbox.com/specific-heat-metals-d_152.html) per mass.
2. The mass will linearly increase the heat capacity, more mass, higher heat capacity.
(I did not use your term retention, because it is not really defined, but thermal conductivity and heat capacity are easy to understand)
-
At room temperature, electrons in metals contribute almost nothing to the specific heat. A typical Fermi temperature is about 40,000 K, which means thermal excitations of the electrons are almost negligible. – valdo Oct 31 '11 at 15:14
The phonon contribution will surely dominate, the electrons will only contribute a few percent at room temperature. They are a nice example though that thermal conductivity and specific heat are connected. The connection cannot be expressed by a single law like Wiedemann-Franz because the specific heat is not really influenced by scattering processes. – Alexander Oct 31 '11 at 17:35
Imagine a substance in the size and form of an ice cube. If you could keep shooting it with a photon of say energy $1$ and you shot $10$ of these photons and noticed that the substance had gained a temperature difference say from $25$ to $26^\circ\mathrm{C}$, then its specific heat capacity would be $10$. (Specific heat capacity is more like a measure of the external energy given to produce the temperature change.) And it might even give off this temperature as fast as it got it.
Now for thermal conductivity (this guy is more like a range thing). If you could place a finger on one side of this substance and start your photon shooting on the other side, you may notice that even if the photon-receiving side has done the $25$ to $26^\circ\mathrm{C}$ climb, the side your finger is on might not have. (What you're doing now is obtaining the thermal conductivity of that substance.) $20$ photons might get the climb or not. Going on to $30$, $40$, ......
So basically to obtain this climb for the same cube of aluminium or iron, it might take $10$.
-
http://www.nag.com/numeric/cl/nagdoc_cl23/html/F/fintro.html
# NAG Library Chapter Introductionf – Linear Algebra
## 1 Introduction
The f Chapters of the Library are concerned with linear algebra and cover a large area. This general introduction is intended to help you decide which particular f Chapter is relevant to your problem. The following Chapters are currently available:
The principal problem areas addressed by the above Chapters are
• Systems of linear equations
• Linear least squares problems
• Eigenvalue and singular value problems
The solution of these problems usually involves several matrix operations, such as a matrix factorization followed by the solution of the factorized form, and the functions for these operations themselves utilize lower level support functions, typically from Chapter f16. You will not normally need to be concerned with these support functions.
NAG has been involved in a project, called LAPACK (see Anderson et al. (1999)), to develop a linear algebra package for modern high-performance computers, and the functions developed within that project are being incorporated into the Library as Chapters f07 and f08. It should be emphasized that, while the LAPACK project has been concerned with high-performance computers, the functions do not compromise efficiency on conventional machines.
Chapters f11 and f12 contain functions for solving large scale problems, but a few earlier functions are still located in Chapters f01, f02 and f04.
For background information on numerical algorithms for the solution of linear algebra problems see Golub and Van Loan (1996). In some problem areas you have the choice of selecting a single function to solve the problem, a so-called Black Box function, or selecting more than one function to solve the problem, such as a factorization function followed by a solve function, so-called General Purpose functions. The following sections indicate which chapters are relevant to particular problem areas.
## 2 Linear Equations
The Black Box functions for solving linear equations of the form
$Ax=b \quad \text{and} \quad AX=B,$
where $A$ is an $n$ by $n$ real or complex nonsingular matrix, are to be found in Chapters f04 and f07. Such equations can also be solved by selecting a general purpose factorization function from Chapter f01 and combining them with a solve function in Chapter f04, or by selecting a factorization and a solve function from Chapter f07. For large sparse problems, functions from Chapter f11 should be used. In addition there are functions to estimate condition numbers and functions to give error estimates in Chapters f02, f04 and f07.
There are functions to cater for a variety of types of matrix, including general, symmetric or Hermitian, symmetric or Hermitian positive definite, banded, skyline and sparse matrices.
In order to select the appropriate function, you are recommended to consult the f04 Chapter Introduction in the first instance, although the decision trees will often in fact point to a function in Chapters f07 or f11.
## 3 Linear Least Squares
Functions for solving linear least squares problems of the form
$\min_{x} \; r^{T} r, \quad \text{where } r=b-Ax,$
and $A$ is an $m$ by $n$, possibly rank deficient, matrix, can be solved by selecting one or more general purpose factorization functions from Chapters f02 or f08 and combining them with a solve function in Chapter f04. Linear least squares problems can also be solved by functions in the statistical Chapter g02.
In order to select the appropriate function, you are recommended to consult the f04 Chapter Introduction in the first instance, but if you have additional statistical requirements you may prefer to consult Section 2.2 in the g02 Chapter Introduction.
Chapter f08 also contains functions for solving linear equality constrained least squares problems, and the general Gauss–Markov linear model problem. Chapter e04 contains a function to solve general linearly constrained linear least squares problems.
## 4 Eigenvalue Problems and Singular Value Problems
The Black Box functions for solving standard matrix eigenvalue problems of the form
$Ax=λx,$
where $A$ is an $n$ by $n$ real or complex matrix, and generalized matrix eigenvalue problems of the form
$Ax=λBx \quad \text{and} \quad ABx=λx,$
where $B$ is also an $n$ by $n$ matrix, are to be found in Chapters f02, f08 and f12. These eigenvalue problems can also be solved by a combination of General Purpose functions in Chapter f08.
There are functions to cater for various types of matrices, including general, symmetric or Hermitian and banded.
Similarly, the Black Box functions for finding singular values and/or singular vectors of an $m$ by $n$ real or complex matrix $A$ are to be found in Chapters f02 and f08, and such problems may also be solved by functions from Chapter f12, and by combining functions from Chapter f08.
In order to select the appropriate function, you are recommended to consult Chapters f02 and f08 in the first instance.
## 5 Inversion and Determinants
Functions for matrix inversion are to be found in Chapter f07. It should be noted that you are strongly encouraged not to use matrix inversion functions for the solution of linear equations, since these can be solved more efficiently and accurately using functions directed specifically at such problems. Indeed many problems, which superficially appear to be matrix inversion, can be posed as the solution of a system of linear equations and this is almost invariably preferable.
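As a generic illustration of why solving is preferred over explicit inversion (this sketch uses numpy rather than the NAG Library, purely to show the pattern):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.normal(size=(n, n)) + n * np.eye(n)   # a well-conditioned test matrix
b = rng.normal(size=n)

x_solve = np.linalg.solve(A, b)        # factorize-and-solve: typically faster and at least as accurate
x_inv   = np.linalg.inv(A) @ b         # explicit inverse: extra work, usually no benefit

print(np.linalg.norm(A @ x_solve - b), np.linalg.norm(A @ x_inv - b))
```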
Functions to compute determinants of matrices are to be found in Chapter f03. You are recommended to consult Chapter f03 in the first instance.
## 6 Support Functions
Chapter f16 contains a variety of functions to perform elementary algebraic operations involving scalars, vectors and matrices, such as setting up a plane rotation, performing a dot product and computing a matrix norm. Chapter f16 contains functions that meet the specification of the BLAS (Basic Linear Algebra Subprograms) (see Lawson et al. (1979), Dodson et al. (1991), Dongarra et al. (1988), Dongarra et al. (1990) and Blackford et al. (2002)). The functions in this chapter will not normally be required by the general user, but are intended for use by those who need to build specialist linear algebra modules. These functions, especially the BLAS, are extensively used by other NAG C Library functions.
## 7 References
Anderson E, Bai Z, Bischof C, Blackford S, Demmel J, Dongarra J J, Du Croz J J, Greenbaum A, Hammarling S, McKenney A and Sorensen D (1999) LAPACK Users' Guide (3rd Edition) SIAM, Philadelphia
Blackford L S, Demmel J, Dongarra J J, Duff I S, Hammarling S, Henry G, Heroux M, Kaufman L, Lumsdaine A, Petitet A, Pozo R, Remington K and Whaley R C (2002) An updated set of Basic Linear Algebra Subprograms (BLAS) ACM Trans. Math. Software 28 135–151
Dodson D S, Grimes R G and Lewis J G (1991) Sparse extensions to the Fortran basic linear algebra subprograms ACM Trans. Math. Software 17 253–263
Dongarra J J, Du Croz J J, Duff I S and Hammarling S (1990) A set of Level 3 basic linear algebra subprograms ACM Trans. Math. Software 16 1–28
Dongarra J J, Du Croz J J, Hammarling S and Hanson R J (1988) An extended set of FORTRAN basic linear algebra subprograms ACM Trans. Math. Software 14 1–32
Golub G H and Van Loan C F (1996) Matrix Computations (3rd Edition) Johns Hopkins University Press, Baltimore
Lawson C L, Hanson R J, Kincaid D R and Krogh F T (1979) Basic linear algebra subprograms for Fortran usage ACM Trans. Math. Software 5 308–325
http://nrich.maths.org/1045/index?nomenu=1
## 'Numerically Equal' printed from http://nrich.maths.org/
I want to draw a square in which the perimeter is numerically equal to the area.
Of course, the perimeter will be measured in units of length, for example, centimetres (cm) while the area will be measured in square units, for example, square centimetres (cm$^2$).
What size square will I need to draw?
What about drawing a rectangle that is twice as long as it is wide which still has a perimeter numerically equal to its area?
Can They Be Equal? offers a suitable extension to this problem.
http://stats.stackexchange.com/questions/11421/what-is-the-difference-between-empirical-variance-and-variance?answertab=oldest
# What is the difference between empirical variance and variance?
As far as I know variance is calculated as
$$\text{variance} = \frac{(x-\text{mean})^2}{n}$$
while
$$\text{Empirical Variance} = \frac{(x-\text{mean})^2}{n(n-1)}$$
Is it correct? Or is there some other definition? Kindly explain with example or any refence for reading on this topic
-
I have used Latex to alter the presentation of your question. If this is not what you intended, let me know – Henry Jun 1 '11 at 9:34
## 1 Answer
In your expression for the variance, you need to take a sum (or integral) across the population
$$\text{variance} = \frac{\sum_i(x_i-\text{mean})^2}{n}$$
If your data is a sample from the population then this expression will give you a biased estimate of the population variance. An unbiased estimate would be as follows (note the change in the denominator from your expression), often called the sample variance
$$\text{Sample variance} = \frac{\sum_i(x_i-\text{mean})^2}{n-1}$$
If on the other hand you were trying to estimate the variance of the sample mean, then you would have a smaller number, closer to your expression. The square root of this is called the standard error of the mean and a reasonable estimate is
$$\text{Standard error} = \sqrt{\frac{\sum_i (x_i-\text{mean})^2}{n(n-1)}}$$
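A quick numerical illustration of the three quantities (the sample and its size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=10.0, scale=2.0, size=50)    # a sample; true variance = 4
n = len(x)
m = x.mean()

pop_style_var = ((x - m)**2).sum() / n                       # biased "population" formula
sample_var    = ((x - m)**2).sum() / (n - 1)                 # unbiased sample variance
std_err_mean  = np.sqrt(((x - m)**2).sum() / (n * (n - 1)))  # standard error of the mean

print(pop_style_var, sample_var, std_err_mean)
# numpy equivalents
print(np.var(x), np.var(x, ddof=1), x.std(ddof=1) / np.sqrt(n))
```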
-
2
See en.wikipedia.org/wiki/Bias_of_an_estimator#Sample_variance for an explanation why the variance $1/n \sum_{i} (x_{i} - \bar{x})^2$ is a biased estimator, and vdov.net/~acosta/content/mle-normal for an explanation why it is the maximum-likelihood estimator for normal variables. – caracal Jun 1 '11 at 13:48
http://physics.stackexchange.com/questions/13961/physical-interpretation-of-describing-mass-in-units-of-length/13969
# Physical interpretation of describing mass in units of length
I'm working in Taylor and Wheeler's "Exploring Black Holes" and on p.2-14 they use two honorary constants: Newton's constant divided by the speed of light squared e.g. $G/c^2$ as a term to convert mass measured in $kg$ to distance.
Without doing the arithmetic here, the "length" of the Earth is 0.444 cm; and of the sun is 1.477 km. To what do these distances correspond? What is their physical significance, generally?
-
What do you mean by "in the metric"? (I know what a metric is, but I'm just not seeing why you use that phrase.) – David Zaslavsky♦ Aug 25 '11 at 19:03
Nasty edit. I took it out b/c it added nothing but confusion, as evidenced by your question and comment... – bwkaplan Aug 25 '11 at 19:06
## 3 Answers
They represent the scale on which general relativisic effects dominate physics related to bodies of that mass.
For instance if you were to create a (non-rotating, uncharged) black hole of 1 Earth mass, its event horizon would have a radius of about $9\text{ mm} = 2 * M_\text{Earth}$ in those units.
For scales much, much larger than the "length" of the mass, general relativity may be neglected. For intermediate scale in comes in as corrections on order of $\frac{l}{L}$ where $l$ is the mass in the scaled units and $L$ is the length scale of the problem.
This is similar to what particle physicists do by setting $c = \hbar = 1\text{ (dimensionless)}$ energy scales and length scales become inter-changeable.
-
This answer gets my vote! – bwkaplan Aug 25 '11 at 20:11
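A quick numerical check of the lengths quoted in the question, using standard values for $G$, $c$ and the two masses:

```python
G = 6.67430e-11        # m^3 kg^-1 s^-2
c = 2.99792458e8       # m/s

M_earth = 5.972e24     # kg
M_sun   = 1.989e30     # kg

for name, M in [("Earth", M_earth), ("Sun", M_sun)]:
    print(name, G * M / c**2)   # metres: ~4.4e-3 m (0.44 cm) and ~1.48e3 m (1.48 km)
```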
Earth is 0.444 cm; and of the sun is 1.477 km
It corresponds to half of the respective Schwarzschild radius.
The $\frac{G}{c^2}$ is covered there and also in Adam’s answer.
-
I'm not sure it's terribly helpful, but it seems like the following analysis helps explain dmckee's response.
The force of gravity is
$F = G \frac{m M}{r^2}$.
Rearranging and dividing by $c^2$ gives
$\frac{G}{c^2} = \frac{F r^2}{M (m c^2)}$
where the $mc^2$ is the rest mass energy $(E_0)$ of the object experiencing the force caused by mass $M$. When you multiply through by the mass of the "large" object you get
$M \frac{G}{c^2} = l = \frac{F_g r^2}{E_0}$
Since we are interested in the length $l$, at that distance we have
$M\frac{G}{c^2} = r = \frac{F_g r^2}{E_0}$
or simply
$E_0 = F_g r$.
In words, this is the distance at which the energy of the system due to the rest mass of an object in a gravitational field is the same as the potential energy due to gravitation.
-
http://physics.stackexchange.com/questions/tagged/spin-statistics
# Tagged Questions
The spin-statistics tag has no wiki summary.
1answer
58 views
### NP-completeness of non-planar Ising model versus polynomial time eigenvalue algorithms
From the papers by Barahona and Istrail I understand that a combinatorial approach is followed to prove the NP-completeness of non-planar Ising models. Basic idea is non-planarity here. On the other ...
1answer
53 views
### Is conservation of statistics logically independent of spin?
If the number of fermions is $n$, we expect the quantity $(-1)^n$ to be conserved, i.e., $n$ never changes between even and odd. This is known as conservation of statistics. In the normal context of ...
0answers
63 views
### Ising Hamiltonian for relativistic particles
An Ising system is described by the simple Hamiltonian: $$H = \sum\limits_{i} c_{1i} x_{i} + \sum\limits_{i,j} c_{2ij} x_i x_j \,\,\,\,\,\,\,\,\,\,(1)$$ Here the $x_i$ are spins (+1 or -1 in units ...
1answer
116 views
### Does the Higgs mechanism address the spin statistics problem?
Since the Higgs mechanism is so intimately tied to binding together massless chiral fermions, does it happen to have anything to say about the spin statistics issue? I'm actually assuming the answer ...
0answers
59 views
### Question about the derivation of an equation in full replica symmetry breaking solution
Using replica method and saddle point method, the free energy of a magnetic system can be expressed as -\beta[f]=\lim_{n\to0}\{\frac{-\beta^2J^2}{4n}\sum_{a\ne b}q_{\alpha\beta}^2-\frac{\beta ...
1answer
237 views
### Why is the majorana particle a fermion?
My knowledge of quantum mechanics is rather limited, but what I always understood was that Bosons have integer spins and Fermions have half-integer spins. My question is very simple: the Majorana ...
1answer
574 views
### Partition function of bosons vs fermions
I have two atoms, both of which are either bosons or fermions, with four allowed energy states: $E_1 = 0$, $E_2 = E$, $E_3 = 2E$, with degeneracies 1, 1, 2 respectively. What's the difference between ...
2answers
192 views
### Fermion Field of Standard Model
Why fermion field is treated as anti-commuting and boson field as truly classical in standard model?
2answers
159 views
### Why is fractional statistics and non-Abelian common for fractional charges?
Why non integer spins obey Fermi statistics? Why is fractional statistics and non-Abelian common for fractional charges?
3answers
1k views
### What are distinguishable and indistinguishable particles in statistical mechanics?
What are distinguishable and indistinguishable particles in statistical mechanics? While learning different distributions in statistical mechanics I came across this doubt; Maxwell-Boltzmann ...
2answers
165 views
### Why Pauli exclusion instead of electrons canceling out?
To quote Wikipedia, The Pauli exclusion principle is the quantum mechanical principle that no two identical fermions (particles with half-integer spin) may occupy the same quantum state ...
1answer
175 views
### Does there exist a nonrelativistic physical system in which the effective long-distance fields violate spin/statistics?
The nonrelativistic Schrodinger field allows spin independent of statistics, so that you can imagine a nonrelativistic Schrodinger scalar field with Fermionic statistics, or a Schrodinger spinor field ...
2answers
263 views
### Occam's razor on spin statistics theorem?
Highly related to A reading list to build up to the spin statistics theorem I see 2 parts to the spin statistics theorem: (spin $n$ or $n+\frac{1}{2}$) step 1 given that a spin is integral or ...
1answer
232 views
### Time reversal symmetry and T^2 = -1
I'm a mathematician interested in abstract QFT. I'm trying to understand why, under certain (all?) circumstances, we must have $T^2 = -1$ rather than $T^2 = +1$, where $T$ is the time reversal ...
1answer
181 views
### Can the CPT theorem be valid if Lorentz invariance is only spontaneously broken?
Earlier, I asked here whether one can have spontaneous breaking of the Lorentz symmetry and was shown a Lorentz invariant term that can drive the vacuum to not be Lorentz invariant. How relaxed are ...
1answer
383 views
### A reading list to build up to the spin statistics theorem
Wikipedia's article on the spin-statistics theorem sums it up thusly: In quantum mechanics, the spin-statistics theorem relates the spin of a particle to the particle statistics it obeys. The spin ...
2answers
269 views
### Example of a wavefunction that cannot be represented by a single Slater determinant
I know that in general, interacting fermions cannot necessarily be described by a single Slater determinant. Can anyone provide a simple example of a state that has no such representation?
2answers
966 views
### What causes the Pauli exclusion principle (and why does spin 1/2 = fermion)?
It seems to be related to exchange interaction, but I can't penetrate the Wikipedia article. What has the Pauli exclusion principle to do with indistinguishability?
2answers
166 views
### existing bounds on maximum density achieved by a Bose condensate
As we know, fermions are subject to exchange interactions that limit the densities they can achieve. However bosons (simple or composite) are not constrained by this, which implies physical phenomena ...
http://www.reference.com/browse/incisal+guide+angle
Definitions
# Angle
[ang-guhl] /ˈæŋgəl/
In geometry and trigonometry, an angle (in full, plane angle) is the figure formed by two rays sharing a common endpoint, called the vertex of the angle . The magnitude of the angle is the "amount of rotation" that separates the two rays, and can be measured by considering the length of circular arc swept out when one ray is rotated about the vertex to coincide with the other (see "Measuring angles", below). Where there is no possibility of confusion, the term "angle" is used interchangeably for both the geometric configuration itself and for its angular magnitude (which is simply a numerical quantity).
The word angle comes from the Latin word angulus, meaning "a corner". The word angulus is a diminutive, of which the primitive form, angus, does not occur in Latin. Cognate words are the Latin angere, meaning "to compress into a bend" or "to strangle", the Greek ἀγκύλος (ankylοs), meaning "crooked, curved," and the English word "ankle." All three are connected with the Proto-Indo-European root *ank-, meaning "to bend" or "bow" .
## History
Euclid defines a plane angle as the inclination to each other, in a plane, of two lines which meet each other, and do not lie straight with respect to each other. According to Proclus an angle must be either a quality or a quantity, or a relationship. The first concept was used by Eudemus, who regarded an angle as a deviation from a straight line; the second by Carpus of Antioch, who regarded it as the interval or space between the intersecting lines; Euclid adopted the third concept, although his definitions of right, acute, and obtuse angles are certainly quantitative.
## Measuring angles
In order to measure an angle θ, a circular arc centered at the vertex of the angle is drawn, e.g. with a pair of compasses. The length of the arc s is then divided by the radius of the circle r, and possibly multiplied by a scaling constant k (which depends on the units of measurement that are chosen):
$$\theta = \frac{s}{r}\,k.$$
The value of θ thus defined is independent of the size of the circle: if the length of the radius is changed then the arc length changes in the same proportion, so the ratio s/r is unaltered.
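As an illustration (my own sketch, not part of the original article), the formula can be evaluated directly; the function name and sample values below are arbitrary, and the scaling constants follow the $k = n/(2\pi)$ rule derived in the next subsection:

```python
import math

def angle_from_arc(s, r, k=1.0):
    """Angle subtended by an arc of length s on a circle of radius r,
    scaled by the constant k that fixes the unit (k = 1 gives radians)."""
    return (s / r) * k

r = 2.0
s = math.pi * r / 2                                # a quarter of the circumference
print(angle_from_arc(s, r))                        # ~1.5708 (radians)
print(angle_from_arc(s, r, k=360 / (2*math.pi)))   # ~90 (degrees)
print(angle_from_arc(s, r, k=400 / (2*math.pi)))   # ~100 (gon)
```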
In many geometrical situations, angles that differ by an exact multiple of a full circle are effectively equivalent (it makes no difference how many times a line is rotated through a full circle because it always ends up in the same place). However, this is not always the case. For example, when tracing a curve such as a spiral using polar coordinates, an extra full turn gives rise to a quite different point on the curve.
### Units
Angles are considered dimensionless, since they are defined as the ratio of lengths. There are, however, several units used to measure angles, depending on the choice of the constant k in the formula above. Of these units, treated in more detail below, the degree and the radian are by far the most common.
With the notable exception of the radian, most units of angular measurement are defined such that one full circle (i.e. one revolution) is equal to n units, for some whole number n. For example, in the case of degrees, n = 360. A full circle of n units is obtained by setting k = n/(2π) in the formula above. (Proof. The formula above can be rewritten as k = θr/s. One full circle, for which θ = n units, corresponds to an arc equal in length to the circle's circumference, which is 2πr, so s = 2πr. Substituting n for θ and 2πr for s in the formula results in k = nr/(2πr) = n/(2π).)
• The degree, denoted by a small superscript circle (°), is 1/360 of a full circle, so one full circle is 360°. One advantage of this old sexagesimal subunit is that many angles common in simple geometry are measured as a whole number of degrees. Fractions of a degree may be written in normal decimal notation (e.g. 3.5° for three and a half degrees), but the following sexagesimal subunits of the "degree-minute-second" system are also in use, especially for geographical coordinates and in astronomy and ballistics:
• The minute of arc (or MOA, arcminute, or just minute) is 1/60 of a degree. It is denoted by a single prime ( ′ ). For example, 3° 30′ is equal to 3 + 30/60 degrees, or 3.5 degrees. A mixed format with decimal fractions is also sometimes used, e.g. 3° 5.72′ = 3 + 5.72/60 degrees. A nautical mile was historically defined as a minute of arc along a great circle of the Earth.
• The second of arc (or arcsecond, or just second) is 1/60 of a minute of arc and 1/3600 of a degree. It is denoted by a double prime ( ″ ). For example, 3° 7′ 30″ is equal to 3 + 7/60 + 30/3600 degrees, or 3.125 degrees.
• The radian is the angle subtended by an arc of a circle that has the same length as the circle's radius (k = 1 in the formula given earlier). One full circle is 2π radians, and one radian is 180/π degrees, or about 57.2958 degrees. The radian is abbreviated rad, though this symbol is often omitted in mathematical texts, where radians are assumed unless specified otherwise. The radian is used in virtually all mathematical work beyond simple practical geometry, due, for example, to the pleasing and "natural" properties that the trigonometric functions display when their arguments are in radians. The radian is the (derived) unit of angular measurement in the SI system.
• The mil is approximately equal to a milliradian. There are several definitions.
• The full circle (or revolution, rotation, or cycle) is one complete revolution. The revolution and rotation are abbreviated rev and rot, respectively, but just r in rpm (revolutions per minute). 1 full circle = 360° = 2π rad = 400 gon = 4 right angles.
• The right angle is 1/4 of a full circle. It is the unit used in Euclid's Elements. 1 right angle = 90° = π/2 rad = 100 gon.
• The sextant (the angle of the equilateral triangle) is 1/6 of a full circle. It was the unit used by the Babylonians, and is especially easy to construct with ruler and compasses. The degree, minute of arc and second of arc are sexagesimal subunits of the Babylonian unit. 1 Babylonian unit = 60° = π/3 rad ≈ 1.047197551 rad.
• The grad, also called grade, gradian, or gon, is 1/400 of a full circle, so one full circle is 400 grads and a right angle is 100 grads. It is a decimal subunit of the right angle. A kilometer was historically defined as a centi-gon of arc along a great circle of the Earth, so the kilometer is the decimal analog to the sexagesimal nautical mile. The gon is used mostly in triangulation.
• The point, used in navigation, is 1/32 of a full circle. It is a binary subunit of the full circle. Naming all 32 points on a compass rose is called "boxing the compass". 1 point = 1/8 of a right angle = 11.25° = 12.5 gon.
• The astronomical hour angle is 1/24 of a full circle. The sexagesimal subunits were called minute of time and second of time (even though they are units of angle). 1 hour = 15° = π/12 rad = 1/6 right angle ≈ 16.667 gon.
• The binary degree, also known as the binary radian (or brad), is 1/256 of a full circle. The binary degree is used in computing so that an angle can be efficiently represented in a single byte.
• The grade of a slope, or gradient, is not truly an angle measure (unless it is explicitly given in degrees, as is occasionally the case). Instead it is equal to the tangent of the angle, or sometimes the sine. Gradients are often expressed as a percentage. For the usual small values encountered (less than 5%), the grade of a slope is approximately the measure of an angle in radians.
### Positive and negative angles
A convention universally adopted in mathematical writing is that angles given a sign are positive angles if measured anticlockwise, and negative angles if measured clockwise, from a given line. If no line is specified, it can be assumed to be the x-axis in the Cartesian plane. In many geometrical situations a negative angle of −θ is effectively equivalent to a positive angle of "one full rotation less θ". For example, a clockwise rotation of 45° (that is, an angle of −45°) is often effectively equivalent to an anticlockwise rotation of 360° − 45° (that is, an angle of 315°).
In three dimensional geometry, "clockwise" and "anticlockwise" have no absolute meaning, so the direction of positive and negative angles must be defined relative to some reference, which is typically a vector passing through the angle's vertex and perpendicular to the plane in which the rays of the angle lie.
In navigation, bearings are measured from north, increasing clockwise, so a bearing of 45 degrees is north-east. Negative bearings are not used in navigation, so north-west is 315 degrees.
### Approximations
• 1° is approximately the width of a little finger at arm's length
• 10° is approximately the width of a closed fist at arm's length.
• 20° is approximately the width of a handspan at arm's length.
## Identifying angles
In mathematical expressions, it is common to use Greek letters (α, β, γ, θ, φ, ...) to serve as variables standing for the size of some angle. (To avoid confusion with its other meaning, the symbol π is typically not used for this purpose.) Lower case roman letters (a, b, c, ...) are also used. See the figures in this article for examples.
In geometric figures, angles may also be identified by the labels attached to the three points that define them. For example, the angle at vertex A enclosed by the rays AB and AC (i.e. the lines from point A to point B and point A to point C) is denoted ∠BAC or BÂC. Sometimes, where there is no risk of confusion, the angle may be referred to simply by its vertex ("angle A").
Potentially, an angle denoted, say, ∠BAC might refer to any of four angles: the clockwise angle from B to C, the anticlockwise angle from B to C, the clockwise angle from C to B, or the anticlockwise angle from C to B, where the direction in which the angle is measured determines its sign (see Positive and negative angles). However, in many geometrical situations it is obvious from context that the positive angle less than or equal to 180° degrees is meant, and no ambiguity arises. Otherwise, a convention may be adopted so that ∠BAC always refers to the anticlockwise (positive) angle from B to C, and ∠CAB to the anticlockwise (positive) angle from C to B.
## Types of angles
• An angle of 90° (π/2 radians, or one-quarter of the full circle) is called a right angle.
• :Two lines that form a right angle are said to be perpendicular or orthogonal.
• Angles smaller than a right angle (less than 90°) are called acute angles ("acute" meaning "sharp").
• Angles larger than a right angle and smaller than two right angles (between 90° and 180°) are called obtuse angles ("obtuse" meaning "blunt").
• Angles equal to two right angles (180°) are called straight angles.
• Angles larger than two right angles but less than a full circle (between 180° and 360°) are called reflex angles.
• Angles that have the same measure are said to be congruent.
• Two angles opposite each other, formed by two intersecting straight lines that form an "X" like shape, are called vertical angles or opposite angles. These angles are congruent.
• Angles that share a common vertex and edge but do not share any interior points are called adjacent angles.
• Two angles that sum to one right angle (90°) are called complementary angles.
• :The difference between an angle and a right angle is termed the complement of the angle.
• Two angles that sum to a straight angle (180°) are called supplementary angles.
• :The difference between an angle and a straight angle is termed the supplement of the angle.
• Two angles that sum to one full circle (360°) are called explementary angles or conjugate angles.
• An angle that is part of a simple polygon is called an interior angle if it lies on the inside of that simple polygon. Note that in a simple polygon that is concave, at least one interior angle exceeds 180°.
• :In Euclidean geometry, the measures of the interior angles of a triangle add up to π radians, or 180°; the measures of the interior angles of a simple quadrilateral add up to 2π radians, or 360°. In general, the measures of the interior angles of a simple polygon with n sides add up to [(n − 2) × π] radians, or [(n − 2) × 180]°.
• The angle supplementary to the interior angle is called the exterior angle. It measures the amount of "turn" one has to make at this vertex to trace out the polygon. If the corresponding interior angle exceeds 180°, the exterior angle should be considered negative. Even in a non-simple polygon it may be possible to define the exterior angle, but one will have to pick an orientation of the plane (or surface) to decide the sign of the exterior angle measure.
• :In Euclidean geometry, the sum of the exterior angles of a simple polygon will be 360°, one full turn.
• Some authors use the name exterior angle of a simple polygon to simply mean the explementary (not supplementary!) of the interior angle. This conflicts with the above usage.
• The angle between two planes (such as two adjacent faces of a polyhedron) is called a dihedral angle. It may be defined as the acute angle between two lines normal to the planes.
• The angle between a plane and an intersecting straight line is equal to ninety degrees minus the angle between the intersecting line and the line that goes through the point of intersection and is normal to the plane.
• If a straight transversal line intersects two parallel lines, corresponding (alternate) angles at the two points of intersection are congruent; adjacent angles are supplementary (that is, their measures add to π radians, or 180°).
## A formal definition
### Using trigonometric functions
A Euclidean angle is completely determined by the corresponding right triangle. In particular, if $\theta$ is a Euclidean angle, it is true that
$$\cos \theta = \frac{x}{\sqrt{x^2 + y^2}}$$
and
$$\sin \theta = \frac{y}{\sqrt{x^2 + y^2}}$$
for two numbers x and y. So an angle in the Euclidean plane can be legitimately given by two numbers x and y.
To the ratio y/x there correspond two angles in the geometric range 0 < θ < 2π, since
$$\frac{\sin \theta}{\cos \theta} = \frac{y/\sqrt{x^2 + y^2}}{x/\sqrt{x^2 + y^2}} = \frac{y}{x} = \frac{-y}{-x} = \frac{\sin (\theta + \pi)}{\cos (\theta + \pi)}.$$
### Using rotations
Suppose we have two unit vectors $\vec{u}$ and $\vec{v}$ in the Euclidean plane $\mathbb{R}^2$. Then there exists one positive isometry (a rotation), and one only, from $\mathbb{R}^2$ to $\mathbb{R}^2$ that maps $\vec{u}$ onto $\vec{v}$. Let $r$ be such a rotation. Then the relation $\vec{a}\,\mathcal{R}\,\vec{b}$ defined by $\vec{b}=r(\vec{a})$ is an equivalence relation, and we call the angle of the rotation $r$ the equivalence class $\mathbb{T}/\mathcal{R}$, where $\mathbb{T}$ denotes the unit circle of $\mathbb{R}^2$. The angle between two vectors will simply be the angle of the rotation that maps one onto the other. We have no numerical way of determining an angle yet. To do this, we choose the vector $(1,0)$, then for any point M on $\mathbb{T}$ at distance $\theta$ from $(1,0)$ (on the circle), let $\vec{u}=\overrightarrow{OM}$. If we call $r_\theta$ the rotation that transforms $(1,0)$ into $\vec{u}$, then $\left[r_\theta\right]\mapsto\theta$ is a bijection, which means we can identify any angle with a number between 0 and $2\pi$.
## Angles between curves
The angle between a line and a curve (mixed angle) or between two intersecting curves (curvilinear angle) is defined to be the angle between the tangents at the point of intersection. Various names (now rarely, if ever, used) have been given to particular cases:—amphicyrtic (Gr. ἀμφί, on both sides, κυρτόσ, convex) or cissoidal (Gr. κισσόσ, ivy), biconvex; xystroidal or sistroidal (Gr. ξυστρίσ, a tool for scraping), concavo-convex; amphicoelic (Gr. κοίλη, a hollow) or angulus lunularis, biconcave.
## The dot product and generalisation
In the Euclidean plane, the angle θ between two vectors u and v is related to their dot product and their lengths by the formula
$$\mathbf{u} \cdot \mathbf{v} = \cos(\theta)\,|\mathbf{u}|\,|\mathbf{v}|.$$
This allows one to define angles in any real inner product space, replacing the Euclidean dot product · by the Hilbert space inner product $\langle\cdot,\cdot\rangle$.
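A short sketch of this formula in use (mine, not from the article); numpy is assumed and the example vectors are arbitrary:

```python
import numpy as np

def angle_between(u, v):
    """Angle between two nonzero vectors, from u.v = cos(theta)|u||v|."""
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))  # clip guards against rounding error

u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])
print(np.degrees(angle_between(u, v)))   # 45.0
```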
## Angles in Riemannian geometry
In Riemannian geometry, the metric tensor is used to define the angle between two tangents. Where U and V are tangent vectors and gij are the components of the metric tensor G,
$$\cos \theta = \frac{g_{ij}U^iV^j}{\sqrt{\left| g_{ij}U^iU^j \right| \left| g_{ij}V^iV^j \right|}}.$$
## Angles in geography and astronomy
In geography we specify the location of any point on the Earth using a geographic coordinate system. This system specifies the latitude and longitude of any location, in terms of angles subtended at the centre of the Earth, using the equator and (usually) the Greenwich meridian as references.
In astronomy, we similarly specify a given point on the celestial sphere using any of several astronomical coordinate systems, where the references vary according to the particular system.
Astronomers can also measure the angular separation of two stars by imagining two lines through the centre of the Earth, each intersecting one of the stars. The angle between those lines can be measured, and is the angular separation between the two stars.
Astronomers also measure the apparent size (angular diameter) of objects. For example, the full moon has an angular measurement of approximately 0.5°, when viewed from Earth. One could say, "The Moon subtends an angle of half a degree." The small-angle formula can be used to convert such an angular measurement into a distance/size ratio.
## See also
• Complementary angles
• Supplementary angles
• Central angle
• Inscribed angle
• Solid angle for a concept of angle in three dimensions.
• Astrological aspect
• Protractor
• Clock angle problem
• Great circle distance
## External links
• Angle Bisectors in a Quadrilateral at cut-the-knot
• Constructing a triangle from its angle bisectors at cut-the-knot
• Convert angles in sexagesimal degree format to decimal degrees, and vice-versa
• Angle Estimation -- for basic astronomy.
• Angle definition pages with interactive applets.
• Various angle constructions with compass and straightedge Animated demonstrations
http://eddiema.ca/category/brain/
# Ed's Big Plans
## Partial Derivatives for Residuals of the Gaussian Function
without comments
I needed to get the partial derivatives for the residuals of the Gaussian Function this week. This is needed for a curve fit I'll use later. I completely forgot about Maxima, which can do this automatically, so I did it by hand (Maxima is like Maple, but it's free). I've included my work in this post for future reference. If you want a quick refresh on calculus or a step-by-step for this particular function, enjoy. The math below is rendered with MathJax.
The Gaussian Function is given by …
$$f(x) = ae^{-\frac{(x-b)^2}{2c^2}}$$
• a, b, c are the curve parameters with respect to which we differentiate the residual function
• e is Euler’s number
Given a set of coordinates I’d like to fit (xi, yi), i ∈ [1, m], the residuals are given by …
$$r_i = y_i - ae^{-\frac{(x_i-b)^2}{2c^2}}$$
We want to get …
$$\frac{\partial{r}}{\partial{a}}, \frac{\partial{r}}{\partial{b}}, \frac{\partial{r}}{\partial{c}}$$
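The worked steps themselves sit behind the cut, so as a supplement here is a small SymPy sketch (mine, in the spirit of the Maxima remark above) that produces the three partials symbolically from the residual exactly as defined above:

```python
import sympy as sp

x, y, a, b, c = sp.symbols('x y a b c', real=True)
r = y - a * sp.exp(-(x - b)**2 / (2 * c**2))   # residual, as defined above

# Differentiate the residual with respect to each curve parameter.
for p in (a, b, c):
    print(f"dr/d{p} =", sp.simplify(sp.diff(r, p)))
```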
Read the rest of this entry »
Written by Eddie Ma
October 10th, 2011 at 11:40 am
Posted in Brain,Featured
Tagged with Curve Fitting, Gaussian Function, linkedin, Partial Derivatives, Residuals
## Searching for a Continuous Bit Parity function
with 2 comments
Update: The function being sought is better described as “continuous bit-parity” rather than “Fuzzy XOR”, the title of the post has been changed from “Fuzzy Exclusive OR (XOR)” to reflect that.
About two weeks ago, I was working on a project wherein I needed to define a continuous XOR function. The only stipulations are that (1) the function must either be binary and commutative, or it must be variadic; and (2) the function must be continuous.
In my first attempt, I used the classic arrangement of four binary NAND gates to make an XOR where each NAND gate was replaced with the expression { λ: p, q → 1.0 - pq }. The algebraic product T-norm { λ: p, q → pq } is used instead of the standard fuzzy T-norm { λ: p, q → min(p, q) } in order to keep it continuous. Unfortunately, this attempt does not preserve commutativity, so the search continued.
At this point, Dr. Kremer suggested I consider a shifted sine curve. I eventually chose the equation
{ λ: p[1..n] → 0.5 - 0.5cos(π Σi=1npi) }.
This is shown graphically in the below figure …
```# gnuplot source ...
set xrange[-2*pi:2*pi]
set output "a.eps"
set terminal postscript eps size 2.0, 1.5
plot 0.5 - 0.5 * cos(pi * x)
```
This can be considered a variadic function because it takes the sum of all fuzzy bits pi in a given string and treats the arguments the same no matter the number of bits n.
Whenever the sum of all bits is equal to an even number, the function returns a zero — whenever the sum is an odd number, the function returns a one. This function offers a continuous (although potentially meaningless) value between integer values of the domain and can handle bitstrings of any length.
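A minimal sketch (mine, not from the original post) of the shifted-cosine parity function just described, checking that it agrees with ordinary bit parity at crisp 0/1 inputs and interpolates in between:

```python
import math

def soft_parity(bits):
    """0.5 - 0.5*cos(pi * sum(bits)): a continuous, variadic bit-parity function."""
    return 0.5 - 0.5 * math.cos(math.pi * sum(bits))

print(soft_parity([1, 0, 0]))     # ~1.0 : odd number of set bits
print(soft_parity([1, 1, 0, 0]))  # ~0.0 : even number of set bits
print(soft_parity([0.5, 0.25]))   # ~0.85: some in-between value for fuzzy bits
```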
If you're aware of a purely binary Fuzzy XOR (instead of variadic) that is a legal extension of classic XOR, continuous, and commutative, please let me know for future reference.
Andre Masella says...
Ugh. You can’t do that because it won’t be an XOR any more and it won’t hold for the definitions of triangular norms. XOR is defined to be x·¬y+y·¬x. So, in fuzzy logic, it should be (xT(Cy))S(yT(Cx)) where T is the triangular norm and S is the matching conorm. If you’re using the Gödel T-norm (min), then that’s max(min(x, 1-y), min(y, 1-x)).
If your concern is commutativity, then well, it depends entirely on the norms you choose. In general, it doesn't hold because there is no requirement that the norms work that way. This is obvious in the case of the Łukasiewicz t-norm. T-norms are not required to be distributive, so you can't generalise an XOR. In the case of the Gödel t-norm, it just so happens to be distributive, so you can make such an XOR. For three variables: x⊗y⊗z = x·¬(y⊗z)+¬x·(y⊗z) = x·¬(y·¬z+¬y·z)+¬x·(y·¬z+¬y·z) = x·(¬y+z)·(y+¬z)+¬x·(¬y+z)·(y+¬z) = (x·¬y+x·z)·(y+¬z)+(¬x·¬y+¬x·z)·(y+¬z) = x·¬y·y+x·z·y+¬z·x·¬y+¬z·x·z+¬x·¬y·y+¬x·z·y+¬x·¬y·¬z
Which is much uglier than the regular Boolean version because x·¬x is normally 0, but min(x, 1-x) is not necessarily 0.
Eddie Ma says...
Good idea — so in reality, if I just find a T-norm and S-norm that satisfies { (pT(Cq))S(qT(Cp)) } for the XOR table and all of those properties I need, then I'm set.
Alternatively, I may have misidentified the problem afterall — maybe I got hung up on Fuzzy Logic, and forgot the overarching goal — continuous bit parity for any number of values in [0, 1] — not necessarily fuzzy bits. Maybe the transformed sine curve is as good as I really needed for my purposes.
It’s been a while since I’ve thought about the project that this problem belonged to … hmm … I’ll leave this thought to run in the background since I don’t need it right away. I’ll let you know if I bump into a relevant function
Written by Eddie Ma
June 1st, 2011 at 9:42 pm
Posted in Brain
Tagged with 4-NAND gates, Bit parity, Bitstring, Cosine, Fuzzy XOR, linkedin
## C & Math: Sieve of Eratosthenes with Wheel Factorization
without comments
In the first assignment of Computer Security, we were to implement The Sieve of Eratosthenes. The instructor gives a student the failing grade of 6/13 for a naive implementation, and as we increase the efficiency of the sieve, we get more marks. There are the three standard optimizations: (1) for the current prime being considered, start the indexer at the square of the current prime; (2) consider only even numbers; (3) end crossing out numbers at the square root of the last value of the sieve.
Since the assignment has been handed in, I’ve decided to post my solution here as I haven’t seen C code on the web which implements wheel factorization.
We can think of wheel factorization as an extension to skipping all even numbers. Since we know that all even numbers are multiples of two, we can just skip them all and save half the work. By the same token, if we know a pattern of repeating multiples corresponding to the first few primes, then we can skip all of those guaranteed multiples and save some work.
The wheel I implement skips all multiples of 2, 3 and 5. In Melissa O’Neill’s The Genuine Sieve of Erastothenes, an implementation of the sieve with a priority queue optimization is shown in Haskell while wheel factorization with the primes 2, 3, 5 and 7 is discussed. The implementation of that wheel (and other wheels) is left as an exercise for her readers
But first, let’s take a look at the savings of implementing this wheel. Consider the block of numbers in modulo thirty below corresponding to the wheel for primes 2, 3 and 5 …
|        |        |    |        |    |    |    |        |    |        |
|--------|--------|----|--------|----|----|----|--------|----|--------|
| 0      | **1**  | 2  | 3      | 4  | 5  | 6  | **7**  | 8  | 9      |
| 10     | **11** | 12 | **13** | 14 | 15 | 16 | **17** | 18 | **19** |
| 20     | 21     | 22 | **23** | 24 | 25 | 26 | 27     | 28 | **29** |
Only the highlighted numbers need to be checked to be crossed out during sieving since the remaining values are guaranteed to be multiples of 2, 3 or 5. This pattern repeats every thirty numbers which is why I say that it is in modulo thirty. We hence skip 22/30 of all cells by using the wheel of thirty — a savings of 73%. If we implemented the wheel O’Neill mentioned, we would skip 77% of cells using a wheel of 210 (for primes 2, 3, 5 and 7).
(Note that the highlighted numbers in the above block also correspond to the multiplicative identity one and numbers which are coprime to 30.)
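A small script (not part of the original post; the function name is mine) that recomputes the quoted savings by counting the residues coprime to the wheel modulus:

```python
from math import gcd

def wheel_savings(primes):
    """Fraction of candidate cells skipped by a wheel built from the given primes."""
    m = 1
    for p in primes:
        m *= p
    kept = sum(1 for r in range(m) if gcd(r, m) == 1)   # residues coprime to m
    return m, 1 - kept / m

print(wheel_savings([2, 3, 5]))      # (30, 0.733...)  -> ~73% of cells skipped
print(wheel_savings([2, 3, 5, 7]))   # (210, 0.771...) -> ~77% of cells skipped
```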
Below is the final code that I used.
```#include <stdlib.h>
#include <stdio.h>
#include <math.h>
const unsigned int SIEVE = 15319000;
const unsigned int PRIME = 990000;
int main(void) {
unsigned char* sieve = calloc(SIEVE + 30, 1); // +30 gives us incr padding
unsigned int thisprime = 7;
unsigned int iprime = 4;
unsigned int sieveroot = (int)sqrt(SIEVE) +1;
// Update: don't need to zero the sieve - using calloc() not malloc()
sieve[7] = 1;
for(; iprime < PRIME; iprime ++) {
// ENHANCEMENT 3: only cross off until square root of |seive|.
if(thisprime < sieveroot) {
// ENHANCEMENT 1: Increment by 30 -- 4/15 the work.
// ENHANCEMENT 2: start crossing off at prime * prime.
int i = (thisprime * thisprime);
switch (i % 30) { // new squared prime -- get equivalence class.
case 1:
if(!sieve[i] && !(i % thisprime)) {sieve[i] = 1;}
i += 6;
case 7:
if(!sieve[i] && !(i % thisprime)) {sieve[i] = 1;}
i += 4;
case 11:
if(!sieve[i] && !(i % thisprime)) {sieve[i] = 1;}
i += 2;
case 13:
if(!sieve[i] && !(i % thisprime)) {sieve[i] = 1;}
i += 4;
case 17:
if(!sieve[i] && !(i % thisprime)) {sieve[i] = 1;}
i += 2;
case 19:
if(!sieve[i] && !(i % thisprime)) {sieve[i] = 1;}
i += 4;
case 23:
if(!sieve[i] && !(i % thisprime)) {sieve[i] = 1;}
i += 6;
case 29:
if(!sieve[i] && !(i % thisprime)) {sieve[i] = 1;}
i += 1; // 29 + 1 (mod 30) = 0 -- just in step
}
for(; i < SIEVE; i += 30) {
if(!sieve[i+1] && !((i+1) % thisprime)) sieve[i+1] = 1;
if(!sieve[i+7] && !((i+7) % thisprime)) sieve[i+7] = 1;
if(!sieve[i+11] && !((i+11) % thisprime)) sieve[i+11] = 1;
if(!sieve[i+13] && !((i+13) % thisprime)) sieve[i+13] = 1;
if(!sieve[i+17] && !((i+17) % thisprime)) sieve[i+17] = 1;
if(!sieve[i+19] && !((i+19) % thisprime)) sieve[i+19] = 1;
if(!sieve[i+23] && !((i+23) % thisprime)) sieve[i+23] = 1;
if(!sieve[i+29] && !((i+29) % thisprime)) sieve[i+29] = 1;
}
}
{
int i = thisprime;
switch (i % 30) { // write down the next prime in 'thisprime'.
case 1:
if(!sieve[i]) {thisprime = i; sieve[i] = 1; goto done;}
i += 6;
case 7:
if(!sieve[i]) {thisprime = i; sieve[i] = 1; goto done;}
i += 4;
case 11:
if(!sieve[i]) {thisprime = i; sieve[i] = 1; goto done;}
i += 2;
case 13:
if(!sieve[i]) {thisprime = i; sieve[i] = 1; goto done;}
i += 4;
case 17:
if(!sieve[i]) {thisprime = i; sieve[i] = 1; goto done;}
i += 2;
case 19:
if(!sieve[i]) {thisprime = i; sieve[i] = 1; goto done;}
i += 4;
case 23:
if(!sieve[i]) {thisprime = i; sieve[i] = 1; goto done;}
i += 6;
case 29:
if(!sieve[i]) {thisprime = i; sieve[i] = 1; goto done;}
i += 1;
}
for(; i < SIEVE; i += 30) {
if(!sieve[i+1]) {thisprime = i+1; sieve[i+1] = 1; goto done;}
if(!sieve[i+7]) {thisprime = i+7; sieve[i+7] = 1; goto done;}
if(!sieve[i+11]) {thisprime = i+11; sieve[i+11] = 1; goto done;}
if(!sieve[i+13]) {thisprime = i+13; sieve[i+13] = 1; goto done;}
if(!sieve[i+17]) {thisprime = i+17; sieve[i+17] = 1; goto done;}
if(!sieve[i+19]) {thisprime = i+19; sieve[i+19] = 1; goto done;}
if(!sieve[i+23]) {thisprime = i+23; sieve[i+23] = 1; goto done;}
if(!sieve[i+29]) {thisprime = i+29; sieve[i+29] = 1; goto done;}
}
done:;
}
}
printf("%d\n", thisprime);
free(sieve);
return 0;
}
```
Notice that there is a switch construct — this is necessary because we aren't guaranteed that the first value to sieve for a new prime (or squared prime) is going to be an even multiple of thirty. Consider sieving seven — the very first prime to consider. We start by considering 7² = 49. Notice 49 (mod 30) is congruent to 19. The switch statement incrementally moves the cursor from the 19th equivalence class to the 23rd, to the 29th before pushing it one integer more to 30 — 30 (mod 30) is zero — and so we are able to continue incrementing by thirty from then on in the loop.
The code listed is rigged to find the 990 000th prime as per the assignment and uses a sieve of predetermined size. Note that if you want to use my sieve code above to find whichever prime you like, you must also change the size of the sieve. If you take a look at How many primes are there? written by Chris K. Caldwell, you’ll notice a few equations that allow you to highball the nth prime of your choosing, thereby letting you calculate that prime with the overshot sieve size.
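The specific formulas from that page aren't reproduced here, but one standard upper bound of that kind is p_n < n(ln n + ln ln n) for n >= 6 (a Rosser-type bound); a quick sketch (mine) applying it to the assignment's target:

```python
import math

def nth_prime_upper_bound(n):
    """Upper bound p_n < n*(ln n + ln ln n), valid for n >= 6."""
    return n * (math.log(n) + math.log(math.log(n)))

print(nth_prime_upper_bound(990_000))
# ~1.63e7: any sieve at least this large is guaranteed to contain the
# 990 000th prime, so it safely overshoots the tighter value used above.
```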
Note also that this sieve is not the most efficient. A classmate of mine implemented The Sieve of Atkin which is magnitudes faster than this implementation.
Written by Eddie Ma
February 3rd, 2011 at 11:08 pm
Posted in Brain,Featured
Tagged with C, linkedin, Sieve of Eratosthenes, Wheel Factorization, Wheel235
http://mathhelpforum.com/discrete-math/81497-gauss-jordan-help.html
# Thread:
1. ## gauss jordan help
Ok here I am working on this problem for like 3 hours to no avail.
x1 + x2 = 1
-x1 + x2 + x3 = -1
-1x2 + x3 = 3
I am trying to solve this in matrix form and get the point of multiplying a coefficient, but applying it to this set is baffling my mind. Any insights?
is the matrix
1 1 0
-1 1 0
0 -1 1
2. Originally Posted by wonderstrike
Ok here I am working on this problem for like 3 hours to no avail.
x1 + x2 = 1
-x1 + x2 + x3 = -1
-1x2 + x3 = 3
I am trying to solve this in matrix form and get the point of multiplying a coefficient but applying it to this set is baffling my mind. Any insights?
is the matrix
1 1 0
-1 1 0
0 -1 1
You should use an "augmented" matrix, with an extra column consisting of the coefficients on the right-hand side of the equations. So the matrix is $\begin{bmatrix}1&1&0&1\\ -1&1&1&-1\\ 0&-1&1&3\end{bmatrix}$. Now apply the Gauss–Jordan process to the matrix, and then you should be able to read off the solution.
$\begin{bmatrix}1&1&0&1\\ 0&1&.5&0\\ 0&0&1&2\end{bmatrix}$
$\begin{bmatrix}1&0&0&2\\ 0&1&0&-1\\ 0&0&1&2\end{bmatrix}$
Have I broken this down right? So the answer would be the last column: 2, -1, 2?
4. Originally Posted by wonderstrike
$\begin{bmatrix}1&1&0&1\\ 0&1&.5&0\\ 0&0&1&2\end{bmatrix}$
$\begin{bmatrix}1&0&0&2\\ 0&1&0&-1\\ 0&0&1&2\end{bmatrix}$
Have I broken this down right? So the answer would be the last column: 2, -1, 2?
That is correct (as you could check for yourself by substituting 2, –1, 2 for $x_1,x_2,x_3$ in the original equations).
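As a quick check outside the thread (mine, not part of the original replies), the coefficient matrix and right-hand side can be handed to numpy's standard solver:

```python
import numpy as np

# Coefficient matrix and right-hand side of the original system.
A = np.array([[ 1,  1, 0],
              [-1,  1, 1],
              [ 0, -1, 1]], dtype=float)
b = np.array([1, -1, 3], dtype=float)

print(np.linalg.solve(A, b))   # [ 2. -1.  2.]
```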
http://math.stackexchange.com/questions/tagged/metric-spaces+convergence
# Tagged Questions
2answers
88 views
### Continuous functions uniformly convergent to a function, metric spaces, equivalent conditions
Let $X, \ (Y, d)$ be metric spaces, $f_1, f_2, \ldots \ : X \rightarrow Y$ be continuous functions, $f: X \rightarrow Y$ an arbitrary function. Prove that the following conditions are equivalent: 1) ...
2answers
49 views
### Find a convergent function in metric space
Let $C[−1, 1]$ be the space of continuous functions equipped with the metric $p(f,g) = \max\{|f(x)−g(x)| \mid x \in [−1, 1]\}$. Then the sequence of functions $(f_n):[−1,1]\rightarrow \mathbb{R}$ ...
0answers
45 views
### convergence in metric space
Let $C[-1, 1]$ be the space of continuous functions equipped with the metric $(f, g) = \displaystyle\max_{x \in [-1, 1]} |f(x)-g(x)|$. Consider the sequence $(f_n)$ of functions \$f_n : [-1, 1] \to ...
2answers
22 views
### Equality of limits with respect to different metrics.
Suppose that $X$ is a set equipped with two metrics, say $d_1$ and $d_2$. Let $\{x_n\}_{n\in\mathbb{N}}\subset X$ be a sequence of points which converges to $x\in X$ with respect to metric $d_1$. ...
2answers
38 views
### If $x\in X$ and sequence $(x_n) \in X^{\Bbb N}$ converges in $(X,d)$ , then so does every subsequence of $(x_n)$.
A subsequence of a sequence $(x_n)_{n\ge 1}$ is a sequence $(x_{n_1}, x_{n_2},x_{n_3},\ldots)$ where $n_1,n_2,n_3,... \in \Bbb N$ with $n_1\lt n_2\lt n_3\lt ...$ Let $(X,d)$ be a metric space and let ...
1answer
26 views
### Is a 'normally' convergent sequence still convergent in a metric space which barely excludes its 'normal' limit?
For example, suppose $$x_n = \frac 1n \\ X = (0, 1)$$ Is $x_n$ convergent in $X$? My guess would be no, since there exists no $x \in X$ which $x_n$ approaches; $x_n$ will eventually surpass any ...
1answer
30 views
### Correctness of Converging sequence and Adherent Points
$x\in X$ is an adherent point of $A\subset X$ if for every $\epsilon>0$ there exists $y\in A$ s.t. $y\in B(x, \epsilon)$ $B(x, \epsilon)$ is the open ball centered at $x$ with radius $\epsilon$ ...
2answers
92 views
### Kernel of $p$-adic logarithm.
I'm completely clueless as to how to answer the following question: Let $K$ be a field of characteristic zero which is complete with respect to a non-Archimedean absolute value $|\cdot|$. Let ...
1answer
61 views
### Subspace $Y$ of metric space with finitely many points is complete.
Show that if a subspace $Y$ of a metric space consists of finitely many points, then $Y$ is complete. This is what I have so far, but I don't know where to go from here: Suppose the subspace ...
1answer
139 views
### Show $\mathbb R^n$ is complete.
Show $\mathbb R^n$ is complete. At this point, I am trying to work through the problem in my textbook, there is one step that I do not understand and would like explained. Here's my proof so far: ...
1answer
60 views
### Showing convergence in Space of Squared Summable Sequences
My Problem: Show that the sequence ${x_n}_{n\geq 1}$, where $x_n=(1,\frac{1}{2},\ldots,\frac{1}{n},0,0,\ldots)$ converges to $x=(1,\frac{1}{2},\frac{1}{3},\ldots,\frac{1}{n},\ldots)$ in $l_2$ My ...
2answers
65 views
### When and how does this sequence converge?
I cannot prove this statement, I tried to prove by using the definition of open sets however i feel that it is necessary prove it in two directions since it's an iff statement. The question is, Let ...
1answer
120 views
### Why is $L^3$ weaker than $L^2$?
Someone told me today that if I can show $\Vert A_n-B_n\Vert_3\to 0$ as $n\to \infty$, then claiming $A=B$ as $n\to \infty$ (where $A$ and $B$ are the respective limits of $A_n$ and $B_n$) is a weaker ...
0answers
65 views
### Convergence in Skorokhod metric and uniform metric
Is there a relationship between convergence in the Skorokhod space and convergence in the uniform metric. I.e. does weak convergence in the Skorokhod space imply convergence in the uniform metric?
3answers
435 views
### In what spaces does the Bolzano-Weierstrass theorem hold?
The Bolzano-Weierstrass theorem says that every bounded sequence in $\Bbb R^n$ contains a convergent subsequence. The proof in Wikipedia evidently doesn't go through for an infinite-dimensional space, ...
0answers
99 views
### Convergence of a function in a metric space to its metric
Given a metric space $(\mathbb{A},d)$ with a metric $d$ being the Euclidean metric, if $\lim_{t \rightarrow \infty}||A_{t+1}-A_t||\rightarrow 0$ is a convergent sequence where $A$ is a matrix with the ...
1answer
161 views
### Cauchy nets in a metric space
Say that a net $a_i$ in a metric space is cauchy if for every $\epsilon > 0$ there exists $I$ such that for all $i, j \geq I$ one has $d(a_i,a_j) \leq \epsilon$. If the metric space is complete, ...
1answer
90 views
### Uniform convergence of functions, Spring 2002
The question I have in mind is (see here, page 60, the solution is at page 297): Assume $f_{n}$ is a sequence of functions from a metric space $X$ to $Y$. Suppose $f_{n}\rightarrow f$ uniformly and ...
2answers
102 views
### Is Completeness intrinsic to a space?
Is completeness an intrinsic property of a space that is independent of metric? For example, since $\mathbb{R}^n$ is complete with the Euclidean metric, is it complete with any other metric? If ...
2answers
64 views
### Is the set $E$ of sequences containing only entries $0$ and $1$ in $(m,\left \| \cdot \right \|_\infty)$ complete?
I can't really wrap my head around $E$, or a Cauchy sequence in $E$. I need to take a Cauchy sequence in $E$ and show it's Cauchy in $(m,\left \| \cdot \right \|_\infty)$? I think I can show \$(m,\left ...
3answers
110 views
### continuous map of metric spaces and compactness
Let $f:X\rightarrow Y$ be a continuous map of metric spaces. Show that if $A\subseteq X$ is compact, then $f(A)\subseteq Y$ is compact. I am using this theorem: If $A\subseteq X$ is sequentially ...
3answers
275 views
### Why doesn't $d(x_n,x_{n+1})\rightarrow 0$ as $n\rightarrow\infty$ imply ${x_n}$ is Cauchy?
What is an example of a sequence in $\mathbb R$ with this property that is not Cauchy?
1answer
134 views
### A question on norm of error vector
Let $(s_n)_{n \in \mathbb{N}}\in\ell^2(\mathbb{N})$ (i.e. $\displaystyle \sum_{n=0}^{\infty}\vert s_n\vert^2<\infty$). Define vectors $A=[A_1,\ldots,A_M]$ and $B=[B_1,\ldots,B_M]$ with coordinates ...
2answers
138 views
### How to prove that the sequence of $x_n = (1,\frac{1}{2}, \frac{1}{3}, … \frac{1}{n}, 0, 0…)$ does not converge under $\|\cdot\|_1$?
I'm reviewing past assignments and am still having trouble formulating a proof for this: Consider the sequence $(x_n)$, where $x_n = (1,\frac{1}{2}, \frac{1}{3}, \ldots, \frac{1}{n}, 0, 0, \ldots)$. ...
3answers
1k views
### Examples of function sequences in C[0,1] that are Cauchy but not convergent
To better train my intuition, what are some illustrative examples of function sequences in C[0,1] that are Cauchy but do not converge under the integral norm?
3answers
452 views
### How to prove that convergence is equivalent to pointwise convergence in $C[0,1]$ with the integral norm?
I'm trying to prove (or disprove) that in the set $C[0,1]$ of continuous (bounded) functions on the real interval [0,1] with the integral norm $\|f(x)\|_1 = \int_0^1|f(x)|dx$ that a sequence of ...
1answer
101 views
### If no Cauchy subsequence exists, must a uniformly separated subsequence exist?
Given a sequence $(x_n)$ in a metric space $M$, call it uniformly separated if all pairwise distances $d(x_n,x_m)$ between distinct terms are uniformly bounded away from zero. Suppose that a given ...
http://cms.math.ca/Reunions/ete11/abs/cmt
CMS Summer Meeting 2011
University of Alberta, Edmonton, June 3 - 5, 2011 www.smc.math.ca//Reunions/ete11
Combinatorial Matrix Theory
Org: Shaun Fallat (Regina) and Kevin N. Vander Meulen (Redeemer College)
WAYNE BARRETT, Brigham Young University
The Combinatorial Inverse Eigenvalue Problem [PDF]
Let $G=(V,E)$ be an undirected graph on $n$ vertices, and let $S(G)$ be the set of all real symmetric $n \times n$ matrices whose nonzero off-diagonal entries occur in exactly the positions corresponding to the edges of $G$, i.e., for $i \ne j$, $a_{ij}\ne 0 \iff \{i,j\} \in E$. The combinatorial inverse eigenvalue problem asks:
Given a graph $G$ on $n$ vertices and real numbers $\lambda_1,\lambda_2,\ldots,\lambda_n$, is there a matrix in $S(G)$ with eigenvalues equal to $\lambda_1,\lambda_2,\ldots,\lambda_n$?
Previous results focus on solving the problem for trees. Another fairly large class of graphs for which it is possible to obtain general results is the class of minimum rank 2 graphs; for these all possible pairs of nonzero eigenvalues which are attainable for a rank 2 matrix in $S(G)$ are characterized. Time permitting, we will discuss properties of minimum rank matrices and the associated inverse inertia problem.
MICHAEL CAVERS, University of Calgary
Allow Problems Concerning Spectral Properties of Patterns [PDF]
Let $S\subseteq\{0,+,-,+_0,-_0,*,\#\}$ be a set of symbols, where $+$ (resp. $-$, $+_0$ and $-_0$) denotes a positive (resp. negative, nonnegative and nonpositive) real number, and $*$ (resp. $\#$) denotes a nonzero (resp. ambiguous) real number. An $S$-pattern is a matrix with entries in $S$. In particular, a $\{0,+,-\}$-pattern is a sign pattern and $\{0,*\}$-pattern is a zero-nonzero pattern. In this talk, we will discuss various allow problems concerning spectral properties of $S$-patterns.
LOUIS DEAETT, University of Victoria
The principal rank characteristic sequence of a real symmetric matrix [PDF]
Given an $n\times n$ real symmetric matrix $A$ we associate to $A$ a sequence $r_0r_1\cdots r_n \in \{0,1\}^{n+1}$ defined by \[r_k=\begin{cases} 1 & \mbox{ if $A$ has a principal submatrix of rank $k$, and}\\ 0 & \mbox{ otherwise,} \end{cases}\] or, equivalently, \begin{equation}\label{alt_char} r_k=\begin{cases} 1 & \mbox{ if $A$ has a nonzero principal minor of order $k$, and }\\ 0 & \mbox{ otherwise} \end{cases} \end{equation} for $1\le k \le n$, with $r_0=1$ if and only if $A$ has a zero entry on its main diagonal. Denote this sequence by $\text{pr}(A)$.
Now, given an arbitrary sequence of $0$s and $1$s, is it $\text{pr}(A)$ for any real symmetric matrix $A$? If so, call the sequence $attainable$. The problem, then, is to characterize the attainable sequences.
We will discuss how this problem relates to graph eigenvalues and to both some quite old and some quite recent results concerning algebraic relationships between the principal minors of a symmetric matrix.
Joint work with Richard Brualdi, Dale Olesky and Pauline van den Driessche.
RANDY ELZINGA, Royal Military College of Canada
Graphs with Rational Normalized Adjacency Eigenvalues [PDF]
If $A$ is the adjacency matrix of a graph $G$ and $D$ is the diagonal matrix of vertex degrees, then the {\em Laplacian matrix} of $G$ is $L=D-A$. Longstanding problems include determining which graphs have only integral adjacency eigenvalues and which have only integral Laplacian eigenvalues. The {\em normalized adjacency matrix} of $G$ is $N=D^{-1/2}AD^{-1/2}$. I will show that the graphs with only integral normalized adjacency eigenvalues are graphs whose components are complete bipartite graphs, show that the analogous problem is to determine which graphs have only rational normalized adjacency eigenvalues, and present some results on trees with only normalized rational eigenvalues.
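As an illustration of the stated result (my own sketch, not part of the abstract), the normalized adjacency eigenvalues of the complete bipartite graph K_{2,3} come out integral:

```python
import numpy as np

# Complete bipartite graph K_{2,3}: parts {0, 1} and {2, 3, 4}.
A = np.zeros((5, 5))
A[np.ix_([0, 1], [2, 3, 4])] = 1
A = A + A.T

d = A.sum(axis=1)                          # vertex degrees
N = A / np.sqrt(np.outer(d, d))            # N = D^{-1/2} A D^{-1/2}
print(np.round(np.linalg.eigvalsh(N), 6))  # [-1.  0.  0.  0.  1.]
```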
SHAUN FALLAT, University of Regina
On Two Colin de Verdiere Parameters of Chordal Graphs [PDF]
Two important graph parameters, developed by Colin de Verdiere, are connected with the maximum nullity of certain real symmetric matrices associated with a given graph. In this talk, these parameters, called $\mu$ and $\nu$, are calculated for chordal graphs.
YI ZHENG FAN, University of Regina
Quadratic Forms on Graphs [PDF]
The eigenvalues of a graph are defined as the eigenvalues of a certain matrix associated with that graph. Maximizing or minimizing an extreme eigenvalue in some class of graphs is a topic in spectral graph theory, from which we can understand the structure of graphs. The quadratic forms on graphs are combinatorial viewpoint or method on this topic, as it contains information on the graph structure and it has more meaning than the quadratic form of matrices. In this talk I will introduce the quadratic forms on graphs and illustrate it with some examples.
CHRIS GODSIL, University of Waterloo
Graph Spectra and Quantum Computing [PDF]
If $A$ is the adjacency matrix of a graph $X$, we define a transition matrix $H_X(t)$ by $$H_X(t) := \exp(itA).$$ This is a symmetric unitary matrix, underlying a so-called continuous quantum walk. Work in quantum computing leads to a number of questions which can be attacked using ideas from the theory of graph spectra. I will present examples, along with a number of open questions.
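As a small illustration (mine, not part of the abstract), the transition matrix can be computed from the spectral decomposition of $A$; for the single-edge path graph the walk moves all amplitude across the edge at $t = \pi/2$, the textbook perfect-state-transfer example:

```python
import numpy as np

# Adjacency matrix of the path on two vertices (a single edge).
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def transition_matrix(A, t):
    """H_X(t) = exp(i t A), built from the spectral decomposition of symmetric A."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(1j * t * w)) @ V.conj().T

U = transition_matrix(A, np.pi / 2)
print(np.round(np.abs(U), 6))
# [[0. 1.]
#  [1. 0.]]  : all amplitude crosses the edge at t = pi/2, and U is unitary.
```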
IN-JAE KIM, Minnesota State University
Unordered multiplicity lists of $\Phi$-binary trees [PDF]
The unordered multiplicity lists of eigenvalues of a graph were introduced in the study of Inverse Eigenvalue Problem for graphs (IEP-G). In this talk we study the unordered multiplicity lists of $\Phi$-binary trees, and discuss some other results related to the lists.
ZHONGSHAN LI, Georgia State University
Irreducible 4 by 4 sign patterns that require 4 distinct eigenvalues [PDF]
A sign pattern (matrix) is a matrix whose entries are from the set $\{+, -, 0\}$. Some necessary or sufficient conditions for a square sign pattern to require all distinct eigenvalues are presented. In particular, it is known that such sign patterns require a fixed number of real eigenvalues. The $3 \times 3$ irreducible sign patterns that require 3 distinct eigenvalues have been identified previously. The $4 \times 4$ irreducible sign patterns that require four distinct real eigenvalues and those that require four distinct nonreal eigenvalues are characterized. The $4 \times 4$ irreducible sign patterns that require two distinct real eigenvalues and two distinct nonreal eigenvalues are investigated.
JUDI MCDONALD, Washington State University
Spectrally Arbitrary Matrix Patterns that Depend on Field Structure [PDF]
An nxn pattern P of zeros and stars (nonzeros) is said to be spectrally arbitrary over a field F provided any n-th degree monic polynomial in F[x] can be realized as the characteristic polynomial of a matrix formed from replacing the stars in P by nonzero elements from F. A pattern may be spectrally arbitrary over some fields, but not others. In this talk we will look at some specific patterns for which the algebraic properties of a given field play a critical role in whether or not the pattern is spectrally arbitrary for that field.
KAREN MEAGHER, University of Regina
The Erdős–Ko–Rado Theorem: an algebraic perspective [PDF]
The Erdős–Ko–Rado (EKR) Theorem is a major result in extremal set theory. It gives the exact size and structure of the largest system of sets that has the property that any two sets in the system have non-trivial intersection. There are many extensions of this theorem to combinatorial objects other than set systems, such as vector subspaces over a finite field, integer sequences, partitions, and recently, there have been several results that extend the EKR theorem to permutations.
I will describe an algebraic method that can be used to prove the EKR theorem for several of these combinatorial objects. Using the eigenvalues of the adjacency matrix of an appropriately defined graph we can often bound the size of the largest intersecting set of objects. Further, by considering the structure of the eigenspace we can also determine the structure of these sets. I will present several examples where this works and show some open problems.
VLADIMIR NIKIFOROV, University of Memphis
Extremal norms of graphs and matrices [PDF]
The energy of a graph, a parameter introduced by Gutman and much studied recently, turns out to be just the nuclear norm of the adjacency matrix. Similar matrix norms seem to be interesting as well. Thus, this talk presents some extremal results about the Schatten and Ky Fan norms of the adjacency matrices of graphs and of matrices in general.
DALE OLESKY, University of Victoria
Sign Patterns with a Nest of Positive Principal Minors [PDF]
A matrix $A\in\ M_n\,(\mathbb{R})$ has a nest of positive principal minors if $PAP^T$ has positive leading principal minors for some permutation matrix $P$. A sign pattern is a matrix with entries $\in\, \{+,\; -,\; 0\}$. A sign pattern ${\cal A}$ requires a nest of positive principal minors if every real matrix $B$ with that sign pattern has a nest of positive principal minors, and ${\cal A}$ allows a nest of positive principal minors if there exists such a matrix $B$ that has a nest of positive principal minors. Motivated by the fact that a matrix $A$ with a nest of positive principal minors can be positively scaled so all its eigenvalues lie in the open right-half-plane, conditions are investigated so that a square sign pattern either requires or allows a nest of positive principal minors. This is joint work with Michael Tsatsomeros and Pauline van den Driessche.
PAULINE VAN DEN DRIESSCHE, University of Victoria
Refined Inertia of Pattern Matrices [PDF]
The refined inertia of a real matrix $A$ of order $n$ is an ordered quadruple $(n_+, n_-, n_z, 2n_p)$ of nonnegative integers that sum to $n$, where $n_+$ and $n_-$ are the numbers of eigenvalues of $A$ with positive and negative real part, respectively, $n_z$ is the number of zero eigenvalues, and $2n_p$ is the number of nonzero pure imaginary eigenvalues. This concept has application in detecting the possibility of Hopf bifurcation in dynamical systems. Some results on refined inertias of zero-nonzero pattern matrices (matrices with entries $0$ or $*$) and of sign pattern matrices (matrices with entries $+, -$ or $0$) are given and open problems are stated.
KEVIN VANDER MEULEN, Redeemer University College
Index of Nilpotent Matrices and the Nilpotent-Jacobian Method [PDF]
A nonzero pattern is a matrix with entries in $\{0, \ast\}$. A pattern is potentially nilpotent if there is some nilpotent real matrix with nonzero entries in precisely the entries indicated by the pattern. We construct some potentially nilpotent balanced tree patterns, and explore their index. Using the Nilpotent-Jacobian method, we observe that some balanced tree patterns are spectrally arbitrary. Inspired by an argument of Pereira, we uncover a feature of the Nilpotent-Jacobian method. In particular, we show that if $N$ is the nilpotent matrix employed by this method to show that a pattern is a spectrally arbitrary pattern, then $N$ must have full index. [Joint work with Hannah Bergsma and Adam van Tuyl]
http://mathhelpforum.com/discrete-math/136052-fibonacci-sequence.html
# Thread:
1. ## Fibonacci sequence
I have from a previous question these answers:
$F_{n+3} = F_{n+2} + F_{n+1}$ and $F_n = F_{n+2} - F_{n+1}$
I have to use these answers to show that:
$F_{n+3} + F_n = 2F_{n+2}$ and $F_{n+3} - F_n = 2F_{n+1}$ for $n = 0, 1, 2, \ldots$
I'm not sure how to do this, although I understand that the second set of statements is true.
Any help would be really appreciated
2. Originally Posted by bigroo
I have from a previous question these answers:
$F_{n+3} = F_{n+2} + F_{n+1}$ and $F_n = F_{n+2} - F_{n+1}$
I have to use these answers to show that:
$F_{n+3} + F_n = 2F_{n+2}$ and $F_{n+3} - F_n = 2F_{n+1}$ for $n = 0, 1, 2, \ldots$
I'm not sure how to do this, although I understand that the second set of statements is true.
Any help would be really appreciated
You want to show that:
$F_{n+3}+F_n=2F_{n+2}$
You begin by using the recurrence for the Fibonacci numbers on $F_{n+3}$, then group terms. If you try that and have further difficulties, post what you have done and tell us what difficulties you are having with it.
CB
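For reference, carrying out the substitution CB describes and then grouping terms gives the following worked version of the hint, using only the two identities quoted in the question:

```latex
\begin{align*}
F_{n+3} + F_n &= (F_{n+2} + F_{n+1}) + (F_{n+2} - F_{n+1}) = 2F_{n+2},\\
F_{n+3} - F_n &= (F_{n+2} + F_{n+1}) - (F_{n+2} - F_{n+1}) = 2F_{n+1}.
\end{align*}
```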
http://physics.stackexchange.com/questions/55041/two-general-relativity-questions
# Two General Relativity questions
Hi. When contracting $T^{\mu \nu}$ with $g_{\mu \nu}$, does one get $T^{\mu \nu}_{\mu \nu} = T$?
Also, is the metric tensor already a sum over its components, so that it is effectively the trace of a matrix with the same components, e.g. $$g^{\mu\nu}=\operatorname{Tr} A$$ if $A$ is a matrix with the same components as $g^{\mu\nu}$?
-
## 1 Answer
Contraction implies a sum over indices, i.e.
$T^{\mu\nu}g_{\mu\nu}=\sum_{\mu=0}^3\sum_{\nu=0}^3T^{\mu\nu}g_{\mu\nu}=T.$
An expression like $T^{\mu\nu}_{\mu\nu}$ makes no sense, since the number of indices on $T^{\mu\nu}$ does not change: the contraction is written $T^{\mu\nu}g_{\mu\nu}$, not by attaching extra lower indices to $T$.
Furthermore, the contraction does not make a statement about the trace of either object on its own, before the sum is carried out.
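A small numerical sketch of the double sum above; the Minkowski metric and the randomly generated symmetric tensor below are stand-ins chosen purely for illustration, not anything specific to the question:

```python
import numpy as np

# Minkowski metric with signature (-, +, +, +), used purely as an example metric.
g = np.diag([-1.0, 1.0, 1.0, 1.0])

# An arbitrary symmetric tensor with two upper indices, standing in for T^{mu nu}.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
T_upper = A + A.T

# Contraction T^{mu nu} g_{mu nu} as the explicit double sum from the answer ...
T_scalar = sum(T_upper[mu, nu] * g[mu, nu] for mu in range(4) for nu in range(4))

# ... which equals the trace of the mixed tensor T^{mu}_{nu} = T^{mu sigma} g_{sigma nu}.
T_mixed = T_upper @ g
assert np.isclose(T_scalar, np.trace(T_mixed))
print(T_scalar)
```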
-
sorry just realised i miss posted and had a mistake, making it unclear what i mean – user21119 Feb 25 at 11:56
thx sorted it out – user21119 Feb 25 at 12:51
http://mathoverflow.net/questions/87285?sort=newest
## Reference Request: Steinberg’s 1975 paper “On a paper of Pittie”(retrieved)
I am currently working on a senior project trying to prove that for semisimple Lie groups, $R(T)$ is a free module over $R(G)$, by computing an explicit basis for all the A, B, C, D cases. The canonical reference is a paper by Pittie (H.V. Pittie: Homogeneous vector bundles on homogeneous spaces, Topology 11 (1972) 199-203), but I could not find it online or in any books available in the library. Steinberg generalized Pittie's statement in his paper (Robert Steinberg, On a theorem by Pittie, Topology Vol. 14, pp. 173-177, Pergamon Press, 1975, Printed in Great Britain. Received 1 October 1974).
Since they already proved this in the past, I would like to see their papers before I finish my project, even at some monetary cost. But I could not access either of them. Not knowing their work would not hinder my research, for I work at a much more elementary level than they did, but I think their work might be related to my eventual results, and I should acknowledge them in case they proved some formula I proved again on my own. So I want to ask where I can find these papers, in print or electronically. I can read parts of Steinberg's paper via Google Books, but I would like a pdf file or something (so I may check).
ADDED:
With advisor's help and the links provided by all the people below, I retrieved the two papers.
ADDED:
Received Steinberg's reply by email. He notes: "A correction should be made on p.175, line 6 ( which starts with "Consider now ") by putting the exponent "n sub a" on the item over which the product is being taken.The paper by Pittie appears in Topology, vol. 11, 1972, pp. 199-203, and, if I remember correctly, does not contain an explicit basis for the quotient. " This is important, so I put it here.
-
Changwei, both are available online. Pittie: sciencedirect.com/science/article/pii/… Steinberg: sciencedirect.com/science/article/pii/… Hopefully, your institution has a membership which will allow you to get them without paying full price... – BR Feb 1 2012 at 23:21
@BR: I accessed both websites via the link you provided, but when I press the button "view the full text", there is no output. Thanks for your help though. – Changwei Zhou Feb 1 2012 at 23:41
Changwei, I think you might have to either purchase the article or log in through an institutional account. I was able to download both through my institution (technically, I went through MathSciNet). – BR Feb 2 2012 at 3:25
@BR: This is disappointing, but at least I can try something now. Thank you. – Changwei Zhou Feb 2 2012 at 3:54
Not to be too public about it, but I'm sure that if all else fails in retrieving these papers, various people on MO would be glad to download and email to you. – Steve D Feb 3 2012 at 20:46
## 2 Answers
To supplement Barry's citations, I'd point out that the journal Topology was at that time managed by a company which eventually gave up on it after editors resigned partly in protest against the high prices charged. While the online rights now belong to the ScienceDirect conglomerate, it's expensive to access. This can be frustrating because each paper discussed here is only 4+ pages long.
On the other hand, Steinberg's paper is reprinted in the moderately priced one volume Collected Papers (AMS 1997). Though Steinberg is long retired from UCLA, he maintains an email link there, and might be able to supply a reprint of his article. Pittie is an Indian mathematician who has taught at one of the colleges of City University of New York but has not published for many years; his entry in the combined membership list CML (www.ams.org) does give a current mailing address in New York City.
Some users of MO including myself do have access to both papers and might be able to answer precisely stated questions about them.
ADDED: I hadn't heard previously about the recent death of Harsh Pittie. I was somewhat acquainted with him when we were both at NYU-Courant decades ago and recall hearing some of his lectures on topology of Lie groups. His paper from that period was grounded in topology and K-theory, but Steinberg's follow-up (in his typical concise style) rounded out the discussion of representation rings in a more algebraic framework. Moreover, Steinberg exhibits an explicit basis for `$R(T)$` as a free `$R(G)$`-module in the crucial case where `$G$` is a semisimple simply connected compact Lie group and `$T$` any maximal torus. In particular, the rank here is the order of the Weyl group `$W$`. (He also observes that the same ideas work for algebraic groups over any algebraically closed field.)
Though I've never worked through the details of Steinberg's paper carefully, the underlying idea can be observed (in an oversimplified way) in the rank 1 case. Denoting the weight lattice (character group of `$T$` in additive notation) by `$X$`, the respective representation rings look like `$\mathbb{Z}[X]$` and `$\mathbb{Z}[X]^W$`. Then Steinberg's basis elements, one for each element `$w \in W$`, are defined by applying `$w^{-1}$` to a product of symbols (in my notation `$e^\lambda$`) with `$\lambda$` running over suitable fundamental weights. In rank 1, the basis just consists of `$e^0, e^{-\rho}$`.
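To make the rank-1 remark concrete, here is a small sanity check in my own notation (taking $G = SU(2)$, so $W = \{1, s\}$ and the fundamental weight equals $\rho$); it only illustrates the statement above and is no substitute for Steinberg's argument:

```latex
\text{Write } x := e^{\rho},\quad c := x + x^{-1}, \qquad
R(T) = \mathbb{Z}[x, x^{-1}], \qquad R(G) = R(T)^{W} = \mathbb{Z}[c].
% Spanning: x = c - x^{-1} and x^{-2} = c\,x^{-1} - 1, so inductively every Laurent
% polynomial is a \mathbb{Z}[c]-combination of 1 and x^{-1}.
% Freeness: if p(c) + q(c)\,x^{-1} = 0, applying x \mapsto x^{-1} (which fixes c) and
% subtracting gives q(c)(x^{-1}-x) = 0, hence q = 0 and then p = 0.
% So R(T) is free over R(G) with basis \{e^{0}, e^{-\rho}\} = \{1, x^{-1}\}, of rank |W| = 2.
```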
-
@Jim Humphreys: Thanks for the comment. I think I should contact them directly. My main question is whether their means of proving $R(T)$ is a free module over $R(G)$ is related to the action of the Weyl group on fundamental weights (and thus the weight lattice), for that is the approach I am going to take. I also wish to express my gratitude, as I learnt a lot by reading your small book. – Changwei Zhou Feb 2 2012 at 1:50
@Jim Humphreys: Unfortunately I found Pittie has already passed away (see jxxcarlson.wordpress.com/2012/01/25/…). But thank you for the information. – Changwei Zhou Feb 2 2012 at 2:04
If you're willing to pay, you can go to the Topology website and track the articles down. Here's a link that'll take you straight to the issue with the Pittie piece: http://www.sciencedirect.com/science/journal/00409383/11/2 -- you can find a link to the Steinberg issue there too. (Caveat: I don't know for a fact the articles are actually available; it's possible the site will say the order can't be filled. I didn't want to plunk down the coin to find out.)
-
@Barry Cipra: Hi, I will try to access the Topology website after dinner. For the link you provided (as well the links provided from above comments), I just do not know how to get the full article as there is no output by pressing the button. – Changwei Zhou Feb 1 2012 at 23:45
Unfortunately, the button you have to press is the one that says "Purchase." Hopefully someone can arrange to get you reprints. – Barry Cipra Feb 2 2012 at 2:44
@Barry Cipra: I see, so essentially I need to log in to access both articles. Let me contact my school (Bard)'s librarian to see if there is something they can do in this situation. – Changwei Zhou Feb 2 2012 at 11:11
http://www.digplanet.com/wiki/Root_system
# Root system
In mathematics, a root system is a configuration of vectors in a Euclidean space satisfying certain geometrical properties. The concept is fundamental in the theory of Lie groups and Lie algebras. Since Lie groups (and some analogues such as algebraic groups) and Lie algebras have become important in many parts of mathematics during the twentieth century, the apparently special nature of root systems belies the number of areas in which they are applied. Further, the classification scheme for root systems, by Dynkin diagrams, occurs in parts of mathematics with no overt connection to Lie theory (such as singularity theory). Finally, root systems are important for their own sake, as in graph theory in the study of eigenvalues.
## Definitions and first examples
The six vectors of the root system A2.
As a first example, consider the six vectors in 2-dimensional Euclidean space, R2, as shown in the image at the right; call them roots. These vectors span the whole space. If you consider the line perpendicular to any root, say β, then the reflection of R2 in that line sends any other root, say α, to another root. Moreover, the root to which it is sent equals β + n α, where n is an integer (in this case, n equals 1). These six vectors satisfy the following definition, and therefore they form a root system; this one is known as A2.
### Definition
Let V be a finite-dimensional Euclidean vector space, with the standard Euclidean inner product denoted by $(\cdot,\cdot)$. A root system in V is a finite set Φ of non-zero vectors (called roots) that satisfy the following conditions:[1][2]
1. The roots span V.
2. The only scalar multiples of a root x ∈ Φ that belong to Φ are x itself and –x.
3. For every root x ∈ Φ, the set Φ is closed under reflection through the hyperplane perpendicular to x.
4. (Integrality) If x and y are roots in Φ, then the projection of y onto the line through x is an integer or half-integer multiple of x.
An equivalent way of writing conditions 3 and 4 is as follows:
1. For any two roots x and y, the set Φ contains the element $\sigma_x(y) =y-2\frac{(x,y)}{(x,x)}x \in \Phi.$
2. For any two roots x and y, the number $\langle y, x \rangle := 2 \frac{(x,y)}{(x,x)}$ is an integer.
Some authors only include conditions 1–3 in the definition of a root system.[3] In this context, a root system that also satisfies the integrality condition is known as a crystallographic root system.[4] Other authors omit condition 2; then they call root systems satisfying condition 2 reduced.[5] In this article, all root systems are assumed to be reduced and crystallographic.
In view of property 3, the integrality condition is equivalent to stating that $\beta$ and its reflection $\sigma_\alpha(\beta)$ differ by an integer multiple of $\alpha$. Note that the operator
$\langle \cdot, \cdot \rangle \colon \Phi \times \Phi \to \mathbb{Z}$
defined by property 4 is not an inner product. It is not necessarily symmetric and is linear only in the first argument.
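As an added sanity check (not part of the article), the short script below verifies conditions 1, 3 and 4 numerically for the six $A_2$ vectors described at the start of this section; the concrete coordinates chosen for the two generating roots are an assumption made for the example, and condition 2 holds for this set by inspection:

```python
import itertools
import numpy as np

# A concrete realization of the six A2 roots in the plane (assumed coordinates).
a = np.array([1.0, 0.0])
b = np.array([-0.5, np.sqrt(3) / 2])
roots = [a, b, a + b, -a, -b, -(a + b)]

def cartan(x, y):
    """The number <y, x> = 2 (x, y) / (x, x) appearing in condition 4."""
    return 2 * np.dot(x, y) / np.dot(x, x)

def reflect(x, y):
    """Reflection sigma_x(y) of y through the hyperplane perpendicular to x."""
    return y - cartan(x, y) * x

def in_set(v):
    return any(np.allclose(v, r) for r in roots)

# Condition 1: the roots span R^2.
assert np.linalg.matrix_rank(np.array(roots)) == 2
# Condition 3 (closure under reflections) and condition 4 (integrality).
for x, y in itertools.product(roots, repeat=2):
    assert in_set(reflect(x, y))
    assert np.isclose(cartan(x, y), round(cartan(x, y)))
print("The six A2 vectors satisfy the root-system axioms.")
```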
(Figures: the rank-2 root systems $A_1 \times A_1$, $D_2$, $A_2$, $G_2$, $B_2$ and $C_2$.)
The rank of a root system Φ is the dimension of V. Two root systems may be combined by regarding the Euclidean spaces they span as mutually orthogonal subspaces of a common Euclidean space. A root system which does not arise from such a combination, such as the systems A2, B2, and G2 pictured to the right, is said to be irreducible.
Two root systems (E1, Φ1) and (E2, Φ2) are called isomorphic if there is an invertible linear transformation E1 → E2 which sends Φ1 to Φ2 such that for each pair of roots, the number $\langle x, y \rangle$ is preserved.[6]
The group of isometries of V generated by reflections through hyperplanes associated to the roots of Φ is called the Weyl group of Φ. As it acts faithfully on the finite set Φ, the Weyl group is always finite.
The root lattice of a root system Φ is the Z-submodule of V generated by Φ. It is a lattice in V.
### Rank two examples
There is only one root system of rank 1, consisting of two nonzero vectors $\{\alpha, -\alpha\}$. This root system is called $A_1$.
In rank 2 there are four possibilities, corresponding to $\sigma_\alpha(\beta) = \beta + n\alpha$, where $n = 0, 1, 2, 3$. Note that the lattice generated by a root system is not unique: $A_1 \times A_1$ and $B_2$ generate a square lattice while $A_2$ and $G_2$ generate a hexagonal lattice, only two of the five possible types of lattices in two dimensions.
Whenever Φ is a root system in V, and U is a subspace of V spanned by Ψ = Φ ∩ U, then Ψ is a root system in U. Thus, the exhaustive list of four root systems of rank 2 shows the geometric possibilities for any two roots chosen from a root system of arbitrary rank. In particular, two such roots must meet at an angle of 0, 30, 45, 60, 90, 120, 135, 150, or 180 degrees.
## History
The concept of a root system was originally introduced by Wilhelm Killing around 1889 (in German, Wurzelsystem[7]).[8] He used them in his attempt to classify all simple Lie algebras over the field of complex numbers. Killing originally made a mistake in the classification, listing two exceptional rank 4 root systems, when in fact there is only one, now known as F4. Cartan later corrected this mistake, by showing Killing's two root systems were isomorphic.[9]
Killing investigated the structure of a Lie algebra $L$, by considering (what is now called) a Cartan subalgebra $\mathfrak{h}$. Then he studied the roots of the characteristic polynomial $\det (ad_L x - t)$, where $x \in \mathfrak{h}$. Here a root is considered as a function of $\mathfrak{h}$, or indeed as an element of the dual vector space $\mathfrak{h}^*$. This set of roots form a root system inside $\mathfrak{h}^*$, as defined above, where the inner product is the Killing form.[10]
## Elementary consequences of the root system axioms
The integrality condition for $\langle \beta, \alpha \rangle$ is fulfilled only for β on one of the vertical lines, while the integrality condition for $\langle \alpha, \beta \rangle$ is fulfilled only for β on one of the red circles. Any β perpendicular to α (on the Y axis) trivially fulfills both with 0, but does not define an irreducible root system.
Modulo reflection, for a given α there are only 5 nontrivial possibilities for β, and 3 possible angles between α and β in a set of simple roots. Subscript letters correspond to the series of root systems for which the given β can serve as the first root and α as the second root (or, in F4, as the middle 2 roots).
The cosine of the angle between two roots is constrained to be a half-integral multiple of a square root of an integer. This is because $\langle \beta, \alpha \rangle$ and $\langle \alpha, \beta \rangle$ are both integers, by assumption, and
$\langle \beta, \alpha \rangle \langle \alpha, \beta \rangle = 2 \frac{(\alpha,\beta)}{(\alpha,\alpha)} \cdot 2 \frac{(\alpha,\beta)}{(\beta,\beta)} = 4 \frac{(\alpha,\beta)^2}{\vert \alpha \vert^2 \vert \beta \vert^2} = 4 \cos^2(\theta) = (2\cos(\theta))^2 \in \mathbb{Z}.$
Since $2\cos(\theta) \in [-2,2]$, the only possible values for $\cos(\theta)$ are $0, \pm \tfrac{1}{2}, \pm\tfrac{\sqrt{2}}{2}, \pm\tfrac{\sqrt{3}}{2}, \pm\tfrac{\sqrt{4}}{2} = \pm 1$, corresponding to angles of 90°, 60° or 120°, 45° or 135°, 30° or 150°, and 0° or 180°. Condition 2 says that no scalar multiples of α other than α and −α can be roots, so angles of 0° or 180°, which would correspond to the roots 2α or −2α, are ruled out.
## Positive roots and simple roots
Given a root system Φ we can always choose (in many ways) a set of positive roots. This is a subset $\Phi^+$ of Φ such that
• For each root $\alpha\in\Phi$ exactly one of the roots $\alpha$, –$\alpha$ is contained in $\Phi^+$.
• For any two distinct $\alpha, \beta\in \Phi^+$ such that $\alpha+\beta$ is a root, $\alpha+\beta\in\Phi^+$.
If a set of positive roots $\Phi^+$ is chosen, elements of $-\Phi^+$ are called negative roots.
An element of $\Phi^+$ is called a simple root if it cannot be written as the sum of two elements of $\Phi^+$. The set $\Delta$ of simple roots is a basis of $V$ with the property that every vector in $\Phi$ is a linear combination of elements of $\Delta$ with all coefficients non-negative, or all coefficients non-positive. For each choice of positive roots, the corresponding set of simple roots is the unique set of roots such that the positive roots are exactly those that can be expressed as a combination of them with non-negative coefficients, and such that these combinations are unique.
### The root poset
The set of positive roots is naturally ordered by saying that $\alpha \leq \beta$ if and only if $\beta-\alpha$ is a nonnegative linear combination of simple roots. This poset is graded by $\operatorname{deg}\big(\sum_{\alpha \in \Delta} \lambda_\alpha \alpha\big) = \sum_{\alpha \in \Delta}\lambda_\alpha$, and has many remarkable combinatorial properties, one of them being that one can determine the degrees of the fundamental invariants of the corresponding Weyl group from this poset.[11]
## Dual root system and coroots
See also: Langlands dual group
If Φ is a root system in V, the coroot $\alpha^\vee$ of a root α is defined by
$\alpha^\vee= {2\over (\alpha,\alpha)}\, \alpha.$
The set of coroots also forms a root system $\Phi^\vee$ in V, called the dual root system (or sometimes inverse root system). By definition, $(\alpha^\vee)^\vee = \alpha$, so that Φ is the dual root system of $\Phi^\vee$. The lattice in V spanned by $\Phi^\vee$ is called the coroot lattice. Both Φ and $\Phi^\vee$ have the same Weyl group W and, for s in W,
$(s\alpha)^\vee= s(\alpha^\vee).$
If Δ is a set of simple roots for Φ, then $\Delta^\vee$ is a set of simple roots for $\Phi^\vee$.
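As a small added illustration of this duality (using the standard coordinates for the $B_2$ roots that appear later in the explicit constructions; the helper `coroot` is ad hoc), the coroots of $B_2$ form a copy of $C_2$:

```python
import numpy as np

# The eight roots of B2: the integer vectors in R^2 of length 1 or sqrt(2).
b2 = [np.array(v, dtype=float) for v in
      [(1, 0), (-1, 0), (0, 1), (0, -1),
       (1, 1), (1, -1), (-1, 1), (-1, -1)]]

def coroot(a):
    """The coroot 2a / (a, a) defined above."""
    return 2 * a / np.dot(a, a)

duals = [coroot(a) for a in b2]
# Short roots of B2 dualize to long roots and vice versa; the resulting set
# {(+-1, +-1), (+-2, 0), (0, +-2)} is the C2 root system.
print(sorted(tuple(int(round(c)) for c in d) for d in duals))
```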
## Classification of root systems by Dynkin diagrams
Pictures of all the irreducible Dynkin diagrams
Irreducible root systems correspond to certain graphs, the Dynkin diagrams named after Eugene Dynkin. The classification of these graphs is a simple matter of combinatorics, and induces a classification of irreducible root systems.
Given a root system, select a set Δ of simple roots as in the preceding section. The vertices of the associated Dynkin diagram correspond to vectors in Δ. An edge is drawn between each non-orthogonal pair of vectors; it is an undirected single edge if they make an angle of $2 \pi / 3$ radians, a directed double edge if they make an angle of $3 \pi / 4$ radians, and a directed triple edge if they make an angle of $5 \pi / 6$ radians. The term "directed edge" means that double and triple edges are marked with an angle sign pointing toward the shorter vector.
Although a given root system has more than one possible set of simple roots, the Weyl group acts transitively on such choices. Consequently, the Dynkin diagram is independent of the choice of simple roots; it is determined by the root system itself. Conversely, given two root systems with the same Dynkin diagram, one can match up roots, starting with the roots in the base, and show that the systems are in fact the same.
Thus the problem of classifying root systems reduces to the problem of classifying possible Dynkin diagrams. Root systems are irreducible if and only if their Dynkin diagrams are connected. Dynkin diagrams encode the inner product on E in terms of the basis Δ, and the condition that this inner product must be positive definite turns out to be all that is needed to get the desired classification.
The actual connected diagrams are as follows. The subscripts indicate the number of vertices in the diagram (and hence the rank of the corresponding irreducible root system).
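The edge rule above can be restated as: the number of edges between two simple roots α and β is $\langle \alpha,\beta\rangle\langle\beta,\alpha\rangle = 4\cos^2\theta$. A small added check of this (the helper `n_edges` is ad hoc, and the simple roots used below are the ones listed later in the explicit constructions for $A_2$ and $G_2$):

```python
import numpy as np

def n_edges(a, b):
    """Number of Dynkin-diagram edges between simple roots a and b:
    <a, b><b, a> = 4 cos^2(theta), giving 0, 1, 2 or 3 for angles
    pi/2, 2pi/3, 3pi/4, 5pi/6."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return int(round(2 * a.dot(b) / a.dot(a) * 2 * a.dot(b) / b.dot(b)))

# G2 simple roots (from the explicit construction below): angle 5pi/6, triple edge.
print(n_edges([1, -1, 0], [-1, 2, -1]))   # 3
# A2 simple roots: angle 2pi/3, single edge.
print(n_edges([1, -1, 0], [0, 1, -1]))    # 1
```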
## Properties of the irreducible root systems
| $\Phi$ | $\vert\Phi\vert$ | $\vert\Phi^{<}\vert$ | $I$ | $D$ | $\vert W\vert$ |
|----|----|----|----|----|----|
| $A_n$ ($n \ge 1$) | $n(n+1)$ | | | $n+1$ | $(n+1)!$ |
| $B_n$ ($n \ge 2$) | $2n^2$ | $2n$ | $2$ | $2$ | $2^n\, n!$ |
| $C_n$ ($n \ge 3$) | $2n^2$ | $2n(n-1)$ | $2$ | $2$ | $2^n\, n!$ |
| $D_n$ ($n \ge 4$) | $2n(n-1)$ | | | $4$ | $2^{n-1}\, n!$ |
| $E_6$ | 72 | | | 3 | 51840 |
| $E_7$ | 126 | | | 2 | 2903040 |
| $E_8$ | 240 | | | 1 | 696729600 |
| $F_4$ | 48 | 24 | 4 | 1 | 1152 |
| $G_2$ | 12 | 6 | 3 | 1 | 12 |
Irreducible root systems are named according to their corresponding connected Dynkin diagrams. There are four infinite families (An, Bn, Cn, and Dn, called the classical root systems) and five exceptional cases (the exceptional root systems).[12] The subscript indicates the rank of the root system.
In an irreducible root system there can be at most two values for the length $(\alpha, \alpha)^{1/2}$, corresponding to short and long roots. If all roots have the same length they are taken to be long by definition and the root system is said to be simply laced; this occurs in the cases A, D and E. Any two roots of the same length lie in the same orbit of the Weyl group. In the non-simply laced cases B, C, G and F, the root lattice is spanned by the short roots and the long roots span a sublattice, invariant under the Weyl group, equal to $r^2/2$ times the coroot lattice, where $r$ is the length of a long root.
In the table above, $\vert\Phi^{<}\vert$ denotes the number of short roots, $I$ denotes the index in the root lattice of the sublattice generated by long roots, $D$ denotes the determinant of the Cartan matrix, and $\vert W\vert$ denotes the order of the Weyl group.
## Explicit construction of the irreducible root systems
### An
Simple roots in $A_3$

|  | $e_1$ | $e_2$ | $e_3$ | $e_4$ |
|----|----|----|----|----|
| $\alpha_1$ | 1 | -1 | 0 | 0 |
| $\alpha_2$ | 0 | 1 | -1 | 0 |
| $\alpha_3$ | 0 | 0 | 1 | -1 |
Let V be the subspace of $\mathbb{R}^{n+1}$ for which the coordinates sum to 0, and let Φ be the set of vectors in V of length √2 which are integer vectors, i.e. have integer coordinates in $\mathbb{R}^{n+1}$. Such a vector must have all but two coordinates equal to 0, one coordinate equal to 1, and one equal to −1, so there are $n^2 + n$ roots in all. One choice of simple roots expressed in the standard basis is: $\alpha_i = e_i - e_{i+1}$, for $1 \le i \le n$.
The reflection $\sigma_i$ through the hyperplane perpendicular to $\alpha_i$ is the same as permutation of the adjacent $i$-th and $(i+1)$-th coordinates. Such transpositions generate the full permutation group. For adjacent simple roots, $\sigma_i(\alpha_{i+1}) = \alpha_{i+1} + \alpha_i = \sigma_{i+1}(\alpha_i) = \alpha_i + \alpha_{i+1}$; that is, reflection is equivalent to adding a multiple of 1; but reflection of a simple root perpendicular to a nonadjacent simple root leaves it unchanged, differing by a multiple of 0.
The lattice generated by the A3 root system is known to crystallographers as the face-centered cubic (fcc) (or cubic close packed) lattice.[13]
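For the record, here is a direct transcription of this construction into code (an added example, not from the article; the helper `a_n_roots` is ad hoc), confirming the count of $n^2 + n$ roots:

```python
import itertools
import numpy as np

def a_n_roots(n):
    """Roots of A_n: the vectors e_i - e_j (i != j) in R^{n+1}, i.e. the integer
    vectors of length sqrt(2) whose coordinates sum to 0."""
    roots = []
    for i, j in itertools.permutations(range(n + 1), 2):
        v = np.zeros(n + 1, dtype=int)
        v[i], v[j] = 1, -1
        roots.append(v)
    return roots

for n in (2, 3, 4):
    roots = a_n_roots(n)
    assert len(roots) == n**2 + n            # n^2 + n roots in all
    assert all(v.sum() == 0 for v in roots)  # all lie in the sum-zero hyperplane
    print(n, len(roots))
```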
### Bn
Simple roots in $B_4$

|  | $e_1$ | $e_2$ | $e_3$ | $e_4$ |
|----|----|----|----|----|
| $\alpha_1$ | 1 | -1 | 0 | 0 |
| $\alpha_2$ | 0 | 1 | -1 | 0 |
| $\alpha_3$ | 0 | 0 | 1 | -1 |
| $\alpha_4$ | 0 | 0 | 0 | 1 |
Let $V = \mathbb{R}^n$, and let Φ consist of all integer vectors in V of length 1 or √2. The total number of roots is $2n^2$. One choice of simple roots is: $\alpha_i = e_i - e_{i+1}$, for $1 \le i \le n-1$ (the above choice of simple roots for $A_{n-1}$), and the shorter root $\alpha_n = e_n$.
The reflection $\sigma_n$ through the hyperplane perpendicular to the short root $\alpha_n$ is of course simply negation of the $n$th coordinate. For the long simple root $\alpha_{n-1}$, $\sigma_{n-1}(\alpha_n) = \alpha_n + \alpha_{n-1}$, but for reflection perpendicular to the short root, $\sigma_n(\alpha_{n-1}) = \alpha_{n-1} + 2\alpha_n$, a difference by a multiple of 2 instead of 1.
B1 is isomorphic to A1 via scaling by √2, and is therefore not a distinct root system.
### Cn
Simple roots in $C_4$

|  | $e_1$ | $e_2$ | $e_3$ | $e_4$ |
|----|----|----|----|----|
| $\alpha_1$ | 1 | -1 | 0 | 0 |
| $\alpha_2$ | 0 | 1 | -1 | 0 |
| $\alpha_3$ | 0 | 0 | 1 | -1 |
| $\alpha_4$ | 0 | 0 | 0 | 2 |
Let $V = \mathbb{R}^n$, and let Φ consist of all integer vectors in V of length √2 together with all vectors of the form $2\lambda$, where λ is an integer vector of length 1. The total number of roots is $2n^2$. One choice of simple roots is: $\alpha_i = e_i - e_{i+1}$, for $1 \le i \le n-1$ (the above choice of simple roots for $A_{n-1}$), and the longer root $\alpha_n = 2e_n$. The reflection $\sigma_n(\alpha_{n-1}) = \alpha_{n-1} + \alpha_n$, but $\sigma_{n-1}(\alpha_n) = \alpha_n + 2\alpha_{n-1}$.
C2 is isomorphic to B2 via scaling by √2 and a 45 degree rotation, and is therefore not a distinct root system.
Root system B3, C3, and A3=D3 as points within a cube and octahedron
### Dn
Simple roots in $D_4$

|  | $e_1$ | $e_2$ | $e_3$ | $e_4$ |
|----|----|----|----|----|
| $\alpha_1$ | 1 | -1 | 0 | 0 |
| $\alpha_2$ | 0 | 1 | -1 | 0 |
| $\alpha_3$ | 0 | 0 | 1 | -1 |
| $\alpha_4$ | 0 | 0 | 1 | 1 |
Let $V = \mathbb{R}^n$, and let Φ consist of all integer vectors in V of length √2. The total number of roots is $2n(n-1)$. One choice of simple roots is: $\alpha_i = e_i - e_{i+1}$, for $1 \le i < n$ (the above choice of simple roots for $A_{n-1}$) plus $\alpha_n = e_n + e_{n-1}$.
Reflection through the hyperplane perpendicular to $\alpha_n$ is the same as transposing and negating the adjacent $n$-th and $(n-1)$-th coordinates. Any simple root and its reflection perpendicular to another simple root differ by a multiple of 0 or 1 of the second root, not by any greater multiple.
D3 reduces to A3, and is therefore not a distinct root system.
D4 has additional symmetry called triality.
### E6, E7, E8
(Figures: 72 vertices of the polytope 1_22 represent the root vectors of E6 (orange nodes are doubled in this E6 Coxeter plane projection); 126 vertices of 2_31 represent the root vectors of E7; 240 vertices of 4_21 represent the root vectors of E8.)
E8 will be explained first.
• The E8 root system is any set of vectors in R8 that is congruent to the following set:
$D_8 \cup \left\{ \tfrac{1}{2}\left( \textstyle\sum_{i=1}^{8} \varepsilon_i e_i \right) : \varepsilon_i = \pm 1,\ \varepsilon_1\cdots\varepsilon_8 = +1 \right\}.$
The root system has 240 roots. The set just listed is the set of vectors of length √2 in the E8 lattice Γ8, which is the set of points in R8 such that:
1. all the coordinates are integers or all the coordinates are half-integers (a mixture of integers and half-integers is not allowed), and
2. the sum of the eight coordinates is an even integer.
Thus,
$E_8 = \left\{ \alpha \in \mathbb{Z}^8 \cup (\mathbb{Z}+\tfrac{1}{2})^8 : |\alpha|^2 = \textstyle\sum_i \alpha_i^2 = 2,\ \textstyle\sum_i \alpha_i \in 2\mathbb{Z} \right\}.$
• The root system E7 is the set of vectors in E8 that are perpendicular to a fixed root in E8. The root system E7 has 126 roots.
• The root system E6 is not the set of vectors in E7 that are perpendicular to a fixed root in E7, indeed, one obtains D6 that way. However, E6 is the subsystem of E8 perpendicular to two suitably chosen roots of E8. The root system E6 has 72 roots.
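As a quick computational check of these counts (an added illustration; the list of roots is built ad hoc from the even-coordinate description, and the two roots fixed for $E_6$ are taken — following the remark that they must be "suitably chosen" — to be two roots spanning an $A_2$):

```python
import itertools
import numpy as np

# E8 roots in the even coordinate system: the D8 roots together with the
# half-integer vectors (+-1/2, ..., +-1/2) whose signs have product +1.
roots = []
for i, j in itertools.combinations(range(8), 2):
    for si, sj in itertools.product((1, -1), repeat=2):
        v = np.zeros(8)
        v[i], v[j] = si, sj
        roots.append(v)
for signs in itertools.product((0.5, -0.5), repeat=8):
    if sum(s < 0 for s in signs) % 2 == 0:   # even number of minus signs
        roots.append(np.array(signs))

assert len(roots) == 240                      # |E8| = 240

r1 = roots[0]
e7 = [v for v in roots if np.isclose(np.dot(v, r1), 0)]
assert len(e7) == 126                         # roots orthogonal to one root: E7

# Take a second root at 120 degrees to r1 (so r1, r2 span an A2); the roots
# orthogonal to both form E6.
r2 = next(v for v in roots if np.isclose(np.dot(v, r1), -1))
e6 = [v for v in e7 if np.isclose(np.dot(v, r2), 0)]
assert len(e6) == 72
print(len(roots), len(e7), len(e6))
```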
Simple roots in $E_8$: even coordinate system

|  | $e_1$ | $e_2$ | $e_3$ | $e_4$ | $e_5$ | $e_6$ | $e_7$ | $e_8$ |
|----|----|----|----|----|----|----|----|----|
| $\alpha_1$ | 1 | -1 | 0 | 0 | 0 | 0 | 0 | 0 |
| $\alpha_2$ | 0 | 1 | -1 | 0 | 0 | 0 | 0 | 0 |
| $\alpha_3$ | 0 | 0 | 1 | -1 | 0 | 0 | 0 | 0 |
| $\alpha_4$ | 0 | 0 | 0 | 1 | -1 | 0 | 0 | 0 |
| $\alpha_5$ | 0 | 0 | 0 | 0 | 1 | -1 | 0 | 0 |
| $\alpha_6$ | 0 | 0 | 0 | 0 | 0 | 1 | -1 | 0 |
| $\alpha_7$ | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
| $\alpha_8$ | -½ | -½ | -½ | -½ | -½ | -½ | -½ | -½ |
An alternative description of the E8 lattice which is sometimes convenient is as the set Γ'8 of all points in R8 such that
• all the coordinates are integers and the sum of the coordinates is even, or
• all the coordinates are half-integers and the sum of the coordinates is odd.
The lattices Γ8 and Γ'8 are isomorphic; one may pass from one to the other by changing the signs of any odd number of coordinates. The lattice Γ8 is sometimes called the even coordinate system for E8 while the lattice Γ'8 is called the odd coordinate system.
One choice of simple roots for E8 in the even coordinate system is
$\alpha_i = e_i - e_{i+1}$, for $1 \le i \le 6$, and
$\alpha_7 = e_7 + e_6$
(the above choice of simple roots for $D_7$) along with
$\alpha_8 = \beta_0 = -\textstyle\frac{1}{2}\big(\sum_{i=1}^8 e_i\big) = (-\tfrac{1}{2},-\tfrac{1}{2},-\tfrac{1}{2},-\tfrac{1}{2},-\tfrac{1}{2},-\tfrac{1}{2},-\tfrac{1}{2},-\tfrac{1}{2})$.
Simple roots in $E_8$: odd coordinate system

|  | $e_1$ | $e_2$ | $e_3$ | $e_4$ | $e_5$ | $e_6$ | $e_7$ | $e_8$ |
|----|----|----|----|----|----|----|----|----|
| $\alpha_1$ | 1 | -1 | 0 | 0 | 0 | 0 | 0 | 0 |
| $\alpha_2$ | 0 | 1 | -1 | 0 | 0 | 0 | 0 | 0 |
| $\alpha_3$ | 0 | 0 | 1 | -1 | 0 | 0 | 0 | 0 |
| $\alpha_4$ | 0 | 0 | 0 | 1 | -1 | 0 | 0 | 0 |
| $\alpha_5$ | 0 | 0 | 0 | 0 | 1 | -1 | 0 | 0 |
| $\alpha_6$ | 0 | 0 | 0 | 0 | 0 | 1 | -1 | 0 |
| $\alpha_7$ | 0 | 0 | 0 | 0 | 0 | 0 | 1 | -1 |
| $\alpha_8$ | -½ | -½ | -½ | -½ | -½ | ½ | ½ | ½ |
One choice of simple roots for E8 in the odd coordinate system is
$\alpha_i = e_i - e_{i+1}$, for $1 \le i \le 7$
(the above choice of simple roots for $A_7$) along with
$\alpha_8 = \beta_5$, where
$\beta_j = \textstyle\frac{1}{2}\big(-\sum_{i=1}^j e_i + \sum_{i=j+1}^8 e_i\big)$.
(Using β3 would give an isomorphic result. Using β1,7 or β2,6 would simply give A8 or D8. As for β4, its coordinates sum to 0, and the same is true for α1...7, so they span only the 7-dimensional subspace for which the coordinates sum to 0; in fact –2β4 has coordinates (1,2,3,4,3,2,1) in the basis (αi).)
Deleting α1 and then α2 gives sets of simple roots for E7 and E6. Since perpendicularity to α1 means that the first two coordinates are equal, E7 is then the subset of E8 where the first two coordinates are equal, and similarly E6 is the subset of E8 where the first three coordinates are equal. This facilitates explicit definitions of E7 and E6 as:
$E_7 = \left\{ \alpha \in \mathbb{Z}^7 \cup (\mathbb{Z}+\tfrac{1}{2})^7 : \textstyle\sum_i \alpha_i^2 + \alpha_1^2 = 2,\ \textstyle\sum_i \alpha_i + \alpha_1 \in 2\mathbb{Z} \right\},$
$E_6 = \left\{ \alpha \in \mathbb{Z}^6 \cup (\mathbb{Z}+\tfrac{1}{2})^6 : \textstyle\sum_i \alpha_i^2 + 2\alpha_1^2 = 2,\ \textstyle\sum_i \alpha_i + 2\alpha_1 \in 2\mathbb{Z} \right\}$
### F4
Simple roots in $F_4$

|  | $e_1$ | $e_2$ | $e_3$ | $e_4$ |
|----|----|----|----|----|
| $\alpha_1$ | 1 | -1 | 0 | 0 |
| $\alpha_2$ | 0 | 1 | -1 | 0 |
| $\alpha_3$ | 0 | 0 | 1 | 0 |
| $\alpha_4$ | -½ | -½ | -½ | -½ |
48-root vectors of F4, defined by vertices of the 24-cell and its dual, viewed in the Coxeter plane
For $F_4$, let $V = \mathbb{R}^4$, and let Φ denote the set of vectors α of length 1 or √2 such that the coordinates of 2α are all integers and are either all even or all odd. There are 48 roots in this system. One choice of simple roots is: the choice of simple roots given above for $B_3$, plus $\alpha_4 = -\textstyle\frac{1}{2} \sum_{i=1}^4 e_i$.
### G2
Simple roots in $G_2$

|  | $e_1$ | $e_2$ | $e_3$ |
|----|----|----|----|
| $\alpha_1$ | 1 | -1 | 0 |
| $\beta$ | -1 | 2 | -1 |
The root system G2 has 12 roots, which form the vertices of a hexagram. See the picture above.
One choice of simple roots is: (α1, β = α2 – α1) where αi = ei – ei+1 for i = 1, 2 is the above choice of simple roots for A2.
## Root systems and Lie theory
Irreducible root systems classify a number of related objects in Lie theory, notably the
• simple Lie groups (see the list of simple Lie groups), including the
• simple complex Lie groups;
• their associated simple complex Lie algebras; and
• simply connected complex Lie groups which are simple modulo centers.
In each case, the roots are non-zero weights of the adjoint representation.
In the case of a simply connected simple compact Lie group G with maximal torus T, the root lattice can naturally be identified with Hom(T, T) and the coroot lattice with Hom(T, T); see Adams (1983).
For connections between the exceptional root systems and their Lie groups and Lie algebras see E8, E7, E6, F4, and G2.
## Notes
1. Bourbaki, Ch.VI, Section 1
2. Humphreys (1972), p.42
3. Humphreys (1992), p.6
4. Humphreys (1992), p.39
5. Humphreys (1992), p.41
6. Humphreys (1972), p.43
7. Killing (1889)
8. Bourbaki (1998), p.270
9. Coleman, p.34
10. Bourbaki (1998), p.270
11. Humphreys (1992), Theorem 3.20
12. Hall, Brian C. (2003), Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Springer, ISBN 0-387-40122-9 .
13. Conway, John Horton; Sloane, Neil James Alexander; & Bannai, Eiichi. Sphere packings, lattices, and groups. Springer, 1999, Section 6.3.
## References
• Adams, J.F. (1983), Lectures on Lie groups, University of Chicago Press, ISBN 0-226-00530-5
• Bourbaki, Nicolas (2002), Lie groups and Lie algebras, Chapters 4–6 (translated from the 1968 French original by Andrew Pressley), Elements of Mathematics, Springer-Verlag, ISBN 3-540-42650-7 . The classic reference for root systems.
• Bourbaki, Nicolas (1998). Elements of the History of Mathematics. Springer. ISBN 3540647678.
• A.J. Coleman (Summer 1989), "The greatest mathematical paper of all time", The Mathematical Intelligencer 11 (3): 29–38
• Humphreys, James (1992). Reflection Groups and Coxeter Groups. Cambridge University Press. ISBN 0521436133.
• Humphreys, James (1972). Introduction to Lie algebras and Representation Theory. Springer. ISBN 0387900535.
• Killing, Die Zusammensetzung der stetigen/endlichen Transformationsgruppen Mathematische Annalen, Volume 31, Number 2 June 1888, Pages 252-290 doi:10.1007/BF01211904, Volume 33, Number 1 March 1888, Pages 1–48 doi:10.1007/BF01444109, Volume 34, Number 1 March 1889, Pages 57–122 doi:10.1007/BF01446792, Volume 36, Number 2 June 1890,Pages 161-189 doi:10.1007/BF01207837
• Kac, Victor G. (1994), Infinite dimensional Lie algebras .
• Springer, T.A. (1998). Linear Algebraic Groups, Second Edition. Birkauser. ISBN 0817640215.
## Further reading
• Dynkin, E. B. The structure of semi-simple algebras. (Russian) Uspehi Matem. Nauk (N.S.) 2, (1947). no. 4(20), 59–127.
http://mathhelpforum.com/differential-geometry/183975-how-show-b-0-1-not-separable.html
# Thread:
1. ## How to show B[0,1] is not separable
I'm studying for comps, and this one has me stumped:
Show that the space of bounded functions $f:[0,1]\rightarrow \mathbb{R}$ under the sup norm is not separable.
I suspect that you'd either have to show an arbitrary dense set is uncountable or that an arbitrary countable set can't be dense, but I don't have any idea how to actually implement those strategies.
2. ## Re: How to show B[0,1] is not separable
For all $x\in [0,1]$ consider the family of bounded functions $f_x(t)=\begin{cases} 1 & \mbox{if } t=x\\ 0 & \mbox{if } t\neq x\end{cases}$
We verify that $d(f_x,f_y)=1$ for all $x\neq y$, i.e., the family $\mathcal{F}=\{B(f_x,1/2):x\in[0,1]\}$ of open balls is pairwise disjoint. Now choose $A\subset B[0,1]$ dense and use that $[0,1]$ is non-denumerable. (Each ball must contain a point of $A$, and disjointness then gives an injection from $[0,1]$ into $A$, so $A$ cannot be countable.)
http://stats.stackexchange.com/questions/13090/switching-from-unsupervised-to-supervised-learning
# Switching from unsupervised to supervised learning
Disclaimer: This is reposted from stackoverflow.
I am working on a research-oriented system of collaborating agents.
The agents perform many stochastic experiments (thousands per second), interacting with each other, in a complex high-dimension environment. Each experiment is reproducible and deterministic. The system is trying to learn optimal collaboration patterns.
Previous attempts (several skilled PhDs) have tried both rule-based algorithms, and also unsupervised learning. Both approaches topped out at between 10-20% of brute-force optimal scoring.
I now want to try to use supervised or reinforced learning. Previously, this was impossible because just labeling the data required NP runtime (per experiment!).
I have now devised a new set of faster P-time labels/classifiers. And I have a large amount ($10^9$ experiments) of labeled training data.
My questions are:
1. Can I hope for significantly better results with supervised or reinforced learning (vs. unsupervised learning)?
2. In general, has unsupervised learning been able to match the result of supervised learning?
ADDED COMMENT1
Yes, I realize unsupervised and supervised are different domains. But consider the typical problem of OCR, which we can approach with or without labeling... obviously labeling gives us more information... but we are still trying to solve the same problem, no?
ADDED COMMENT2
Some of the agents were hand-coded with rule-based algorithms. Complex rules are progressively harder to write AND more CPU-intensive (the rule itself often involves an NP search of a constrained solution space).
We have many samples from simulated AND production runs. The production runs include noise, non-optimal agents, and external changes.
With unsupervised learning, we are able to isolate clusters of agent interactions. Some clusters were manually selected and "converted" into a rule (in a nutshell, such a rule tries to "approximate" some of the NP decisions made by agents). Some rules turn out to be good heuristics for agent behavior.
NOW, I (might) be able to actually score/label agent interactions. So if I relabel all previous runs, I can now run supervised learning, and I am hoping to be able to find rules.
-
I have troubles understanding you because you seem to use non standard terminology. Comparing "classified" and "unclassified" learning suggests that you are not talking about classification, since it's a "problem domain" and not a "technique". – bayerj Jul 15 '11 at 12:56
Can you please invest some time in better description of your data and overall clarifying? In a current state you have little chance of getting an answer. – mbq♦ Jul 15 '11 at 13:37
I tweaked the original post. I now refer to supervised and unsupervised learning. If it still sucks i will go back to reading more papers... – Y A Jul 15 '11 at 22:06
Unsupervised and supervised learning don't try to solve the same problem. If your question is "which is better" it does not make sense, because they can't be compared. – bayerj Jul 16 '11 at 14:43
May I suggest you add some references about "previous attempts" (3rd para) so that we can get a better idea of the problem and how supervised and unsupervised learning methods might concur in your domain-specific application? – chl♦ Jul 16 '11 at 21:09
## 3 Answers
Supervised learning depends on the quality of the labelling, and in fact mislabelled examples can be highly problematic in some regimes (e.g. consider a 0-1 problem and boosting combined with a generative classifier, when a mislabelled point is very representative of one class but labelled as belonging to the other).
With good labelling I think that you would expect a supervised approach to outperform a semi-supervised applied to the same problem on average (but not always). I don't have a reference, but I think one could show this using an information theoretic argument along the lines of "conditioning reduces entropy".
Unsupervised learning is rather different, but I imagine when you compare this to supervised approaches you mean assigning an unlabelled point to a cluster (for example) learned from unlabelled data in an analogous way to assigning an unlabelled point to a class learned from labelled data. Similarly to semi-supervised, I think supervised should do better on average assuming the labels on training data are good.
If I understand you correctly, you are using an unsupervised method to apply labels to training data, and you then want to employ a supervised method trained on that labelled training data. Then I think you can only expect to do well if your unsupervised method is good at labelling, and probably poorly otherwise.
-
Consider an active learning approach. From Wikipedia's Active learning (machine learning) article:
There are situations in which unlabeled data is abundant but labeling data is expensive. In such a scenario the learning algorithm can actively query the user/teacher for labels. This type of iterative supervised learning is called active learning. Since the learner chooses the examples, the number of examples to learn a concept can often be much lower than the number required in normal supervised learning. With this approach there is a risk that the algorithm might focus on unimportant or even invalid examples.
-
Not an answer, but a link: Hastie, Tibshirani and Friedman, Elements of Statistical Learning (2009, 763p, free pdf) describe on page 495 ff. a way of transforming unsupervised to supervised learning. (In a nutshell, Y=1 on the real data, Y=0 on Monte Carlo data). They note, though,
Although this approach ... seems to have been part of the statistics folklore for some time, it does not appear to have had much impact despite its potential to bring well-developed supervised learning methodology to bear on unsupervised learning problems.
I haven't used it myself; anyone ?
(Sigh: the questioner gives us no idea of how many data points, features, clusters he or she has; one size cannot possibly fit all.)
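For concreteness, here is a minimal sketch of that real-versus-Monte-Carlo trick using scikit-learn; the synthetic blobs, the uniform reference distribution over the bounding box, and the choice of a random forest are all assumptions made for illustration rather than anything prescribed by the book:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# "Real" unlabeled data: two blobs in 5 dimensions (a purely synthetic stand-in).
X_real = np.vstack([rng.normal(0, 1, (500, 5)), rng.normal(4, 1, (500, 5))])

# Monte Carlo reference data: uniform over the bounding box of the real data.
lo, hi = X_real.min(axis=0), X_real.max(axis=0)
X_mc = rng.uniform(lo, hi, size=X_real.shape)

# Turn the unsupervised problem into a supervised one: Y = 1 for real, Y = 0 for Monte Carlo.
X = np.vstack([X_real, X_mc])
y = np.concatenate([np.ones(len(X_real)), np.zeros(len(X_mc))])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Regions where the classifier confidently prefers "real" over the uniform reference
# are regions of high-density structure (e.g. clusters) in the original data.
print(clf.predict_proba(X_real[:5])[:, 1])
```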
-
http://medlibrary.org/medwiki/Stefan-Boltzmann_law
# Stefan-Boltzmann law
See also: Black body, Black body radiation, Planck's law, and Thermal radiation
Graph of the total emitted energy of a black body, $j^{\star}$, as a function of its thermodynamic temperature $T\,$. In blue, the total energy according to the Wien approximation, $j^{\star}_{W} = j^{\star} / \zeta(4) \approx 0.924 \, \sigma T^{4} \!\,$
The Stefan–Boltzmann law, also known as Stefan's law, is a relation that describes the power radiated from a black body in terms of its temperature. Specifically, the Stefan–Boltzmann law states that the total energy radiated per unit surface area of a black body across all wavelengths per unit time (also known as the black-body irradiance or emissive power), $j^{\star}$, is directly proportional to the fourth power of the black body's thermodynamic temperature T:
$j^{\star} = \sigma T^{4}.$
The constant of proportionality σ, called the Stefan–Boltzmann constant or Stefan's constant, derives from other known constants of nature. The value of the constant is
$\sigma=\frac{2\pi^5 k^4}{15c^2h^3}= 5.670 400 \times 10^{-8}\, \mathrm{J\, s^{-1}m^{-2}K^{-4}},$
where k is the Boltzmann constant, h is Planck's constant, and c is the speed of light in a vacuum. Thus at 100 K the energy flux density is 5.67 W/m², at 1000 K it is 56,700 W/m², etc. The radiance (watts per square metre per steradian) is equal to these values divided by π.
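As a quick numerical check of the constant and of the example fluxes just quoted, here is a short snippet (the values of k, h and c below are the current exact SI values, which give a marginally more precise σ than the figure quoted above):

```python
import math

k = 1.380649e-23      # Boltzmann constant, J/K
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light in vacuum, m/s

# Stefan-Boltzmann constant from the formula above: 2 pi^5 k^4 / (15 c^2 h^3).
sigma = 2 * math.pi**5 * k**4 / (15 * c**2 * h**3)
print(f"sigma = {sigma:.6e} W m^-2 K^-4")        # ~5.6704e-08

# Black-body energy flux density sigma * T^4 at a few temperatures.
for T in (100, 1000):
    print(f"{T} K -> {sigma * T**4:,.1f} W/m^2")  # 5.7 and 56,703.7 W/m^2
```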
A body that does not absorb all incident radiation (sometimes known as a grey body) emits less total energy than a black body and is characterized by an emissivity, $\varepsilon < 1$:
$j^{\star} = \varepsilon\sigma T^{4}.$
The irradiance $j^{\star}$ has dimensions of energy flux (energy per time per area), and the SI units of measure are joules per second per square metre, or equivalently, watts per square metre. The SI unit for absolute temperature T is the kelvin. $\varepsilon$ is the emissivity of the grey body; if it is a perfect blackbody, $\varepsilon=1$. In the still more general (and realistic) case, the emissivity depends on the wavelength, $\varepsilon=\varepsilon(\lambda)$.
To find the total power radiated from an object, multiply by its surface area, $A$:
$P= A j^{\star} = A \varepsilon\sigma T^{4}.$
Metamaterials may be designed to exceed the Stefan–Boltzmann law.[1]
## History[]
The law was deduced by Jožef Stefan (1835–1893) in 1879 on the basis of experimental measurements made by John Tyndall and was derived from theoretical considerations, using thermodynamics, by Ludwig Boltzmann (1844–1906) in 1884. Boltzmann considered a certain ideal heat engine with light as a working matter instead of gas. The law is valid only for ideal black objects, the perfect radiators, called black bodies. Stefan published this law in the article Über die Beziehung zwischen der Wärmestrahlung und der Temperatur (On the relationship between thermal radiation and temperature) in the Bulletins from the sessions of the Vienna Academy of Sciences.
## Examples[]
### Temperature of the Sun[]
With his law Stefan also determined the temperature of the Sun's surface. He learned from the data of Charles Soret (1854–1904) that the energy flux density from the Sun is 29 times greater than the energy flux density of a certain warmed metal lamella (a thin plate). A round lamella was placed at such a distance from the measuring device that it would be seen at the same angle as the Sun. Soret estimated the temperature of the lamella to be approximately 1900 °C to 2000 °C. Stefan surmised that ⅓ of the energy flux from the Sun is absorbed by the Earth's atmosphere, so he took for the correct Sun's energy flux a value 3/2 times greater, namely 29 × 3/2 = 43.5.
Precise measurements of atmospheric absorption were not made until 1888 and 1904. The temperature of the lamella Stefan used was a median of the earlier estimates, 1950 °C, or about 2200 K on the absolute thermodynamic scale. Since $2.57^4 = 43.5$, it follows from the law that the temperature of the Sun is 2.57 times greater than the temperature of the lamella, so Stefan obtained a value of 5430 °C or about 5700 K (the modern value is 5778 K[2]). This was the first sensible value for the temperature of the Sun. Before this, values ranging from as low as 1800 °C to as high as 13,000,000 °C were claimed. The lower value of 1800 °C was determined by Claude Servais Mathias Pouillet (1790–1868) in 1838 using the Dulong–Petit law. Pouillet also took just half the value of the Sun's correct energy flux.
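Stefan's arithmetic is easy to reproduce; the short Python sketch below only uses the corrected flux ratio 43.5 and the lamella temperatures quoted above:

```python
# T_sun / T_lamella = 43.5**(1/4) ≈ 2.57, per the Stefan-Boltzmann law.
ratio = 43.5 ** 0.25
for T_lamella in (1950 + 273, 2200):       # ~1950 °C taken as ~2223 K, and 2200 K
    print(f"T_lamella = {T_lamella} K -> T_sun ≈ {ratio * T_lamella:.0f} K")
# Both give roughly 5650-5700 K, matching the ~5700 K value quoted above.
```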
### Temperature of stars[]
The temperature of stars other than the Sun can be approximated by similar means, treating the emitted energy as black-body radiation.[3] So:
$L = 4 \pi R^2 \sigma T_{e}^4$
where L is the luminosity, σ is the Stefan–Boltzmann constant, R is the stellar radius and T is the effective temperature. This same formula can be used to compute the approximate radius of a main sequence star relative to the sun:
$\frac{R}{R_\odot} \approx \left ( \frac{T_\odot}{T} \right )^{2} \cdot \sqrt{\frac{L}{L_\odot}}$
where $R_\odot$ is the solar radius, and so forth.
With the Stefan–Boltzmann law, astronomers can easily infer the radii of stars. The law also appears in the thermodynamics of black holes, in so-called Hawking radiation.
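As an illustration of the radius formula, here is a small Python sketch; the example star (100 solar luminosities, effective temperature 10,000 K) is a made-up input for illustration, not data from this article:

```python
# R/R_sun = (T_sun/T)^2 * sqrt(L/L_sun)
import math

T_SUN = 5778.0   # K, effective temperature of the Sun

def radius_in_solar_radii(L_over_Lsun, T_eff):
    return (T_SUN / T_eff) ** 2 * math.sqrt(L_over_Lsun)

print(radius_in_solar_radii(100.0, 10_000.0))   # ≈ 3.3 solar radii
```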
### Temperature of the Earth[]
Similarly, we can calculate the effective temperature of the Earth $T_E$ by equating the energy received from the Sun and the energy radiated by the Earth, under the black-body approximation. The amount of power, $E_S$, emitted by the Sun is given by:
$E_S = 4\pi r_S^2 \sigma T_S^4$
At Earth, this energy is passing through a sphere with a radius of $a_0$, the distance between the Earth and the Sun, and the energy passing through each square metre of the sphere is given by
$E_{a_0} = \frac{E_S}{4\pi a_0^2}$
The Earth has a radius of $r_E$, and therefore has a cross-section of $\pi r_E^2$. The amount of solar power absorbed by the Earth is thus given by:
$E_{abs} = \pi r_E^2 \times E_{a_0}$
The amount of energy emitted must equal the amount of energy absorbed, and so:
$\begin{align} 4\pi r_E^2 \sigma T_E^4 &= \pi r_E^2 \times E_{a_0} \\ &= \pi r_E^2 \times \frac{4\pi r_S^2\sigma T_S^4}{4\pi a_0^2} \\ \end{align}$
$T_E$ can then be found:
$\begin{align} T_E^4 &= \frac{r_S^2 T_S^4}{4 a_0^2} \\ T_E &= T_S \times \sqrt\frac{r_S}{2 a_0} \\ & = 5780 \; {\rm K} \times \sqrt{696 \times 10^{6} \; {\rm m} \over 2 \times 149.598 \times 10^{9} \; {\rm m} } \\ & \approx 279 \; {\rm K} \end{align}$
where $T_S$ is the temperature of the Sun, $r_S$ the radius of the Sun, and $a_0$ the distance between the Earth and the Sun. This gives an effective temperature of 6 °C on the surface of the Earth, assuming that it perfectly absorbs all emission falling on it and has no atmosphere.
The Earth has an albedo of 0.3, meaning that 30% of the solar radiation that hits the planet gets scattered back into space without absorption. The effect of albedo on temperature can be approximated by assuming that the energy absorbed is multiplied by 0.7, but that the planet still radiates as a black body (the latter by definition of effective temperature, which is what we are calculating). This approximation reduces the temperature by a factor of 0.71/4, giving 255 K (−18 °C).[4][5]
However, long-wave radiation from the surface of the earth is partially absorbed and re-radiated back down by greenhouse gases, namely water vapor, carbon dioxide and methane.[6][7] Since the emissivity with greenhouse effect (weighted more in the longer wavelengths where the Earth radiates) is reduced more than the absorptivity (weighted more in the shorter wavelengths of the Sun's radiation) is reduced, the equilibrium temperature is higher than the simple black-body calculation estimates. As a result, the Earth's actual average surface temperature is about 288 K (15 °C), which is higher than the 255 K effective temperature, and even higher than the 279 K temperature that a black body would have.
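The two effective-temperature estimates above (≈279 K for a perfectly absorbing black body, ≈255 K with albedo 0.3) follow directly from the numbers in the text:

```python
# Effective temperature of the Earth from the formula derived above.
T_S = 5780.0        # K, Sun's surface temperature
r_S = 696e6         # m, solar radius
a_0 = 149.598e9     # m, Earth-Sun distance

T_E = T_S * (r_S / (2 * a_0)) ** 0.5
print(round(T_E))                   # ≈ 279 K, black-body value
print(round(T_E * 0.7 ** 0.25))     # ≈ 255 K, with 30% of sunlight reflected
```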
## Derivation[]
### Thermodynamic derivation[]
The fact that the energy density of the box containing radiation is proportional to $T^{4}$ can be derived using thermodynamics. It follows from classical electrodynamics that the radiation pressure $P$ is related to the internal energy density:
$P=\frac{u}{3}$
The total internal energy of the box containing radiation can thus be written as:
$U=3PV\,$
Inserting this in the fundamental thermodynamic relation
$dU=T dS - P dV\,$
yields
$dU = 3P\,dV + 3V\,dP = T\,dS - P\,dV$
so
$dS=4\frac{P}{T}dV + 3\frac{V}{T}dP$
This equation can be used to derive a Maxwell relation. From the above equation it can be seen that:
$\left(\frac{\partial S}{\partial V}\right)_{P}\!\!=4\frac{P}{T}$
and
$\left(\frac{\partial S}{\partial P}\right)_{V}\!\!=3\frac{V}{T}$
The symmetry of second derivatives of $S$ with regard to $P$ and $V$ then implies:
$4\left(\frac{\partial \left(P/T\right)}{\partial P}\right)_{V}\!\!= 3\left(\frac{\partial \left(V/T\right)}{\partial V}\right)_{P}$
Because the pressure is proportional to the internal energy density it depends only on the temperature and not on the volume. In the derivative on the right hand side, the temperature is thus a constant. Evaluating the derivatives gives the differential equation:
$\frac{1}{P}\frac{dP}{dT}=\frac{4}{T}$
This can be solved by integrating with respect to T to give
$\ln P = 4 \ln T + c = \ln (\alpha \times T^4)$
This implies that
$u=3P \propto T^{4}$
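If sympy is available, the last step can be checked symbolically; this is only a verification of the differential equation solved above, not part of the original derivation:

```python
# Solve dP/dT = 4P/T and confirm P ∝ T^4.
import sympy as sp

T = sp.symbols('T', positive=True)
P = sp.Function('P')

sol = sp.dsolve(sp.Eq(P(T).diff(T), 4 * P(T) / T), P(T))
print(sol)   # Eq(P(T), C1*T**4)
```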
### Stefan–Boltzmann's law in n dimensional space[]
It can be shown that the radiation pressure in $n$ dimensional space is given by
$P=\frac{u}{n}$
So in $n$ dimensional space,
$T dS= (n+1)P dV + n V dP\,$
So,
$\frac{1}{P}\frac{dP}{dT}=\frac{(n+1)}{T}$
yielding
$P \propto T^{n+1}$
or
$u \propto T^{n+1}$
implying
$\frac{dQ}{dt} \propto T^{n+1}$
The same result is obtained as the integral over frequency of Planck's law for $n$ dimensional space, albeit with a different value for the Stefan-Boltzmann constant at each dimension. In general the constant is
$\sigma=\frac{1}{p(n)} \frac{\pi^{\frac{n}{2}}}{\Gamma(1+\frac{n}{2})} \frac{1}{c^{n-1}} \frac{n(n-1)}{h^{n}} k^{(n+1)} \Gamma(n+1) \zeta(n+1)$
where $\zeta(x)$ is Riemann's zeta function and $p(n)$ is a certain function of $n$, with $p(3)=4$.
### Derivation from Planck's law[]
The law can be derived by considering a small flat black body surface radiating out into a half-sphere. This derivation uses spherical coordinates, with φ as the zenith angle and θ as the azimuthal angle; and the small flat blackbody surface lies on the xy-plane, where φ = π/2.
The intensity of the light emitted from the blackbody surface is given by Planck's law :
$I(\nu,T) =\frac{2 h\nu^{3}}{c^2}\frac{1}{ e^{\frac{h\nu}{kT}}-1}.$
where
• $I(\nu,T)\,$ is the amount of energy per unit surface area per unit time per unit solid angle emitted at a frequency $\nu \,$ by a black body at temperature T.
• $h \,$ is Planck's constant
• $c \,$ is the speed of light, and
• $k \,$ is Boltzmann's constant.
The quantity $I(\nu,T) ~A ~d\nu ~d\Omega$ is the power radiated by a surface of area A through a solid angle dΩ in the frequency range between ν and ν + dν.
The Stefan–Boltzmann law gives the power emitted per unit area of the emitting body,
$\frac{P}{A} = \int_0^\infty I(\nu,T) d\nu \int d\Omega \,$
To derive the Stefan–Boltzmann law, we must integrate Ω over the half-sphere and integrate ν from 0 to ∞. Furthermore, because black bodies are Lambertian (i.e. they obey Lambert's cosine law), the intensity observed along the sphere will be the actual intensity times the cosine of the zenith angle φ, and in spherical coordinates, dΩ = sin(φ) dφ dθ.
$\begin{align} \frac{P}{A} & = \int_0^\infty I(\nu,T) \, d\nu \int_0^{2\pi} \, d\theta \int_0^{\pi/2} \cos \phi \sin \phi \, d\phi \\ & = \pi \int_0^\infty I(\nu,T) \, d\nu \end{align}$
Then we plug in for I:
$\frac{P}{A} = \frac{2 \pi h}{c^2} \int_0^\infty \frac{\nu^3}{ e^{\frac{h\nu}{kT}}-1} d\nu \,$
To do this integral, do a substitution,
$u = \frac{h \nu}{k T} \,$
$du = \frac{h}{k T} \, d\nu$
which gives:
$\frac{P}{A} = \frac{2 \pi h }{c^2} \left(\frac{k T}{h} \right)^4 \int_0^\infty \frac{u^3}{ e^u - 1} \, du.$
The integral on the right can be done in a number of ways (one is included in this article's appendix) – its answer is $\frac{\pi^4}{15}$, giving the result that, for a perfect blackbody surface:
$j^\star = \sigma T^4 ~, ~~ \sigma = \frac{2 \pi^5 k^4 }{15 c^2 h^3} = \frac{\pi^2 k^4}{60 \hbar^3 c^2}.$
Finally, this proof started out only considering a small flat surface. However, any differentiable surface can be approximated by a bunch of small flat surfaces. So long as the geometry of the surface does not cause the blackbody to reabsorb its own radiation, the total energy radiated is just the sum of the energies radiated by each surface; and the total surface area is just the sum of the areas of each surface—so this law holds for all convex blackbodies, too, so long as the surface has the same temperature throughout.
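For a quick numerical confirmation of the integral quoted above (assuming scipy is available; this is just a check of $\int_0^\infty u^3/(e^u-1)\,du = \pi^4/15$, not part of the derivation):

```python
# Numerical check of the Bose-Einstein integral appearing in the derivation.
import math
from scipy.integrate import quad

val, _ = quad(lambda u: u**3 / math.expm1(u), 0, math.inf)
print(val, math.pi**4 / 15)   # both ≈ 6.4939
```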
### Appendix[]
In one of the above derivations, the following integral appeared:
$J=\int_0^\infty \frac{x^{3}}{\exp\left(x\right)-1} \, dx = 6\,\mathrm{Li}_4(1) = 6 \zeta(4)$
where $\mathrm{Li}_s(z)$ is the polylogarithm function and $\zeta(z)$ is the Riemann zeta function. If the polylogarithm function and the Riemann zeta function are not available for calculation, there are a number of ways to do this integration; a simple one is given in the appendix of the Planck's law article. This appendix does the integral by contour integration. Consider the function:
$f(k) = \int_0^\infty \frac{\sin\left(kx\right)}{\exp\left(x\right)-1} \, dx.$
Using the Taylor expansion of the sine function, the coefficient of the $k^3$ term in this expression is exactly $-J/6$; in other words, $J$ is minus 6 times the coefficient of $k^3$ in the series expansion of $f(k)$. So, if we can find a closed form for $f(k)$, its Taylor expansion will give $J$.
In turn, sin(x) is the imaginary part of eix, so we can restate this as:
$f(k)=\lim_{\varepsilon\rightarrow 0}~\text{Im}~\int_\varepsilon^\infty \frac{\exp\left(ikx\right)}{\exp\left(x\right)-1} \, dx.$
To evaluate the integral in this equation we consider the contour integral:
$\oint_{C(\varepsilon, R)}\frac{\exp\left(ikz\right)}{\exp\left(z\right)-1} \, dz$
where $C(\varepsilon,R)$ is the contour from $\varepsilon$ to $R$, then to $R+2\pi i$, then to $\varepsilon+2\pi i$, then we go to the point $2\pi i - \varepsilon i$, avoiding the pole at $2\pi i$ by taking a clockwise quarter circle with radius $\varepsilon$ and center $2\pi i$. From there we go to $\varepsilon i$, and finally we return to $\varepsilon$, avoiding the pole at zero by taking a clockwise quarter circle with radius $\varepsilon$ and center zero.
[Figure: the integration contour $C(\varepsilon, R)$ described above.]
Because there are no poles in the integration contour we have:
$\oint_{C(\varepsilon, R)}\frac{\exp\left(ikz\right)}{\exp\left(z\right)-1} \, dz = 0.$
We now take the limit $R\rightarrow\infty$. In this limit the contribution from the segment from $R$ to $R+2\pi i$ tends to zero. Taking together the integrations over the segments from $\varepsilon$ to $R$ and from $R+2\pi i$ to $\varepsilon+2\pi i$, and using the fact that the integrations over clockwise quarter circles with radius $\varepsilon$ about simple poles are given, up to order $\varepsilon$, by minus $\textstyle \frac{i \pi}{2}$ times the residues at the poles, we find:
$\left[1-\exp\left(-2\pi k\right) \right]\int_\varepsilon^\infty \frac{\exp\left(ikx\right)}{\exp\left(x\right)-1} \, dx = i \int_\varepsilon^{2\pi-\varepsilon} \frac{\exp\left(-ky\right)}{\exp\left(iy\right)-1} \, dy + i\frac{\pi}{2}\left[1 + \exp \left(-2\pi k\right)\right] + \mathcal{O} \left(\varepsilon\right) \qquad \text{ (1)}$
The left hand side is the sum of the integral from $\varepsilon$ to $R$ and from $R+2 \pi i$ to $2 \pi i + \varepsilon$. We can rewrite the integrand of the integral on the r.h.s. as follows:
$\frac{1}{\exp\left(iy\right)-1} = \frac{\exp\left(-i\frac{y}{2}\right)}{\exp \left(i \frac{y}{2}\right) - \exp\left(-i\frac{y}{2}\right)} = \frac{1}{2i} \frac{\exp\left(-i\frac{y}{2}\right)}{\sin\left(\frac{y}{2}\right)}$
If we now take the imaginary part of both sides of Eq. (1) and take the limit $\varepsilon\rightarrow 0$ we find:
$f(k) = -\frac{1}{2k} + \frac{\pi}{2}\coth\left(\pi k\right)$
after using the relation:
$\coth\left(x\right) = \frac{1+\exp\left( -2x\right)}{1 - \exp\left( -2x \right)}.$
Using that the series expansion of $\coth(x)$ is given by:
$\coth(x)= \frac{1}{x}+\frac{1}{3}x-\frac{1}{45}x^{3} + \cdots$
we see that the coefficient of $k^3$ of the series expansion of $f(k)$ is $\textstyle -\frac{\pi^4}{90}$. This then implies that $\textstyle J = \frac{\pi^4}{15}$ and the result
$j^\star = \frac{2\pi^5 k^4}{15 h^3 c^2} T^4$
follows.
## Notes[]
1. "Luminosity of Stars". Australian Telescope Outreach and Education. Retrieved 2006-08-13.
2. P. K. Das, Resonance, Vol. 1, No. 3, pp. 54–65, 1996.
3. Cole, George H. A.; Woolfson, Michael M. (2002). Planetary Science: The Science of Planets Around Stars (1st ed.). Institute of Physics Publishing. pp. 36–37, 380–382. ISBN 0-7503-0815-X.
## References[]
• Stefan, J.: Über die Beziehung zwischen der Wärmestrahlung und der Temperatur, in: Sitzungsberichte der mathematisch-naturwissenschaftlichen Classe der kaiserlichen Akademie der Wissenschaften, Bd. 79 (Wien 1879), S. 391-428.
• Boltzmann, L.: Ableitung des Stefan'schen Gesetzes, betreffend die Abhängigkeit der Wärmestrahlung von der Temperatur aus der electromagnetischen Lichttheorie, in: Annalen der Physik und Chemie, Bd. 22 (1884), S. 291-294
Content in this section is adapted from the Wikipedia article "Stefan-Boltzmann law", licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License: http://en.wikipedia.org/w/index.php?title=Stefan-Boltzmann_law
http://www.physicsforums.com/showthread.php?t=473640
Physics Forums
## Un-spannable vector space?
If you have a vector space you can find a set of elements and consider their span, and then look for elements that cannot be spanned by them and so add them to the set, if you can't add anymore then you have a basis.
My question is: what happens if this process continues forever? Do you automatically call the space infinite-dimensional, or is there such a thing as an unspannable space?
Also, what happens if there is a spanning set but it cannot be labeled nicely, such as {sin(nx)}?
Thanks!!
Such spaces are said to be infinite-dimensional. Every vector space has a basis, so there's no need for terms like unspannable. You can use a notation like $\{\mathbb R\ni x\mapsto \sin nx\in\mathbb R|n\in\mathbb Z^+\}$, or be less formal and shorten it to $\{\sin nx|n\in\mathbb Z^+\}$. You can also e.g. define, for each n=1,2,..., $u_n:\mathbb R\rightarrow\mathbb R$, by $u_n(x)=\sin nx$ for all $x\in \mathbb R$, and write the set as $\{u_n|n\in\mathbb Z^+\}$.
there's a theorem out there that says every finite dimensional vector space must have a basis (and it uses Zorn's lemma), so that should clear things up for you
In the finite-dimensional case, the existence of a basis follows immediately from the definition of "basis" and "finite dimensional". So you only need Zorn when the vector space is infinite dimensional.
Blog Entries: 8 Recognitions: Gold Member Science Advisor Staff Emeritus Like Frederik has already said: using Zorn's lemma, we can show that every possible vector space has a basis. But this uses the axiom of choice and is highly unconstructive. If we do not assume the axiom of choice, then there may be some vector spaces without a basis (for example: consider $$\mathbb{R}$$ as $$\mathbb{Q}$$-vector space). So while we can show (using choice) that every vector space has a basis, there is something undesirable about this. Namely the fact that we can never write down the basis in any way. But the entire point of having a basis is so that we can use it to know more about the vector space. Thus having a basis of an infinite dimensional vector space seems to be a little useless. That's why some people proposed things which weren't a basis, but which did have some desirable properties. For example, a Schauder-basis is a set of elements such that every element can be written as an infinite linear combination of basis elements. In infinite-dimensional (separable) spaces, the concept of Schauder basis is a good replacement for the concept of basis...
Quote by alemsalem: "What happens if this process continues forever? Do you automatically call the space infinite-dimensional, or is there such a thing as an unspannable space?"
There is no general algorithm for finding the basis of an infinite dimensional vector space. In this case a basis means a set of independent vectors that span the entire space by finite linear combination. That means that an arbitrary vector in the space is a linear combination of finitely many basis vectors. Since there is no algorithm one must make a postulate that allows one to conclude that there is a basis. A typical such postulate is the Axiom of Choice.
If there is a metric then one can talk about infinite linear combinations of basis vectors that converge to an arbitrary vector in the space. The classic examples are L^2 normed function spaces. This is a different idea of basis.
Quote by micromass But this uses the axiom of choice and is highly unconstructive.
It is an eye of the beholder thing. e.g. with the well-ordering theorem* in hand, the algorithm quoted in the opening post can be continued transfinitely to produce a basis. I wouldn't call it highly unconstructive, but I know opinion differs on such points.
*: For the OP, the well-ordering theorem is equivalent to the axiom of choice and to Zorn's lemma
Quote by dexdt: "there's a theorem out there that says every finite dimensional vector space must have a basis (and it uses Zorn's lemma), so that should clear things up for you"
More correctly, Zorn's Lemma shows that every vector space has a basis but isn't really needed for finite dimensional vector spaces. Did you mean to say "infinite dimensional"?
alemsalem, the definition of "finite dimensional" is that the space can be spanned by some finite set of vectors. So any "unspannable space" would be infinite dimensional.
Yes, there do exist spaces with uncountable bases. "What happens"? You avoid them like the plague!
Every vector space spans itself, so strictly speaking "unspannable spaces" (with no mention of independence) do not exist.
Quote by mathwonk every vector space spans itself, so strictly speaking "unspannable spaces" (with no mention of independence) do not exist.
Absolutely true. But since alemsalem said "If you have a vector space you can find a set of elements and consider their span, and then look for elements that cannot be spanned by them and so add them to the set, if you can't add anymore then you have a basis." I assumed he was referring to spanning by a finite set.
Yes indeed Halls, your answer was more useful whereas mine was just picky. But that's my strong suit!
http://math.stackexchange.com/questions/293737/are-these-two-10-vertex-graphs-isomorphic
# Are these two 10-vertex graphs isomorphic?
Explain if these two graphs are isomorphic. If so, give the 1-1 correspondence of nodes.
I've checked that the two graphs have the same degrees, edges, and vertices, and check that they both aren't bipartite. I just can't seem to come up with a correct 1-1 node correspondence.
Just try! Finding out whether two graphs are isomorphic is actually a quite hard problem, so hard that we can use it in cryptography. However, one hint is that isomorphisms cannot change the degree of nodes, so if the graphs aren't isomorphic, you might find this out by counting how many nodes there are of degree 1, 2, $\ldots$. – Dario Feb 3 at 17:05
I've checked that the two graphs have the same degrees, edges, and vertices, and checked that they both aren't bipartite. I just can't seem to come up with a correct 1-1 node correspondence. – thebottle394 Feb 3 at 17:07
Have Powerpoint? Just create one graph with points and lines and move them around till they look like the other graph. – Dario Feb 3 at 17:15
I downvoted this question for lack of effort; then saw your comment. I added it to your question so I can change my vote to a +1. – Douglas S. Stones Feb 3 at 17:34
## 2 Answers
The graph at left has a 3-cycle $(a,b,j)$ (also $(f,e,g)$). The graph at right has none. They are not isomorphic.
It may be worth noting that the graph at right is simply (the skeleton of) a pentagonal prism; consequently, each vertex is (in an appropriate sense) "equivalent" to every other vertex. This is not the case in the graph at left.
Also, the graph at right, as illustrated, is planar; no edges intersect. In the graph at left, you can replace "chords" $bj$ and $eg$ with paths that travel "outside the circle", eliminating intersections with $af$, but you can't do that with both of $ch$ and $di$ and not have $ch$ and $di$ cross; an edge intersection is inevitable (although this needs rigorous proof), so the graph is non-planar.
The outer cycle together with the three chords $af$, $ch$, $di$ give a subdivision of $K_{3,3}$, so the graph on the left is not planar. – Chris Godsil Feb 3 at 20:36
The right one has 4-cycles 1,2,3,10; 3,4,9,10; 4,5,8,9; and 5,6,7,8. On the left I can't find more than the single 4-cycle c,d,i,h. The one on the right also has two disjoint 5-cycles: 1,7,8,9,10 and 2,3,4,5,6. On the left I can find more 5-cycles, but none involve a or f. Maybe I'm overlooking some.
Added: Yes, this is a route to show they are not isomorphic. For them to be isomorphic you need the adjacency matrix to be the same once you find the proper mapping. If you can find any property that doesn't match they are not isomorphic. The degrees, number of vertices, number of edges are easy to check, so should be the first step. To make sure, you would say that whatever 3 maps to has to be part of two four-cycles, which also include the thing 10 maps to. Look through all the vertices and see that you can't satisfy this.
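A quick way to compare invariants like these is with networkx. The original figures are not reproduced here, so the edge lists below are reconstructed from the cycles and chords named in the answers (an outer cycle a–b–…–j with chords af, bj, ch, di, eg on the left; a pentagonal prism on the right) — treat them as an assumption rather than the official problem data.

```python
# Compare simple isomorphism invariants of the two (reconstructed) graphs.
import networkx as nx

left = nx.cycle_graph(list("abcdefghij"))            # outer 10-cycle a-b-...-j-a
left.add_edges_from([("a", "f"), ("b", "j"), ("c", "h"), ("d", "i"), ("e", "g")])

right = nx.circular_ladder_graph(5)                  # the pentagonal prism

same_degrees = sorted(d for _, d in left.degree()) == sorted(d for _, d in right.degree())
print(same_degrees)                                  # True: both are 3-regular
print(sum(nx.triangles(left).values()) // 3,         # 2 triangles on the left
      sum(nx.triangles(right).values()) // 3)        # 0 triangles on the right
print(nx.is_isomorphic(left, right))                 # False
```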
is that a valid reason for two graphs to not be isomorphic? – thebottle394 Feb 3 at 17:59
http://mathoverflow.net/questions/69222/h4-of-the-monster
## H^4 of the Monster
The Monster group $M$ acts on the moonshine vertex algebra $V^\natural$.
Because $V^\natural$ is a holomorphic vertex algebra (i.e., it has a unique irreducible module), there is a corresponding cohomology class $c\in H^3(M;S^1)=H^4(M;\mathbb Z)$ associated to this action.
Roughly speaking, the construction of that class goes as follows:
• For every $g\in M$, pick an irreducible twisted module $V_g$ (there is only one up to isomorphism).
• For every pair $g,h\in M$, pick an isomorphism $V_g\boxtimes V_h \to V_{gh}$,
where $\boxtimes$ denotes the fusion of twisted reps.
• Given three elements $g,h,k\in M$, the cocycle $c(g,h,k)\in S^1$ is the discrepancy between $$(V_g\boxtimes V_h)\boxtimes V_k \to V_{gh}\boxtimes V_k \to V_{ghk}\qquad\text{and}\qquad V_g\boxtimes (V_h\boxtimes V_k) \to V_g\boxtimes V_{hk} \to V_{ghk}$$
I think that not much is known about $H^4(M,\mathbb Z)$...
But is anything maybe known about that cohomology class? Is it non-zero?
Assuming it is non-zero, would that have any implications?...
More importantly: what is the meaning of that class?
Clearly, it's some sort of associator for the 2-gerbe sitting over the delooping of $M$.... ;-) but I reckon you'd have guessed this yourself. Breen's 1994 Asterisque volume may be useful here. – David Roberts Jul 1 2011 at 0:25
BTW, very nice question :) – David Roberts Jul 1 2011 at 0:25
## 1 Answer
There is some evidence from characters that $H^4(M,\mathbb{Z})$ contains $\mathbb{Z}/12\mathbb{Z}$. In particular, the conjugacy class 24J (made from certain elements of order 24) has a character of level 288, and the corresponding irreducible twisted modules have a character whose expansion is in powers of $q^{1/288}$. Fusion in a cyclic group generated by a 24J element then yields a $1/12$ discrepancy in $L_0$-eigenvalues, meaning you will pick up 12th roots of unity from the associator. If you pull back along a pointed map $B(\mathbb{Z}/24\mathbb{Z}) \to BM$ corresponding to an element in class 24J (i.e., if you forget about twisted modules outside this cyclic group) you get a cocycle of order 12. This is the largest order you can get by this method - everything else divides 12. I don't know how the cocycles corresponding to different cyclic groups fit together.
I don't know if you've seen Mason's paper, Orbifold conformal field theory and cohomology of the monster, but it is about related stuff. I don't understand how he got his meta-theorem with the number 48 at the end, though.
As far as implications or meaning of the cocycle, all I can say is that the automorphism 2-group of the category of twisted modules of $V^\natural$ has the monster as its truncation, and its 2-group structure is nontrivial. I've heard some speculation about twisting monster-equivariant elliptic cohomology, but I don't understand it. If you believe in AdS/CFT, this might say something about pure quantum gravity in 3 dimensions, but I have no idea what that would be.
Update Nov 2, 2011: I was at a conference in September, where G. Mason pointed out to me that $H^4(M,\mathbb{Z})$ probably contains an element of order 8, and therefore also $\mathbb{Z}/24\mathbb{Z}$. I believe the argument was the following: there is an order 8 element $g$ whose centralizer in the monster acts projectively on the unique irreducible $g$-twisted module of the monster vertex algebra $V^\natural$, such that one needs to pass to a cyclic degree 8 central extension to get an honest action. Rather than just looking at $L_0$-eigenvalues, one needs to examine character tables to eliminate smaller central extensions here. Naturally, like the claims I described before, the validity of this argument depends on some standard conjectures about the structure of twisted modules.
It seems that the relevant group-theoretic computation may have been known to S. Norton for quite some time. In his 2001 paper From moonshine to the monster that reconstructed information about the monster from a revised form of the generalized moonshine conjecture, he explicitly included a 24th root-of-unity trace ambiguity. I had thought perhaps he just liked the number 24 more than 12, but now I am leaning toward the possibility that he had a good reason.
Thank you Scott. This is an awesome answer! – André Henriques Jul 1 2011 at 22:07
"I've heard some speculation about twisting monster-equivariant elliptic cohomology, but I don't understand it": from whom? – André Henriques Jul 2 2011 at 21:47
from Jacob and Nora, but their stories seem to differ. – S. Carnahan♦ Jul 3 2011 at 14:33
http://math.stackexchange.com/questions/32924/uniform-distributions-probability
# Uniform Distributions - Probability
Suppose $U_1$, $U_2$ and $U_3$ are independent uniform $(0,1)$. I am supposed to find $P(\max(U_1,U_2) > U_3)$.
Using inclusion–exclusion, I rewrote this probability as:
$$2P(U_1>U_3) - P(U_1 > U_3 \text{ and } U_2 > U_3) = 2(1/2) - 1/3 = 2/3.$$
Can someone check to see if $2/3$ is also what they got? Thanks
Please accept answers to your questions. Accepting answers is a simple way of thanking the strangers who are helping you out. You can accept an answer by ticking the check-mark next to the one you found most helpful. – Qiaochu Yuan Apr 15 '11 at 4:30
## 3 Answers
This question is an excellent example of a setting where symmetry considerations yield the answer with almost no effort. Symmetry is of common use in statistical mechanics and, in the more general setting of mathematics as a whole, one could go back (at least) to Hermann Weyl for thoughts about its role. As regards probability, some recent discussion is in Symmetry and Probability by Jill North, with some Comments on the preceding, by Branden Fitelson.
The key remark in the question at hand is that the distribution of the random vector $(U_1,U_2,U_3)$ is invariant by the permutations of its coordinates, for example $(U_1,U_2,U_3)$ and $(U_3,U_1,U_2)$ share the same statistical properties, as well as $(U_{s(1)},U_{s(2)},U_{s(3)})$ for any permutation $s$ in the symmetric group $\mathfrak{S}_3$.
Now, one is interested in the probability of the event $[\max(U_1,U_2) > U_3]$. The complementary event is $$A_3=[\max(U_1,U_2) < U_3]$$ and $A_3$ simply means that $U_3$ is the largest of the three values $U_1$, $U_2$, $U_3$ (there is almost surely no tie here because the common distribution has no atom). By symmetry, the events $A_1$, $A_2$ and $A_3$ have the same probability where $$A_1=[\max(U_2,U_3) < U_1]\quad\mbox{and}\quad A_2=[\max(U_1,U_3) < U_2].$$ Since these events are disjoint and their union is the universe, their common probability is $1/3$ and $$P(\max(U_1,U_2) > U_3)=1-P(A_3)=2/3.$$ One sees that everything above works for an i.i.d. sample of size $n$ based on any atomless distribution on the real line (just replace $P(A_i)=1/3$ by $P(A_i)=1/n$), and even in the more general setting of exchangeable random variables, as soon as the ties have probability zero.
So I believe I know the setting of the problem that you're asking about here. (This is review exercise 5.14(e) in Pitman's text, which I know because I assigned it to my students this week.) In particular, given the context, you might be tempted to try to find the density of $\max(U_1, U_2)$, state that it's independent of $U_3$, write down the joint density of $\max(U_1, U_2)$ and $U_3$, and integrate over the appropriate region. These are probably all things you should be able to do.
But don't do them! Symmetry is the right approach to this problem. Actually, the simplest way to solve this problem is to note that the event $\max(U_1, U_2) < U_3$ is just the event that $U_3$ is the largest of the three random variables.
Using `\max` will render $\max$ correctly. Cheers. – cardinal Apr 21 '11 at 22:10
Yes, the answer $2/3$ is correct.
Hints for a solution. Work with the complement of the event $\max\{U_1,U_2\}>U_3$, and use the law of total probability conditioning on $U_3$. More generally, try showing the following (using the hints): ${\rm P}(\max\{U_1,\ldots,U_n\}>U_{n+1})=n/(n+1)$, where the $U_i$ are independent uniform$(0,1)$ random variables.
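If you want an empirical sanity check, a few lines of Monte Carlo (my own sketch, not part of any answer above) agree with both the 2/3 answer and the general $n/(n+1)$ formula:

```python
# Estimate P(max(U_1,...,U_n) > U_{n+1}) by simulation.
import random

def estimate(n, trials=200_000):
    hits = 0
    for _ in range(trials):
        us = [random.random() for _ in range(n + 1)]
        if max(us[:n]) > us[n]:
            hits += 1
    return hits / trials

print(estimate(2))   # ≈ 0.667 = 2/3
print(estimate(5))   # ≈ 0.833 = 5/6
```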
http://mathoverflow.net/questions/12136/freyd-cover-of-a-category/12149
## Freyd cover of a category.
I couldn't find any information about the free category built up from the Freyd cover. Where can I find more about the Freyd cover of a category (not a topos!)?
Edit: The definition has been given in Lambek and Scott's "Higher order categorical logic". I think (according to L. Román) it is initial among all categories endowed with products and a weak nno.
Edit: (Added by Tom Leinster) Here's the definition of Freyd cover, taken from Lambek and Scott (22.1). Let $\mathcal{T}$ be a category with terminal object. Its Freyd cover $\hat{\mathcal{T}}$ is the comma category whose objects are the triples $(X, \xi, U)$ where:
• $X$ is a set
• $U$ is an object of $\mathcal{T}$
• $\xi: X \to \mathcal{T}(1, U)$ is a function.
Lambek and Scott emphasize that $\hat{\mathcal{T}}$ has a terminal object and that it comes equipped with a terminal-object-preserving functor $G: \hat{\mathcal{T}} \to \mathcal{T}$. Strictly speaking, the Freyd cover is the pair $(\hat{\mathcal{T}}, G)$, not just the category $\hat{\mathcal{T}}$ itself.
Could you give a definition of a Freyd Cover? Maybe if you don't know it in the case you're interested in, give the case you do know, and why you think the definition works in your case somehow? – Charles Siegel Jan 17 2010 at 19:43
Ximo: I thought it would be helpful to merge your comment (giving the reference) into the main question, so I did it. You might want to delete that comment now. Also, if you don't like the way I've edited your question, you can edit it yourself (and undo my changes if you want). – Tom Leinster Jan 17 2010 at 21:26
## 2 Answers
I don't know anything about it myself, but here are some other phrases you might try looking up.
The Freyd cover of a category is sometimes known as the Sierpinski cone, or "scone". It's also a special case of Artin gluing. Given a category $\mathcal{T}$ and a functor $F: \mathcal{T} \to \mathbf{Set}$, the Artin gluing of $F$ is the comma category $\mathbf{Set}\downarrow F$ whose objects are triples $(X, \xi, U)$ where:
• $X$ is a set
• $U$ is an object of $\mathcal{T}$
• $\xi$ is a function $X \to F(U)$.
So the Freyd cover is the special case $F = \mathcal{T}(1, -)$.
You can find more on Artin gluing in this important (and nice) paper:
Aurelio Carboni, Peter Johnstone, Connected limits, familial representability and Artin glueing, Mathematical Structures in Computer Science 5 (1995), 441--459
plus
Aurelio Carboni, Peter Johnstone, Corrigenda to 'Connected limits...', Mathematical Structures in Computer Science 14 (2004), 185--187.
(Incidentally, my Oxford English Dictionary tells me that the correct spelling is 'gluing', but some people, such as these authors, use 'glueing'. I'm sure Peter Johnstone has a reason.)
Freyd covers are a fundamental tool in the semantics of programming languages. Here, the technique is called "logical relations" or sometimes "Tait-Girard reducibility candidates".
The general idea is that you start with a crude categorical semantics of a programming language, which does not validate all the properties you want, and then use a Freyd cover to show that every definable program actually does satisfy those properties. We use these things when the category in question is closed (monoidal or cartesian), but it does not necessarily have to form a topos.
John Mitchell and Andre Scedrov have a paper, "Notes on Sconing and Relators", in which they study the applications to programming languages.
So you mean: is it another way to pass from syntactic (what you call crude semantics) to semantic in every case? – Doctor Gibarian Dec 29 2010 at 13:21
It's the other way around: we start with a simple model which may contain elements which do not correspond to any syntactically definable program, and may have unwanted properties. Then, we use a logical relation to construct a cut-down model only containing elements with the desired properties, and then we prove that every program you can define may also be interpreted within this new model. – Neel Krishnaswami Dec 29 2010 at 14:34
http://mathoverflow.net/questions/67824?sort=oldest
## Why does Hom need an identity in the definition of the category?
I was studying the axioms of a category, and noted that one axiom says there is an element $1_X\in Hom(X,X)$ for any object $X$ which serves as the identity. Why is this axiom necessary? What happens if I drop this axiom?
Background: I can define the category of affine holomorphic symplectic varieties, by saying
1. The objects are semisimple algebraic groups
2. The morphisms $Hom(G,G')$ are affine holomorphic symplectic varieties with Hamiltonian $G\times G'$ action
3. The composition of two morphisms $X\in Hom(G,G')$ and $Y\in Hom(G',G'')$ is given by the holomorphic symplectic quotient $X\times Y//G'$.
This becomes a nice symmetric monoidal category; the identity in $Hom(G,G)$ is $T^*G$.
Suppose I want to consider the category of hyperkähler manifolds instead. I can try the following
1. The objects are semisimple compact groups
2. The morphisms $Hom(G,G')$ are hyperkähler manifolds with Hamiltonian $G\times G'$ action
3. The composition of two morphisms $X\in Hom(G,G')$ and $Y\in Hom(G',G'')$ is given by the hyperkähler quotient $X\times Y///G'$.
Now, the problem is that $T^*G_\mathbb{C}$ has a hyperkähler metric (constructed by Kronheimer) and almost acts like an identity, but not quite: given a hyperkähler manifold $X$ with $G$ action, $T^*G_\mathbb{C} \times X /// G$ is equivalent to $X$ as a holomorphic symplectic variety but not as a hyperkähler manifold.
What should I do?
For my purpose, I guess using the terminology semigroupoid would suffice (I just want to define the target "category" of a TQFT precisely.) But I'm curious what kind of hell will break loose if I drop this axiom, why the people who originally defined categories included this into the axiom, etc.
It looks like you want to take the quotient by $G'$ in both cases. – S. Carnahan♦ Jun 15 2011 at 3:44
If you drop the axiom, you get a "category without identity" which is also called a semigroupoid. – S. Carnahan♦ Jun 15 2011 at 3:52
Try ncatlab.org/nlab/show/semicategory instead. One would only have a semigroupoid if all arrows were invertible. And really, you can't do that, because you can't express invertibility, because you need identity arrows for that. As far as asking 'why identities', consider asking the same question for groups: why do groups have identity elements? – David Roberts Jun 15 2011 at 4:42
@David: wikipedia en.wikipedia.org/wiki/Semigroupoid says it doesn't have the invertibility axiom. Which is the standard definition of the terminology? – Yuji Tachikawa Jun 15 2011 at 4:44
Monoid = group without inverses. Semigroup = group without inverses and without identity. Category = many-object monoid or monoid-oid. So, a category without identity should be called "semigroup-oid". – Qfwfq Jun 15 2011 at 13:11
## 4 Answers
Your structure can be described as a "category without identity", which has been given the names "semicategory" and "semigroupoid" presumably due to independent discoveries.
Some Googling suggests the term "semicategory" came first, in a 1972 TAMS paper by Mitchell. The name is motivated by applying an analogy connecting groups and semigroups to categories (as categories without identities or inverses), and it seems to be popular among people who study categories.
The term "semigroupoid" seems to have appeared first in Tilson, Categories as Algebra: an essential ingredient in the theory of monoids in Journal of Pure and Applied Algebra 48 (1987) 83-198. The name is motivated by applying an analogy connecting groups and groupoids to semigroups (as semigroups with multiple objects), and it seems to be popular among people who study semigroups.
I think the analogy is a bit weak on the semicategory side, since categories don't straightforwardly generalize groups. I'm not in charge, though.
John Baez points out in his TWF week 296 that there is a canonical way to make a category out of a semicategory, by formally adding, for each object, an identity element to the set of morphisms from that object to itself (and preserving all other morphism sets and composition laws). Any previously existing identities become idempotents. He notes that the categories formed this way are distinguished among all categories by the property that all invertible morphisms are identities. In particular, this process of formally adding identities is reversible in a canonical way, and no hell will break loose.
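Here is a toy illustration of that "formally add identities" construction, written as a plain-Python sketch of my own (the representation and names are made up for illustration, not any standard library): a semicategory is given by its objects, typed morphism names, and a partial composition table, and we freely adjoin a strict identity at each object. Any pre-existing identity-like morphism survives only as an idempotent, as noted above.

```python
def add_identities(objects, morphisms, compose):
    """Freely adjoin an identity at each object of a semicategory.

    morphisms: name -> (source, target)
    compose:   (g, f) -> name of the composite g∘f (a partial table)
    """
    new_morphisms = dict(morphisms)
    new_compose = dict(compose)
    for X in objects:
        new_morphisms[("id", X)] = (X, X)          # one new formal identity per object
    for name, (src, tgt) in new_morphisms.items():
        new_compose[(("id", tgt), name)] = name    # id ∘ f = f
        new_compose[(name, ("id", src))] = name    # g ∘ id = g
    return new_morphisms, new_compose

# One object X with a single idempotent e (e∘e = e); after the construction,
# e is still there but is no longer an identity -- ("id", "X") is.
ms, comp = add_identities({"X"}, {"e": ("X", "X")}, {("e", "e"): "e"})
print(comp[(("id", "X"), "e")], comp[("e", "e")])   # e e
```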
Thanks Scott, TWF 296 was particularly useful. – Yuji Tachikawa Jun 15 2011 at 15:01
I guess one can selectively add identities, so as to avoid those extra idempotents. – Mariano Suárez-Alvarez Jun 15 2011 at 16:14
I've never liked this "add an identity everywhere" approach. In the algebra case, this corresponds to the "one-point compactification", and a better approach is the "Stone-Cech compactification". For the latter, on the algebra side: if A is a possibly-non-unital algebra, you consider the algebra of endomorphisms of A-as-a-right-A-module. This is unital, and satisfies the right universal property. Something like this should be doable for categories as well. – Theo Johnson-Freyd Jun 16 2011 at 4:55
@Theo: I must confess I do not understand your need to attach value judgments to definitions and constructions. On a more mathematical note, I can see how your "endomorphism monoid" method will work on each object, but I don't know how to make it into a functor from $SemiCat$ to $Cat$. – S. Carnahan♦ Jun 16 2011 at 5:38
One is often interested in isomorphisms in a category. After all, categories are seldom regarded as rigid objects: two categories are considered to be 'the same' if they are equivalent, and the notion of equivalence of categories relies on isomorphisms. Categories without identities are too rigid (at least from a categorical point of view).
I suppose the extent to which hell breaks loose depends entirely on the purpose you're using categories for. Dropping identities presumably invalidates the Yoneda lemma, and therefore all the results in category theory that depend on it. But if you just want to use (monoidal) functors as bookkeeping devices without expecting "deeper" category theory to predict things for you, nothing much happens. To follow up on Scott's answer, there is for example a perfectly good theory about adjunctions when one ignores identities, and there is in fact a relation to formally adding identities. See Hayashi "Adjunction of semifunctors" in Theoretical Computer Science 41:95--104, 1985, and Hoofman and Moerdijk "A remark on the theory of semi-functors" in Mathematical Structures in Computer Science 5:1--8, 1995.
Thank you for the reply. To talk about Yoneda's lemma, Hom needs to be a set, right? I'm not sure if the class of all hyperkahler manifolds is a set... I've heard that the class of all groups is not a set. ??? – Yuji Tachikawa Jun 15 2011 at 19:12
@Yuji Tachikawa As far as there is a bound on the cardinality of an atlas, then it is a set. For instance, the collection of compact hyperkahler manifolds is a set. The class of all groups is clearly not a set because each set has a group of bijections associated and these groups already form a class. – Leo Alonso Jun 15 2011 at 21:20
@Leo Thanks, that was very clear. – Yuji Tachikawa Jun 16 2011 at 0:41
As Fernando points out, you can't talk about isomorphisms in a semicategory, which means that they won't be as much use as categories in describing universes of mathematical objects. But the category of semicategories has a surprisingly interesting relationship to that of categories. There is of course a forgetful functor $\mathrm{Cat} \to \mathrm{Semicat}$, and as Scott says it has a left adjoint that does what you expect. But it also has a right adjoint, which takes a semicategory S to the category of idempotents in S: the objects are idempotents $e \colon a \to a$ and a morphism $e \to e'$ is a morphism $f \colon a \to a'$ in S such that $fe = f = e'f$. So we get a monad on Cat whose unit is the canonical functor from a category to its idempotent-splitting completion, or Cauchy completion, or Karoubi envelope.
Böhm, Lack and Street use this framework here to talk about weak Hopf algebras. They show that 'weak monoids' fall naturally out of the formal theory of monads if instead of working directly in a bicategory you Cauchy-complete the hom-categories first.
Another application of semicategories and semifunctors is in computer science: Hayashi, Adjunction of semifunctors: categorical structures in nonextensional $\lambda$-calculus, TCS 41, shows how to describe $\lambda$-calculus without the $\eta$-law quite elegantly. I haven't worked it out, but it seems to me that this framework should also give a way of talking about 'weak limits' (the kind with not-necessarily-unique mediating morphisms) in terms of adjunctions.
You mean a forgetful functor $\text{Cat} \to \text{Semicat}$. – Qiaochu Yuan Jun 15 2011 at 16:27
Dammit, yes, thanks. Fixed. – Finn Lawler Jun 15 2011 at 16:40
+1 -- very interesting; I didn't know any of this! – Todd Trimble Jun 15 2011 at 16:50
http://math.stackexchange.com/questions/67058/are-there-lebesgue-measurable-functions-not-almost-everywhere-equal-to-a-continu?answertab=votes
# Are there Lebesgue-measurable functions not almost everywhere equal to a continuous function
This is what I originally meant to ask in "Are there Lebesgue-measurable functions non-continuous almost everywhere?"
Does there exist a function $f\colon [0,1]\to\mathbb{R}$ such that:
1. $f$ is Lebesgue measurable; and
2. For every continuous $g\colon [0,1]\to\mathbb{R}$, the set of points where $f(x)\neq g(x)$ has positive measure?
The question listed as duplicate is actually a stronger statement. For this one, $f = 1_{[0,1/2]}$ is a counterexample. – Nate Eldredge Dec 10 '11 at 2:07
## 1 Answer
Yes. Fix any measurable set $A$ such that both $A$ and its complement have non-null intersection with each nonempty open interval. Examples are discussed here. Then the characteristic function of $A$ is as desired, since removing a null set does not change this intersection property, which rules out having a continuous extension.
http://www.physicsforums.com/showthread.php?s=3c6e864be97c1387737927bfeafa691c&p=4278721
Physics Forums
## Partition function related to number of microstates
Hi,
I have a question about the partition function.
It is defined as ## Z = \sum_{i} e^{-\beta \epsilon_{i}} ##, where ##\epsilon_i## denotes the amount of energy transferred from the large system to the small system. By using the formula for the Shannon entropy ##S = - k \sum_i P_i \log P_i## (with ##k## an arbitrary constant, here ##k_B##), I end up with the following: $$S = - k \sum_i P_i \log P_i = (k \sum_i P_i \beta \epsilon_i) + (k \sum_i P_i \log Z) = \frac{U}{T} + k \log Z$$
This simplifies to ##Z = e^{-\beta F}## by using the Helmholtz free energy defined as ##F = U - T S##. But Boltzmann's formula for entropy states ##S = k \log \Omega##, where ##\Omega## denotes the number of possible microstates for a given macrostate. So we will get $$\Omega = e^{S/k} = e^{\beta (U - F)} = Z e^{\beta U}$$
So the partition function is related to the number of microstates, but multiplied by a factor ##e^{\beta U}##. And this brings me to my question: why is it multiplied by that factor? Maybe the answer is quite simple, but I can't seem to think of anything.
Reply from a Homework Helper, quoting Troy124's post above:
Boltzmann's formula ##S = k_B \ln \Omega## is applicable only to the case of a microcanonical ensemble - a system in which every microstate is equally likely. Note that setting ##P_i = 1/\Omega## in ##S = -k_B \sum_{i=1}^\Omega P_i \ln P_i## gives Boltzmann's formula.
The partition function ##Z = \sum_i \exp(-\beta \epsilon_i)## corresponds to a canonical ensemble. The microstates in a canonical ensemble are not equally likely, so Boltzmann's formula ##S = k_B \ln \Omega## does not apply. (However, the more general formula, ##S = -k_B \sum_{i=1}^\Omega P_i \ln P_i##, does still apply).
You thus cannot equate ##\Omega## with ##Ze^{\beta U}##, because the two formulas you used for the entropy are not simultaneously true.
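To make this concrete, here is a minimal Python sketch (my own, for an assumed two-level system with energies ##0## and ##\epsilon## and with ##k_B = 1##): it checks that ##S = U/T + k \log Z## and ##Z = e^{-\beta F}## hold for the canonical probabilities, while ##S## differs from ##k \log \Omega## because the microstates are not equally likely.

```python
import math

# Assumed two-level system: energies 0 and eps, in units where k_B = 1.
k, T = 1.0, 1.0
beta = 1.0 / (k * T)
energies = [0.0, 1.0]

Z = sum(math.exp(-beta * e) for e in energies)       # canonical partition function
p = [math.exp(-beta * e) / Z for e in energies]      # canonical probabilities (not uniform)
U = sum(pi * e for pi, e in zip(p, energies))        # average energy
S = -k * sum(pi * math.log(pi) for pi in p)          # Gibbs/Shannon entropy
F = U - T * S                                        # Helmholtz free energy

print(abs(S - (U / T + k * math.log(Z))) < 1e-12)    # True: S = U/T + k ln Z
print(abs(Z - math.exp(-beta * F)) < 1e-12)          # True: Z = exp(-beta * F)
print(S, k * math.log(len(energies)))                # S differs from k ln(Omega) = k ln 2
```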
Hi, thanks for your reply. I finally figured out that I mixed up the entropy of the environment with the entropy of the system. My idea was that the total system (environment + system) could be described by the microcanonical ensemble, so that I could use Boltzmann's formula, but then you end up with something different:

The system including its environment can be described as a microcanonical ensemble. The number of possible configurations for this ensemble is ##\Omega_{total} = \sum_i w_i##, where ##w_i## denotes the number of possible configurations given an ##\epsilon_i##. We know $$w_i = \Omega (E-\epsilon_i) \Omega (\epsilon_i)$$ (with ##\Omega (\epsilon_i) = 1##, ##\Omega (E-\epsilon_i)## the number of microstates of the environment when its energy equals ##E-\epsilon_i##, and ##\Omega (E)## the number of microstates of the environment when it is not thermally connected to another system) and thus $$\Omega_{total} = e^{S_{total}/k} = e^{S/k} e^{S_{env}/k} = e^{\beta (U - F)} \Omega_{env} = \sum_i \Omega (E - \epsilon_i) = \Omega (E) \sum_i e^{-\beta \epsilon_i} = \Omega (E) e^{-\beta F}$$ This simplifies to $$\Omega_{env} = \Omega (E) e^{-\beta U}$$

Do you know if this is correct? I have never seen this result before, though it does seem okay to me.
http://mathoverflow.net/revisions/8365/list
This might help.
Lemma If $A$ does not split freely and $C$ is a non-trivial subgroup of $A$ then the HNN extension $G=A*_C$ does not split freely.
The proof uses Bass--Serre theory---see Serre's book Trees.
Proof. Let $T$ be the Bass--Serre tree of a free splitting of $G$. Because $A$ does not split freely, $A$ stabilizes some unique vertex $v$. But $C$ is non-trivial, so $C$ also stabilizes a unique vertex, which must be $v$. Therefore, $G$ stabilizes $v$, which means the free splitting was trivial. QED
A similar argument shows the following.
Lemma If $A *_C$ splits non-trivially as an amalgamated free product $A' *_{C'} B'$ then either $A$ splits over $C'$ or $C$ is conjugate into $C'$.
http://mathhelpforum.com/discrete-math/23249-probability-eventual-return-random-walk.html
## Probability of Eventual Return in Random Walk
The probability of eventual return is given by:
$f_{2} + f_{4} + f_{6} + \cdots = 1$
where $f_{n}$ = Probability of return at Time Period n.
Note that returns can only occur at even times, as each +1 step needs a -1 step to cancel it out.
Ok, I understand the theory, but I cannot find a suitable proof.
Edit: Proof found. Problem solved.
$f_{n}$ is given by $f_{2n}= u_{2n-2} - u_{2n}$ where $u_{2n}={2n \choose n}2^{-2n}$
So, $f_{2} + f_{4} + f_{6} + \cdots = u_{0} - u_{2} + u_{2} - u_{4} + \cdots = u_{0} = 1$
Still, I'll be very grateful if anyone can explain this Theorem:
$f_{2n}= u_{2n-2} - u_{2n}$
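As a numerical sanity check of that theorem (a sketch of my own, not from the thread), the following brute-force enumeration of a simple symmetric walk computes the first-return probabilities $f_{2n}$ directly and compares them with $u_{2n-2} - u_{2n}$:

```python
from itertools import product
from math import comb

def u(two_n):
    """u_{2n} = P(walk is at 0 at time 2n) = C(2n, n) / 2^(2n)."""
    n = two_n // 2
    return comb(two_n, n) / 4 ** n

def f(two_n):
    """P(first return to 0 is exactly at time 2n), by enumerating all 2^(2n) paths."""
    count = 0
    for steps in product((1, -1), repeat=two_n):
        pos, early_return = 0, False
        for i, s in enumerate(steps, start=1):
            pos += s
            if pos == 0 and i < two_n:
                early_return = True
                break
        if not early_return and pos == 0:
            count += 1
    return count / 2 ** two_n

for n in range(1, 7):
    print(2 * n, f(2 * n), u(2 * n - 2) - u(2 * n))   # the two columns agree
```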
http://quant.stackexchange.com/questions/tagged/numerical-methods?sort=votes&pagesize=15
# Tagged Questions
### How to quickly estimate a lower bound on correlation for a large number of stocks?
I would like to find stock pairs that exhibit low correlation. If the correlation between A and B is 0.9 and the correlation between A and C is 0.9 is there a minimum possible correlation for B and C? ...
### When do Finite Element method provide considerable advantage over Finite Differences for option pricing?
I'm looking for concrete examples where a Finite Element method (FEM) provides a considerable advantages (e.g. in convergence rate, accuracy, stability, etc.) over the Finite Difference method (FDM) ...
### What weights should be used when adjusting a correlation matrix to be positive definite?
I have a correlation matrix $A$ for an equity market that is not positive definite. Higham (2002) proposes the Alternating Projections Method, minimising the weighted Frobenius norm $||A-X||_W$ where ...
### Effective Euro-USD (EURUSD) Exchange Rate Prior to Euro's Existence
Motivation: I am running a quantitative analysis that requires long-term, exchange rate data. Problem: Does anyone have methods for dealing with the EURUSD exchange rate prior to the Euro's ...
### Reference on Markov chain Monte Carlo method for option pricing?
I have to implement option pricing in c++ using Markov chain Monte Carlo. Is there some paper which describes this in detail so that I can learn from there and implement?
### What tools are used to numerically solve differential equations in Quantitative Finance?
There are a lot of Quantitative Finance models (e.g. Black-Scholes) which are formulated in terms of partial differential equations. What is a standard approach in Quantitative Finance to solve these ...
### When pricing options, what precision should I work with?
I'm wondering if there's any point at all in double-precision calculations, or whether it's ok to just do everything in single-precision, seeing how the difference on non-Tesla GPUs for single and ...
### What is Quantization ?
I have asked myself many times about Quantization Numerical Methods, is anyone here familiar with the subject and could give a reasonable insight of what Quantization concepts are about, and what are ...
### QuantLib and exact numerical simulation
I've just downloaded quantlib and started playing around with it, and it looks like it's designed primarily to use Euler discretizations for everything -- so far as I can tell, there's not even a ...
### How to apply quasi-Monte Carlo to path-dependent options?
Following up on my recent question on variance reduction in a Cox-Ingersoll-Ross Monte Carlo simulation, I would like to learn more about using a quasi-random sequence, such as Sobol or Niederreiter, ...
### Use of Local Times in Option Pricing
I know two applications of local time in option pricing theory. First, it allows a derivation of Dupire's formula on local volatility in a neat way (i.e. without resorting to differential operator ...
### What is a cubature scheme?
Ideally an intuitive explanation with an example, please.
### Parameter estimation using martingale measures - include real world data?
Please note: I posted this in nuclearphynance first, but didn't get any replies. For desks which sell exotics it is common practice (as far as I know it) to calibrate the model (Stochastic ...
### How can I estimate the parameters of an option value model of retirement?
I am modelling an option value model of retirement, see for instance Stock and Wise (1990). I am however not sure to which class of problems this model falls into and hence which optimization method I ...
### Why is C still in use, especially in the area of numerical optimization (instead of C++)? [closed]
Why is C still in use, especially in the area of numerical optimization (instead of C++)? C and C++ aren't fully compatible, so maybe you know some differences that make the difference?
http://stats.stackexchange.com/questions/18480/interpretation-of-log-transformed-predictor
Interpretation of log transformed predictor
I'm wondering if it makes a difference in interpretation whether only the dependent, both the dependent and independent, or only the independent variables are log transformed.
In the case of
````log(DV) = Intercept + B1*IV + Error
````
I can interpret B1 (times 100) as the percent increase in DV for a one-unit increase in IV, but how does this change when I have
````log(DV) = Intercept + B1*log(IV) + Error
````
or when I have
````DV = Intercept + B1*log(IV) + Error?
````
-
I have a feeling the "percent increase" interpretation is not correct but I don't have enough of a grasp to say why exactly. I hope someone can help....Beyond that, I'd recommend modeling using logs if they help to better establish an X-Y relationship, but reporting selected examples of that relationship using the original variables. Especially if dealing with an audience that is not too technically savvy. – rolando2 Nov 19 '11 at 15:52
@rolando2: I disagree. If a valid model requires transformation, then a valid interpretation will usually rely on coefficients from the transformed model. It remains the onus of the investigator to appropriately communicate the meaning of those coefficients to the audience. That is, of course, why we get paid such big bucks that our salaries have to be log transformed in the first place. – jthetzel Nov 19 '11 at 19:36
@BigBucks: Well, look at it this way. Suppose your audience just can't understand what you mean when you explain that for every change of 1 in the log (base 10) of X, Y will change by b. But suppose they can understand 3 examples using X values of 10, 100, and 1000. They at that point will likely catch on to the nonlinear nature of the relationship. You could still report the overall, log-based b, but giving those examples could make all the difference. – rolando2 Nov 19 '11 at 20:17
....Though now that I read your great explanation below, maybe using those "templates" could help a lot of us clear up these sorts of problems in understanding. – rolando2 Nov 19 '11 at 20:23
Readers here may also want to look at these closely related threads: How to interpret logarithmically transformed coefficients in linear regression, & when-and-why-to-take-the-log-of-a-distribution-of-numbers. – gung Mar 3 at 2:37
2 Answers
Charlie provides a nice, correct explanation. The Statistical Computing site at UCLA has some further examples: http://www.ats.ucla.edu/stat/sas/faq/sas_interpret_log.htm , and http://www.ats.ucla.edu/stat/mult_pkg/faq/general/log_transformed_regression.htm
Just to complement Charlie's answer, below are specific interpretations of your examples. As always, coefficient interpretations assume that you can defend your model, that the regression diagnostics are satisfactory, and that the data are from a valid study.
Example A: No transformations
````DV = Intercept + B1 * IV + Error
````
"One unit increase in IV is associated with a (`B1`) unit increase in DV."
Example B: Outcome transformed
````log(DV) = Intercept + B1 * IV + Error
````
"One unit increase in IV is associated with a (`B1 * 100`) percent increase in DV."
Example C: Exposure transformed
````DV = Intercept + B1 * log(IV) + Error
````
"One percent increase in IV is associated with a (`B1 / 100`) unit increase in DV."
Example D: Outcome transformed and exposure transformed
````log(DV) = Intercept + B1 * log(IV) + Error
````
"One percent increase in IV is associated with a (`B1`) percent increase in DV."
-
In the log-log model, see that $$\begin{equation*}\beta_1 = \frac{\partial \log(y)}{\partial \log(x)}.\end{equation*}$$ Recall that $$\begin{equation*} \frac{\partial \log(y)}{\partial y} = \frac{1}{y} \end{equation*}$$ or $$\begin{equation*} \partial \log(y) = \frac{\partial y}{y}. \end{equation*}$$ Multiplying this latter formulation by 100 gives the percent change in $y$. We have analogous results for $x$.
Using this fact, we can interpret $\beta_1$ as the percent change in $y$ for a 1 percent change in $x$.
Following the same logic, for the level-log model, we have
$$\begin{equation*}\beta_1 = \frac{\partial y}{\partial \log(x)} = 100 \frac{\partial y}{100 \times \partial \log(x)}.\end{equation*}$$ or $\beta_1/100$ is the unit change in $y$ for a one percent change in $x$.
-
I have never grasped this. It must be straightforward but I have never seen it... What exactly is \begin{equation*} \partial \log(y) = \frac{\partial y}{y}? \end{equation*} and how do you go from here to a percentage change? – B_Miner Nov 19 '11 at 18:54
All that line does is take the derivative of $\log(y)$ with respect to $y$ and multiply both sides by $\partial y$. We have $\partial y \approx y_1 - y_0$. This fraction, then is the change in $y$ divided by $y$. Multiplied by 100, this is the percent change in $y$. – Charlie Nov 19 '11 at 19:45
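A tiny numeric check of that step (my own, with made-up numbers): the change in $\log(y)$ is close to the relative change $\Delta y / y$ when the change is small, and the approximation degrades as the change grows.

```python
import math

y0 = 200.0
for pct in (1.0, 5.0, 20.0):                # made-up percent changes in y
    y1 = y0 * (1 + pct / 100)
    d_log = math.log(y1) - math.log(y0)     # change in log(y)
    rel = (y1 - y0) / y0                    # Delta y / y
    print(pct, round(d_log, 4), round(rel, 4))
```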
http://mathoverflow.net/questions/40632/what-is-the-inverse-image-sheaf-necessary-for-in-algebraic-geometry/40639
## What is the inverse image sheaf necessary for in algebraic geometry?
Given a continuous map $f \colon X \to Y$ of topological spaces, and a sheaf $\mathcal{F}$ on $Y$, the inverse image sheaf $f^{-1}\mathcal{F}$ on $X$ is the sheafification of the presheaf $$U \mapsto \varinjlim_{V \supseteq f(U)} \Gamma(V, \mathcal{F}).$$ If $X$ and $Y$ happen to be ringed spaces, $f$ a morphism of ringed spaces, and $\mathcal{F}$ an $\mathcal{O}_Y$-module, one then defines the pullback sheaf $f^* \mathcal{F}$ on $X$ as $$f^{-1}\mathcal{F} \otimes_{f^{-1} \mathcal{O}_Y} \mathcal{O}_X.$$ However, I cannot think of any other usage of the inverse image sheaf in algebraic geometry. Moreover, if $X$ and $Y$ are schemes and $\mathcal{F}$ is quasicoherent, there is an alternate way of defining $f^* \mathcal{F}$. Given $f \colon \mathrm{Spec} B \to \mathrm{Spec} A$, and $\mathcal{F} = \widetilde{M}$, where $M$ is an $A$-module, one defines $f^* \mathcal{F}$ to be the sheaf associated to the $B$-module $M \otimes_A B$. To extend this to arbitrary schemes, it is necessary to prove that it is well-defined; but I still think it is easier to work with than the other definition, which involves direct limits and two sheafifications of presheaves (the inverse image, and the tensor product). I have not checked, but I imagine that something similar can be done for formal schemes.
Hence, my question:
What uses, if any, does the inverse image sheaf have in algebraic geometry, other than to define the pullback sheaf?
A closely related question is
In a course on schemes, is there a good reason to define the inverse image sheaf and the pullback sheaf for ringed spaces in general, rather than simply defining the pullback of a quasicoherent sheaf by a morphism of schemes?
To go from the first question to the second question, I suppose one must also address whether there are $\mathcal{O}_X$-modules significant to algebraic geometers that are not quasicoherent.
Edit: I think the question deserves a certain amount of clarification. Several people have given interesting descriptions or explications of the inverse image sheaf. While I appreciate these, they are not the point of my question; I am, specifically, interested to know whether there are constructions or arguments in algebraic geometry that cannot reasonably be done without using the inverse image sheaf. So far, the answer seems to be that such things exist, but are not really within the scope of, say, a one-year first course on schemes. There are other constructions (such as the inverse image ideal sheaf) that do not, strictly speaking, require the inverse image sheaf, but for which it may be more appropriate to use the inverse image sheaf as a matter of taste.
-
This exact question came up in the Wisconsin reading group for Ravi's notes! – JSE Sep 30 2010 at 15:46
The other place I'm aware of where it appear in Hartshorne's algebraic geometry book is in the discussion of blow-ups. See page 163. This definition can be describe by $f^*$ though too as Hartshorne points out in 7.12.2. – Karl Schwede Sep 30 2010 at 15:50
I really like the way it's covered in the stacks project. Hartshorne's definition crams the sheafification and the inverse image presheaf into one functor. It becomes much clearer when viewed as a composition. – Harry Gindi Sep 30 2010 at 16:43
Charles, how do you propose to define pullback maps in sheaf cohomology for a map between general (perhaps non-noetherian and non-separated) schemes, since one can't expect to get by using quasi-coherent injective sheaves of modules? There is something nice about having one concept of ringed space cohomological pullback which unifies all others (e.g., in deRham theory, topology, etc.). – BCnrd Sep 30 2010 at 17:20
What about the definition of Y-linear differential operators on a Y-scheme X/Y (as in EGA 4 Ch. 16, for example)? Or, for that matter, what about the cotangent complex of X/Y? Alternatively, if one is thinking about sheaves of sets on Y that are not O_Y-modules, then, of course, there is nothing but the inverse image. One might consider sheaves of sets that are not O_X-modules when formulating moduli problems or, to change the context a bit, when working with etale cohomology. – james-parson Sep 30 2010 at 18:33
## 6 Answers
By some coincidence, I have a student going through this stuff now, and we got to this point just yesterday.
The definition of $f^{-1}$ is certainly disconcerting at first, but it's not that bad. You'd like to say $$f^{-1}\mathcal{F}(U) = \mathcal{F}(f(U))$$ except it doesn't make sense as it stands, unless $f(U)$ is open. So we approximate by open sets from above. A section on the left is a germ of a section of $\mathcal{F}$ defined in some open neighbourhood of $f(U)$, where by germ I mean the equivalence class where you identify two sections if they agree on a smaller neigbourhood. Even if you're still unhappy with this, the adjointness property tells you that it is the right thing to look at.
Also, some of us work with non-quasicoherent sheaves (e.g. locally constant sheaves or constructible sheaves), so it's nice to have a general construction.
Addendum: In my answer yesterday, I had somehow forgotten to mention the etale space or sheaf as a bunch of stalks $$\coprod_y \mathcal{F}_y\to Y$$ viewpoint discussed by Emerton and Martin Brandenburg. Had you started with this "bundle picture", we would be having this discussion in reverse, because pullback is the natural operation here and pushforward is the thing that seems strange.
-
Bingo -- there are so many sheaves of interest apart from quasi-coherent sheaves (e.g., etale topology, or just injective sheaves on modules on a general scheme!), and even when proving things about ringed-space pullback it cleans things up to separate the issues in topological pullback from the issues in the tensoring step. Plus, later in life one meets complex-analytic spaces, rigid-analytic spaces, formal schemes, etc., and so developing good habits early on makes later adjustments straightforward (as opposed to having to "unlearn" bad definitions and redo all the basic proofs). – BCnrd Sep 30 2010 at 17:17
Charles, why the objection to sheafifying? It's an absolutely basic fact of life with sheaves (e.g., quotients). There are plenty of non-scheme geometric theories (e.g., manifolds) where one forms tensor products of sheaves (e.g., with vector bundles) and uses quotients, etc. for which there is no crutch of affines and one has to sheafify to get the big picture. Moreover, the sheaf-theoretic unification of topology and function theory in complex variables is marvelous, and there's no crutch of affine opens or quasi-coherence there either. Over-reliance on quasi-coherence misses too much. – BCnrd Sep 30 2010 at 18:22
Harry, it is important to be able to "compute" things in both proofs and examples. For example, how do you prove using abstract adjoint nonsense stuff that sheafification preserves monicity (which involves maps in the "wrong" direction relative to the adjunction property)? For this and other reasons, I think it is a mistake to disregard the hands-on construction too much. – BCnrd Sep 30 2010 at 18:59
Dear Charles: you're right that computing sheafification is often impossible over "general" open sets, and so when we can nail it down on specific opens that is good for proofs, calculations, etc. But theoretically it is important to not develop a theory which suppresses it (e.g., when teaching/learning about sheaves, getting a grip on sheafification is an essential part of the process, both where it's easy to compute on the nose and, when not, how to deal with it). For example, good exercise that $\Omega^4_M$ is sheafification of presheaf 4th wedge power of $\Omega^1_M$ for manifolds $M$. – BCnrd Oct 1 2010 at 4:36
... teach, and what to omit, it is not just a question of how quickly you get to some desired destination, but what techniques and habits of thought are taught along the way. (Indeed, the latter is really more important than the former, in my view.) I think I'm speaking for Brian as well as myself in saying that the techniques and the intuitions that you pick up in learning to deal with sheafification and inverse image are important and useful, and will well repay the time and effort taken to learn them. – Emerton Oct 1 2010 at 4:52
Here is a fairly polemical answer, in a similar spirit to Brian's:
Sheafification is not a painful process: you take a presheaf, and you think about how you need to change it so that the stalks are the same, but sections can be glued. It is very natural.
The inverse image is also naturally understood in the same kind of terms: you have a sheaf $\mathcal F$ on $X$, and you would like to make a sheaf on $Y$ whose stalk at $y$ is equal to the stalk of $\mathcal F$ at $f(y)$ (i.e. $(f^{-1}\mathcal F)_x = \mathcal F_{f(x)}$). If you ponder how you can make a rigorous construction with these properties, you will be led to the inverse image. (It's essentially taking the fibre product of $\mathcal F$ over $X$ with the map $f:Y \to X$, and indeed thinking about the inverse image is good practice for developing intuitions about fibre products in lots of other contexts.)
Using the crutch of affines and quasi-coherent sheaves discourages thinking about the (fairly simple and natural) local picture of a sheaf as a bunch of stalks glued together. A lot of the power of the geometric ideas in algebraic geometry comes from thinking geometrically, so one doesn't want to discourage thinking about sheaves in this way; rather, you want to encourage it.
As for applications, Donu notes some in his answer.
Let me note another here: if $\mathcal I$ is an ideal sheaf on $X$, then $f^{-1}\mathcal I$ is naturally a subsheaf of $f^{-1} \mathcal O_X$ (because $f^{-1}$ is exact, as one sees immediately by looking on stalks and using the fact that $f^{-1}$ doesn't change stalks!), and one often wants to look at the ideal sheaf in $\mathcal O_Y$ generated by this. This is not the same (typically) as $f^*\mathcal I$. (Just as, if $I$ is an ideal in $A$ and $B$ is an $A$-algebra, $B\otimes_A I$ is typically not isomorphic to the ideal in $B$ generated by $I$.)
Now there are other ways to describe this ideal sheaf in $\mathcal O_Y$ (e.g. it is the image of the natural map $f^*\mathcal I \to \mathcal O_Y$), but the description of it in terms of $f^{-1}\mathcal I$ is convenient and very natural.
-
Dear Emerton, certainly you have made a good case for thinking of sheaves on spaces as a bunch of stalks glued together, but unless I'm quite mistaken, doesn't this picture fail to generalize to sheaves on more general sites? – Harry Gindi Oct 1 2010 at 3:00
If the site doesn't have enough points, then perhaps so. But it's a bit like the Grothendieck "functor of points" view-point: eventually we can think of any morphism as being a "point" of its source, but in the beginning, its good to understand geometric objects as being collections of points (in the more conventional sense) sweeping out some shape. This lets when one develop some geometric intuition (which is, after all, the point of the "functor of points" metaphor: one wants to use it as a tool through which geometric intuition can be channelled) before moving onto more abstract things. – Emerton Oct 1 2010 at 3:12
Returning more directly to your comment: in the etale site one has enough points, and so the picture I'm emphasizing continues to make sense. (And this is one of the things that makes the etale site so pleasant.) On the other hand, if one wants to work in the crystalline site/topos (just to give an example where a more topos-theoretic viewpoint is needed), one has to deal with constructions that are much more involved than forming $f^{-1}$, and so the advice that one should learn this first (and not skip over it) still seems to be pretty reasonable. – Emerton Oct 1 2010 at 3:16
Very interesting, thanks! – Harry Gindi Oct 1 2010 at 3:33
Dear Emerton, you don't mean that $f^{-1}$ is not useful in the crystalline site/topos though, right? The functor $f^{-1}$ is the "more important half" of the adjunction defining a geometric morphism (induced by a morphism of underlying sites). – Harry Gindi Oct 1 2010 at 5:54
One quick answer is that the stalk of a sheaf $F$ at a point (say, given by an inclusion $f\colon pt \to X$) is just $f^{-1}F$.
-
Donu Arapura (and BCnrd) already made this point, but I want to emphasize it: algebraic geometry employs a whole universe of sheaves that do not have $\mathscr{O}_X$ actions, and in those cases, the inverse image is the pullback of choice. Standard examples include:
1. Sheaves of solutions to a system of linear algebraic differential equations, in other words, flat sections of a quasicoherent sheaf with respect to a connection. Sometimes these are reasonably familiar, e.g., when they are locally constant.
2. $\ell$-adic sheaves on (the étale site of) a variety over a finite field of characteristic $p$ - this was the first toolset for proving the Weil conjectures.
3. Sheaves of sets, for studying representability and so on.
4. Sheaves of commutative monoids, in log geometry.
5. Sheaves of closed differential forms (which appear when studying e.g., characteristic classes related to twisted differential operators)
I've definitely seen the inverse image employed in the first 4 cases, and I wouldn't be surprised if it appeared in the fifth.
-
I would also add : 6. Sheaves of groups (such as the multiplicative group whose first cohomology group is the Picard group). – ACL Mar 14 at 21:55
ACL, that is a very nice example. – S. Carnahan♦ Mar 15 at 9:21
I prefer the definition of $f^*$ as a left-adjoint to $f_* : Mod(X) \to Mod(Y)$. The formula involving the inverse image is then basically abstract nonsense using a transitivity argument with constant sheaves, at least philosophically. The proof of existence is another issue, but it follows from rather general facts of category theory (Kan extensions).
Anyway, your question was about the use of $f^{-1}$ in algebraic geometry. An example is the reduced structure sheaf on a closed subset of a locally ringed space. You take the vanishing ideal $I$ and then pull back $\mathcal{O}_X / I$ along the inclusion map, which is a priori just a continuous map. You can also view this as a module pull back, but only if you have already defined the structure sheaf.
Also, I think it is very important to learn the somewhat old-fashioned view on sheaves, namely as sections of the etale space. Then you quickly arrive at the question to which sheaf corresponds to restriction of the etale space to a subset, which is not necessarily open. Well, it is just the pullback with respect to the inclusion map.
Finally, it is good to know that the morphism `$f^{\#} : \mathcal{O}_Y \to f_{*} \mathcal{O}_X$`, appearing in the definition of a morphism of ringed spaces, corresponds to a morphism `$f^{-1} \mathcal{O}_Y \to \mathcal{O}_X$`, from which you get the stalk maps directly.
-
Roughly speaking, an element of $\varinjlim_{V\supseteq f(U)} \Gamma(V, \mathcal{F})$ is just a section in an open neighborhood of $f(U)$ (with some proper identifications).
On the other hand, a section $s\in \Gamma(U, f^{-1}\mathcal{F})$ is given by an open cover `$\{V_i\}_{i\in I}$` of $f(U)$ and a section $s_i\in \Gamma(V_i, \mathcal{F})$ for each $i\in I$ for which we require that `$$ s_i\mid_{V_i\cap V_j \cap f(U)} = s_j\mid_{V_i\cap V_j \cap f(U)}.$$` Here equality means stalk-wise equality. In other words, $s_i=s_j \in \mathcal{F}_y$ for every $y\in V_i\cap V_j \cap f(U)$. Given two such collections `$\{(s_i, V_i)\}$` and `$\{(t_j, W_j)\}$`, we say they are equal if they match stalk-wise for every point $y\in f(U)$.
The difference is just that one substitutes a "global" statement, where a section in $\Gamma(U,f^{-1}\mathcal{F})$ is required to extend to a neighbourhood of the whole of $f(U)$, for a "local" version of essentially the same statement that only requires the sections to glue together on $f(U)$.
For example, suppose we have $V_1\cup V_2 \supseteq f(U)$ and sections $s_i$ over the respective open sets. We might have $s_1$ and $s_2$ matching on $f(U)\cap V_1\cap V_2$ but differing at some point $y\not\in f(U)$, so they fail to glue to a section on $V_1\cup V_2$. The good thing here is that in this case the support of $s_1-s_2$ is closed in $V_1\cap V_2$, so after shrinking our open sets $V_1$ and $V_2$, the sections do match on the intersection. Of course you expect this method to fail when infinitely many open sets are involved. This also leads to the idea that once compactness is involved, one may have hope for this to work. For example, see Lemma 1 of Akhil Mathew's Note.
-
http://math.stackexchange.com/questions/266833/how-to-maximize-this-integral?answertab=oldest
# How to maximize this integral?
Rudin asked me to maximize $$\int^{1}_{-1}x^{3}g(x)\,dx$$ under the constraint that $$\int^{1}_{-1}g(x)\,dx=\int^{1}_{-1}xg(x)\,dx=\int^{1}_{-1}x^{2}g(x)\,dx=0$$
This is clearly a Hilbert space problem that needs orthogonality relations. I computed $x^{3}$'s coefficients under the $L^{2}$ inner product and it turns out that $x^{3}=\frac{3}{5}x+c$, with $c$ being orthogonal to $\{1,x,x^{2}\}$. But how does this help to find $g$?
A related question I also do not know how to solve is to find the minimum of $$\int^{\infty}_{0}|x^{3}-a-bx-cx^{2}|^{2}e^{-x}dx$$ And it is not clear to me what the linearly independent underlying set is: $\{1,x,x^{2},e^{-x/2}\}$?
-
I finally had some ideas, we just need to normalize $c$ to get $g$. – user32240 Dec 29 '12 at 3:01
Hint for the second one: $e^{-x}$ is the weight function in the inner product. Also, do you know about Laguerre polynomials and/or Gram-Schmidt with respect to a weighted inner product? – JohnD Dec 29 '12 at 3:10
No I don't. I have to sleep now but really grateful for the hint. – user32240 Dec 29 '12 at 3:15
## 1 Answer
Hopefully you can recognize the second question as an application of what is sometimes called the "best approximation theorem" or "orthogonal projection theorem" in a Hilbert space. I'll show one approach, but there are others that people can chime in on.
If $\{\varphi_i(x)\}$ forms a complete orthonormal family on an interval $I$, with respect to the weight function $w(x)$, i.e., $$\langle \varphi_i,\varphi_j\rangle_w:=\int_I \varphi_i(x)\varphi_j(x)w(x)\,dx=\begin{cases} 0, &i\not=j,\\ 1, &i=j,\end{cases}$$ and we want to approximate $f\in L^2_w(I)$ as $$f(x)\approx\sum_{i=1}^n c_i \varphi_i(x), \quad x\in I,$$ then the "best" choice for the coefficients $c_i$ in the sense of minimizing the weighted $L^2$ norm of the error, $$\left\|f(x)-\sum_{i=1}^n c_i \varphi_i(x)\right\|_{L^2_w(I)},$$ is simply $c_i=\langle f,\varphi_i\rangle_w$, $i=1,\dots,n$.
To bring this to bear on your problem, the weight on the inner product is $e^{-x}$, $I=[0,\infty)$, $f(x)=x^3$, and we want to minimize the weighted $L^2_w(I)$ error when using the approximation $$x^3\approx a+bx+cx^2=\sum_{i=0}^2 c_i L_i(x),$$ where $L_i(x)$ denotes the $i$th Laguerre polynomial.
Since $L_0(x)=1$, $L_1(x)=1-x$, and $L_2(x)={1\over 2}(2-4x+x^2)$, from the best approximation theorem above, \begin{align} c_0&=\langle x^3,L_0\rangle_w=\int_0^\infty x^3 e^{-x}\,dx=6,\\ c_1&=\langle x^3,L_1\rangle_w=\int_0^\infty x^3(1-x)e^{-x}\,dx=-18,\\ c_2&=\langle x^3,L_2\rangle_w=\int_0^\infty x^3\cdot{1\over 2}(2-4x+x^2)e^{-x}\,dx=18, \end{align} which results in $$a=c_0+c_1+c_2=6, \quad b=-c_1-2c_2=-18, \quad c={c_2\over 2}=9.$$
Of course, if we want to let Mathematica do the work for us, it can; but where's the fun in that? ;-)
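For a cross-check without Mathematica (a sketch of my own, not part of the original answer): the normal equations for $\min_{a,b,c}\int_0^\infty (x^3-a-bx-cx^2)^2 e^{-x}\,dx$ only need the moments $\int_0^\infty x^n e^{-x}\,dx = n!$, and solving them reproduces $a=6$, $b=-18$, $c=9$.

```python
import numpy as np
from math import factorial

# Gram matrix of {1, x, x^2} and right-hand side against x^3, in the weighted
# inner product <f, g> = int_0^inf f(x) g(x) e^{-x} dx, whose moments are n!.
G = np.array([[factorial(i + j) for j in range(3)] for i in range(3)], dtype=float)
rhs = np.array([factorial(i + 3) for i in range(3)], dtype=float)
a, b, c = np.linalg.solve(G, rhs)
print(a, b, c)   # approximately 6, -18, 9
```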
Hope that helps.
-
http://mathoverflow.net/questions/112466/can-assumptions-about-forcing-produce-mice/112510
## Can Assumptions about forcing produce Mice? [closed]
This is going to take some build up to completely describe what is a very strange question I seem to have walked into by accident:
For every partial order $\mathbb{P}$ and regular cardinal $\lambda > \omega$ we can define the following two statements
$$\mathcal{C}(\mathbb{P}, \lambda) \iff 1 \Vdash_{\mathbb{P}} \forall \alpha \in \check{\lambda}\ \forall f: \alpha \to \check{\lambda}\ \exists \gamma \in \check{\lambda}\ \forall \xi \in \alpha\ (f(\xi) \neq \gamma)$$
(this is the formalized version of the statement "$\mathbb{P}$ preserves $\lambda$ is a cardinal" in the forcing language, this statement is normally certified by reasoning which does not involve the forcing relation and depends on the structure of $\mathbb{P}$-names)
and
$$Cof(\mathbb{P}, \lambda) \iff 1 \Vdash_{\mathbb{P}} \forall \alpha \in \check{\lambda}\ \forall f:\alpha \to \check{\lambda}\ \exists \gamma \in \check{\lambda} \ (\sup(ran(f)) \le \gamma)$$
(Again a forcing language version of the statement $\mathbb{P}$ preserves $\forall \alpha \in \lambda \ (cf(\alpha) < cf(\lambda))$: we had to be careful here because we need to be able to distinguish between the two (If this is not the correct way to formalize this please let me know.))
Now, here comes the question: Does the following conjunction:
$\exists \lambda > \omega\ \exists\ \mathbb{P}$ such that
• $\lambda$ is a Regular cardinal.
• $\vert \mathbb{P} \vert = \lambda^{+}$
• $\forall \mu \ (\mu$ is a cardinal $\implies \mathcal{C}(\mu,\mathbb{P}))$
• $\neg Cof(\lambda, \mathbb{P})$
Imply there is an inner model with a measurable cardinal? (changed based on the answers.)
(Namba for $\omega_2$ and threading a generic square collapse cardinals; moreover if $0^\sharp$ exists then $\aleph_\omega^{V}$ is regular in $L$ producing a model in some sense)
Edit:
(It was not my intention to scare a lot of nice mice)
(also, mice need to be more damn direct and stop subtly hinting things.... didn't realize what was going on until just now....)
-
Being forced by a dense set and being forced by $1$ are equivalent, as long as your poset has a $1$. Your formalization of $Cof(\mathbb{P},\lambda)$ says that $\mathbb{P}$ forces (and hence preserves) that $\lambda$ is regular. That's not the same as $\forall \alpha \in \lambda (cf(\alpha) < \lambda)$, which is just always true: $cf(\alpha) \leq \alpha < \lambda$. – Amit Kumar Gupta Nov 15 at 10:25
PLEASE do not deface your own questions. People have put work in answering it, and removing the text makes their effort go to waste. – Mariano Suárez-Alvarez Nov 19 at 4:54
Recent editing activity and comments make me feel that the question should probably be closed now as "no longer relevant". – Todd Trimble Nov 19 at 7:08
I don't know what exactly has happened here, but if Joel or Andreas answered one or more of your questions satisfactorily, it might be good to mark one of the answers as accepted, and leave the question in its most usable form. If you have other things to ask, it might be best to open a new question. – S. Carnahan♦ Nov 19 at 8:27
I think it is a very natural and interesting question. – Joel David Hamkins Nov 19 at 10:26
## 2 Answers
Let me answer the question that I believe you are trying to ask. Namely, if we can make a regular cardinal $\kappa$ into a singular cardinal $\kappa$ by forcing of size at most $\kappa^+$, without collapsing any cardinals, must $\kappa$ be measurable?
The question is very natural, since Prikry forcing is the main way to do something like that, but it requires a measurable cardinal. Nevertheless, the answer is no.
The reason is that we can have a non-measurable cardinal that becomes measurable, and so the combined forcing of first making it measurable and then using Prikry forcing can exhibit your features. Specifically, it is consistent with ZFC (relative to the existence of a measurable cardinal) that there is a non-measurable cardinal $\kappa$ that becomes measurable in a forcing extension, by forcing to add a Cohen subset to $\kappa$. This is explained in my answer to Trevor Wilson's question Can measures be added by forcing? One can arrange in that argument that the GCH holds and that there are no other measurable cardinals.
So suppose that $V$ satisfies ZFC+GCH and there are no measurable cardinals in $V$, but $\kappa$ becomes measurable in $V[g]$, where $g$ was $V$-generic for the forcing to add a Cohen set $g\subset\kappa$. This does not collapse cardinals. Since $\kappa$ is measurable in $V[g]$, we may now perform Prikry forcing over $V[g]$ to add a Prikry sequence $s$, which changes the cofinality of $\kappa$ to $\omega$, while preserving all cardinals.
So in $V$, there were no measurable cardinals and $\kappa$ was regular, but the combined forcing to add $g\ast s$, forcing which has size $\kappa^+$ under the GCH, made $\kappa$ into a singular cardinal without collapsing any cardinals. Thus, this is a counterexample to the requested implication.
Meanwhile, although $\kappa$ is not measurable in $V$, it was measurable in an inner model of $V$, and this leads naturally to a closely related version of your question:
Question. If we can force a regular cardinal $\kappa$ to be singular with forcing of size at most $\kappa^+$ and without collapsing any cardinals, must there be an inner model with a measurable cardinal?
I don't know without further thought (although I recall having had conversations about this question). It seems likely that one might get $0^\sharp$ and perhaps much more out of the hypothesis by combining the forcing with a collapse of $\kappa^+$, which would violate Jensen's theorem. We may have to wait for the inner model theory experts.
-
Joel, thank you very much for the counter-example; this question has been bothering me for a couple of weeks, and for the life of me I couldn't figure out how to construct a forcing with these properties without first assuming either Con(ZFC) or \neg Con(ZFC); but as you seem to hint at, this is not a problem because the extension of the forcing language to include the constant $\check{\lambda}$ for a fixed regular cardinal $\lambda$ is necessarily a more expressive language than that without constants. In particular forcing cheats way harder than I'd ever expected. – Michael Blackmon Nov 15 at 18:01
(and makes me finally feel like I can sanely use forcing to produce consistency results again.) – Michael Blackmon Nov 15 at 18:07
Oh no, don't get me wrong that is exactly where my intuition about forcing comes from and is why I was a bit worried, since proofs always carry more weight than intuitions. – Michael Blackmon Nov 15 at 19:38
Here's an argument for an affirmative answer to Joel's modified version of the question. Suppose we have a forcing that preserves cardinals but singularizes some cardinal $\lambda$ that was regular in the ground model. Note that $\lambda$ had to be a limit cardinal, since otherwise singularizing it would collapse it down to its immediate predecessor cardinal (if not even lower). Now let $C$ in the forcing extension be a cofinal subset of $\lambda$ of smaller cardinality $\kappa$. I claim that C is not included in any set $D$ in the ground model of cardinality `$\leq\max\{\kappa,\aleph_1\}$`; in other words, I claim that $C$ is a counterexample to the assertion that the forcing extension satisfies the covering lemma over the ground model. Indeed, suppose we had such a $D$. Intersecting it with $\lambda$, we'd have a cofinal subset of $\lambda$ strictly smaller than $\lambda$ in the ground model, contrary to the assumption that $\lambda$ is regular in the ground model. ("Strictly smaller" in the preceding sentence uses that $\lambda>\aleph_1$, which is why I pointed out earlier that $\lambda$ has to be a limit cardinal.) So the forcing extension doesn't satisfy the covering lemma over the ground model. That implies the existence of an inner model with a measurable cardinal, by an ancient result of mine --- "Small extensions of models of set theory" in "Axiomatic Set Theory" (Proc. of 1983 Boulder Conference, edited by Baumgartner, Martin, and Shelah) Contemporary Math. 31 (1984) pp. 35-39.
-
Note that the size of the forcing notion doesn't matter here, as long as it's a set. – Andreas Blass Nov 15 at 18:52
Great ! – Joel David Hamkins Nov 15 at 19:13
Andreas, Thank you so much, I owe you a ladder system on \omega_1. – Michael Blackmon Nov 15 at 22:56
http://nrich.maths.org/7557
# Building with Rods
##### Stage: 2 Challenge Level:
We have three rods that are each $2$ units long.
The different colours are used to make the diagrams clearer and they always remain in the same place, i.e. the blue as the bottom layer, the green as the top layer and the red as the middle layer.
The challenge is to find how many different ways you can stack these rods.
The rule is that a small cube must sit squarely on top of another small cube.
It does not matter if they are likely to topple over.
Both these two arrangements fit the rule.
However, these two arrangements do not fit the rule as the rods have to be lined up squarely and each little cube must sit on top of one other cube and not overlap two cubes.
How can you convince someone that you have found all the possibilities?
http://math.stackexchange.com/questions/14124/finding-inverse-of-x-bmod-y
finding inverse of $x\bmod y$
I am working through a review problem asking to find the inverse of $4\bmod 9$. Through examples I know that I first need to verify that the gcd is equal to 1 and write it as a linear combination of 4 and 9 to find the inverse. I can do this in just one step:
````gcd(4,9)
9 = 2 * 4 + 1
1 = 9 - 2 * 4
````
This would suggest that the inverse is 1 if I am understanding this correctly. However, the solution manual doesn't show the work but says the LC should actually be
````1 = 7 * 4 - 3 * 9
````
making the answer to the question 7.
Can anyone explain to me what is going on here and how to properly find the inverse? Thanks!
P.S. I wish I could add tags for congruency, gcd, and inverse. I can't believe there isn't an inverse tag already :(
-
You seem to have some confusion over the definition of an inverse. The inverse of $a$ in a ring is an element $b$ such that $ab=1$. So what do you have to multiply 4 by to get 1 modulo 9? Re tags: would you also like to have a tag saying 4 and one saying 9? Seriously, "inverse" is not a mathematical topic or an area of maths, it's a concept - one out of several hundred. Tags should reflect the topic of the question. – Alex B. Dec 13 '10 at 2:34
@Tony K: Two things: `\mod` is an operator used in CS; `x mod y` means the (nonnegative) remainder when dividing $x$ by $y$; by contrast, `\pmod` is the name of an equivalence relation, which consists of the symbol $\equiv$ and the `(mod y)`. Second: the names of operators and functions in mathematics follows the following convention: if they are one or two symbols long, then italics are prefered; if they are three or more symbols long, then roman typeface should be used. So $x\ mod\ y$ does not follow that convention; although it is probably better to use `\mathrm{mod}` than `\mod`; I did now – Arturo Magidin Dec 13 '10 at 16:42
@Tony K: Truth is, misuse of `\mod` is one of my peeves that I raise whenever I proofread/review/referee papers. – Arturo Magidin Dec 13 '10 at 16:47
Anyway, perhaps it's time to sum this up: Arturo has a pet peeve about misuse of \mod, although he doesn't understand its spacing. I have a pet peeve about people making gratuitous edits to my posts, but that's my problem. schwiz has finals tomorrow, so schwiz doesn't care. – TonyK Dec 13 '10 at 17:25
2
@TonyK: Pretty much. As always, the rule is X, except when it isn't. And as J.M. said, perhaps these off-topic comments should be moved to a CW thread instead. (-: – Arturo Magidin Dec 13 '10 at 19:28
5 Answers
I know you've already gotten lots of responses, but instead of trying to fit responses into the comments under an answer I thought I'd just reiterate exactly why you need to do what you're doing.
An inverse to $4 \mod 9$ is an integer $a$ such that $4a \equiv 1 \mod 9$. If we rewrite this, it means precisely that $9| (4a-1)$, or that there is another integer $b$ so that $9b = 4a-1$. What this says is that you need to find integer solutions to $4x-9y=1$. If you find that $x=a$ and $y=b$ works then $a$ is an inverse.
So what you do to solve something like this is run the Euclidean algorithm for $9$ and $4$, and then "reverse" the steps.
$9=4\cdot 2 + 1$, so it is actually over in 1 step in this case. Just subtract over to get $9-4\cdot 2 = 1$, or in the form we wrote it above $4\cdot (-2)-9\cdot (-1)=1$. So a solution is $x=-2$ and $y=-1$ and we said the $x$-value was the inverse, so the inverse is $-2\mod 9$.
Hopefully if you understand the process of why you're doing these things, then if you get confused about which one is which on the final you can always just rederive it from these steps.
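If it helps to see the bookkeeping done mechanically, here is a minimal sketch of the extended Euclidean algorithm in R (the names `ext_gcd` and `mod_inverse` are just illustrative):

```r
# Extended Euclidean algorithm: returns c(g, x, y) with a*x + b*y = g = gcd(a, b)
ext_gcd <- function(a, b) {
  if (b == 0) return(c(a, 1, 0))
  res <- ext_gcd(b, a %% b)
  c(res[1], res[3], res[2] - (a %/% b) * res[3])
}

# Inverse of a mod m (exists only when gcd(a, m) = 1)
mod_inverse <- function(a, m) {
  res <- ext_gcd(a %% m, m)
  if (res[1] != 1) stop("a is not invertible mod m")
  res[2] %% m   # reduce to a representative in 0..m-1
}

mod_inverse(4, 9)   # 7, i.e. the class of -2 mod 9
```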
-
thanks this makes it much more clear – schwiz Dec 13 '10 at 3:47
$-2$ is congruent to $7$ mod $9$. Your arithmetic steps are correct, but the conclusion should be that the congruence class of $-2$ is the inverse of $4$ mod $9$, and you probably want to use representatives in $\{1,2,3,4,5,6,7,8\}$ for your inverses, for simplicity, by adding appropriate multiples of $9$.
-
thanks for the swift response but the original question was finding the inverse of 4 mod 9 not 7 mod 9 – schwiz Dec 13 '10 at 2:25
1
Your work shows that $-2$ is the inverse of $4$ mod 9. The solution manual shows that $7$ is the inverse of $4$ mod 9. The point is that these are the same thing. Your only problem was in misinterpreting what you'd shown. – Jonas Meyer Dec 13 '10 at 2:26
ok I almost understand can you briefly explain why -2 is the inverse of 4 mod 9? I understand why given -2 is the inverse so must be 7. – schwiz Dec 13 '10 at 2:29
@schwiz: More explicitly, 9-2=7 – J. M. Dec 13 '10 at 2:29
2
There is never a unique way to express $1$ as $ax+by$ when $x$ and $y$ are relatively prime, so the exact numbers $a$ and $b$ aren't particularly relevant for this problem. What is important is that $ax=1-by$ is congruent to $1$ mod $y$, because it differs from $1$ by $by$, which is a multiple of $y$. This says that the class of $a$ is the inverse of the class of $x$. (I don't know how to address your concern about the examples having "larger numbers" being multiplied; at first I thought the issue was uniqueness of $a$ and $b$, but I'm not sure. In any case, I hope it's clear now.) – Jonas Meyer Dec 13 '10 at 2:39
HINT $\$ Congruences preserve inverses, i.e. $\rm\ A\equiv a\ \Rightarrow\ A^{-1}\equiv a^{-1}\:.\$ This follows from the fact that congruences preserve products, i.e. it's the special case $\rm\: AB \equiv 1\:$ in this congruence product rule
LEMMA $\rm\ \ A\equiv a,\ B\equiv b\ \Rightarrow\ AB\equiv ab\ \ (mod\ m)$
Proof $\rm\ \ m\: |\: A-a,\:\:\ B-b\ \Rightarrow\ m\ |\ (A-a)\ B + a\ (B-b)\ =\ AB - ab$
This congruence product rule is at the heart of many other similar product rules, for example Leibniz's product rule for derivatives in calculus, e.g. see my post here.
-
2
@schwiz: Alas, unfortunately there is no universal light bulb I can supply for everyone's head. Apologies if yours exploded! Perhaps at some later point in your studies you may safely see the light. You can ignore the link with no loss to comprehension of the above. – Gone Dec 13 '10 at 4:05
hah I certainly appreciate it, perhaps if my brain weren't already so stressed from studying all day. – schwiz Dec 13 '10 at 4:09
1/4 = 4/16 = 4/7 = 8/14 = 8/5 = 16/10 = 16/1 = 16 = 7.
Note that in the first step I multiplied the numerator and denominator by 4 instead of 3. That is because 3 is not co-prime with 9.
-
Using Gauss Fraction Method: 1/4 = 2/8 = 2/(-1) = -2 = 7
-
http://physics.aps.org/articles/v3/95
# Viewpoint: New horizons for Hawking radiation
, Université de Franche-Comté, Institut FEMTO CNRS UMR 6174, Besançon, France
, Centre for Photonics and Photonic Materials, Department of Physics, University of Bath,Bath BA2 7AY, United Kingdom
Published November 8, 2010 | Physics 3, 95 (2010) | DOI: 10.1103/Physics.3.95
Evidence for radiation analogous to that predicted near black holes by Hawking has been found in the emission of optical filaments propagating through glass.
#### Hawking Radiation from Ultrashort Laser Pulse Filaments
F. Belgiorno, S. L. Cacciatori, M. Clerici, V. Gorini, G. Ortenzi, L. Rizzi, E. Rubino, V. G. Sala, and D. Faccio
Published November 8, 2010 | PDF (free)
In 1974, Steven Hawking predicted that black holes were not completely black, but were actually weak emitters of blackbody radiation generated close to the event horizon—the boundary where light is forever trapped by the black hole’s gravitational pull [1]. Hawking’s insight was to realize how the presence of the horizon could separate virtual photon pairs (constantly being created from the quantum vacuum) such that while one was sucked in, the other could escape, causing the black hole to lose energy. Hawking’s idea was significant in suggesting a possible optical signature of a black hole’s existence. Yet, even though the prediction created an extensive theoretical literature in cosmology, calculations have since shown that Hawking radiation from black holes is so weak that it would be practically impossible to measure.
It turns out, however, that the physics of how waves interact with a horizon does not depend in a fundamental way on the presence of gravity at all. In principle, an analogous Hawking radiation should occur in other systems [2, 3]. The key requirement is simply that the interaction between waves and the medium in which they propagate causes there to be a boundary between zones where the wave and the medium have different velocities [4, 5]. In a paper in Physical Review Letters [6], Francesco Belgiorno at the Università degli Studi di Milano, in collaboration with researchers at several other institutes, also in Italy, describe a series of experiments where high-intensity filaments of light in glass perturb the optical propagation environment in an analogous manner to the way a gravitational field affects light near a black hole horizon. This perturbation creates the optical equivalent of an event horizon that allows Belgiorno et al. to make convincing measurements of analog Hawking radiation at optical frequencies [7]. These results are highly significant in suggesting a system in which Hawking’s prediction can be fully explored in a convenient laboratory environment.
A really useful way to visualize an analog event horizon is to imagine a fish swimming upstream in a river flowing towards a waterfall [4, 5]. The point at which the current flows faster than the fish can swim represents a boundary at which fish cannot escape and they are swept over the waterfall. This “point of no return” for the fish is equivalent to a black hole horizon. A similar analogy exists for white holes, associated with horizons that fish can never enter, namely, if they were attempting to swim upstream towards the bottom of a waterfall. This simple picture is surprisingly powerful at capturing the physics of horizons, and can be extended rigorously to describe horizon physics in diverse systems including acoustics, cold atoms, and gravity-capillary waves on water [8, 9, 10, 11]. Indeed, horizon effects and a stimulated form of Hawking radiation have recently been explicitly observed in the vicinity of an obstacle placed in an open channel flow [12]. However, the question has remained open as to whether these analog gravity systems also generate the spontaneous thermal radiation Hawking predicted.
Belgiorno et al.’s experiments suggest that the answer is yes. They created an optical event horizon by using intense, ultrashort light pulses that change the refractive index of the glass in the vicinity of the moving pulse [13]. The change in the refractive index modifies the effective propagation “geometry” (Fig. 1, left) as seen by copropagating light rays such that a trapping horizon forms and, in principle, Hawking radiation should occur. The changing index is an effect called a Kerr nonlinearity, whereby a pulse modifies the refractive index of glass such that it is higher at the center of the pulse than the wings. Because wave speed depends on refractive index, the propagating pulse induces an effective velocity gradient in the material such that horizons can appear at points on the pulse leading and trailing edges.
It is important to note here that Kerr-induced trapping in itself is not a new effect, but rather one that is well known in nonlinear fiber optics [14, 15, 16]. In fact, the Raman frequency shifting on solitons (where the spectrum of short pulses moves to longer wavelengths) is a deceleration effect that was even shown to lead to an equivalent gravitational potential [16, 17], but it was only recently that scientists appreciated the possibility of extending the gravitational analogy to test predictions of Hawking radiation [13]. The particular attraction of experiments in optics is that the intensity and wavelength of the Hawking radiation depend on the induced refractive index gradient, and pulses containing only a few optical cycles or a steep shock front would be expected to generate measurable emission of visible light. Unfortunately, although it is straightforward to demonstrate the existence of an event horizon in an optical fiber [13], dispersive and dissipative effects present in the fiber appear to prevent the clean formation of the particular pulse profiles needed for the spontaneous generation of Hawking radiation.
The experiments of Belgiorno and co-workers attempt to overcome this problem in a novel way. In fact, they move away from the fiber environment altogether, performing experiments in bulk glass, using laser pulses in the form of needlelike beams known as optical filaments (Fig. 1, right) to generate the nonlinear refractive index perturbation [18]. Optical filament pulses generally have a complex spatiotemporal structure, but it is possible to experimentally synthesize them so that their internal group and phase velocity gradients can be controlled. This is what Belgiorno et al. did in their experiments, using properties of what are called Bessel beams, where the filament is preshaped so as to control its group velocity relative to the velocity change induced by the refractive index perturbation. This is a crucial aspect of their experiments because it allows them to fine tune the window of Hawking radiation emission into the near infrared, around $850nm$ away from any other possible contaminating signals. With a cooled CCD camera, they detect a clear signal above background that they associate with Hawking radiation, and the spectral shift of this signal with incident energy is in good agreement with the predictions of theory. They also performed experiments where the filaments were formed from Gaussian pulses, and although the dynamics are more complicated here, the spectral emission window of the spontaneous radiation is again in agreement with theory.
Overall, this work provides significant evidence for the observation of Hawking radiation in an analog gravity system. The results must nevertheless be interpreted carefully. For example, it is essential to state very plainly that measuring analog Hawking radiation gives no direct insight into quantum gravity because there is no physical gravitational potential involved. Indeed, one might even argue that the description of this emission as “Hawking” radiation is inappropriate, but this seems an unnecessary restriction because the study of analog horizon systems was clearly motivated by Hawking’s work over $35$ years ago. Of course additional experiments still remain to be carried out. Specifically, the spontaneous photon pairs emitted on either side of the horizon may be detectable with suitable angular resolution [18], and measurements of their entanglement and correlation will be an essential next step. Moreover, the polarization properties of filaments are very subtle and may need to be taken into account more fully in a complete interpretation of the experiments.
This field of research is at the interface of several areas of physics, and “standardizing” the terminology would be welcome. For example, effects related to Cerenkov radiation are seen both in filament and fiber soliton propagation [15, 16], and distinguishing similarities and differences is important to avoid confusion. On the other hand, Belgiorno et al.’s results now point out the clear need to study the quantum electrodynamics of soliton radiation, as this may have direct bearing on their experiments. This work is also likely to be far reaching in other ways. By showing how tailored spatiotemporal fields provide a high degree of control over the interaction geometry of propagating light pulses, they may represent a new example of experiments in “ultrafast transformation optics,” where geometrical modifications of an optical propagation environment can be induced on ultrafast timescales. This may allow a much wider study of other analog physical effects, using a convenient benchtop platform.
Corrections (6 December 2010): Paragraph 3, sentence 5, “capillary waves” changed to “gravity-capillary waves.” References 2, 3, 10, 13 and 18, changed/updated.
### References
1. S. W. Hawking, Nature 248, 30 (1974).
2. W. G. Unruh, Phys. Rev. Lett. 46, 1351 (1981).
3. P. C. W. Davies and S. A. Fulling, Proc. R. Soc. London A 356, 237 (1977); M. Visser, Classical Quantum Gravity 15, 1767 (1998).
4. U. Leonhardt and T. G. Philbin, Philos. Trans. R. Soc. A 366, 2851 (2008).
5. W. G. Unruh, Philos. Trans. R. Soc. A 366, 2905 (2008).
6. F. Belgiorno, S. L. Cacciatori, M. Clerici, V. Gorini, G. Ortenzi, L. Rizzi, E. Rubino, V. G. Sala, and D. Faccio, Phys. Rev. Lett. 105, 203901 (2010).
7. D. Faccio, S. Cacciatori, V. Gorini, V. G. Sala, A. Averchi, A. Lotti, M. Kolesik, and J. V. Moloney, Europhys. Lett. 89, 34004 (2010).
8. P. Ball, Nature 411, 628 (2001).
9. Artificial Black Holes, edited by M. Novello, M. Visser, and G. E. Volovik (World Scientific, Singapore, 2002).
10. G. Rousseaux, C. Mathis, P. Maïssa, T. G. Philbin, and U. Leonhardt, New J. Phys. 10, 053015 (2008); R. Schützhold and W. G. Unruh, Phys. Rev. D 66, 044019 (2002).
11. G. Rousseaux, P. Maïssa, C. Mathis, P. Coullet, T. G. Philbin, and U. Leonhardt, New J. Phys. 12, 095018 (2010).
12. S. Weinfurtner et al., Phys. Rev. Lett. (to be published); arXiv:1008.1911v2 (gr-qc).
13. T. G. Philbin, C. Kuklewicz, S. Robertson, S. Hill, F. König, and U. Leonhardt, Science 319, 1367 (2008); R. Schützhold and W. G. Unruh, Phys. Rev. Lett. 95, 031301 (2005).
14. N. Nishizawa and T. Goto, Opt. Express 10, 1151 (2002).
15. J. M. Dudley, G. Genty, and S. Coen, Rev. Mod. Phys. 78, 1135 (2006).
16. D. V. Skryabin and A. V. Gorbach, Rev. Mod. Phys. 82, 1287 (2010).
17. A. V. Gorbach and D. V. Skryabin, Nature Photon. 1, 653 (2007).
18. F. Belgiorno, S. L. Cacciatori, G. Ortenzi, V. G. Sala, and D. Faccio, Phys. Rev. Lett. 104, 140403 (2010).
### About the Author: John M. Dudley
Originally from Otahuhu in New Zealand, John Dudley received his Ph.D. degree from the University of Auckland in 1992. After working in Scotland and New Zealand, he was appointed Professor at the University of Franche-Comté, France, in 2000. He was named a member of the Institut Universitaire de France in 2005 and was elected a Fellow of the Optical Society of America in 2007. He is the author of over 100 major research papers, and has presented numerous invited talks at major international conferences, workshops, and summer schools.
### About the Author: Dmitry Skryabin
Dmitry Skryabin is an Associate Professor at the University of Bath, England, and has also worked as a Royal Society research fellow at the University of Strathclyde in Scotland. He completed his Ph.D. at Strathclyde and did his first degree at St. Petersburg University in Russia. His research interests are focused around light propagation and trapping in nonlinear and structured materials.
http://mathoverflow.net/questions/29829/a-known-hypergeometric-identity
## A (known?) hypergeometric identity
Incidentally I've obtained a hypergeometric identity that I've not seen before:
$${}_3F_2(-m,-n,m+n; 1, 1; 1) = \frac{m^2+n^2+mn}{(m+n)^2} {\binom{m+n}{m}}^2$$
So, I wonder if it is well-known and possibly represents a particular case of something more general?
P.S. I've tried to simplify() the l.h.s. in Maple but it did not succeed, giving a hope that the identity is not completely trivial. ;)
EDIT: There seems to be a bug in formula rendering, so I'm repeating it below in plain LaTeX:
{}_3F_2(-m,-n,m+n; 1, 1; 1) = \frac{m^2+n^2+mn}{(m+n)^2} {\binom{m+n}{m}}^2
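A quick numerical sanity check (a sketch in R rather than Maple; `poch`, `lhs` and `rhs` are just illustrative names, and the terminating series is simply written out as a finite sum of Pochhammer products):

```r
poch <- function(a, k) if (k == 0) 1 else prod(a + 0:(k - 1))  # rising factorial (a)_k

lhs <- function(m, n) {
  ks <- 0:min(m, n)  # the series terminates: (-m)_k = 0 for k > m
  sum(sapply(ks, function(k)
    poch(-m, k) * poch(-n, k) * poch(m + n, k) / (poch(1, k)^2 * factorial(k))))
}

rhs <- function(m, n) (m^2 + n^2 + m * n) / (m + n)^2 * choose(m + n, m)^2

lhs(3, 5)  # 2401
rhs(3, 5)  # 2401
```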
-
2
Have you tried to see if it follows from the methods in Petovsek, Wilf, and Zeilberger? math.upenn.edu/~wilf/AeqB.html – Qiaochu Yuan Jun 28 2010 at 20:27
1
I don't need a proof. I just wonder if it is known. – Max Alekseyev Jun 28 2010 at 20:31
4
I think Qiaochu's point may be that his reference actually provides a Maple program to do this sort of computation. If their program can do this computation (and it likely can) then the identity you've found is known in the sense that anyone curious could derive it in a couple minutes by computer. Of course, if you have a combinatorial proof that's very cool. – Daniel Litt Jun 28 2010 at 21:23
Thank you for suggestions! I've found that such programs are already incorporated in Maple as packages 'sumtools' and 'SumTools'. And sumtools[hypersum]() was able to evaluate the closed form. So, there is an automated way to prove this identity that makes it somewhat less interesting. – Max Alekseyev Jun 28 2010 at 23:16
## 1 Answer
Your relation is a particular case of the Karlsson--Minton relations (see Section 1.9 in the $q$-Bible by Gasper and Rahman). It's also a contiguous identity to Pfaff--Saalschütz.
EDIT. First of all I apologise for giving insufficient comments on the problem. I learned from Max a very nice graph-theoretical interpretation of the identity which makes good reasons for not burying it in the list of "ordinary" problems.
The hypergeometric series (function) $${}_ {p+1}F_ p\biggl(\begin{matrix} a_ 0,\ a_ 1,\ \dots,\ a_ p \cr b_ 1,\ \dots ,\ b_ p\end{matrix};x\biggr) = \sum_ {k=0}^\infty \frac{(a_ 0)_ k(a_ 1)_ k\dots (a_ p)_ k}{(b_ 1)_ k\dots (b_ p)_ k}\frac{x^k}{k!},$$ where $$(a)_ 0=1 \quad\text{and}\quad (a)_ k =\frac{\Gamma(a+k)}{\Gamma(a)}= a(a+1)\dots (a+k-1) \quad\text{for } k\in \mathbb Z_ {>0}$$ (I consider the ones with finite domain of convergence $|z|<1$), have very nice history and links to practically everything in mathematics. There are many transformation and summation theorems for them, both classical and contemporary. There are very efficient algorithms and packages for proving them, like the algorithm of creative telescoping (due to W. Gosper and D. Zeilberger) and the package HYP which allows one to manipulate and identify binomial and hypergeometric series (due to C. Krattenthaler). An example of classical summation theorem is the Pfaff--Saalschütz sum $${}_ 3F_ 2\biggl(\begin{matrix} -m,\ a,\ b \cr c,\ 1+a+b-c-m\end{matrix};1\biggr) =\frac{(c-a)_ m(c-b)_ m}{(c)_ m(c-a-b)_ m}$$ where $m$ is a negative integer, with a generalisation $${}_ {p+1}F_ p\biggl(\begin{matrix} a,\ b_ 1+m_ 1,\ \dots,\ b_ p+m_ p \cr b_ 1,\ \dots ,\ b_ p\end{matrix};1\biggr)=0 \quad\text{if } \operatorname{Re}(-a)>m_ 1+\dots+m_ p$$ and $${}_ {p+1}F_ p\biggl(\begin{matrix} -(m_ 1+\dots+m_ p),\ b_ 1+m_ 1,\ \dots,\ b_ p+m_ p \cr b_ 1,\ \dots ,\ b_ p\end{matrix};1\biggr)=(-1)^{m_ 1+\dots+m_ p} \frac{(m_ 1+\dots+m_ p)!}{(b_ 1)_ {m_ 1}\dots (b_ p)_ {m_ p}}$$ due to B. Minton and Per W. Karlsson (here $m_ 1,\dots,m_ p$ are nonnegative integers). Max's original identity is not a straightforward particular case but a linear combination of three contiguous Pfaff--Saalschütz-summable hypergeometric series. (Two hypergeometric functions are said to be contiguous if they are alike except for one pair of parameters, and these differ by unity.) Because of having three hypergeometric functions, I do not see any fun in writing the corresponding details but indicate a simpler hypergeometric derivation.
Applying Thomae's transformation $${}_ 3F_ 2\biggl(\begin{matrix} -m,\ a,\ b \cr c,\ d\end{matrix};1\biggr) =\frac{(d-b)_ m}{(d)_ m}\cdot{}_ 3F_ 2\biggl(\begin{matrix} -m,\ c-a,\ b \cr c,\ 1+b-d-m\end{matrix};1\biggr)$$ the problem reduces to evaluation of the series $${}_ 3F_ 2\biggl(\begin{matrix} -m,\ n+1,\ m+n \cr 1,\ n\end{matrix};1\biggr).$$ Writing $$\frac{(n+1)_ k}{(n)_ k}=\frac{n+k}{n}=1+\frac kn$$ the latter series becomes $${}_ 3F_ 2\biggl(\begin{matrix} -m,\ n+1,\ m+n \cr 1,\ n\end{matrix};1\biggr) ={}_ 2F_ 1\biggl(\begin{matrix} -m,\ m+n \cr 1 \end{matrix};1\biggr) +\frac{(-m)(m+n)}{n} {}_ 2F_ 1\biggl(\begin{matrix} -m+1,\ m+n+1 \cr 2 \end{matrix};1\biggr)$$ and the latter two series are summed with the help of the Chu--Vandermonde summation (a particular case of the Gauss summation theorem).
As for general forms of Max's identity, I can mention that there is no use of the integrality of $n$ in the last paragraph, and I could even expect something a la Minton--Karlsson in general.
-
2
Wadim, what does "contiguous identity" mean? Also, would you mind stating KM relations for those of us not doing our daily prayers? – Victor Protsak Jun 28 2010 at 23:44
1
From Slater's book: "Two hypergeometric functions are said to be contiguous if they are alike except for one pair of parameters, and these differ by unity." What I'm saying about PS is that a linear combination of two contiguous hypergeometrics summed by PS gives the OP. As for the KM identities, I now see that they don't quite include the OP. In any case, after rereading the question, I'd assume that the author wonders whether the cyclotomic $m^2+mn+n^2$ can be generalized to a higher degree, at least to explain its appearance. I don't believe so, but this isn't in hypergeometry. – Wadim Zudilin Jun 29 2010 at 0:39
1
Thanks a lot! I suspected that it should be well-known. – Max Alekseyev Jun 29 2010 at 0:43
2
Wadim, thank you for clarifying the term "contiguous". I don't know what made you angry, maybe you need to take a break for a few days? Your contributions to MO are certainly valuable, but this answer is a bit cryptic, please, fill in some details for the benefit of educated non-specialists like myself when you get a chance. – Victor Protsak Jun 29 2010 at 2:01
http://mathhelpforum.com/advanced-algebra/6699-sylow-theorem-number-one.html
# Thread:
1. ## Sylow Theorem Number One
For anyone who has John Fraleigh "A First Course in Abstract Algebra" 7/e would be easier to follow. (I definitely know some of you have it, because many people referred to it already, very popular book).
Let $p$ be a prime.
On Page, 324-325
John is talking about the first Sylow theorem.
"Given a finite group $|G|=p^n m$ with $n\geq 1$ and $p\not | m$, then there exists a subgroup of order $p^{i}$ for $0\leq i\leq n$.
Furthermore, each subgroup of order $p^i$ is a normal subgroup of order $p^{i+1}$. For $0\leq i<n$.
----
The first part of the proof I understand, the second part ails my soul and makes my blood cold and so by degrees.
I believe that John wanted to write,
$p^i$ is a normal subgroup of some group of order $p^{i+1}$. He makes it appear as though it is true for all subgroups, which is unlikely.
Anyone know what I am talking about?
2. Originally Posted by ThePerfectHacker
For anyone who has John Fraleigh "A First Course in Abstract Algebra" 7/e would be easier to follow. (I definitely know some of you have it, because many people referred to it already, very popular book).
Let $p$ be a prime.
On Page, 324-325
John is talking about the first Sylow theorem.
"Given a finite group $|G|=p^n m$ with $n\geq 1$ and $p\not | m$, then there exists a subgroup of order $p^{i}$ for $0\leq i\leq n$.
Furthermore, each subgroup of order $p^i$ is a normal subgroup of order $p^{i+1}$. For $0\leq i<n$.
----
The first part of the proof I understand, the second part ails my soul and makes my blood cold and so by degrees.
I believe that John wanted to write,
$p^i$ is a normal subgroup of some group of order $p^{i+1}$. He makes it appear as though it is true for all subgroups, which is unlikely.
Anyone know what I am talking about?
If I understand which part you are asking about you are correct. My book: "Algebra" by Hungerford writes the 1st Sylow theorem as:
Let G be a group of order $p^nm$ with $n \geq 1$, p prime, and (p,m) = 1. Then G contains a subgroup of order $p^i$ for each $1 \leq i \leq n$ and every subgroup of G of order $p^i$ (i < n) is normal in some subgroup of order $p^{i+1}$.
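For a concrete instance, take $|G| = 12 = 2^2 \cdot 3$: the theorem guarantees subgroups of order $2$ and $4$, and every subgroup of order $2$ is normal in some subgroup of order $4$ containing it, even though it need not be normal in $G$ itself (think of the subgroup generated by a transposition in $S_3 \times \mathbb{Z}_2$).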
I can provide you with this book's proof, but you are on your own for about half of it. I'm too rusty to follow the whole thing anymore.
-Dan
3. Originally Posted by topsquark
If I understand which part you are asking about you are correct. My book: "Algebra" by Hungerford writes the 1st Sylow theorem as:
Let G be a group of order $p^nm$ with $n \geq 1$, p prime, and (p,m) = 1. Then G contains a subgroup of order $p^i$ for each $1 \leq i \leq n$ and every subgroup of G of order $p^i$ (i < n) is normal in some subgroup of order $p^{i+1}$.
Thank you, that is what I needed.
I think I've seen people refer to Hungerford. Is it good?
Originally Posted by topsquark
I can provide you with this book's proof, but you are on your own for about half of it.
That is okay, I perfectly understand the proof in my book. Tell me, does it use Cauchy's theorem and the fact that:
$|X_G|\equiv |X| (\mbox{mod }p)$?
The reason why I am asking is because John says that is a new and more elegant way to approach the Sylow theorems.
Originally Posted by topsquark
I'm too rusty to follow the whole thing anymore.
Did you consider re-reading it? It should go fast and smooth a second time if you knew it well the first time.
One last question. Did you read study Group theory before or after you got a Ph.D in Physics?
4. Originally Posted by ThePerfectHacker
Thank you, that is what I needed.
I think I've seen people refer to Hungerford. Is it good?
It seems fairly clear and well organized. My only trouble with it is that I've had to teach myself all the background and thus I have trouble with some of the details.
Originally Posted by ThePerfectHacker
That is okay, I perfectly understand the proof in my book. Tell me, does it use Cauchy's theorem and the fact that:
$|X_G|\equiv |X| (\mbox{mod }p)$?
The reason why I am asking is because John says that is a new and more elegant way to approach the Sylow theorems.
It uses Cauchy's theorem, but I don't think it uses the equivalence. (Presumably to save space it refers to a number of lemmas and corrolaries, which I haven't looked up, so it may be in there.)
Originally Posted by ThePerfectHacker
Did you consider re-reading it? It should go fast and smooth a second time if you knew it well the first time.
Actually I have approached it twice now. Each time I get a bit further into it. There always seems to be a point where it just seems that I'm reading gobbledegook so I put it down for a couple of months and come back to it. The last time I got up to chapter 4, which is on modules, so I got all the way through the Sylow theorems (which are in chapter 2). The problem is that I've been working with QFT for the last 6 months or so and I've apparently forgotten some of the notation.
Originally Posted by ThePerfectHacker
One last question. Did you read study Group theory before or after you got a Ph.D in Physics?
I would say that is a "yes" because I don't have a PhD yet... Group theory (The Physics version: it was rushed and didn't cover details) was a requirement at Purdue for the Graduate program, meaning the Master's level students also took it. But it wasn't offered at Binghamton for their Master's program. I have a Mathematics "level" text on group theory and we really didn't go that deep in the Physics course. It seems that the goal of the Physics course was more or less character tables and how to use them to determine the irreducible representations of a group. (Which is the big deal for Quantum Mechanics, but I don't know where else in Physics you might use them. Presumably Classical Mechanics as well, but I haven't done and in-depth study of that field.)
-Dan
http://physics.stackexchange.com/questions/814/how-do-you-calculate-the-anomalous-precession-of-mercury/18418
# How do you calculate the anomalous precession of Mercury?
One of the three classic tests of general relativity is the calculation of the precession of the perihelion of Mercury's orbit.
This precession rate had been precisely measured using data collected since the 1600's, and it was later found that Newton's theory of gravity predicts a value that differs from the observed value. That difference, which I am calling the anomalous precession, was estimated to be about 43 arcseconds per century in Einstein's time.
I have heard that general relativity predicts an additional correction that is almost exactly sufficient to account for that 43"/century difference, but I've never seen that calculation done, at least not correctly. Can anyone supply the details?
-
I am not sure whether you've seen my answer, but I now realize that I should add this as a comment: your question is quite broad. What exactly entails the details of the calculation? And what is your background in GTR (so one knows whether they should explain basic points of GTR, or just move straight to the problem). – Marek Nov 15 '10 at 14:49
@Marek: I'm actually just getting to looking at the answers now. I do have a fair amount of experience in GR (despite never having seen this particular calculation done properly before now), so I should be able to follow your answer. But what I was really hoping for was an outline of the math involved, i.e. something like a summary of one of the links sigoldberg1 posted. – David Zaslavsky♦ Nov 15 '10 at 18:35
– Frédéric Grosshans Jun 10 '11 at 9:32
## 4 Answers
A very detailed computation with a comparison between the classical and the relativistic solution: The Precession of Mercury’s Perihelion.
-
Good find! At least based on a first impression (I only had time to skim through it of course). I do like that it includes a discussion of plugging in numerical values. – David Zaslavsky♦ Nov 16 '10 at 4:08
I also skimmed (carefully, though). Looks good. By the way, see the references. Weinberg. It's obvious, right? Although I bet it's not as explicitly derived as in this work by Biesel. I don't have the book in this computer, so I'll take a look at it later. – Robert Smith Nov 16 '10 at 4:14
3
Weinberg - Gravitation and cosmology, page 194, 6. Bound Orbits: Precession of Perihelia. Very different derivation than the one developed by Biesel, with different assumptions. $\Delta \phi = 6\pi \frac{MG}{L}(\text{radians/revolution}) =43.03^{''} \text{per century}$. – Robert Smith Nov 16 '10 at 19:13
Thanks, I'll have a look at that next time I can get my hands on Weinberg's book. – David Zaslavsky♦ Nov 17 '10 at 4:11
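To put rough numbers on the per-revolution formula quoted above, with the semi-latus rectum written as $L = a(1-e^2)$ and factors of $c$ restored, here is a back-of-the-envelope sketch in R (Mercury's orbital elements are plugged in as assumed values):

```r
G       <- 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M       <- 1.989e30    # mass of the Sun, kg
c_light <- 2.998e8     # speed of light, m/s
a       <- 5.791e10    # Mercury's semi-major axis, m
ecc     <- 0.2056      # Mercury's orbital eccentricity
P_days  <- 87.97       # Mercury's orbital period, days

dphi   <- 6 * pi * G * M / (c_light^2 * a * (1 - ecc^2))  # radians per orbit
arcsec <- dphi * (180 / pi) * 3600                        # arcseconds per orbit
arcsec * 36525 / P_days                                   # ~43 arcsec per century
```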
Well, it goes like this: consider Schwarzschild metric and testing particle (this is Mercury) with energy $E$ and momentum $L$. Because you have enough integrals of motion, the equations basically solve themselves and you obtain an effective potential that contains basic Newton potential, the centrifugal potential and a correction term. Then you apply Binet equation and you are left with some differential equation that is not easy to solve, but essentially is an equation for conic section (as in the classical case) plus some correction terms. So you make some approximation (based on the parameters of the problem) and are left with "conic section" that precesses a little.
Now, I wonder: how much more precise do you want the above argument to be made? Do you want a complete derivation, or just to clarify some confusing point? Also, how much are you familiar with GTR? I am asking just so that I know at what level should I be explaining.
Also, see the wikipedia pages, it looks like they have some derivation there (although I did not check it and you can't always trust wikipedia).
-
Is there really a need for this level of condescension? It's a perfectly reasonable question. If you don't want to take the time to type out an answer, that's fine, but you don't need to be a dick about it. – Chad Orzel Nov 15 '10 at 14:28
3
@Chad: I didn't mean to be condescending. I am genuinely interested in what level of rigour is David interesting in. I provided a sketch of the answer and wonder if I should add something more. I will rephrase the text so that it's more polite. On the other hand, I don't think someone calling me d*ck without knowing anything about me is really entitled to be giving lectures in morality ;-) – Marek Nov 15 '10 at 14:38
I would guess that Marek's first language is not English, so the tone of some of his responses might not be exactly what he intends. – j.c. Nov 16 '10 at 0:21
2
@j.c.: that is certainly possible but there is also (I think more important) fact that the same miscommunication and wrong impression can happen to the native speaker even if he honestly has only the best intentions. No one is perfect. That is why I see no reason in starting ad-hominem attacks (not to mention swear words) instead of just calmly pointing out the possible problem. – Marek Nov 16 '10 at 1:09
Try http://www.mathpages.com/rr/s6-02/6-02.htm . Caveat, I haven't looked at it carefully yet.
There is a detailed discussion at http://wapedia.mobi/en/Two-body_problem_in_general_relativity?t=3.
-
Sorry this is one year late - but there is a rather detailed calculation of the precession of Mercury's orbit using General Relativity in Cornelius Lanczos book The Variational Principles of Mechanics, Dover Publications. The first edition appeared in 1949, the Dover edition in 1986.
-
http://medlibrary.org/medwiki/Eccentricity_(mathematics)
# Eccentricity (mathematics)
All types of conic sections, arranged with increasing eccentricity. Note that curvature decreases with eccentricity, and that none of these curves intersect.
Ellipses and hyperbolas with all possible eccentricities from zero to infinity and a parabola on one cubic surface.
In mathematics, the eccentricity, denoted e or $\varepsilon$, is a parameter associated with every conic section. It can be thought of as a measure of how much the conic section deviates from being circular.
In particular,
• The eccentricity of a circle is zero.
• The eccentricity of an ellipse which is not a circle is greater than zero but less than 1.
• The eccentricity of a parabola is 1.
• The eccentricity of a hyperbola is greater than 1.
Furthermore, two conic sections are similar if and only if they have the same eccentricity.
## Definitions
Any conic section can be defined as the locus of points whose distances to a point (the focus) and a line (the directrix) are in a constant ratio. That ratio is called eccentricity, commonly denoted as "e."
The eccentricity can also be defined in terms of the intersection of a plane and a double-napped cone associated with the conic section. If the cone is oriented with its axis being vertical, the eccentricity is
$e=\frac{\sin \alpha}{\sin \beta}$
where α is the angle between the plane and the horizontal and β is the angle between the cone and the horizontal.
The linear eccentricity of a conic section, denoted c (or sometimes f or e), is the distance between its center and either of its two foci. The eccentricity can be defined as the ratio of the linear eccentricity to the semimajor axis a: that is, $e = \frac{c}{a}$.
## Alternative names
The eccentricity is sometimes called first eccentricity to distinguish it from the second eccentricity and third eccentricity defined for ellipses (see below). The eccentricity is also sometimes called numerical eccentricity.
In the case of ellipses and hyperbolas the linear eccentricity is sometimes called half-focal separation.
## Notation
Three notational conventions are in common use:
1. e for the eccentricity and c for the linear eccentricity.
2. $\varepsilon$ for the eccentricity and e for the linear eccentricity.
3. e or $\epsilon$ for the eccentricity and f for the linear eccentricity (mnemonic for half-focal separation).
## Values
| conic section | equation | eccentricity (e) | linear eccentricity (c) |
|---|---|---|---|
| circle | $x^2+y^2=r^2$ | $0$ | $0$ |
| ellipse | $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$ | $\sqrt{1-\frac{b^2}{a^2}}$ | $\sqrt{a^2-b^2}$ |
| parabola | $y^2=4ax$ | $1$ | $a$ |
| hyperbola | $\frac{x^2}{a^2}-\frac{y^2}{b^2}=1$ | $\sqrt{1+\frac{b^2}{a^2}}$ | $\sqrt{a^2+b^2}$ |
where, when applicable, a is the length of the semi-major axis and b is the length of the semi-minor axis.
When the conic section is given in the general quadratic form
$Ax^2 + Bxy + Cy^2 +Dx + Ey + F = 0, \,$
the following formula gives the eccentricity e if the conic section is not a parabola (which has eccentricity equal to 1), not a degenerate hyperbola or degenerate ellipse, and not an imaginary ellipse:[1]
$e=\sqrt{\frac{2\sqrt{(A-C)^2 + B^2}}{\eta (A+C) + \sqrt{(A-C)^2 + B^2}}}$
where $\eta$= 1 if the determinant of the 3×3 matrix
$\begin{bmatrix}A & B/2 & D/2\\B/2 & C & E/2\\D/2&E/2&F\end{bmatrix}$
is negative or $\eta$= -1 if that determinant is positive.
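As a quick check of this formula, here is a sketch in R for the ellipse $x^2/4 + y^2 = 1$ (so $a = 2$, $b = 1$ and the standard formula gives $e = \sqrt{1-b^2/a^2} = \sqrt{3}/2 \approx 0.866$); the variable names simply mirror the symbols above:

```r
# x^2/4 + y^2 - 1 = 0 written as Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0
A <- 1/4; B <- 0; C <- 1; D <- 0; E <- 0; F <- -1

m3  <- matrix(c(A,   B/2, D/2,
                B/2, C,   E/2,
                D/2, E/2, F), nrow = 3, byrow = TRUE)
eta <- if (det(m3) < 0) 1 else -1

root <- sqrt((A - C)^2 + B^2)
sqrt(2 * root / (eta * (A + C) + root))   # 0.8660254, i.e. sqrt(3)/2
```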
Ellipse and hyperbola with constant a and changing eccentricity e.
## Ellipses
The eccentricity of an ellipse is strictly less than 1. When circles are counted as ellipses, the eccentricity of an ellipse is greater than or equal to 0; if circles are given a special category and are excluded from the category of ellipses, then the eccentricity of an ellipse is strictly greater than 0.
For any ellipse, let a be the length of its semi-major axis and b be the length of its semi-minor axis.
We define a number of related additional concepts (only for ellipses):
| name | symbol | in terms of a and b | in terms of e |
|---|---|---|---|
| first eccentricity | $e$ | $\sqrt{1-\frac{b^2}{a^2}}$ | $e$ |
| second eccentricity | $e'$ | $\sqrt{\frac{a^2}{b^2}-1}$ | $\frac{e}{\sqrt{1-e^2}}$ |
| third eccentricity | $e''=\sqrt{m}$ | $\frac{\sqrt{a^2-b^2}}{\sqrt{a^2+b^2}}$ | $\frac{e}{\sqrt{2-e^2}}$ |
| angular eccentricity | $\alpha$ | $\cos^{-1}\left(\frac{b}{a}\right)$ | $\sin^{-1} e$ |
### Other formulas for the eccentricity of an ellipse
The eccentricity of an ellipse is, most simply, the ratio of the distance between its two foci, to the length of the major axis.
The eccentricity is also the ratio of the semimajor axis a to the distance d from the center to the directrix:
$e = \frac{a}{d}.$
The eccentricity can be expressed in terms of the flattening factor g (defined as g = 1 – b/a for semimajor axis a and semiminor axis b):
$e = \sqrt{g(2-g)}.$
Comment: flattening is denoted by f in some subject areas, particularly geodesy.
Define the maximum and minimum radii $r_{max}$ and $r_{min}$ as the maximum and minimum distances from either focus to the ellipse (that is, the distances from either focus to the two ends of the major axis). Then with semimajor axis a, the eccentricity is given by
$e = \frac{r_{max}-r_{min}}{r_{max}+r_{min}} = \frac{r_{max}-r_{min}}{2a}.$
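For instance, taking Mercury's aphelion and perihelion distances as roughly $69.8$ and $46.0$ million km gives $e \approx (69.8-46.0)/(69.8+46.0) \approx 0.206$, in line with its catalogued orbital eccentricity.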
## Hyperbolas
The eccentricity of a hyperbola can be any real number greater than 1, with no upper bound. The eccentricity of a rectangular hyperbola is $\sqrt{2}$.
## Quadrics
The eccentricity of a three-dimensional quadric is the eccentricity of a designated section of it. For example, on a triaxial ellipsoid, the meridional eccentricity is that of the ellipse formed by a section containing both the longest and the shortest axes (one of which will be the polar axis), and the equatorial eccentricity is the eccentricity of the ellipse formed by a section through the centre, perpendicular to the polar axis (i.e. in the equatorial plane).
## Celestial mechanics
In celestial mechanics, for bound orbits in a spherical potential, the definition above is informally generalized. When the apocenter distance is close to the pericenter distance, the orbit is said to have low eccentricity; when they are very different, the orbit is said to be eccentric or to have eccentricity near unity. This definition coincides with the mathematical definition of eccentricity for ellipses, in Keplerian, i.e., $1/r$ potentials.
## Analogous classifications
A number of classifications in mathematics use derived terminology from the classification of conic sections by eccentricity:
• Classification of elements of SL2(R) as elliptic, parabolic, and hyperbolic – and similarly for classification of elements of PSL2(R), the real Möbius transformations.
• Classification of discrete distributions by variance-to-mean ratio; see cumulants of some discrete probability distributions for details.
## References
1. Ayoub, Ayoub B., "The eccentricity of a conic section," 34(2), March 2003, 116-121.
Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, using material from the Wikipedia article on "Eccentricity (mathematics)", available in its original form here:
http://en.wikipedia.org/w/index.php?title=Eccentricity_(mathematics)
http://csgillespie.wordpress.com/tag/amcmc/
# Why?
## January 12, 2011
### Random variable generation (Pt 3 of 3)
Filed under: AMCMC, R — Tags: AMCMC, computer, R, random-numbers, ratio-of-uniforms, statistics, uniform — csgillespie @ 3:59 pm
# Ratio-of-uniforms
This post is based on chapter 1.4.3 of Advanced Markov Chain Monte Carlo. Previous posts on this book can be found via the AMCMC tag.
The ratio-of-uniforms was initially developed by Kinderman and Monahan (1977) and can be used for generating random numbers from many standard distributions. Essentially we transform the random variable of interest, then use a rejection method.
The algorithm is as follows:
Repeat until a value is obtained from step 2.
1. Generate $(Y, Z)$ uniformly over $\mathcal D \supseteq \mathcal C_h^{(1)}$.
2. If $(Y, Z) \in \mathcal C_h^{(1)}$, return $X = Z/Y$ as the desired deviate.
The uniform region is
$\mathcal C_h^{(1)} = \left\{ (y,z): 0 \le y \le [h(z/y)]^{1/2}\right\}.$
In AMCMC they give some R code for generating random numbers from the Gamma distribution.
I was going to include some R code with this post, but I found this set of questions and solutions that cover most things. Another useful page is this online book.
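Since the book's example is for the Gamma distribution, here instead is a minimal sketch for the standard normal with $h(x) = e^{-x^2/2}$: the condition $y \le [h(z/y)]^{1/2}$ rearranges to $z^2 \le -4y^2 \log y$, and $\mathcal C_h^{(1)}$ sits inside the rectangle $0 < y \le 1$, $|z| \le \sqrt{2/e}$.

```r
rou_norm = function(n) {
  b = sqrt(2/exp(1))                 # bound on |z| for h(x) = exp(-x^2/2)
  out = numeric(n)
  for (i in seq_len(n)) {
    repeat {
      y = runif(1, 0, 1)             # bound on y is sup sqrt(h) = 1
      z = runif(1, -b, b)
      if (z^2 <= -4 * y^2 * log(y)) break
    }
    out[i] = z/y
  }
  out
}

hist(rou_norm(10000), freq = FALSE, breaks = 50)
lines(seq(-3, 3, 0.01), dnorm(seq(-3, 3, 0.01)), col = 2)
```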
## Thoughts on the Chapter 1
The first chapter is fairly standard. It briefly describes some results that should be background knowledge. However, I did spot a few typos in this chapter. In particular, when describing the acceptance-rejection method, the authors alternate between $g(x)$ and $h(x)$.
Another downside is that the R code for the ratio of uniforms is presented in an optimised version. For example, the authors use `EXP1 = exp(1)` as a global constant. I think for illustration purposes a simplified, more illustrative example would have been better.
This book review has been progressing at a glacial pace. Therefore, in future, rather than going through section by section, I will just give an overview of each chapter.
## December 2, 2010
### Random variable generation (Pt 2 of 3)
Filed under: AMCMC, R — Tags: acceptance-rejection, AMCMC, MCMC, R, random-numbers — csgillespie @ 5:44 pm
# Acceptance-rejection methods
This post is based on chapter 1.4 of Advanced Markov Chain Monte Carlo.
Another method of generating random variates from distributions is to use acceptance-rejection methods. Basically, to generate a random number from $f(x)$, we generate a RN from an envelope distribution $g(x)$, where $\sup f(x)/g(x) \le M < \infty$. The acceptance-rejection algorithm is as follows:
Repeat until we generate a value from step 2:
1. Generate $x$ from $g(x)$ and $U$ from $Unif(0, 1)$
2. If $U \le \frac{f(x)}{M g(x)}$, return $x$ (as a random deviate from $f(x)$).
## Example: the standard normal distribution
This example illustrates how we generate $N(0, 1)$ RNs using the logistic distribution as an envelope distribution. First, note that
$\displaystyle f(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2} \quad \mbox{and} \quad g(x) = \frac{e^{-x/s}}{s(1+ e^{-x/s})^2}$
On setting $s=0.648$, we get $M = 1.081$. This method is fairly efficient and has an acceptance rate of
$\displaystyle r = \frac{1}{M}\frac{\int f(x) dx}{\int g(x) dx} = \frac{1}{M} = 0.925$
since both $f$ and $g$ are normalised densities.
### R code
This example is straightforward to code:
```r
myrnorm = function(M){
  while(1){
    u = runif(1); x = rlogis(1, scale = 0.648)
    if(u < dnorm(x)/M/dlogis(x, scale = 0.648))
      return(x)
  }
}
```
To check the results, we could call `myrnorm` a few thousand times:
```r
hist(replicate(10000, myrnorm(1.1)), freq=FALSE)
lines(seq(-3, 3, 0.01), dnorm(seq(-3, 3, 0.01)), col=2)
```
## Example: the standard normal distribution with a squeeze
Suppose the density $f(x)$ is expensive to evaluate. In this scenario we can employ an easy to compute function $s(x)$, where $0 \le s(x) \le f(x)$. $s(x)$ is called a squeeze function. In this example, we'll use a simple rectangular function, where $s(x) = 0.22$ for $x \in [-1, 1]$. This is shown in the following figure:
The modified algorithm is as follows:
Repeat until we generate a value from step 2:
1. Generate $x$ from $g(x)$ and $U$ from $Unif(0, 1)$
2. If $U \le \frac{s(x)}{M g(x)}$ or $U \le \frac{f(x)}{M g(x)}$, return $x$ (as a random deviate from $f(x)$).
Hence, when $U \le \frac{s(x)}{M g(x)}$ we don’t have to compute $f(x)$. Obviously, in this example $f(x)$ isn’t that difficult to compute.
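A minimal sketch of the squeezed sampler, reusing the logistic envelope from the previous example (the rectangle $s(x) = 0.22$ on $[-1,1]$ does lie below the standard normal density there, since $\varphi(1) \approx 0.242$):

```r
myrnorm_squeeze = function(M){
  while(1){
    u = runif(1); x = rlogis(1, scale = 0.648)
    Mg = M * dlogis(x, scale = 0.648)
    s = if (abs(x) <= 1) 0.22 else 0
    if (u < s/Mg) return(x)        # squeeze accepted: dnorm(x) never evaluated
    if (u < dnorm(x)/Mg) return(x) # otherwise fall back to the full test
  }
}
```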
## November 28, 2010
### Random variable generation (Pt 1 of 3)
Filed under: AMCMC, R — Tags: AMCMC, inverse-cdf, logistic, R, random-numbers — csgillespie @ 7:35 pm
As I mentioned in a recent post, I’ve just received a copy of Advanced Markov Chain Monte Carlo Methods. Chapter 1.4 in the book (very quickly) covers random variable generation.
## Inverse CDF Method
A standard algorithm for generating random numbers is the inverse cdf method. The continuous version of the algorithm is as follows:
1. Generate a uniform random variable $U$
2. Compute and return $X = F^{-1}(U)$
where $F^{-1}(\cdot)$ is the inverse of the CDF. Well known examples of this method are the exponential distribution and the Box-Muller transform.
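For instance, for the exponential distribution with rate $\lambda$ we have $F^{-1}(u) = -\log(1-u)/\lambda$, so (since $1-U$ is also uniform) `-log(runif(1))/lambda` returns an exponential deviate.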
## Example: Logistic distribution
I teach this algorithm in one of my classes and I’m always on the look-out for new examples. Something that escaped my notice is that it is easy to generate RN’s using this technique from the Logistic distribution. This distribution has CDF
$\displaystyle F(x; \mu, s) = \frac{1}{1 + \exp(-(x-\mu)/s)}$
and so we can generate a random number from the logistic distribution using the following formula:
$\displaystyle X = \mu + s \log\left(\frac{U}{1-U}\right)$
Which is easily converted to R code:
```r
myRLogistic = function(mu, s){
  u = runif(1)
  return(mu + s * log(u/(1-u)))
}
```
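As with the normal example above, a quick graphical check (the parameter values are arbitrary):

```r
hist(replicate(10000, myRLogistic(0, 2)), freq = FALSE, breaks = 50)
lines(seq(-15, 15, 0.1), dlogis(seq(-15, 15, 0.1), scale = 2), col = 2)
```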
## November 27, 2010
### Advanced Markov Chain Monte Carlo Methods (AMCMC)
Filed under: AMCMC, R — Tags: AMCMC, MCMC, R — csgillespie @ 4:55 pm
I’ve just received my copy of Advanced Markov Chain Monte Carlo Methods, by Liang, Liu, & Carroll. Although my PhD didn’t really involve any Bayesian methodology (and my undergrad was devoid of any Bayesian influence), I’ve found that the sort of problems I’m now tackling in systems biology demand a Bayesian/MCMC approach. There are a number of introductory books on MCMC, but not that many on advanced techniques.
This book suggests that it could be used as a possible textbook or reference guide in a one-semester statistics graduate course. I’m not entirely convinced that it would be a good textbook, but as a reference it looks very promising. A word of warning, if you’re not familiar with MCMC then you should try the Dani Gamerman MCMC book first. Some later chapters look particularly interesting, including topics on adaptive proposals, population-based MCMC methods and dynamic weightings.
Anyway, I intend to work through the book (well at least a few of the chapters) and post my results/code as I go. Well that’s the plan anyway.
http://conservapedia.com/Quotient_rule
Quotient rule
From Conservapedia
This article/section deals with mathematical concepts appropriate for a student in late high school or early university.
The Quotient Rule is a rule in calculus pertaining to the derivative of a variable or function divided by another. It can be written as follows:
$\frac{d}{dx} \left(\frac{u}{v}\right) = \frac{v \left(\frac{du}{dx}\right) - u \left(\frac{dv}{dx}\right)}{v^2}$
Alternatively, in prime notation, it can be written as:
$\left(\frac{u}{v}\right)' = \frac{u'v - v'u}{v^2}$
It is easily remembered by the rhyme "Low 'D' High minus High 'D' Low, draw the line, and square below" (the denominator times the derivative of the numerator minus the numerator times the derivative of the denominator all over the denominator squared).
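A quick worked example (added here for illustration): taking $u = \sin x$ and $v = \cos x$, the rule gives
$\frac{d}{dx}\tan x = \frac{\cos x \cdot \cos x - \sin x \cdot (-\sin x)}{\cos^2 x} = \frac{\cos^2 x + \sin^2 x}{\cos^2 x} = \sec^2 x$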
http://crypto.stackexchange.com/questions/tagged/provable-security+protocol-design
# Tagged Questions
### Is this scheme a provably fair random number generation?
I have thought up a method for generating random numbers between a client and a server which I hope is fair: The client and server decide on a range in advance, $0$ through $n-1$. The server ...
### What are the cryptographic assumptions in the Dolev Yao model?
In the Dolev Yao model for interactive protocols, the cryptographic primitive (encryption, for example) is considered as a blackbox. Does blackbox here mean that the primitive is to be considered CPA ...
### Why are protocols often proven secure under the random oracle model instead of a hash assumption?
Is this true that whenever you design a protocol using a hash function, you must prove its security under the random oracle? I mean, is it possible to devise a protocol $P$ using a function $H$, and ...
### Exact mathematical definition of simulation based security?
I've been trying to understand cryptographic protocols and how to define their security. The problem is that while I can understand what the intuitive definition says, I have trouble understanding how ...
### Setting protocol parameters to achieve concrete security
Background One issue with modern security proofs is that they are usually asymptotic. In other words, such proofs are usually formulated as follows: For any polynomial-time adversary $\mathcal A$, we ...
### How do process calculi, CSP, Promela, … compare?
In protocol analysis, formal verification is a very important tool. What are the major differences between ...
http://math.stackexchange.com/questions/294255/what-base-did-ancient-egyptians-use
# What base did Ancient Egyptians use?
I'm wondering if anyone would know anything about Egyptian mathematics in a prehistorical setting. I've been reading mixed answers, with the Egyptians using base 10 or base 12 (interestingly, sometimes without using zero, which complicates things).
My question is: what base did the ancient Egyptians use? How did they come to use it? Did they in fact express the concept of zero? How has it affected modern mathematics/society as a whole (if it has)?
Thanks!
## 1 Answer
Egyptians used a base-10 system, but it was not a positional system. They had symbols for 1, 10, 100, etc. (I don't know what their upper bound was).
A simple but good exposition can be found here.
The system falls in line with their hieroglyph-writing system. I don't know how they came to use either this number system or hieroglyphics in general.
As for its impact: it also happens to be that Egyptians wrote fractions as a sum of unit fractions. So $\frac{3}{4}$ would be written as $\frac{1}{2} + \frac{1}{4}$. I don't know why this was the case either. But in Struik's History of Mathematics, he says that this notation was one of the two dominant notations for writing fractions well into the Middle Ages.
Today, we refer to these as Egyptian Fractions, and they appear occasionally in recreational mathematics or number theory.
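As a small aside (an illustrative sketch of my own, not part of the answer above), Fibonacci's greedy algorithm always produces one such unit-fraction expansion by repeatedly subtracting the largest unit fraction that still fits; the helper name `egyptian` is just for this example:
```
# Greedy (Fibonacci) expansion of p/q (0 < p/q < 1) into unit fractions
egyptian = function(p, q){
  denoms = integer(0)
  while(p != 0){
    n = ceiling(q/p)          # smallest n with 1/n <= p/q
    denoms = c(denoms, n)
    p = p*n - q               # p/q - 1/n = (p*n - q)/(q*n)
    q = q*n
  }
  denoms
}
egyptian(3, 4)                # 2 4, i.e. 3/4 = 1/2 + 1/4
```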
http://math.stackexchange.com/questions/35362/concerning-the-countability-of-the-set-of-primitive-recursive-functions
# Concerning the countability of the set of primitive recursive functions
Suppose we generally define the smallest set that has some properties as the set obtained by intersecting all the sets that have those properties (if the intersection is non-empty), and the tiniest set that has some properties as the set with the least cardinality that has those properties (if there is only one set of least cardinality). Would the set $F$ of primitive recursive functions, $F \subseteq \cup_{k\in \mathbb{N}} \left\{f:\mathbb{N}^k \rightarrow \mathbb{N} \right\}$, then also be the tiniest set (besides being the smallest one) that contains the base functions and is closed under composition and primitive recursion? (Or, to rephrase the question: Is $F$ the only countable set that contains the base functions and is closed under composition and primitive recursion?)
Side question: I know $F$ is a countable set, since to every function $f \in F$ there corresponds a very basic program containing only bounded loops and such; and a program is just a finite (but arbitrarily long) string over a finite (and fixed) alphabet; thus $F$ can be seen as a union of all finite strings (that make sense syntactically, i.e. are programs) over a finite fixed alphabet - and therefore has to be countable. But how can I prove that $F$ is countable without falling back on interpreting a function $f\in F$ as a program (just by staying "inside mathematics")?
-
I cannot understand in what way that is going outside of mathematics! – Mariano Suárez-Alvarez♦ Apr 27 '11 at 6:43
Of course choosing the way over the set of all programs is also inside mathematics, but for me it is still a difficult jump to see programs as a well-defined part of mathematics, so that's why I said "inside mathematics". Maybe I should have rather said something like "inside set theory"... – temo Apr 28 '11 at 4:17
## 2 Answers
You can prove $F$ is countable without appealing to its characterization as the set of all programs by the following argument. $F$ is constructed as follows: Let $F_0$ be the collection of base functions which is easily seen to be countable. Let $F_{n+1}$ be $F_n$ together with every composition of finitely many elements of $F_n$ and all applications of primitive recursion to elements of $F_n$. It's easy to prove by induction that each $F_n$ is countable, and hence the union of the $F_n$ is countable. It's clear that this union satisfies the desired closure properties, hence this union is in fact $F$.
This can be generalized: Closing an infinite set $X$ under $\leq |X|$ many finitary operations results in a set of the same size $|X|$. This can be generalized further still, but I'll leave it at this for now.
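To make the counting step explicit (a sketch added for clarity, using the standard facts that finite products and countable unions of countable sets are countable):
$|F_{n+1}| \le |F_n| + \sum_{k \ge 1} |F_n|^k + |F_n|^2 \le \aleph_0, \qquad |F| = \Big|\bigcup_{n \in \mathbb{N}} F_n\Big| \le \aleph_0 \cdot \aleph_0 = \aleph_0,$
where the middle term bounds the compositions (each uses finitely many elements of $F_n$) and $|F_n|^2$ bounds the applications of primitive recursion (one pair of functions each).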
-
The closure under composition and primitive recursion of the set of base functions and the Ackermann function is countable.
-
You can iterate this, of course. Pick one function which is not in the set I described, add it and take the closure under composition and primitive recursion. You can repeat this transfinitely many times, once per countable ordinal (I guess...) – Mariano Suárez-Alvarez♦ Apr 27 '11 at 6:56
Each one is countable, but the union is not, i.e. there are uncountably many countable ordinals! – Kaveh May 10 '11 at 3:06
I interpreted "You can repeat this transfinitely many times, once per countable ordinal" as meaning we continue doing it once per each countable ordinal, and the only way of making sense of it that I see is taking the union, this is what is usually done for limit ordinals in definitions using transfinite recursion. If you are not taking the union you are doing it up to some fixed countable ordinal (which can be chosen arbitrarily). – Kaveh May 10 '11 at 3:15
Then I think we agree in content, but I wouldn't put "once per ordinal" (it gives the sense of parts of a process,) but rather use "you can do this for any countable ordinal". :) – Kaveh May 10 '11 at 3:20
It is misleading (at least for me :), I think the alternative that I wrote above expresses the same thing without possibility of confusion. ps: btw, you are still taking union at some steps (for limit ordinals, like $\omega+\omega$). – Kaveh May 10 '11 at 3:23
http://math.stackexchange.com/questions/tagged/faq+integral
# Tagged Questions
### Evaluating $\int P(\sin x, \cos x) \text{d}x$
Suppose $\displaystyle P(x,y)$ a polynomial in the variables $x,y$. For example, $\displaystyle x^4$ or $\displaystyle x^3y^2 + 3xy + 1$. Is there a general method which allows us to evaluate the ...
http://mathoverflow.net/revisions/17856/list
Joel's answer made me think a bit and I believe I found an interesting solution for $f(x)$:
$f(x) = \begin{cases} ix & \text{if } Im(x) = 0, x\neq 0 \\ \cos(ix) & \text{if } Re(x) = 0,x \neq 0 \\ 2\pi i & \text{if } x = 0 \end{cases}$
It is of course a bit of a trick (reminds me of Wick Rotation), but it works for all $x \in \mathbb{R}$, because
$f(f(x)) = \cos(i(ix))=\cos(-x) = \cos(x)$
Update: Added the case $x=0$. For this we have
$f(f(0)) = \cos(i(2\pi i))=\cos(-2\pi) = \cos(0)$
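A quick numerical check of the trick (my own sketch, not part of the answer), using R's complex arithmetic:
```
# f as in the cases above: real non-zero -> ix, purely imaginary -> cos(ix), 0 -> 2*pi*i
f = function(x){
  x = as.complex(x)
  if(Im(x) == 0 && x != 0) return(1i*x)
  if(Re(x) == 0 && x != 0) return(cos(1i*x))
  2i*pi
}
f(f(1.3))                     # 0.2675+0i, the same as cos(1.3)
cos(1.3)
```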
http://math.stackexchange.com/questions/32242/how-to-solve-this-question
# how to solve this question?
Show that if $B$ is a basis for a topology on $X$, then the topology generated by $B$ equals the intersection of all topologies on $X$ that contain $B$.
-
I'm a little confused: for any family $\mathcal{F}$ of subsets of a set $X$, the topology generated by $\mathcal{F}$ is by definition the intersection of all topologies on $X$ that contain $\mathcal{F}$. So what is there to show? – Pete L. Clark Apr 11 '11 at 6:33
@Pete: Isn't there also another definition of the topology generated by $B$ as the set of arbitrary unions of elements of $B$? If that is so, the question would amount to showing the equivalence of these two definitions. – joriki Apr 11 '11 at 7:07
@joriki: Thanks for your comment: it is indeed a plausible interpretation of the question. (I actually had a similar thought after I made my comment, but rather than address it I chose to go to sleep...) – Pete L. Clark Apr 11 '11 at 14:33
## 1 Answer
Perhaps, as joriki suggests, the question is to show that for a base $\mathcal{B}$ for a topology on $X$, the family $\tau_{\mathcal{B}}$ of arbitrary unions of elements of $\mathcal{B}$ -- including, I suppose, the empty union, which gives the empty set -- is the smallest topology containing $\mathcal{B}$.
If so, this is certainly true and can be shown as follows:
Step 1: Since any topology is closed under arbitrary unions, any topology $\tau$ containing $\mathcal{B}$ must contain $\tau_{\mathcal{B}}$, so it is enough to show that $\tau_{\mathcal{B}}$ is a topology.
Recall that what we are assuming about $\mathcal{B}$ is that
(B1) $\bigcup_{B \in \mathcal{B}} B = X$ and
(B2) For all $B_1, B_2 \in \mathcal{B}$, if $x \in B_1 \cap B_2$, then there exists $B_3 \in \mathcal{B}$ such that $x \in B_3 \subset B_1 \cap B_2$.
Step 2: Thus $\emptyset, X$ are unions of elements of $\mathcal{B}$: the former by taking the empty union, the latter by (B1).
Step 3: Being the set of all unions of a certain family of sets, $\tau_{\mathcal{B}}$ is certainly closed under arbitrary unions.
Step 4: So what remains is to show that $\tau_{\mathcal{B}}$ is closed under finite intersections. For this, it is enough to show that if $U_1,U_2 \in \tau_{\mathcal{B}}$, then so is $U_1 \cap U_2$. To show this we need to use condition (B2), which, notice, has not yet been used. This verification takes two or three lines. I urge the OP to try it herself and tell us whether she succeeded and, if not, what she tried.
-
http://www.physicsforums.com/showthread.php?t=342591
Physics Forums
Blog Entries: 1
## Centripetal acceleration and airplane lift
1. The problem statement, all variables and given/known data
An airplane is flying in a horizontal circle at a speed of 460 km/h (Fig. 6-42). If its wings are tilted at angle θ = 37° to the horizontal, what is the radius of the circle in which the plane is flying? Assume that the required force is provided entirely by an “aerodynamic lift” that is perpendicular to the wing surface.
2. Relevant equations
Newton's second law, for centripetal motion:
$$F_{net} = m*\left( \frac{v^2}{r} \right)$$
3. The attempt at a solution
First let us identify the forces acting on the plane. There are exactly 3: The force of gravity, $$F_g$$, the force of the lift, causing it to fly, $$F_L$$, and the centripetal force caused by the rotation, $$m*\left( \frac{v^2}{r} \right)$$
We know that there must be some lift on the plane, keeping it in air. Because there is no vertical motion, we know that $$F_L - F_g*\cos(\theta) = 0$$. So $$F_L = F_g*\cos(\theta)$$
Next, we also know that the plane must be pulled in by the centripetal force. So the horizontal component of the lift must be caused by this force, and we have
$$m*\left( \frac{v^2}{r} \right) = F_{L,x} = F_L*\sin(\theta) = F_g*\cos(\theta)*\sin(\theta)$$
Solving for the radius r
$$\frac{m*v^2}{F_g*\cos(\theta)*\sin(\theta)} = \frac{v^2}{g*\cos(\theta)*\sin(\theta)} = r$$
Subbing the values
$$\frac{460^2}{9.8*\cos(37)*\sin(37)} = 44923 = r$$
The computer system for my homework has me entering this as meters instead of kilometers, which is the units of the shown result. However, it's not lining up. I suspect that there might be something wrong with my lifting force, but I'm not sure.
Recognitions: Homework Help I tried it, interesting problem. My approach was to draw the wing going like the "/" character, Fg down, Fc to the left toward the center and FL up and to the left 53 degrees above horizontal. I reasoned that the vertical component of FL must cancel Fg. So I got FL*sin(53) = Fg. I ended up with a much smaller radius than you did.
Blog Entries: 1
Quote by Delphi51 I tried it, interesting problem. My approach was to draw the wing going like the "/" character, Fg down, Fc to the left toward the center and FL up and to the left 53 degrees above horizontal. I reasoned that the vertical component of FL must cancel Fg. So I got FL*sin(53) = Fg. I ended up with a much smaller radius than you did.
This is the free body diagram I used for the problem. The blue line represents the airplane. The solid green line pointing downward represents the force of gravity, and the dashed green line is the perpendicular component of that force. The solid green line is the force of lift on the plane, and the dashed green line is the horizontal component of the lift force.
Oh yea, I tried your answer.
$$F_L*\sin(90 - \theta) = F_L*\cos(\theta) = F_g$$
or
$$\frac{F_g}{\cos(\theta)} = F_L$$
Plugging this into the centripetal motion equation
$$m*\left( \frac{v^2}{r} \right) = F_{L,x} = F_L*\sin(\theta) = \frac{F_g}{\cos(\theta)}*\sin(\theta) = F_g*\tan(\theta)$$
Then for radius:
$$\frac{m*v^2}{F_g*\tan(\theta)} = \frac{v^2}{g*\tan(\theta)} = r$$
$$\frac{460^2}{9.8*\tan(37)} = 28653 = r$$
Suffice to say, still not correct :(
Mentor
Blog Entries: 1
## Centripetal acceleration and airplane lift
Careful with units. What's the speed in m/s?
Recognitions: Homework Help Thanks, Doc! knight, I agree with all that right down to your last line. Using the velocity in m/s I've got R just over 2200 m.
Blog Entries: 1 Okay, I see what I was doing wrong... My units were wrong, but my model was wrong also. I was treating $$F_L$$ as a normal force, therefore $$F_L - F_g*\cos(\theta) = 0$$. But this is different. A lifting force counteracts gravity, so instead you'd have $$F_{L,y} - F_g = 0$$, which is why the tangent turns up later. Thanks for the help, guys.
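For anyone checking the numbers later, the corrected working fits in a few lines of R (a sketch of my own, not from the original thread):
```
# Banked turn: F_L*cos(theta) = m*g and F_L*sin(theta) = m*v^2/r
# so r = v^2/(g*tan(theta))
v     = 460*1000/3600         # 460 km/h in m/s
theta = 37*pi/180             # bank angle in radians
g     = 9.8
v^2/(g*tan(theta))            # roughly 2.2 km
```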
http://mathforum.org/mathimages/index.php?title=Markus-Lyapunov_Fractals&diff=17025&oldid=17023
# Markus-Lyapunov Fractals
### From Math Images
Markus-Lyapunov Fractal
A representation of the regions of chaos and stability over the space of two population growth rates.
Fields: Dynamic Systems and Fractals
Created By: BernardH, using Mathematica 5
# Basic Description
The Markus-Lyapunov fractal is much more than a pretty picture; it is a map. The curving bodies and sweeping arms of the image are in fact a color-coded plot that shows us how a population changes as its rate of growth moves between two values. All the rich variations of color in the fractal come from the different levels of stability and chaos possible in such change.
Before anything else, in order to generate a Markus-Lyapunov fractal, we must be able to represent animal population change mathematically. This entails more than simply writing an equation for constant growth or constant reduction – a population cannot grow infinitely, but rather is constrained by the amounts of food, space, etc., available to it. To model the pattern of growth and reduction that occurs as populations approach and retreat from their maximum sizes, mathematicians have developed the logistic formula (an iterative formula that determines each new iteration based on the difference between 1 and the value of the previous iteration, so that the values cannot become larger than 1), which models population growth fairly accurately by including a factor that diminishes as population size grows, just as food and space would diminish.
The logistic formula is driven by the initial population size and by the potential rate of change of that population. Mathematically, the more powerful of these values is the rate of change; it will determine whether the size of the population settles to a specific value, oscillates between two or more values, or becomes chaotic. To help determine which of these outcomes would occur, a mathematician named Aleksandr Lyapunov developed a method for comparing changes in growth and time in order to calculate what has been dubbed the Lyapunov exponent (the overall rate of change of a system over many iterations, expressed logarithmically). This is a handy little indicator, and here's why:
• If it is zero, the population change is neutral; at some point in time, it reaches a fixed point and remains there.
• If it is less than zero, the population will become stable (stability is different from a fixed point: a system that oscillates between two values is stable, and a system that oscillates between sixteen values is still stable). The lower the number, the faster and more thoroughly the population will stabilize.
• If it is positive, the population will become chaotic.
Another example of a Markus-Lyapunov fractal, this one with chaos in black and stability in gold.
What does all this have to do with the fantastical shapes of the Markus-Lyapunov fractal? Well, a scientist named Mario Markus wanted a way to visualize the potential represented by the Lyapunov exponent as a population moved between two different rates of growth. So he created a graphical space with one rate of growth measured along the x-axis and the other along the y. Thus for any point, (x,y), there is one specific Lyapunov exponent that predicts how a population with those rates of change will behave. Markus then assigned a color to every possible Lyapunov exponent – one color for positive numbers and another for negative numbers and zero. This second color he placed on a gradient, so that lower negative numbers are lighter and those closer to zero are darker, with zero itself being black. Some Markus-Lyapunov fractals also display superstable points ($\lambda=-\infty$, as the lowest possible Lyapunov exponent, indicates the fastest possible approach to stability) in a third color or black. By this code, Markus could color every point on his graph space based on its Lyapunov exponent.
Consider the main image on this page. The blue "background" shows all the points where the combination of the rates of change on the x and y axes will result in chaotic population growth. The "floating" yellow shapes show where the population will move toward stability. The lighter the yellow, the more stable the population.
# A More Mathematical Explanation
### The Logistic Formula
This comes from the field of Verhulst Dynamics. Basic, unrestricted growth can be represented by
$x_{(n+1)}=Rx_n$
Where $x_{(n+1)}$ is the population size at time (n + 1). But this, as discussed above, is not a realistic model of population growth in the ecological world. To account for the changing rate of change, R, of an actual population, Verhulst constructed
$R=\mathbf{r}(1-x_n)$
Where r is a parameter for the potential rate of change of the population. In this way, the overall rate of change, R, is higher when xn is lower and lower when xn is higher. Re-inserting this in our initial representation of growth, we have:
$x_{(n+1)}=(1-x_n)\mathbf{r}x_n$
This is the logistic formula.
A bifurcation diagram for the logistic formula. The population sizes resulting from 120 iterations of the formula based on each r value are plotted above that r value (after an initial, unrecorded period of 5000 iterations to level the systems). Note the self-similarity shown in the enlarged section.[1]
#### Bifurcation
The logistic equation is interesting partly for its properties of bifurcation. Bifurcation occurs when a system "branches" (hence the name) into multiple values. In the logistic formula, this means that, as the r value grows, xn goes from a single value to oscillating among two or more values; the population volume ceases to be constant and begins fluctuating between multiple volumes.
The diagram to the left shows how the logistic formula bifurcates as the value of r changes. Population sizes, xn, (y-axis) are plotted against the r values (x-axis) that generate them. The most stable state therefore appears as a single horizontal line. When this line appears to "branch" into two, we are observing bifurcation; the r value has changed so that the population is now oscillating between two volumes. As the branching continues, so does bifurcation: Three lines show oscillation among three volumes, four lines show oscillation among four volumes, and so forth. The grey areas show where the system bifurcates to the extent that it essentially "oscillates" among all possible xn values. That is, it becomes chaotic. As r values increase, we see wider and wider "bands" of chaos where ranges of r values yield only chaotic systems. Above the r value of 3, these "bands" become continuous, and all r values yield chaos.
In the enlarged portion, we can see that the diagram of logistic bifurcation is self-similar. This is the fractal property that carries through into Markus-Lyapunov fractals.
### The Lyapunov Exponent
The discrete form of the Lyapunov exponent is
$\lambda=\lim_{N \to \infty}\frac{1}{N}\sum_{n=1}^N \log_2 \left|\frac{dx_{(n+1)}}{dx_n}\right|$
In other words, the Lyapunov exponent $\lambda$ represents the limit of the mean (the presence of $\frac{1}{N}\sum_{n=1}^N ...$ shows that this is a mean) of the exponential (the presence of $\log_2$ makes the relationship between $\lambda$ and the original equation exponential) rates of change ($\frac{dx_{(n+1)}}{dx_n}$ defines the rate of change) that occur in each transition, $x_n \rightarrow x_{(n+1)}$, as the number of transitions approaches infinity.
What does this have to do with stability? The key is the log2 component, which renders numbers under 1 negative and those over 1 positive. This is what yields the properties of Lyapunov exponents laid out in the "Basic Explanation" – those mean overall rates of change that make the system finite (a geometric series or sequence is finite if it is multiplied by a factor, r < 1, that makes it converge to a discrete value or set of values) must be less than 1, giving us a negative Lyapunov exponent, while those rates of change that expand the system to the point of chaos must be greater than one, giving us a positive exponent. When the mean overall rate of change is zero, the logarithm diverges to $-\infty$, showing exactly what happens in a superstable system; the rate of change effectively ceases to exist.
In other words, the Lyapunov exponent is a method for examining the rate of change of a system considered over infinite iterations, then taking that rate of change and making it easily identifiable as a value that induces either chaos or stability.
#### In the Logistic Formula
Basic differentiation shows us that, for the logistic formula (3):
$\frac{dx_{(n+1)}}{dx_n}=\mathbf{r}-2\mathbf{r}x_n=\mathbf{r}(1-2x_n)$
Using this and a sufficiently large N number of iterations, we can approximate the Lyapunov exponent for the logistic formula to be:
$\lambda \approx \frac{1}{N}\sum_{n=1}^N \log_2 \left|\mathbf{r}-2\mathbf{r}x_n\right|$
Here we can see much more clearly a property that we have been assuming -- the variable that has the greatest impact on the stability of the logistic equation is r, not xn. Although xn still appears in the derivative, the values x1, x2, ... over which the average is taken are themselves generated by the logistic formula, so for essentially any starting volume x0 the long-run average (and hence the sign of the Lyapunov exponent) is determined by r alone. Changing the starting value therefore does not change whether the logistic function yields chaos or stability.
### Forcing the Rates of Change
Mathematically, the important part of Markus's contribution to understanding this type of system was not his method for generating fractals, but his use of periodic rate-of-change forcing. We have been discussing the great impact of the r value in determining the output of the logistic formula, but this value can have still greater impact if we do not choose to keep it constant. Anyone who has studied biology, as Markus has, knows that the rates of change of a population's size do not simply fluctuate with changing supplies of food and space, but also often alternate between two or more specific potential rates of change depending on such things as weather and mating seasons.
A Markus-Lyapunov fractal with rate-of-change pattern ab
In terms of the logistic formula, this means we choose a set of rates of change, $\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3, \ldots, \mathbf{r}_p$, where $p$ is the period over which the rates of change loop. When we force the rates of change to follow such a loop, we have a new, modular version of the logistic equation (3):
$x_{(n+1)}=\mathbf{r}_{n \bmod p}\,x_n(1-x_n)$
It is in these forced alterations in rates of change that the fascinating shapes of the Markus-Lyapunov fractal come out. Each of the fractals is formed from some pattern of two rates of change, a and b. So a pattern aba would mean each point on the fractal is colored based on the Lyapunov exponent of the logistic formula (7), where $\mathbf{r}_1 = a$, $\mathbf{r}_2 = b$, and $\mathbf{r}_3 = a$. That is, the r values would cycle a, b, a, a, b, a, a, b, a, ....
Because the axes used to map these fractals are measurements of changes in a and b, the pattern a would simply yield a set of vertical bars, just as the pattern b would yield horizontal bars. However, once the patterns start to become mixed, more interesting results come out. The image to the right shows an ab pattern. Note that it is much simpler than other images shown on this page; the main image, for instance, is a bbbbbbaaaaaa pattern.
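To make the coloring rule concrete, here is a rough numerical sketch (my own code for illustration, not Markus's program): it estimates the Lyapunov exponent of the forced logistic map at a single point (a, b) for a given pattern, using the approximation above. A Markus-Lyapunov image is just this value computed over a grid of (a, b) pairs and mapped to colors.
```
# Approximate Lyapunov exponent of the logistic map x <- r*x*(1 - x)
# when r follows a repeating pattern of the two values a and b
lyapunov = function(a, b, pattern = "ab", N = 4000, burn = 500, x0 = 0.5){
  rates = ifelse(strsplit(pattern, "")[[1]] == "a", a, b)
  x = x0
  total = 0
  for(n in 1:(burn + N)){
    r = rates[((n - 1) %% length(rates)) + 1]
    if(n > burn) total = total + log2(abs(r*(1 - 2*x)))   # log2|dx_(n+1)/dx_n|
    x = r*x*(1 - x)
  }
  total/N
}
lyapunov(3.2, 3.2)            # negative: the stable period-2 regime of r = 3.2
lyapunov(3.9, 3.9)            # positive: the chaotic regime of r = 3.9
```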
# Why It's Interesting
An enlargement of a section of "Zircon Zity," showing self-similarity.
### Fractal Properties
The movements from light to dark and the dramatic curves of the boundaries between stability and chaos here create an astonishing 3D effect. But the image is striking not only for its beauty but also for its self-similarity. Self-similarity is that trait that makes fractals what they are – zooming in on the image reveals smaller and smaller parts that resemble the whole. Consider the image to the right, enlarged from a section of the main image above. Here we see several shapes that repeat in smaller and smaller iterations. Perhaps ironically, this type of pattern is a common property of chaos.
For more images of the fractal properties of chaotic systems, see the Henon Attractor, the Harter-Heighway Dragon Curve, and Julia Sets.
One artist superimposed and edited several real Markus-Lyapunov fractals to create this piece of art.
### Artistic Extensions
After Markus saw the incredible beauty and intriguing three-dimensionality of the images generated by his plotting system, he immediately sent the images to a gallery in the hopes that it would display his images in an exhibition.[2] It's easy to see why he did so, and in fact, pictures based on these fractals have become a large part of what is called "fractalist" art. As with all domains of fractalist art, there is a great deal of debate in the art community over whether these images are truly "art" given their intrinsic reliance on a purely scientific, algorithmically-generated chart. One could say that such a process is devoid of creativity, but it is equally valid to say that the identification and presentation of the beauty in the science is an art in itself – a concept that is critical in modern art. Either way, there has been an undeniable artistic fascination with Markus-Lyapunov fractals; if the image seems familiar, you have likely seen it on posters, t-shirts, or any other canvas for graphic design.
# References
1. ↑ Peitgen, H., & Richter, P. (1986). The Beauty of Fractals: Images of Complex Dynamic Systems. Berlin: Springer-Verlag.
2. ↑ Dewdney, A. K. (1991). Leaping into Lyapunov Space. Scientific American, (130-132).
Other Sources Consulted
Elert, G. (2007). The Chaos Hypertextbook. http://hypertextbook.com/chaos/
http://math.stackexchange.com/questions/19187/is-there-such-a-thing-as-a-countable-set-with-an-uncountable-subset
Is there such a thing as a countable set with an uncountable subset?
Is there such a thing as a countable set with an uncountable subset?
Actually I know the answer. Well, I believe I know the answer, which is NO. Unfortunately, the professor in a Theory of Computation class said that yes, there is such a subset.
This is to settle a discussion with fellow students. A discussion that is going nowhere so we go to the internets for a verdict.
Thanks in advance for weighing in on this question.
-
4 Answers
Of course it is easy to see that any subset of a countable set is countable. While enumerating the larger set, just skip over any elements that are not in the subset, and you have an enumeration of the subset.
Nevertheless, since you mentioned you were in a computability theory course, there are a few computability-like senses in which something like a positive answer to the question is possible.
• Every infinite computably enumerable set has numerous subsets that are not computably enumerable. This is simply because every infinite set has uncountably many subsets, and most of these will not be computably enumerable, since there are only countably many c.e. sets. But from a computability perspective, computably enumerable is often the right analogue of countable, since those are the sets with a computable enumeration function, and so this is a very reasonable sense in which the claim is nearly true.
• There can be c.e. sets whose complement is infinite, but contains no infinite c.e. subset. Thus, you can enumerate the set $S$, and infinitely many numbers are not in $S$, but there is no computable way to enumerate an infinite set of numbers not in $S$. These are known as the simple sets, and were studied from the time of Post.
• Without AC, it is consistent that you can have a set $A$ with an equivalence relation $\sim$ on it, such that the number of equivalence classes of $\sim$ on $A$ is strictly larger than the cardinality of $A$. For example, we can make an equivalence relation $\sim$ having exactly $\mathbb{R}+\omega_1$ many equivalence classes, by saying that two reals are equivalent iff they are equal, or else they both code relations on $\omega$ that are well-orders of the same length. But under the Axiom of Determinacy, there is no $\omega_1$-sequence of distinct real numbers, and so this cardinality is strictly larger than $\mathbb{R}$.
-
I agree with the computable enumerable statement completely. I think what happened was that the prof used "countable" instead of "computably enumerable." I agree that there are infinitely many subsets, but each subset is still countable. The subset may not be computably enumerable. If in the subject you replace "countable" with "computably enumerable" then I agree! :) – starflyer Jan 27 '11 at 5:24
Wait, so when you say it's "nearly true", you mean it's true if we think of countable as computably enumerable. But strictly speaking, they are not equivalent. – starflyer Jan 27 '11 at 6:08
Good on you to figure out a reasonable interpretation of the question. – Ross Millikan Jan 27 '11 at 7:45
@starflyer Yes. I would advise you to not even to start thinking of ever using the terms synonymously. That would only get you in trouble. – Raphael Jan 27 '11 at 8:38
A set is countable if there is an injective function from it to the natural numbers.
Suppose $A$ is countable and $f\colon A \to \omega$ is an injection witnessing that, then if $B\subseteq A$ the restriction of $f$ to $B$ is an injective function from $B$ to the natural numbers, therefore $B$ is countable.
A side note is that without the axiom of choice there are infinite sets without a countable subset. That's a whole other deal though.
-
typo in the second word, I think... – Arturo Magidin Jan 27 '11 at 5:11
Thank you for the answer. It's pretty much the proof I gave. I know it's pretty simple. As for your side note, so without AC, there exist infinite sets without a countable subset. So this means that there still exists infinite sets with countable subsets. How is this possible? Thanks in advance. – starflyer Jan 27 '11 at 5:13
@Arturo: thanks, iPhones are nice but typos are not uncommon when writing from them. :-) – Asaf Karagila Jan 27 '11 at 5:15
@starflyer: You have it slightly off: if we don't assume AC, then the statement "every infinite set has a countable subset" is undecidable (assuming ZF consistent); so there are models of set theory without AC in which this is a false statement, meaning that there are at least some infinite subsets (in that model) without countable subsets. But there will also be infinite subsets with countable subsets (e.g., every infinite ordinal contains $\omega$ as a subset). So in ZF without AC you can add the axiom "there exists at least one set $S$ that is infinite and has no countable subset." – Arturo Magidin Jan 27 '11 at 5:19
@Arturo, thanks for the answer. Definitely food for thought. – starflyer Jan 27 '11 at 13:42
If your course was a TCS course, then maybe your professor mixed up countable sets with enumerable sets. (This happens sometimes, especially in German-language areas, where it is "abzählbar" vs. "aufzählbar".)
Every subset of a countable set is countable or finite (or empty). However, not every subset of an enumerable set is enumerable.
For example, $\mathbb{N}$ is trivially enumerable. But the image of the busy beaver function, which is a subset of $\mathbb{N}$, is not.
-
As I mentioned earlier, I suspected this to be the case. But the prof was adamant about the statement, even after defining "countable". – starflyer Jan 30 '11 at 6:18
Consider a set which contains a countable collection of uncountable sets.
One example would be the set whose $m^{th}$ member (even though they are not ordered) is the set of real numbers between $m$ and $m+1$.
Then I can say that such a set has a countable subset, and that each of these subsets contains an uncountable number of elements.
However, I could not say that such a set contains an uncountable subset of a countable subset of real numbers, merely that it contains a countable subset of (uncountable) sets.
-
Like you said, that is different from the original statement. The uncountable subset is not a subset of the original set but an element of the set. So strictly speaking is not a subset. – starflyer Jan 27 '11 at 13:06
This is irrelevant. – Qiaochu Yuan Jan 27 '11 at 15:47
http://nrich.maths.org/90/index?nomenu=1
## 'Making Cuboids' printed from http://nrich.maths.org/
Let's say you can only use two different lengths - $2$ units and $4$ units.
If you are using these lengths to make the sides of rectangles, how many different ones can you make? (Squares are just special rectangles!)
There are three because two are the same - just rotated.
But we are not making $2$-dimensional rectangles!
We are going to make $3$-dimensional cuboids.
Using just these two lengths as the edges of the cuboids how many different ones can you make?
Here are two of them. How many more are there?
It is a good idea to make them from squared paper and sticky tape, but you also need to find a way to record your results.
This is one way to show the results above, making a list.
The smallest is $2 \times 2 \times2$
The middle sized one is $2 \times4 \times4$
You should find a way to record your results that makes sense to you.
How can you make sure that your cuboids are not the same if rotated?
How many different cuboids did you get?
But what if we have three different lengths: $2$ units, $3$ units and $4$ units?
How many different cuboids can you make now?
Make them from squared paper and sticky tape.
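One way to check the counts (a small Python sketch, not part of the original activity): a cuboid is determined up to rotation by the multiset of its three edge lengths, so it suffices to enumerate those multisets.

```python
from itertools import combinations_with_replacement

# Each cuboid corresponds to a multiset of three edge lengths; order does not
# matter, which is exactly the "not the same if rotated" condition.
for lengths in ([2, 4], [2, 3, 4]):
    cuboids = list(combinations_with_replacement(lengths, 3))
    print(lengths, "->", len(cuboids), "cuboids:", cuboids)
# [2, 4]    -> 4 cuboids
# [2, 3, 4] -> 10 cuboids
```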
http://mathhelpforum.com/pre-calculus/2246-polynomial-rational-functions.html
# Thread:
1. ## Polynomial and Rational Functions
I'm not sure how to do this...
Determine the domain and the range of each function.
a) Y = X^3 + 3X^2 - 9X - 10
was wondering... if it was possible to do it just looking at it or if I had to make a table of numbers
2. Originally Posted by impreza02
I'm not sure how to do this...
Determine the domain and the range of each function.
a) Y = X^3 + 3X^2 - 9X - 10
was wondering... if it was possible to do it just looking at it or if I had to make a table of numbers
Once this is rendered into a more conventional form the answer is, if you know enough, the domain and range of this function can be found by inspection (just looking). Though you will still need to explain how you know that the range and domain are correct.
In plain ASCII we would normally write this as:
y=x^3+3x^2-9x-10,
or we can used TeX:
$y=x^3+3x^2-9x-10$
RonL
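For what it's worth, here is a small sympy sketch (my own addition) of the "by inspection" answer: a cubic is defined for every real x, and since it tends to $-\infty$ and $+\infty$ it takes every real value, so both the domain and the range are all of $\mathbb{R}$.

```python
from sympy import symbols, limit, oo, real_roots

x = symbols('x')
y = x**3 + 3*x**2 - 9*x - 10

# A polynomial is defined for every real x, and a cubic is unbounded in both directions:
print(limit(y, x, -oo), limit(y, x, oo))   # -oo, oo

# By continuity (intermediate value theorem) every real value is attained,
# e.g. y = 42 has at least one real solution:
print(real_roots(y - 42))
```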
http://mathoverflow.net/questions/87714?sort=oldest
## Does a “composite field” always exist?
Suppose $F$ is a field, and $F_1, F_2$ are two extension fields of $F$. Is it always the case that there is a field $L$, containing three subfields $F, K_1, K_2$ and two ring isomorphisms $\varphi_{i}:F_i\rightarrow K_i$ fixing $F$?
Note 1: We lose no generality assuming $F$, rather than an isomorphic copy of $F$, is a subfield of $L$.
I ask this because I was wondering if there is a way to combine the reals and the $p$-adic numbers into a single extension of $\mathbb{Q}$.
Note 2: I seem to recall someone telling me this couldn't be done (perhaps with additional topological data preserved). But I cannot seem to remember the reason why. In any case, I want to know if there is something other than topology which prevents it.
Concerning the p-adics, reals: We have $\mathbb{R} \subseteq \mathbb{C}$, $\mathbb{Q}_p \subseteq \mathbb{C}_p$ and $\mathbb{C} \cong \mathbb{C}_p$. Using this isomorphism you can embed $\mathbb{Q}_p$ into $\mathbb{C}$. Of course this iso. isn't defined in a constructible way. – Ralph Feb 6 2012 at 20:12
Regarding Note 2: There is no topological field containing topological copies of $\mathbb{R}$ and $\mathbb{Q}_p$, since each of these induce distinct topologies on $\mathbb{Q}$. The isomorphism Ralph describes is not continuous. – Kevin Ventullo Feb 7 2012 at 3:05
## 3 Answers
The tensor product $F_1 \otimes_F F_2$ is not 0, hence it has a quotient which is a field. This contains the images of both $F_i$.
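A concrete instance of this construction, added here for illustration: take $F=\mathbb{Q}$ and $F_1=F_2=\mathbb{Q}(\sqrt2)$. Then
$$\mathbb{Q}(\sqrt2)\otimes_{\mathbb{Q}}\mathbb{Q}(\sqrt2)\;\cong\;\mathbb{Q}(\sqrt2)[x]/(x^2-2)\;\cong\;\mathbb{Q}(\sqrt2)\times\mathbb{Q}(\sqrt2),$$
and projecting onto either factor gives a field quotient that contains the images of both copies of $\mathbb{Q}(\sqrt2)$ over $\mathbb{Q}$.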
Years ago, I first learnt the solution in David's answer and could not really find anything enlightening or memorable about it. Later the argument using tensor products was used in a text on algebraic geometry (namely the lemma that $|X \times_S Y| \to |X| \times_{|S|} |Y|$ is surjective) and of course now everything was clear as crystal. As a side remark, both proofs use variants of the axiom of choice (even twice). – Martin Brandenburg Feb 7 2012 at 11:00
(This also reminds me of the concise tensor product construction of the algebraic closure of a field: In $k' := \bigotimes_{0 \neq f \in k[x]} k[x]/(f)$ every polynomial has a root, thus the colimit of $k \subseteq k' \subseteq k'' \subseteq k''' \subseteq ...$ is an algebraic closure of $k$. One can show $k''=k'$, but this is not trivial.) – Martin Brandenburg Feb 7 2012 at 11:03
Sure. Find $T_i$ between $F$ and $F_i$ such that $T_i/F$ is pure transcendental and $F_i/T_i$ is purely algebraic. Let $T_i = F(S_i)$, with the $S_i$ algebraically independent. Without loss of generality, suppose that the cardinality of $S_1$ is less than or equal to that of $S_2$. Then the algebraic closure of $F(S_2)$ is a suitable $L$.
In the language of Model Theory, your question can be rewritten as: "does the theory of fields have the amalgamation property?" and the answer is yes.
Well known examples of theories with the amalgamation property include: fields, ordered fields, groups, abelian groups and boolean algebras.
+1 since you provide the general background, but it would be even a better answer if you add a specific reference where the amalgamation property for fields is proved. I believe that model theorists argue as in David's answer or equivalently with "$\kappa$-categorical" arguments. – Martin Brandenburg Feb 7 2012 at 11:07
@Martin: To be honest, I've never seen a published proof of the fact that fields have the amalgamation property. Model Theory textbooks (e.g. Hodges, Chang & Keisler) just mention the fact and use it to prove other things, like quantifier elimination for algebraically closed fields. – Ramiro de la Vega Feb 7 2012 at 11:53
http://mathhelpforum.com/advanced-algebra/176562-splitting-field-polynomial-over-finite-field-print.html
# Splitting Field of a Polynomial over a Finite Field
• April 1st 2011, 02:53 PM
slevvio
Splitting Field of a Polynomial over a Finite Field
Hello everyone, I was wondering if I could get some help with this.
Find the splitting field of the polynomial $f = x^3 + 2x +1 \in \mathbb{Z}_3 [x]$
Well I know that $\mathbb{Z}_3[x] / \langle x^3 + 2x + 1 \rangle$ is a field extension containing $\alpha = x + \langle f \rangle$ which is a root of the polynomial $f$.
But is this a splitting field ? Can there not be another element $\alpha '$ which hasn't appeared in this field extension? Thanks for any help.
• April 1st 2011, 03:45 PM
tonio
Quote:
Originally Posted by slevvio
Hello everyone, I was wondering if I could get some help with this.
Find the splitting field of the polynomial $f = x^3 + 2x +1 \in \mathbb{Z}_3 [x]$
Well I know that $\mathbb{Z}_3[x] / \langle x^3 + 2x + 1 \rangle$ is a field extension containing $\alpha = x + \langle f \rangle$ which is a root of the polynomial $f$.
But is this a splitting field ? Can there not be another element $\alpha '$ which hasn't appeared in this field extension? Thanks for any help.
Dividing $f(x)=x^3+2x+1$ by $x-w$, where $w:=x+\langle f\rangle$, we get that
$x^3+2x+1=(x+2w)(x^2+wx+w^2+2)$ (note that $x+2w = x-w$ in characteristic 3), and
since the field's characteristic is not 2 we know the above quadratic splits over $\mathbb{Z}/3\mathbb{Z}[x]/\langle f\rangle$
iff its discriminant is a square. Now just check that the discriminant is indeed a square in this field...
Tonio
PS: For example, $w+1$ is another root of $f(x)$ ...
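A quick computational sanity check (a sketch assuming sympy is available; not part of the original thread): reduce $f(w+c)$ for $c=0,1,2$ modulo $(3,\,w^3+2w+1)$ and observe that all three vanish, so $f$ already splits in $\mathbb{Z}_3[x]/\langle f\rangle$.

```python
from sympy import symbols, rem, trunc, expand

w = symbols('w')
minpoly = w**3 + 2*w + 1          # the relation satisfied by the adjoined root w

def f(t):
    return t**3 + 2*t + 1         # the polynomial from the thread

for c in range(3):
    # reduce f(w + c) first modulo w^3 + 2w + 1, then modulo 3
    r = trunc(rem(expand(f(w + c)), minpoly, w), 3, w)
    print(f"f(w + {c}) reduces to {r} in GF(3)[w]/(w^3 + 2w + 1)")
```

Equivalently, since $\mathbb{Z}_3[x]/\langle f\rangle \cong \mathbb{F}_{27}$ and finite extensions of finite fields are normal, adjoining one root automatically gives all of them.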
http://mathhelpforum.com/algebra/136033-exponential-question.html
# Thread:
1. ## Exponential Question
I am not positive this is the correct spot for this question ( it is an Algebra question but it was not given in the context of an Algebra class. I am currently working my way through Michael Spivak's book "Calculus"). Here is the question:
$x + 3^x < 4$
Now just by looking at it, I can see that the answer has to be x<1 but I don't see how I could show that mathematically. Any insights would be appreciated.
Dag
2. Hi
$f(x) = x + 3^x$ is an increasing function and $f(1)=4$, therefore for all $x<1$, $f(x)<4$
http://math.stackexchange.com/questions/134626/square-integrability-and-continuity-for-local-martingales
Square integrability and continuity for local martingales
For local martingales (as in Pro05)
$\text{[Local/general square integrability]} \overset{?}{\leftrightarrow} \text{[continuity]}$
That is, does one imply the other? I believe $[\text{continuity}] \rightarrow [\text{local square integrability}]$
Can anyone say one way or another? Any counterexamples to help develop intuition? (I have very little here)
2 Answers
Neither implies the other.
Let $X$ be any integrable random variable with $EX = 0$ and $E X^2 < \infty$, and let $X_t = X 1_{(1, \infty)}(t)$. Then $X_t$ is a square integrable martingale which is not continuous.
Let $Y$ be any integrable random variable with $E Y^2 = \infty$. Let $Y_t = Y$ for all $t$. Then $Y_t$ is a continuous martingale which is not even locally square integrable.
If $(X_t)_{t\geq 0}$ is a continuous local martingale and $(\sigma_n)$ is a localizing sequence such that $(X^{\sigma_n}_t)_{t\geq 0}$ is a martingale, then you can use the localizing sequence $$\tau_n=\inf \{t>0\mid |X_t|>n\},\quad n\geq 1,$$ and put $\rho_n=\tau_n\wedge \sigma_n$. Then by continuity we have $|X^{\rho_n}_t|\leq n$ for all $n\geq 1$ and hence $(X^{\rho_n}_t)$ is a bounded martingale for every $n$.
http://mathhelpforum.com/differential-geometry/179154-fourier-transform-test-function-space.html
# Thread:
1. ## Fourier transform on the test function space.
Hi All,
I am reading many books on the Fourier transform. They all state that the Fourier transform is not a mapping from test function space ( $C^\infty$ functions of compact support) to the test function space. In one of the books I read, they explain that if $\phi$ is a test function, and F( $\phi$) is a test function, then $\phi$ is 0. Does anyone know how to start on proving this? Or can anyone give me a test function (by above, any non zero one will do!) that I can compute the FT of? I have tried, and end up with having to integrate something that even Mathematica can't do.
Thanks.
2. Does the Fourier transform of a function with compact support itself have compact support?
3. I suspect not, but I can't think of an example of this.
4. Don't think of examples; look at the definition. Suppose f is compactly supported, and suppose the FT of f is compactly supported, and see if you can derive a contradiction.
5. I suppose I am trying to deduce that supp $\phi$ is the empty set, and so $\phi$ is 0, but I can't see how to proceed on your hint.
6. By one of the Paley–Wiener theorems, if a square-integrable function is compactly supported then its Fourier transform is the restriction to the real axis of an entire function. But if such a function has compact support then it must be identically zero, because a nonzero entire function can only have isolated zeros.
http://mathhelpforum.com/discrete-math/133795-invertible-relation.html
# Thread:
1. ## invertible relation
hi
my teacher wants us to think a conjecture for the invertible relation and then prove it, but I have absolutely no idea how to find that conjecture. Any help is appreciated.
$R,S \subset X \times X$ , we define $R \bullet S \text{ to be the set:} (x,y) \in X \times X , \text{s.t there exsits one and only one}$ $z \in X : (x,z) \in S,(z,y) \in R$
like a relation $\mathcal{R} \subset X \times X$ is invertible with respect to $\bullet$ if and only if (what condition does R has to satisfy?)
any idea?
cheers.
2. any idea?
3. , we define
This is not a regular composition of relations. In the standard definition of composition there is no restriction that there is at most one $z$. Is this a point of the exercise or a typo?
4. Also, by $R$ being invertible you mean that there exists an $S$ such that $S\bullet R=Id$ or ( $R\bullet S=Id$) where $Id$ is the diagonal relation?
Probably this has something to do with surjection and injection. It is well-known that a function is an injection iff it has a left inverse, and is a surjection iff it has a right inverse (I hope I did not mix it up). Maybe something similar is expected here.
5. Originally Posted by emakarov
This is not a regular composition of relations. In the standard definition of composition there is no restriction that there is at most one $z$. Is this a point of the exercise or a typo?
yea this is the exericese that our lecturer gives us.
6. Originally Posted by emakarov
Also, by $R$ being invertible you mean that there exists an $S$ such that $S\bullet R=Id$ or ( $R\bullet S=Id$) where $Id$ is the diagonal relation?
Probably this has something to do with surjection and injection. It is well-known that a function is an injection iff it has a left inverse, and is a surjection iff it has a right inverse (I hope I did not mix it up). Maybe something similar is expected here.
$S\bullet R= R\bullet S=Id$) where $Id$ is the diagonal relation. yes.
any idea what the conjecture is??
my thought is $\forall y \in x$ there exists one and only one $x \in X, (x,y) \in R$
is this correct?
7. my thought is there exists one and only one
I think this not correct. Let X = {1, 2} and let R = {(1, 1), (1, 2), (2, 2)}. Then R^2 = Id.
I don't know what the answer is, it requires some thinking and considering examples.
8. Originally Posted by emakarov
I think this not correct. Let X = {1, 2} and let R = {(1, 1), (1, 2), (2, 2)}. Then R^2 = Id.
I don't know what the answer is, it requires some thinking and considering examples.
i dont know either, but anyway thanks for your attention.
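For what it's worth, a small Python sketch (not from the thread) that checks emakarov's example against the "exactly one $z$" composition defined above:

```python
from itertools import product

def bullet(R, S, X):
    """R . S = {(x, y) : there is exactly one z in X with (x, z) in S and (z, y) in R}."""
    return {(x, y) for x, y in product(X, X)
            if len([z for z in X if (x, z) in S and (z, y) in R]) == 1}

X = {1, 2}
R = {(1, 1), (1, 2), (2, 2)}
identity = {(x, x) for x in X}

print(bullet(R, R, X) == identity)   # True: R is its own inverse with respect to the bullet product
```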
http://mathoverflow.net/questions/12305/local-global-principles-in-group-cohomology
## local-global principles in group cohomology
Let $G$ be a (profinite) group. It is known that if $H^n(G_p,A) = 0$ for all $p$, where $G_p$ denotes a Sylow $p$-subgroup of $G$, then $H^n(G,A) = 0$.
Are there other local-global principles for different sets of subgroups?
One I see commonly in my area is when $G$ is the absolute Galois group of a global field (for example a number field) and the subgroups are the absolute Galois groups of the completions of the field (these are subgroups of $G$). In many instances local-global principles do not hold, and their failure is measured by a "class group" or "Tate-Schaferevich group". – Kevin Buzzard Jan 19 2010 at 13:09
## 1 Answer
The local-global principle you are citing comes from the fact that for any open subgroup $H\leq G$, $H^n(G,A)\stackrel{\text{Res}}{\longrightarrow}H^n(H,A)\stackrel{\text{Cor}}{\longrightarrow}H^n(G,A)$ is multiplication by $[G:H]$. So from that you can derive lots of local-global principles. E.g. as a generalisation of the one you cite, you can deduce that if $H_1$ and $H_2$ are two open subgroups of co-prime index such that $H^n(H_i,A)=0$ for $i=1,2$, then $H^n(G,A)=0$.
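Spelling the coprime-index claim out a little (filling in the standard step): if $x \in H^n(G,A)$, then
$$[G:H_i]\,x \;=\; \mathrm{Cor}\circ\mathrm{Res}\,(x) \;=\; 0 \quad\text{for } i=1,2,$$
and since $\gcd([G:H_1],[G:H_2])=1$ there are integers $u,v$ with $u[G:H_1]+v[G:H_2]=1$, hence $x = u[G:H_1]x + v[G:H_2]x = 0$.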
http://math.stackexchange.com/questions/51216/some-questions-on-graded-rings/51226
# Some questions on graded rings
At the moment I'm mostly interested in commutative graded rings (and in particular those graded over $\mathbb{N}$), but any comments or references about more general graded rings would also be appreciated.
1. What is the name for ring homomorphisms which preserve the gradation? What about ring homomorphisms which respect the gradation, e.g. the map $k[x] \to k[x]$ defined by $x \mapsto x^2$?
2. It is clear that the category of graded rings and homomorphisms which preserve the gradation contains the usual category of rings as a full (even reflective) subcategory. As far as I can tell, the category is complete and the forgetful functor to $\textbf{CRing}$ is continuous. The construction given by Mac Lane [CWM 2nd ed., p. 123] seems to give a solution set, so that means there is a left adjoint. What is it?
3. In some contexts, e.g. the Proj construction, it seems that non-homogeneous elements never enter the picture and only get in the way. It is not hard to construct a definition of something akin to a graded ring without non-homogeneous elements (simply take a $\mathbb{N}$-indexed sequence of abelian groups $S_\bullet$ and define multiplication maps $\mu_{n,m} : S_n \otimes S_m \to S_{n+m}$ satisfying an associative law and require $S_0$ to be a ring and $\mu_{0,0} : S_0 \otimes S_0 \to S_0$ to be the ring multiplication), but perhaps there is an advantage of working with non-homogeneous elements that I'm not seeing. Is there?
4. The irrelevant ideal seems troublesome. Can we define it away somehow? For example, the improper ideal is not prime because its residue ring is the zero ring, which is defined to not be an integral domain.
5. If $S$ is a graded ring and $\mathfrak{m}$ is a maximal homogeneous ideal, what properties does $S / \mathfrak{m}$ have? Is there an analogue of the non-graded result that $A / \mathfrak{a}$ is a field if and only if $\mathfrak{a}$ is maximal? [Edit: It turns out this is not the question I meant to ask.]
6. Why are modules graded over $\mathbb{Z}$ but rings over $\mathbb{N}$? Are there difficulties with $\mathbb{Z}$-graded rings?
7. Finally, why is $S$ the canonical name for a graded ring?
When you say "graded commutative" do you really just mean graded and commutative with respect to the trivial symmetric monoidal structure, or do you mean "graded-commutative," which means graded and commutative with respect to the nontrivial symmetric monoidal structure? – Qiaochu Yuan Jul 13 '11 at 13:53
@Qiaochu: I mean a graded ring which has commutative multiplication. What monoidal structure are you referring to? – Zhen Lin Jul 13 '11 at 13:55
The one given by $a \otimes b \mapsto (-1)^{|a| |b|} b \otimes a$ on homogeneous elements. For whatever reason, this is what people generally mean when they say "graded-commutative" (which seems mildly confusing to me). – Qiaochu Yuan Jul 13 '11 at 13:58
@Qiaochu: Ah. That is confusing, yes. I'll have to remember that one. I'm not convinced that the forgetful functor has any adjoints (either left or right) at all, but the conditions for Freyd's adjoint functor theorem seem to be satisfied. The trivial grading is the inclusion functor, but I don't believe it's right adjoint to the forgetful functor, since the category consists of gradation-preserving arrows only, meaning if an element has grade $n$, then its homomorphic image also has grade $n$. There is a bigger category (hinted in question 1), but I'm not considering it at the moment. – Zhen Lin Jul 13 '11 at 14:04
yes, I was confused about which forgetful functor you were using ("the degree-zero part" is also a forgetful functor). – Qiaochu Yuan Jul 13 '11 at 14:28
## 1 Answer
Regarding #3, there is no disadvantage (that I can see) in working with non-homogeneous elements, so I don't think it really matters either way in the sense that there are two categories one could write down and they are equivalent, if not even isomorphic.
Regarding #4, I don't see why you would want to do this.
Regarding #6, I don't see a difficulty with $\mathbb{Z}$-graded rings: for example the ring of Laurent polynomials is $\mathbb{Z}$-graded. Probably for whatever applications you're looking at they are just unnecessary. As far as the Proj construction goes, for example, the irrelevant ideal is not an ideal in a $\mathbb{Z}$-graded ring...!
The reason you could still have $\mathbb{Z}$-graded modules over an $\mathbb{N}$-graded ring is that modules have an obvious notion of degree shift, whereas the multiplication on a ring prevents the degree shift of a graded ring from being a graded ring (with the same multiplication, anyway).
http://www.physicsforums.com/showthread.php?t=550964
Physics Forums
## Induced Magnetic Field in a Non-Uniform Electric Field
1. The problem statement, all variables and given/known data
An electric field is directed out of the page within a circular region of radius R = 3.00 cm. The field magnitude is $E = (0.500 V/ms)(1 - \frac{r}{R})t$, where t is in seconds and r is the radial distance (r≤R). What is the magnitude of the induced magnetic field at a radial distance of 2.00 cm?
2. Relevant equations
Maxwell - Ampere's Law
$\oint \vec{B} \cdot d\vec{s} = \mu_0\epsilon_0 \frac{d\Phi_E}{dt}$
3. The attempt at a solution
Since B and ds are parallel, their dot product will equal Bds, or since ds is a circle at radius r, $2\pi rB$. I got caught up on the right-hand side of the equation. I know I can simplify it to:
$\mu_0\epsilon_0 \frac{d}{dt}\int \vec{E} \cdot d\vec{A}$, which further simplifies to (since E and dA are parallel):
$\mu_0\epsilon_0 \frac{d}{dt}\int E\, dA$
I'm lost as to how to simplify that integral. I think I could do a double integral to solve to surface integral, however, this is an introductory physics course only requiring Calculus 1 and 2, so I wouldn't have assumed such integrals would be necessary.
Also, assuming I've simplified correctly, would I integrate, then take the time-derivative, or could I take the time derivative of E first, then integrate?
Thanks!
Let $E = F_{xy}(x,y,t)$. Then
$$\frac{d}{dt}\int E\,dA = \frac{d}{dt}\int_{x_1}^{x_2}\!\int_{y_1}^{y_2} F_{xy}(x,y,t)\,dy\,dx = \frac{d}{dt}\int_{x_1}^{x_2}\big[F_x(x,y_2,t) - F_x(x,y_1,t)\big]\,dx$$
$$= \frac{d}{dt}\Big[\big(F(x_2,y_2,t) - F(x_1,y_2,t)\big) - \big(F(x_2,y_1,t) - F(x_1,y_1,t)\big)\Big]$$
$$= \big(F_t(x_2,y_2,t) - F_t(x_1,y_2,t)\big) - \big(F_t(x_2,y_1,t) - F_t(x_1,y_1,t)\big)$$
$$= \int_{x_1}^{x_2}\big[F_{xt}(x,y_2,t) - F_{xt}(x,y_1,t)\big]\,dx = \int_{x_1}^{x_2}\!\int_{y_1}^{y_2} F_{xyt}(x,y,t)\,dy\,dx = \int_{x_1}^{x_2}\!\int_{y_1}^{y_2} \frac{d}{dt}F_{xy}(x,y,t)\,dy\,dx = \int \frac{d}{dt}E\,dA$$

You mean that E is everywhere normal to A, right? The question is poorly worded (maybe it comes with a diagram), but if I'm understanding it correctly, the equation that you wrote for the electric field implies that it's constant across the entire surface of a sphere. Therefore, instead of numerically evaluating $\int \vec{E}(t)\cdot d\vec{A}$, you can argue by symmetry that $\int \vec{E}(t)\cdot d\vec{A} = \int |E(t)|\,dA = |E(t)|\int dA = E(t)\cdot 4\pi r^2$. Saves you the trouble of actually evaluating the surface integral.
So since the integral depends only on two variables, of which time is neither, the time-derivative can essentially be freely taken before or after integration? Thanks for the detailed proof as well. Yes, it does come with a diagram, but it's just a drawing of a circle so I didn't think it was all that vital to include. But I am saying that E is parallel to the area vector everywhere (it would be normal to the surface though). And since the electric field varies with radial distance from the center of the circular region, wouldn't it be inappropriate to remove it from an integral with respect to r? That is what I initially attempted, but it seemed incorrect for that reason. I think most of the ambiguity is coming from the problem description, and apologies there.
There's no integration with respect to r. The integration is on the surface of the sphere, which means r is constant.
To make this more intuitive, compare the above surface integral, which I shall rewrite as the following:
∫ E.dA = ∫∫ E.dS
with this volume integral:
∫∫∫ E.dS r dr
= ∫∫∫ F dV
In words, after summing up all the elements which make up the surface area of the sphere, you integrate outward.
*Note that the extra r term arises from the Jacobian.
*Also note that I've written F for the volume integral since F=div(E).
The surface is a circle, not a sphere. I found a picture of the region online, sorry for not including it originally. I'm finding the induced magnetic field at a radial distance from the center of this region; the electric field is coming out of the page in that picture, and its magnitude varies radially. So I assumed that to find the total flux through the circle, I would need to integrate across the radius. But in your volume example, I think I understand the logic. Since you're only integrating at one radius, it can be disregarded from the integration, correct?
That makes more sense now. It's just like a cross section of a wire. The same logic that you recognized in your first post still stands. Since E is everywhere parallel to dA (perpendicular to the surface), you don't have to worry about the dot product:
∫ E.dA = ∫ E dA
(The dot product is there as a vector projection, kind of like finding the "effective" flux, but the projection of a vector E on another vector n is just itself if both E and n are in the same direction.)
Do the integral in polar coordinates:
∫ E dA = ∫ E r dr dθ
You'll get your result immediately. Again, it doesn't matter if you take d/dt first or evaluate the integral first. I would recommend taking d/dt first.
Excellent; thank you. I do apologize for the confusion in the beginning. Just out of curiosity, is it possible to solve the integral without resorting to a double integral? I don't see anyway to, since you have both A and E varying radially outwards (its not like A is constant, simplifying to just the product of A and E).
There's no choice in this problem but to integrate with respect to r, since the electric field varies (continuously) with r. If you really want to avoid a double integral that badly, you can transform this into a one-dimensional problem by just deforming the circle into a rectangle with width 2π. Then the problem would be equivalent to:
2π ∫ E r dr
It isn't much different from the double-integral though, and I'd just do the double-integral to preserve the logic behind doing it.
The double integral definitely makes the most sense. Like I stated in the original post, I was curious if it could be solved without the double integral since the course doesn't have multivariable calculus as a pre-req. Seemed strange to have problems with double integrals without a foundation in the area. Thanks again for your help (and the reminder that when switching to polar coordinates one needs to include r dr dθ, not just dr dθ!).
I just noticed this:
Quote by DNAPolymerase (its not like A is constant, simplifying to just the product of A and E).
The problem with the logic here is that A is constant because it's a flat surface.
∫ dA = A
In other words, a surface is f(x,y), or any function of two variables. If the surface you had wasn't A, but instead was f(x,y), then the surface isn't constant and the above relationship isn't true.
The quantity that isn't constant is ∫ E dA. Think of E as that f(x,y) that distorts the area (which is equivalently a density function). It "causes" the flat surface to become a concave down surface, so the "new" area is deformed into something that isn't constant.
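For reference, a short Python sketch of the final number (my own check, not from the thread), using $B\cdot 2\pi r = \mu_0\epsilon_0\, d\Phi_E/dt$ with the flux taken over the disc of radius $r$:

```python
import math

mu0, eps0 = 4e-7 * math.pi, 8.854e-12   # SI values
R, r = 0.03, 0.02                        # radii in metres

# dE/dt = 0.5*(1 - r'/R), so d(Phi_E)/dt is the integral of dE/dt over the disc of radius r:
dPhi_dt = 2 * math.pi * 0.5 * (r**2 / 2 - r**3 / (3 * R))

B = mu0 * eps0 * dPhi_dt / (2 * math.pi * r)
print(B)   # roughly 3.1e-20 T
```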
http://math.stackexchange.com/questions/20119/if-s-and-g0-are-integers-how-can-i-prove-that-x-and-y-exist-satisfyin
# If $s$ and $g>0$ are integers, how can I prove that $x$ and $y$ exist satisfying $x+y=s$ and $(x,y)=g$ if and only if $g \mid s$?
I am already able to prove that $g \mid s$ assuming $x+y=s$ and $(x,y)=g$, but I am having some trouble showing that assuming $g \mid s$, there exists an $x$ and $y$ such that $x+y=s$ and $(x,y)=g$. So far, I've started with saying that $g \mid 0$ necessarily. Therefore we also know that $g \mid (0x+sy)$. I'd like to be able to set the values of $x$ and $y$ to something to show that there exist an $x$ and a $y$ that satisfy $x+y=s$ and $(x,y)=g$, but I'm not really sure where to go from here. Can anyone offer any help?
## 2 Answers
HINT. $(gn,g) = g(n,1)=g$.
The converse follows because $a|b$ and $a|c$ implies $a|b+c$.
I'm not sure I understand what your hint is referring to. Can you offer a little more clarification please? – user6548 Feb 2 '11 at 20:48
@user6548: $gn+g = g(n+1)$. If $g|s$, can you write $s$ as $g(n+1)$ for some $n$? – Arturo Magidin Feb 2 '11 at 20:51
I'm very sorry, but I'm still having trouble wrapping my head around how this fits in. Does this eliminate the need to mention that g|0 and thus, the need to mention that g|(0x+sy)? If not, how do these follow one another? – user6548 Feb 2 '11 at 21:07
@user6548: If $g|s$, then $s=gd$ for some $d$. Write $d=(d-1)+1$. Can you now write $s$ as the sum of $gn$ and $g$ for some $n$? – Arturo Magidin Feb 2 '11 at 21:08
Ah, that clears things up perfectly. Thank you. – user6548 Feb 2 '11 at 21:18
$\rm\ \ g = (x,s-x) = (x,s)\ \iff\ 1 = (x/g,\:s/g)\:,\$ so choose $\rm x/g\$ coprime to $\rm s/g\:,\$ e.g. $\rm\ s/g + 1$
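A tiny numerical check of the construction in the hint (a sketch, not part of the thread): if $g \mid s$, take $y = g$ and $x = s - g$.

```python
from math import gcd

for g, d in [(3, 5), (7, 1), (4, 12)]:
    s = g * d
    x, y = s - g, g
    # gcd(s - g, g) = gcd(s, g) = g, and the sum is s by construction
    assert x + y == s and gcd(x, y) == g
    print(f"s = {s}, g = {g}: (x, y) = ({x}, {y})")
```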
http://mathhelpforum.com/math-challenge-problems/163235-line-function-moving-tangency.html
Thread:
1. line, function, moving tangency
Given the line $y=-\frac{k}{m}x+k$, where k is the y axis intercept and m is the x axis intercept, and the function $y=f(x)$, where $f(x)=\frac{1}{x-1}$, find a formula for the scalar $a$ such that $af(\frac{x}{a})$ is always tangent to the line $y=-\frac{k}{m}x+k$ for any positive real k and m.
See the attached graph for an illustration. In plain English the question is: if the line moves, what do you have to do to "a" so that af(x/a) is still tangent to the line.
I will now send my answer to the moderators.
Moderator approved CB
Attached Thumbnails
http://math.stackexchange.com/questions/155078/closed-form-for-eigenvectors-of-rotation-matrix?answertab=active
# Closed-form for eigenvectors of rotation matrix
For matrices that are elements of $SO(3)$ is there a formula for the eigenvectors corresponding to the eigenvalue $1$ in terms of the entries of the matrix?
## 1 Answer
Let $A \in SO(3)$. The matrix $A-A^T$ is skew-symmetric, hence of the form $$\begin{pmatrix} 0 & a & b \\ -a & 0 & c \\ -b & -c & 0 \end{pmatrix}$$ for some $a$, $b$, $c$. Then the vector $(-c,b,-a)^T$ is an eigenvector of $A$ with eigenvalue $1$.
This works since if $F$ is a rotation by the angle $\phi$ around the vector $n$, then the transformation $F-F^{-1}$ geometrically equals $\sin\phi$ times the operation of taking the cross product with $n$.
In computer graphics quaternion spherical linear interpolation is common. Since finding the eigenvector is straightforward would a pure matrix approach to spherical linear interpolation be competitive to quaternions? – user782220 Jun 7 '12 at 22:01
I have no idea about that! I should add that this method fails if $\phi=\pi$ (since then $a=b=c=0$), but in that case any nonzero column of $I+A$ will give you the eigenvector instead. – Hans Lundmark Jun 8 '12 at 4:24
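A quick numpy sketch of the recipe (my own check; the test rotation is built with Rodrigues' formula, which is an assumption of the example rather than something from the thread):

```python
import numpy as np

def rotation(axis, angle):
    """Rotation matrix about `axis` by `angle` (Rodrigues' formula)."""
    n = np.asarray(axis, float)
    n /= np.linalg.norm(n)
    K = np.array([[0, -n[2], n[1]],
                  [n[2], 0, -n[0]],
                  [-n[1], n[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

A = rotation([1.0, 2.0, 2.0], 0.7)
S = A - A.T                       # skew-symmetric: [[0, a, b], [-a, 0, c], [-b, -c, 0]]
a, b, c = S[0, 1], S[0, 2], S[1, 2]
v = np.array([-c, b, -a])         # candidate eigenvector for eigenvalue 1

print(np.allclose(A @ v, v))      # True (away from the angle = pi edge case)
print(v / np.linalg.norm(v))      # proportional to the axis [1, 2, 2]/3
```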
http://mathforum.org/mathimages/index.php?title=Parametric_Equations&redirect=no
# Parametric Equations
### From Math Images
Butterfly Curve
Field: Algebra
Image Created By: Direct Imaging
Website: [1]
The Butterfly Curve is one of many beautiful images generated using parametric equations.
# Basic Description
Parametric Equations can be used to define complicated functions and figures in simpler terms, using one or more additional independent variables, known as parameters. For the many useful shapes which are not "functions" in that they fail the vertical line test, parametric equations allow one to generate those shapes in a function format. In particular, Parametric Equations can be used to define and easily generate geometric figures, including (but not limited to) conic sections and spheres.
The butterfly curve in this page's main image uses more complicated parametric equations as shown below.
# A More Mathematical Explanation
Note: understanding of this explanation requires: *Linear Algebra
Parametric construction of the butterfly curve
Sometimes curves which would be very difficult or even impossible to graph in terms of elementary functions of x and y can be graphed using a parameter. One example is the butterfly curve, as shown in this page's main image.
This curve uses the following parametrization:
$\begin{bmatrix} x \\ y\\ \end{bmatrix}= \begin{bmatrix} \sin(t) \left(e^{\cos(t)} - 2\cos(4t) - \sin^5\left({t \over 12}\right)\right) \\ \cos(t) \left(e^{\cos(t)} - 2\cos(4t) - \sin^5\left({t \over 12}\right)\right)\\ \end{bmatrix}$
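A minimal numpy/matplotlib sketch of how one might render this parametrization (added for illustration; the sampling range and resolution are arbitrary choices):

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 12 * np.pi, 20000)
r = np.exp(np.cos(t)) - 2 * np.cos(4 * t) - np.sin(t / 12) ** 5
x, y = np.sin(t) * r, np.cos(t) * r

plt.plot(x, y, linewidth=0.5)
plt.axis('equal')
plt.title('Butterfly curve')
plt.show()
```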
### Parametrized Curves
Many useful or interesting shapes otherwise inexpressible as xy-functions can be represented in coordinate space using a non-coordinate parameter; the circle is a basic example. A circle cannot be expressed as a function where one variable is dependent on another. If a parameter (t) is used to represent an angle in the coordinate plane, the parameter can be used to generate a unit circle, as shown below. The parameter $t$ does, in the case of a unit circle, represent a physical quantity in space: the angle between the x-axis and a vector of magnitude 1 going to point $(x,y)$ on the coordinate plane.
The components of the vector that goes to $(x,y)$ have magnitudes of $x$ (horizontally) and $y$ (vertically), and form a right triangle with hypotenuse 1.
Using trigonometric ratios, the quantity $y$ can be represented in terms of $t$ as
$\sin (t) =\frac {\text{opposite}} {\text{hypotenuse}} = \frac{y}{1}$
so
$y = \sin (t)$ .
Likewise, the quantity $x$ can be represented in terms of $t$ as
$\cos t = \frac{\text{adjacent}} {\text{hypotenuse}} = \frac{x}{1}$
so
$x = \cos t$
Thus, $t$ generates physical points $(x,y)$ on the coordinate plane, controlling both variables. Since the values of the ratios have a set domain and range, the same proportional distance is maintained around the origin, creating a series of points equidistant from a fixed point, otherwise known as a circle:
In other quadrants, sines and cosines are defined in terms of complements of angles in the first quadrant (between 0° and 90°). Thus, directed distances stay the same, creating an equidistant set of points around the origin, identified as a circle.
Thus, a parameter $t$ is used to generate a shape that is otherwise not a function, with simpler component functions.
### Parametrized Surfaces
The surface of a sphere can be graphed using two parameters.
In the above cases only one independent variable was used, creating a parametrized curve. We can use more than one independent variable to create other graphs, including graphs of surfaces. For example, using parameters s and t, the surface of a sphere can be parametrized as follows: $\begin{bmatrix} x \\ y\\ z\\ \end{bmatrix}= \begin{bmatrix} \sin(t)\cos(s) \\ \sin(t)\sin(s) \\\cos(t) \end{bmatrix}$
### Parametrized Manifolds
While two parameters are sufficient to parametrize a surface, objects of more than two dimensions, such as a three dimensional solid, will require more than two parameters. These objects, generally called manifolds, may live in higher than three dimensions and can have more than two parameters, so cannot always be visualized. Nevertheless they can be analyzed using the methods of vector calculus and differential geometry.
### Parametric Equation Explorer
This applet is intended to help with understanding how changing an alpha value changes the plot of a parametric equation. See the in-applet help for instructions.
http://mathhelpforum.com/statistics/208615-covariance-blp-error-cef-error.html
# Thread:
1. ## Covariance of BLP error and CEF error
Hi all,
I've been struggling a bit with the following problem:
The random variables X and Y are jointly distributed. Let $\epsilon = Y - E(Y|X)$ and $U = Y - L(Y|X)$, where $E(Y|X)$ is the CEF and $L(Y|X)$ is the BLP. Determine whether the following is true or false: $Cov(\epsilon, U) = Var(\epsilon)$.
My working out is as follows:
We are given that $Var(\epsilon) = E[Var(\epsilon|X)] = E(\sigma^2_{Y|X})$
$Cov(\epsilon, U) = E(\epsilon U) - E(\epsilon)E(U) = E(\epsilon U) = E[[Y - E(Y|X)][Y - L(Y|X)]]$
$= E[[Y - E(Y|X)][Y - (\alpha + \beta X)]]$
$= E[Y^2 - YE(Y|X) - YL(Y|X) + E(Y|X)L(Y|X)]$
$= E(Y^2) - E[E(Y|X)] - E[Y(\alpha + \beta X)] + E[E(Y|X)(\alpha + \beta X)]$
$= E(Y^2) - E(Y^2) - [\alpha E(Y) + \beta E(YX)] + E[\alpha E(Y|X) + \beta XE(Y|X)]$
$= - [\alpha E(Y) + \beta E(YX)] + [\alpha E[E(Y|X)] + \beta E[XE(Y|X)]]$
$= - [\alpha E(Y) + \beta E(YX)] + [\alpha E[Y] + \beta E[XY]] = 0$
Hence, $Cov(\epsilon, U) \not= Var(\epsilon)$, assuming $E(\sigma^2_{Y|X}) \not= 0$.
Is that okay? The zero covariance between $\epsilon$ and $U$ seems like a strange result to me, but there are no clues in my textbook.
Thanks!
2. ## Re: Covariance of BLP error and CEF error
Would it help if I explained the question a little more?
The CEF is the conditional expectation function, i.e., the expectation of the conditional distribution of Y given X, and the BLP is the best linear prediction of the CEF, i.e., the function that minimizes $E(W^2)$, where $W = E(Y|X) - (a + b X)$.
In my workings, I've taken advantage of the law of iterated expectations (i.e., the marginal expectation of Y is the expectation of its conditional expectation), and the iterated product law (i.e., the expected product of X and Y is equal to the expected product of X and the conditional expectation of Y given X).
I suppose the thing that is confusing me, is that the parameters of the BLP (alpha and beta) are functions of random variables, so are random variables (right?), so should I really be pulling them outside the expectations operator and treating them as constants?
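Not an answer, but a Monte Carlo sketch one could use to sanity-check the claim numerically. The joint distribution below ($Y = X^2$ plus noise, so the CEF is $X^2$ and is genuinely nonlinear) is an arbitrary choice of mine, and the BLP is estimated by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
X = rng.standard_normal(n)
Y = X**2 + rng.standard_normal(n)       # CEF known by construction: E(Y|X) = X**2

eps = Y - X**2                           # CEF error
beta, alpha = np.polyfit(X, Y, 1)        # sample BLP coefficients (slope, intercept)
U = Y - (alpha + beta * X)               # BLP error

print(np.cov(eps, U)[0, 1], np.var(eps))  # the two numbers come out (almost) equal
```

(Both come out near 1 here, which is consistent with $Cov(\epsilon,U)=Var(\epsilon)$: $U-\epsilon = E(Y|X)-L(Y|X)$ is a function of $X$ alone, and $E(\epsilon\mid X)=0$.)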
http://physics.stackexchange.com/questions/tagged/constrained-dynamics
# Tagged Questions
The constrained-dynamics tag has no wiki summary.
0answers
31 views
### Lagrangian with a general constraint [closed]
Can any body help me out to solve this problem? I am familiar with mechanism of Lagrangian and I can solve some problems with constraints but this one is really hard to solve.
1answer
252 views
### How do I find constraints on the Nambu-Goto Action?
Let $X^\mu (t,\sigma ^1,\ldots ,\sigma ^p)$ be a $p$-brane in space-time and let $g$ be the metric on $X^\mu$ induced from the ambient space-time metric. Then, the Nambu-Goto action on $X^\mu$ is ...
2answers
208 views
### primary constraints for constrained Hamiltonian systems
I would be most thankful if you could help me clarify the setting of primary constraints for constrained Hamiltonian systems. I am reading "Classical and quantum dynamics of constrained Hamiltonian ...
2answers
146 views
### From Lagrangian to Hamiltonian in Fermionic Model
While going from a given Lagrangian to Hamiltonian for a fermionic field, we use the following formula. $$H = \Sigma_{i} \pi_i \dot{\phi_i} - L$$ where \$\pi_i = \dfrac{\partial L}{\partial ...
0answers
40 views
### The consistency conditions of constrained Hamiltonian systems
I am studying the Hamiltonian description of a constrained system. There are some questions puzzled me for days, which I have been stuck on it. From the lagrangian, we can obtain the primary ...
1answer
140 views
### Significance of the the Lagrange multipliers in statistical mechanics
In classic thermodynamics one can derive the Maxwell Boltzmann statistics by solving a Lagrange multipliers equation. In this process a new parameter $\beta$ is introduced to take account of the total ...
2answers
108 views
### Are Poisson brackets of second-class constraints independent of the canonical coordinates?
Say we have a constraint system with second-class constraints $\chi_N(q,p)=0$. To define Dirac brackets we need the Poisson brackets of these constraints: $C_{NM}=\{\chi_N(q,p),\chi_M(q,p)\}_P$ . Is ...
1answer
103 views
### Euler-Lagrange for constrained system
Suppose we have Euler-Lagrange system with generalized coordinate $r_1$ and $r_2$, and input $u_1$ and $u_2$. I know how to prove this system is indeed Euler-Lagrange system. Suppose now if we have a ...
2answers
77 views
### Hamiltonian constraint in spherical Friedmann cosmology
I'm taking a GR course, in which the instructor discussed the 'Hamiltonian constraint' of spherical Friedmann cosmology action. I'm not quite clear about the definition of 'Hamiltonian constraint' ...
2answers
88 views
### How is the physical Lagrangian related to the constrained minimization Lagrangian?
If we're minimizing an energy $V(q)$ subject to constraints $C(q) = 0$, the Lagrangian is $$L = V(q) + \lambda C(q).$$ I have fairly solid intuition for this Lagrangian, namely that the energy ...
1answer
111 views
### Question about non-holonomic geometric constraints
Suppose a point particle is constrained to move on the curve $y=x^2$. This would then be a non-holonomic geometric constraint since the particle has one degree of freedom and requires two coordinates ...
3answers
199 views
### Writing $\dot{q}$ in terms of $p$ in the Hamiltonian formulation
In the Hamiltonian formulation, we make a Legendre transformation of the Lagrangian and it should be written in terms of the coordinates $q$ and momentum $p$. Can we always write $dq/dt$ in terms of ...
5answers
312 views
### How are constraint forces represented in Lagrangian mechanics?
Suppose we try to obtain the movement equation for a particle sliding on a sphere (no friction, ideal bodies...). The only forces acting on the particle are its weight and - here's my problem - a ...
4answers
231 views
### What makes an equation an 'equation of motion'?
Every now and then, I find myself reading papers/text talking about how this equation is a constraint but that equation is an equation of motion which satisfies this constraint. For example, in the ...
1answer
149 views
### A particular case when Lagrange equation is equivalent to equation of motion on a Riemannian manifold
Suppose a particle is moving on a surface of a sphere,then it contains a holonomic constraint and so the three Cartesian co-ordinates are available with a constraint equation(equation of surface in ...
1answer
89 views
### Rotating sphere and circular trajectory: minimum speed
I have a sphere (mass = 3 kg), constrained to a fixed length rope, rotating (radius = 5 m) on a vertical plane. My textbook ask me about the minimum speed in the highest point in order to keep the ...
0answers
59 views
### Secondary constraints leads to the value of lagrange multiplier
From Lagrangian I got two primary constraint $\phi_i$ and $\phi$. And my Hamiltonian in presence of the constraints becomes- $$H_p=p\dot q-L+\lambda_i\phi_i+\lambda\phi$$ here the $\lambda_i$ and ...
3answers
159 views
### Odd number of second class constraints (!)
For my thesis, I have calculated the constraints for a system using Dirac method of constraint analysis. The problem is I got odd number of second class constraints (!), which gives me unusual numbers ...
3answers
129 views
### Quantizing first-class constraints for open algebras: can Hermiticity and noncommutativity coexist?
An open algebra for a collection of first-class constraints, $G_a$, $a=1,\cdots, r$, is given by the Poisson bracket $\{ G_a, G_b \} = {f_{ab}}^c[\phi] G_c$ classically, where the structure constants ...
1answer
248 views
### Gauss law in classical U(1) gauge theory
I can see that $a_{0}$ is not an independent field and Gauss law is a constraint on the theory arising from field equations. But, I don't get the geometrical picture. Let $A$ be the space of all ...
2answers
84 views
### Commutation for constraints
Suppose from the Hamiltonian I got the Primary constraints $$(\Phi_m,\Phi)$$ And $\dot \Phi_m$ , $\dot \Phi$ leads to secondary constraints $$(\gamma_m,\gamma)$$ respectively. Now if the commutation ...
2answers
414 views
### degree of freedom of a rigid body 5 or 6?
I'm confused here. I have a three particle (rigid) system. What would be the degree of freedom? I found out five. 3 coordinates for center of mass and 2 for describing orientation. But we have only ...
1answer
167 views
### Calculation of Commutation in constraint analysis
While analysing the constraints of a theory, suppose my canonical Hamiltonian is $$H_c=P^A\dot{A}+P^B\dot{B}-L$$ where $P^A=\frac{\partial L}{\partial \dot A}$ and $P^B=\frac{\partial L}{\partial ...
2answers
213 views
### Counting degrees of freedom in presence of constraints
In a $N$ dimensional phase space if I have $M$ 1st class and $S$ 2nd class constraints, then I have $N-2M-S$ degrees of freedom in phase space. How can I calculate the degrees of freedom in ...
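An illustration of that count (added here, not part of the original question): for the free Maxwell field the phase space per spatial point is $(A_\mu,\pi^\mu)$, so $N=8$, and there are two first-class constraints, $\pi^0\approx 0$ and the Gauss law $\partial_i\pi^i\approx 0$, so $M=2$, $S=0$, and $$N-2M-S = 8-4 = 4$$ phase-space degrees of freedom per point, i.e. the two transverse polarizations together with their momenta.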
4answers
395 views
### First class and second class constraints
Hello I am working on a project that involves the constraints. I checkout the paper of Dirac about the constraints as well as some other resources. But still confuse about the first class and second ...
2answers
439 views
### Why does tension not do work in this pulley system? etc
I have a slight difficulty understanding the solution to the following problem: A light inextensible string with a mass $M$ at one end passes over a pulley at a distance $a$ from a vertically fixed ...
1answer
248 views
### Relation between Dirac's generalized Hamiltonian dynamics method and path integral method to deal with constraints
What is the relation between path integral methods for dealing with constraints (constrained Hamiltonian dynamics involving non-singular Lagrangian) and Dirac's method of dealing with such systems ...
1answer
105 views
### Request for Reference: BRST formalism/transformations
Could anyone please suggest a very basic paper/reference/literature on BRST symmetry/formalism that requires rudimentary knowledge of Dirac's method for dealing with constrained systems and generation ...
2answers
154 views
### Elimination of velocities from momenta equations for singular Lagrangian
This doubt is related to Dirac's Generalized Hamiltonian Dynamics paper. Consider the set of $n$ equations $p_i = \partial L/\partial v_i$ (where $v_i$ is $\dot q_i = dq_i/dt$, the time derivative of ...
1answer
382 views
### When is the principle of virtual work valid?
The principle of virtual work says that forces of constraint don't do net work under virtual displacements that are consistent with constraints. Goldstein says something I don't understand. He says ...
1answer
124 views
### Showing constraint is nonholonomic
One example of a nonholonomic constraint is a disk rolling around in the Cartesian plane that is constrained not to be slipping. This leads to the constraints $dx - a \sin\theta\, d\phi = 0$ and $dy - ...
6answers
583 views
### Degree of freedom paradox for a rigid body
Suppose we consider a rigid body, which has $N$ particles. Then the number of degrees of freedom is $3N - (\mbox{# of constraints})$. As the distance between any two points in a rigid body is fixed, ...
2answers
167 views
### Virasoro constraints in quantization of the Polyakov action
The generators of the Virasoro algebra (actually two copies thereof) appear as constraints in the classical theory of the Polyakov action (after gauge fixing). However, when quantizing only "half" of ...
2answers
204 views
### Why so many arguments for the transformation equations of generalized coordinates?
For a system of $N$ particles with $k$ holonomic constraints, their Cartesian coordinates are expressed in terms of generalized coordinates as \mathbf{r}_1 = \mathbf{r}_1(q_1, q_2,..., q_{3N-k}, ...
http://mathoverflow.net/questions/3474?sort=newest
## Decomposition of k[G]
There's a well-known decomposition of $L^2(G)$, the regular representation of a compact complex Lie group $G$, called the Peter-Weyl theorem.
It turns out that for some reason I automatically assume there is a similar theorem that decomposes the regular representation $k[G]$ of an algebraic group $G$:
$$k[G] = \bigoplus_R \ R^* \otimes R$$
where sum goes over representations to $GL(n, k)$. For this to work I think we need $G$ to be a linear reductive group over, say, algebraically closed field $k$ of characteristic 0. Also, perhaps we need $\pi_1(G) = 1$.
But perhaps this is not true — the search hasn't given me a reference yet, but I wasn't able to provide a counterexample either.
Consider, for example, the multiplicative group $\mathbb G_m$. Then $k[\mathbb G_m] = k[x, x^{-1}]$ where each summand $k\cdot x^n$ is a separate representation of $\mathbb G_m$ into $\mathbb G_m = GL(1, k)$, specifically the one given by $a \mapsto a^n$. So the identity works.
So, is there such a theorem? What's a reference or a counterexample?
-
I also found books.google.com/… – Ilya Nikokoshev Oct 30 2009 at 22:04
What happens for SL(2,k)? When k is the complex numbers, this has no non-trivial finite-dimensional unitary representations, so one can't get decomposition of the left-reg rep of SL(2,C) as a sum of fin-dim reps. But your question is slightly different. Nonetheless, SL(n,k) or its universal cover seems an obvious first case to consider – Yemon Choi Oct 30 2009 at 22:24
I tried checking it before posting an answer — it feels like it works and passes some checks, e.g. you find the 3- and 5-dimensional reps. But not sure yet. – Ilya Nikokoshev Oct 30 2009 at 23:03
Indeed, `SL(2)` algebraic `<--->` `SU(2)` complex (for the purposes of this question at least). – Ilya Nikokoshev Oct 30 2009 at 23:04
2
Yemon- algebraic functions on SL(2,R) aren't L^2. They don't form a Hilbert space, and the action isn't unitary. – Ben Webster♦ Oct 31 2009 at 0:55
## 3 Answers
This is true for reductive groups, more or less by definition. An algebraic representation of an algebraic group is a comodule V over the algebra of functions O(G) of the group. Therefore, every representation V induces a map V -> V ⊗ O(G), or equivalently V^* ⊗ V --> O(G) (call the source of this map C(V), for coefficient space of V). It is not hard to see that the latter is a map of G x G modules. If G is reductive, then its representation category is semi-simple, and thus so is the representation category of G x G. In this case the simples of G x G are external tensor products of simples of G, and Hom(A ⊗' B, C ⊗' D) = d(A,C) ⊗ d(B,D), where d(V,W) = 0 if V is not isomorphic to W, and C otherwise. Here ⊗' means external tensor product (there doesn't appear to be a ⊠ symbol available here).
For non-reductive groups, you can still form O(G) in an analogous way:
Let A = ⊕V V^* ⊗' V, where here the sum is over ALL finite dimensional modules V (not just isoclass representatives, and not just simples), and again the tensor product is external, so this lives in a completion of Rep(G) ⊗' Rep(G), and ⊗' means Deligne tensor product of categories.
Well this A is way too big, but now let's quotient A by the images of f^* ⊗' id - id ⊗' f, for all f:V-->W. This cuts A back down, for instance it identifies C(V) and C(V') whenever V and V' are isomorphic. If the category Rep(G) is semi-simple, you can similarly use the projectors and inclusions of simple objects to reduce to a Peter-Weyl type decomposition.
One nice thing about this construction (even in the semi-simple case) is that it is basis free because you don't choose representatives of simple objects, and also it makes the multiplication structure completely trivial: V^* ⊗' V ⊗ W^* ⊗' W = V^* ⊗ W^* ⊗' V ⊗ W --> W^* ⊗ V^* ⊗' V ⊗ W, using the braiding (tensor swap). It also works in braided tensor categories and explains the multiplication structure on the "covariantized" quantum group.
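As a concrete illustration of the reductive, characteristic-zero case (my own addition, not part of the original answer): for $G = SL_2$ over an algebraically closed field $k$ of characteristic $0$, with $V_m$ the irreducible representation of highest weight $m$ (so $\dim V_m = m+1$), the decomposition reads
$$k[SL_2] \;\cong\; \bigoplus_{m \ge 0} V_m^* \otimes V_m$$
as $SL_2 \times SL_2$-modules; the 3- and 5-dimensional representations mentioned in the comments are the summands with $m=2$ and $m=4$.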
-
Thanks David! It's great to meet again, funny how MathOverflow connects people. By the way, I changed &boxtimes to its Unicode code, since it appears Safari doesn't like the former one. – Ilya Nikokoshev Nov 1 2009 at 8:30
Good to see you too, and read all your questions! – David Jordan Nov 1 2009 at 14:19
I'd like to add Chuck's comment from below to here, lest anyone be confused by what I wrote above. As Chuck pointed out, "reductive" doesn't mean "the category of its modules is semi-simple" in non-zero characteristic. So the first sentence above should have read "In the case your category is semi-simple"... However, the general construction starting in paragraph 2 should hold in any characteristic. Probably Jantzen's filtration is related to what is discussed in paragraph 4 above. – David Jordan Nov 12 2009 at 15:30
This statement is false in general for algebraic groups. It's true in characteristic 0, but it is not in general true in positive characteristic. Instead, one has a weaker statement in positive characteristic (cf Proposition 4.20 on page 213 in Jantzen's "Algebraic Groups"):
Let $G$ be a reductive linear algebraic group over an algebraically closed field of positive characteristic $k$. Then $k[G]$ has an increasing filtration whose subquotients are of the form $H(\lambda) \otimes H(-w_0 \lambda)$, where $\lambda$ runs over the dominant weights for $G$ and the $H(\lambda)$ are the modules arising as global sections of line bundles on the flag variety of $G$ (the so-called costandard modules for $G$).
Moreover, this is true when $k[G]$ is considered as a $G\times G$-module.
Note that unlike in characteristic 0, these modules $V$ are not in general irreducible. (It's worth noting that the category of modules over a reductive algebraic group is not in general a semisimple category — this is only true in characteristic 0).
-
Good to know! The question was restricted to char 0 since I suspected something will not work, thanks for an explanation of what breaks down! – Ilya Nikokoshev Nov 6 2009 at 16:52
Absolutely! I just fixed a small problem in what I'd written, but it wasn't a big thing. You can search inside Jantzen's book on Google Books if you're interested in the proof -- it's on page 213. – Chuck Hague Nov 6 2009 at 17:49
Your answer is very informative: I returned to change formatting and expand it a bit; feel free to revert! – Ilya Nikokoshev Feb 3 2010 at 22:35
The result is true for linear algebraic reductive groups over C. The sum is over all (isomorphism classes of) irreducible regular finite dimensional representations and the isomorphism is an isomorphism of $G\times G$-modules.
See Theorem 12.1.4 of Goodman and Wallach Representations and Invariants of the Classical Groups.
-
The proof does not use much more than Schur's Lemma and the fact that k[G] is a locally regular repn of $G\times G$, so I guess it goes through for any algebraically closed k. – Fran Burstall Oct 31 2009 at 0:15
That's what I naively think, but then how to explain the fact that textbooks and papers almost always consider only the case of `k=C`? What if there are some important nuances? – Ilya Nikokoshev Oct 31 2009 at 10:32
http://www.conservapedia.com/Schrodinger_equation
# Schrodinger equation
### From Conservapedia
The Schrodinger equation is a linear differential equation used in various fields of physics to describe the time evolution of quantum states. It is a fundamental aspect of quantum mechanics. The equation is named for its discoverer, Erwin Schrodinger.
## Mathematical forms
### General time-dependent form
The Schrodinger equation may generally be written
$i\hbar\frac{\partial}{\partial t}|\Psi\rangle=\hat H|\Psi\rangle$
where i is the imaginary unit,
$\hbar$ is Planck's constant divided by 2π,
$|\Psi\rangle$ is the quantum mechanical state or wavefunction (expressed here in Dirac notation), and
$\hat H$ is the Hamiltonian operator.
The left side of the equation describes how the wavefunction changes with time; the right side is related to its energy. For the simplest case of a particle of mass m moving in a one-dimensional potential V(x), the Schrodinger equation can be written
$-\frac{\hbar^2}{2m}\frac{\partial^2\psi}{\partial x^2}+V(x)\psi=i\hbar\frac{\partial \psi}{\partial t}$
### Derivation
The quickest and easiest way to derive Schrodinger's equation is to understand the Hamiltonian operator in quantum mechanics. In classical mechanics, the total energy of a system is given by
$E = \frac{p^2}{2m} + V(x)$
where p is the momentum of the particle and V(x) is its potential energy. Applying the quantum mechanical operator for momentum:
$p = \frac{\hbar}{i}\frac{\partial}{\partial x}$
and substituting into the classical expression for the energy, we get the Hamiltonian operator in quantum mechanics:
$\hat H = \frac{-\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V(x)$
from which Schrodinger's equation and the eigenvalue problem $\hat H\Psi = E\Psi$ can be easily seen.
### Eigenvalue problems
In many instances, steady-state solutions to the equation are of great interest. Physically, these solutions correspond to situations in which the wavefunction has a well-defined energy. The energy is then said to be an eigenvalue for the equation, and the wavefunction corresponding to that energy is called an eigenfunction or eigenstate. In such cases, the Schrodinger equation is time-independent and is often written
$E\psi=\hat H\psi$
Here, E is energy, H is once again the Hamiltonian operator, and ψ is the energy eigenstate for E.
One example of this type of eigenvalue problem is an electron bound inside an atom.
## Examples for the time-independent equation
### Free particle in one dimension
In this case, V(x) = 0 and so a solution to the Schrodinger equation is
$\psi = Ae^{-ikx}$
with energy given by
$E=\frac{\hbar^2 k^2}{2m}$
Physically, this corresponds to a wave travelling with a momentum given by $\hbar k$, where k can in principle take any value.
### Particle in a box
Consider a one-dimensional box of width a, where the potential energy is 0 inside the box and infinite outside of it. This means that ψ must be zero outside the box. One can verify (by substituting into the Schrodinger equation) that
$\psi = \sin(kx)$
satisfies the equation, and the boundary conditions $\psi(0)=\psi(a)=0$ force $k = n\pi/a$ where n is a positive integer. Thus, rather than the continuum of solutions for the free particle, for the particle in a box there is a set of discrete solutions with energies given by
$E_n=\frac{\hbar^2 k^2}{2m}=\frac{\hbar^2n^2\pi^2}{2ma^2}$
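As a quick numerical illustration (added here, not part of the original article), the formula gives the following energy levels for an electron in a box of width 1 nm; the box width is an arbitrary illustrative choice.

```python
# Energy levels E_n = n^2 pi^2 hbar^2 / (2 m a^2) for an electron in an
# infinite square well of width a = 1 nm, printed in electronvolts.
import math

hbar = 1.054571817e-34    # reduced Planck constant, J s
m_e = 9.1093837015e-31    # electron mass, kg
eV = 1.602176634e-19      # J per electronvolt
a = 1e-9                  # box width, m (illustrative choice)

for n in range(1, 5):
    E_n = (n**2 * math.pi**2 * hbar**2) / (2 * m_e * a**2)
    print(f"n = {n}:  E = {E_n / eV:.3f} eV")
```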
http://mathoverflow.net/questions/49779/can-we-extend-an-over-determined-set-of-polynomials-so-that-they-intersect-comple
## Can we extend an over-determined set of polynomials so that they intersect completely?
For a fixed degree $d>1$, let $\{x^\alpha: |\alpha|=d\}$ be the set of monomials of degree $d$ in the variables $x_1,\ldots, x_n$. View these monomials as $N=\binom{n+d-1}{n-1}>n$ complex polynomials in ${\mathbb{C}}[x_1,\ldots, x_n]$. For each collection $I$ of multi-indices of length $d$, consider the subscheme $\bigcap_{\alpha\in I}\{x^\alpha=0\}$ in affine space $\mathbb{C}^N$. As we let $I$ range over collections of at most $n$ multi-indices, these subschemes are nonempty and distinct from each other, i.e. these intersections have distinct zero sets counting multiplicity.
I'm wondering if we can extend this property to the entire collection. Is it possible to find $N$ polynomials $g_1(x_1,\ldots, x_N),\ldots, g_N(x_1,\ldots, x_N)$ such that
(1) $g_\alpha(x_1,\ldots,x_n,0,\ldots, 0)=x^\alpha$ (i.e. $g_\alpha$ is formed from $x^\alpha$ by adding extra variables).
(2) As we range over arbitrary $I$, the subschemes $\bigcap_{\alpha\in I}\{g_\alpha=0\}$ are non-empty and distinct from each other?
-
1
Yes. Since you are allowed to use arbitrarily large degrees on the g's, you can choose the coefficients in the gs so that their intersections are distinct. – J.C. Ottem Dec 18 2010 at 9:04
http://mathhelpforum.com/differential-equations/130621-linear-equation-integrating-factor.html
1. ## linear equation with integrating factor
Hello,
I have been trying this problem for about an hour now and I just can't seem to figure it out:
dA/dt + 2A/(50+t) = 3
I found that the integrating factor = (50+t)^2 and so d/dt (50+t)^2(A) = 3(50+t)^2
but after this my problem falls apart... I can't seem to figure out how to go from there... I tried integrating both sides and factoring but that was not the right answer... Any help would be greatly appreciated! Thank you!
2. Originally Posted by collegestudent321
Hello,
I have been trying this problem for about an hour now and I just can't seem to figure it out:
dA/dt + 2A/(50+t) = 3
I found that the integrating factor = (50+t)^2
Until now, everything's fine.
Originally Posted by collegestudent321
and so d/dt (50+t)^2(A) = 3(50+t)^2
I'm not really sure what you did here. Maybe Latex could help me to understand.
but after this my problem falls apart... I can't seem to figure out how to go from there... I tried integrating both sides and factoring but that was not the right answer... Any help would be greatly appreciated! Thank you!
From it, multiply both sides by the IF. Then integrate with respect to t. You'll reach $(50+t)^2A=3\int (50+t)^2 dt$.
3. Originally Posted by collegestudent321
Hello,
I have been trying this problem for about an hour now and I just can't seem to figure it out:
dA/dt + 2A/(50+t) = 3
I found that the integrating factor = (50+t)^2 and so d/dt (50+t)^2(A) = 3(50+t)^2
but after this my problem falls apart... I can't seem to figure out how to go from there... I tried integrating both sides and factoring but that was not the right answer... Any help would be greatly appreciated! Thank you!
$\frac{dA}{dt} + \frac{2A}{50 + t} = 3$
$(50 + t)^2\frac{dA}{dt} + 2(50 + t)A = 3(50 + t)^2$
$\frac{d}{dt}[(50 + t)^2A] = 3(50 + t)^2$
$(50 + t)^2A = \int{3(50 + t)^2\,dt}$
$(50 + t)^2A = (50 + t)^3 + C$
$A = 50 + t + \frac{C}{(50 + t)^2}$.
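For anyone who wants to double-check the closed form, here is a quick SymPy sketch (my own addition, not part of the original thread; it assumes SymPy is installed):

```python
# Verify that A(t) = 50 + t + C/(50 + t)^2 solves dA/dt + 2A/(50 + t) = 3.
import sympy as sp

t, C = sp.symbols('t C')
A = sp.Function('A')

ode = sp.Eq(A(t).diff(t) + 2*A(t)/(50 + t), 3)
print(sp.dsolve(ode, A(t)))   # general solution, equivalent to the one above

A_hand = 50 + t + C/(50 + t)**2
residual = sp.simplify(A_hand.diff(t) + 2*A_hand/(50 + t) - 3)
print(residual)               # prints 0
```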
4. wow... I would have never guessed that. Thank you so much!
5. Originally Posted by collegestudent321
wow... I would have never guessed that. Thank you so much!
I suggest you read the second post of this thread: http://www.mathhelpforum.com/math-he...-tutorial.html.
If you would never have guessed how to proceed after getting the IF, it means you didn't really understand the method itself. Chris's explanation of the method in three lines explains it very well. Have a look.
http://mathoverflow.net/questions/87914/solving-a-nonlinear-integral-equation
## Solving a nonlinear integral equation
Consider the integral equation $$f=g^2+H[g]^2$$ where $f\colon\mathbf R\to \mathbf R$ is an even and integrable function, $g$ is the function to be solved for, and $H[g]$ is the Hilbert transform of $g$. Furthermore, $g$ is (should be) even and real-valued.
The equation can be rewritten in several ways, for example, $$P\int_{-\infty}^\infty g(y)\frac1{\pi(x-y)}dy = \sqrt{f(x)-g(x)^2}$$ whose general form is $$P\int_{-\infty}^\infty g(y)h(x-y)dy = \phi(f(x),g(x))$$ where $P\int$ denotes the principal value. Although in this form it resembles a Fredholm equation, I have not been able to find any literature that covers it (integral equations are not my specialty). Therefore, I would greatly appreciate any pointers to literature on the solution (especially numerical) of this kind of equation.
Best regards, Emil Hedevang
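Not an answer, but here is the kind of naive numerical experiment one could try (my own sketch, with no convergence guarantee; the Gaussian $f(x)=e^{-x^2}$ is just a stand-in for the even, integrable $f$ in the question, and the discrete Hilbert transform is taken from SciPy's analytic signal):

```python
# Naive fixed-point sketch for g^2 + H[g]^2 = f on a uniform grid.
import numpy as np
from scipy.signal import hilbert

x = np.linspace(-50, 50, 4096)
f = np.exp(-x**2)

g = np.sqrt(f / 2.0)          # crude starting guess
for _ in range(200):
    Hg = np.imag(hilbert(g))  # discrete Hilbert transform of g
    g_new = np.sqrt(np.clip(f - Hg**2, 0.0, None))
    if np.max(np.abs(g_new - g)) < 1e-10:
        g = g_new
        break
    g = g_new

residual = g**2 + np.imag(hilbert(g))**2 - f
print("max residual:", np.max(np.abs(residual)))
```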
-
http://mathhelpforum.com/advanced-algebra/156386-matrix-norms-sup-max-headache-print.html
# Matrix norms, sup,max and headache
• September 16th 2010, 02:48 AM
Mollier
Matrix norms, sup,max and headache
Hi,
I am reading (again) about matrix norms and have a few questions.
The definition I have says that given any norm $\|\cdot\|$ on the space $\mathbb{R}^n$ of $n$-dimensional vectors with real entries, the subordinate matrix norm on the space $\mathbb{R}^{n\times n}$ of $n\times n$ matrices with real entries is defined by
$||A|| = \max_{x\in\mathbb{R}^n\backslash\{0\}}\frac{||Ax|| }{||x||}.$
I've also read that it can be defined as,
$||A|| = \sup_{||x||=1}||Ax||.$
The way I "understand" the second definition is that we take all the vectors $x$ in $\mathbb{R}^n$ whose norm is one, and multiply them with the matrix A to create a set of vectors, say,
$\{v_1,v_2,\cdots\}$
I believe that there is an infinite number of vectors with norm one..
We then take the norm of all these new vectors to get a set of real numbers. We now find the smallest real number that is larger than all the real numbers in this set, and use it as the norm of the matrix $A$. By the definition of $\sup$, I believe that this number that is larger than all numbers in our set is not part of the set.
As for the first definition, I am tempted to say that we take all vectors $x$ in $\mathbb{R}^n$ and divide then by $||x||$ to make a unit vector.We then multiply this unit vector by $A$ to get the same set of vectors ( $v_i$) as before. We take the norm of these vectors and get a set of numbers. Since $\max$ is involved I guess that this implies that the upper bound is in this set, and not outside of it as it is with $\sup$... I do not understand this..
I read somewhere that since $||x||_p$ is a scalar, we have that
$||A||_p = \sup_{x\neq 0}\frac{||Ax||_p}{||x||_p} = \sup_{x\neq 0}\left|\left|\frac{Ax}{||x||_p}\right|\right|_p$,
not sure how that works either..
By the way, I've also seen the matrix norm defined as,
$||A|| = \sup_{x\in\mathbb{R}^n\backslash\{0\}}\frac{||Ax||}{||x||}.$
I am confused. Hope someone can take the time and explain this to me, thanks.
• September 16th 2010, 04:04 AM
Mollier
After a bit more research I've found that if
$||A||=\sup_{||x||=1}||Ax||$, then the domain of the continuous function $||Ax||$ of $x$, namely the unit sphere $\{x : ||x||=1\}$, is closed and bounded (compact), and therefore the function achieves its maximum and minimum values on it. We may then replace $\sup$ with $\max$.
Sorry for all the ranting..
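A small numerical illustration of the definitions above (my own addition, not from the thread): sample random unit vectors, take the largest $||Ax||$, and compare with the exact spectral norm.

```python
# Estimate ||A||_2 by sampling unit vectors; the sampled value approaches
# the true spectral norm from below.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

best = 0.0
for _ in range(100_000):
    x = rng.standard_normal(4)
    x /= np.linalg.norm(x)            # restrict to the unit sphere
    best = max(best, np.linalg.norm(A @ x))

print("sampled max of ||Ax|| on ||x||=1:", best)
print("exact spectral norm ||A||_2:    ", np.linalg.norm(A, 2))
```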
http://mathematica.stackexchange.com/questions/tagged/statistics
# Tagged Questions
Questions on the statistical functions of Mathematica.
1answer
109 views
### Computing Correlations and p-values
I have two vectors $A$ and $B$ of length (say) $50$ or so, and I want to determine whether there is any correlation between their entries. I computed their correlation directly and found it to be ...
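For comparison outside Mathematica (purely illustrative, my own addition), the same computation of a correlation together with its p-value can be sketched in Python with SciPy; the toy data below stand in for the two length-50 vectors in the question.

```python
# Pearson correlation and its two-sided p-value for two length-50 vectors.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
A = rng.standard_normal(50)
B = 0.3 * A + rng.standard_normal(50)   # weakly correlated toy data

r, p = stats.pearsonr(A, B)
print(f"Pearson r = {r:.3f}, two-sided p-value = {p:.4f}")
```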
1answer
143 views
### Calculate variance of random walk?
How can I symbolically calculate the variance of the following random walk in Mathematica? Given several discrete random variables such that $p(Z_i=1-2k)=p$, where $k$ is a small real number, and ...
2answers
126 views
### Probability density histogram with unequal bin widths
I am confused by output of Histogram and HistogramList for probability density ("PDF") when ...
1answer
58 views
### Standard Deviation and StandardDeviationFilter
I found this scant description of StandardDeviationFilter in the documentation, implying one could use it to generate a moving standard deviation: I've got a ...
1answer
69 views
### How to get confidence bands of parameters from a fitting procedure?
I have done fitting of data points with a given model that has two parameters (A and B), using NonlinearModelFit. The result of the fit is the maximum of the ...
0answers
87 views
### build and estimate a time series process
I want to generate an EGARCH process. My problem is that I do not see how to create new processes beyond those available. The process itself is : $$\epsilon(t) = \sigma(t) \eta(t)$$ ...
0answers
49 views
### Is there a common name for OrderDistribution? [closed]
I can't seem to find any reference material on what Mathematica calls OrderDistribution via Google or Wolfram|Alpha or the ASA. I can read what the Documentation ...
4answers
120 views
### Descriptive statistics of two events
I am trying to use the descriptive statistics feature of Mathematica to answer the following question: suppose I have two events, A and B, whose occurrence is described by a normal distribution around ...
1answer
83 views
### No Tukey Test on ANOVA package
I'm trying to use the ANOVA package and have mathematica display the Tukey PostTest, but all I get is this: ...
1answer
127 views
### Multinomial logistic regression
Has anyone done multinomial logistic regression in Mathematica? The binomial case is essentially done on the LogitModelFit documentation page and works fine. I am ...
1answer
99 views
### PDF on TransformedDistribution of two BinomialDistribution too slow
I'd been doing my own convolutions of distributions for some calculations, decided to use built-ins. With ...
2answers
451 views
### How to use a 3×3 covariance matrix to plot an error ellipsoid?
I have a 3×3 error covariance in Mathematica, but I don't know how to use it for plotting the error ellipsoid. It would be great if you can show me how I can do that for the below covariance matrix: ...
1answer
109 views
### EmpiricalDistribution for Gini Coefficient Runs Slow
I am repurposing code that I found in this blog post (http://datavoreconsulting.com/sports/gini-coefficients-and-the-olympic-medal-race/) to generate the Gini coefficient for other data sets I am ...
1answer
77 views
### ConditionalEntropy and StatisticsLibraryNConditionalEntropy
There is a function available from the Statistics`Library context called NConditionalEntropy that appears to compute ConditionalEntropy. Thus ... ...
2answers
180 views
### Plotting 2D Data with Categorical Data in One of the Dimensions
I have a 2D data set in the form of: data={{Category, Integer}, {Category, Integer},{Category, Integer}...} It is a pretty simple data set with four categories ...
1answer
136 views
### Obtaining standardised regression coeffiecients
Regression coefficients are the constant that indicate the rate of change in one variable as a function of change in another. Standardised regression coefficients are the same, but refer to a change ...
1answer
107 views
### Discrete 3D plots of median ratios of two 2D matrices of lists of values
Lets say I have 2 2D arrays where each cell contains a list of values: Example: ...
2answers
136 views
### BoxWhiskerChart with logarithmic axes
I would like to make a following chart Ideally I would also like to control the boxes position on a logarithmic x-axis. Some inspiration how to do the latter may be found in the answer to my ...
3answers
219 views
### BoxWhiskerPlot - how to specify boxes position on the horizontal axis
Let's say I have a folowing set of data: k = 1 : list of values k = 3 : list of values k = 10 : list of values I know that to make a BoxWhiskerChart I have to ...
2answers
154 views
### Fitting data to an ARProcess using FindProcessParameters
I have 50 data points that I would like to represent as an AR(4) process. I'd like to over-plot the behavior of the estimated process model with that of the original (raw) data before I use the model ...
1answer
89 views
### Using NSolve on an equation that involves Mean and TruncatedDistribution
What I would like to do is create a mixture distribution that has a specified mean by varying one parameter in the distribution. To do this I've written the following code; ...
1answer
204 views
### Using DistributionFitTest on custom distributions in Mathematica 8
I originally posted this question on Stack Overflow, but I didn't get any answers, and I'm hoping to have better luck here. I'm trying to compute the goodness-of-fit of a bi-modal Gaussian ...
1answer
332 views
### Recommended book on random processes to understand new functionality in Mathematica 9?
I am interested in exploring the new functionality on random processes available in Mathematica 9, but I am not familiar with all of the underlying mathematics. Could you recommend a book that ...
1answer
153 views
### Filtering Lists in Mathematica [duplicate]
Possible Duplicate: Select/Delete with Sublist elements? I need help in filtering long lists of x,y coordinates.Lets use the following list as an ...
1answer
70 views
### Draw from HistogramDistribution with ParallelTable
I wanted to check something, but ran into troubles using HistogramDistribution in combination with ParallelTable. The code does the following: Compute a HistogramDistribution of some sample and use ...
0answers
70 views
### How to map over non-Null values in a list [duplicate]
Possible Duplicate: Selectively Mapping over elements in a List I have a list of distributions and a list of samples. I'm trying to calculate probabilities of some record being within one ...
0answers
165 views
### Matrix algebra vs. PrincipalComponents and Varimax/Oblimin
Using matrix algebra I can calculate loadings and scores from the covariance matrix (data matrix is column centered): ...
1answer
165 views
### How do you pattern match a DataDistribution
I have a function, f[dist_, samp_]:=somework[dist, samp] that I want to return Null or zero if passed a null distribution. I ...
3answers
442 views
### Forest plot with Mathematica 9
Does anyone have experience with or a package for the creation of forest plots in Mathematica 9? For instance for subgroup analyses of cox model or meta-analysis (e.g rmeta package in R)?
1answer
119 views
### Representative Smooth Kernel Distribution from Truncated Distribution
I am trying to produce a better distribution from a dataset that is bounded to be greater than 0. Here is an example distribution from the documentation that mimics the behavior of the actual dataset: ...
2answers
167 views
### Generating a range of numbers according to some rules
I'm pretty new to Mathematica, and I'm mainly a programmer so I don't have a lot of knowledge about maths. I want to generate a set of UNIQUE incremental numbers (series) according to the following ...
1answer
186 views
### EstimatedProcess hangs with documentation example
Not sure if this is a bug or a typo but for the first example of the ARMAProcess in Mathematica 9 we have: ...
2answers
392 views
### Finding distribution parameters of a gaussian mixture distribution
Short version: how to estimate the parameters of a mixture of multivariate normal distributions (i.e.: Gaussian mixture model)? Long version. I am trying to estimate the parameters of a mixture of ...
1answer
188 views
### Fix end point in smooth kernel distribution density
I am using some extreme value fitting method which results in a parametric distribution for values exceeding some threshold, all values $\geq 0$. For smaller values I'd like to use a smooth kernel ...
1answer
421 views
### Grain(Particle) Size Distribution (PSD) Analysis with Mathematica
I would like to do an analysis of the grain size distribution of a Monte Carlo Grain Growth simulation I implemented based on the nice example by Rituraj Nandan (see Link). The result of this ...
1answer
164 views
### Overlay of pdf using PairedHistogram in Mathematica
With a standard 1D histogram, I can generate a pdf overlay using Show and Plot. I'd like to do something similar with ...
1answer
143 views
### Getting slightly different results in fitting a logit model in R and Mathematica
I'm fitting some data to a Logit model in both Mathematica and R and I'm getting slightly different results. R code: ...
1answer
155 views
### Trying to plot this probability
Can anyone help me plot this? log P(X >= x) = alpha logx x=0.001 + k(0.001) k= 0, ..., 100 I can't figure out the coding for this.. I've been trying this for a while, and can't seem to figure it ...
1answer
215 views
### Incorrect means of order statistics for the standard normal distribution
Table[Mean[OrderDistribution[{NormalDistribution[], 4}, i]], {i, 1, 4}] // N (* {-1.02938, -6.47326, 0.297011, 1.02938} *) Above are the means of order statistics ...
1answer
163 views
### Inconsistency in Histogram's “Probability” Binsize
Context Let me define a Probability distribution (following the documentation and with some connection to this question) ...
2answers
132 views
### How to find synchronization offset?
I have 2 sets of data containing delays (audio / video). ...
0answers
154 views
### Mathematica Complains about Non Symmetric Covariance matrix, when it's not the case
I was doing some fitting with Mathematica7 using NonlinearModelFit. It's quite long the program to do the fit and that's why I am not displaying here ... It goes ok, and I can get the fit parameters ...
1answer
171 views
### How to create function that classifies sample data
I want to create expression, that classifies samples ( 4 weather features -> decision). Example training data: For the values that i have data for, I can use rules: ...
5answers
2k views
### Estimate error on slope of linear regression given data with associated uncertainty
Given a set of data, is it possible to create a linear regression which has a slope error that takes into account the uncertainty of the data? This is for a high school class, and so the normal ...
2answers
130 views
### How to find the variance under the assumption that x follows some probability distribution
I am aware that we can find the expectation under the assumption that x follows some probability distribution, something like this: ...
1answer
185 views
### Problem with Estimating the parameters value of gamma distribution
can anyone please help me with this problem? I am trying to estimate parameters of gamma distribution (fitted into a set of data). Following are my command and the output produced by mathematica: ...
2answers
295 views
### Total Variation Distance of probability matrix
How can I calculate the Total Variation Distance of a transition matrix? Is there any built-in function? I've searched all documentation and haven't found anything. ** More information: Let me try ...
1answer
385 views
### stationary distribution of a transition matrix
How can I solve the stationary distribution of a finite Markov Chain? In other words, how can I estimate the eigenvectors of a transition matrix?
1answer
227 views
### BUGS-type calculations in Mathematica
Recently I have been teaching myself how to Bayesian calculations with the BUGS language (JAGS, in particular). However, I find myself wondering how one might use Mathematica to do similar ...
2answers
246 views
### How to fit one distribution to another?
I have a custom distribution created to model some experimental observations. While too complicated to include in this question, I can provide an example and some illustrations to convey a sense of ...
http://math.stackexchange.com/questions/4621/simple-formula-for-curve-of-dj-crossfader-volume-dipped/4622
# Simple Formula for Curve of DJ Crossfader volume : Dipped
Below are the common set of curves that are used in a DJ mixer on the crossfader. I have a software equivalent and am using the "Transition" type curve but would like to include some different curve types.
On both the x and y axes my range is from 0 to 1. What I usually do is have a function that gives me one side of the curve (i.e. just the red from the diagrams below). Then, as I need two levels (the red and the blue) for a given x value, I invert the x value and feed it to the same formula (something like invertedX = x * -1 + 1).
I need the formula for the Dipped curve in the diagram below.
Extra credit goes to those who can give me the formulas to the other curves
• Intermediate
• Constant Power
• I'm pretty sure Slow Fade, Slow Cut and Fast Cut are all the same formula with just a parameter difference or two.
I have Transition (the easiest of them all)
-
Is "intermediate" supposed to be linear? The "fade" and "cut" ones would have to be piecewise, and the rest might be represented with simple powers. – J. M. Sep 14 '10 at 9:45
## 3 Answers
Intermediate is clearly $y = 1 - x$ and $y = x$; should be similarly related to transition.
I want to say that Dipped is a parabola (not sure it is; hard to tell); in which case it would be
$y = (x-1)^2$ and $y = x^2$
But there are many parabolas that fit to the points (0,0) and (1,1), or (0,1) and (1,0).
The name "Power" seems to imply Power in Sound, which would mean logarithms. I could imagine fitting logarithms into that, but perhaps it would be easier to just use 1-dipped. That is, $y = 1 - (x-1)^2$ and $y = 1 - x^2$
The fade/cut/cut could be cubic formulas, shifted upwards +0.5. So they'd be variations of $x^3 + 0.5$ stretched vertically/horizontally.
I'll get back to this when I'm less sleepy, heh.
-
1-dipped worked like a charm. I would still like to see your ideas on fade/cut/cut however. thanks. – Aran Mulholland Sep 14 '10 at 11:13
For the slow fade maybe you can take the Gaussian curve $y = e^{-x^{2}}$; here is the diagram below.
And for the transition you could try $y=-|x|$.
-
could you explain how to do this in the range 0..1? – Aran Mulholland Sep 14 '10 at 11:14
@Aran: Please see the figure: Dont take the negative side of the graph, that is from -1 to 0. – anonymous Sep 14 '10 at 11:49
I was wrestling with some of these same questions myself earlier and never really found completely solid answers for what makes a good curve. I can, however, share my functions which I wound up using. They're all constant-power ones, but with different levels of fade/cut.
To understand what makes a curve constant-power, you have to understand that the signal is a sound-pressure level signal and that power goes as sound pressure squared. So if we have input signal $w_1$ and $w_2$ and we're attenuating the signal by multiplying $w_1 \cdot f(x)$ and $w_2 \cdot f(1-x)$ then $f$ is constant-power if $f^2(x)+f^2(1-x) = 1$. So, in this case, the easiest way would be to make $f(x) = \cos(\frac{\pi}{2}x)$.
In fact, we can generalize this and say that any function $g$ with range [0,1] for domain [0,1] can be used to produce a constant-power crossfade function $f(x)=\cos(\frac{\pi}{2}g(x))$. However, it'll make the most sense if $g(0) = 0$, $g(0.5)=0.5$, $g(1)=1$, and $g$ is monotonic. With this in mind, I more looked at functions $h$ where $h(-1) = -1$, $h(0)=0$, and $h(1)=1$ and then just did a simple linear transform between [-1,1] and [0,1]. So, the first thing I tried was $h(x)=x^{2n+1}$ for non-negative integers n. This turned out to work quite well. $n=0$ gives the constant power curve you show above and then as I go to $n=1,3,10$ I get curves a lot like slow fade, slow cut, and fast cut (although not identical since these are all constant power). There's obviously a lot of room to adjust the sharpness by using other values for n.
So my final function is $f(x)=\cos(\frac{\pi}{4}((2x-1)^{2n+1}+1))$. As you can see from the graphs below, they're quite similar to slow fade, slow cut, and fast cut, except that the plateau in the middle is at about 0.7 (actually $\frac{1}{\sqrt{2}}$) rather than at 1 and the track which doesn't fall off rises to 1 at the outside edge. You might think that going down to 0.7 in the middle would have a big sound impact on the other track, but it really doesn't. Obviously, you could convert them into the exact functions by doing $\sqrt{2} \min(f(x),1/\sqrt{2})$. But I've tried them out in a software fader in Pure Data without doing that adjustment, and they sound pretty good to my ears. I was also happy because it meant that I could switch between the four different curves with only a single parameter so that keeps the logic simple.
Below: Curves for Constant Power (n=0), Constant Power Slow Fade (n=1), Constant Power Slow Cut (n=3), and Constant Power Fast Cut (n=10).
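Here is a short sketch (my own addition, not from the thread) that evaluates the family $f(x)=\cos\left(\frac{\pi}{4}((2x-1)^{2n+1}+1)\right)$ for the four sharpness settings above and checks the constant-power property numerically.

```python
# Constant-power crossfade family for n = 0, 1, 3, 10.
import numpy as np

def crossfade(x, n=0):
    """Gain for deck A at fader position x in [0, 1]; deck B uses crossfade(1 - x, n)."""
    return np.cos(np.pi / 4 * ((2 * x - 1) ** (2 * n + 1) + 1))

x = np.linspace(0.0, 1.0, 11)
for n in (0, 1, 3, 10):
    a, b = crossfade(x, n), crossfade(1 - x, n)
    print(f"n={n:2d}  constant-power check max|a^2+b^2-1| = "
          f"{np.max(np.abs(a**2 + b**2 - 1)):.2e}")
```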
-
http://mathoverflow.net/revisions/68441/list
Also: there is a classical result due to Charles Morrey, "Analyticity of the solutions of analytic non-linear elliptic systems of partial differential equations", that says that if $F(x,u,\nabla u,\nabla^2 u,\dots)$ is analytic in its arguments and elliptic then the solution of $F(x,u,\nabla u,\nabla^2 u,\dots)=0$ will be as well. (It actually goes one step further to deal with systems, but the notion of ellipticity is complicated to explain.) This result generalizes work done since the early 1900's; references can be found in Fritz John's (and two other authors' I can't recall) pde book.
http://mathoverflow.net/questions/76526?sort=oldest
## Serre’s open image theorem for products of elliptic curves over function fields via specialization
In Propriétés galoisiennes des points d'ordre fini des courbes elliptiques, Invent. Math. 15, 259--331 (1972), Serre proved the following (Theorem 6 ′′, p. 325):
Let $K$ be a number field and let $K^{cycl}$ be the cyclotomic extension of $K$ generated by all roots of unity. Let $E$ and $E'$ be two elliptic curves such that, over $\bar{K}$,
(i) $E$ and $E'$ have no complex multiplication;
(ii) The $l$-adic representations $(\rho_l)$, $(\rho'_l)$ attached to $E$ and $E'$ don't become isomorphic over any finite extension of $K$.
Then $K(E_{tors}) \cap K(E'_{tors})$ is finite over $K^{cycl}$.
My question is whether this holds for $E$ and $E'$ defined over a function field. If this hasn't already been considered somewhere with an argument specific to the function field case, then maybe a specialization argument might work? Could anyone please provide a reference where similar specialization arguments are used, or a standard reference for the basic theory of these specialization theorems?
-
If $j,j'$ are non-constant functions on the same algebraic curve, then it's not hard to show that they would have the same poles. Beyond that, I have nothing else to add, except that making trivial edits to keep bumping your question to the front page is not cool. – Felipe Voloch Sep 30 2011 at 17:21
I mean if they don't have the same poles the answer is yes. – Felipe Voloch Sep 30 2011 at 17:24
I think Serre's proof may go through for a finite extension of $\mathbb{Q}(T)$, but I'll have a look at the more geometric approach you suggest also. Thanks for the help - sorry about the edits, won't be doing that again! – Adam Harris Oct 2 2011 at 8:59
## 1 Answer
If $K$ is a function field over an algebraically closed field and one of your elliptic curves is constant (which does not necessarily violate your hypotheses unless the constant field is the algebraic closure of a finite field) then the answer is no. What kind of constant field are you interested in? You might want to add some non-isotriviality condition. The person to ask is probably Chris Hall, but I don't think he reads MO.
-
Thanks Felipe - I should have given more information: I specifically am thinking of a situation where I have two non-CM, non-isogenous curves $E$ and $E'$ with $j$-invariants $j(\tau)$ and $j(\tau')$ which are transcendental over $\mathbb{Q}$ but $j(\tau')$ is algebraic over $\mathbb{Q}(j(\tau))$. Also $E$ is defined over $\mathbb{Q}(j(\tau))$ and $E'$ over $\mathbb{Q}(j(\tau'))$. – Adam Harris Sep 28 2011 at 14:47
http://www.physicsforums.com/showthread.php?t=135222
## Some divisibility questions
Hi,
I am going through the book on number theory by Ivan Niven. Well, it's a tough book, and I am stuck on problems in the first topic, divisibility. Hope for some help.
1. prove that a|bc if and only if a/(b,c)|c where (b,c) is the lcm of b and c.
2. Prove that there are no positive integers a,b,n>1 such that
$$(a^n-b^n)|(a^n+b^n)$$.
In the first one I have no clue how to start the proof. Any hint will be appreciated.
In the second one I had the following proof:
If possible, let there be a positive integer k such that
$$\frac{a^n+b^n}{a^n-b^n} = k$$
Clearly k is not equal to 1, since then b=0, which is contradictory.
Applying componendo and dividendo to the above fraction, we write
(a/b)^n = (k+1)/(k-1)    (1)
now (k+1)/(k-1) can be written as (1 + 2/(k-1) )
It can be directly checked that k=2,3 do not satisfy (1)
For k>3, (k+1)/(k-1) lies between 1 and 2 and hence cannot be a perfect nth power. This contradiction gives the desired proof.
Well, are my lines of reasoning correct?
lies between 1 and 2 and hence cannot be a perfect nth power.
Sure it can. 121/100 is the perfect square of 11/10, for example.
OK, I didn't check that. Sorry, but I don't know how to attack the problem. Any hints?
Rule 1: never write things like x/2 in number theory unless you know beforehand that x is divisible by 2. Divisibility is not about quotients, it is about multiplication.
We may suppose that a and b are coprime, and that a^n - b^n ≠ 1 since n ≠ 1. Now think about things.....
Sorry Matt, I am not able to see what conclusions I will get from those assumptions. A little more help, please. Thanks.
EDIT: Never mind.
Quote by AlbertEinstein: Sorry Matt, I am not able to see what conclusions I will get from those assumptions. A little more help, please. Thanks.
I wasn't expecting you to 'see' conclusions just like that. You're supposed to play around with things. Do you see that you may assume a and b are coprime irrespective of why that may be useful? Do you see that we may assume that a^n-b^n does not equal 1? Given we may assume that, what can we do that contradicts these assumptions if a^n-b^n divides a^n+b^n? (Note, this is a very useful idea in general.)
Let's try a different tack. x divides y means that hcf(x,y)=x. Remember the proof of the euclidean algorithm too. hcf(x,y)=hcf(x,x+y)=hcf(x,x-y).
1. prove that a|bc if and only if a/(b,c)|c where (b,c) is the lcm of b and c.
This is problem 43 in chapter 1, and it actually wants you to prove that a|bc <=> a/(a, b)|c. Also, (a, b) denotes the gcd of a and b. The lcm is often written [a, b]. This problem can be solved by (for example) writing things out in terms of the prime factorizations of a, b, c.
First of all, sorry for the typo: here (b,c) represents the hcf of b and c.
I did the following: if a^n-b^n divides a^n+b^n then (a^n-b^n, a^n+b^n) = a^n-b^n. Now since (x,y)=(x,x+y)=(x,x-y) we have $$((a^n-b^n),(a^n+b^n))=((a^n-b^n),2 a^n)=((a^n-b^n),2 b^n)$$ This implies that $$a^n = m b^n$$ therefore $$\frac{a^n+b^n}{a^n-b^n} = \frac{m+1}{m-1}$$ Since the last fraction is not an integer for m ≠ 2, 3, we arrive at a contradiction. Hoping to be correct.
Never ever write fractions like that; only ever write integers. Only ever work in terms of things that divide something, never rational numbers. If p is some prime that divides a^n - b^n and a^n + b^n, then it divides which two other things you've written there? Now, remember we can assume that a and b are coprime...
Never ever write fractions like that; only ever write integers. Only ever work in terms of things that divide something, never rational numbers.
Matt, please explain the reason for this in some more detail; I am not able to understand it.
then it divides which two other things you've written there
Are they 2a^n and 2b^n?
assume that a and b are coprime...
But what if they are not coprime?
Matt, please explain the reason for this in some more detail; I am not able to understand it.
Ontological commitment? That's one philosophical reason to do this. Another is that it makes you prove things for the correct reason, rather than invoking the rational numbers for no benefit whatsoever. Divisibility is not about dividing things, it is about multiplying things: x divides y means there is a z with zx = y. It is good practice, and it makes you think about these things in the 'correct' way, i.e. that something is true for reasons that apply to the objects in question, not because something is not true about some object that a priori does not exist. You are working in the integers. The rational numbers should not be something you invoke when thinking about these things. This assures us, amongst other things, that you are working with the right notions, and that the result applies to other situations in which it might be nonsensical to talk about rationals.
But what if they are not coprime?
I asked you if you understood why we may assume they are coprime, and you didn't answer.... Prove that if we can find an example, we can find an example with a, b coprime. If we can, we may as well assume they are coprime and try to get a contradiction from that assumption.

Given that you can assume they are coprime, the above has shown that any prime divisor of a^n - b^n is a divisor of 2a^n and 2b^n. Thus, since a and b are coprime, a^n - b^n equals 1 or 2, both of which are impossible: (x+1)^n ≥ x^n + nx + 1, just throwing away terms in the expansion. So if a^n = b^n + 1 or b^n + 2, then n = 1, which we assumed it wasn't. Thus we have finished the proof, and nowhere did I invoke the rational numbers.
No, I didn't understand the assumption of a and b being coprime. But I understood that if they were not coprime, then letting a = x*a' and b = x*b', we can take x^n out as a common factor, and then a'^n and b'^n will be coprime. Is this the reasoning behind the assumption? Please correct me.

>> Prove that if we can find an example, we can find an example with a, b coprime

Well, how do I prove this?
>> since a and b are coprime, a^n - b^n equals 1 or 2

How?

>> (x+1)^n ≥ x^n + nx + 1, just throwing away terms in the expansion. So if a^n = b^n + 1 or b^n + 2, then n = 1, which we assumed it wasn't

I am not able to follow. Please, Matt, will you write the whole proof in rather more detail?
Quote by AlbertEinstein: No, I didn't understand the assumption of a and b being coprime. But I understood that if they were not coprime, then letting a = x*a' and b = x*b', we can take x^n out as a common factor, and then a'^n and b'^n will be coprime. Is this the reasoning behind the assumption? Please correct me. >> Prove that if we can find an example, we can find an example with a, b coprime. Well, how do I prove this?
You just did.
Quote by AlbertEinstein: >> since a and b are coprime, a^n - b^n equals 1 or 2. How?
Because you know a^n - b^n divides both 2a^n and 2b^n, and a and b are coprime. The definition of the highest common factor hcf(x,y) is that it divides x and y, and every other common divisor of x and y divides it. Thus a^n - b^n divides the highest common factor of 2a^n and 2b^n, which is 2.
>> (x+1)^n ≥ x^n + nx + 1, just throwing away terms in the expansion. So if a^n = b^n + 1 or b^n + 2, then n = 1, which we assumed it wasn't. I am not able to follow. Please, Matt, will you write the whole proof in rather more detail?
What don't you follow? That the next nth power after x^n, which is (x+1)^n, is at least 3 more than x^n if n ≥ 2? (The expansion is spelled out below.)
I used:
1. the binomial expansion
2. discarding some terms
That is all.
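To spell out that binomial step explicitly (an editorial note, not part of the thread): for an integer $x \ge 1$ and $n \ge 2$, $$(x+1)^n = x^n + n x^{n-1} + \dots + n x + 1 \ge x^n + nx + 1 \ge x^n + 3,$$ because every discarded middle term is non-negative and $nx \ge 2$. So consecutive nth powers differ by at least 3, which is exactly what rules out $a^n - b^n$ being 1 or 2.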
OK, I will reproduce what I understood; please look for any flaw.

Without loss of generality we can suppose that (a,b) = 1. Obviously $$a^n-b^n \neq 1$$ Let us assume that $$(a^n-b^n)|(a^n+b^n)$$ Then $$(a^n-b^n , a^n+b^n) = a^n-b^n$$ But since (x,y) = (x,x+y) = (x,x-y), we have $$((a^n-b^n),(a^n+b^n))=((a^n-b^n),2 a^n)=((a^n-b^n),2 b^n)=a^n-b^n$$ Therefore $a^n-b^n$ must divide both 2a^n and 2b^n, and hence it divides their hcf, which is 2 since (a,b) = 1. So $$a^n = 1+b^n \text{ or } 2+b^n$$ The first contradicts our assumption that $a^n-b^n \neq 1$, and the second contradicts the statement that the next nth power after x^n, namely (x+1)^n, is at least 3 more than x^n when n ≥ 2, which is clear from the binomial expansion. These contradictions establish the given theorem.

If this is correct, then please give some hints about the first question. Thanks, Jitendra
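As a purely editorial sanity check of statement 2 (nothing like this was posted in the thread), here is a small brute-force search in Python. The function name and the search bounds are arbitrary choices for illustration; the search only spot-checks small cases and proves nothing on its own.

```python
# Brute-force search for positive integers a > b >= 1 and n > 1 such that
# (a^n - b^n) divides (a^n + b^n).  The proof above says none exist, so the
# search should come back empty.

def counterexamples(max_base=30, max_exp=6):
    """Yield (a, b, n) with a > b >= 1, n > 1, and (a**n - b**n) | (a**n + b**n)."""
    for n in range(2, max_exp + 1):
        for a in range(2, max_base + 1):
            for b in range(1, a):
                diff = a**n - b**n          # strictly positive since a > b
                if (a**n + b**n) % diff == 0:
                    yield (a, b, n)

print(list(counterexamples()))  # expected output: []
```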
http://quant.stackexchange.com/questions/4162/how-would-you-hedge-this-structure
# How would you hedge this structure?
I have a contingent claim, and I want to find out the best structure to meet the contingent claim, how to price it, and how to hedge it. I am looking more for a qualitative answer.
Suppose I want to best replicate this claim $H$:
Given a stock $S_t$ and an expiry $\text{exp} = 1$ year, I need a payoff $H$ such that:
Conditional on $S_\text{exp} / S_0 \leq 0.8$, i.e. the stock price one year from now has fallen by at least $20\%$ relative to the current price, the payoff is $H = \max{(0, V_\text{exp} - 0.17)}$, where $V_\text{exp}$ is the realized volatility over the year. If the stock price does not meet this condition, the payoff is just zero.
I decided to use a stochastic volatility process. I found the parameters of the stochastic vol process by running Monte Carlo simulations of stock paths and searching for the parameters that best fit the market prices.
An important assumption is that I can only trade the stock and options on the stock; I cannot trade volatility directly. Clearly, the market is incomplete because there are two sources of uncertainty (the Brownian motion driving the stock and the one driving the stochastic volatility). I am having difficulty deciding on the best structure to fulfil this contingent claim while still being able to hedge it adequately using stocks and options.
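Purely as an illustration of the pricing step described above, here is a minimal Monte Carlo sketch in Python. It assumes Heston-type dynamics with a simple Euler discretization; every model parameter below is made up for the example, and only the 0.8 barrier on $S_\text{exp}/S_0$ and the 0.17 strike on realized volatility come from the question. The function name is mine, realized volatility is taken to be the annualized square root of the summed squared daily log returns (itself an assumption), and this is not the questioner's calibrated model; a real implementation would calibrate the parameters to vanilla option prices, as the question describes, and worry about discretization error.

```python
import numpy as np

def price_conditional_realized_vol_option(
    s0=100.0, r=0.02, T=1.0, steps=252, n_paths=50_000,
    kappa=2.0, theta=0.04, xi=0.5, rho=-0.7, v0=0.04,   # made-up Heston parameters
    barrier=0.8, strike=0.17, seed=0,
):
    """Monte Carlo estimate of E[exp(-rT) * 1{S_T/S_0 <= barrier} * max(RV - strike, 0)],
    where RV is the annualized realized volatility of the simulated daily log returns."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    s = np.full(n_paths, s0)
    v = np.full(n_paths, v0)
    sum_sq = np.zeros(n_paths)                 # accumulated squared log returns

    for _ in range(steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        v_pos = np.maximum(v, 0.0)             # full-truncation Euler for the variance
        log_ret = (r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1
        s *= np.exp(log_ret)
        v = v + kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
        sum_sq += log_ret**2

    realized_vol = np.sqrt(sum_sq / T)         # annualized realized volatility
    payoff = np.where(s / s0 <= barrier,
                      np.maximum(realized_vol - strike, 0.0),
                      0.0)
    return float(np.exp(-r * T) * payoff.mean())

print(f"illustrative MC value: {price_conditional_realized_vol_option():.4f}")
```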
## 1 Answer
So, just to be clear about the payoff: it's an option on realized volatility (not variance), conditional on the stock? Are you sure it's not a conditional variance swap or a knock-in variance swap?
(a) I hope you are doing this on some sort of index, because I'd hate to hedge it in a single stock. (b) On an index this would be very costly (the skew would make the probability pretty rich). (c) No model properly replicates the volatility dynamics, so you are going to have to be super-conservative about your hedging assumptions.
Sorry if I wasn't clear. It is basically a conditional option on realized volatility, and the condition is that the drawdown from the maximum stock price over the year, measured at the end of the year, is greater than 10%. Note that the condition is on the stock price while the payoff is on the realized vol, so it's tricky. Yes, it is on an index and not a single-name stock. Can you explain point (b)? And yes, I am making a huge assumption about the volatility dynamics – inquisitive Sep 21 '12 at 1:18
On point (b): a simple local-volatility expectation is approximately 0.5*ATM + 0.5*Barrier. I assume this is on the S&P (the only other index I'd trade it on would be the Stoxx) and it is sufficiently long-dated, say 1y. One-year sk10 is approximately 3.5, so you gain 3.5 vols on top of an ATM that is over 20. So, even without any vol of vol, your option is already ITM. Since you are also going to assume high vol of vol at the lower strike, the option will come in very, very rich. Just my intuition as an ex-index-exotics-trader. What sort of client is this for anyway, a hedge fund? – Strange Sep 21 '12 at 1:31
http://physics.stackexchange.com/questions/35883/kinematics-find-theta-with-coefficient-of-friction?answertab=oldest
# Kinematics - Find theta with Coefficient of Friction?
I recently found a problem that looked like this:
A box sits on a horizontal wooden ramp. The coefficient of static friction between the box and the ramp is `.30`. You grab one end of the ramp and lift it up, keeping the other end of the ramp on the ground. What is the angle between the ramp and the horizontal direction when the box begins to slide down the ramp?
The only thing this question gives me is the coefficient of static friction between the box and the ramp. I don't think that's enough to solve the problem. Is this true?
Related equations:
$F_f=μ_sN$
$N=mg$
## 1 Answer
Since my answer was deleted, I will say two things. First, rethink your normal force equation. Second, try using a rotated reference frame.
Thanks, I got a good look at your answer, so I think I can get it from there. Thanks! +1 – Nate Sep 7 '12 at 21:36
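For completeness, here is the standard inclined-plane derivation those hints point toward (an editorial addition, not part of the original exchange). In axes along and perpendicular to the ramp, the normal force is $N = mg\cos\theta$ rather than $mg$, and the box starts to slide when the component of gravity along the ramp reaches the maximum static friction: $$mg\sin\theta = \mu_s N = \mu_s mg\cos\theta \quad\Longrightarrow\quad \tan\theta = \mu_s \quad\Longrightarrow\quad \theta = \arctan(0.30) \approx 17^\circ.$$ So the coefficient of static friction alone is indeed enough to solve the problem.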
http://math.stackexchange.com/questions/123380/tensor-products/123461
# Tensor products
I'm trying to get my head round tensor products of vector spaces (I'm happy to see arguments in a more general setting, though).
I am concerned principally with two statements:
i) If $U,V,W$ are vector spaces then there is a one-to-one correspondence $\{ \mathrm{linear \ maps} \ V \otimes W \to U \} \longleftrightarrow \{ \mathrm{bilinear \ maps} \ V\times W \to U \}$.
ii) There is a natural (basis-independent) isomorphism $(U \oplus V) \otimes W \to (U \otimes W) \oplus (V \otimes W)$
For the first of these statements, I can see map from left to right; any linear map $\phi : V \otimes W \to U$ gives rise to a bilinear map $V \times W \to V \otimes W \to U$, where the first of these maps is the canonical map $p: (v,w) \mapsto v \otimes w$ and the second is $\phi$. I can't see, however, why any bilinear map $V \times W \to U$ necessarily factors into $\phi \circ p$ for some suitable linear map $\phi$.
I haven't got much experience with commutative diagrams. I think I've convinced myself that ii) is true with a commutative diagram, but I don't know if it's correct (and I also don't know how to LaTeX it easily...)
Any help would be appreciated. Thanks!
i) is usually part of the definition of a tensor product. What definition are you working with? – Qiaochu Yuan Mar 22 '12 at 19:00
@Qiaochu: it sounds like Matt has a construction of tensor products and is trying to prove it has the correct properties. – Mariano Suárez-Alvarez♦ Mar 22 '12 at 19:05
@QiaochuYuan The definition I have is that if $V,W$ are vector spaces (with bases $v_1, \ldots , v_m$ and $w_1, \ldots w_n$), then the tensor product of $V$ and $W$ is the vector space with basis $\{ v_i \otimes w_j \ | \ 1 \leq i \leq m, 1 \leq j \leq n \}$. My notes then go on to define the tensor product of $v \in V$, $w \in W$ to be $v \otimes w = \sum \lambda_i v_i \otimes \sum \mu_j w_j = \sum_{i,j} \lambda_i \mu_j (v_i \otimes w_j)$ – Matt Mar 22 '12 at 19:33
This definition does seem strange to me, since it doesn't come with any intuition (it just seems like formal sums of symbols, which I don't like) – Matt Mar 22 '12 at 19:35
## 1 Answer
In your definition (you should really look up the universal property of the tensor product!) you can argue as follows:
If $b$ is a bilinear map $V\times W \rightarrow U$, you can simply define a linear map $$v_i \otimes w_j \mapsto b(v_i,w_j)$$ since you know the $v_i\otimes w_j$ are a basis and a linear map can be defined by choosing arbitrary values for the elements of a basis.
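To make that concrete (an editorial sketch, still using the basis-dependent definition quoted in the comments): given a bilinear map $b : V \times W \to U$, define $\phi$ on the basis by $\phi(v_i \otimes w_j) = b(v_i, w_j)$ and extend linearly. Then for $v = \sum_i \lambda_i v_i$ and $w = \sum_j \mu_j w_j$, $$\phi(v \otimes w) = \phi\Bigl(\sum_{i,j} \lambda_i \mu_j \, v_i \otimes w_j\Bigr) = \sum_{i,j} \lambda_i \mu_j \, b(v_i, w_j) = b(v, w),$$ the last equality being bilinearity of $b$. So $b = \phi \circ p$, and $\phi$ is the unique such linear map because its values on the basis are forced; this is the direction of (i) the question was missing. For (ii), apply (i) to the bilinear map $(U \oplus V) \times W \to (U \otimes W) \oplus (V \otimes W)$ sending $\bigl((u,v), w\bigr) \mapsto (u \otimes w,\, v \otimes w)$ to get a linear map $(U \oplus V) \otimes W \to (U \otimes W) \oplus (V \otimes W)$; an inverse is built the same way from the bilinear maps $(u, w) \mapsto (u,0) \otimes w$ and $(v, w) \mapsto (0,v) \otimes w$, and none of these maps mentions a choice of basis, which is the sense in which the isomorphism is natural.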
http://math.stackexchange.com/questions/186891/in-how-many-ways-a-number-gt-5000-can-be-formed-using-given-digits-without-re/186906
# In how many ways can a number $\gt 5000$ be formed using given digits without repeating?
In how many ways can one form a number greater than $5000$, when allowed only to arrange digits taken from $2,3,4,5,8$ without repeating any digit?
I would think it would be $2\cdot4\cdot3\cdot2$.
Would this be correct?
It is extremely rare that using an abbreviation like «w/o» in a title is of any help to anyone except the person writing it! – Mariano Suárez-Alvarez♦ Aug 25 '12 at 22:35
it was edited, was not me – fosho Aug 25 '12 at 22:36
Well: I was referring to the person writing it! :-) – Mariano Suárez-Alvarez♦ Aug 25 '12 at 22:40
can we ignore a number, or must they all be used in our arrangement? – Deven Ware Aug 25 '12 at 22:46
@MarianoSuárez-Alvarez Thank you for your comment, I will be aware. – Kuba Helsztyński Aug 25 '12 at 22:48
## 2 Answers
We need to think about all the ways to make a number bigger than $5000$.
Any $5$-digit number we make will be bigger than $5000$, since none of our digits is $0$, so any arrangement of all $5$ digits works, and there are $5!$ ways to arrange our $5$ digits.
Then we consider $4$-digit numbers. For a $4$-digit number to be bigger than $5000$, its leading digit must be $\geq 5$, so we have two possibilities for the first digit ($5$ and $8$). For each choice of leading digit we then have to order $3$ objects from a choice of $4$, which means we have $\frac{4!}{(4-3)!} = 4!$ options for each leading digit.
So we have a total of $$5! + 4! + 4!$$
where the $5!$ is from all the $5$-digit numbers, and the two $4!$'s are from the $4$-digit numbers with leading digit $5$ and $8$ respectively.
EDIT: Throughout I've used the fact that the number of ways to order $m$ objects from a choice of $n$ objects is $$\frac{n!}{(n-m)!}$$ We can understand this easily enough: for the first object we have $n$ choices, for the second object we have $(n-1)$ choices, and this continues until for the last object we have $(n-m+1)$ choices. So we have $$n(n-1)\cdots(n-m+1)$$ total possibilities, which can be expressed more succinctly as
$$\frac{n!}{(n-m)!}$$
You can make it 5 digits or 4 digits. Any five-digit number satisfies the condition,
so there are 5*4*3*2*1 of those.
If you are trying to make a 4-digit number, the first digit can only be 5 or 8, and the remaining digits can be in any order, so there are 2*4*3*2 of those. The total is therefore 5*4*3*2 + 2*4*3*2 = 7*4*3*2 = 168.
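As an editorial cross-check of the count (not part of either answer), a brute-force enumeration in Python over arrangements of one to five of the digits confirms the total of 168.

```python
from itertools import permutations

digits = "23458"
count = sum(
    1
    for r in range(1, len(digits) + 1)      # use 1 up to all 5 of the digits
    for p in permutations(digits, r)        # arrangements without repetition
    if int("".join(p)) > 5000
)
print(count)  # prints 168
```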