http://physics.stackexchange.com/questions/17056/time-what-is-it/17061
# Time, what is it? [closed]
If you ask any person about time, she/he will give you some answer. I suspect that it is extremely difficult, (if not impossible) to define time. Is there a definition of what it is in physics? Is it an "axiom" that has to be taken as it is, without explanations? I also noticed that the tag "time" has no pop-ups with comments/definition/explanations.
There isn't a single answer I feel comfortable voting up – Diego Nov 16 '11 at 22:30
This question is attracting philosophers. :( – Pratik Deoghare Nov 17 '11 at 19:45
This doesn't seem to be going anywhere, doesn't fit exactly into any of the allowed slots in the FAQ, and is attracting more discussion than physics (and that's not hard as there is almost no physics here). – dmckee♦ Nov 18 '11 at 14:57
## closed as not constructive by dmckee♦Nov 18 '11 at 14:57
As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or specific expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, see the FAQ for guidance.
## 7 Answers
Your question is answered by Albert Einstein himself.
If you want to know what time is to a physicist then please read
I. KINEMATICAL PART
§ 1. Definition of Simultaneity
of Einstein's paper On the Electrodynamics of Moving Bodies.
It is surprisingly easy to understand (requires almost no math and no previous knowledge of anything).
No it is not surprisingly easy to understand. – Pacerier Jul 8 '12 at 1:33
Time is like colour in that it's difficult to explain to someone who's time-blind: a clock and a mind are to time what an eye and a brain are to colour. To someone who understands time partially, to a sufficient degree, I can say that time is that quality which orders identical events at the same location as occurring before, simultaneously with, or after one another.
A color-blind person could understand about wavelengths of light, corresponding to energy levels of photons, and how structures in an eye could be receptive to particular wavelengths, and how the visual cortex could summarize the rates of firing of three different types of such structure. This wouldn't cause the color-blind person to experience color, but they could understand the physical process of how someone would. How would you explain to a "time-blind" person the physical process of how someone would experience time? – JGWeissman Nov 17 '11 at 18:22
@JGWeissman The Amondawa people of Brazil don't have a concept of time as developed as in the West, so I would explain to them, using what's available from their culture, the idea of at the same location: 1. Two things happening together, 2. One thing happening after another, 3. One thing happening before another. Having got this, I would show how things happening can be ordered further by assigning a symbol to each happening, which is called allocating a point in time. How successful I am depends upon my intellect and theirs, and how sophisticated their culture is. – Physiks lover Nov 17 '11 at 22:23
It's the universe's way of keeping everything from happening at once. ;-)
I was serious. Your answer is funny and useless (recursive, it uses the concept of simultaneity i.e. at the same time...). – user6090 Nov 16 '11 at 14:38
I'm not sure if Daedelus meant it as such, but in a way, this is a serious answer. It gets at the nature of time in physical theories as a dimension in which different events can be given different coordinates. – David Zaslavsky♦ Nov 16 '11 at 18:43
OK, in this sense it is clearer. Thank you for the clarification. – user6090 Nov 16 '11 at 20:25
Hey, Physiks lover said the same thing with more words. – Daedelus Nov 17 '11 at 0:36
"Nature" is strange, whereas 3 dimensions are used to avoid everything being at the same place, one dimension is sufficient to avoid everything happening at once ! :=) – Georg Nov 17 '11 at 10:03
Unfortunately, any such definition will depend on the particular situation in which the concept is being used. Typically physicists use Einstein's definition that "time is what is measured by a clock", but I think this is a bit too generic; after all, it is possible to use the physical dimensions of the clock to measure all sorts of things. Using this definition you then have to define what you mean by a clock, and it just gets more complicated from there.
My own personal definition of "time" would run something like:
Time is that property whereby one state of a well-defined system is transformed into another.
Again, the definitions of the words used becomes a complicating factor. I use "well-defined system" in a sense similar to how mathematicians use "well-behaved function." I mean a system that is logically coherent and consistent and follows the fundamental laws of nature.
## edit
I should probably mention that the word "state" is also very misleading as the phrase "equation of state" for a system typically refers to a property that does not change with time. For example, the equation of state for an ideal gas is given by the ideal gas law $PV = nRT$. I would then say that time is the property of a system for which an equation of state can be written whereby the variables in that equation are transformed to different values.
I am not a physicist, so take this with a massive grain of salt. I recently watched "What is Time?" on the new series, The Fabric of the Cosmos. I've also been reading a lot about this from other sources. I will try to summarize what is explained in that show in a way that makes sense.
There are some other aspects to the "what is time" question that I won't cover here much. They involve aspects of how time is measured in different parts of the universe based on the position of observers in relation to each other. The premise is that there isn't a single "now" moment that is universal. If you and an alien are stationary, you can be considered to be in the same "now". However, if you or the alien start moving around then you have to take into account how that speed is measured in relation to the other observer, and the "now" moment can either be in the future or the past of the other observer. It all ties into the relative nature of space and time, and is at the core of special relativity.
So, to the question of what is time (and putting all of the special relativity stuff on a shelf).
In general, what is presented is that time is what we perceive as the increase in entropy in the universe.
Entropy can be described as energy starting in a highly ordered state and moving into an increasingly disordered state.
A physical example, as an illustration, might be an ice cube with a drop of food coloring frozen in the center. As long as the ice cube is cold, all of the molecules of water remain frozen and the ice's crystalline structure is maintained. The molecules of the food coloring also remain locked in their positions. The state of the entire ice cube can be considered a highly ordered state.
When the cube starts to warm up, the molecules in the ice begin moving around. The molecules in the food coloring begin moving too. The density of the food coloring decreases as its molecules spread around and mix with the water. When the energy level of the water and food coloring meets the level of energy outside, in one manner of thinking they have now entered their high-entropy state. They are thoroughly mixed, and the densities of H2O and dye molecules are evenly distributed. Since there are no more unequal densities, there isn't any less-ordered state for them to be in.
Similar examples can be given for gases, or rocks, or plasma, or whatever. Pick any medium and the same principle applies. The biggest difference is simply the rate at which the entropy of a given system increases.
When you look at the equations that govern physics, they are solvable in either "direction". If you drop a glass and it breaks, if you reversed the directions and momentum of all the particles in the glass it should reassemble. The equations work in either direction. This means that there isn't anything really saying that energy has to move from a highly ordered to lesser ordered state. Why this happens, and why we don't spontaneously see things happen in reverse, is still a question that needs solving.
So back to time.
Current thinking is that the universe started in a highly ordered state. This means that at the exact first Planck moment of the Big Bang, all of the energy in the universe existed all together in an extremely highly ordered state. All of the forces were unified, as was the energy that became matter, antimatter, dark energy and dark matter, etc. Everything we are now was just one thing. (I have no idea what the one thing was. Like I said, SALT.)
So the idea is that, in a conceptually similar way the ice cube melted, during the last 13.7 billion years all of the energy from that initial state has been moving from a highly ordered state to a disordered state.
In the process of becoming disordered, all of the initial energy has formed the forces that guided the development of everything from subatomic particles to everything we see and are today.
As living creatures, we perceive the passage of energy through all of the physical systems around us as time. I am not entirely comfortable with this definition, as I think a barren rock in the middle of space between two galaxies would still undergo an increase in entropy regardless of whether there was a living thing to witness it or not. Again, I'm not a physicist and am simply trying to understand this stuff like many others.
So time, under this model of entropy and disorder, is not a thing in itself. It's not a force, it's not something that can be pulled apart and studied with a particle accelerator. It's a term we use to describe the passage of energy through physical systems and how those systems change as a result.
So does time end? If we can say that time had a beginning as a perfectly ordered thing of energy, does time have an end? In a way, it might.
If we go back to the ice cube analogy, eventually all of the molecules in the water and dye reach an equilibrium. If left alone for long enough without external influence, the movement of molecules will cease. (Notice that in this analogy, I'm not referencing Brownian motion or movement from quantum particles emerging from the vacuum. It's just an analogy.)
When cosmologists take into account the discovery of dark energy, it paints a very sad picture for the fate of the universe. Dark energy is a loose term that was defined in the last decade or so to describe observations that all galaxies are moving further and further apart. The further away a galaxy is, the faster it's moving.
Scientists don't know yet what dark energy actually is, but we can see its effects in much the same way as we can see the effects of dark matter. What dark energy appears to be is some universal force that is causing everything in the universe to spread out. It was originally predicted, in a fashion, by Albert Einstein.
In his calculations, Einstein found that the universe should either be expanding or contracting. In his day, astronomers had not yet discovered that galaxies were moving away from each other. It was commonly believed that the universe was static and eternal. As a way to correct his calculations, he added an additional mathematical constant that balanced out his equations to account for what was believed to be that static universe. It was only a few years later that Edwin Hubble made the first red-shift observations that indicated the universe was, indeed, expanding.
Einstein was never very happy with his Cosmological Constant, and said,
"It is to be emphasized, however, that a positive curvature of space is given by our results, even if the supplementary term [cosmological constant] is not introduced. That term is necessary only for the purpose of making possible a quasi-static distribution of matter, as required by the fact of the small velocities of the stars."
Observations of the ever increasing rate of expansion of the universe were only made in the last 15 years or so. Its source, dubbed "dark energy", hasn't been explained yet.
When you take into consideration what dark energy means for the universe and time, here's the scenario. In something like one hundred billion years, all of the galaxies of the universe will be so far away from each other that they will be moving faster than the light they emit can reach each other. If this sounds like it conflicts with the proverbial, "nothing can travel faster than light", remember that we're talking about the expansion of space itself. Space is the fabric in which all energy and matter exist, and it can and has expanded faster than the speed of light in the past. Physicists call it inflation and it helps to explain why the universe looks the way it does now. I won't go into that too much though.
What it means is that to any future intelligent creatures in the far future, when they look out into their skies they won't see the abundance of the cosmos we see now. All they will see is an infinite black sky. The left over radiation from the Big Bang, the Cosmic Background Radiation, will have spread out so much as to be undetectable. They will not be able to observationally deduce the history of the universe as we have. To them, their universe will be just their single galaxy. They may attempt to come up with an explanation of the universe that fits the evidence they can see, but will be hopelessly wrong.
Going even further into the future, trillions of years from now (sorry, I don't have an accurate estimate for this. It's a long, long, long time though), dark energy will become the pervading force in the universe. It will become stronger than gravity and will force any surviving collections of galaxies, star systems, and planets to fly apart. It will become stronger than the strong and weak nuclear forces and will pry apart atoms at the subatomic level.
Black holes will dominate the late universe for a long time. Eventually, even these will evaporate away their energy into space and cease to be. Look up "Hawking radiation" if you want to learn more about that.
What will be left is an immensely huge void of space filled very sparsely with subatomic particles. Particles will be spread so thinly that they will almost never interact with each other. Even if they happened to be close enough, the nuclear forces would not be able to overcome dark energy to interact with each other. Essentially, the universe will be a cold and empty place.
In many ways, when all of the particles themselves evaporate away into the quantum vacuum, time can be considered to have ended. When the energy of the universe has finally smoothed out into such a high-entropy state that nothing can interact, time as we perceive it is done.
On a positive note...
I love to look up at the stars and know that I am lucky to be alive at a time in the universe where everything is still young. Interesting things are still happening and we are able to discover really interesting, important, and beautiful things about the place we exist in.
There are so many things left to be discovered that current ideas about the ultimate fate of the universe could be drastically wrong. It's very hard to say, because even the folks who have dedicated their lives to studying the science involved in these questions are still observing, experimenting, and trying to come up with explanations. It's a great time to be a scientist.
There's a lot of inane gibbering in this answer; couldn't you cut it down a bit? – Physiks lover Nov 17 '11 at 15:19
Geuis, your answer makes me happy like a small boy with a new bicycle. Maybe I will add some comments later because it is a really interesting discussion. – user6090 Nov 17 '11 at 17:32
@Physikslover I don't appreciate your tone. There are many more laymen who are interested in physics than people with phd's in physics in the world. Understanding high level concepts is not something that comes easily to many. The problem with most "science for dummies" books and documentaries is that they wrap it up in silly analogies and dumb down the material so much that the core concepts become incomprehensible. Throwing a bunch of equations in front of someone who doesn't know the math is just as bad. All I have done is attempt to distill it down so that people can understand accurately. – Geuis Nov 17 '11 at 19:52
@user6090 Glad you like it. – Geuis Nov 17 '11 at 19:52
@Geuis: Could it be, instead, that time is the result of complexity? This can have some weak analogy with the concept of disorder. Talking about the increase of entropy, how is it that sometimes the disorder decreases (life, organized beings, creation of stars by gravity...)? I can't see time going in the reverse direction in such cases. I mean, if life is acting against the flow of time, creating highly organized structures, my perception of time remains the same. – user6090 Nov 17 '11 at 21:31
Neuroscience tells us our perceptions are stored in short-term memory and, after much processing, are backdated in time to give the perception of "now" at a later time. This can take many, many milliseconds, even close to a second. Or imagine watching a family movie filmed years or decades ago and becoming so engrossed in it that it seems to be happening all over again in the present. When is now? The neurosurgeon Penfield stimulated parts of the brains of some patients and they relived events from years or decades before as if they were happening in the present. Now is not when you think it is.
In near death experiences, a person reviews memories of their life flashing by. How do you know your now isn't actually happening much later and backdated? Let's say decades in the future, you upload your mind to a computer and a replay of your life memories is made on some playback machine to be observed by some computer mind. It would seem backdated to 2011, would it not, even if it's much later.
This process can go on over and over until the end of time. Maybe "now" is a playback at the end of time at the end of the universe and we have been fooled about the actual date. Whatever it is that observes the playback at the end of the world sustains us over and over again by the act of observation. Being, with no becoming.
-
.... too difficult.... – user6090 Nov 17 '11 at 21:19
Each paradigm shift leads to a change in the conception of time. Newton introduced the notion of absolute time. That was hardly a mainstream view before him. Special relativity overturned the nature of time, and so did general relativity after that. When quantum gravity is cracked, the nature of time will need to be revised again appropriately. Quantum gravity may or may not be the last word in fundamental physics. If not, we might expect yet another overturning of what we think time is.
At any rate, defining what time is prematurely runs the risk of stifling future paradigm shifts.
http://quant.stackexchange.com/questions/tagged/risk-management+asset-allocation
Tagged Questions
2 answers · 183 views
How many data points are required to perform a fitting of GPD?
A friend of mine told me that their firm is using Extreme Value Theory (EVT) to compute value of the Expected Shortfall 99% of a portfolio for their asset allocation process. To do so, they try to fit ...
1 answer · 2k views
Risk Parity portfolio construction
If I would like to construct a fully invested long only portfolio with two asset classes (Bonds $B$ and Stocks $S$) based on the concept of 'risk parity' the weights $W$ of my portfolio would be the ...
2 answers · 118 views
How do you handle short-term asset allocation with Hedge-Funds?
Assuming I want to run an optimization over a short period, say 2 years, I would decide to take daily values in order to compute the efficient frontier of a portfolio. That works fine as long as I ...
http://en.wikipedia.org/wiki/Negative_binomial_distribution
# Negative binomial distribution
Different texts adopt slightly different definitions for the negative binomial distribution. They can be distinguished by whether the support starts at k = 0 or at k = r, and whether p denotes the probability of a success or of a failure.

[Figure: plots of the probability mass function. The orange line represents the mean, which is equal to 10 in each of these plots; the green line shows the standard deviation.]

| Quantity | Value |
|---|---|
| Notation | $\mathrm{NB}(r,\,p)$ |
| Parameters | $r > 0$ — number of failures until the experiment is stopped (integer, but the definition can also be extended to reals); $p \in (0,1)$ — success probability in each experiment (real) |
| Support | $k \in \{0, 1, 2, 3, \dots\}$ — number of successes |
| pmf | ${k+r-1 \choose k}\cdot (1-p)^r p^k$, involving a binomial coefficient |
| CDF | $1-I_p(k+1,\,r)$, the regularized incomplete beta function |
| Mean | $\frac{pr}{1-p}$ |
| Mode | $\begin{cases}\big\lfloor\frac{p(r-1)}{1-p}\big\rfloor & \text{if}\ r>1 \\ 0 & \text{if}\ r\leq 1\end{cases}$ |
| Variance | $\frac{pr}{(1-p)^2}$ |
| Skewness | $\frac{1+p}{\sqrt{pr}}$ |
| Excess kurtosis | $\frac{6}{r} + \frac{(1-p)^2}{pr}$ |
| MGF | $\biggl(\frac{1-p}{1 - p e^t}\biggr)^{\!r} \text{ for } t<-\log p$ |
| CF | $\biggl(\frac{1-p}{1 - p e^{i\,t}}\biggr)^{\!r} \text{ with } t\in\mathbb{R}$ |
| PGF | $\biggl(\frac{1-p}{1 - pz}\biggr)^{\!r} \text{ for } \lvert z\rvert<\tfrac{1}{p}$ |
In probability theory and statistics, the negative binomial distribution is a discrete probability distribution of the number of successes in a sequence of Bernoulli trials before a specified (non-random) number of failures (denoted r) occurs. For example, if we define a "1" as failure, and all non-"1"s as successes, and we throw a die repeatedly until the third time “1” appears (r = three failures), then the probability distribution of the number of non-“1”s that had appeared will be negative binomial.
The Pascal distribution (after Blaise Pascal) and Polya distribution (for George Pólya) are special cases of the negative binomial. There is a convention among engineers, climatologists, and others to reserve “negative binomial” in a strict sense or “Pascal” for the case of an integer-valued stopping-time parameter r, and use “Polya” for the real-valued case. The Polya distribution more accurately models occurrences of “contagious” discrete events, like tornado outbreaks, than the Poisson distribution by allowing the mean and variance to be different, unlike the Poisson. “Contagious” events have positively correlated occurrences causing a larger variance than if the occurrences were independent, due to a positive covariance term.
## Definition
Suppose there is a sequence of independent Bernoulli trials, each trial having two potential outcomes called “success” and “failure”. In each trial the probability of success is p and of failure is (1 − p). We are observing this sequence until a predefined number r of failures has occurred. Then the random number of successes we have seen, X, will have the negative binomial (or Pascal) distribution:
$X\ \sim\ \text{NB}(r; p)$
When applied to real-world problems, outcomes of success and failure may or may not be outcomes we ordinarily view as good and bad, respectively. Suppose we used the negative binomial distribution to model the number of days a certain machine works before it breaks down. In this case "success" would be the result on a day when the machine worked properly, whereas a breakdown would be a "failure". If we used the negative binomial distribution to model the number of goal attempts a sportsman makes before scoring a goal, though, then each unsuccessful attempt would be a "success", and scoring a goal would be "failure". If we are tossing a coin, then the negative binomial distribution can give the number of heads ("success") we are likely to encounter before we encounter a certain number of tails ("failure"). In the probability mass function below, p is the probability of success, and (1 − p) is the probability of failure, consistent with the definition above.
The probability mass function of the negative binomial distribution is
$f(k; r, p) \equiv \Pr(X = k) = {k+r-1 \choose k} (1-p)^rp^k \quad\text{for }k = 0, 1, 2, \dots$
Here the quantity in parentheses is the binomial coefficient, and is equal to
${k+r-1 \choose k} = \frac{(k+r-1)!}{k!\,(r-1)!} = \frac{(k+r-1)(k+r-2)\cdots(r)}{k!}.$
This quantity can alternatively be written in the following manner, explaining the name “negative binomial”:
$\frac{(k+r-1)\cdots(r)}{k!} = (-1)^k \frac{(-r)(-r-1)(-r-2)\cdots(-r-k+1)}{k!} = (-1)^k{-r \choose k}. \qquad (*)$
To understand the above definition of the probability mass function, note that the probability for every specific sequence of k successes and r failures is (1 − p)^r p^k, because the outcomes of the k + r trials are supposed to happen independently. Since the rth failure comes last, it remains to choose the k trials with successes out of the remaining k + r − 1 trials. The above binomial coefficient, due to its combinatorial interpretation, gives precisely the number of all these sequences of length k + r − 1.
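As a quick illustrative check (not part of the original article; the function name and the die-example values are mine), the pmf can be evaluated directly from this formula:

```python
from math import comb

def nb_pmf(k, r, p):
    # P(X = k): k successes (each with probability p) before the r-th failure.
    return comb(k + r - 1, k) * (1 - p) ** r * p ** k

# Die example from earlier: a "1" is a failure, so the success probability is 5/6.
# P(no successes before the 3rd failure) is just (1/6)^3 = three straight failures.
print(nb_pmf(0, 3, 5 / 6))
```

Summing `nb_pmf(k, r, p)` over k recovers 1, and the weighted sum recovers the mean pr/(1 − p), matching the infobox quantities.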
### Extension to real-valued r
It is possible to extend the definition of the negative binomial distribution to the case of a positive real parameter r. Although it is impossible to visualize a non-integer number of “failures”, we can still formally define the distribution through its probability mass function.
As before, we say that X has a negative binomial (or Pólya) distribution if it has a probability mass function:
$f(k; r, p) \equiv \Pr(X = k) = {k+r-1 \choose k} (1-p)^rp^k \quad\text{for }k = 0, 1, 2, \dots$
Here r is a real, positive number. The binomial coefficient is then defined by the multiplicative formula and can also be rewritten using the gamma function:
${k+r-1 \choose k} = \frac{(k+r-1)(k+r-2)\cdots(r)}{k!} = \frac{\Gamma(k+r)}{k!\,\Gamma(r)}.$
Note that by the binomial series and (*) above, for every 0 ≤ p < 1,
$(1-p)^{-r}=\sum_{k=0}^\infty{-r \choose k}(-p)^k =\sum_{k=0}^\infty{k+r-1\choose k}p^k,$
hence the terms of the probability mass function indeed add up to one.
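This normalization can be sanity-checked numerically for a non-integer r (an illustrative sketch; the values of r and p are arbitrary, and the coefficient is computed through log-gamma to avoid overflow for large k):

```python
from math import exp, lgamma

def nb_coef(k, r):
    # Generalized binomial coefficient C(k+r-1, k) for real r > 0,
    # via the gamma function: Gamma(k+r) / (k! * Gamma(r)).
    return exp(lgamma(k + r) - lgamma(k + 1) - lgamma(r))

r, p = 2.5, 0.4
lhs = (1 - p) ** -r                                    # (1-p)^(-r)
rhs = sum(nb_coef(k, r) * p ** k for k in range(300))  # truncated series
print(lhs, rhs)
```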
### Alternative formulations
Some textbooks may define the negative binomial distribution slightly differently than it is done here. The most common variations are:
• The definition where X is the total number of trials needed to get r failures, not simply the number of successes. Since the total number of trials is equal to the number of successes plus the number of failures, this definition differs from ours by adding constant r. In order to convert formulas written with this definition into the one used in the article, replace everywhere “k” with “k - r”, and also subtract r from the mean, the median, and the mode. In order to convert formulas of this article into this alternative definition, replace “k” with “k + r” and add r to the mean, the median and the mode. Effectively, this implies using the probability mass function
$f(k; r, p) \equiv \Pr(X = k) = {k-1 \choose k-r} (1-p)^r p^{k-r} \quad\text{for }k = r, r+1, r+2, \dots,$
which perhaps resembles the binomial distribution more closely than the version above. Note that the arguments of the binomial coefficient are decremented due to order: the last "failure" must occur last, and so the other events have one fewer position available when counting possible orderings. Note that this definition of the negative binomial distribution does not easily generalize to a positive, real parameter r.
• The definition where p denotes the probability of a failure, not of a success. In order to convert formulas between this definition and the one used in the article, replace “p” with “1 − p” everywhere.
• The definition where the support X is defined as the number of failures, rather than the number of successes. This definition — where X counts failures but p is the probability of success — has exactly the same formulas as in the previous case where X counts successes but p is the probability of failure. However, the corresponding text will have the words “failure” and “success” swapped compared with the previous case.
• The two alterations above may be applied simultaneously, i.e. X counts total trials, and p is the probability of failure.
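To make the first conversion concrete, here is a small check (function names and parameter values are my own, purely illustrative) that the "number of successes" and "total number of trials" conventions agree after shifting the argument by r:

```python
from math import comb

r, p = 4, 0.3  # illustrative values; p is the success probability, as in the article

def pmf_successes(k):
    # Article's convention: X = number of successes, support k = 0, 1, 2, ...
    return comb(k + r - 1, k) * (1 - p) ** r * p ** k

def pmf_total_trials(k):
    # Alternative convention: X = total number of trials, support k = r, r+1, ...
    return comb(k - 1, k - r) * (1 - p) ** r * p ** (k - r)

# The two pmfs agree after shifting by the constant r:
print(all(abs(pmf_successes(k) - pmf_total_trials(k + r)) < 1e-15 for k in range(20)))
```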
## Occurrence
### Waiting time in a Bernoulli process
For the special case where r is an integer, the negative binomial distribution is known as the Pascal distribution. It is the probability distribution of a certain number of failures and successes in a series of independent and identically distributed Bernoulli trials. For k + r Bernoulli trials with success probability p, the negative binomial gives the probability of k successes and r failures, with a failure on the last trial. In other words, the negative binomial distribution is the probability distribution of the number of successes before the rth failure in a Bernoulli process, with probability p of successes on each trial. A Bernoulli process is a discrete time process, and so the number of trials, failures, and successes are integers.
Consider the following example. Suppose we repeatedly throw a die, and consider a “1” to be a “failure”. The probability of failure on each trial is 1/6. The number of successes before the third failure belongs to the infinite set { 0, 1, 2, 3, ... }. That number of successes is a negative-binomially distributed random variable.
When r = 1 we get the probability distribution of the number of successes before the first failure (i.e. the probability of the first failure occurring on the (k + 1)st trial), which is a geometric distribution:
$f(k; r, p) = (1-p) \cdot p^k \!$
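A brief illustrative check that r = 1 collapses the pmf to this geometric form (the value of p is arbitrary):

```python
from math import comb

p = 0.3  # per-trial success probability, chosen arbitrarily
nb1 = [comb(k + 1 - 1, k) * (1 - p) * p ** k for k in range(10)]  # NB(1, p) pmf
geom = [(1 - p) * p ** k for k in range(10)]                      # geometric pmf
print(all(abs(a - b) < 1e-15 for a, b in zip(nb1, geom)))
```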
### Overdispersed Poisson
The negative binomial distribution, especially in its alternative parameterization described above, can be used as an alternative to the Poisson distribution. It is especially useful for discrete data over an unbounded positive range whose sample variance exceeds the sample mean. In such cases, the observations are overdispersed with respect to a Poisson distribution, for which the mean is equal to the variance. Hence a Poisson distribution is not an appropriate model. Since the negative binomial distribution has one more parameter than the Poisson, the second parameter can be used to adjust the variance independently of the mean. See Cumulants of some discrete probability distributions. An application of this is to annual counts of tropical cyclones in the North Atlantic or to monthly to 6-monthly counts of wintertime extratropical cyclones over Europe, for which the variance is greater than the mean.[1][2][3] In the case of modest overdispersion, this may produce substantially similar results to an overdispersed Poisson distribution.[4][5]
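As a sketch of how this is used in practice (the count data below are made up for illustration, not taken from the cited studies), r and p can be estimated by matching the sample mean and variance, which is valid precisely when the data are overdispersed:

```python
# Illustrative overdispersed count data (invented for this example).
counts = [2, 5, 1, 0, 7, 3, 4, 9, 2, 6, 1, 8, 3, 0, 5]
n = len(counts)
mean = sum(counts) / n
var = sum((c - mean) ** 2 for c in counts) / (n - 1)  # sample variance

# Method of moments: solve mean = pr/(1-p) and variance = pr/(1-p)^2.
p_hat = 1 - mean / var                # requires var > mean (overdispersion)
r_hat = mean * (1 - p_hat) / p_hat
print(mean, var, p_hat, r_hat)
```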
## Related distributions
• The geometric distribution (on { 0, 1, 2, 3, ... }) is a special case of the negative binomial distribution, with
$\text{Geom}(p) = \text{NB}(1,\, 1-p).\,$
• The negative binomial distribution is a special case of the discrete phase-type distribution.
• The negative binomial distribution is a special case of the stuttering Poisson distribution.[6]
### Poisson distribution
Consider a sequence of negative binomial distributions where the stopping parameter r goes to infinity, whereas the probability of success in each trial, p, goes to zero in such a way as to keep the mean of the distribution constant. Denoting this mean λ, the parameter p will have to be
$\lambda = r\,\frac{p}{1-p} \quad \Rightarrow \quad p = \frac{\lambda}{r+\lambda}.$
Under this parametrization the probability mass function will be
$f(k; r, p) = \frac{\Gamma(k+r)}{k!\cdot\Gamma(r)}(1-p)^rp^k = \frac{\lambda^k}{k!} \cdot \frac{\Gamma(r+k)}{\Gamma(r)\;(r+\lambda)^k} \cdot \frac{1}{\left(1+\frac{\lambda}{r}\right)^{r}}$
Now if we consider the limit as r → ∞, the second factor will converge to one, and the third to the exponent function:
$\lim_{r\to\infty} f(k; r, p) = \frac{\lambda^k}{k!} \cdot 1 \cdot \frac{1}{e^\lambda},$
which is the mass function of a Poisson-distributed random variable with expected value λ.
In other words, the alternatively parameterized negative binomial distribution converges to the Poisson distribution and r controls the deviation from the Poisson. This makes the negative binomial distribution suitable as a robust alternative to the Poisson, which approaches the Poisson for large r, but which has larger variance than the Poisson for small r.
$\text{Poisson}(\lambda) = \lim_{r \to \infty} \text{NB}\Big(r,\ \frac{\lambda}{\lambda+r}\Big).$
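This limit can be observed numerically (a Python sketch, not part of the article): with the mean held at λ, the maximum pointwise gap between the two mass functions shrinks as r grows.

```python
from math import comb, exp, factorial

def nbinom_pmf(k, r, p):
    return comb(k + r - 1, k) * (1 - p) ** r * p ** k

def poisson_pmf(k, lam):
    return exp(-lam) * lam ** k / factorial(k)

lam = 4.0
for r in (10, 100, 1000):
    p = lam / (r + lam)                       # keeps the mean fixed at lam
    err = max(abs(nbinom_pmf(k, r, p) - poisson_pmf(k, lam)) for k in range(40))
    print(r, err)                             # the gap shrinks roughly like 1/r
```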
### Gamma–Poisson mixture
The negative binomial distribution also arises as a continuous mixture of Poisson distributions (i.e. a compound probability distribution) where the mixing distribution of the Poisson rate is a gamma distribution. That is, we can view the negative binomial as a Poisson(λ) distribution, where λ is itself a random variable, distributed according to Gamma(r, p/(1 − p)).
Formally, this means that the mass function of the negative binomial distribution can be written as
$\begin{align} f(k; r, p) & = \int_0^\infty f_{\text{Poisson}(\lambda)}(k) \cdot f_{\text{Gamma}\left(r,\, \frac{p}{1-p}\right)}(\lambda) \; \mathrm{d}\lambda \\[8pt] & = \int_0^\infty \frac{\lambda^k}{k!} e^{-\lambda} \cdot \lambda^{r-1}\frac{e^{-\lambda (1-p)/p}}{\big(\frac{p}{1-p}\big)^r\,\Gamma(r)} \; \mathrm{d}\lambda \\[8pt] & = \frac{(1-p)^r p^{-r}}{k!\,\Gamma(r)} \int_0^\infty \lambda^{r+k-1} e^{-\lambda/p} \;\mathrm{d}\lambda \\[8pt] & = \frac{(1-p)^r p^{-r}}{k!\,\Gamma(r)} \ p^{r+k} \, \Gamma(r+k) \\[8pt] & = \frac{\Gamma(r+k)}{k!\;\Gamma(r)} \; p^k (1-p)^r. \end{align}$
Because of this, the negative binomial distribution is also known as the gamma–Poisson (mixture) distribution.
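The mixture can also be checked by simulation (a NumPy sketch, not part of the article). Note that NumPy's `negative_binomial(n, p)` counts failures before n successes, so its success probability corresponds to 1 − p in the convention used here:

```python
import numpy as np

rng = np.random.default_rng(0)
r, p, n = 3, 0.4, 200_000

# lambda ~ Gamma(shape=r, scale=p/(1-p)), then k ~ Poisson(lambda):
lam = rng.gamma(shape=r, scale=p / (1 - p), size=n)
mixed = rng.poisson(lam)

# Draw NB(r, p) in this article's convention via NumPy's flipped parameterization:
direct = rng.negative_binomial(r, 1 - p, size=n)

print(mixed.mean(), direct.mean())   # both ≈ r*p/(1-p) = 2.0
print(mixed.var(), direct.var())     # both ≈ r*p/(1-p)**2 ≈ 3.33
```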
### Sum of geometric distributions
If Yr is a random variable following the negative binomial distribution with parameters r and p, and support {0, 1, 2, ...}, then Yr is a sum of r independent variables following the geometric distribution (on {0, 1, 2, ...}) with parameter p. As a result of the central limit theorem, Yr (properly scaled and shifted) is therefore approximately normal for sufficiently large r.
Furthermore, if Bs+r is a random variable following the binomial distribution with parameters s + r and 1 − p, then
$\begin{align} \Pr(Y_r \leq s) & {} = 1 - I_p(s+1, r) \\ & {} = 1 - I_{p}((s+r)-(r-1), (r-1)+1) \\ & {} = 1 - \Pr(B_{s+r} \leq r-1) \\ & {} = \Pr(B_{s+r} \geq r) \\ & {} = \Pr(\text{after } s+r \text{ trials, there are at least } r \text{ successes}). \end{align}$
In this sense, the negative binomial distribution is the "inverse" of the binomial distribution.
The sum of independent negative-binomially distributed random variables r1 and r2 with the same value for parameter p is negative-binomially distributed with the same p but with "r-value" r1 + r2.
The negative binomial distribution is infinitely divisible, i.e., if Y has a negative binomial distribution, then for any positive integer n, there exist independent identically distributed random variables Y1, ..., Yn whose sum has the same distribution that Y has.
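The sum-of-geometrics representation is easy to verify by simulation (a NumPy sketch, not part of the article; NumPy's `geometric` counts trials up to and including the first "success", so the count of our successes before the first failure is `geometric(1 - p) - 1`):

```python
import numpy as np

rng = np.random.default_rng(1)
r, p, n = 4, 0.3, 100_000

# Y_r as a sum of r independent Geometric(p) variables on {0, 1, 2, ...}:
geom_sum = rng.geometric(1 - p, size=(n, r)).sum(axis=1) - r
direct = rng.negative_binomial(r, 1 - p, size=n)

print(geom_sum.mean(), direct.mean())   # both ≈ r*p/(1-p) ≈ 1.71
print(geom_sum.var(), direct.var())     # both ≈ r*p/(1-p)**2 ≈ 2.45
```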
### Representation as compound Poisson distribution
The negative binomial distribution NB(r,p) can be represented as a compound Poisson distribution: Let {Yn, n ∈ ℕ0} denote a sequence of independent and identically distributed random variables, each one having the logarithmic distribution Log(p), with probability mass function
$f(k) = \frac{-p^k}{k\ln(1-p)},\qquad k\in{\mathbb N}.$
Let N be a random variable, independent of the sequence, and suppose that N has a Poisson distribution with mean λ = −r ln(1 − p). Then the random sum
$X=\sum_{n=1}^N Y_n$
is NB(r,p)-distributed. To prove this, we calculate the probability generating function GX of X, which is the composition of the probability generating functions GN and GY1. Using
$G_N(z)=\exp(\lambda(z-1)),\qquad z\in\mathbb{R},$
and
$G_{Y_1}(z)=\frac{\ln(1-pz)}{\ln(1-p)},\qquad |z|<\frac1p,$
we obtain
$\begin{align}G_X(z) &=G_N(G_{Y_1}(z))\\ &=\exp\biggl(\lambda\biggl(\frac{\ln(1-pz)}{\ln(1-p)}-1\biggr)\biggr)\\ &=\exp\bigl(-r(\ln(1-pz)-\ln(1-p))\bigr)\\ &=\biggl(\frac{1-p}{1-pz}\biggr)^r,\qquad |z|<\frac1p,\end{align}$
which is the probability generating function of the NB(r,p) distribution.
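The compound-Poisson construction can likewise be simulated (a NumPy sketch, not part of the article; the logarithmic sampler uses inverse-cdf over a truncated support, which is an implementation shortcut, not part of the derivation):

```python
import numpy as np

rng = np.random.default_rng(3)
r, p, n = 2.0, 0.5, 100_000
lam = -r * np.log(1 - p)            # Poisson mean of the number of summands

# Logarithmic Log(p) sampler by inverse-cdf over a truncated support:
ks = np.arange(1, 200)
cdf = np.cumsum(-(p ** ks) / (ks * np.log(1 - p)))

def sample_log(size):
    idx = np.searchsorted(cdf, rng.uniform(size=size))
    return ks[np.minimum(idx, len(ks) - 1)]

N = rng.poisson(lam, size=n)
X = np.array([sample_log(m).sum() for m in N])          # the random sum
direct = rng.negative_binomial(r, 1 - p, size=n)        # NB(r, p) directly

print(X.mean(), direct.mean())      # both ≈ r*p/(1-p) = 2.0
print(X.var(), direct.var())        # both ≈ r*p/(1-p)**2 = 4.0
```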
## Properties
### Cumulative distribution function
The cumulative distribution function can be expressed in terms of the regularized incomplete beta function:
$F(k; r, p) \equiv \Pr(X\le k) = 1 - I_{p}(k+1, r). \!$
### Sampling and point estimation of p
Suppose p is unknown and an experiment is conducted where it is decided ahead of time that sampling will continue until r successes are found. A sufficient statistic for the experiment is k, the number of failures.
In estimating p, the minimum variance unbiased estimator is
$\hat{p}=\frac{r-1}{r+k-1}.$
The maximum likelihood estimate of p is
$\tilde{p}=\frac{r}{r+k},$
but this is a biased estimate. Its inverse, (r + k)/r, is an unbiased estimate of 1/p, however.[7]
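With hypothetical numbers (not from the article), the two estimators compare as follows:

```python
# Hypothetical experiment: sample until r = 5 successes, observing k = 12 failures.
r, k = 5, 12
p_mvue = (r - 1) / (r + k - 1)    # minimum-variance unbiased estimator
p_mle  = r / (r + k)              # maximum-likelihood estimate (biased upward)
print(p_mvue, p_mle)              # 0.25 and ~0.2941
```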
### Relation to the binomial theorem
Suppose Y is a random variable with a binomial distribution with parameters n and p. Assume p + q = 1, with p, q ≥ 0. Then the binomial theorem implies that
$1=1^n=(p+q)^n=\sum_{k=0}^n {n \choose k} p^k q^{n-k}.$
Using Newton's binomial theorem, this can equally be written as:
$(p+q)^n=\sum_{k=0}^\infty {n \choose k} p^k q^{n-k},$
in which the upper bound of summation is infinite. In this case, the binomial coefficient
${n \choose k}={n(n-1)(n-2)\cdots(n-k+1) \over k! }$
is defined when n is a real number, instead of just a positive integer. But in our case of the binomial distribution it is zero when k > n. We can then say, for example,
$(p+q)^{8.3}=\sum_{k=0}^\infty {8.3 \choose k} p^k q^{8.3 - k}.$
Now suppose r > 0 and we use a negative exponent:
$1=p^r\cdot p^{-r}=p^r (1-q)^{-r}=p^r \sum_{k=0}^\infty {-r \choose k} (-q)^k.$
Then all of the terms are positive, and the term
$p^r {-r \choose k} (-q)^k$
is just the probability that the number of failures before the rth success is equal to k, provided r is an integer. (If r is a negative non-integer, so that the exponent is a positive non-integer, then some of the terms in the sum above are negative, so we do not have a probability distribution on the set of all nonnegative integers.)
Now we also allow non-integer values of r. Then we have a proper negative binomial distribution, a generalization of the Pascal distribution that coincides with the Pascal distribution when r happens to be a positive integer.
Recall from above that
The sum of independent negative-binomially distributed random variables r1 and r2 with the same value for parameter p is negative-binomially distributed with the same p but with "r-value" r1 + r2.
This property persists when the definition is thus generalized, and affords a quick way to see that the negative binomial distribution is infinitely divisible.
## Parameter estimation
### Maximum likelihood estimation
The likelihood function for N iid observations (k1, ..., kN) is
$L(r,p)=\prod_{i=1}^N f(k_i;r,p)\,\!$
from which we calculate the log-likelihood function
$\ell(r,p) = \sum_{i=1}^N \ln{(\Gamma(k_i + r))} - \sum_{i=1}^N \ln(k_i !) - N\ln{(\Gamma(r))} + Nr\ln{(1-p)} + \sum_{i=1}^N k_i \ln(p).$
To find the maximum we take the partial derivatives with respect to r and p and set them equal to zero:
$\frac{\partial \ell(r,p)}{\partial p} = - Nr\frac{1}{1-p} + \sum_{i=1}^N k_i \frac{1}{p} = 0$ and
$\frac{\partial \ell(r,p)}{\partial r} = \sum_{i=1}^N \psi(k_i + r) - N\psi(r) + N\ln{(1-p)} =0$
where
$\psi(k) = \frac{\Gamma'(k)}{\Gamma(k)} \!$ is the digamma function.
Solving the first equation for p gives:
$p = \frac{ \sum_{i=1}^N k_i / N } {r + \sum_{i=1}^N k_i / N }$
Substituting this in the second equation gives:
$\frac{\partial \ell(r,p)}{\partial r} = \sum_{i=1}^N \psi(k_i + r) - N\psi(r) + N\ln{\left(\frac{r}{r + \sum_{i=1}^N k_i / N}\right)} =0$
This equation cannot be solved in closed form. If a numerical solution is desired, an iterative technique such as Newton's method can be used.
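A root-finding sketch of this profile-likelihood equation (assuming SciPy is available; the data are simulated here, and the bracket for `brentq` is chosen by hand around the truth):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import digamma

rng = np.random.default_rng(2)
true_r, true_p = 5.0, 0.4
# NumPy's negative_binomial(n, p) counts failures before n successes, so pass 1-p:
data = rng.negative_binomial(true_r, 1 - true_p, size=200_000)
kbar = data.mean()

def profile_score(r):
    # dl/dr with p = kbar/(r + kbar) substituted, per the equations above
    return digamma(data + r).mean() - digamma(r) + np.log(r / (r + kbar))

r_hat = brentq(profile_score, 0.5, 20.0)    # hand-picked bracket around the truth
p_hat = kbar / (r_hat + kbar)
print(r_hat, p_hat)                          # ≈ 5 and ≈ 0.4
```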
## Examples
### Selling candy
Pat is required to sell candy bars to raise money for the 6th grade field trip. There are thirty houses in the neighborhood, and Pat is not supposed to return home until five candy bars have been sold. So the child goes door to door, selling candy bars. At each house, there is a 0.4 probability of selling one candy bar and a 0.6 probability of selling nothing.
What's the probability of selling the last candy bar at the nth house?
Recall that the NegBin(r, p) distribution describes the probability of k failures and r successes in k + r Bernoulli(p) trials with success on the last trial. Selling five candy bars means getting five successes. The number of trials (i.e. houses) this takes is therefore k + 5 = n. The random variable we are interested in is the number of houses, so we substitute k = n − 5 into a NegBin(5, 0.4) mass function and obtain the following mass function of the distribution of houses (for n ≥ 5):
$f(n) = {(n-5) + 5 - 1 \choose n-5} \; 0.4^5 \; 0.6^{n-5} = {n-1 \choose n-5} \; 2^5 \; \frac{3^{n-5}}{5^n}.$
What's the probability that Pat finishes on the tenth house?
$f(10) = 0.1003290624. \,$
What's the probability that Pat finishes on or before reaching the eighth house?
To finish on or before the eighth house, Pat must finish at the fifth, sixth, seventh, or eighth house. Sum those probabilities:
$f(5) = 0.01024 \,$
$f(6) = 0.03072 \,$
$f(7) = 0.055296 \,$
$f(8) = 0.0774144 \,$
$\sum_{j=5}^8 f(j) = 0.17367.$
What's the probability that Pat exhausts all 30 houses in the neighborhood?
This can be expressed as the probability that Pat does not finish on the fifth through the thirtieth house:
$1-\sum_{j=5}^{30} f(j) = 1 - I_{0.4}(5, 30-5+1) \approx 1 - 0.99849 = 0.00151.$
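All three answers can be reproduced directly from the mass function (a Python sketch, not part of the article):

```python
from math import comb

def f(n):
    """P(the fifth sale happens exactly at house n): NegBin(5, 0.4) at k = n - 5."""
    k = n - 5
    return comb(k + 4, k) * 0.4 ** 5 * 0.6 ** k

print(f(10))                                   # ≈ 0.1003290624
print(sum(f(j) for j in range(5, 9)))          # ≈ 0.17367
print(1 - sum(f(j) for j in range(5, 31)))     # ≈ 0.00151
```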
### Polygyny in African societies
Data on polygyny among a wide range of traditional African societies suggest that the distribution of wives follows a range of binomial profiles. The majority of these are negative binomial, indicating the degree of competition for wives. However, some tend towards a Poisson distribution and even beyond towards a true binomial, indicating a degree of conformity in the allocation of wives. Further analysis of these profiles indicates shifts along this continuum between more competitiveness or more conformity according to the age of the husband and also according to the status of particular sectors within a society. In this way, these binomial distributions provide a tool for comparison, between societies, between sectors of societies, and over time.[8]
## References
1. Villarini, G.; Vecchi, G.A. and Smith, J.A. (2010). "Modeling of the dependence of tropical storm counts in the North Atlantic Basin on climate indices". 138 (7): 2681–2705. doi:10.1175/2010MWR3315.1.
2. Mailier, P.J.; Stephenson, D.B.; Ferro, C.A.T.; Hodges, K.I. (2006). "Serial Clustering of Extratropical Cyclones". 134 (8): 2224–2240. doi:10.1175/MWR3160.1.
3. Vitolo, R.; Stephenson, D.B.; Cook, Ian M.; Mitchell-Wallace, K. (2009). "Serial clustering of intense European storms". 18 (4): 411–424. doi:10.1127/0941-2948/2009/0393.
4. McCullagh, Peter; Nelder, John (1989). Generalized Linear Models, Second Edition. Boca Raton: Chapman and Hall/CRC. ISBN 0-412-31760-5.
5. Cameron, Adrian C.; Trivedi, Pravin K. (1998). Regression analysis of count data. Cambridge University Press. ISBN 0-521-63567-5.
6. Huiming, Zhang; Lili Chu, Yu Diao (2012). "Some Properties of the Generalized Stuttering Poisson Distribution and its Applications". 5 (1): 11–26. doi:10.3968/j.sms.1923845220120501.Z0697.
7. J. B. S. Haldane, "On a Method of Estimating Frequencies", Vol. 33, No. 3 (Nov., 1945), pp. 222–225. JSTOR 2332299
8. Spencer, Paul, 1998, The Pastoral Continuum: the Marginalization of Tradition in East Africa, Clarendon Press, Oxford (pp. 51-92).
## Further reading
• Hilbe, Joseph M., Negative Binomial Regression, Cambridge, UK: Cambridge University Press (2007) Negative Binomial Regression – Cambridge University Press
http://mathhelpforum.com/calculus/137239-convergence-sequence.html
# Thread:
1. ## Convergence of sequence
A) How do you prove that if $0 \le x \le 10$, then $0 \le \sqrt{x+1} \le 10$?
B) So once that is found, then how can you prove that if $0 \le u \le v \le 10$, then $0 \le \sqrt{u+1} \le \sqrt{v+1} \le 10$?
C) They give a recursively defined sequence: $a_1 = 0.3$; $a_{n+1} = \sqrt{a_n + 1}$ for $n \ge 1$.
How do you find out the first five terms for it (I know what they are). then prove that this sequence converges. What is a specific theorem that will guarantee convergence, along with the algebraic results of parts A and B?
D) How do you find out the exact limit of the sequence defined in part C? Are you supposed to square the recursive equation and take limits using limit theorems? If so, then which are these theorems?
2. A) Graph it or use a table. Also remember that $\sqrt{|x|}$ is going to be less than $|x|$ for $|x| > 1$, and if $|x| < 1$ then $\sqrt{|x|} < 1$, though greater than $|x|$
3. The sequence can be expressed as...
$\Delta_{n} = a_{n+1} - a_{n} = f(a_{n}) = 1 + \sqrt{a_{n}} - a_{n}$ (1)
The function that generates the sequence...
$f(x) = 1 + \sqrt{x} - x$ (1)
... is represented in figure...
... and, because it has only one fixed point at $x_{0} = \frac{3 + \sqrt{5}}{2} = 2.6180339887\dots$ and that is an attractive fixed point, any $a_{0} \ge 0$ will produce a sequence convergent at $x_{0}$ without oscillations, because the slope of $f(x)$ at $x=x_{0}$ is in absolute value less than $1$...
Kind regards
$\chi$ $\sigma$
4. Originally Posted by chisigma
The sequence can be expressed as...
$\Delta_{n} = a_{n+1} - a_{n} = f(a_{n}) = 1 + \sqrt{a_{n}} - a_{n}$ (1)
The function that generates the sequence...
$f(x) = 1 + \sqrt{x} - x$ (1)
... is represented in figure...
... and, because it has only one fixed point at $x_{0} = \frac{3 + \sqrt{5}}{2} = 2.6180339887\dots$ and that is an attractive fixed point, any $a_{0} \ge 0$ will produce a sequence convergent at $x_{0}$ without oscillations, because the slope of $f(x)$ at $x=x_{0}$ is in absolute value less than $1$...
Kind regards
$\chi$ $\sigma$
How did you find out delta_n?
5. The sequence is defined as...
$a_{n+1} = 1 + \sqrt{a_{n}}$ (1)
... so that is...
$\Delta_{n} = a_{n+1} - a_{n} = 1 + \sqrt{a_{n}} - a_{n}$ (2)
Kind regards
$\chi$ $\sigma$
6. If $0\le x\le 10$ then $1\le x+1\le 11$ so $1\le \sqrt{x+1}\le \sqrt{11}< 10$
7. After a more careful reading it seems to me that the sequence is defined as...
$a_{n+1} = \sqrt {1 + a_{n}}$ (1)
If that is true I apologize for my mistake ... fortunately the solving procedure is almost the same as the one we have described. The (1) can alternatively be written as...
$\Delta_{n} = a_{n+1} - a_{n} = f(a_{n}) = \sqrt{1 + a_{n}} - a_{n}$ (2)
... the 'generating function' of which is...
$f(x)= \sqrt{1 + x} - x$ (3)
... that is represented in figure...
As in the previous case there is only one attractive fixed point, at $x_{0} = \frac{1 + \sqrt{5}}{2} = 1.6180339887\dots$, and, since the slope of $f(x)$ at $x=x_{0}$ is in absolute value less than $1$, any $a_{0} \ge -1$ will produce a sequence converging to $x_{0}$ without oscillations...
Kind regards
$\chi$ $\sigma$
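A quick numerical check of the corrected iteration (a Python sketch, not part of the thread): starting from $a_1 = 0.3$, the sequence $a_{n+1} = \sqrt{1 + a_n}$ increases monotonically to $\frac{1+\sqrt{5}}{2}$.

```python
from math import sqrt

terms = [0.3]
for _ in range(4):
    terms.append(sqrt(1 + terms[-1]))
print(terms)                      # the first five terms, strictly increasing

a = 0.3
for _ in range(60):               # |f'(x0)| < 1, so convergence is fast
    a = sqrt(1 + a)
print(a, (1 + sqrt(5)) / 2)       # both ≈ 1.6180339887...
```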
8. Yes the correction is true. Thank you very much.
9. So, what would be the exact limit of the sequence defined in part c? This is just for myself. But would you square the recursive equation and take limits using some limit theorems?
10. Let us suppose that a sequence is defined by an initial term $a_{0}$ and the recursive relation...
$\Delta_{n} = a_{n+1} - a_{n} = f(a_{n})$ (1)
... where $f(x)$ is a continuous function in $a \le x \le b$. From (1) it follows immediately that ...
$\lim_{n \rightarrow \infty} a_{n} = a_{0} + \sum_{n=0}^{\infty} \Delta_{n}$ (2)
The limit (2) will exist only if the series converges and a necessary condition for that is...
$\lim_{n \rightarrow \infty} \Delta_{n} =0$ (3)
In order to find a sufficient condition let us suppose now that $f(x)$ has in $[a,b]$ a single zero at $x=x_{0}$ and that in that interval $|f(x)|\le |r(x)|$, where $r(x)$ is a linear function crossing the x-axis at $x_{0}$ with slope equal to $-1$. Such a situation is illustrated in the figure...
... where $r(x)$ is the red line. Under these conditions we can use the ratio test to verify the convergence of the series in (2), and we find that $\forall n$...
$\frac{\Delta_{n+1}}{\Delta_{n}} < 1$ (4)
... so that the series converges and for (3) the sequence converges at $x_{0}$...
Kind regards
$\chi$ $\sigma$
http://math.stackexchange.com/questions/156711/distance-between-empirical-and-uniform-continuous-distribution
# Distance between empirical and uniform continuous distribution
I would like to define a measure of the non-uniformity of a distribution. I have a sample of $n$ iid values drawn from an unknown underlying continuous distribution with cdf $F(x), x \in [0,1)$. I need to define a distance from the empirical distribution to the uniform distribution. What I tried is to use the Kolmogorov–Smirnov distance to the uniform distribution ($F_{uniform}(x) \equiv x$): $$K_n = \sqrt{n} \sup_x |F_n(x)-x|$$
where $F_n(x)$ is the empirical cdf. It would be perfect if the underlying distribution were uniform. It also works really well when the underlying distribution is close to uniform. Unfortunately, if the underlying distribution is non-uniform, one cannot say anything about the distribution of $K_n$, and it also becomes strongly $n$-dependent.
That statistic would still work for me as a distance if all my samples (drawn from different $F(x)$) had the same $n$ in the sense that I could arrange all these samples by the distance to the uniform distribution and say which one is more uniform. In the case of variable $n$ and $F(x)$ I am trying to think of the KS distance as: $$K_n = \sup_x |\sqrt{n}(F_n(x)-F(x)) + \sqrt{n}(F(x)-x)|$$
So, the first "part" of this expression is the standard KS statistic and has a universal distribution independent of $n$ (for big enough $n$), and the second part grows as $\sqrt{n}$, but of course it's hard or impossible to say anything about the whole expression. Still, my physicist's intuition tells me that $K_n \propto A+B\sqrt{n}$ and the $B$ coefficient can be a measure of distance between $F(x)$ and $F_{uniform}(x) \equiv x$. Perhaps I can estimate $B$ if I resample my original sample (either with or without replacement).
Does it look like the right direction? Does anyone have better ideas?
P.S: In my original problem the distributions are defined on circles, so I use Kuiper's statistic instead of KS. However, I expect people to be more familiar with KS and it seems to me that any developments for KS distance can be simply moved to Kuiper's distance.
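One way to compute the statistic in practice (a Python/NumPy sketch; the function name and the Beta(2,2) test case are illustrations, not from the question):

```python
import numpy as np

def ks_to_uniform(sample):
    """K_n = sqrt(n) * sup_x |F_n(x) - x| for a sample in [0, 1)."""
    x = np.sort(np.asarray(sample))
    n = len(x)
    i = np.arange(1, n + 1)
    d = max(np.max(i / n - x), np.max(x - (i - 1) / n))
    return np.sqrt(n) * d

rng = np.random.default_rng(0)
print(ks_to_uniform(rng.uniform(size=1000)))     # O(1) for a uniform sample
print(ks_to_uniform(rng.beta(2, 2, size=1000)))  # ~ B*sqrt(n) for a non-uniform one
```

For a perfectly regular grid $(i - 1/2)/n$ the statistic is exactly $0.5/\sqrt{n}$, which makes a handy sanity check.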
http://mathhelpforum.com/calculus/59371-graphic.html
# Thread:
1. ## graphic
How do I construct the graph of the solid limited by: $2y^2=x$ , $\frac{x}{4}+\frac{y}{2}+\frac{z}{4}=1$ , $z=0$ , $y=0$
2. Hello Apprentice. I don't know how you guys do those triple integrals without plotting them first. Actually I had this when those guys were helping you with it a few days ago but thought it wasn't necessary. It's a little tough to see without interactively rotating it, but below is the domain of integration and the other is the three surfaces. The volume is under the blue section. Look carefully at the domain and then it's easy to see that it's:
$\int_0^1\int_{2y^2}^{4-2y}\int_0^{4-x-2y} dzdxdy$
Here's the Mathematica code to draw the surfaces. Also, don't get discouraged by the code. Once you get good at it, it only takes a few minutes to code it.
Code:
```polys = Graphics3D[{Opacity[0.2],
LightPurple, {Polygon[{{5, 0, 5},
{-5, 0, 5}, {-5, 0, -5},
{5, 0, -5}}]}}];
polys2 = Graphics3D[{Opacity[0.8],
LightPurple, {Polygon[{{5, 5, 0},
{-5, 5, 0}, {-5, -5, 0},
{5, -5, 0}}]}}];
c1 = ContourPlot3D[{2*y^2 == x}, {x, 0, 5},
{y, -2, 2}, {z, 0, 5}]
p1 = Plot3D[4*(1 - x/4 - y/2), {x, 0, 4},
{y, 0, 2}, PlotStyle -> {Opacity[0.5]}]
p2 = Plot3D[4*(1 - x/4 - y/2), {x, 0, 4},
{y, 0, 2}, PlotStyle -> Blue,
RegionFunction -> Function[{x, y},
x > 2*y^2 && 4*(1 - x/4 - y/2) > 0]]
final = Show[{p1, p2, polys, polys2, c1},
BoxRatios -> {1, 1, 1}, AxesLabel ->
{Style["X", 20], Style["Y", 20],
Style["Z", 20]}]
domain = Plot[{Sqrt[x/2], 2 - x/2},
{x, 0, 5}]
GraphicsGrid[{{domain, final}}]```
3. Originally Posted by shawsend
Hello Apprentice. I don't know how you guys do those triple integrals without plotting them first. Actually I had this when those guys were helping you with it a few days ago but thought it wasn't necessary. It's a little tough to see without interactively rotating it, but below is the domain of integration and the other is the three surfaces. The volume is under the blue section. Look carefully at the domain and then it's easy to see that it's:
$\int_0^1\int_{2y^2}^{4-2y}\int_0^{4-x-2y} dzdxdy$
Here's the Mathematica code to draw the surfaces. Also, don't get discouraged by the code. Once you get good at it, it only takes a few minutes to code it.
Code:
```polys = Graphics3D[{Opacity[0.2],
LightPurple, {Polygon[{{5, 0, 5},
{-5, 0, 5}, {-5, 0, -5},
{5, 0, -5}}]}}];
polys2 = Graphics3D[{Opacity[0.8],
LightPurple, {Polygon[{{5, 5, 0},
{-5, 5, 0}, {-5, -5, 0},
{5, -5, 0}}]}}];
c1 = ContourPlot3D[{2*y^2 == x}, {x, 0, 5},
{y, -2, 2}, {z, 0, 5}]
p1 = Plot3D[4*(1 - x/4 - y/2), {x, 0, 4},
{y, 0, 2}, PlotStyle -> {Opacity[0.5]}]
p2 = Plot3D[4*(1 - x/4 - y/2), {x, 0, 4},
{y, 0, 2}, PlotStyle -> Blue,
RegionFunction -> Function[{x, y},
x > 2*y^2 && 4*(1 - x/4 - y/2) > 0]]
final = Show[{p1, p2, polys, polys2, c1},
BoxRatios -> {1, 1, 1}, AxesLabel ->
{Style["X", 20], Style["Y", 20],
Style["Z", 20]}]
domain = Plot[{Sqrt[x/2], 2 - x/2},
{x, 0, 5}]
GraphicsGrid[{{domain, final}}]```
Is it built using the $dzdydx$ order?
4. Originally Posted by Apprentice123
Is it built using the $dzdydx$ order?
Wait a minute Apprentice . . . that code has nothing to do with the integral. I'm just plotting the graphs without even thinking about the integral, and then looking at the graphs to figure out the limits of an integration. I hope I'm not causing confusion by posting complicated code which interferes with the underlying mathematics.
Just plot one thing:
Code:
```p2 = Plot3D[4*(1 - x/4 - y/2), {x, 0, 4},
{y, 0, 2}]```
Understand that part before you do anything else. You know why I'm using 4(1-x/4-y/2) right? Now just experiment with that one plot, nothing else: add axes, add labels, add plotting regions, add colors, get that one down perfect then move on to the other pieces. If you don't have Mathematica, use whatever software you have but don't try and do it all at once. It's too confusing.
http://nrich.maths.org/7053/index
### Weekly Challenge 43: A Close Match
Can you massage the parameters of these curves to make them match as closely as possible?
### Weekly Challenge 44: Prime Counter
A weekly challenge concerning prime numbers.
### Weekly Challenge 28: the Right Volume
Can you rotate a curve to make a volume of 1?
# Weekly Challenge 12: Venn Diagram Fun
##### Stage: 5 Short Challenge Level:
A Venn diagram is a way of representing all possible logical relationships between a collection of sets.
The image shows a Venn diagram for three sets $A$, $B$ and $C$.
How would you describe each of the seven regions in the diagram using unions $\cup$ and intersections $\cap$ of $A, B, C, A^c, B^c, C^c$, where the complements $A^c, B^c$ and $C^c$ of the sets $A, B$ and $C$ are the sets of elements not contained in $A, B$ and $C$ respectively, relative to a universal set $A\cup B\cup C$?
Create a Venn diagram for 4 sets $A$, $B$, $C$ and $D$. Make sure that your diagram contains regions for all possible intersections and you might like to experiment to create a particularly pleasing diagram.
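The seven-region description can be checked mechanically with finite sets (a Python sketch with made-up example sets, not part of the problem):

```python
A, B, C = {1, 2, 4, 5}, {2, 3, 5, 6}, {4, 5, 6, 7}
U = A | B | C                                 # universal set, as in the problem

def comp(S):                                  # complement relative to U
    return U - S

regions = [
    A & comp(B) & comp(C),   # in A only
    comp(A) & B & comp(C),   # in B only
    comp(A) & comp(B) & C,   # in C only
    A & B & comp(C),         # in A and B but not C
    A & comp(B) & C,         # in A and C but not B
    comp(A) & B & C,         # in B and C but not A
    A & B & C,               # in all three
]
print(regions)

# The seven regions partition the universal set:
assert set().union(*regions) == U
assert sum(len(R) for R in regions) == len(U)
```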
Did you know ... ?
Venn diagrams are useful in many problems in logic, probability, computer science and set theory. As well as a useful tool, they are an area of study in themselves and much research has gone into the creation of particularly beautiful or symmetric Venn diagrams for larger numbers of sets.
http://math.stackexchange.com/questions/151658/x-cos-n-i-sin-n-n-geq-0-is-dense-in-mathbbs1-subset-mathbbc
# $X=\{\cos n + i\sin n : n\geq 0\}$ is dense in $\mathbb{S}^1 \subset \mathbb{C}$
I'd like a hint to prove the above assertion. My idea was to find, for each point $z \in \mathbb{S}^1$, a sequence of points of $X$ converging to it, but I'm not sure this is the right approach.
– Martin Sleziak May 30 '12 at 17:20
@MartinSleziak maybe, I don't know... – Jr. May 30 '12 at 17:25
Hint: $n \bmod 2\pi$ is dense in $[0,2\pi].$ – bobobinks May 30 '12 at 17:29
## 2 Answers
First solution (longer but more general): I propose to show that the set $G=\{n+2k\pi: n\in \mathbb{Z},\ k\in \mathbb{Z}\}$ is dense in $\mathbb{R}$. You can use the fact that the additive subgroups of $\mathbb{R}$ are either cyclic or dense. $G$ is not cyclic, else $\pi$ would be rational. So it is dense.
Direct and easier solution: For $\varepsilon>0$ divide the unit circle into parts of equal size whose length does not exceed $\varepsilon$. For $m\ne n$, $e^{im}\ne e^{in}$ (else $\pi$ would be rational). Hence there is an infinite number of $e^{in}$. As there is only a finite number of parts, one can find $m<n$ such that $e^{im}$ and $e^{in}$ are not equal and lie in the same part. Hence there exists $\theta\ne0$ such that $|\theta|<\varepsilon$ and $e^{i(n-m)}=e^{i\theta}$. Then $r=e^{i(n-m)}\in X$ and, using powers of $r$, you can find an element of $X$ at distance less than $\varepsilon$ from any point on the circle.
+1 Concise, perhaps a little too much, but precise. – DonAntonio May 30 '12 at 18:04
@rannousse I think $|r|=1$, did you mean $|n-m|< \epsilon$? – Jr. May 31 '12 at 21:35
@rannousse How do I know that some power of $r$ will be at the same part of the choosen point? – Jr. May 31 '12 at 21:48
@Jr You are right about $|r|=1$. I fixed the proof. – ranousse May 31 '12 at 22:06
@Jr For your second question, multiply by $r$ until $r^n$ is before the considered point $z$ and $r^{n+1}$ after it. Then $|r^n-z|<\varepsilon$ (notice that each time you multiply by $r$ you advance of $\theta$ along the circle) – ranousse May 31 '12 at 22:12
Hints: 1) Show the $n$-th roots of unity on $\,S^1\,$ are the vertices of a regular $n$-gon (i.e., they're equally distributed over the circle).
2) Since $\,\displaystyle{\lim_{n\to\infty}\frac{2\pi}{n}=0}\,$, there exists a root of unity whose arc distance from $\,1=e^{2\pi i}\,$ is arbitrarily small (try some $\epsilon > 0\,$ stuff here).
3) Deduce now that any element $\,z=e^{it}\,\,,2k\pi\leq t<2(k+1)\pi\,,\,k\in\mathbb{Z}\,$ in $\, S^1\,$ is close enough to some root of unity.
-
http://physics.stackexchange.com/questions/tagged/electromagnetism+solid-state-physics
# Tagged Questions
2answers
135 views
+50
### Ferromagnetism with mobile spins
How can electron spins in Iron at room temperature have ferromagnetic order even though they are travelling at very high speeds? One could argue that spin and motion are completely uncorrelated and ...
3answers
169 views
### There must be free positive charges, moving oppositely to electrons for the wire with current to stay neutral
All popular expositions (e.g. these ones) of relativistic electromagnetism claim univocally that electrons in motion become more dense due to the speed. They teach that Lorentz contraction of charges ...
1answer
43 views
### Understanding drift velocities in currents
I have a doubt about the understanding of drift velocities in a current. My problem is that the textbook speaks very loosely about this. It's like: "well, if we apply a field $E$ then the charges will ...
1answer
96 views
### Fermi level with Landau levels
So my question is regarding where the Fermi energy is when you have 2D electron gas in an applied magnetic field. My book explains that, using the Landau gauge, you find that the 2D density of states ...
3answers
253 views
### Why is copper diamagnetic?
Cu has an unpaired electron in 4s, but it is diamagnetic. I thought that it has to be paramagnetic. What am I missing?
2answers
349 views
### What is the difference between a photon and a phonon?
More specifically, how does a wave-particle duality differ from a quasiparticle/collective excitation? What makes a photon a gauge boson and a phonon a Nambu–Goldstone boson?
0answers
41 views
### Order of magnetic phase transitions
Does any phase transition occur in the transition from a paramagnetic to a diamagnetic state? What would its order be, and how would I calculate that order?
1answer
901 views
### Why are some materials diamagnetic, others paramagnetic, and others ferromagnetic?
Why are some materials diamagnetic, others paramagnetic, and others ferromagnetic? Or, put another way, which of their atomic properties determines which of the three forms of magnetism (if at all) ...
0answers
32 views
### Can one obtain saturation magnetization from FMR measurements?
Especially for magnetic thin films. Normally this is done by magneto-optical Kerr effect or SQUID measurements. Or is there a way to calculate the saturation magnetization based on other measured ...
1answer
267 views
### Pull-up and pull-down resistors
Currently I am working with pull-up and pull-down resistors and trying to understand what they mean, but I have not been able to. I searched Wikipedia ...
1answer
164 views
### What is the time correlation function in the Green-Kubo formulation of ionic current?
I am reading a paper, and I came across the Green-Kubo formulation, where the conductivity $\sigma$ of charged particles is related to the time correlation function of the $z$-component of the ...
2answers
90 views
### What are the specific electronic properties that make an atom ferromagnetic versus simply paramagnetic?
As I understand it, paramagnetism is similar in its short-term effect to ferromagnetism (spins of the electrons line up with the magnetic field, etc.), though apparently the effect is weaker. What is ...
1answer
240 views
### Lorentz invariance of a frequency- and wavelength- dependent dielectric tensor
Suppose we have a material described by a dielectric tensor $\bar{\epsilon}$. In frequency domain, this tensor depends on the wave frequency $\omega$ and the wave vector $\vec{k}$. Clearly not all ...
3answers
560 views
### Shine a light into a superconductor
A type-I superconductor can expel almost all magnetic flux (below some critical value $H_c$) from its interior when superconducting. Light as we know is an electromagnetic wave. So what would happen ...
http://unapologetic.wordpress.com/2010/04/20/borel-sets-and-lebesgue-measure/?like=1&_wpnonce=d19428bcb3
# The Unapologetic Mathematician
## Borel Sets and Lebesgue Measure
Let’s consider some of the easy properties of the Borel sets and Lebesgue measure we introduced yesterday.
First off, every countable set of real numbers is a Borel set of measure zero. In particular, every single point $\{a\}$ is a Borel set. Indeed, $\{a\}$ can be written as the countable intersection
$\displaystyle\{a\}=\bigcap\limits_{n=1}^\infty\left[a,a+\frac{1}{n}\right)$
so it’s a Borel set. Further, continuity from above (the intervals decrease and have finite measure) tells us that
$\displaystyle\mu\left(\{a\}\right)=\lim\limits_{n\to\infty}\mu\left(\left[a,a+\frac{1}{n}\right)\right)=\lim\limits_{n\to\infty}\frac{1}{n}=0$
and so the singleton $\{a\}$ has measure zero. But $\mu$ is countably additive, so given any countable collection $A\subseteq\mathbb{R}$ the measure $\mu(A)$ is the sum of the measures of the individual points, each of which is zero.
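The cover argument behind this can be illustrated numerically. The sketch below (assuming, for concreteness, an enumeration of the rationals in $[0,1)$ by increasing denominator) covers the $n$-th point by an interval of length $\epsilon/2^{n+1}$, so the total length of the cover stays below $\epsilon$ no matter how many points are enumerated:

```python
from fractions import Fraction

# Illustration of the standard cover argument: covering the n-th point of a
# countable set by an interval of length eps/2**(n+1) gives a cover of total
# length at most eps/2 < eps, so the set has Lebesgue measure zero.

def enumerate_rationals(max_den):
    """Enumerate the distinct rationals in [0,1) with denominator <= max_den."""
    seen = set()
    for q in range(1, max_den + 1):
        for p in range(q):
            r = Fraction(p, q)
            if r not in seen:
                seen.add(r)
                yield r

def total_cover_length(eps, max_den=20):
    """Total length of the intervals covering the enumerated points."""
    return sum(eps / 2 ** (n + 1)
               for n, _ in enumerate(enumerate_rationals(max_den), start=1))

for eps in [1.0, 0.1, 0.01]:
    assert total_cover_length(eps) < eps  # the cover is always shorter than eps
```

Since $\epsilon$ can be taken arbitrarily small, the outer measure of the countable set is zero, matching the additivity argument above.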
Next, as I said when I introduced semiclosed intervals, we could have started with open intervals, but the details would have been messier. Now we can see that the $\sigma$-ring $\mathcal{S}$ generated by the collection $\mathcal{P}$ of semiclosed intervals is the same as that generated by the collection $\mathcal{U}$ of all open sets.
We can see, in particular, that each open interval $\left(a,b\right)$ is a Borel set. Indeed, the point $\{a\}$ is a Borel set, as is the semiclosed interval $\left[a,b\right)$, and we have the relation $\left(a,b\right)=\left[a,b\right)\setminus\{a\}$. Every other open set in $\mathbb{R}$ is a countable union of open intervals, and so they’re all Borel sets as well. Conversely, we could write
$\displaystyle\{a\}=\bigcap\limits_{n=1}^\infty\left(a-\frac{1}{n},a+\frac{1}{n}\right)$
and find the singleton $\{a\}$ in the $\sigma$-ring generated by $\mathcal{U}$. Then we can write $\left[a,b\right)=\left(a,b\right)\cup\{a\}$ and find every semiclosed interval in this $\sigma$-ring as well. And thus $\mathcal{S}=\mathcal{S}(\mathcal{P})\subseteq\mathcal{S}(\mathcal{U})$; combined with the previous observation that every open set lies in $\mathcal{S}$, this gives $\mathcal{S}(\mathcal{P})=\mathcal{S}(\mathcal{U})$.
We can also tie our current measure back to the concept of outer Lebesgue measure we introduced before. Back then, we defined the “volume” of a collection of open intervals to be the sum of the “volumes” of the intervals themselves. We defined the outer measure of a set to be the infimum of the volumes of finite open covers. And, indeed, this is exactly the outer measure $\mu^*$ corresponding to Lebesgue measure $\mu$.
Remember that the outer measure $\mu^*(E)$ is defined for a set $E\subseteq\mathbb{R}$ by
$\displaystyle\mu^*(E)=\inf\left\{\mu(F)\vert F\in\mathcal{S},E\subseteq F\right\}$
Since $\mathcal{U}\subseteq\mathcal{S}$, we have the inequality
$\displaystyle\mu^*(E)\leq\inf\left\{\mu(U)\vert U\in\mathcal{U},E\subseteq U\right\}$
On the other hand, if $\epsilon$ is any positive number, then by the definition of $\mu^*$ we can find a sequence $\left\{\left[a_n,b_n\right)\right\}$ of semiclosed intervals so that
$\displaystyle E\subseteq\bigcup\limits_{n=1}^\infty\left[a_n,b_n\right)$
and
$\displaystyle\sum\limits_{n=1}^\infty(b_n-a_n)\leq\mu^*(E)+\frac{\epsilon}{2}$
We can thus widen each of these semiclosed intervals just a bit to find
$\displaystyle E\subseteq\bigcup\limits_{n=1}^\infty\left(a_n-\frac{\epsilon}{2^{n+1}},b_n\right)=U\in\mathcal{U}$
and
$\displaystyle\mu(U)\leq\sum\limits_{n=1}^\infty(b_n-a_n)+\frac{\epsilon}{2}\leq\mu^*(E)+\epsilon$
Since $\epsilon$ was arbitrary, we find that $\inf\left\{\mu(U)\vert U\in\mathcal{U},E\subseteq U\right\}\leq\mu^*(E)$. And, thus, that
$\displaystyle\mu^*(E)=\inf\left\{\mu(U)\vert U\in\mathcal{U},E\subseteq U\right\}$
In effect, we’ve replaced the messily-defined “volume” of an open cover by the more precise Lebesgue measure $\mu$, but the result is the same. The “outer Lebesgue measure” from our investigations of multiple integrals is the same as the outer measure induced by our new Lebesgue measure.
Posted by John Armstrong | Analysis, Measure Theory
## 5 Comments »
1. I’m still reading and pondering these with delight. Haven’t commented much lately, as had to complete a grant proposal, make progress in the quantum computing theory in a novel I’m writing, and be the key eyewitness in a legal malpractice lawsuit. And — for complicated job-search reasons — formally applied to the Caltech Math department, where I got my B.S. in 1973, for Grad School next year (albeit I’ve been an adjunct professor in the interim and published a great deal). Your unapologetic blogmaster and some of his readers are also sensitive to the job search mess during Global Recession.
Comment by | April 21, 2010 | Reply
2. [...] . This sends the semiclosed interval to the interval . But this is a Borel set: , and the singletons are Borel sets. Thus we see that the reflection sends Borel sets to Borel sets. It should also be clear that it [...]
Pingback by | April 22, 2010 | Reply
3. [...] is the class of open sets. We already know [...]
Pingback by | April 23, 2010 | Reply
4. [...] binary expansions, which are exactly the rational numbers. But this is a countable set, and countable sets have Lebesgue measure zero. Consequently, we find that . Since , there must be some positive measure in in order to make up [...]
Pingback by | May 5, 2010 | Reply
5. [...] things only go wrong at the one point, and the singleton has measure zero. That is, the sequence converges almost everywhere to the function with constant value . The [...]
Pingback by | May 17, 2010 | Reply
http://mathoverflow.net/revisions/108974/list
## Return to Answer
6 added 112 characters in body
For convenience consider the representation $Y=V_n\oplus V_{n-1,1}$ instead of $V_{n-1,1}$. Then the multiplicity of the representation of $S_n$ indexed by the partition $\lambda$ of $n$ in the $k$th tensor power of $Y$ equals the scalar product of the symmetric function $s_1^k$ (where $s_1=x_1+x_2+\cdots$ denotes a Schur function) with the plethysm $s_\lambda[1+h_1+h_2+h_3+\cdots]$, where $h_i$ is the complete symmetric function of degree $i$. This follows from the theory of inner plethysm; see Exercise 7.74 of Enumerative Combinatorics, volume 2. Since plethysm is in general intractable, I don't expect anything much simpler. This result does allow, however, these decompositions to be computed using Stembridge's Maple package SF for small values of $n$ and $k$.
Addendum. I used the method of Exercise 7.74 to get the analogous result for $V_{n-1,1}$. Namely, the multiplicity of the representation of $S_n$ indexed by the partition $\lambda$ of $n$ in the $k$th tensor power of $V_{n-1,1}$ equals the scalar product of $s_1^k$ with the symmetric function $(1-e_1+e_2-e_3+\cdots)\cdot s_\lambda[1+h_1+h_2+h_3+\cdots]$, where $e_i$ is an elementary symmetric function.
Addendum #2. A alternative formulation is the following. The multiplicity of the representation of $S_n$ indexed by the partition $\lambda$ of $n$ in the $k$th tensor power of $V_{n-1,1}$ equals the scalar product of $(s_1-1)^k$ with the symmetric function $s_\lambda[1+h_1+h_2+h_3+\cdots]$.
News flash! I said above that plethysm is in general intractable. Indeed, the Schur function expansion of $s_\lambda[1+h_1+h_2+\cdots]$ looks hopeless to me. However, taking the scalar product with $s_1^k$ results in a lot of simplification. I can show the following. The multiplicity of the representation of $S_n$ indexed by the partition $\lambda$ of $n$ in the $k$th tensor power of $V_n\oplus V_{n-1,1}$ equals the coefficient of $s_\lambda$ in the Schur function expansion of $(1+h_1+h_2+\cdots)\cdot \sum_{j=1}^k S(k,j)s_1^j$, where $S(k,j)$ is a Stirling number of the second kind. (After obtaining this result, I noticed that it is essentially the same as Corollary 2 of the Goupil-Chauve paper mentioned in Vasu Vineet's comment.) Since for fixed $j$ we have $S(k,j)=\frac{1}{j!}\sum_{i=1}^j (-1)^{j-i}{j\choose i}i^k$, we can get explicit formulas for the multiplicities for fixed $\lambda$ that don't involve Stirling numbers. For instance, when $\lambda=(3)$ the multiplicity is $\frac{1}{6}(3^k+3)$, for $\lambda=(2,1)$ it is $3^{k-1}$, and for $\lambda=(1,1,1)$ it is $\frac{1}{6}(3^k-3)$. In particular, the multiplicity for $\lambda = (1^n)$ (i.e., $n$ parts equal to 1) is $S(k,n)+S(k,n-1)$.
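These closed forms can be sanity-checked numerically. The sketch below restricts to the smallest interesting case $n=3$, where the irreducibles indexed by $(3)$, $(2,1)$, $(1,1,1)$ have dimensions $1,2,1$ and $Y=V_3\oplus V_{2,1}$ has dimension $3$; the multiplicities weighted by dimensions must then sum to $\dim Y^{\otimes k}=3^k$, and the $\lambda=(1^n)$ formula can be checked directly:

```python
from math import comb, factorial

# Sanity check of the closed forms above for n = 3: the multiplicities
# m3, m21, m111 of the S_3 irreducibles in Y^{(x)k} must satisfy
# m3*1 + m21*2 + m111*1 = 3^k, and m111 must equal S(k,3) + S(k,2).

def stirling2(k, j):
    """Stirling number of the second kind, via the explicit sum quoted above."""
    return sum((-1) ** (j - i) * comb(j, i) * i ** k
               for i in range(1, j + 1)) // factorial(j)

assert stirling2(4, 2) == 7 and stirling2(5, 3) == 25  # known values

for k in range(1, 13):
    m3 = (3 ** k + 3) // 6    # multiplicity of lambda = (3), dimension 1
    m21 = 3 ** (k - 1)        # lambda = (2,1), dimension 2
    m111 = (3 ** k - 3) // 6  # lambda = (1,1,1), dimension 1
    assert m3 + 2 * m21 + m111 == 3 ** k          # dimension count
    assert m111 == stirling2(k, 3) + stirling2(k, 2)  # lambda = (1^3) formula
```

The checks pass for all $k$ tried, which is reassuring but of course no substitute for the plethysm argument.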
http://physics.stackexchange.com/questions/36286/linearizing-gravity-to-cal-oh3
# Linearizing Gravity to ${\cal O}(h^3)$
I've seen the action of linearized gravity in many places. We basically have
$${\cal L} \sim \frac{1}{G_N}\left( - \frac{1}{2}h^{\alpha\beta} \Box h_{\alpha\beta} + \frac{1}{4} h \Box h + {\cal O}(h^3)\right)$$
in the gauge where the trace-reversed field is divergenceless. I'd like to do some field theory on linearized-gravity backgrounds by treating $h_{\mu\nu}$ as a massless spin-2 field. I can't seem to find the ${\cal O}(h^3)$ terms in the Lagrangian anywhere. I know how to evaluate them, but it looks nasty.
Are there any known references that just lists the next to leading order terms in the above Lagrangian?
Thanks a lot!
-
## 2 Answers
I think the earliest papers where this was written down were by DeWitt. But for a reference easily available via the arXiv, look at hep-th/9411092. Eq. (2.17) has the expansion you want, and Eq. (2.18) even has it to fourth order in $h$.
-
Thank you so much!! This was exactly what I was looking for. – Prahar Sep 13 '12 at 15:13
Most of the difficulties of these calculations comes from the fact that people write the Lagrangian directly in terms of the metric perturbation $h_{\mu\nu}$.
It is much simpler to write it in terms of the difference of connections tensor $F_{\mu\nu}{}^\beta$, i.e., $$(\nabla_\mu-\bar{\nabla}_\mu)A_\nu = \mathcal{F}_{\mu\nu}{}^\beta{}A_\beta,$$ where $\nabla_\mu$ is the connection compatible with the full metric ($g_{\mu\nu}$) and $\bar{\nabla}_\mu$ compatible with the background metric ($\bar{g}_{\mu\nu}$ in your case Minkowski). The curvature scalar is naturally quadratic in $\mathcal{F}_{\mu\nu}{}^\beta$, hence, the higher order terms comes from the expansion of $\sqrt{-g}$ and $\delta{}g^{\mu\nu}$ in terms of the metric difference $\xi_{\mu\nu} = g_{\mu\nu} - \bar{g}_{\mu\nu}$. Using this method one don't need to choose a gauge a priori.
In our paper http://arxiv.org/abs/1206.4374 we develop the Lagrangian and Hamiltonian up to second order using the methodology described above. Additionally, we write the action in term of the kinetic quantities defined in geodesic space-time foliation. Using the results of this paper it is easy to generalize (in the Minkowski background) the Lagrangian to higher orders.
-
very interesting! +1 – lurscher Sep 18 '12 at 18:39
I have never seen this before! Very interesting! I'll look into it. Thanks! +1 – Prahar Sep 23 '12 at 16:59
http://math.stackexchange.com/questions/144668/two-sided-chebyshev-inequality-for-event-not-symmetric-around-the-mean/144673
# Two-sided Chebyshev inequality for event not symmetric around the mean?
Let $X$ be a random variable with finite expected value $μ$ and non-zero variance $σ^2$. Then for any real number $k > 0$, two-sided Chebyshev inequality states $$\Pr(|X-\mu|\geq k\sigma) \leq \frac{1}{k^2}.$$
1. I saw a paper apply the two-sided Chebyshev inequality to $\Pr(|X|\geq b)$ and obtain the upper bound $Var(X)/b^2$, even though $X$ does not necessarily have mean zero. I think this is not correct. Is there any way to apply the two-sided Chebyshev inequality in this case?
2. Instead, for $\Pr(|X-a|\geq b)$, I think it is only possible to apply one-sided Chebyshev inequality to $\Pr(X-a\geq b)$ and $\Pr(X-a \leq -b)$ respectively. Am I right?
Thanks and regards!
-
## 2 Answers
Edited in response to OP's comment and request
It depends on what you mean by applying the two-sided Chebyshev Inequality. The same method that is used to prove the Chebyshev Inequality (viz. bounding $E[\mathbf 1_{(-\infty, \mu-b]}(X) + \mathbf 1_{[\mu+b,\infty)}(X)]$ from above by $E[(X-\mu)^2/b^2] = \sigma^2/b^2$) can be used to show that $$P\{|X-a| \geq b\} \leq \frac{E[(X-a)^2]}{b^2} = \frac{\sigma^2 + (\mu-a)^2}{b^2}.$$ In more detail, $\displaystyle \mathbf 1_{(-\infty, a-b]}(x) + \mathbf 1_{[a+b,\infty)}(x) \leq \left(\frac{x-a}{b}\right)^2$ for all $x \in \mathbb R$, since the parabola $\displaystyle \left(\frac{x-a}{b}\right)^2$ passes through $(a-b,1), (a,0)$, and $(a+b,1)$. Consequently, $$\begin{align*} P\{|X-a| \geq b\} &= P\{X \leq a-b\} + P\{X \geq a + b\}\\ &= \int_{-\infty}^{a-b} f_X(x)\,\mathrm dx + \int_{a+b}^{\infty} f_X(x)\,\mathrm dx\\ &= \int_{-\infty}^{\infty}\left(\mathbf 1_{(-\infty, a-b]}(x) + \mathbf 1_{[a+b,\infty)}(x)\right)f_X(x)\,\mathrm dx\\ &\leq \int_{-\infty}^{\infty}\left(\frac{x-a}{b}\right)^2f_X(x)\,\mathrm dx\\ &= \frac{1}{b^2}\int_{-\infty}^{\infty}(x-a)^2f_X(x)\,\mathrm dx\\ &= \frac{E[(X-a)^2]}{b^2} \end{align*}$$ If $a$ equals the mean $\mu$, the expectation on the right is the variance $\sigma^2$ and we recover Chebyshev's Inequality. More generally, it is a standard result in probability theory that $$\begin{align*} E[(X-a)^2] &= E[((X-\mu) + (\mu - a))^2]\\ &= E[(X-\mu)^2] + (\mu-a)^2 + 2(\mu-a)E[X-\mu]\\ &= \sigma^2 + (\mu-a)^2 \end{align*}$$ and we get the inequality I stated initially.
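The bound just derived can be checked numerically. Below is a small Monte Carlo sketch (an illustration, not part of the proof) assuming, for concreteness, a normal $X$ with $\mu=2$, $\sigma=1.5$:

```python
import random

# Monte Carlo check of P(|X - a| >= b) <= (sigma^2 + (mu - a)^2) / b^2
# for an illustrative normal X; the bound holds for any X with this mean
# and variance, so it certainly holds here (up to sampling error).

random.seed(0)
mu, sigma = 2.0, 1.5
samples = [random.gauss(mu, sigma) for _ in range(100000)]

for a in [0.0, 1.0, 2.0, 4.0]:
    for b in [2.0, 3.0, 5.0]:
        p = sum(abs(x - a) >= b for x in samples) / len(samples)
        bound = (sigma ** 2 + (mu - a) ** 2) / b ** 2
        assert p <= bound + 1e-3  # small slack for Monte Carlo error
```

Note that for $a$ far from $\mu$ the bound can exceed $1$ and is then trivially true, which foreshadows the point made in the other answer.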
With regard to the second question in the OP's comment,
I saw a paper apply the two-sided Chebyshev inequality to $\Pr(|X|\geq b)$ and obtain the upper bound $Var(X)/b^2$, even though $X$ does not necessarily have mean zero. I think this is not correct.
Yes, that is not right when the mean is not zero; the upper bound from the variation on the Chebyshev Inequality described above is $(\sigma^2+\mu^2)/b^2$.
-
Thanks! (1) I wonder how to prove the inequality in your reply using the same method for proving Chebyshev inequality? (2) Are you saying the paper is right, and I am wrong about it? – Tim May 13 '12 at 19:02
There can exist no such non-trivial upper bound since, for every fixed $b$, $$\sup\limits_{a\in\mathbb R}\mathrm P(|X-a|\geqslant b)=1.$$
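A quick numerical illustration of this claim (assuming, for concreteness, a standard normal $X$ and $b=1$): as $a$ moves away from the bulk of the distribution, $\mathrm P(|X-a|\geqslant b)$ climbs toward $1$, so no nontrivial bound uniform in $a$ can exist.

```python
import random

# For fixed b, shifting a far from the distribution's support pushes
# P(|X - a| >= b) toward 1; here X is standard normal and b = 1.

random.seed(1)
samples = [random.gauss(0.0, 1.0) for _ in range(100000)]
b = 1.0

probs = []
for a in [0.0, 2.0, 5.0, 10.0]:
    probs.append(sum(abs(x - a) >= b for x in samples) / len(samples))

assert probs == sorted(probs)  # probability grows as a moves away
assert probs[-1] > 0.999       # essentially 1 already at a = 10
```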
-
Thanks! Is the example easy to construct? – Tim May 13 '12 at 18:42
Every random variable $X$ and real number $b$ do the job. – Did May 13 '12 at 18:45
http://physics.stackexchange.com/questions/27023/unitarity-of-s-matrix-in-qft/27024
# Unitarity of S-matrix in QFT
I am a beginner in QFT, and my question is probably very basic.
As far as I understand, usually in QFT, and in particular in QED, one postulates the existence of IN and OUT states. Unitarity of the S-matrix is also essentially postulated. On the other hand, in the more classical and better understood non-relativistic scattering theory, unitarity of the S-matrix is a non-trivial theorem, proved under assumptions on the scattering potential that are not automatically satisfied in general. For example, unitarity of the S-matrix may be violated if the potential is too strongly attractive at small distances: in that case a particle (or two particles interacting with each other) may approach from infinity and form a bound state. (However, the Coulomb potential is not attractive enough for this phenomenon.)
The first question is why this cannot happen in the relativistic situation, say in QED. Why electron and positron (or better anti-muon) cannot approach each other from infinity and form a bound state?
As far as I understand, this would contradict the unitarity of the S-matrix. On the other hand, in principle the S-matrix can be computed, using Feynman rules, to any order of approximation in the coupling constants. Thus, in principle, the unitarity of the S-matrix could be checked in this sense to any order.
The second question is whether such a proof, for QED or any other theory, was done anywhere? Is it written somewhere?
-
Why do you say that two particles can't form a bound state in QFT? I'm pretty sure there are two-dimensional integrable field theories with scattering $A+B \to C$, where $A$, $B$ and $C$ are perfectly stable particle states. – Sidious Lord Feb 26 '12 at 15:36
@Sidious Lord: Can I read somewhere about such examples? Can it happen in QED? (As far as I heard, the 2d case is somewhat exceptional in QED: in the Schwinger model polarization of vacuum has an effect of creation of a bound state of electron-positron pair which is a free boson. But I might be wrong about this, I do not really know this.) – MKO Feb 26 '12 at 18:56
Hi @Dilaton: Concerning the tag edit(v3) I would suggest the unitarity tag and the s-matrix-theory tag instead of the qed tag (because OP is really asking about qft) and the research-level tag (because the question is textbook material). – Qmechanic♦ Jan 1 at 16:27
Thanks @Qmechanic, it never hurts when you hava a look at it too when I retag, since you are much much much more knowledgable. I change the tags as you suggest. And happy new year to you :-) – Dilaton Jan 1 at 17:33
## 3 Answers
In principle, bound states are possible in a QFT. In this case, their states must be part of the S-matrix in- and out- state space in order that the S-matrix is unitary. (Weinberg, QFT I, p.110)
However, for QED proper (i.e., without any species of particles apart from the photon, electron, and positron) it happens that there are no stable bound states; an electron and a positron form only positronium, which is unstable and decays quickly into two photons. http://en.wikipedia.org/wiki/Positronium
[Edit: Positronium is unstable: http://arxiv.org/abs/hep-ph/0310099 - muonium is stable electromagnetically (i.e., in QED + muon without weak force), but decays via the weak interaction, hence is unstable, too: http://arxiv.org/abs/nucl-ex/0404013. About how to make muonium, see page 3 of this article, or the paper discovering muonium, Phys. Rev. Lett. 5, 63–65 (1960). There is no obstacle in forming the bound state; due to the attraction of unlike charges, an electron is easily captured by an antimuon.]
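A rough numeric illustration of why positronium counts as unstable: at lowest order in QED, para-positronium decays to two photons with rate $\Gamma = \alpha^5 m_e c^2 / (2\hbar)$ (a standard textbook result, quoted here for illustration rather than taken from the references above). A minimal sketch:

```python
# Lowest-order two-photon decay rate of para-positronium:
#   Gamma = alpha^5 * m_e c^2 / (2 * hbar)
# Standard QED textbook formula; constants in eV units.
alpha = 1.0 / 137.035999      # fine-structure constant
me_c2 = 0.51099895e6          # electron rest energy in eV
hbar = 6.582119569e-16        # reduced Planck constant in eV*s

gamma = alpha**5 * me_c2 / (2 * hbar)   # decay rate in 1/s
tau = 1.0 / gamma                        # mean lifetime in s

print(f"para-positronium lifetime ~ {tau:.3e} s")  # ~1.2e-10 s, i.e. ~125 ps
```

This reproduces the familiar lifetime of roughly 125 ps; ortho-positronium (three-photon decay) lives about a thousand times longer.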
Note that the current techniques for relativistic QFT do not handle bound states well. Bound states of two particles are (in the simplest approximation) described by Bethe-Salpeter equations. The situation is technically difficult because such bound states always have multiparticle contributions.
-
Thanks for the answer. Can I read somewhere a proof that positronium is unstable? Another related question, let us consider QED with photon, electron, muon and their anti-particles. I have heard that electron and anti-muon can form a bound state (muonium). Is there a good place to read a proof that muonium is stable? Also in fact the question I asked is more specialized: even if in QED bound states exist, electron and anti-muon coming from infinity may not collide to form this bound state, like in non-relativistic situation. That was the question. – MKO Mar 14 '12 at 9:20
I edited my answer to reply to this. – Arnold Neumaier Mar 14 '12 at 13:19
Thanks a lot. The reference seems to be very relevant. I will have a look. – MKO Mar 15 '12 at 13:19
Unitarity of the S-matrix can be checked perturbatively. Bound states tend to be non-perturbative effects, so they may not show up in naive perturbative calculations. Unfortunately, the detailed proof is not discussed in many places. One book that has it is Scharf's book on QED. When looking through other books you should look for keywords like optical theorem and Cutkosky rules. Bound states are usefully discussed in the last chapter of vol. 1 of Weinberg's treatise on QFT.
-
Thanks a lot. I will have a look at these references. – MKO Mar 4 '12 at 10:20
IN and OUT states are not necessarily free states; they can be bound states too, so transitions from free to bound states are also possible. In the case of QED with an electron-antimuon bound state, its formation is accompanied by photon emission present in the final state of the system. It does not contradict unitarity.
Problems with proofs in QED and other QFTs are due to the wrong coupling term, like $jA$, which is not correct on its own and is corrected with counterterms. In addition, these counterterms cannot be treated exactly but only perturbatively, so the true interaction of the true constituents is not seen.
-
Thanks for the comment. I realize that In and Out states may be bound states in principle. However in QED bound states are not taken into account. That means that free electrons, muons etc. cannot become a bound state (am I wrong?). Also I realize that when one uses Feynmann rules to compute S-matrix, one should include all counterterms. So I think it does not really answer the question. – MKO Feb 26 '12 at 12:40
There is a cross section of producing bound states when two opposite-charge particles collide. All what is necessary is to emit the excess of energy-momentum as photons that is quite possible. Also it is possible to create a pair in the final state that is in a bound state, not free electron and positron. In QED there is no problem with unitarity in this respect. Renormalized and infra-red fixed QED is adequate theory. Feynman rules can include bound states in In and Out states, as a matter of fact. – Vladimir Kalitvianski Feb 26 '12 at 16:47
If I understand correctly, in QED in 4d space-time there is no cross-section for producing bound states when two opposite-charge particles collide. Definitely in the non-relativistic setting two particles coming from infinity and interacting according to the Coulomb law (at short distances) cannot collide. – MKO Feb 26 2012 at 18:50
A pair of non interacting electron and positron is described with a product of two plane waves. A bound state is described with a product of a plane wave (center of mass) and a wave function of a bound state, easy. – Vladimir Kalitvianski Feb 26 '12 at 19:53
http://www.math.ksu.edu/math240/book/chap1/fobern.php
### First Order Bernoulli Equations
#### Additional Examples
Solve the following initial value problem $$\begin{align}\frac{dy}{dx} + 8y &= -6\exp(-4x)y^{4}\\ y(0) &= 3 \end{align}$$ This is a Bernoulli equation. First we find the general solution following the paradigm.
1. We substitute $y = v^{1/(1-4)} = v^{-1/3}$, so $dy/dx = -(1/3)v^{-4/3}dv/dx$, and our equation becomes $$-(1/3)v^{-4/3}\frac{dv}{dx} + 8v^{-1/3} = -6\exp(-4x)v^{-4/3}$$
2. Multiply by $-3v^{4/3}$ to obtain a linear equation in the usual form. $$\frac{dv}{dx} - 24v = 18\exp(-4x)$$
3. Solve the linear equation.
1. Find the integrating factor $$\mu(x) = \exp\left(\int -24 dx \right) = \exp(-24x)$$
2. Multiply through by the integrating factor $$\exp(-24x)\frac{dv}{dx} - 24\exp(-24x)v = 18\exp(-4x)\exp(-24x) = 18\exp(-28x)$$
3. Recognize the left-hand-side as $\displaystyle \frac{d}{dx}(\mu(x)v).$ $$\frac{d}{dx}(\exp(-24x)v) = 18\exp(-28x)$$
4. Integrate both sides. $$\exp(-24x)v = -(9/14)\exp(-28x) + C$$
5. Divide through by $\mu(x)$ to solve for $v.$ $$v = -(9/14)\exp(-4x) + C\exp(24x)$$
4. Back substitute for $y.$ $$y = (-(9/14)\exp(-4x) + C\exp(24x))^{-1/3}$$
5. We check that $y = 0$ is indeed a singular solution.
Now we plug in the initial values $x = 0$ and $y = 3$ and solve for $C = 257/378,$ to obtain the solution to the initial value problem $$y = ( -(9/14)\exp(-4x) + (257/378)\exp(24x) )^{-1/3}$$
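The closed-form answer is easy to sanity-check numerically; the sketch below substitutes the solution back into the ODE and verifies the residual with a central-difference derivative (pure Python, no libraries):

```python
import math

C = 257 / 378

def y(x):
    # y = (-(9/14) e^{-4x} + C e^{24x})^(-1/3)
    return (-(9/14) * math.exp(-4*x) + C * math.exp(24*x)) ** (-1/3)

# Initial condition: y(0) = (1/27)^(-1/3) = 3.
assert abs(y(0.0) - 3.0) < 1e-12

# Check the ODE  y' + 8y = -6 e^{-4x} y^4  at a few points,
# approximating y' by a central difference.
h = 1e-6
for x in [0.05, 0.1, 0.2]:
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    residual = dydx + 8*y(x) + 6*math.exp(-4*x)*y(x)**4
    assert abs(residual) < 1e-4, residual
print("solution verified")
```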
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8534610271453857, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/63348/integral-interpolation-by-polynomial/63835
|
## Integral interpolation by polynomial
This question arises from a discussion with my friends about a commonly encountered IQ test question: "What's the next number in the series 2, 6, 12, 20, ...?" Here a "number" usually means an integer. I was wondering whether there is a systematic way to solve such problems. Let us call a point on the plane an integer point if all its components are integers. I want to know the following:
Given a finite set of integer points, can we always find a corresponding polynomial that passes through all these points and maps integers to integers?
-
Presumably the points are supposed to have different x coordinates ... – Ricky Demer Apr 28 2011 at 21:46
Perhaps you are referring to Newton Polynomials? – Alex R. Apr 28 2011 at 22:10
en.wikipedia.org/wiki/Lagrange_polynomial – J.C. Ottem Apr 28 2011 at 23:04
Somewhat related question: mathoverflow.net/questions/4442/… – quid May 3 2011 at 13:39
## 3 Answers
Let the points be $(x_j, y_j), j=1\ldots n$. If the $x_j$ are consecutive, the Lagrange interpolating polynomial will take integers to integers: easy proof by induction, using the difference operator $\Delta(p)(x) = p(x+1) - p(x)$. If not, choose arbitrary integers for the $y$ values to fill in the gaps.
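A quick empirical check of this claim, using exact rational arithmetic so that integrality is not obscured by floating point; the data points below are made up for illustration:

```python
from fractions import Fraction

def lagrange_eval(xs, ys, t):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at t."""
    total = Fraction(0)
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        term = Fraction(yj)
        for m, xm in enumerate(xs):
            if m != j:
                term *= Fraction(t - xm, xj - xm)
        total += term
    return total

# Consecutive integer x-values and arbitrary integer y-values.
xs = [0, 1, 2, 3, 4]
ys = [7, -2, 5, 5, 11]

# The interpolant should take integer values at *all* integers.
for t in range(-10, 20):
    v = lagrange_eval(xs, ys, t)
    assert v.denominator == 1, (t, v)
print("integer values at all tested integers")
```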
-
Alternatively, if the $x_i$ are consecutive, you can express the values of the Lagrange polynomials as binomial coefficients (which are integers). – Joël Cohen Apr 29 2011 at 1:19
Just to make a few comments:
1) As noted, if we have a list of integer values $a_0,a_1,\cdots, a_n$ then there is a (unique) polynomial $f(t)$ of degree no more than $n$ which maps integers to integers and such that $f(k)=a_k$ for `$0 \le k \le n$`.
2) There is a method involving differences, differences of the differences, etc., which reveals $f(t)$ as an integer linear combination of the polynomials $\binom{t}{j}$ for `$0 \le j \le n$`. Furthermore, the polynomials of this form are exactly the polynomials sending integers to integers. These (specialized) Newton polynomials are very similar to the Taylor series, which uses the basis $\frac{t^k}{k!}$.
3) If you just want the next term (as predicted by this polynomial) then you don't need to explicitly find the polynomial, just extend the differences. Many test takers realize this. $$\begin{matrix}2&\ &6&\ &12&\ &20&\ &\mathbf{30}\\ &4&&6&&8&&\mathbf{10}&\\ &&2&&2&&\mathbf{2}&&\end{matrix}$$ corresponds to $f(n)=2+4n+2\binom{n}{2}=n^2+3n+2$
4) There is also a polynomial of degree 3 that gives $2,6,12,20,\mathbf{2011}$ so there is no unique extension.
5) If the given sequence is $1,2,4,8,?$ then the polynomial interpolation gives $15$ next from $\binom n0+\binom n1+\binom n2+\binom n3$ although most tests would favor another continuation.
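The difference-table shortcut in point 3 is easy to mechanize; here is a sketch that assumes the differences eventually become constant (as they do for data sampled from a polynomial):

```python
def next_term(seq):
    """Predict the next term via the method of finite differences."""
    # Build the difference table down to a constant (or zero) row.
    table = [list(seq)]
    while len(table[-1]) > 1 and any(table[-1]):
        row = table[-1]
        table.append([b - a for a, b in zip(row, row[1:])])
    # Extend the bottom row, then propagate the sums back up.
    table[-1].append(table[-1][-1])
    for upper, lower in zip(reversed(table[:-1]), reversed(table[1:])):
        upper.append(upper[-1] + lower[-1])
    return table[0][-1]

print(next_term([2, 6, 12, 20]))   # -> 30
print(next_term([1, 2, 4, 8]))     # -> 15  (point 5: not 16!)
```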
-
On 5, the polynomial interpolation is actually a very useful sequence, the so-called "cake numbers". – Charles May 3 2011 at 18:28
and 1,2,4,8,16,31 are 4 dimensional cake numbers... – Aaron Meyerowitz May 3 2011 at 19:23
Conway and Guy show how to extend the method of differences in different directions (literally) in "The Book of Numbers" and deal with some non-polynomial sequences. – Mark Bennet May 3 2011 at 19:33
It's a topic I liked to cover when I was still teaching junior-level algebra, even if it didn't fit in well with the other topics. You start with a function defined on the set of integers from $0$ to $n$ inclusive, and end with a polynomial of degree $\le n$ that agrees at those $n+1$ points; if the values you started with are integers, the polynomial always sends integers to integers. You take successive differences, as indicated by Robert Israel above, then list $f(0)$, $\Delta f(0)$, $\Delta^2f(0)$, up to $\Delta^n f(0)$, and use these as coefficients, which you multiply by $C_0(x)=1$, $C_1(x)=x$, $C_2(x)=x(x-1)/2$, etc., the binomial polynomials. Your assignment is to try it out for a few examples, and then prove that the method works.
-
I like to work this in when I can. It is nice to compare it to Taylor series: $f(x)=\sum D^nf\ (0)\frac{x^n}{n!}$ vs $f(x)=\sum \Delta^nf\ (0)\frac{(x)_n}{n!}$ Where $(x)_3=x(x-1)(x-2)$ is the falling factorial. – Aaron Meyerowitz May 3 2011 at 7:10
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Hypergeometric_distribution
Hypergeometric distribution
In mathematics, the hypergeometric distribution is a discrete probability distribution that describes the number of successes in a sequence of n draws from a finite population without replacement.
A typical example is the following: There is a shipment of N objects in which D are defective. The hypergeometric distribution describes the probability that in a sample of n distinctive objects drawn from the shipment exactly k objects are defective.
In general, if a random variable X follows the hypergeometric distribution with parameters N, D and n, then the probability of getting exactly k successes is given by
$$P(X = k) = \frac{\binom{D}{k} \binom{N-D}{n-k}}{\binom{N}{n}}$$
The probability is positive when k is between max{ 0, D + n − N } and min{ n, D }.
The formula can be understood as follows: There are $\binom{N}{n}$ possible samples (without replacement). There are $\binom{D}{k}$ ways to obtain $k$ defective objects and there are $\binom{N-D}{n-k}$ ways to fill out the rest of the sample with non-defective objects.
When the population size is large (i.e. N is large) the hypergeometric distribution can be approximated reasonably well with a binomial distribution with parameters n (number of trials) and p = D / N (probability of success in a single trial).
The fact that the sum of the probabilities, as k runs through the range of possible values, is equal to 1, is essentially Vandermonde's identity from combinatorics.
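The probability formula and the normalization remark translate directly into code; a sketch using Python's exact integer binomial coefficients, with illustrative parameters $N=50$, $D=5$, $n=10$:

```python
from math import comb

def hypergeom_pmf(k, N, D, n):
    """P(X = k): k defectives in a sample of n drawn without
    replacement from N objects, of which D are defective."""
    return comb(D, k) * comb(N - D, n - k) / comb(N, n)

N, D, n = 50, 5, 10

# Support of X: max(0, D + n - N) .. min(n, D)
lo, hi = max(0, D + n - N), min(n, D)
probs = [hypergeom_pmf(k, N, D, n) for k in range(lo, hi + 1)]

# Vandermonde's identity: the probabilities sum to 1.
assert abs(sum(probs) - 1.0) < 1e-12
print(f"P(X = 1) = {hypergeom_pmf(1, N, D, n):.4f}")
```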
http://mathoverflow.net/questions/19471?sort=oldest
## Is the sum of 2 Lebesgue measurable sets measurable?
Is the sum of two measurable sets measurable? I think it is not...
-
See also the previous MO question concerning sums of Borel sets mathoverflow.net/questions/48571/… – Andrey Rekalo May 27 2011 at 14:51
## 3 Answers
Evidently, there are measure zero sets with a non-measurable sum. The article begins as follows:
Krzysztof Ciesielski, Hajrudin Fejzić, Chris Freiling,
Measure zero sets with non-measurable sum
Abstract
For any C ⊆ R there is a subset A ⊆ C such that A + A has inner measure zero and outer measure the same as C + C. Also, there is a subset A of the Cantor middle third set such that A+A is Bernstein in [0, 2]. On the other hand there is a perfect set C such that C + C is an interval I and there is no subset A ⊆ C with A + A Bernstein in I.
1 Introduction.
It is not at all surprising that there should be measure zero sets, A, whose sum A+A = {x+y : x ∈ A, y ∈ A} is non-measurable. Ask a typical mathematician why this should be so and you are likely to get the following response:
The Cantor middle-third set, when added to itself gives an entire interval, [0, 2]. So certainly there exists a measure zero set that when added to itself gives a non-measurable set.
The intuition being that an interval has much more content than is needed for a non-measurable set. Indeed such sets do exist (in ZFC). Sierpiński (1920) seems to be the first to address this issue. Actually, he shows the existence of measure zero sets X, Y such that X+Y is non-measurable (see [7]). The paper by Rubel (see [6]) in 1963 contains the first proof that we could find for the case X = Y (see also [5]). Ciesielski [3] extends these results to much greater generality, showing that A can be a measure zero Hamel basis, or it can be a (non-measurable) Bernstein set and that A+A can also be Bernstein. He also establishes similar results for multiple sums, A + A + A etc.
This paper is mainly about the statement above and the intuition behind it. Below we list four conjectures, each of which seems justified by extending this line of reasoning.
1. Not only does such a set exist, but it can be taken to be a subset of the Cantor middle-third set, C. (This does not seem to immediately follow from any of the above proofs. Thomson [9, p. 136] claims this to be true, but without proof.)
2. The intuition really has nothing to do with the precise structure of the Cantor set, which might lead one to conjecture the following. Suppose C is any set with the property that C + C contains a set of positive measure. Then there must exist a subset A ⊆ C such that A + A is non-measurable.
3. The intuition relies on the fact that non-measurable sets can have far less content than an entire interval. Therefore, the claim should also hold when non-measurable is replaced by other similar qualities. Recall that if I is a set then a set S is called Bernstein in I if and only if both S and its complement intersect every non-empty perfect subset of I. Constructing a set that is Bernstein in an interval is one of the standard ways of establishing non-measurability. Certainly, any set that is Bernstein in an interval has far less content than the interval itself. Therefore, we might conjecture that there is a subset A ⊆ C with A+A Bernstein in [0,2].
4. Combining the reasoning behind the Conjectures 2 and 3, let C be any set with the property that C + C contains an interval, I. We might conjecture that there must exist a subset A ⊆ C such that A + A is Bernstein in I.
We will settle these four conjectures in the next four sections.
The paper goes on to show that conjectures 1, 2 and 3 are true, but 4 is false.
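The "typical mathematician" fact quoted above — that the Cantor middle-third set $C$ satisfies $C + C = [0,2]$ — can at least be illustrated at a finite level. The sketch below replaces $C$ by its $n$-th construction stage $C_n \supseteq C$ and checks, with exact rational endpoints, that the Minkowski sum of the stage-$n$ intervals covers $[0,2]$ exactly:

```python
from fractions import Fraction

def cantor_stage(n):
    """Intervals of the n-th stage C_n of the middle-thirds Cantor set."""
    intervals = [(Fraction(0), Fraction(1))]
    for _ in range(n):
        nxt = []
        for a, b in intervals:
            third = (b - a) / 3
            nxt.append((a, a + third))       # keep left third
            nxt.append((b - third, b))       # keep right third
        intervals = nxt
    return intervals

def minkowski_sum_union(intervals):
    """Union (merged) of all pairwise sums of the given intervals."""
    sums = sorted((a1 + a2, b1 + b2)
                  for a1, b1 in intervals for a2, b2 in intervals)
    merged = [sums[0]]
    for a, b in sums[1:]:
        la, lb = merged[-1]
        if a <= lb:
            merged[-1] = (la, max(lb, b))
        else:
            merged.append((a, b))
    return merged

C6 = cantor_stage(6)                  # 64 intervals of length 3^-6
union = minkowski_sum_union(C6)
assert union == [(Fraction(0), Fraction(2))]
print("C_6 + C_6 = [0, 2]")
```

Since $C \subseteq C_n$ and $C_n \subseteq [0,1]$, the single merged interval $[0,2]$ is exactly what the theorem predicts at every stage.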
-
"The Cantor middle-third set, when added to itself gives an entire interval, [0, 2]. So certainly there exists a measure zero set that when added to itself gives a non-measurable set." -- If I am not mistaken, is this claiming that the closed interval [o,2] is not Lebesgue measurable? – Regenbogen Mar 26 2010 at 23:04
@Regenbogen: I think the measure zero set is going to be a proper subset of the Cantor middle-third set :-) – Kevin Buzzard Mar 26 2010 at 23:07
Regenbogen, I believe that the idea is to pass to subsets to get the counterexample. – Joel David Hamkins Mar 26 2010 at 23:08
What about Borel sets? – John Jiang Mar 27 2010 at 21:07
Here is a construction of two Borel sets whose sum is not Borel: P. Erdõs, A. H. Stone: On the sum of two Borel sets, Proc. Amer. Math. Soc. 25 (1970), 304--306 available here: renyi.hu/~p_erdos/1970-15.pdf – Péter Komjáth Jun 13 2010 at 12:48
I think the sum of 2 Borel sets is analytic, hence measurable.
-
Note that the problem is trivial if you talk about subsets of the plane $\mathbb R\times \mathbb R$. Let $A\subseteq \mathbb R$ be non-measurable, then `$A\times \{0\}$` and `$\{0\}\times \mathbb R$` both have Lebesgue measure 0 in the plane, but their sum $A\times \mathbb R$ is not measurable.
-
I stole this argument from gowers [Borel plus Borel = Borel?](mathoverflow.net/questions/48571/…). (Who probably stole it from Sierpinski, or Euclid.) – Goldstern May 27 2011 at 16:06
http://nrich.maths.org/55
# I'm Eight
##### Stage: 1, 2, 3 and 4 Challenge Level:
When I went into a classroom earlier this week a child rushed up to tell me she was 8 that day!
Well, Happy Birthday to everyone who has a birthday today!
If you are 8 then this could be for you, but if it is another number then you just change the 8 to whatever your age is today.
There is not a lot to say to introduce this challenge. It's really just to find a great variety of ways of asking questions which make $8$.
Things like $6 + 2$, $22 - 14$, etc.
But you need to get examples that use all the different mathematical ideas that you know about.
$1$) So you could show some multiplications and some divisions.
$2$) If you know about fractions then you can add or subtract numbers involving fractions. You could also ask questions like "What is half of $16$?''; "What is four-fifths of 10?'' and so on.
$3$) If you've come across decimals then do a few of those also, perhaps using all the four rules [addition, subtraction, multiplication and division].
And so on.
Use whatever mathematics you know to find as many different ways of getting the answer $8$.
You may find some patterns that would go on for ever and ever. If you do, just put down a few, and then see if you can describe how the pattern works.
So if you're $8$ years old maybe you'll write something like this:
$16 \div 2$, $8 \div 1$, $4 + 4$, $2 + 6$, $9 - 1$, $12 - 4$
$1 + 1 + 1 + 1 + 1 + 1 + 1 + 1, 2 + 2 + 2 + 2$
$15 - 3 - 2 - 1 - 1, 5 + 3 + 6 - 3 - 3$
and so on.
But if you're much much older you may write something like:-
$4 \sin (\pi/2) + \sqrt{5^2 - 3^2}$
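All of the examples above, from the 8-year-old's to the 17-year-old's, can be checked by machine; a small sketch (the two fraction questions are included as $16 \times \frac12$ and $\frac45 \times 10$):

```python
import math

# Each entry should evaluate to 8.
expressions = [
    16 / 2, 8 / 1, 4 + 4, 2 + 6, 9 - 1, 12 - 4,
    1 + 1 + 1 + 1 + 1 + 1 + 1 + 1, 2 + 2 + 2 + 2,
    15 - 3 - 2 - 1 - 1, 5 + 3 + 6 - 3 - 3,
    16 * (1 / 2), (4 / 5) * 10,            # "half of 16", "four-fifths of 10"
    4 * math.sin(math.pi / 2) + math.sqrt(5**2 - 3**2),
]
for value in expressions:
    assert math.isclose(value, 8), value
print(f"all {len(expressions)} expressions make 8")
```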
Whatever your age, and whatever ones you get caught up with, have a look at the ways that you can make new ones that have a similar pattern.
Your "What would happen if ...?'' questions may be a little different from our usual ones.
The 8 year old might ask "I wonder what would happen if I tried to use multiplication and addition to make 8?''
The much older person (17 years old perhaps) may well ask "I wonder what would happen if I used matrices?''
http://unapologetic.wordpress.com/2007/03/22/a-rough-overview/
# The Unapologetic Mathematician
## A rough overview
I’ve had a flood of incoming people in the past couple days, and have even been linked from the article in The New York Times (or at least in their list of blogs commenting on the news). As I said before, their coverage is pretty superficial, and I’ve counted half a dozen errors in their picture captions alone.
One of the main reasons I write this weblog is because I believe anyone can follow the basic ideas of even the most bleeding-edge mathematics. Few mathematicians write towards the generally interested lay audience (“GILA”) the way physicists tend to do, and when mathematics does make it into the popular press the journalists don’t even make the effort they do in physics to get what they do say right.
My uncle, no mathematician he but definitely a GILA member, emailed me to mention he’d read that mathematicians had “solved E8″, but had no idea what it meant. Mostly he was asking if I knew Adams (I do), but I responded with a high-level overview of what they were doing and why. I’m going to post here what I told him. It’s designed to be pretty self-contained, and has been refined from a few days of explaining the ideas to other nonmathematicians.
Oh, and I’m not above link-baiting. If you find this coherent and illuminating, please pass the link to this post around. If there’s something that I’ve horribly screwed up in here, please let me know and I’ll try to smooth it over while keeping it accessible. I’m also trying to explain the ideas at a somewhat higher level (though not in full technicality) within the category “Atlas of Lie Groups”. If you want to know more, please keep watching there.
[UPDATE: I now also have another post trying to answer the "what's it good for?" question. That response starts at the fourth paragraph: "I also want to...".]
I understand not knowing what the news reports mean, because most of them are pretty horrible. It’s possible to give a stripped-down explanation, but the popular press doesn’t seem to want to bother.
A group is a collection of symmetries. A nice one is all the transformations of a square. You can flip it over left-to-right, flip it up-to-down, or rotate it by quarter turns. This group isn’t “simple” because there are smaller groups sitting inside it [yes, it's a bit more than that as readers here should know. --ed] — you could forget the flips and just consider the group of rotations. All groups can be built up from simple groups that have no smaller ones sitting inside them, so those are the ones we really want to understand. Think of it sort of like breaking a number into its prime factors.
The kinds of groups this project is concerned with are called Lie groups (pronounced “lee”) after the Norwegian mathematician Sophus Lie. They’re made up of continuous transformations like rotations of an object in 3-dimensional space. Again, the Lie groups we’re really interested in are the simple ones that can’t be broken down into smaller ones.
A hundred years ago, Élie Cartan and others came up with a classification of all these simple Lie groups. There are four infinite families like rotations in spaces of various dimensions or square matrices of various sizes with determinant 1 (if you remember any matrix algebra). These are called $A_n$, $B_n$, $C_n$, and $D_n$. There are also five extras that don’t fit into those four families, called $G_2$, $F_4$, $E_6$, $E_7$, and $E_8$. That last one is the biggest. It takes three numbers to describe a rotation in 3-D space, but 248 numbers to describe an element of $E_8$.
Classifying the groups is all well and good, but they’re still hard to work with. We want to know how these groups can act as symmetries of various objects. In particular, we want to find ways of assigning a matrix to each element of a group so that if you take two transformations in the group and do them one after the other, the matrix corresponding to that combination is the product of the matrices corresponding to the two transformations. We call this a “matrix representation” of the group. Again, some representations can be broken into simpler pieces, and we’re concerned with the simple ones that can’t be broken down anymore.
What the Atlas project is trying to do is build up a classification of all the simple representations of all the simple Lie groups, and the hardest chunk is $E_8$, which has now been solved.
Posted by John Armstrong | Atlas of Lie Groups
## 7 Comments »
1. I am not a mathematician but still I try to comprehend. Are there 2 or 3 dimensional diagrams of these simple Lie groups?
Comment by eliza | March 24, 2007 | Reply
2. Well, sort of. The lower ones down, at least. For example, B1 is the collection of all rotations of three-dimensional space. In general, Bn is made up of rotations in (2n+1)-dimensional space, and the D series gives rotations in even-dimensional spaces. Unfortunately, due to a technicality, rotations in the plane aren’t considered a simple Lie group.
If you’re thinking of a diagram like the one that ran alongside all the news reports, it’s actually not the Lie group they’re talking about. It’s a sort of tool used in Cartan’s classification called a “root system”, and the picture is a 2-dimensional rendering of an 8-dimensional (for E8) shape. There are pictures like these for all Lie groups, and John Baez has a bunch of them in his most recent column.
Comment by | March 24, 2007 | Reply
3. Can you please dumb it down further and explain any possible practical application for this? Maybe cite something Star Trek or Star Wars, and maybe a reference to a weapon or some cool space ship? Or even cooler, some invading alien force? I understand what you're saying but I have no real frame of reference for it because I am not a mathematician, but I do understand stuff like “fundamental underlying principle to teleportation” or “fundamental underlying principle to big explosions”. Or even “fundamental underlying principle for making space craft fly”.
I just don’t have a frame of reference for this formula or how it matters.
Comment by Kenny Coffin | March 25, 2007 | Reply
4. Kenny, that’s a great question and it deserves its own post. I’m going to mull it over and write it up in the next day or so.
Comment by | March 25, 2007 | Reply
5. Cool! Thanks for the explanation. Seems like the press could certainly have explained that. So can this be applied to computer graphics?
Comment by SomeGuy | March 26, 2007 | Reply
6. I’m actually not sure what this can directly apply to. I just like it ’cause it’s pretty (in an intellectual sense). I’ve just put up another post linking to other sketches by the people directly involved, and my thoughts on why (in a real-world sense) we care about this.
Comment by | March 26, 2007 | Reply
7. [...] I gave a quick overview of the idea of a Lie group, a Lie algebra, and a representation; a rough overview for complete neophytes of what, exactly, had been calculated; and an attempt to explain why we [...]
Pingback by | January 11, 2010 | Reply
http://en.wikipedia.org/wiki/Tree_(graph_theory)
# Tree (graph theory)
Trees
A labeled tree with 6 vertices and 5 edges

| Property | Value |
|---|---|
| Vertices | v |
| Edges | v − 1 |
| Chromatic number | 2 if v > 1 |
In mathematics, more specifically graph theory, a tree is an undirected graph in which any two vertices are connected by exactly one simple path. In other words, any connected graph without simple cycles is a tree. A forest is a disjoint union of trees.
The various kinds of data structures referred to as trees in computer science are equivalent as undirected graphs to trees in graph theory, although such data structures are generally rooted trees, thus in fact being directed graphs, and may also have additional ordering of branches.
The term "tree" was coined in 1857 by the British mathematician Arthur Cayley.[1]
## Definitions
A tree is an undirected simple graph G that satisfies any of the following equivalent conditions:
• G is connected and has no cycles.
• G has no cycles, and a simple cycle is formed if any edge is added to G.
• G is connected, but is not connected if any single edge is removed from G.
• G is connected and the 3-vertex complete graph $K_3$ is not a minor of G.
• Any two vertices in G can be connected by a unique simple path.
If G has finitely many vertices, say n of them, then the above statements are also equivalent to any of the following conditions:
• G is connected and has n − 1 edges.
• G has no simple cycles and has n − 1 edges.
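The finite characterization just above — connected with n − 1 edges — is easy to check programmatically. A minimal sketch (function and example names are mine):

```python
from collections import deque

def is_tree(n, edges):
    """Finite characterization: an undirected graph on vertices 1..n
    is a tree iff it is connected and has exactly n - 1 edges."""
    if n == 0 or len(edges) != n - 1:
        return False
    adj = {v: [] for v in range(1, n + 1)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, queue = {1}, deque([1])   # BFS connectivity test from vertex 1
    while queue:
        for w in adj[queue.popleft()]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == n

# A 6-vertex tree consistent with the article's example (path 2-4-5-6):
assert is_tree(6, [(1, 4), (2, 4), (3, 4), (4, 5), (5, 6)])
# Adding any edge creates a cycle; the edge count alone rules it out:
assert not is_tree(6, [(1, 4), (2, 4), (3, 4), (4, 5), (5, 6), (2, 5)])
```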
As elsewhere in graph theory, the order-zero graph (graph with no vertices) is generally excluded from consideration: while it is vacuously connected as a graph (any two vertices can be connected by a path), it is not 0-connected (or even (−1)-connected) in algebraic topology, unlike non-empty trees, and violates the "one more node than edges" relation.
A leaf is a vertex of degree 1. An internal vertex is a vertex of degree at least 2.
An irreducible (or series-reduced) tree is a tree in which there is no vertex of degree 2.
A forest is an undirected graph, all of whose connected components are trees; in other words, the graph consists of a disjoint union of trees. Equivalently, a forest is an undirected cycle-free graph. As special cases, an empty graph, a single tree, and the discrete graph on a set of vertices (that is, the graph with these vertices that has no edges), all are examples of forests.
The term hedge sometimes refers to an ordered sequence of trees.
A polytree or oriented tree is a directed graph with at most one undirected path between any two vertices. In other words, a polytree is a directed acyclic graph for which there are no undirected cycles either.
A directed tree is a directed graph which would be a tree if the directions on the edges were ignored. Some authors restrict the phrase to the case where the edges are all directed towards a particular vertex, or all directed away from a particular vertex (see arborescence).
A tree is called a rooted tree if one vertex has been designated the root, in which case the edges have a natural orientation, towards or away from the root. The tree-order is the partial ordering on the vertices of a tree with u ≤ v if and only if the unique path from the root to v passes through u. A rooted tree which is a subgraph of some graph G is a normal tree if the ends of every edge in G are comparable in this tree-order whenever those ends are vertices of the tree (Diestel 2005, p. 15). Rooted trees, often with additional structure such as ordering of the neighbors at each vertex, are a key data structure in computer science; see tree data structure. In a context where trees are supposed to have a root, a tree without any designated root is called a free tree.
In a rooted tree, the parent of a vertex is the vertex connected to it on the path to the root; every vertex except the root has a unique parent. A child of a vertex v is a vertex of which v is the parent.
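The parent map and the tree-order can both be computed directly once a root is chosen: orient every edge away from the root by a traversal, then u ≤ v exactly when walking from v up through parents reaches u. A small sketch on a hypothetical tree:

```python
from collections import deque

def parents(adj, root):
    """Orient a free tree away from a chosen root: return {child: parent},
    with the root mapped to None."""
    par = {root: None}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in par:
                par[v] = u
                queue.append(v)
    return par

def leq(par, u, v):
    """Tree-order: u <= v iff the unique root-to-v path passes through u."""
    while v is not None:
        if v == u:
            return True
        v = par[v]
    return False

# Path 1-2-3 with an extra leaf 4 attached to 2, rooted at 1:
adj = {1: [2], 2: [1, 3, 4], 3: [2], 4: [2]}
par = parents(adj, 1)
assert par[2] == 1 and par[3] == 2
assert leq(par, 2, 3)          # 2 lies on the path from the root to 3
assert not leq(par, 3, 4)      # siblings' subtrees are incomparable
```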
A labeled tree is a tree in which each vertex is given a unique label. The vertices of a labeled tree on n vertices are typically given the labels 1, 2, …, n. A recursive tree is a labeled rooted tree where the vertex labels respect the tree order (i.e., if u < v for two vertices u and v, then the label of u is smaller than the label of v).
An n-ary tree is a rooted tree for which each vertex has at most n children. 2-ary trees are sometimes called binary trees, while 3-ary trees are sometimes called ternary trees.
A terminal vertex of a tree is a vertex of degree 1. In a rooted tree, the leaves are all terminal vertices; additionally, the root, if not a leaf itself, is a terminal vertex if it has precisely one child.
### Plane Tree
An ordered tree or plane tree is a rooted tree for which an ordering is specified for the children of each vertex. This is called a "plane tree" because an ordering of the children is equivalent to an embedding of the tree in the plane (up to homotopy through embeddings or ambient isotopy). Given an embedding of a rooted tree in the plane, if one fixes a direction of children (starting from the root, then first child, second child, etc.), say counterclockwise, then the embedding gives an ordering of the children. Conversely, given an ordered tree with the root conventionally drawn at the top, the child nodes can be drawn left-to-right, yielding an essentially unique planar embedding (up to embedded homotopy, i.e., moving the edges and nodes without crossing).
## Example
The example tree shown to the right has 6 vertices and 6 − 1 = 5 edges. The unique simple path connecting the vertices 2 and 6 is 2-4-5-6.
## Facts
• Every tree is a bipartite graph and a median graph. Every tree with only countably many vertices is a planar graph.
• Every connected graph G admits a spanning tree, which is a tree that contains every vertex of G and whose edges are edges of G.
• Every connected graph with only countably many vertices admits a normal spanning tree (Diestel 2005, Prop. 8.2.4).
• There exist connected graphs with uncountably many vertices which do not admit a normal spanning tree (Diestel 2005, Prop. 8.5.2).
• Every finite tree with n vertices, with n > 1, has at least two terminal vertices (leaves). This minimal number of terminal vertices is characteristic of path graphs; the maximal number, n − 1, is attained by star graphs.
• For any three vertices in a tree, the three paths between them have exactly one vertex in common.
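The spanning-tree fact above is constructive: a breadth-first search from any vertex of a connected graph collects exactly n − 1 tree edges. An illustrative sketch (names are mine):

```python
from collections import deque

def spanning_tree(adj, root):
    """Collect the edges of a BFS spanning tree of a connected graph,
    given as an adjacency dict {vertex: [neighbours]}."""
    seen, queue, tree = {root}, deque([root]), []
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                tree.append((u, v))
                queue.append(v)
    return tree

# A 4-cycle with a chord: connected but full of cycles.
adj = {1: [2, 3, 4], 2: [1, 3], 3: [1, 2, 4], 4: [1, 3]}
tree = spanning_tree(adj, 1)
assert len(tree) == len(adj) - 1  # a spanning tree always has n - 1 edges
```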
## Enumeration
### Labeled trees
Cayley's formula states that there are $n^{n-2}$ trees on n labeled vertices. It can be proved by first showing that the number of trees with vertices 1,2,...,n, of degrees $d_1, d_2, \ldots, d_n$ respectively, is the multinomial coefficient
${n-2 \choose d_1-1, d_2-1, \ldots, d_n-1}.$
An alternative proof uses Prüfer sequences.
Cayley's formula is the special case of complete graphs in a more general problem of counting spanning trees in an undirected graph, which is addressed by the matrix tree theorem. The similar problem of counting all the subtrees regardless of size has been shown to be #P-complete in the general case (Jerrum (1994)).
### Unlabeled trees
Counting the number of unlabeled free trees is a harder problem. No closed formula for the number t(n) of trees with n vertices up to graph isomorphism is known. The first few values of t(n) are:
1, 1, 1, 1, 2, 3, 6, 11, 23, 47, 106, 235, 551, 1301, 3159, ... (sequence in OEIS).
Otter (1948) proved the asymptotic estimate:
${t(n) \sim C \alpha^n n^{-5/2} \quad\text{as } n\to\infty,}$
with C = 0.534949606… and α = 2.95576528565… (sequence in OEIS). (Here, $f \sim g$ means that $\lim_{n \to \infty} f/g = 1$.) This is a consequence of his asymptotic estimate for the number $r(n)$ of unlabeled rooted trees with n vertices:
$r(n) \sim D\alpha^n n^{-3/2} \quad\text{as } n\to\infty,$
with D = 0.43992401257… and α the same as above (cf. Knuth (1997), Chap. 2.3.4.4 and Flajolet & Sedgewick (2009), Chap. VII.5).
The first few values of r(n) are:[2]
1, 1, 2, 4, 9, 20, 48, 115, 286, 719, 1842, 4766, 12486, 32973, ...
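Although no closed formula is known, both sequences are easy to compute. A sketch using the classical recurrence for rooted trees and Otter's dissimilarity formula relating free trees to rooted ones (function names are mine):

```python
def rooted_trees(N):
    """a[n] = number of unlabeled rooted trees on n vertices (r(n) above),
    via the classical recurrence
    a(n+1) = (1/n) * sum_{k=1..n} (sum_{d|k} d*a(d)) * a(n-k+1)."""
    a = [0] * (N + 1)
    a[1] = 1
    for n in range(1, N):
        a[n + 1] = sum(
            sum(d * a[d] for d in range(1, k + 1) if k % d == 0) * a[n - k + 1]
            for k in range(1, n + 1)
        ) // n
    return a

def free_trees(N):
    """t[n] = number of unlabeled free trees on n vertices, via Otter's
    formula t(n) = a(n) - (1/2)(sum_{i+j=n} a(i)a(j) - a(n/2)),
    rearranged to stay in integer arithmetic."""
    a = rooted_trees(N)
    t = [0] * (N + 1)
    for n in range(1, N + 1):
        t[n] = a[n] - sum(a[i] * a[n - i] for i in range(1, (n + 1) // 2))
        if n % 2 == 0:
            t[n] -= a[n // 2] * (a[n // 2] - 1) // 2
    return t

# Both match the values quoted in the article.
assert rooted_trees(10)[1:] == [1, 1, 2, 4, 9, 20, 48, 115, 286, 719]
assert free_trees(10)[1:] == [1, 1, 1, 2, 3, 6, 11, 23, 47, 106]
```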
## Types of trees
A star graph is a tree which consists of a single internal vertex (and n − 1 leaves). In other words, a star graph of order n is a tree of order n with as many leaves as possible. Its diameter is at most 2.
A tree with two leaves (the fewest possible) is a path graph; a forest in which all components are isolated nodes and path graphs is called a linear forest. If all vertices in a tree are within distance one of a central path subgraph, then the tree is a caterpillar tree. If all vertices are within distance two of a central path subgraph, then the tree is a lobster.
## References
1. Cayley (1857) "On the theory of the analytical forms called trees," Philosophical Magazine, 4th series, 13 : 172-176.
However, it should be mentioned that in 1847, K.G.C. von Staudt, in his book Geometrie der Lage (Nürnberg, (Germany): Bauer und Raspe, 1847), presented a proof of Euler's polyhedron theorem which relies on trees on pages 20-21. Also in 1847, the German physicist Gustav Kirchhoff investigated electrical circuits and found a relation between the number (n) of wires/resistors (branches), the number (m) of junctions (vertices), and the number (μ) of loops (faces) in the circuit. He proved the relation via an argument relying on trees. See: Kirchhoff, G. R. (1847) "Über die Auflösung der Gleichungen auf welche man bei der Untersuchung der linearen Vertheilung galvanischer Ströme geführt wird" (On the solution of equations to which one is led by the investigation of the linear distribution of galvanic currents), Annalen der Physik und Chemie, 72 (12) : 497-508.
## Further reading
• Diestel, Reinhard (2005), Graph Theory (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-26183-4 .
• Flajolet, Philippe; Sedgewick, Robert (2009), Analytic Combinatorics, Cambridge University Press, ISBN 978-0-521-89806-5
• Knuth, Donald E. (November 14, 1997), The Art of Computer Programming Volume 1: Fundamental Algorithms (3rd ed.), Addison-Wesley Professional
• Jerrum, Mark (1994), "Counting trees in a graph is #P-complete", Information Processing Letters 51 (3): 111–116, doi:10.1016/0020-0190(94)00085-9, ISSN 0020-0190 .
• Otter, Richard (1948), "The Number of Trees", Annals of Mathematics. Second Series 49 (3): 583–599, doi:10.2307/1969046, JSTOR 1969046 .
http://mathoverflow.net/questions/33774/existence-of-zero-cycles-of-degree-one-vs-existence-of-rational-points/49401
## Existence of zero cycles of degree one vs existence of rational points
Let $k$ be a field (I'm mainly interested in the case where $k$ is a number field, however results for other fields would be interesting), and $X$ a smooth projective variety over $k$.
By a zero cycle on $X$ over $k$ I mean a formal sum of finitely many (geometric) points on $X$, which is fixed under the action of the absolute Galois group of $k$. We can define the degree of a zero cycle to be the sum of the multiplicities of the points.
Now, if $X$ contains a $k$-rational point then it is clear that $X$ contains a zero cycle of degree one over $k$.
What is known in general about the converse? That is, which classes of varieties are known to satisfy the property that the existence of a zero cycle of degree one over $k$ implies the existence of a $k$-rational point? For example what about rational varieties and abelian varieties?
As motivation I shall briefly mention that the case of curves is easy. Since here zero cycles are the same as divisors, we can use Riemann-Roch to show that the converse result holds if the genus of the curve is zero or one, and there are plenty of counter-examples for curves of higher genus. However, in higher dimensions this kind of cohomological argument seems to fail, as we don't (to my knowledge) have such tools available to us.
## 5 Answers
There has been a lot of work on this problem, although nothing like a general answer is known. By way of abbreviation, the index of a nonsingular projective variety is the least positive degree of a $k$-rational zero cycle, so you are asking about the relationship between index one and having a $k$-rational point.
First, you ask whether rational varieties and abelian varieties with index one must have a rational point. Here you probably mean $k$-forms of such things: i.e., geometrically rational varieties and torsors under abelian varieties. (Both rational varieties and abelian varieties have rational points, the latter by definition, the former e.g. by the theorem of Lang-Nishimura which says that having rational points is a birational invariant of a nonsingular projective variety.) I can answer this:
1) A torsor under an abelian variety has index one iff it has a rational point. This follows from the cohomological interpretation of torsors as elements of $H^1(k,A)$.
2) A geometrically rational surface of index one need not have a rational point: this is a theorem of Colliot-Thelene and Coray. (A reference appears in the link below.)
On to the general question. A very nice recent paper which proves a big result of this type and gives useful bibliographic information about other results is Parimala's 2005 paper on homogeneous varieties:
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.ajm/1144070587
Finally, there are some fields $k$ for which every geometrically irreducible projective variety has index one -- most notably finite fields. In this case any variety without a rational point over such a field gives a counterexample to "index one implies rational point". For instance, for any finite field $\mathbb{F}_q$ and all sufficiently large $g$, one can easily write down a hyperelliptic curve over $\mathbb{F}_q$ of genus $g$ without rational points. [N.B.: What I had written before was too strong: if instead you fix $g$ and let $q$ be sufficiently large, then by the Weil bounds you must have a rational point.] There are also K3 surfaces over finite fields without rational points, and so forth.
Some further discussion of fields over which every (geometrically irreducible) variety has index one occurs in the appendix of a recent paper of mine:
http://math.uga.edu/~pete/trans.pdf
There are many more results than the ones I've mentioned so far. If you have further questions, please don't hesitate to ask!
Thanks very much for the answer! And you are right that I meant $k$-forms, sorry for not making this clear in my question. I'll go away and have a read of some of the references you suggested. – Daniel Loughran Jul 29 2010 at 13:45
@DL: You're quite welcome. Also, in your defense, many people say "rational variety" when they mean "geometrically rational variety". For instance, Colliot-Thelene has a famous, long running seminar on "rational varieties". – Pete L. Clark Jul 29 2010 at 16:38
On the positive side, there are many cases where it is known that existence of a degree 1 zero cycle implies existence of a rational point. For a projective homogeneous variety over the function field of a surface (over an algebraically closed field) whose Picard group is a trivial Galois module (e.g., if the Picard group is rank 1), this was proved by de Jong, Xuhua He and myself following a suggestion of Ph. Gille. For homogeneous varieties over more general fields of cohomological dimension 2, the question whether degree 1 zero cycles imply rational points is closely related to a conjecture of Serre, cf. Theorem 3.8 of the following article of Borovoi, Colliot-Thélène and Skorobogatov. "The elementary obstruction and homogeneous spaces", Duke Math. J. 141 (2008) 321-364. http://www.math.u-psud.fr/~colliot/BoCTSk21jan08.pdf
Technically the connection above is between existence of rational points and vanishing of the "elementary obstruction". Existence of a degree 1 zero cycle implies vanishing of the "elementary obstruction". Merkurjev-Suslin proved that for every (perfect) field of cohomological dimension > 2, there exist principal homogeneous varieties for groups of type SL_n which have vanishing elementary obstruction but which have no rational point.
Ahh thanks I was already aware of the elementry obstruction, however only that it was to do with the existence of universal torsors, it's nice to know that it is related to this as well! – Daniel Loughran Jul 30 2010 at 11:40
In addition to Jason's answer, I mention the following result, which I found was not known to experts (except Jason).
Theorem. Let $X$ be a homogeneous space of a connected linear algebraic group $G$ over a field $k$, with connected geometric stabilizers. Assume that $X$ has a zero cycle of degree 1. If $k$ is either a $p$-adic field or a number field, then $X$ has a $k$-point.
I give a proof based on Jason's observation (actually the case of a $p$-adic field is contained in his answer) and use the paper by Borovoi, Colliot-Thélène and Skorobogatov [BCS] that Jason cites.
Proof. If $X$ has a zero cycle of degree 1, then the elementary obstruction for $X$ is 0. If $k$ is a $p$-adic field, then by [BCS], Thm. 3.3, $X$ has a $k$-point. If $k$ is a number field, then for any real place $v$ of $k$, $X$ has a zero cycle of degree 1 over $k_v$, hence $X$ has a $k_v$-point (because $k_v$ is isomorphic to $\mathbf{R}$), and by [BCS], Thm. 3.10, $X$ has a $k$-point.
Another proof of this theorem was recently obtained by Cyril Demarche and Liang Yongqi.
Note that both assumptions of the theorem, namely that geometric stabilizers are connected and that the base field $k$ is either a $p$-adic field or a number field, are important.
Mathieu Florence in the paper Zéro-cycles de degré un sur les espaces homogènes, Int. Math. Res. Not. 2004, no. 54, 2897–2914, http://alg-geo.epfl.ch/~florence/esphomog.pdf, constructed homogeneous spaces $X$ over $p$-adic and number fields with non-connected (finite) geometric stabilizers, such that $X$ has a zero cycle of degree 1, but neither $X$ nor any smooth compactification of $X$ has rational points.
Parimala in the paper Homogeneous varieties — zero cycles of degree one versus rational points, Asian J. Math. 9 (2005), 251–256, see the link in Artie's answer, constructed a projective homogeneous space $X$ (hence with connected geometric stabilizers) over the Laurent series field over a $p$-adic field, such that again $X$ has a zero cycle of degree 1, but no rational points.
Note that Jodi Black http://arxiv.org/abs/1010.1582 recently proved that if a principal homogeneous space $X$ of a connected linear group $G$ over a field $k$ of virtual cohomological dimension $\le 2$ has a zero cycle of degree 1, and $G$ satisfies the Hasse principle, then $X$ has a $k$-point.
Even for projective homogeneous varieties the converse is not true in general, as shown in http://www.mathcs.emory.edu/~parimala/homogeneous.pdf.
On the positive side (with no restriction on the field), I don't think anyone has mentioned quadrics. For a smooth quadric, the existence of a 0-cycle of degree one is equivalent to the existence of a point with odd degree. This implies the existence of a rational point, by a theorem of Springer.
http://www.citizendia.org/Elliptic_cylindrical_coordinates
Coordinate surfaces of elliptic cylindrical coordinates. The yellow sheet is the prism of a half-hyperbola corresponding to ν = −45°, whereas the red tube is an elliptical prism corresponding to μ = 1. The blue sheet corresponds to z = 1. The three surfaces intersect at the point P (shown as a black sphere) with Cartesian coordinates roughly (2.182, −1.661, 1.0). The foci of the ellipse and hyperbola lie at x = ±2.0.
Elliptic cylindrical coordinates are a three-dimensional orthogonal coordinate system that results from projecting the two-dimensional elliptic coordinate system in the perpendicular z-direction. Hence, the coordinate surfaces are prisms of confocal ellipses and hyperbolae. The two foci F1 and F2 are generally taken to be fixed at −a and +a, respectively, on the x-axis of the Cartesian coordinate system.
## Basic definition
The most common definition of elliptic cylindrical coordinates (μ,ν,z) is
$x = a \ \cosh \mu \ \cos \nu$
$y = a \ \sinh \mu \ \sin \nu$
$z = z\!$
where μ is a nonnegative real number and $\nu \in [0, 2\pi)$.
These definitions correspond to ellipses and hyperbolae. The trigonometric identity
$\frac{x^{2}}{a^{2} \cosh^{2} \mu} + \frac{y^{2}}{a^{2} \sinh^{2} \mu} = \cos^{2} \nu + \sin^{2} \nu = 1$
shows that curves of constant μ form ellipses, whereas the hyperbolic trigonometric identity
$\frac{x^{2}}{a^{2} \cos^{2} \nu} - \frac{y^{2}}{a^{2} \sin^{2} \nu} = \cosh^{2} \mu - \sinh^{2} \mu = 1$
shows that curves of constant ν form hyperbolae.
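These defining equations are easy to sanity-check numerically. The sketch below (variable names are mine; the foci are placed at x = ±2 as in the figure) confirms that points of constant μ satisfy the ellipse identity above:

```python
import math

A = 2.0  # focal parameter a; the foci sit at x = -a and x = +a

def elliptic_to_cartesian(mu, nu, z, a=A):
    """The defining map (mu, nu, z) -> (x, y, z)."""
    x = a * math.cosh(mu) * math.cos(nu)
    y = a * math.sinh(mu) * math.sin(nu)
    return x, y, z

# A curve of constant mu is an ellipse:
#   x^2 / (a cosh mu)^2 + y^2 / (a sinh mu)^2 = 1   for every nu.
mu = 1.0
for nu in (0.3, 1.2, 2.5, 4.0):
    x, y, _ = elliptic_to_cartesian(mu, nu, 0.0)
    lhs = (x / (A * math.cosh(mu))) ** 2 + (y / (A * math.sinh(mu))) ** 2
    assert abs(lhs - 1.0) < 1e-12
```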
## Scale factors
The scale factors for the elliptic cylindrical coordinates μ and ν are equal
$h_{\mu} = h_{\nu} = a\sqrt{\sinh^{2}\mu + \sin^{2}\nu}$
whereas the remaining scale factor is $h_{z} = 1$. Consequently, an infinitesimal volume element equals
$dV = a^{2} \left( \sinh^{2}\mu + \sin^{2}\nu \right) d\mu d\nu dz$
and the Laplacian equals
$\nabla^{2} \Phi = \frac{1}{a^{2} \left( \sinh^{2}\mu + \sin^{2}\nu \right)} \left( \frac{\partial^{2} \Phi}{\partial \mu^{2}} + \frac{\partial^{2} \Phi}{\partial \nu^{2}} \right) + \frac{\partial^{2} \Phi}{\partial z^{2}}$
Other differential operators such as $\nabla \cdot \mathbf{F}$ and $\nabla \times \mathbf{F}$ can be expressed in the coordinates (μ,ν,z) by substituting the scale factors into the general formulae found in orthogonal coordinates.
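The scale-factor formula itself can be verified numerically: $h_\mu = |\partial\mathbf{r}/\partial\mu|$ and $h_\nu = |\partial\mathbf{r}/\partial\nu|$, estimated by central differences, should both equal $a\sqrt{\sinh^2\mu + \sin^2\nu}$. A small sketch (my own, with a = 2):

```python
import math

A = 2.0  # focal parameter a

def pos(mu, nu, a=A):
    """Cartesian (x, y) for elliptic cylindrical (mu, nu)."""
    return (a * math.cosh(mu) * math.cos(nu),
            a * math.sinh(mu) * math.sin(nu))

def h_formula(mu, nu, a=A):
    """The claimed common scale factor h_mu = h_nu."""
    return a * math.sqrt(math.sinh(mu) ** 2 + math.sin(nu) ** 2)

mu, nu, eps = 0.8, 1.1, 1e-6
# h_mu = |dr/dmu|, estimated with a central difference:
x1, y1 = pos(mu + eps, nu)
x0, y0 = pos(mu - eps, nu)
h_mu = math.hypot(x1 - x0, y1 - y0) / (2 * eps)
# h_nu = |dr/dnu|:
x1, y1 = pos(mu, nu + eps)
x0, y0 = pos(mu, nu - eps)
h_nu = math.hypot(x1 - x0, y1 - y0) / (2 * eps)
assert abs(h_mu - h_formula(mu, nu)) < 1e-5
assert abs(h_nu - h_formula(mu, nu)) < 1e-5
```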
## Alternative definition
An alternative and geometrically intuitive set of elliptic coordinates (σ,τ,z) is sometimes used, where σ = cosh μ and τ = cos ν. Hence, the curves of constant σ are ellipses, whereas the curves of constant τ are hyperbolae. The coordinate τ must belong to the interval [−1, 1], whereas the σ coordinate must be greater than or equal to one.
The coordinates (σ,τ,z) have a simple relation to the distances to the foci F1 and F2. For any point in the (x,y) plane, the sum d1 + d2 of its distances to the foci equals 2aσ, whereas their difference d1 − d2 equals 2aτ. Thus, the distance to F1 is a(σ + τ), whereas the distance to F2 is a(σ − τ). (Recall that F1 and F2 are located at x = −a and x = +a, respectively.)
A drawback of these coordinates is that they do not have a 1-to-1 transformation to the Cartesian coordinates
$x = a\sigma\tau \!$
$y^{2} = a^{2} \left( \sigma^{2} - 1 \right) \left(1 - \tau^{2} \right)$
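The focal-distance relations stated above are also easy to confirm numerically. A small sketch (my own, assuming foci at x = ±a with a = 2):

```python
import math

A = 2.0  # foci F1 = (-a, 0) and F2 = (+a, 0), with a = 2

mu, nu = 0.9, 0.6
sigma, tau = math.cosh(mu), math.cos(nu)   # alternative coordinates
x = A * sigma * tau
y = A * math.sinh(mu) * math.sin(nu)

d1 = math.hypot(x + A, y)  # distance to F1
d2 = math.hypot(x - A, y)  # distance to F2

assert abs((d1 + d2) - 2 * A * sigma) < 1e-9  # sum of focal distances = 2*a*sigma
assert abs((d1 - d2) - 2 * A * tau) < 1e-9    # difference = 2*a*tau
assert abs(d1 - A * (sigma + tau)) < 1e-9     # distance to F1 = a*(sigma + tau)
assert abs(d2 - A * (sigma - tau)) < 1e-9     # distance to F2 = a*(sigma - tau)
```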
## Alternative scale factors
The scale factors for the alternative elliptic coordinates (σ,τ,z) are
$h_{\sigma} = a\sqrt{\frac{\sigma^{2} - \tau^{2}}{\sigma^{2} - 1}}$
$h_{\tau} = a\sqrt{\frac{\sigma^{2} - \tau^{2}}{1 - \tau^{2}}}$
and, of course, $h_{z} = 1$. Hence, the infinitesimal volume element becomes
$dV = a^{2} \frac{\sigma^{2} - \tau^{2}}{\sqrt{\left( \sigma^{2} - 1 \right) \left( 1 - \tau^{2} \right)}} d\sigma d\tau dz$
and the Laplacian equals
$\nabla^{2} \Phi = \frac{1}{a^{2} \left( \sigma^{2} - \tau^{2} \right) }\left[\sqrt{\sigma^{2} - 1} \frac{\partial}{\partial \sigma} \left( \sqrt{\sigma^{2} - 1} \frac{\partial \Phi}{\partial \sigma} \right) + \sqrt{1 - \tau^{2}} \frac{\partial}{\partial \tau} \left( \sqrt{1 - \tau^{2}} \frac{\partial \Phi}{\partial \tau} \right)\right] + \frac{\partial^{2} \Phi}{\partial z^{2}}$
Other differential operators such as $\nabla \cdot \mathbf{F}$ and $\nabla \times \mathbf{F}$ can be expressed in the coordinates (σ,τ) by substituting the scale factors into the general formulae found in orthogonal coordinates.
## Applications
The classic applications of elliptic cylindrical coordinates are in solving partial differential equations, e.g., Laplace's equation or the Helmholtz equation, for which elliptic cylindrical coordinates allow a separation of variables. A typical example would be the electric field surrounding a flat conducting plate of width 2a.
The three-dimensional wave equation, when expressed in elliptic cylindrical coordinates, may be solved by separation of variables, leading to the Mathieu differential equations.
The geometric properties of elliptic coordinates can also be useful. A typical example might involve an integration over all pairs of vectors $\mathbf{p}$ and $\mathbf{q}$ that sum to a fixed vector $\mathbf{r} = \mathbf{p} + \mathbf{q}$, where the integrand is a function of the vector lengths $\left| \mathbf{p} \right|$ and $\left| \mathbf{q} \right|$. (In such a case, one would position $\mathbf{r}$ between the two foci and aligned with the x-axis, i.e., $\mathbf{r} = 2a \mathbf{\hat{x}}$.) For concreteness, $\mathbf{r}$, $\mathbf{p}$ and $\mathbf{q}$ could represent the momenta of a particle and its decomposition products, respectively, and the integrand might involve the kinetic energies of the products (which are proportional to the squared lengths of the momenta).
## See also
• Orthogonal coordinates
• Two dimensional orthogonal coordinate systems
• Three dimensional orthogonal coordinate systems
• Elliptic cylindrical coordinates
• Toroidal coordinates
• Bispherical coordinates
• Bipolar cylindrical coordinates
• Conical coordinates
• Flat-Ring cyclide coordinates
• Flat-Disk cyclide coordinates
• Bi-cyclide coordinates
• Cap-cyclide coordinates
## Bibliography
• Morse PM, Feshbach H (1953). Methods of Theoretical Physics, Part I. New York: McGraw-Hill, p. 657. ISBN 0-07-043316-X, LCCN 52-11515.
• Margenau H, Murphy GM (1956). The Mathematics of Physics and Chemistry. New York: D. van Nostrand, pp. 182–183. LCCN 55-10911.
• Korn GA, Korn TM (1961). Mathematical Handbook for Scientists and Engineers. New York: McGraw-Hill, p. 179. LCCN 59-14456, ASIN B0000CKZX7.
• Sauer R, Szabó I (1967). Mathematische Hilfsmittel des Ingenieurs. New York: Springer Verlag, p. 97. LCCN 67-25285.
• Zwillinger D (1992). Handbook of Integration. Boston, MA: Jones and Bartlett, p. 114. ISBN 0-86720-293-9. Same as Morse & Feshbach (1953), substituting uk for ξk.
• Moon P, Spencer DE (1988). "Elliptic-Cylinder Coordinates (η, ψ, z)", Field Theory Handbook, Including Coordinate Systems, Differential Equations, and Their Solutions, corrected 2nd ed., 3rd print ed., New York: Springer-Verlag, pp. 17–20 (Table 1.03). ISBN 978-0387184302.
1. ## Heaviside Function
The Heaviside Function, H(t) is defined as
H(t) =
1 if t>=0
0 if t<0
It is used in the study of electric circuits to represent the sudden surge of electric current, or voltage, when a switch is instantaneously turned on.
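In code, the definition is just a threshold; a quick sketch (using NumPy for convenience):

```python
import numpy as np

def H(t):
    """Heaviside step: 1 for t >= 0, 0 for t < 0 (vectorized)."""
    return np.where(np.asarray(t) >= 0, 1, 0)

# A switch turned on at t = 2 seconds gives a voltage V0 * H(t - 2):
t = np.array([-1.0, 0.0, 1.9, 2.0, 3.0])
print(H(t - 2).tolist())  # [0, 0, 0, 1, 1]
```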
My student asked me the following question,
"since time cannot be negative, then why the function is defined for negative values of t?"
What is the best explanation? is it because usually we want to have a function that is defined for all real numbers?
Thanks!
2.
Analogy:
The graph of the function basically turns "on" (H(t) = 1, i.e. the signal switches on) when the switch is turned on, and that is when the surge starts. While the switch is off you can regard the time as negative, up until the moment you turn it on; at that instant t = 0 and H(0) = 1, while H(t) = 0 for t < 0 (signal off). Time itself doesn't run backwards; t is simply measured relative to the moment you flip the switch, so for generality t is allowed to range over all real numbers and the function is defined for all of them.
Another analogy:
Un-processing the signal, i.e. reversing time (going from right to left on a graph of H(t)), basically undoes whatever we did to it: the function had the value 1 for t >= 0, and going back in time "undoes what happened", bringing it back to 0 for t < 0.
Besides, it is simply easier to have a function defined for all real numbers.
3. No, there must be a better explanation, because if you remove negative values of $t$, then your function becomes $H(t) = 1$, which is a bit useless (although in my opinion it was useless before anyway, no offense).
Maybe you could also tell him that the input is not necessarily time (I'm not sure about that), and hence the necessity of defining the value for negative inputs ...
You could also tell him that it looks nicer and is easier when defined on all reals
Maybe some research could be interesting here ?
4. Originally Posted by Bacterius
No, there must be a better explanation, because if you remove negative values of $t$, then your function becomes $H(t) = 1$, which is a bit useless (although in my opinion it was useless before anyway, no offense).
Maybe you could also tell him that the input is not necessarily time (I'm not sure about that), and hence the necessity of defining the value for negative inputs ...
You could also tell him that it looks nicer and is easier when defined on all reals
Maybe some research could be interesting here ?
yeah i was wrong the first time but after some research im positive im right now.
5. Originally Posted by purebladeknight
yeah i was wrong the first time but after some research im positive im right now.
Actually I wasn't answering you when I said that there must be a better explanation (I posted right after you), sorry for the confusion
6. Originally Posted by Bacterius
Actually I wasn't answering you when I said that there must be a better explanation (I posted right after you), sorry for the confusion
haha, no hurt there. it was good, it made me research and made me realise i was wrong the first time
7. 1) The Heaviside function, though used in physics and electrical engineering, as well as other applications, is mathematics, not physics, and knows nothing about "time". Its variable is simply a number and, as far as the function is concerned, it can be any number.
2) It is simply not true, even in physics and electrical engineering, that "time cannot be negative". I cannot conceive where your student got that idea or why you did not immediately correct him. Time is a coordinate, not an actual quantity (like mass). That is, it is something we impose in order to be able to measure. The only thing that is important is the difference between two times, which can be either positive or negative. If I choose to take "t = 0" at this instant, then anything before has negative time.
# Calibrating a household survey to household-level and person-level control totals
Imagine a (reasonably large) household survey where all persons in every household have been questioned. For the purpose of microsimulation, this survey needs to be expanded to a full population. In a first step, weights are attached to each observation so that external control totals are obeyed (calibration).
If we only have control totals that describe how many households of this-and-that type are in a zone, we can use IPF (also known as raking) which gives a maximum-likelihood estimate of the weights. Minimizing the relative entropy is equivalent to raking/IPF. EDIT: But what if we have control totals at person and household level? Like, telling us how many households of which type and how many persons of which sex/age/education level/... there are. I was unable to find a "standard" approach here.
Is raking/IPF the "correct" approach from a statistical point of view? Are there other options? What would be, from a statistical point of view, the most reasonable approach to calibrate the weights in the presence of control totals at household and person level?
See the original question for more context. (It was probably too big, I'm splitting it into parts.)
I'm not sure I understand (even though I've skimmed the original question), but I'm interested. This is the bread and butter of official statistics - using surveys to report on population totals for unemployment, profitability, tourism average spend, whatever - unless I've misunderstood things. What is missing from all the standard approaches to 'weighting-to-population' through stratification, post-stratification weighting, etc.? Basically, I don't understand your weighting problem - a few words of clarification might help. – Peter Ellis Mar 14 '12 at 11:22
@PeterEllis: Post-stratification weighting seems similar, but I'm interested specifically in the multilevel case here. An important part of my question was missing, I have updated it. – krlmlr Mar 14 '12 at 13:36
## 3 Answers
This is a straightforward problem for weighting to population from a two-stage sampling process. Your population of interest is individuals, but your primary sampling unit is the household. You happen to sample all the individuals within each PSU.
Any software that deals with complex surveys (e.g. Thomas Lumley's survey package in R, which also has an excellent companion book) can calculate the appropriate weights for you, given the population totals it sounds like you have. Rather than try to explain it all here, hopefully the tip that this is a two-stage sampling process with household as PSU means you can find the definitive explanation of all the issues (and there are lots) in some such book.
It is not so much a question of raking - raking is a particular technique for giving you post-stratification weights, which sometimes is easier (less arbitrary decisions for the analyst) than other ways of calculating post-stratification weights that require exact matches of each combination of subject in your sample to the population (raking just matches the marginal totals of each variable, not each combination of each variable).
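To make the raking point concrete, here is a minimal IPF sketch (illustrative Python, not the survey-package implementation): it matches the marginal totals of each variable exactly, while totals for combinations of variables are only approximate.

```python
import numpy as np

def ipf(seed, row_targets, col_targets, iters=100):
    """Iterative proportional fitting (raking) of a 2-D seed table
    to known row and column marginal totals."""
    t = seed.astype(float).copy()
    for _ in range(iters):
        t *= (row_targets / t.sum(axis=1))[:, None]   # match row margins
        t *= (col_targets / t.sum(axis=0))[None, :]   # match column margins
    return t

seed = np.array([[10., 20.], [30., 40.]])             # e.g. sample counts
fitted = ipf(seed, row_targets=np.array([40., 60.]),  # e.g. sex totals
             col_targets=np.array([55., 45.]))        # e.g. age totals
print(np.allclose(fitted.sum(axis=1), [40., 60.]))    # True
print(np.allclose(fitted.sum(axis=0), [55., 45.]))    # True
```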
Thanks for the hints. Do you have a special book in mind that deals with two-stage sampling? -- In fact, the population totals may well be multidimensional, e.g., counts for each combination of age, sex and education level. Is this what you mean by your last paragraph? What are the alternatives to raking? – krlmlr Mar 15 '12 at 0:59
So, raking does not apply for post-stratification weighting on attribute combinations? – krlmlr Mar 15 '12 at 1:34
It gives a set of weights that gives the exactly correct marginal totals (eg the two total weights for female, and for 20-29 year olds, will be exactly the population totals) but only approximately correct totals for combinations (eg the weights for female 20-29 year olds will only approximately match the population for that combination). Common reasons for using it are a) you don't know the population totals for every combination or b) you have small sample sizes for many of the combinations and want to avoid complex decision making about which combinations to collapse together. – Peter Ellis Mar 15 '12 at 3:14
Reweighting with some criteria defined at household level and others at individual level can be achieved with calibration estimators (proposed by Deville, Särndal and Sautory, JASA, 1993). These procedures are sometimes referred to as CALMAR. There is an implementation in the R Survey package (grake).
Suppose we have a vector of design weights $\boldsymbol{d}$ for the $n$ households and an $n \times p$ design matrix $\boldsymbol{X}$ whose columns refer to attributes of the households. We now want to obtain new weights $\boldsymbol{w}$ such that when we project the design matrix according to these weights (i.e. $\boldsymbol{X}^T\boldsymbol{w}$) we reproduce a ($p \times 1$) vector $\boldsymbol{y}$ containing known universe totals. Generalized raking will find weights $\boldsymbol{w}$ which are in some sense close to the original design weights $\boldsymbol{d}$.
There is full freedom in how we set up the design matrix. Some criteria may refer household categories (calibrate to a known household count), some criteria may refer to numeric properties of the household (calibrate to a known total). A special case of the latter is that some criteria refer to the number of persons of some type in the household. An example may clarify.
Suppose we want to calibrate according to (1) the total number of households, (2-4) the total number of households in each of 3 regions (East, Center, West), (5-6) the total number of 1-person and 2+-person households, (7) the total number of privately owned cars, (8-9) the number of male and female persons, and (10-13) the number of persons aged under 18, 19-40, 40-60, and 60+.
A household living in the center, having 2 cars, has 5 members, of which 3 are male and 2 are female, and has 3 children (-18yr) and 2 adults (19-40) will be encoded in the design matrix as (1 0 1 0 0 1 2 3 2 3 2 0 0). When setting up the calibration targets, the first 6 elements of $\boldsymbol{y}$ will contain universe household counts, the next element will contain a universe car count, the remaining 6 elements will contain universe person counts.
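That encoding, and the constraint the calibration must satisfy, can be sketched as follows (Python, with illustrative column names, second household and weights):

```python
import numpy as np

# Columns of the design matrix, in the order of the example above:
cols = ["hh_total", "east", "center", "west", "size_1", "size_2plus",
        "cars", "male", "female", "age_0_18", "age_19_40", "age_40_60",
        "age_60plus"]

# The example household: Center region, 2 cars, 5 members
# (3 male, 2 female; 3 children under 18, 2 adults aged 19-40).
x = np.array([1, 0, 1, 0, 0, 1, 2, 3, 2, 3, 2, 0, 0])

# Add a second (made-up) household: single male aged 40-60 in the East.
X = np.array([x,
              [1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0]])

# Calibration finds weights w close to the design weights d such that
# X.T @ w reproduces the universe totals y. With hypothetical weights:
w = np.array([120.0, 80.0])
totals = X.T @ w              # weighted total for each criterion
print(totals[0])              # 200.0 -> weighted household count
```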
I have a similar problem. If you use the statistical software R, you need the survey package. It seems that statisticians use the term "raking" for IPF. There is a rake function in the survey package.
This will result in non-integer weights for each record. I presume this can feed your population synthesiser?
Thank you. I will edit the question so that it mentions raking, and also to clarify. You are right about the general idea. However, I asked specifically about multilevel raking/fitting/... algorithms and methods; the single level case has been chewed through already. -- I'll take a look at the `survey` package. – krlmlr Mar 14 '12 at 9:03
Tag: planet sipb
# Some useful approximations
As much as I hate to admit it, mathematicians tend to deal with approximations. A lot of the time, formulas are just too complicated to work with in full, and you have to simplify. So here are some handy approximations, as well as roughly where they're valid.
• 1 + 1/2 + 1/3 + ⋯ + 1/n ≈ ln(n) + γ, where γ ≈ 0.5772 is the Euler-Mascheroni constant
• π(x) ≈ x/ln(x), where π(x) is the number of primes less than x
• p_n ≈ n·ln(n), where p_n is the nth prime number
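As a quick sanity check of the prime-counting approximation π(x) ≈ x/ln(x), in Python:

```python
import math

def prime_count(x):
    """Count primes below x with a sieve of Eratosthenes (fine for small x)."""
    sieve = [True] * x
    sieve[0:2] = [False, False]
    for i in range(2, int(x**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return sum(sieve)

x = 100000
approx = x / math.log(x)
print(prime_count(x))   # 9592
print(round(approx))    # 8686 -- right order of magnitude, ~10% low
```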
What do you think every mathematician should know?
No Comments December 1, 2010 /
Tagged: approximations, planet sipb
# Facebook’s privacy settings: another look
Much has been made in the news recently about Facebook’s relative lack of privacy controls, and the degree to which they’re hidden and made unintuitive to use. Naturally, people have been speculating about why they do this. The intuitive answer, and the one that I’ve heard a lot of people claim, is that it allows them to sell data to advertisers; the reasoning goes that they can’t sell your data to third parties and make money off of you if your profile settings are all set to private. But as far as I know this isn’t the case; I’ve got all my Facebook data pretty locked down; the only information about me if you’re not my friend is my name and a picture of me, which I figure is enough to allow people I know to friend me while being certain that they’re getting the right me. Yet I still get ads that I know are targeted to me because they mention my college, my location, etc.
Instead, I suspect that the real money that Facebook makes off of people who don’t have their information private is off advertising impressions. A lot of people, when they want to find more about someone, will immediately check Facebook to see if they have a profile, and if they do, they’ll spend a while ‘stalking’ them on it. If the ‘stalkee’ has their information private and the stalker isn’t friends with them, then they have to send a friend request, which means that most of the time they’ll back off. But if the stalkee’s information is public, then the stalker can spend large amounts of time looking at their information. Which means large amounts of pageviews, and large amounts of advertisements being displayed to them, which means more money for Facebook. And I think that that’s one point that a lot of people miss when they write about Facebook; not only do they want you to put all of your information on there so they can sell it, they want you to put your information on there so that other people will spend time on their site looking at your information.
1 Comment November 22, 2010 /
# Perl will never go away, ever
Perl was one of the first languages that I ever learned and actually truly did things with; it was the first language I ever wrote a nontrivial program in (a DES implementation that I have unfortunately lost the source code to, or else I would post it). The first language I ever wrote a program in at all was BASIC, though I seem to have blocked all memory of the program itself, probably for the better. So I have a bit of a soft spot for Perl, and I still have some of my bad habits; since I didn't `use strict` or `-w`, my code would likely be full of uninitialized variables and barewords. It's a bad habit, and to this day I still have to be reminded occasionally that other languages, such as Python, do require variables to be declared.
But Perl is old now, and I’ve mostly moved on to other languages, like Python. I like the object-orientation, the support for functional paradigms and other nice things like list comprehensions and lambda functions. I like not having to sigil all of my variables with \$ or @ or %, I like being able to supply keyword arguments to my functions so that I don’t have to remember which weird order I decided to use, I like the sheer amount of fun things that you can do with object orientation combined with reflection, metaprogramming, and everything being a first-class object. And yet, I still think it’ll stick around for a while.
Why do I say that? Simple. I was talking with someone who had left in the middle of an online IRC-based role-playing game, and they had asked for chatlogs of what had happened after they left. I had them, since I run weechat in tmux (like irssi in screen, but better!) and so am in every IRC channel I’m in 24/7. But the question was: how could I pull out just the lines that were said when he left? And the answer was Perl. It turns out that the .. operator, which in a for loop or other situations where a list is expected produces a range (so (1..9) as a list produces the list (1,2,3,4,5,6,7,8,9)), does something completely different in a scalar context, like in the conditional of an if statement. Take the statement `print if (/Person.*has quit/ .. /Person.*has joined/)`. Each time this statement is run, the conditional will evaluate to false, until the left-hand side evaluates to true. Then it’ll start evaluating to true, until the right-hand side evaluates to false, and then it’ll stop being true (but it’ll still be true until it’s evaluated again!), etc., etc. So if this is in an implicit while loop running through the lines of a file, it’ll start printing when it sees a line saying Person has quit, including that line, then stop when they rejoin, but still print that line, and then it’ll keep going until it sees another quit line, etc. And the best part is, if you call perl with -n, you automatically get a while loop that assigns the current line of the file it’s reading from to \$_, the implicit variable in the matching and print.
If I wanted to do that in something like Python, I’d have to manually set up the read loop, write a function to trawl through, build up regexp objects to match on, etc. And that’s fine for a piece of code I intend to maintain. But for a quick one-line script like this? Too much effort. All I need is perl -ne.
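For contrast, here is roughly what that flip-flop logic looks like spelled out in Python (a sketch, not a full equivalent of `perl -ne`):

```python
import re

def between(lines, start_pat, end_pat):
    """Python sketch of Perl's scalar-context `..` (flip-flop) operator:
    yield lines from a match of start_pat through a match of end_pat,
    inclusive, restarting whenever start_pat matches again."""
    inside = False
    for line in lines:
        if not inside and re.search(start_pat, line):
            inside = True
        if inside:
            yield line
            if re.search(end_pat, line):
                inside = False

log = ["<a> hi", "Person has quit", "<b> talk", "Person has joined", "<a> back"]
print(list(between(log, r"Person.*has quit", r"Person.*has joined")))
# ['Person has quit', '<b> talk', 'Person has joined']
```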
4 Comments May 20, 2010 / Posted in: Computers, Programming
Tagged: perl, planet sipb
# Security vulnerability in Haskell with CGI
Compiled Haskell programs all include special RTS (Run Time System) options, that change things like the number of cores that it runs on, various internal things relating to how often garbage collection runs, etc. They’re specified by invoking the program like ./foo +RTS -m10 -k2000 -RTS to run the GHC-compiled program ‘foo’, reserving 10% of the heap for allocation and setting each thread’s stack size to a maximum of 2000 bytes. In the current build of GHC, there is no way to disable these options from working (although the option –RTS will make all further options be interpreted as normal, non-RTS options). The problem is that the option -tout will write profiling data to the file out. So, if your program is setuid root, anybody who runs it can write the profiling data to, say /etc/passwd and render the system unusable. They don’t get to pick what gets written, so they can’t add a backdoor for themselves, but they can essentially scribble over whatever files they want. This is bug #3910, and the fix (disabling RTS by default) has been uploaded.
Now, one of the more little-known features of CGI is that if you pass a query string that does not contain any = signs to a CGI script, the httpd may pass the string along as command-line arguments. This is specified in section 4.4 of RFC 3875, and it specifies how the query string SHOULD be turned into arguments (although it does not say anything about whether the httpd should behave this way, only that some do). This is an example script that only outputs its arguments in a comma-separated list; the link gives it some sample arguments. Note that by URL-escaping, you can send arbitrary strings through… including +RTS. So if that were, say, a Haskell script, I could pass the query string ?%2BRTS+-tindex.html+-RTS and overwrite index.html.
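That argument-splitting behaviour can be sketched as follows (a simplified reading of RFC 3875 §4.4; real httpds differ in details):

```python
from urllib.parse import unquote

def cgi_argv(query_string):
    """Sketch of RFC 3875 section 4.4: a query string containing no '='
    may be split on '+' and URL-decoded into command-line arguments."""
    if "=" in query_string:
        return []
    return [unquote(tok) for tok in query_string.split("+") if tok]

print(cgi_argv("%2BRTS+-tindex.html+-RTS"))
# ['+RTS', '-tindex.html', '-RTS']
```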
There are three ways to get around this: first, GHC 6.12.2 has the -no-rtsopts option, which will obviously disable RTS options. So if you just recompile your script with that, it’ll be safe. Note that 6.14 will disable the RTS options by default; the 6.12.2 patch didn’t for backwards-compatibility reasons. Second, if you don’t want to use 6.12.2 for whatever reason, you can wrap it in a shell script that calls it with no options. For example, replace the Haskell script with a shell script called, say, hscript.cgi (if your Haskell program is called hscript) that calls it with no arguments, e.g.
#!/bin/bash
./hscript.real
and rename the Haskell script to hscript.real, so that it doesn’t get run as CGI (I’m assuming that .real files don’t get run as CGI on your machine!) Another thing you can do is to add the following to your .htaccess, which will give 403 Forbidden errors to anybody passing RTS arguments in the URL:
RewriteEngine on
RewriteCond %{QUERY_STRING} ^(?:[^=]*\+)?(?:%2[bB]|(?:-|%2[dD]){1,2})(?:%52|R)(?:%54|T)(?:%53|S)(?:\+[^=]*)?\$
RewriteRule ^ - [F]
This will solve it for every Haskell script you use, but relies on the regex being correct, which isn’t something I can guarantee.
6 Comments April 23, 2010 / Posted in: Computers, Programming
Tagged: cgi, haskell, planet sipb, security
# dissociated-blogosphere: never have to write an original post again!
For the past two weeks or so, I've been working off and on on a project called dissociated blogosphere (OSX and Linux binaries here). It takes a bunch of URLs, looks through them for an RSS feed with the raw content of the posts, and stores the words of the posts in an array. It then picks N random consecutive words (where in this case N is 2) and starts generating new text: if, in the corpus, the previous N words were followed by a given word x% of the time, it picks that word x% of the time. For example, if 90% of the time the words 'the quick' were followed by 'brown', and the other 10% of the time they were followed by 'red', then when the two-word phrase 'the quick' is randomly generated, it picks 'brown' 9 times out of 10 and 'red' 1 time out of 10. This is the algorithm Emacs's dissociated press feature uses, hence the name. Running it a few times on this site and picking some of my favorite sentences gives:
Second, I ignored the axes of the work you envision. So start small, and think about the free group on two generators, which is obviously highly undesirable behavior. However, it does have the web interface, I’ll have it up by last week, but that obviously didnt happen. Taking into account the fact that I’m using. The central thing that makes MS Paint Adventures unique to the point where it’s my go-to language for random programs (I still use Python for that), but if we pick two of them and rotate one of the set of all rotations that you have some custom function you want soup or salad, both is not a valid answer.
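The core of the algorithm fits in a few lines; here's a Python sketch (the real project is written in Haskell):

```python
import random
from collections import defaultdict

def build_chain(words, n=2):
    """Map each n-word prefix to the list of words that followed it;
    duplicates in the list reproduce the observed frequencies."""
    chain = defaultdict(list)
    for i in range(len(words) - n):
        chain[tuple(words[i:i + n])].append(words[i + n])
    return chain

def generate(chain, start, length=20, seed=0):
    """Walk the chain from an n-word start state, sampling each successor
    in proportion to how often it followed the current prefix."""
    rng = random.Random(seed)
    n = len(start)
    out = list(start)
    for _ in range(length):
        successors = chain.get(tuple(out[-n:]))
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the quick brown fox and the quick red fox".split()
chain = build_chain(corpus)
print(chain[("the", "quick")])   # ['brown', 'red'] -- 50/50 successors
print(generate(chain, ("the", "quick")))
```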
It’s my first medium-scale project written in Haskell (even though there isn’t a lot of code, what little was there was not trivial to write), and I’ve learned several lessons from it:
• The Haskell wiki is an excellent resource. When I was trying to learn how to use HXT, the Haskell XML Toolbox, I found the provided documentation somewhat inadequate. But the HXT article on the Haskell wiki is an excellent introduction to the filter abstraction, which is all that I need for the basic stuff that I’m using.
• Read the Haddock documentation. The HXT article, as useful as it was, didn't cover a couple of essential things I needed to know (such as how to pull all elements with type "application/rss+xml"). So I looked at the documentation for Text.XML.HXT.Arrow.XmlArrow (the module containing the arrows that HXT uses to filter XML), and saw that `hasAttrValue :: String -> (String -> Bool) -> a XmlTree XmlTree` looks about right; from the type, I can guess (correctly) that I need to pass it the attribute and a predicate on the value of the attribute (i.e., `hasAttrValue "href" (== "application/rss+xml")`).
• One goal at a time. This isn’t specific to Haskell. When I started on this, I meant for it to require you to provide the RSS feed. Then, I realized that having a larger corpus might be better, so I added the ability to pull from multiple feeds. Then I decided that expecting people to find the RSS feed by hand might be a bit much, so I rewrote it to pull the RSS feed from the site. And I eventually plan to write a CGI frontend so that you can just run it online. If I had decided from the start to do all these things, I probably never would’ve gotten started. As Linus Torvalds said:
Nobody should start to undertake a large project. You start with a small trivial project, and you should never expect it to get large. If you do, you’ll just overdesign and generally think it is more important than it likely is at that stage. Or worse, you might be scared away by the sheer size of the work you envision. So start small, and think about the details. Don’t think about some big picture and fancy design. If it doesn’t solve some fairly immediate need, it’s almost certainly over-designed.
• Strip and gzip your executables if you’re going to distribute them. Due to the fact that I’m statically linking in HXT, which is a sizeable library, the compiled, non-stripped version of dissociated-blogosphere is a whopping 12 megabytes. This isn’t due to inefficiencies in my own code, but due to the sheer size of the HXT library. Running the Unix command line utility strip (which only removes internal debugging information) cuts it down to about 5 MB, and then gzipping the binaries takes it down to a little over a megabyte.
• Split things into libraries where it's appropriate. Part of the problem with using HXT is that it makes recompilation slow; if I could do it all over again, I might have used HaXml, but HXT has the advantage of having nontrivial amounts of documentation written about it (on the Haskell wiki). If I had instead split the RSS parsing code into its own library, I could have recompiled only those parts whenever I touched them, which wasn't nearly as often as I touched the frontend code. Plus, it's just good programming practice.
So what do I have planned for dissociated-blogosphere? First off, I plan to make it faster by caching RSS lookups; by storing a map from page URLs to RSS feeds, I can cut the number of network requests in half. Second, I plan to implement actual error handling; right now if you give it a bad URL, it fails and doesn't produce any useful output, regardless of whether the other URLs are good. Third, I'm going to split out the RSS part into its own library, which I might make its own package on Hackage. Fourth, I intend to eventually write a web interface (either in Haskell or in Python) so that you don't have to download and install it. I originally intended to have the web interface up by last week, but that obviously didn't happen. Taking into account the fact that it'll take longer than I think it will, I'm guessing I'll have it up by two weeks from now (so, a month). And finally, when/if I do the web interface, I'll have it color the text according to which blog it's from; or, if I don't write the web interface, maybe I'll have it output xterm color codes.
1 Comment April 6, 2010 / Posted in: Computers, Programming
# What comes after reCAPTCHA?
reCAPTCHA, the system I use to keep spam out of the comments, is probably one of the most popular CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart) out there. And for a very good reason: it draws its source words only from texts that current optical character recognition (OCR) technology is unable to read; therefore, no spam bot should be able to read them, especially after reCAPTCHA applies some extra distortion to render them absolutely non-machine-readable. But what do we do when OCR technology advances to the point where it is as good as humans at reading text? As technology for reading words improves, it seems likely that within the next decade or two, the level of distortion necessary to render a word unreadable by machines will also make it illegible to humans. So what next?
One class of alternatives is image-recognition CAPTCHAs: you present the user with ten distorted images (to prevent random guessing by bots) and ask them which ones contain a cat, or which ones have been rotated upside-down, or which ones are people. This is essentially a generalization of text-based CAPTCHAs, but it has several problems. First and foremost, you need a large source of images to show the user. This is one of the huge advantages of text-based CAPTCHAs: they can be procedurally generated. If the image database for a CAPTCHA service is small, then it'll be passed around by spam bots; since recognizing whether two images are the same is a fairly solved problem, all they have to do is answer your question for each of the images once. (The distortion's purpose is to make image comparison harder in case spammers do get hold of your database, not to make it impossible.) One method would be to browse Flickr for photos tagged with an object and assume that each such photo contains that object, but you'd run into copyright issues, as well as essentially relying on the fact that someone won't tag a photo 'cat' just because it has a kitten in the distant background.
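As an aside on "recognizing whether two images are the same is a fairly solved problem": one cheap standard technique is an average hash, where the image is shrunk to a tiny grayscale grid and each cell is compared to the mean brightness, so near-duplicates agree in almost every bit even after mild distortion. Here is a toy pure-Python version (the 2x2 inputs and all names are mine, purely illustrative; real implementations resize to something like 8x8 first):

```python
def average_hash(pixels):
    """Hash a grayscale image (equal-length rows of 0-255 ints):
    one bit per pixel, set iff that pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return sum(1 << i for i, p in enumerate(flat) if p > mean)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

img   = [[10, 200], [200, 10]]   # tiny 2x2 "image"
noisy = [[12, 190], [205, 8]]    # the same image with slight noise
other = [[200, 10], [10, 200]]   # the inverse pattern

assert hamming(average_hash(img), average_hash(noisy)) == 0   # near-duplicates collide
assert hamming(average_hash(img), average_hash(other)) == 4   # different images differ
```

A spammer with your database only needs hashes within a small Hamming distance to match images, which is why distortion alone is weak protection.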
One other idea that I’ve seen a couple sites use is knowledge-based, relying on the fact that machines can’t yet parse natural language. So it asks a question like “what is 2 plus 2?”. The fundamental problem I see with this is that, again, you’re going to have a very small repertoire of questions; a CAPTCHA has to be able to be generated by a computer. Not to mention the fact that whatever question-generating algorithm you use could just be reverse-engineered to extract content, then passed to Google or Wolfram Alpha to get the answer. Unlike images, there’s no way to ‘distort’ a question.
A third possibility, orthogonal to trying to tell real people from computers, is to look at the content of the message, rather than require the message sender to pass some arbitrary test. This is the approach Akismet (which comes by default on WordPress) uses, and is similar to the way e-mail clients detect spam. This has the downside of having a higher false positive rate than CAPTCHA-based methods. A short comment saying ‘Hey, I read your article and liked it; check out this link’ can either be legitimate or spam, and determining which one it is would require knowing the contents of the link. So your CAPTCHA system would have to visit links posted by users, which is obviously highly undesirable behavior. However, it does have the advantage of not relying on some problem being ‘hard’ to solve, and it also removes the (admittedly small) barrier to commenting that CAPTCHAs produce.
For now, reCAPTCHA will remain good enough; it’s easy to solve, and the word combinations that I can’t easily read can be dismissed with a click of the refresh button. And since I have very low traffic, I can afford to have an e-mail sent to me for every comment I get here; if it does wind up being spam (apparently, either reCAPTCHA isn’t completely impervious to computer solving or there’s some sweatshop worker whose job is to spam sites with cheap Viagra ads) I can just delete it.
3 Comments March 16, 2010 /
# Why I Love Currying
So I’ve been playing around with Haskell a lot lately and using it for various random stuff; I haven’t progressed to the point where it’s my go-to language for random programs (I still use Python for that), but I at least have an idea of how to use it. And there’s one feature of Haskell that I miss sorely when I write code in Python, or pretty much any other vaguely functional language: currying.
In Haskell, every function takes a single argument. A function of multiple arguments, such as `map`, which applies a function to every element in a list, actually only has one argument; for example, `map` can be interpreted either as taking a function and a list and returning a list, or as taking a function and returning a function that takes a list and returns a list. More formally, in Haskell, these two type declarations are equivalent:
map :: (a -> b) -> [a] -> [b]
map :: (a -> b) -> ([a] -> [b])
This process, of taking a multi-argument function and converting it into a series of single-argument functions is known as currying, after the mathematician Haskell Curry (who, obviously, is also the source of the name Haskell); the process of partially applying arguments to a function in this way is known as ‘partial application’, but is also called currying. One of the most obvious examples of currying is in sections: the function `(0 ==)` is syntactic sugar for `(==) 0`, and returns whether its argument is equal to zero. Furthermore, we can also partially apply the predicate to filter, to make a function that filters its argument on a fixed predicate. So, these three examples are completely equivalent:
removeZeros :: [Integer] -> [Integer]
removeZeros xs = filter (\x -> x /= 0) xs
removeZeros xs = filter (/= 0) xs
removeZeros = filter (/= 0)
(where `/=` is Haskell's not-equal operator). The first is the most explicitly-written version, using no currying at all. The second curries the predicate; `(/= 0) x` is the same as `x /= 0`. Finally, since `removeZeros` applied to an argument is the same as applying `filter (/= 0)` to it, we might as well define the former as the latter. Or, to take another example, look at `sortBy`: it has type `(a -> a -> Ordering) -> [a] -> [a]`, where `Ordering` is a datatype that can be `EQ`, `LT` or `GT` for equal, less than, or greater than. So if you have some custom function you want to sort a list on, you can just say `mySort = sortBy f` and it will be the same as writing `mySort xs = sortBy f xs`, only cleaner and neater. Or in my Data.Nimber module (specifically lines 38, 39, and 43), many operations on Nimbers that are required in order for me to call them 'numbers' are just the identity operation. So instead of saying `abs x = x`, I can just say `abs = id`.
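For comparison, the closest Python gets to this out of the box is explicit partial application; a rough analogue of the `removeZeros` example (my own sketch, and the helper names are mine):

```python
from functools import partial

def remove_zeros_explicit(xs):
    """The fully spelled-out version, like the first Haskell definition."""
    return [x for x in xs if x != 0]

# partial() fixes the predicate argument of filter, much like `filter (/= 0)`,
# but we have to invoke partial explicitly -- there is no automatic currying.
remove_zeros = partial(filter, lambda x: x != 0)

assert remove_zeros_explicit([0, 1, 0, 2, 3, 0]) == [1, 2, 3]
assert list(remove_zeros([0, 1, 0, 2, 3, 0])) == [1, 2, 3]
```

The point-free Haskell definition falls out of the type system for free; in Python the same idea needs an explicit `partial` (or a wrapping lambda) every time.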
Furthermore, without currying, you couldn’t have variadic functions; in order to work inside Haskell’s type system, the two types `a -> b -> c` and `a -> (b -> c)` have to be the same type. The full explanation involves typeclasses, and is (in my opinion) worth a read, because it’s a good explanation of a pretty horriblexcellent (it’s both at once, you see) type system hack.
As an aside, this also means that `id :: a -> a`, the identity function, is in a sense the same thing as `($) :: (a -> b) -> a -> b`, which is function application. You can see this by substituting `(b -> c)` for `a` in the type of id, then removing parentheses:
id :: a -> a
id :: (b -> c) -> (b -> c)
id :: (b -> c) -> b -> c
So, in particular, `id f x` is the same as `f $ x`, which is just `f x`. Another way to think of this is that ``f `id` x = id f x = (id f) x = f x``.
9 Comments February 12, 2010 / Posted in: Computers, Programming
# Variadic Functions in Haskell
Most modern languages have some kind of printf analogue: a function that takes a format string, and a series of things to be inserted into that string, and formats them all accordingly. At first glance, Haskell’s strong type system would seem to preclude this. There’s no built-in system for writing functions that take variable numbers of arguments, and it seems like it would be difficult to write one. The standard approach is to take a list instead, but this fundamentally doesn’t work for printf, since you’re going to be wanting to print Integers, Strings, and Floats. It’s possible to just pre-apply show to everything, but that’s not really a good idea, because you might want to show them in a different way than the built-in show does. You can use an extension called existential types to create a list of PrintfWrappers which wrap integers/floats/strings (more on that below), but that requires your users to manually do the wrapping, which is, once again, not a good idea. Haskell’s Text.Printf module takes a third approach. Look at the following lines:
instance (IsChar c) => PrintfType [c]
instance PrintfType (IO a)
instance (PrintfArg a, PrintfType r) => PrintfType (a -> r)
instance PrintfArg Integer
instance (IsChar c) => PrintfArg [c]
printf :: (PrintfType r) => String -> r
Here's how to interpret this: PrintfType is the type of things that can be printed to. Printing to a `String` just gives you a string, much like sprintf in C or Perl; printing to an `IO ()` will actually print it out (so you can use it like a normal printf in do blocks, a behavior which I personally find distasteful). However, printf will return undefined when asked to return an `IO a` for any `a` other than `()`; the reason the instance nevertheless covers all of `IO a` is that declaring only `IO ()` as an instance of PrintfType is invalid according to Haskell 98.
`PrintfArg`s, by comparison, are the types that are valid arguments to `printf`; they basically consist of the various `WordN`/`IntN` types, `Integer`, `Float`, `Char`, and `(IsChar c) => [c]`. The point of the last instance is that, while you can't have a specific version of a polymorphic type be an instance of a typeclass, you can restrict it to types whose parameters are themselves instances of another typeclass; the only instance of `IsChar` is `Char`.
So now that we have that clarified, let's suppose we want to call printf with "%s %d %f" "foo" 42 3.1, passing it the format string, a String, an Integer, and a Float. This causes printf's type to become
printf :: String -> String -> Integer -> Float -> String
Does this match the pattern `(PrintfType r) => String -> r`? Let's go in reverse. `String` is an instance of `PrintfType`, and `Float` is an instance of `PrintfArg`, so `Float -> String` is an instance of `PrintfType`. Therefore, `Integer -> (Float -> String)` is an instance of `PrintfType`, and so is `String -> (Integer -> (Float -> String))`. Dropping parentheses, this becomes `String -> Integer -> Float -> String`. So the types all check out. If you pass an invalid type, then you'll run into something that isn't an instance of `PrintfArg`, and so the types won't check.
I mentioned above that if you use something called ‘existential types’, you can do something similar. The way it works is that you define a new type whose data constructor only requires that its argument be of a given typeclass. Look at the following example
{-# LANGUAGE ExistentialQuantification #-}
data Box = forall s. (Show s) => Box s
boxes = [Box 2, Box "f", Box [8,3]]
showBoxes :: [Box] -> String
showBoxes [] = ""
showBoxes ((Box x):xs) = show x ++ " " ++ showBoxes xs
When you run `showBoxes boxes`, you get `2 "f" [8,3]`, exactly as you'd expect. Note, however, that the function `unbox (Box x) = x` cannot be written; it would have to be of type `(Show s) => Box -> s`, and there's just no real way to do that. So once you've wrapped something up in a Box, you can only get at it by `show`ing it. From this, you can see how to pass a heterogeneous list to `printf`. The reason that this approach is suboptimal is that it would require Text.Printf to export a Printf data constructor which would wrap up everything to make it of the appropriate type, and that would be rather annoying, especially since it relies on show preserving enough information for you to format the number after `read`ing it back in.
This pattern can obviously be extended to any other variadic, heterogeneous function, as long as you can define a suitable typeclass that its arguments must all be instances of. And that's not really a restriction at all; if you can't specify a behavior that the instances must have, then you don't really know what you can do with the arguments, and so you can't do anything at all!
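That closing point — every argument must support some shared behavior — has a loose analogue even in dynamically typed languages. Here is a hedged Python sketch (my own, unrelated to Text.Printf's implementation) in which `functools.singledispatch` plays the role of the `PrintfArg` class: unregistered types fail at runtime, much as non-instances fail to type-check.

```python
from functools import singledispatch

@singledispatch
def render(arg):
    """Default case: reject anything not explicitly made an 'instance'."""
    raise TypeError(f"{type(arg).__name__} is not a valid printf argument")

@render.register(int)
def _(arg):
    return str(arg)

@render.register(float)
def _(arg):
    return f"{arg:.2f}"

@render.register(str)
def _(arg):
    return arg

def my_printf(*args):
    """Variadic and heterogeneous, but each argument must be renderable."""
    return " ".join(render(a) for a in args)

print(my_printf("foo", 42, 3.1))   # -> foo 42 3.10
```

The crucial difference is that Haskell rejects `printf "%d" [1,2]` at compile time, while this sketch only raises a `TypeError` once the bad argument is reached.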
5 Comments January 12, 2010 / Posted in: Computers, Programming
# Electrons are not like planets
One of my first posts on this blog was about the stability of the atomic nucleus; given that it consists of a bunch of positive/neutral charges clumped together, why doesn’t it fly apart? The answer involves the strong force, which is strong at atomic distances but miniscule at inter-nuclear distances; on distances comparable to that of an atomic nucleus, it’s strong enough to overcome the electromagnetic repulsion. But there’s another question, and it involves the model of orbiting electrons.
Classically (meaning without quantum mechanics), the electron is pictured as a pointlike particle orbiting the nucleus. For simplicity, we'll look at the hydrogen atom. The electron orbits at the Bohr radius $a_0 \approx 5.3\times 10^{-9}$ cm, which can't really be derived in an easy way from theory; we'll just take it as a given. So the acceleration the electron undergoes is $a = q^2/mr^2$ (using cgs units to avoid factors of $4\pi\epsilon_0$ and such everywhere), using good old $F = ma$.
However, there is a problem: any accelerating charge radiates energy. The reason for this is, roughly speaking, that an accelerating charge has more energy in its electromagnetic field, so you have to expend more energy to accelerate it; since an atom is a closed system, there can be no energy source, so the electron 'extracts' the energy by spiraling inwards. The derivation of the formula is complicated, but it turns out that a particle of charge $q$ accelerating at a rate $a$ will radiate energy at a rate $$P = \frac{2q^2a^2}{3c^3} = \frac{2q^6}{3c^3m^2r^4}$$ (the second expression is what we get when we plug in our value for the acceleration).
So what's the energy of an electron orbiting its atom? The kinetic energy is equal to $q^2/2r$, and the potential energy is equal to $-q^2/r$, so the total energy is just $$E = -\frac{q^2}{2r}.$$
Considering both energy and radiated power as functions of time, we get $dE/dt = -P$; but since the only variable that can change is the radius, we then get the following differential equation for the radius: $$\frac{dr}{dt} = -\frac{4q^4}{3c^3m^2r^2}.$$
The method of solving this equation isn't important; I used Mathematica. What is important is the solution: $$r(t) = \left(a_0^3 - \frac{4q^4}{c^3m^2}\,t\right)^{1/3},$$ where $a_0$ is the Bohr radius I mentioned earlier. This will obviously become zero at $t = m^2c^3a_0^3/4q^4$; plugging in the cgs values for these constants, we get that $t \approx 1.6\times 10^{-11}$ seconds. So according to this classical model, a hydrogen atom decays in less than the time it takes light to move a centimeter.
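To check the number: integrating $dr/dt = -k/r^2$ gives a radius that hits zero at $t = m^2 c^3 a_0^3 / 4q^4$, which is quick to evaluate in cgs units (the constant values below are standard, not from the post):

```python
m  = 9.109e-28   # electron mass [g]
c  = 2.998e10    # speed of light [cm/s]
q  = 4.803e-10   # electron charge [esu]
a0 = 5.292e-9    # Bohr radius [cm]

# time for the classical orbit radius r(t) to reach zero
t = m**2 * c**3 * a0**3 / (4 * q**4)

print(f"classical decay time:  {t:.2e} s")    # about 1.6e-11 s
print(f"light crosses 1 cm in: {1/c:.2e} s")  # about 3.3e-11 s
```

So the decay time is indeed shorter than the light-centimeter time, as claimed.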
So what's the answer? One is to use the Bohr model, which postulates a minimum energy; the electron cannot fall farther into the energy well. This model works to a certain extent, but fails with more complicated atoms; not only that, but it predicts that the hydrogen atom has a minimum nonzero angular momentum, which is not the case. But a full treatment of the failings of the Bohr model goes beyond what I know.
No Comments January 1, 2010 /
Tagged: particle physics, physics, planet sipb
http://math.stackexchange.com/questions/136542/a-question-about-mobius-transformation-in-complex-variables
# A question about Möbius transformation in Complex Variables
What does it mean that $$w=\frac{az+b}{cz+d}$$ can map a circle to a line, a line to a line, a circle to a circle or a line to a circle?
Thanks for your explanation. I noticed that many problems regarding conformal mapping ask you to construct a map achieving a desired result, for instance fixing some points or mapping one region onto another. But I don't know how to find the desired mapping. Are there certain rules I should follow? The following is an example. Let U be the upper half plane from which the points of the
-
is $a,b\in\mathbb{R}$ or $a,b\in\mathbb{C}$ ? – Abdelmajid Khadari Apr 25 '12 at 1:08
– John Adamski Apr 25 '12 at 3:39
@John It truly is very enlightening and beautiful. Thank you. – Megan Apr 25 '12 at 18:49
## 1 Answer
The coefficients do not really matter: any (invertible) Möbius transformation has $ad-bc \neq 0.$ Such a mapping, written as a function, is the composition of a finite string of just three types:
First $f(z) = A z$ for some nonzero (real or complex) $A$ takes circles with center at $0$ to others, lines through $0$ to others.
Second $g(z) = z + B$ takes lines to lines and circles to circles, it is just a translation.
Third, $h(z) = \frac{-1}{z}$ does a little of both. Since $0$ is sent out to $\infty$ and $\infty$ is brought back to $0,$ the following bunch of things happen:
A) a line through $0$ is sent to a line through $0.$
B) a line not through $0$ is sent to a circle that passes through $0,$ with center elsewhere.
C) A circle that passes through $0$ is sent to a line that does not pass through $0.$
D) A circle that does not pass through $0$ is sent to another circle that does not pass through $0.$
You really ought to draw some things yourself for $h(z) = -1/z.$ There is a good reason for the minus sign, but if you prefer draw $H(z) = 1/z.$
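If you'd rather compute than draw, here is a quick numeric sanity check (my own sketch) of case B: the line $\operatorname{Re} z = 1$ does not pass through $0$, and $h(z) = -1/z$ sends it to the circle of radius $1/2$ centered at $-1/2$, which does pass through $0$.

```python
# Sample points z = 1 + it on the vertical line Re(z) = 1 ...
points = [complex(1, t) for t in (-10, -2, -0.5, 0, 0.5, 2, 10)]

# ... and map them with h(z) = -1/z.
images = [-1 / z for z in points]

center, radius = complex(-0.5, 0), 0.5
for w in images:
    # every image lands on the circle |w + 1/2| = 1/2
    assert abs(abs(w - center) - radius) < 1e-12

# the circle passes through 0: far out on the line, images approach w = 0
assert abs(-1 / complex(1, 1e9)) < 1e-8
```

The same experiment with points on a line through $0$ (say $z = it$) produces images on another line through $0$, matching case A.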
-
Thanks for your explanation. – Megan Apr 25 '12 at 1:48
http://math.stackexchange.com/questions/115624/confusion-on-legendre-symbol
# confusion on legendre symbol
I know that $\left(\frac{1}{2}\right)=1$ since $1^2\equiv 1 \pmod 2$. Now since $3\equiv 1\pmod 2$, we should have $\left(\frac{3}{2}\right)=\left(\frac{1}{2}\right)=1$, but in Maple I get that $\left(\frac{3}{2}\right)=-1$. Why?
-
The command `\pmod` produces the appropriate spacing and font for this. – joriki Mar 2 '12 at 9:17
## 1 Answer
The Legendre symbol, the Jacobi symbol and the Kronecker symbol are successive generalizations that all share the same notation. The first two are usually only defined for odd lower arguments (primes in the first case), whereas the Kronecker symbol is also defined for even lower arguments.
Since the distinction is merely historic, I guess it makes sense for math software to treat them all the same; Wolfram|Alpha returns $-1$ for `JacobiSymbol(3,2)`. See the Wikipedia article for the definition for even lower arguments; the interpretation that a value of $-1$ indicates a quadratic non-residue is no longer valid in this case.
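Concretely, for lower argument $2$ the Kronecker symbol is $\left(\frac{a}{2}\right) = 0$ for even $a$, $+1$ for $a \equiv \pm1 \pmod 8$, and $-1$ for $a \equiv \pm3 \pmod 8$. A direct transcription in Python (my own sketch) reproduces the Maple / Wolfram|Alpha answer:

```python
def kronecker_2(a):
    """Kronecker symbol (a/2): 0 for even a, +1 for a = ±1 (mod 8), -1 for a = ±3 (mod 8)."""
    if a % 2 == 0:
        return 0
    return 1 if a % 8 in (1, 7) else -1

assert kronecker_2(1) == 1    # consistent with (1/2) = 1
assert kronecker_2(3) == -1   # the result that prompted the question
assert kronecker_2(7) == 1
assert kronecker_2(4) == 0
```

In particular $\left(\frac{3}{2}\right) = -1$ even though $3 \equiv 1 \pmod 2$: for even lower arguments, the value depends on the residue mod $8$, not mod $2$.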
-
http://mathoverflow.net/questions/90980/what-is-the-dimension-of-the-product-ring-prod-mathbb-z-2n-mathbb-z
## What is the dimension of the product ring $\prod \mathbb Z/2^n\mathbb Z$ ?
In an answer to a question on our sister site here I mentioned that a reduced commutative ring $R$ has zero Krull dimension if and only if it is von Neumann regular, i.e. if and only if for any $r\in R$ the equation $r=r^2x$ has a solution $x\in R$.
A user asked in a comment whether this implies that an arbitrary product of zero-dimensional rings is zero dimensional.
I answered that indeed this is true and follows from von Neumann regularity if the rings are all reduced, but I gave the following counterexample in the non-reduced case:
Let $R$ be the product ring $R=\prod_{n=1}^\infty \mathbb Z/2^n\mathbb Z$.
Every $\mathbb Z/2^n\mathbb Z$ is zero dimensional but $R$ has $\gt 0$ dimension.
My argument was that its Jacobson radical $Jac(R)=\prod_{n=1}^\infty Jac (\mathbb Z/2^n\mathbb Z)=\prod_{n=1}^\infty2\mathbb Z/2^n\mathbb Z$ contains the non-nilpotent element $(2,2,\cdots,2,\cdots)$.
However, in a zero-dimensional ring the Jacobson radical and the nilradical coincide, and thus $R$ must have positive dimension.
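To make the non-nilpotence concrete, here is a small illustrative Python check (a finite truncation, so a sanity check rather than a proof): in the single factor $\mathbb Z/2^n\mathbb Z$ the element $2$ is nilpotent of index $n$, so any fixed power of $(2,2,\dots)$ remains nonzero in every factor with $n$ large enough.

```python
def power_in_product(base, k, N):
    """k-th power of (base, base, ...) in Z/2^1 x ... x Z/2^N, componentwise."""
    return [pow(base, k, 2 ** n) for n in range(1, N + 1)]

N = 20
for k in range(1, N):
    # 2^k is nonzero mod 2^n whenever n > k, so no power of (2,2,...) vanishes
    assert any(c != 0 for c in power_in_product(2, k, N))

# by contrast, within any single factor Z/2^n, the element 2 IS nilpotent
assert pow(2, 5, 2 ** 5) == 0
```

Since the nilpotence index grows without bound across the factors, $(2,2,\dots)$ is not nilpotent in the full product.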
My question is then simply: we know that $dim(R)\gt 0$, but what is the exact Krull dimension of $R$ ?
Edit
Many thanks to Fred and Francesco, who simultaneously (half an hour after I posted the question!) referred to an article by Gilmer and Heinzer answering my question.
Here is a non-gated link to that paper.
Interestingly the authors, who wrote their article in 1992, explain that already in 1983 Hochster and Wiegand had outlined (but not published) a proof that $R$ was infinite dimensional.
Already after superficial browsing I can recommend this article, which contains many interesting results like for example infinite-dimensionality of $\mathbb Z^{\mathbb N}$.
New Edit
As I tried to read Hochster and Wiegand's article, I realized that it refers to an article of Maroscia to which I have no access. Here is a more self-contained account of some of Hochster and Wiegand's results.
-
## 2 Answers
The ring $R$ is infinite-dimensional. More generally, the product of a family of zero-dimensional rings has dimension $0$ if and only if it has finite dimension. This is proven as Theorem 3.4 in R. Gilmer, W. Heinzer, Products of commutative rings and zero-dimensionality, Trans. Amer. Math. Soc. 331 (1992), 663--680.
-
You have beaten me for a few seconds :-) – Francesco Polizzi Mar 12 2012 at 13:09
Thanks a lot, Fred. – Georges Elencwajg Mar 13 2012 at 6:25
According to the paper Products of Commutative Rings and Zero-Dimensionality, your ring must be infinite-dimensional.
-
Thanks a lot, Francesco. I'm sorry I can accept only one answer: I would have liked to accept yours too. – Georges Elencwajg Mar 13 2012 at 8:21
You are welcome Georges. Since Fred answered first, it is fair that you accept his answer – Francesco Polizzi Mar 13 2012 at 12:21
http://mathoverflow.net/questions/81281?sort=votes
## How to see isometries of figure 8 knot complement
The figure 8 knot complement $M$ is the orientable double cover of the Gieseking manifold, which implies that $M$ has a fixed-point free involution. If we think of $M$ with its hyperbolic metric, this involution is an isometry. Is there some way of visualizing this isometry?
I know how to produce one relatively easy to see isometry of $M$. The fundamental group of $M$ is a two generator group. Lift geodesic representatives of a pair of generators to $\mathbb{H}^3$. These geodesics have a mutual perpendicular, and $180^{\circ}$ rotation about that geodesic descends to an involution of $M$. However, this map has fixed points, and I'd like to "see" one that doesn't.
-
## 2 Answers
If you're interested in the involution only defined on the complement, Igor's answer does a fine job.
But the involution extends to an involution of $S^3$ and perhaps you'd like to see that?
I think the symmetry is a little tricky to traditionally visualize, because as a map of $S^3$, thought of as the unit sphere $S^3 \subset \mathbb C^2$ it's of the form $(z_1,z_2) \longmapsto (\overline{z_1}, -z_2)$. If you stereographically project at one of these fixed points, this becomes the involution of $\mathbb R^3$ given by $(x,y,z) \longmapsto (-x,-y,-z)$. Both the fixed points have to be on the knot, so this means we have to find a "long" embedding of the figure-8 knot in $\mathbb R^3$ which is invariant under the antipodal map. That's easy. Here are two such (approximate) positions:
In the latter picture the fixed point is easier to see -- the blue straight line intersects the knot in 5 points, one at the "center" and four other points indicated by blue balls.
Looking through the code I used to generate the latter picture, the parametrization I use of the figure-8 knot is:
$$t \longmapsto (5(t^3-3t), 0.25(t^7-42t), t^5-5t^3+4t)$$
which is easy to verify is equivariant with respect to the above involution, as all the terms in the polynomial are odd.
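Indeed, since every monomial in the parametrization has odd degree, $f(-t) = -f(t)$; a quick numeric check (my own sketch):

```python
def f(t):
    """The figure-8 parametrization quoted above; every monomial has odd degree."""
    return (5 * (t**3 - 3 * t),
            0.25 * (t**7 - 42 * t),
            t**5 - 5 * t**3 + 4 * t)

for t in (0.1, 0.5, 1.3, 2.0):
    x, y, z = f(t)
    xm, ym, zm = f(-t)
    # equivariance with respect to the antipodal map (x, y, z) -> (-x, -y, -z)
    assert abs(xm + x) < 1e-9 and abs(ym + y) < 1e-9 and abs(zm + z) < 1e-9
```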
-
Nice! What did you use to generate the picture? – auniket Nov 19 2011 at 0:45
The latter image was generated in two steps. The knot and the line were computed in some C++ code that I wrote. Then the 3-dimensional data was output into a PoVRay format, which was then rendered in PoVray: povray.org The top image I just grabbed in a Google search. – Ryan Budney Nov 19 2011 at 1:49
Maybe I'm confused; isn't the OP trying to visualize a fixed-point free involution? – jc Nov 19 2011 at 6:14
The only fixed points are on the knot -- it's free on the knot complement. – Ryan Budney Nov 19 2011 at 6:24
The figure 8 complement decomposes into 2 regular ideal tetrahedra (see Thurston's notes, here for example). This gives the involution quite explicitly.
-
http://math.stackexchange.com/questions/34475/order-topology-and-discrete-topology?answertab=votes
# order topology and discrete topology
I have this homework question. Consider the set $X = \{1,2,3\}$.
(a) With the natural order on $X$, find the basis for its order topology, (b) Show that the order topology on $X$ equals its discrete topology.
I suppose the natural order to be $1<2<3$, so that $1$ is the least element and $3$ is the largest element; then $B=\{[1,3),(1,3),(1,3]\}$ is the basis for the order topology on $X$.
For part (b), I would like to write $B=\bigg\{\{1,2\},\{2\},\{2,3\}\bigg\}$ but I see it will not work. I need help! Thanks.
-
"homework" should go in a tag, not the title. – joriki Apr 22 '11 at 7:34
## 1 Answer
I hope we are working with the same definition of order topology.
According to this definition, the base contains all intervals $(a,b)$, $(a,\infty)$, $(-\infty,b)$. (Of course, the base for a topological space is not determined uniquely, but this is the one from the definition.) Since this set has a largest and a smallest element, you can rewrite them as $(a,b)$, $(a,3]$, $[1,b)$.
Since $[1,2)=\{1\}$, $(1,3)=\{2\}$, $(2,3]=\{3\}$, the base contains all singletons, and thus the space is discrete.
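The argument can be checked mechanically for this three-point set (a small Python sketch, not part of the original answer):

```python
X = [1, 2, 3]

def interval(a, b):
    """The open interval (a, b) inside X."""
    return frozenset(x for x in X if a < x < b)

# Basis of the order topology, using that 1 is least and 3 is greatest:
# intervals (a, b) and half-open rays (a, 3], [1, a).
basis = set()
for a in X:
    for b in X:
        basis.add(interval(a, b))
    basis.add(frozenset(x for x in X if x > a))   # (a, 3]
    basis.add(frozenset(x for x in X if x < a))   # [1, a)
basis.discard(frozenset())

# [1,2) = {1}, (1,3) = {2}, (2,3] = {3}: every singleton is a basic open
# set, so the order topology on X is discrete.
assert all(frozenset({x}) in basis for x in X)
```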
-
thanks, am fine – temba alloyce Apr 22 '11 at 7:48
http://mathoverflow.net/questions/71571?sort=newest
Example of symplectic and hamiltonian diffeomorphism on $S^2$ and $T^2$
Hello everybody. To consolidate a result I am trying to set down, I need to construct an example to support the theory, and I am looking for symplectic and Hamiltonian diffeomorphisms. Could someone help me write some non-trivial explicit examples of symplectic and Hamiltonian diffeomorphisms on a compact surface? (At least examples for $S^2$ and $T^2$.) NB: by surface I mean a 2-dimensional manifold. Thanks a lot.
-
Sorry, what is a symplectic diffeomorphism on a manifold on which you haven't specified a symplectic structure? Do you mean a symplectic diffeomorphism of the cotangent bundle? – Qiaochu Yuan Jul 29 2011 at 15:45
The standard area forms on the torus and $S^2$ are symplectic structures. I assume those would be good for explicit examples. – Elizabeth S. Q. Goodman Jul 30 2011 at 8:24
In fact as long as you endow a symplectic structure, it doesn't matter which one it is because they are all isotopic up to a scaling. For example if you give two symplectic structures $\omega_i$, $i=0,1$ which represent the same cohomology class, which can always be done by scaling, then you just apply standard Moser's trick on $\omega_t=(1-t)\omega_0+t\omega_1$. You can see everything gets through because of the dimension reason. – Weiwei Jul 30 2011 at 17:36
Thanks all for your interest in my question. First of all, sorry for not being precise. On $S^2$ we could use $\sin\varphi\, d\theta\wedge d\varphi$ in spherical coordinates or $d\theta\wedge dz$ in cylindrical coordinates; feel free to consider whichever is helpful. For $T^2$, take $d\theta\wedge d\varphi$, where $\theta, \varphi \in S^1$ are the angular coordinates. – Tatou Papora Aug 1 2011 at 15:43
3 Answers
Ari's answer is good because you can see how even flowing along a symplectic vector field is not enough, but you could add a nice touch to the picture. Because a Hamiltonian diffeomorphism is exact, a simple closed curve $\gamma$ on a torus is non-displaceable: i.e., under a Hamiltonian flow $\phi^t$ the image $\phi^t(\gamma)$ will intersect $\gamma$, because together the two curves will bound a region of zero signed area. (If you're curious, the concept for this comes from symplectic quasi-states and quasi-measures, which tell you when subsets can be displaced by Hamiltonian flows. I haven't learned much about them.)
On the other hand, again as mentioned, rotation along one of the two angles of the torus is a perfectly good symplectic action that can never be Hamiltonian because it displaces simple closed curves. (I was saddened to hear that because this action is not Hamiltonian, the torus is not a toric manifold.)
-
On an orientable 2-manifold (such as $S^2$ or $T^2$), the symplectic 2-form $\omega$ can be given by the signed area. In this case, symplectic diffeomorphisms are just those which preserve area and orientation.
Now, let's set aside the difference between diffeomorphisms and flows of vector fields (which is a complicated enough issue by itself), and just focus on the difference between symplectic and Hamiltonian vector fields.
A vector field $X$ is Hamiltonian if $i_X \omega = dH$ for some Hamiltonian function $H$ (where $i_X$ is the contraction by $X$ and $d$ is the exterior derivative). On the other hand, $X$ is symplectic if the Lie derivative $L_X \omega$ vanishes. Applying Cartan's "magic formula" $L_X \omega = di_X \omega + i_X d \omega = d i_X \omega$ (since $\omega$ is closed), this means that $X$ is symplectic when $d i_X \omega = 0$. In summary, $X$ is Hamiltonian when $i_X \omega$ is exact, and symplectic when $i_X \omega$ is closed. The difference between these is given by the 1st de Rham cohomology of the manifold in question.
So, as Weiwei said, "symplectic" and "Hamiltonian" are identical on $S^2$, since $S^2$ is simply connected. On the other hand, they're not the same on $T^2$, since the torus has nontrivial 1st cohomology. Putting coordinates $(\theta,\phi)$ on $T^2$, the vector fields $\partial/\partial \theta$ and $\partial/\partial \phi$ are both symplectic but not Hamiltonian.
-
The difference between a Hamiltonian diffeomorphism and a symplectomorphism that is isotopic to the identity is quite small and is described by the flux homomorphism (the breakthrough by Banyaga). So any symplectic isotopy (a path to the identity in the group of symplectomorphisms) with zero flux ends at a Hamiltonian diffeomorphism. Precisely speaking, the isotopy can be made homotopic to a Hamiltonian isotopy having the same ends. – Tatou Papora Aug 1 2011 at 16:01
For $S^2$ all symplectic diffeomorphisms are hamiltonian, and the symplectomorphism group is homotopy equivalent to $SO(3)$. For other surfaces, I find the papers by Andrew Cotton-Clay helpful, for example http://arxiv.org/abs/0807.2488. But of course, if you just want some hands-on non-trivial symplectomorphisms, I think Dehn twists would be interesting enough.
-
Sure. More than being homotopy equivalent, it deformation retracts to $SO(3)$. One can find the proof in Mu-Tao's paper mrlonline.org/mrl/2001-008-005/…. But the interest here is to handle an explicit diffeomorphism, like one can write for example $f:\mathbb R\longrightarrow \mathbb R,\ x\mapsto \frac{x}{x^2+4}\sin(x)$ and so on. – Tatou Papora Aug 1 2011 at 15:52
Sorry about the confusion. The point of mentioning this homotopy equivalence is to demonstrate that there aren't any specifically interesting Hamiltonians on $S^2$: all Hamiltonian diffeomorphisms are connected to one another. So unless you have further restrictions, e.g. requiring the symplectomorphism to fix a couple of points, rigid motions are basically what you need to consider. But maybe you have more specific goals in mind. – Weiwei Aug 1 2011 at 18:51
http://physics.stackexchange.com/questions/37740/micro-canonical-ensemble-and-classical-reality/37757
# Micro-canonical ensemble and classical reality
I seem to find a contradiction in the notion of probability density used by Landau and the notion of micro-canonical ensemble.
To see this, take an isolated classical system; we know experimentally that its energy lies between $E-\Delta$ and $E+\Delta$. So we take a hypershell corresponding to these energies in phase space and say that at equilibrium, the probability density is constant in the whole shell. Now, we know that the system would be, in reality, at a fixed energy $E'$, and the hypersurface corresponding to this energy would lie in the previous hypershell. Also, as the system is isolated, the representative point of the system would move only on this hypersurface. Now, take a point in the shell which lies outside the surface. Choose a small enough neighborhood of it that doesn't intersect the surface. Because the probability distribution is constant, the probability of finding the system in this neighborhood is some non-zero positive number. But, as the system always remains on the surface, it never visits that neighborhood, and hence the probability of finding it in that neighborhood is zero.
Am I doing something wrong?
-
Sorry, I personally found it very hard to find out what you're asking. First of all, it is totally unclear whether you want to talk about classical physics or quantum physics: the title and the word "shell" suggests that it is classical physics, the word "Hilbert space" makes it equally clear that you want to talk about quantum mechanics. Second, the rest of the text suggests that you don't want to allow distributions that are nonzero for states of differing values of energy: you would prefer if only states with a sharp, fixed energy were allowed. But I didn't understand why you think so. – Luboš Motl Sep 19 '12 at 9:02
I am really sorry over there. I meant phase space and not Hilbert space. Guess, I wasn't paying much attention while typing. I will edit it. My question is in the domain of classical physics. – Lakshya Bhardwaj Sep 19 '12 at 9:13
No prob! Still, the bulk of the question seems hard to deal with. The energy may be sharply known but it may be known with an uncertainty. Microcanical ensemble says that all states in the thick shell, interval, are equally likely. The energy is still conserved. If you're on a surface of a fixed E to start with, you will remain on the same surface. But if you don't know what the exact surface is, and in microcanonical ensemble, you don't know what it is for the initial state, you won't know the surface for the final state, either. – Luboš Motl Sep 19 '12 at 9:48
@LubošMotl The energy is known with an uncertainty as mentioned in the question. My confusion is the apparent contradiction between the uniform probability distribution in the shell (as the system is in micro-canonical ensemble) and the meaning of this probability distribution as mentioned by Landau in his book (which I reviewed in the comment of the only answer to this question yet). Landau's definition implies that the probability distribution should be zero outside the "real" surface of energy $E'$ as I argue in the question above. – Lakshya Bhardwaj Sep 19 '12 at 10:29
## 2 Answers
No, you're not doing anything wrong, this is all correct. As an analogy, imagine I roll a die and hide it under a cup. Since you don't know which side of the die is facing upward, you represent it with a probability distribution, with an equal probability assigned to each of the six spaces. This probability distribution doesn't change over time, in this case for the trivial reason that the die isn't moving.
You know that in reality, the die is sitting there with one particular side facing upward, and that it never "visits" any of the other sides. But unless I lift up the cup, you have no choice but to keep on thinking of it as being in a probability distribution, because you don't know which state is the true one.
With the microcanonical distribution it's the same. There is indeed one "true" energy $E'$ that doesn't change, and the system cannot visit states with any other value of $E$. But the assumption is that you don't have any way to measure the energy beyond a certain level of accuracy. So, in the analogy, the die remains hidden under the cup and you have to keep representing it with a probability distribution.
Although many text books fail to make this clear (because it was widely misunderstood for much of the 20th century), the probability distribution doesn't represent the set of states the system can visit, it just represents experimental uncertainty about which state the system is in. It is this uncertainty that remains invariant in equilibrium.
-
Hi, thanks for your answer. I understand what you are saying, but the part that troubles me is the definition of this probability distribution attached to the system as defined by Landau and Lifschitz in their book. They define the probability distribution, $p$, such that if I take a neighborhood of volume $V$ in phase space, then $pV=\lim_{T \to +\infty}\frac{\Delta t}{T}$, where $\Delta t$ is the time the system is found in that neighborhood if it is observed for time $T$. This definition implies that $p=0$ at any point outside the surface. – Lakshya Bhardwaj Sep 19 '12 at 9:43
I haven't read Landau and Lifschitz. I'm aware that their book is famous and respected, but still I wouldn't be too surprised if they were just being inconsistent about this. The view I expressed in my answer is widely held now, but during much of the 20th century there was a strongly held opinion that the only reasonable definition of probability was the amount of time the system spends in a given state in the infinite limit (essentially the formula you quote). But that just turned out not to be a very useful way of thinking about it. – Nathaniel Sep 19 '12 at 10:47
Since from what you say they're talking about knowledge and measurement in saying that $E$ is known with uncertainty, it could even be that they're paying lip-service to the "standard" definition in terms of the infinite time limit, while actually doing calculations that refer to the uncertainty interpretation. That's a bit of a wild guess on my part though. – Nathaniel Sep 19 '12 at 10:54
Do you have some link where the problems with this infinite time limit is discussed? I have come across many different definitions of probability distribution and the successive establishment of formalism and the only way that appealed to me was Landau's way of doing things. In other definitions, the notion of ensemble doesn't make much sense to me. So, what is the currently accepted definition of probability distribution and the accepted meaning of ensemble? – Lakshya Bhardwaj Sep 21 '12 at 15:44
In classical framework one defines an isolated system as that which is not interacting with any other system and thus whose energy is fixed.
Now the (naive) hypothesis would be that upon observation an isolated system will be found on its constant energy surface and its probability to be found in any of the states on its constant energy surface will be equal.
However, as such this hypothesis is self-contradictory, because the very act of observation will make the system non-isolated, and so in particular its energy may change as a result of the observation.
So one refines the hypothesis as:
Upon observation an (initially) isolated system (of energy $E$) will be found with equal probability in any of the states between energy $E-\Delta$ and $E+\Delta$ (Here $\Delta$ is a small number that takes into account the disturbance that your act of observation may produce.)
-
I don't think what you propose is correct. From whatever I know about classical statistical mechanics, it is the physical information about the system that constrains what region of phase space we are concerned with and no measurement is performed thereafter, that is, we analyze the problem statistically from that point onward. It is not that we are first concerned with the whole phase space and then some physical information about the system appears and we expect statistical probabilistic laws to still hold true given this new information. – Lakshya Bhardwaj Sep 21 '12 at 15:49
In the end physics has to connect itself to experiments, and the point is that no experiment can ever observe a purely isolated system. – user10001 Sep 22 '12 at 1:10
http://math.stackexchange.com/questions/152786/cardinality-of-real-number
# Cardinality of real number
Why is the cardinality of the real numbers $2^{\aleph_0}$? While I know that the real numbers can be constructed using sets of natural numbers, that alone does not mean that the real numbers have cardinality $2^{\aleph_0}$. So, what makes the cardinality of the real numbers $2^{\aleph_0}$?
I can't find why it is like this in my textbook...
-
Cantor set. – Asaf Karagila Jun 2 '12 at 8:32
Binary expressions – Henry Jun 2 '12 at 8:56
## 1 Answer
Use the Cantor–Bernstein–Schroeder theorem:
Choose $x\in \mathbb{R}$, and let $\{b_i\}_{i=1}^{\infty}$ be the (shortest) binary expansion of $\frac{1}{2}+\frac{\arctan x}{\pi}$ (which is in $(0,1)$). Then define $\phi(x) = \{ i | b_i = 1 \}$. This establishes an injective map $\phi:\mathbb{R}\to 2^{\mathbb{N}}$.
To go the other way, suppose $A \subset \mathbb{N}$. Then let $t_i = 1_A(i)$ (indicator function), and define $\eta(A) = \sum_{i=1}^{\infty}\frac{t_i}{3^i}$. This establishes an injective map $\eta : 2^{\mathbb{N}} \to \mathbb{R}$.
The desired result follows from the Cantor–Bernstein–Schroeder theorem.
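A numerical sketch of the two injections (truncated to finitely many digits purely for illustration, so the `phi` and `eta` below are only finite approximations of the maps in the answer):

```python
import math

def phi(x, n_bits=20):
    """Truncated version of the injection R -> 2^N from the answer."""
    y = 0.5 + math.atan(x) / math.pi   # sends x into (0, 1)
    digits = set()
    for i in range(1, n_bits + 1):     # read off binary digits b_1, b_2, ...
        y *= 2
        bit = int(y)
        y -= bit
        if bit:
            digits.add(i)
    return digits

def eta(A):
    """Truncated version of the injection 2^N -> R (base-3 sum), for finite A."""
    return sum(3.0 ** -i for i in A)

# Distinct inputs land at distinct outputs on these samples:
assert phi(0.0) == {1}            # 0.5 in binary is 0.1000...
assert phi(1.0) == {1, 2}         # 0.75 in binary is 0.11000...
assert eta({1}) != eta({2, 3})    # 1/3 vs 4/27: base 3 avoids carry collisions
```

The base-3 sum in `eta` is why the answer uses $3^i$ rather than $2^i$: in base 3 with digits 0 and 1 there is no analogue of the $0.0111\ldots = 0.1000\ldots$ collision, so distinct subsets give distinct reals.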
-
http://www.reference.com/browse/neighbor
# Neighbor-joining
In bioinformatics, neighbor-joining is a bottom-up clustering method used for the construction of phylogenetic trees. Usually used for trees based on DNA or protein sequence data, the algorithm requires knowledge of the distance between each pair of taxa (e.g. species or sequences) in the tree.
## The algorithm
Neighbor-joining is an iterative algorithm. Each iteration consists of the following steps:
1. Based on the current distance matrix calculate the matrix Q (explained below).
2. Find the pair of taxa in Q with the lowest value. Create a node on the tree that joins these two taxa (i.e. join the closest neighbors, as the algorithm name implies).
3. Calculate the distance of each of the taxa in the pair to this new node.
4. Calculate the distance of all taxa outside of this pair to the new node.
5. Start the algorithm again, considering the pair of joined neighbors as a single taxon and using the distances calculated in the previous step.
### The Q-matrix
Based on a distance matrix relating r taxa, calculate Q as follows:
$Q(i,j)=(r-2)\,d(i,j)-\sum_{k=1}^r d(i,k) - \sum_{k=1}^r d(j,k)$
d(i,j) is the distance between taxa i and j.
For example, if we have four taxa (A, B, C, D) and the following distance matrix:
|   | A  | B | C | D |
|---|----|---|---|---|
| A | —  | — | — | — |
| B | 7  | — | — | — |
| C | 11 | 6 | — | — |
| D | 14 | 9 | 7 | — |
We obtain the following values for the Q matrix:
|   | A   | B   | C   | D |
|---|-----|-----|-----|---|
| A | —   | —   | —   | — |
| B | −40 | —   | —   | — |
| C | −34 | −34 | —   | — |
| D | −34 | −34 | −40 | — |
In the example above, two pairs of taxa have the lowest value, namely −40. We can select either of them for the second step of the algorithm. We follow the example assuming that we joined taxa A and B together.
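The Q-matrix computation above can be sketched in Python (the dict-based distance representation and the `dist` helper are just for this illustration); it reproduces the −40 and −34 entries of the example:

```python
# Distance matrix for the four taxa A, B, C, D from the example above.
taxa = ["A", "B", "C", "D"]
d = {("A", "B"): 7, ("A", "C"): 11, ("A", "D"): 14,
     ("B", "C"): 6, ("B", "D"): 9, ("C", "D"): 7}

def dist(i, j):
    """Symmetric lookup into the distance matrix."""
    return 0 if i == j else d.get((i, j), d.get((j, i)))

r = len(taxa)

def Q(i, j):
    # Q(i,j) = (r-2) d(i,j) - sum_k d(i,k) - sum_k d(j,k)
    return ((r - 2) * dist(i, j)
            - sum(dist(i, k) for k in taxa)
            - sum(dist(j, k) for k in taxa))

# Reproduces the Q matrix in the text:
assert Q("A", "B") == Q("C", "D") == -40
assert Q("A", "C") == Q("A", "D") == Q("B", "C") == Q("B", "D") == -34
```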
### Distance of the pair members to the new node
For each neighbor in the pair just joined, use the following formula to calculate the distance to the new node (f and g are the paired taxa and u is the newly generated node):
$d(f,u)=\frac{1}{2}d(f,g)+\frac{1}{2(r-2)} \left[ \sum_{k=1}^r d(f,k) - \sum_{k=1}^r d(g,k) \right]$
In the example above, this formula gives a distance of 6 between A and the new node, and a distance of 1 between B and the new node.
### Distance of the other taxa to the new node
For each taxon not considered in the previous step, we calculate the distance to the new node as follows:
$d(u,k)=\frac{1}{2} \left[ d(f,k)-d(f,u) \right] + \frac{1}{2} \left[ d(g,k)-d(g,u) \right]$
where u is the new node, k is the node for which we want to calculate the distance and f and g are the members of the pair just joined.
Following the example, the distance between C and the new node is 5. Also, the distance between the new node and D is 8.
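Continuing the worked example (taxa A–D, with f = A and g = B joined into a new node u), a short sketch of the two distance formulas reproduces the values 6, 1, 5 and 8 quoted above:

```python
# Same example distance matrix as before; join f = A and g = B into u.
taxa = ["A", "B", "C", "D"]
d = {("A", "B"): 7, ("A", "C"): 11, ("A", "D"): 14,
     ("B", "C"): 6, ("B", "D"): 9, ("C", "D"): 7}

def dist(i, j):
    return 0 if i == j else d.get((i, j), d.get((j, i)))

r = len(taxa)
f, g = "A", "B"
row = lambda i: sum(dist(i, k) for k in taxa)   # row sum of the matrix

# d(f,u) = d(f,g)/2 + [sum_k d(f,k) - sum_k d(g,k)] / (2(r-2)), and
# symmetrically for d(g,u).
d_fu = 0.5 * dist(f, g) + (row(f) - row(g)) / (2 * (r - 2))
d_gu = 0.5 * dist(f, g) + (row(g) - row(f)) / (2 * (r - 2))
assert (d_fu, d_gu) == (6.0, 1.0)      # the 6 and 1 quoted in the text

# d(u,k) = [d(f,k) - d(f,u)]/2 + [d(g,k) - d(g,u)]/2
def d_u(k):
    return 0.5 * (dist(f, k) - d_fu) + 0.5 * (dist(g, k) - d_gu)

assert d_u("C") == 5.0 and d_u("D") == 8.0   # entries of the next matrix
```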
### The next recursion step
From the steps above, the following matrix will result (AB acting as a new taxon):
|    | AB | C | D |
|----|----|---|---|
| AB | —  | — | — |
| C  | 5  | — | — |
| D  | 8  | 7 | — |
We can start the procedure anew taking this matrix as the original distance matrix. In our example, it suffices to do one more step of the recursion to obtain the complete tree.
## Pros and cons of the NJ method
Neighbor-joining is based on the minimum-evolution criterion for phylogenetic trees, i.e. the topology that gives the least total branch length is preferred at each step of the algorithm. However, neighbor-joining may not find the true tree topology with least total branch length because it is a greedy algorithm that constructs the tree in a step-wise fashion. Even though it is sub-optimal in this sense, it has been extensively tested and usually finds a tree that is quite close to the optimal tree. Nevertheless, it has been largely superseded in phylogenetics by methods that do not rely on distance measures and offer superior accuracy under most conditions.
The main virtue of neighbor-joining relative to these other methods is its computational efficiency. That is, neighbor-joining is a polynomial-time algorithm. It can be used on very large data sets for which other means of phylogenetic analysis (e.g. minimum evolution, maximum parsimony, maximum likelihood) are computationally prohibitive. Unlike the UPGMA algorithm for phylogenetic tree reconstruction, neighbor-joining does not assume that all lineages evolve at the same rate (molecular clock hypothesis) and produces an unrooted tree. Rooted trees can be created by using an outgroup and the root can then effectively be placed on the point in the tree where the edge from the outgroup connects.
Furthermore, neighbor-joining is statistically consistent under many models of evolution. Hence, given data of sufficient length, neighbor-joining will reconstruct the true tree with high probability.
## References
• Atteson K (1997). "The performance of neighbor-joining algorithms of phylogeny reconstruction", pp. 101–110. In Jiang, T., and Lee, D., eds., Lecture Notes in Computer Science, 1276, Springer-Verlag, Berlin. COCOON '97.
• Gascuel O, Steel M (2006). "Neighbor-joining revealed". Mol Biol Evol 23 (11): 1997-2000.
• Mihaescu R, Levy D, Pachter L (2006). " Why neighbor-joining works".
• Saitou N, Nei M (1987). "The neighbor-joining method: a new method for reconstructing phylogenetic trees". Mol Biol Evol 4 (4): 406-425.
• Studier JA, Keppler KJ (1988). "A note on the Neighbor-Joining algorithm of Saitou and Nei". Mol Biol Evol 5 (6): 729-731.
## External links
• The Neighbor-Joining Method — a tutorial
Last updated on Saturday October 11, 2008 at 14:00:31 PDT (GMT -0700)
http://simple.wikipedia.org/wiki/Energy_level
# Energy level
An energy level is simply one of the different states of potential energy available to electrons in an atom. A quantum mechanical system can only be in certain states, so that only certain energy levels are possible. The term energy level is most commonly used in reference to the electron configuration in atoms or molecules. In other words, the energy spectrum can be quantized (see continuous spectrum for the more general case).
As with classical potentials, the potential energy is usually set to zero at infinity, leading to a negative potential energy for bound electron states.
Energy levels are said to be degenerate, if the same energy level is obtained by more than one quantum mechanical state. They are then called degenerate energy levels.
The following sections of this article give an overview of the most important factors that determine the energy levels of atoms and molecules.
## Atoms
### Intrinsic energy levels
#### Orbital state energy level
Assume an electron in a given atomic orbital. The energy of its state is mainly determined by the electrostatic interaction of the (negative) electron with the (positive) nucleus. The energy levels of an electron around a nucleus are given by:
$E_n = - h c R_{\infty} \frac{Z^2}{n^2}$,
where $R_{\infty}$ is the Rydberg constant (giving energies typically between 1 eV and $10^3$ eV), $Z$ is the charge of the atom's nucleus, $n$ is the principal quantum number, $e$ is the charge of the electron, $h$ is Planck's constant, and $c$ is the speed of light.
The Rydberg levels depend only on the principal quantum number $n$.
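As a quick illustration of the formula (the numerical value of the Rydberg energy $h c R_{\infty} \approx 13.6$ eV is an assumption supplied here, not taken from this article):

```python
# Sketch of E_n = -h c R_inf Z^2 / n^2, using the standard value of the
# Rydberg energy h c R_inf ~ 13.605693 eV (assumed, not from the article).
RYDBERG_ENERGY_EV = 13.605693

def energy_level(n, Z=1):
    """Energy (in eV) of level n for a hydrogen-like atom of nuclear charge Z."""
    return -RYDBERG_ENERGY_EV * Z ** 2 / n ** 2

assert abs(energy_level(1) + 13.605693) < 1e-9   # hydrogen ground state
assert energy_level(2) > energy_level(1)         # levels rise toward 0 with n
```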
#### Fine structure splitting
Fine structure arises from relativistic kinetic energy corrections, spin-orbit coupling (an electrodynamic interaction between the electron's spin and motion and the nucleus's electric field) and the Darwin term (contact term interaction of s-shell electrons inside the nucleus). Typical magnitude $10^{-3}$ eV.
#### Hyperfine structure
Spin-nuclear-spin coupling (see hyperfine structure). Typical magnitude $10^{-4}$ eV.
#### Electrostatic interaction of an electron with other electrons
If there is more than one electron around the atom, electron-electron-interactions raise the energy level. These interactions are often neglected if the spatial overlap of the electron wavefunctions is low.
### Energy levels due to external fields
#### Zeeman effect
Main page: Zeeman effect
The interaction energy is: $U = - \mu B$ with $\mu = q L / 2m$
#### Zeeman effect taking spin into account
This takes both the magnetic dipole moment due to the orbital angular momentum and the magnetic momentum arising from the electron spin into account.
Due to relativistic effects (the Dirac equation), the magnetic moment arising from the electron spin is $\mu = - \mu_B g s$, with $g$ the gyromagnetic factor (about 2). The total moment is $\mu = \mu_l + g \mu_s$, so the interaction energy becomes $U_B = - \mu B = \mu_B B (m_l + g m_s)$.
#### Stark effect
Interaction with an external electric field (see Stark effect).
## Molecules
Roughly speaking, a molecular energy state, i.e. an eigenstate of the molecular Hamiltonian, is the sum of an electronic, vibrational, rotational, nuclear and translational component, such that:
$E = E_\mathrm{electronic}+E_\mathrm{vibrational}+E_\mathrm{rotational}+E_\mathrm{nuclear}+E_\mathrm{translational}\,$
where $E_\mathrm{electronic}$ is an eigenvalue of the electronic molecular Hamiltonian (the value of the potential energy surface) at the equilibrium geometry of the molecule.
The molecular energy levels are labelled by the molecular term symbols.
The specific energies of these components vary with the specific energy state and the substance.
In molecular physics and quantum chemistry, an energy level is a quantized energy of a bound quantum mechanical state.
## Crystalline Materials
Crystalline materials are often characterized by a number of important energy levels. The most important ones are the top of the valence band, the bottom of the conduction band, the Fermi energy, the vacuum level, and the energy levels of any defect states in the crystals.
http://mathoverflow.net/questions/14128?sort=newest
## Approximation to divergent integral
Hi everyone,
I'm a physicist working on stochastic processes and I've come up against an integral that I'm not able to approximate using steepest descent (I don't have a large or small parameter), integration by parts, or any of the other common techniques. So I'd very much appreciate the input of any applied mathematicians!
The integral is $$f(x) = \int_{x}^{\infty} \frac{\Phi(t)}{t^{5}}dt$$ with $\Phi(t) = e^{i \pi t^{2} / 2}[C(t) + i S(t)]$. Here, $C(t)$ and $S(t)$ are the Fresnel integrals defined by $$C(t) + i S(t) = \int_{0}^{t} e^{i \pi u^{2} / 2} du\ .$$ What I really want is the behaviour of $f(x)$ for small $x$. But, the integral is formally divergent if $x = 0$.
Made a little progress with integration by parts, but I wasn't able to entirely separate my integral into convergent pieces.
-
You say that you don't have a small parameter, but then also say that you want the behavior for small $x$. What happens if you define $x=\epsilon y$ and then look for the leading terms for fixed $y$ and small $\epsilon$? – Yossi Farjoun Feb 4 2010 at 11:15
## 2 Answers
Hi David,
That answer was perfect -- thanks so much. So it can be approximated directly (with a messy constant), but it doesn't have a Laurent series or anything similar.
That is fine for my needs though.
Cheers,
Irwin
-
First, cut off the tail towards infinity:
$$f(x) = \int_{x}^1 \frac{\Phi(t)}{t^5} dt + \int_1^{\infty} \frac{\Phi(t)}{t^5} dt.$$
The second term is a constant, so you can compute it numerically once and for all.
Write $$e^{i \pi u^2/2} = 1 + \frac{i \pi}{2} u^2 +R(u)$$ and $$\int_{0}^t e^{i \pi u^2/2} du = t + \frac{i \pi}{6} t^3 + \int_{0}^t R(u) du.$$
So $$\frac{\Phi(t)}{t^5} = \left( t^{-4} + \frac{i \pi}{6} t^{-2} + t^{-5} \int_{0}^t R(u)\, du \right) \left( 1 + \frac{i \pi}{2} t^2 + R(t) \right)=$$ $$t^{-4} + \frac{2 \pi i}{3} t^{-2} + \left( t^{-4} R(t) - \frac{\pi^2}{12} + t^{-5} \int_{0}^t R(u)\, du + \cdots \right).$$
So $$\int_{x}^1 \frac{\Phi(t)}{t^5} dt = \frac{1}{3}\left( x^{-3} - 1 \right) + \frac{2 \pi i}{3} \left( x^{-1} -1 \right) + \int_{x}^1 \left( t^{-4} R(t) - \frac{\pi^2}{12} + t^{-5} \int_{0}^t R(u)\, du + \cdots \right) dt.$$
The integrands in the last term are bounded functions, and they are being integrated over bounded domains, so there is no problem approximating them numerically.
If you want an asymptotic formula, instead of a numerical approximation, you should be able to keep taking more terms out to get a formula like $$f(x) = \frac{1}{3} x^{-3} + \frac{2 \pi i}{3} x^{-1} + C + a_1 x + a_2 x^2 + \cdots + a_n x^n + O(x^{n+1}) \quad \mathrm{as} \ x \to 0.$$ You probably won't be able to get the constant $C$ in closed form, because it involves all those convergent integrals. The other $a_i$ will be gettable in closed form, although they will get worse and worse as you compute more of them.
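As a numerical sanity check of the two singular terms (my own addition, not part of the answer: the Fresnel integral is evaluated by naive composite Simpson quadrature, and the constant $-7\pi^2/30$ comes from carrying the Taylor expansion of $\Phi(t)/t^5$ one order further):

```python
import cmath, math

def Phi(t, n=4000):
    # Phi(t) = e^{i pi t^2/2} (C(t) + i S(t)); the Fresnel integral of
    # e^{i pi u^2/2} over [0, t] is done by composite Simpson's rule
    h = t / n
    s = 1.0 + cmath.exp(1j * math.pi * t * t / 2)   # the two endpoint values
    for k in range(1, n):
        u = k * h
        s += (4 if k % 2 else 2) * cmath.exp(1j * math.pi * u * u / 2)
    return cmath.exp(1j * math.pi * t * t / 2) * (s * h / 3)

t = 0.05
# subtract the two singular terms; what remains should approach the O(1) constant
remainder = Phi(t) / t**5 - t**-4 - (2j * math.pi / 3) * t**-2
# expanding the series one more order gives the constant -7*pi^2/30 (about -2.303)
```

At $t = 0.05$ the remainder already sits within a few percent of $-7\pi^2/30$, consistent with the asymptotic form above.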
-
http://mathhelpforum.com/advanced-algebra/66374-problems-topology-question.html
# Thread:
1. ## Problems with Topology question
I'm stuck on a question. It asks us to consider the set $Y = [-1,1]$ as a subspace of $\mathbb{R}$, where $\mathbb{R}$ has the standard topology, and to decide which of the following sets are open in $\mathbb{R}$ and which are open in $Y$.
The first set is $A = \{x : 1/2 < |x| < 1\}$, which I think is $(1/2, 1) \cup (-1, -1/2)$. Now, that is open in $\mathbb{R}$, because it is the union of two open sets in $\mathbb{R}$. But is it open in $Y$? How do I tell if it is? How about the following cases:
$(1/2, 1] \cup [-1, -1/2)$
$[1/2, 1) \cup (-1, -1/2]$
$[1/2, 1] \cup [-1, -1/2]$
My guess is that they are all closed in $\mathbb{R}$, but all open in $Y$. Am I right? Why/why not?
Thanks in advance.
2. Originally Posted by HTale
I'm stuck on a question. It states to consider the set $Y = [-1,1]$ as a subspace of $\mathbb{R}$, it being the standard topology. And now I have to consider which are open in $\mathbb{R}$ and which are open in $Y$.
The first set is $A = \{x : 1/2 < |x| < 1\}$, which I think is $(1/2, 1) \cup (-1, -1/2)$. Now, that is open in $\mathbb{R}$, because it is the union of two open sets in $\mathbb{R}$. But is it open in $Y$? How do I tell if it is? How about the following cases:
$(1/2, 1] \cup [-1, -1/2)$
$[1/2, 1) \cup (-1, -1/2]$
$[1/2, 1] \cup [-1, -1/2]$
My guess is that they are all closed in $\mathbb{R}$, but all open in $Y$. Am I right? Why/why not?
Thanks in advance.
A subset $X$ is open in $[-1,1]$ iff $X = Y \cap [-1,1]$ where $Y$ is open in $\mathbb{R}$.
Now, $A = (-1,-1/2) \cup (1/2,1)$. The set $(-1,-1/2) = (-1,-1/2)\cap [-1,1]$ and $(1/2,1) = (1/2,1)\cap [-1,1]$. Thus, these are open sets and so the union of two open sets is open.
3. Originally Posted by HTale
It states to consider the set $Y = [-1,1]$ as a subspace of $\mathbb{R}$, it being the standard topology. And now I have to consider which are open in $\mathbb{R}$ and which are open in $Y$.
$B=(1/2, 1] \cup [-1, -1/2)$
$C=[1/2, 1) \cup (-1, -1/2]$
$D=[1/2, 1] \cup [-1, -1/2]$
My guess is that they are all closed in $\mathbb{R}$, but all open in $Y$. Am I right? Why/why not?
By “being the standard topology”, I assume you mean the relative topology on $\mathbb{Y}=[-1,1]$.
I added letters to your set for reference.
Set $B$ is neither open nor closed in $\mathbb{R}$, however it is open in $\mathbb{Y}$. Can you explain this?
Set $C$ is neither open nor closed in $\mathbb{R}$. It is not open in $\mathbb{Y}$ because it contains a boundary point $\frac{1}{2}$ and it is not closed because $-1$ is a limit point not in the set.
Set $D$ is closed in $\mathbb{R}$, and it is closed in $\mathbb{Y}$.
Can you explain this?
4. Originally Posted by Plato
By “being the standard topology”, I assume you mean the relative topology on $\mathbb{Y}=[-1,1]$.
I added letters to your set for reference.
Set $B$ is neither open nor closed in $\mathbb{R}$, however it is open in $\mathbb{Y}$. Can you explain this?
Set $C$ is neither open nor closed in $\mathbb{R}$. It is not open in $\mathbb{Y}$ because it contains a boundary point $\frac{1}{2}$ and it is not closed because $-1$ is a limit point not in the set.
Set $D$ is closed in $\mathbb{R}$, and it is closed in $\mathbb{Y}$.
Can you explain this?
Thank you very much, I can explain all of them now.
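These relative-openness arguments can also be checked mechanically. A small sketch of my own (the two-sided $\varepsilon$-probe below is enough for finite unions of intervals): a point $x$ of a set $S \subseteq Y$ is relatively interior when every point of $Y$ within $\varepsilon$ of $x$ also lies in $S$.

```python
def in_Y(x):
    # the subspace Y = [-1, 1]
    return -1 <= x <= 1

def rel_interior_at(member, x, eps=1e-6):
    # x is relatively interior: nearby points that belong to Y must be in the set
    return all(member(t) for t in (x - eps, x, x + eps) if in_Y(t))

# B = (1/2, 1] U [-1, -1/2): open in Y, since points beyond +1 and -1 are not in Y
B = lambda x: (0.5 < x <= 1) or (-1 <= x < -0.5)
# C = [1/2, 1) U (-1, -1/2]: not open in Y, since 1/2 is a boundary point in the set
C = lambda x: (0.5 <= x < 1) or (-1 < x <= -0.5)
```

Probing $B$ at $\pm 1$ succeeds because the points beyond $\pm 1$ are excluded from $Y$; probing $C$ at $1/2$ fails, matching Plato's boundary-point argument.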
http://en.wikipedia.org/wiki/Cauchy's_integral_formula
# Cauchy's integral formula
Not to be confused with Cauchy's integral theorem.
In mathematics, Cauchy's integral formula, named after Augustin-Louis Cauchy, is a central statement in complex analysis. It expresses the fact that a holomorphic function defined on a disk is completely determined by its values on the boundary of the disk, and it provides integral formulas for all derivatives of a holomorphic function. Cauchy's formula shows that, in complex analysis, "differentiation is equivalent to integration": complex differentiation, like integration, behaves well under uniform limits – a result denied in real analysis.
## Theorem
Suppose U is an open subset of the complex plane C, f : U → C is a holomorphic function and the closed disk D = { z : | z − z0| ≤ r} is completely contained in U. Let $\gamma$ be the circle forming the boundary of D. Then for every a in the interior of D:
$f(a) = \frac{1}{2\pi i} \oint_\gamma \frac{f(z)}{z-a}\, dz$
where the contour integral is taken counter-clockwise.
The proof of this statement uses the Cauchy integral theorem and similarly only requires f to be complex differentiable. Since the reciprocal of the denominator of the integrand in Cauchy's integral formula can be expanded as a power series in the variable (a − z0), it follows that holomorphic functions are analytic. In particular f is actually infinitely differentiable, with
$f^{(n)}(a) = \frac{n!}{2\pi i} \oint_\gamma \frac{f(z)}{(z-a)^{n+1}}\, dz.$
This formula is sometimes referred to as Cauchy's differentiation formula.
The circle γ can be replaced by any closed rectifiable curve in U which has winding number one about a. Moreover, as for the Cauchy integral theorem, it is sufficient to require that f be holomorphic in the open region enclosed by the path and continuous on its closure.
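Both displayed formulas are easy to verify numerically. A sketch of my own, taking $f(z) = e^z$, a point $a$ inside the unit circle, and the trapezoid rule (which converges geometrically for periodic integrands like these):

```python
import cmath, math

def circle_integral(h, center=0.0, radius=1.0, n=512):
    # contour integral of h over the circle, counter-clockwise, trapezoid rule
    total = 0.0
    for k in range(n):
        z = center + radius * cmath.exp(2j * math.pi * k / n)
        dz = 1j * (z - center) * (2 * math.pi / n)
        total += h(z) * dz
    return total

f = cmath.exp
a = 0.3 + 0.2j

# Cauchy's integral formula: f(a) = (1/2 pi i) * contour integral of f(z)/(z-a)
fa = circle_integral(lambda z: f(z) / (z - a)) / (2j * math.pi)
# differentiation formula with n = 3; every derivative of exp at a equals e^a
f3 = circle_integral(lambda z: f(z) / (z - a)**4) * math.factorial(3) / (2j * math.pi)
```

Both `fa` and `f3` agree with $e^a$ to near machine precision.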
## Proof sketch
By using the Cauchy integral theorem, one can show that the integral over C (or the closed rectifiable curve) is equal to the same integral taken over an arbitrarily small circle around a. Since f(z) is continuous, we can choose a circle small enough on which f(z) is arbitrarily close to f(a). On the other hand, the integral
$\oint_C \frac{1}{z-a} \,dz = 2 \pi i,$
over any circle C centered at a. This can be calculated directly via a parametrization (integration by substitution) $z(t) = a + \varepsilon e^{it}$ where 0 ≤ t ≤ 2π and ε is the radius of the circle.
Letting ε → 0 gives the desired estimate
$\begin{align} \left | \frac{1}{2 \pi i} \oint_C \frac{f(z)}{z-a} \,dz - f(a) \right | &= \left | \frac{1}{2 \pi i} \oint_C \frac{f(z)-f(a)}{z-a} \,dz \right |\\[.5em] &\leq \frac{1}{2 \pi} \int_0^{2\pi} \frac{ |f(z(t)) - f(a)| } {\varepsilon} \,\varepsilon\,dt\\[.5em] &\leq \max_{|z-a|=\varepsilon}|f(z) - f(a)| \xrightarrow[\varepsilon\to 0]{} 0. \end{align}$
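With the parametrization above, $dz = i\varepsilon e^{it}\,dt$, so the integrand collapses to the constant $i$ and the factors of $\varepsilon$ cancel. A short numerical sketch of my own (the values of $a$, $\varepsilon$ and $n$ are arbitrary):

```python
import cmath, math

a, eps, n = 0.7 - 0.1j, 0.5, 1000
dt = 2 * math.pi / n
I = 0.0
for k in range(n):
    t = k * dt
    z = a + eps * cmath.exp(1j * t)          # z(t) = a + eps * e^{it}
    dz = 1j * eps * cmath.exp(1j * t) * dt   # dz = i * eps * e^{it} dt
    I += dz / (z - a)
# each summand is exactly i*dt, so I = 2 pi i independently of the radius
```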
## Example
Surface of the real part of the function $g(z) = z^2/(z^2 + 2z + 2)$ and its singularities, with the contours described in the text.
Consider the function
$g(z)=\frac{z^2}{z^2+2z+2}$
and the contour described by |z| = 2, call it C.
To find the integral of g(z) around the contour, we need to know the singularities of g(z). Observe that we can rewrite g as follows:
$g(z)=\frac{z^2}{(z-z_1)(z-z_2)}$
where $z_1=-1+i,$ $z_2=-1-i.$
Clearly the poles become evident, their moduli are less than 2 and thus lie inside the contour and are subject to consideration by the formula. By the Cauchy-Goursat theorem, we can express the integral around the contour as the sum of the integral around z1 and z2 where the contour is a small circle around each pole. Call these contours C1 around z1 and C2 around z2.
Now, around C1, f is analytic (since the contour does not contain the other singularity), and this allows us to write f in the form we require, namely:
$f(z)=\frac{z^2}{z-z_2}$
and now
$\oint_C \frac{f(z)}{z-a}\, dz=2\pi i\cdot f(a)$
$\oint_{C_1} \frac{\left(\frac{z^2}{z-z_2}\right)}{z-z_1}\,dz =2\pi i\frac{z_1^2}{z_1-z_2}.$
Doing likewise for the other contour:
$f(z)=\frac{z^2}{z-z_1},$
$\oint_{C_2} \frac{\left(\frac{z^2}{z-z_1}\right)}{z-z_2}\,dz =2\pi i\frac{z_2^2}{z_2-z_1}.$
The integral around the original contour C then is the sum of these two integrals:
$\begin{align} \oint_C \frac{z^2}{z^2+2z+2}\,dz &{}= \oint_{C_1} \frac{\left(\frac{z^2}{z-z_2}\right)}{z-z_1}\,dz + \oint_{C_2} \frac{\left(\frac{z^2}{z-z_1}\right)}{z-z_2}\,dz \\[.5em] &{}= 2\pi i\left(\frac{z_1^2}{z_1-z_2}+\frac{z_2^2}{z_2-z_1}\right) \\[.5em] &{}= 2\pi i(-2) \\[.3em] &{}=-4\pi i. \end{align}$
An elementary trick using partial fraction decomposition:
$\oint_C g(z)dz =\oint_C \left(1-\frac{1}{z-z_1}-\frac{1}{z-z_2}\right)dz =0-2\pi i-2\pi i =-4\pi i$
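As a cross-check (my own addition), parametrizing $|z| = 2$ and summing numerically reproduces $-4\pi i$; the trapezoid rule converges geometrically here because $g$ is analytic in an annulus around the contour:

```python
import cmath, math

def g(z):
    return z * z / (z * z + 2 * z + 2)

n = 4096
I = 0.0
for k in range(n):
    z = 2 * cmath.exp(2j * math.pi * k / n)   # the contour |z| = 2
    dz = 1j * z * (2 * math.pi / n)
    I += g(z) * dz
# the residue computation above predicts I = -4 pi i
```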
## Consequences
The integral formula has broad applications. First, it implies that a function which is holomorphic in an open set is in fact infinitely differentiable there. Furthermore, it is an analytic function, meaning that it can be represented as a power series. The proof of this uses the dominated convergence theorem and the geometric series applied to
$f(\zeta) = \frac{1}{2\pi i}\int_C \frac{f(z)}{z-\zeta}\,dz.$
The formula is also used to prove the residue theorem, which is a result for meromorphic functions, and a related result, the argument principle. It is known from Morera's theorem that the uniform limit of holomorphic functions is holomorphic. This can also be deduced from Cauchy's integral formula: indeed the formula also holds in the limit and the integrand, and hence the integral, can be expanded as a power series. In addition the Cauchy formulas for the higher order derivatives show that all these derivatives also converge uniformly.
The analog of the Cauchy integral formula in real analysis is the Poisson integral formula for harmonic functions; many of the results for holomorphic functions carry over to this setting. No such results, however, are valid for more general classes of differentiable or real analytic functions. For instance, the existence of the first derivative of a real function need not imply the existence of higher order derivatives, nor in particular the analyticity of the function. Likewise, the uniform limit of a sequence of (real) differentiable functions may fail to be differentiable, or may be differentiable but with a derivative which is not the limit of the derivatives of the members of the sequence.
## Generalizations
### Smooth functions
A version of Cauchy's integral formula holds for smooth functions as well, as it is based on Stokes' theorem. Let D be a disc in C and suppose that f is a complex-valued C1 function on the closure of D. Then (Hörmander 1966, Theorem 1.2.1)
$f(\zeta) = \frac{1}{2\pi i}\int_{\partial D} \frac{f(z) dz}{z-\zeta} + \frac{1}{2\pi i}\iint_D \frac{\partial f}{\partial \bar{z}}(z) \frac{dz\wedge d\bar{z}}{z-\zeta}.$
One may use this representation formula to solve the inhomogeneous Cauchy–Riemann equations in D. Indeed, if φ is a function in D, then a particular solution f of the equation is a holomorphic function outside the support of μ. Moreover, if in an open set D,
$d\mu = \frac{1}{2\pi i}\phi \, dz\wedge d\bar{z}$
for some φ ∈ Ck(D) (k ≥ 1), then $f(\zeta,\bar{\zeta})$ is also in Ck(D) and satisfies the equation
$\frac{\partial f}{\partial\bar{z}} = \phi(z,\bar{z}).$
The first conclusion is, succinctly, that the convolution μ∗k(z) of a compactly supported measure with the Cauchy kernel
$k(z) = \operatorname{p.v.}\frac{1}{z}$
is a holomorphic function off the support of μ. Here p.v. denotes the principal value. The second conclusion asserts that the Cauchy kernel is a fundamental solution of the Cauchy–Riemann equations. Note that for smooth complex-valued functions f of compact support on C the generalized Cauchy integral formula simplifies to
$f(\zeta) = \frac{1}{2\pi i}\iint \frac{\partial f}{\partial \bar{z}}\frac{dz\wedge d\bar{z}}{z-\zeta},$
and is a restatement of the fact that, considered as a distribution, $(\pi z)^{-1}$ is a fundamental solution of the Cauchy-Riemann operator $\partial/\partial\overline{z}$.[1] The generalized Cauchy integral formula can be deduced for any bounded open region X with C1 boundary ∂X from this result and the formula for the distributional derivative of the characteristic function χX of X:
${\partial \chi_X\over \partial \overline z}= {i\over 2} \oint_{\partial X} dz,$
where the distribution on the right hand side denotes contour integration along ∂X.[2]
### Several variables
In several complex variables, the Cauchy integral formula can be generalized to polydiscs (Hörmander 1966, Theorem 2.2.1). Let D be the polydisc given as the Cartesian product of n open discs D1, ..., Dn:
$D = \prod_{i=1}^n D_i.$
Suppose that f is a holomorphic function in D continuous on the closure of D. Then
$f(\zeta) = \frac{1}{(2\pi i)^n}\int\cdots\iint_{\partial D_1\times\dots\times\partial D_n} \frac{f(z_1,\dots,z_n)}{(z_1-\zeta_1)\dots(z_n-\zeta_n)}dz_1\dots dz_n$
where ζ=(ζ1,...,ζn) ∈ D.
### In real algebras
The Cauchy integral formula is generalizable to real vector spaces of two or more dimensions. The insight into this property comes from geometric algebra, where objects beyond scalars and vectors (such as planar bivectors and volumetric trivectors) are considered, and a proper generalization of Stokes theorem.
Geometric calculus defines a derivative operator $\nabla = \hat e_i \partial_i$ under its geometric product—that is, for a $k$-vector field $\psi(\vec r)$, the derivative $\nabla \psi$ generally contains terms of grade $k+1$ and $k-1$. For example, a vector field ($k=1$) generally has in its derivative a scalar part, the divergence ($k=0$), and a bivector part, the curl ($k=2$). This particular derivative operator has a Green's function:
$G(\vec r, \vec r') = \frac{1}{S_n} \frac{\vec r - \vec r'}{|\vec r - \vec r'|^n}$
where $S_n$ is the surface area of a unit ball in the space (that is, $S_2=2\pi$, the circumference of a circle with radius 1, and $S_3 = 4\pi$, the surface area of a sphere with radius 1). By definition of a Green's function, $\nabla G(\vec r, \vec r') = \delta(\vec r- \vec r')$. It is this useful property that can be used, in conjunction with the generalized Stokes theorem:
$\oint_{\partial V} d\vec S \; f(\vec r) = \int_V d\vec V \; \nabla f(\vec r)$
where, for an $n$-dimensional vector space, $d\vec S$ is an $(n-1)$-vector and $d\vec V$ is an $n$-vector. The function $f(\vec r)$ can, in principle, be composed of any combination of multivectors. The proof of Cauchy's integral theorem for higher dimensional spaces relies on using the generalized Stokes theorem on the quantity $G(\vec r,\vec r') f(\vec r')$ and use of the product rule:
$\oint_{\partial V'} G(\vec r, \vec r')\; d\vec S' \; f(\vec r') = \int_V \left([\nabla' G(\vec r, \vec r')] f(\vec r') + G(\vec r, \vec r') \nabla' f(\vec r')\right) \; d\vec V$
When $\nabla f = 0$, $f(\vec r)$ is called a monogenic function, the generalization of holomorphic functions to higher-dimensional spaces—indeed, it can be shown that the Cauchy–Riemann condition is just the two-dimensional expression of the monogenic condition. When that condition is met, the second term in the right-hand integral vanishes, leaving only
$\oint_{\partial V'} G(\vec r, \vec r')\; d\vec S' \; f(\vec r') = \int_V [\nabla' G(\vec r, \vec r')] f(\vec r') = -\int_V \delta(\vec r - \vec r') f(\vec r') \; d\vec V =- i_n f(\vec r)$
where $i_n$ is that algebra's unit $n$-vector, the pseudoscalar. The result is
$f(\vec r) =- \frac{1}{i_n} \oint_{\partial V} G(\vec r, \vec r')\; d\vec S \; f(\vec r') = -\frac{1}{i_n} \oint_{\partial V} \frac{\vec r - \vec r'}{S_n |\vec r - \vec r'|^n} \; d\vec S \; f(\vec r')$
Thus, as in the two-dimensional (complex analysis) case, the value of an analytic (monogenic) function at a point can be found by an integral over the surface surrounding the point, and this is valid not only for scalar functions but vector and general multivector functions as well.
## See also
• Cauchy–Riemann equations
• Methods of contour integration
• Nachbin's theorem
• Morera's theorem
• Mittag-Leffler's theorem
• Green's function generalizes this idea to the non-linear setup
• Schwarz integral formula
• Parseval–Gutzmer formula
## References
• Ahlfors, Lars (1979), Complex analysis (3rd ed.), McGraw Hill, ISBN 978-0-07-000657-7 .
• Hörmander, Lars (1966), An introduction to complex analysis in several variables, Van Nostrand
• Hörmander, Lars (1983), The Analysis of Linear Partial Differential Operators I, Springer, ISBN 3-540-12104-8
• Doran, Chris; Lasenby, Anthony (2003), Geometric Algebra for Physicists, Cambridge University Press, ISBN 978-0-521-71595-9
http://quant.stackexchange.com/questions/5981/what-are-the-advantages-disadvantages-of-these-approaches-to-deal-with-volatilit/7033
# What are the advantages/disadvantages of these approaches to deal with volatility surface?
I would like to know if someone could provide a summarized view of the advantages and disadvantages of these approaches to the volatility surface, such as:
1. Local vol
2. Stochastic Vol (Heston/SVI)
3. Parametrization (Carr and Wu approach)
-
Please embed the links to the definitions (wiki) of the different models and avoid to much abbreviations... – SRKX♦ Jan 14 at 10:54
## 4 Answers
The volatility surface is just a representation of European option prices as a function of strike and maturity in a different "unit" - namely implied volatility (the term implied volatility has to be made precise by the model used to convert prices (quotes) into implied volatilities - for example, we may consider log-normal vols or normal vols). Volatility is often preferred over prices, e.g., when interpolating European option prices (although this may introduce difficulties like arbitrage violations, see, e.g., http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1964634 ).
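To make the "change of unit" concrete, here is a generic sketch of my own (not tied to any of the papers above): a Black–Scholes (log-normal) call price, and a bisection that inverts it back to implied volatility, using the fact that the call price is strictly increasing in $\sigma$:

```python
import math

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def bs_call(S, K, T, r, sigma):
    # Black-Scholes call price
    d1 = (math.log(S / K) + (r + sigma * sigma / 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0):
    # the price is monotone in sigma, so bisection converges
    for _ in range(100):
        mid = (lo + hi) / 2
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

p = bs_call(100.0, 110.0, 0.5, 0.02, 0.25)    # price a call at sigma = 25%
iv = implied_vol(p, 100.0, 110.0, 0.5, 0.02)  # round-trip back to 0.25
```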
A local volatility model can generate a perfect fit to the implied volatility surface via Dupire's formula as long as the given surface is arbitrage free. In other words: the model can calibrate to a surface of European option prices. Since this calibration is done by an analytic formula the calibration is exact and fast.
Parametric models, like stochastic volatility models, are usually more difficult to calibrate to European option prices. The formulas derived for calibration are usually more complex, and often the model does not produce an exact fit. Obviously, the reason to use a stochastic volatility model (or parametric model) is not the need to calibrate to European options. The reason is to capture other effects of the model. An important effect to be considered is the forward volatility. Let $t=0$ denote today. Given the model is calibrated to the implied volatility surface, what does the volatility surface generated by the model look like at $t=t_1$ in state $S(t_1) = S_1$? The forward volatility describes the option price conditional on a future point in time. It is important for "Options on Options" and "Forward Start Options". In other words: more exotic products depend on this feature. While European options only depend on the terminal distributions conditional on today, such a feature depends on the dynamics (conditional transition probabilities). In a local volatility model the forward volatility shows a possibly unrealistic behavior: it flattens out, and the smile vanishes. A stochastic volatility model can produce a more realistic forward volatility surface, where the smile is almost self-similar.
Another aspect is sensitivities (hedge ratios): using a local volatility model may imply a too rigid assumption on how the volatility surface depends on the spot. This then has implications for the calculation of sensitivities (greeks). Afaik, this was the main motivation to introduce the SABR model (which is a stochastic volatility model used to interpolate the implied volatility surface): to have a more realistic behavior w.r.t. greeks.
To summarize:
Local Volatility Model:
• Advantage: Fast and exact calibration to the volatility surface.
• Suitable for products which only depend on terminal distribution of the underlying (no "conditional properties").
• Not suitable for more complex products which depend heavily on "conditional properties".
Stochastic Volatility Model:
• Advantage: Can produce more realistic dynamics, e.g. forward volatility. Can produce more realistic hedge dynamics.
• Disadvantage: For products which depend only on terminal distributions the fit of the volatility surface may be too poor.
-
Thanks for the very detailed and summarized answer, truly appreciate your help! – AZhu Jan 20 at 3:45
I do not have the time right now to write up a summary concise enough but at the same time trying to really touch on all the points that have to be made to delineate the above. Instead I point you to couple papers that are concise enough to skim over in a matter of minutes in order to understand the differences.
Jim Gatheral on Local vs Stoch Vols: http://www.math.ku.dk/~rolf/teaching/ctff03/Gatheral.1.pdf
Here a great paper that touches on pretty much all the models out there (the widely published ones obviously): http://wiredspace.wits.ac.za/bitstream/handle/10539/1495/lisa.pdf;jsessionid=CD3D69EBEFFD957D0B7BB5293E92C7DA?sequence=1
By the way, stochastic and parameterized models are often not clearly divided; there are a number of stochastic vol models that use parameterization and calibration techniques. What you may want to focus on instead is what kind of volatility the model is based on, for example, unobserved integrated volatility or instantaneous volatility. Also you want to really focus on the volatility dynamics and whether the dynamics of a certain model correspond with the observed market dynamics. This was one of the major reasons the SABR model came about.
Just my two cents ;-)
-
Thanks @Freddy! – AZhu Jan 20 at 3:55
The best overview I have seen so far (in terms of theory AND practicality) is in
Paul Wilmott On Quantitative Finance Second Edition, pp. 826 - 830:
You can find the start: Here
...and the closing overview and summary: Here
(Unfortunately I haven't found the whole excerpt on the web but I would be happy to include a link in this answer [as long as it is referring to a legal copy] - just let me know in the comments).
-
Thanks @vonjd, I will do some readings on the book then. – AZhu Jan 20 at 3:55
I can comment since Jim Gatheral is actually my mentor.
The advantages and disadvantages of Stochastic Vol, Local Vol and Parameterized Vol really depend on what you are using them for. If you are using them to price exotic options, Stochastic vol is generally a more accepted principle than local vol. The reason is that stochastic vol assumes time-homogeneous volatility surfaces, whereas local vol assumes that the forward vol skew you calculate today is what the vol skew will look like in the future, plus some stochastic term. Parameterized vol is generally recommended for pricing VIX options/variance swaps as well as for market makers.
One example to show the difference between local vol and stochastic vol is hedging a one-touch option, which is equivalent to an American binary option. The hedge would be to buy a call spread maturing shortly and keep buying the call spread until the one-touch expires or is exercised. Because under local vol we assume the ATM skew in the future will be the forward skew we see today, the price of the call spread will be lower. Under stochastic vol, we assume the ATM skew today is what we will see in the future; therefore, the price of the call spread will be more expensive under stochastic vol than local vol.
-
Thanks a lot for your response, I think your response is definitely very practical and is very helpful from a practitioner stand of view. And could you kindly elaborate on why the parametrized vol is useful for VIX options? Is it because of the easiness in computation? However, isn't parametrization generate arbitrage-free surface only under certain conditions? Thanks for your clarification – AZhu Jan 20 at 3:54
http://math.stackexchange.com/questions/20906/how-to-find-an-integer-solution-for-general-diophantine-equation-ax-by-cz?answertab=active
How to find an integer solution for general Diophantine equation ax + by + cz + dt… = N
I know how to solve `ax + by = c` using the Extended Euclidean Algorithm. But with more than two variables, I'm lost :(. Verifying that an integer solution exists is easy, since we only need to check that the gcd of the coefficients divides N. Other than that, how can we find an integer solution for this equation?
Thanks,
Chan
-
Use the extended Euclidean algorithm several more times. – Qiaochu Yuan Feb 8 '11 at 0:13
@Qiaochu Yuan: for each pair (a,b) then ((a,b), c)? – Chan Feb 8 '11 at 1:01
1 Answer
Suppose you need to solve $$a_1x_1 + a_2x_2 + a_3x_3 = c\qquad (1)$$ in integers.
I claim this is equivalent to solving $$\gcd(a_1,a_2)y + a_3x_3 = c\qquad (2)$$ in integers.
To see this, note that any solution to (1) produces a solution to (2): letting $g=\gcd(a_1,a_2)$, we can write $a_1 = gk_1$, $a_2=gk_2$, so then we have: $$c = a_1x_1 + a_2x_2 + a_3x_3 = g(k_1x_1) + g(k_2x_2) + a_3x_3 = g(k_1x_1+k_2x_2) + a_3x_3,$$ solving (2). Conversely, suppose you have a solution to (2). Since we can find $r$ and $s$ such that $g=ra_1+sa_2$, we have $$c = gy+a_3x_3 = (ra_1+sa_2)y +a_3x_3 = a_1(ry) + a_2(sy) + a_3x_3,$$ yielding a solution to (1).
This should tell you how to solve the general case $$a_1x_1+\cdots+a_nx_n = c$$ in terms of $\gcd(a_1,\ldots,a_n)$, which can in turn be computed recursively.
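The recursive reduction can be written out directly in code. A minimal sketch (function names are my own), applying the extended Euclidean algorithm at each step of the recursion:

```python
def ext_gcd(a, b):
    # returns (g, x, y) with a*x + b*y = g = gcd(a, b)
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def solve_linear(coeffs, c):
    # one integer solution of a1*x1 + ... + an*xn = c, or None if none exists
    if len(coeffs) == 1:
        a = coeffs[0]
        return [c // a] if c % a == 0 else None
    g, r, s = ext_gcd(coeffs[0], coeffs[1])
    sub = solve_linear([g] + list(coeffs[2:]), c)   # the reduction (1) -> (2)
    if sub is None:
        return None
    y = sub[0]
    # expand gy back into a1*(r*y) + a2*(s*y)
    return [r * y, s * y] + sub[1:]

sol = solve_linear([6, 10, 15], 1)   # solvable, since gcd(6, 10, 15) = 1
```

Plugging the returned solution back into the equation confirms it; an inconsistent instance such as $4x_1 + 6x_2 = 1$ returns `None`.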
-
Great thanks ;) – Chan Feb 8 '11 at 5:45
http://mathoverflow.net/revisions/46860/list
This is called the fundamental theorem of affine geometry. Let $f : E \to E'$ be a map between affine spaces over a field $K$. Suppose that
1. $f$ is bijective;
2. $\dim E=\dim E'\ge 2$;
3. If $a, b, c\in E$ are aligned, then so are $f(a), f(b), f(c)$.
Then $f$ is semi-affine: fix some $a_0\in E$; then there exists a field automorphism $\sigma$ of $K$ such that the map $h: v\mapsto f(a_0+v)-f(a_0)$ (which goes from the vector space attached to ${E}$ to that attached to $E'$) is additive and satisfies $h(\lambda v)=\sigma(\lambda)h(v)$ for all $v$ and all $\lambda \in K$. I don't have a URL for this theorem; I found it in Jean Fresnel: Méthodes Modernes en Géométrie, Exercise 3.5.7. But I think it is in any standard textbook on affine geometry.
When $K=\mathbb R$, it is known that $K$ has no non-trivial field automorphism, so your $f$ is an affine function, hence continuous. If $K=\mathbb C$, as pointed out by Kevin in the comments above, take any non-trivial automorphism of $\mathbb C$; then you get a semi-affine map $\mathbb C^n \to \mathbb C^n$ which will not be affine, and not even continuous (if $\sigma$ is not conjugation).
http://physics.stackexchange.com/users/6389/christoph?tab=activity&sort=comments
Christoph
1,978 reputation; 38 badges; member for 1 year, 5 months; seen 10 mins ago; profile views 250
168 Comments
28m comment Why distinguish between row and column vectors? Why not define vectors to be things that act on covectors to produce numbers instead? Because infinite-dimensional spaces need not be reflexive
May13 comment Physical interpretation of Poisson bracket properties@dmckee: notation is domain-specific, and it's quite common to use curlies for Poisson brackets, both in introductory and advanced literature
May10 comment If particles can find themselves spontaneously arranged, isn't entropy actually decreasing?@LubošMotl: care to comment on the revision to my answer?
May10 comment Are there problems solvable with Newtonian physics, GR and QM?two historically relevant examples would be the perihelion precession of mercury (GR vs Newton) and black body radiation (classical Rayleigh–Jeans law vs quantum Planck's law)
May8 comment If particles can find themselves spontaneously arranged, isn't entropy actually decreasing?I'd also like to throw in some nice quotes from this paper: As the thermodynamic entropy is not measurable except when the process is reversible, the second law remains useless as a computational tool. and It is (has?) not been possible to show that the statistical entropy is identical to the thermodynamic entropy in general.
May8 comment If particles can find themselves spontaneously arranged, isn't entropy actually decreasing?@LubošMotl: Entropy is well-defined for time-dependent processes. Indeed, it has to be well-defined because the second law of thermodynamics says how it changes during such processes - I don't think that necessarily follows: In particular, there are formulations of the 2nd law that explicitly state There exists for every system in equilibrium a property called entropy, and for irreversible processes the 2nd law only makes a statement about initial and final equilibrium states
May8 comment If particles can find themselves spontaneously arranged, isn't entropy actually decreasing?@Arnaud: it is indeed hard to define entropy if you don't at least assume local equilibrium; take the dual to your experiment: confine the gas to one side of the box, remove the wall and let it expand; because the rate of expansion is fixed, at each point in time you could re-introduce the wall (freeze the instantaneous system parameter volume) and define the entropy of the expanding gas as the entropy of that equilibrium system
Apr29 comment English translation of Helmholtz' paper: “On the Physical Significance of the Principle of Least Action”I just took a quick look at the paper, and the notation is funny - the same letters are used, but with different meanings: the coordinates are $p$ instead of $q$, which in turn is used for the velocities $\dot q$; momenta are $c$ instead of $p$, potentials are $F$ instead of $V$, the Lagrangian is called $H$ and has the opposite sign, ie corresponds to $-L$; kinetic energy is called $L$ instead of $T$; the Hamiltonian is called $H'$ instead of $H$
Apr28 comment Rate of spontaneous tachyon emission@BenCrowell: the point is that there is no local environment of available tachyons: the interaction is non-local and the environment is basically the whole spacetime (or rather the subset at space-like distance); I should probably edit my answer to make this more explicit; I've yet to think about how general relativity (where there's not necessarily a single critical frame) changes the picture
Apr28 comment Forces as One-Forms and Magnetismvelocity-dependent forces cannot be represented as 1-forms on $M$, but rather as 1-forms on $TM$ (the differential of the Lagrangian) or, factoring out a force of inertia, as sections of the pullback of $T^*M$ over $TM\to M$
Apr27 comment Is this statement about quantum mechanics valid?
Apr22 comment Why do we still need to think of gravity as a force?@ejrb: it somewhat depends on your framework how special gravity is - eg in string theory gravity is less special than in loop quantum gravity, and the teleparallel reformulation of general relativity makes it into a proper force instead of an inertial one
Apr12 comment Why isn't temperature measured in Joules?@Kaz: degree of freedom is not used as a unit - we just count them and divide the energy by that number
Apr12 comment Why isn't temperature measured in Joules?I agree in principle, but keep in mind that if equipartition holds, measuring temperature in units of energy does make sense: it corresponds to average energy per degree of freedom, and as the latter is a unit-less cardinal number, the densities end up with the same unit
Apr12 comment Nonseparable Hilbert space@SudipPaul: quote 1 implies that in general, you can no longer define operators on just a countable subset; quote 2 implies that Hahn-Banach becomes non-constructive: Zorn's lemma will tell you that there is a linear extension, but you have no way to actually get it; quote 3 implies that there's no (countable!) Hilbert basis, which is often assumed in QM when you treat infinite spaces like finite ones; quote 4 implies that restriction to separable spaces doesn't have a known down-side as far as physics go
Apr9 comment How hot is the water in the pot?+∞ for actually modelling the system; small nit: naming a rate of energy $Q$ is a bit unfortunate...
Apr9 comment How hot is the water in the pot?@Taro: for $c \gg t$ the denominator will be approximately $1 + f(t/c)^b$ which explains why $f$ and $c$ can be varied side by side without changing the result and suggests that we haven't found the right physical parameters yet
Apr9 comment How hot is the water in the pot?
Apr9 comment How hot is the water in the pot?@Taro: the basic idea behind using hyperbolic tangent is logistic growth; fitting the measured pot temperatures with $A \tanh B(t−C) + D$ looks good to me
Apr6 comment How hot is the water in the pot?@Taro: reality is messy, and we're dealing with an effective theory; if fudge factors help you get a better fit, use them even if you can't derive them from first principles; explaining where they come from comes after that
http://math.stackexchange.com/questions/tagged/spherical-coordinates+integral
# Tagged Questions
1answer
37 views
### Triple Integral Spherical Coordinates
So I have to compute the triple integral of this: $\int\int\int \frac{1}{1+x^2+y^2+z^2}$ and it says the equation of the sphere is $x^2 + y^2 + z^2 = z$ which is just an elongated sphere running ...
1answer
48 views
### How to integrate a vector function in spherical coordinates?
How to integrate a vector function in spherical coordinates? In my specific case, it's an electric field on the axis of charged ring (see image below), the integral is pretty easy, but I don't ...
1answer
155 views
### Integration on the unit sphere
I have an integral on the unit sphere as follows. $$I(\mathbf{s}_1, \mathbf{s}_2) = \int_{\mathbb{S}^2} f(\mathbf{x} \cdot \mathbf{s}_1)f(\mathbf{x}\cdot\mathbf{s}_2)d\mathbf{x}$$ where the ...
0answers
151 views
### Stokes' and Divergence Theorem Problems
I have 2 questions on stokes and divergence theorem each. I think I have done both and I just want to make sure that I did them correctly. Question 1 Let C be the boundary of the surface ...
1answer
83 views
### Integration on a sphere
I have an integral at hand which has the form of $$I = \int_{u\in \mathbb{S}^2} f(\mathbf{u}\cdot \mathbf{s}_1) f(\mathbf{u}\cdot \mathbf{s}_2) d\mathbf{u}$$ where $\mathbb{S}^2$ is the unit sphere ...
1answer
63 views
### Integral from Sphere to Disc
Suppose one has an integral of the form $\int_{S_1^{d-1}} f(\phi(v)) d \text{vol}_{S_1^{d-1}}(v)$. Here $S_1^{d-1}\subset \mathbb{R}^d$ is the unit sphere. Let $B_1^{d-1}\subset\mathbb{R}^{d-1}$ be ...
1answer
173 views
### What am I actually doing when I integrate using spherical coordinates in $\mathbb{R}^3$?
When learning vector fields and using Green's Theorem with the Jacobian to find the area of a level surface, I actually realized that most of the examples shown in my book would be much easier to ...
2answers
600 views
### Find the average value of this function
Find the average value of $e^{-z}$ over the ball $x^2+y^2+z^2 \leq 1$.
1answer
262 views
### integral of a spherically symmetric 3-dimensional function over all space
I'm very sorry because it may be a very basic question, but I'm not able to solve it for sure, nor to find an answer on stackexchange or elsewhere. I have to calculate \$ \int \int ...
1answer
218 views
### Monte carlo integration in spherical coordinates
I was playing around with writing a code for Montecarlo integration of a function defined in spherical coordinates. As a first simple rapid test I decided to write a test code to obtain the solid ...
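The solid-angle test mentioned in the last snippet can be sketched with a hypothetical minimal example (my own toy code, not the asker's): estimate the solid angle of a cone of half-angle $\theta$ by uniform sampling on the unit sphere and compare against the closed form $2\pi(1-\cos\theta)$.

```python
import math
import random

# Hypothetical test: Monte Carlo estimate of the solid angle of a cone
# with half-angle theta, by sampling directions uniformly on the sphere.
random.seed(0)
theta = math.pi / 3
n = 200_000
# z uniform in [-1, 1] gives a uniform point distribution on the unit
# sphere, so the hit fraction times 4*pi estimates the solid angle.
hits = sum(1 for _ in range(n) if random.uniform(-1.0, 1.0) > math.cos(theta))
estimate = 4 * math.pi * hits / n
exact = 2 * math.pi * (1 - math.cos(theta))   # ~ pi for theta = pi/3
assert abs(estimate - exact) / exact < 0.05
```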
http://mathoverflow.net/questions/65086?sort=votes
## Motivation: $T_{\mathbb P^2}$ isn't an extension of line bundles
Here's a trick to show that the tangent bundle $T$ of $\mathbb P^2$ is not an extension of line bundles. If it were, we would have a short exact sequence $$\def\O{\mathcal O} 0\to \O(a)\to T\to \O(b)\to 0$$ for some integers $a$ and $b$. Then we can compute the total Chern class $$\begin{align*} c(T)& =c(\O(a))\cdot c(\O(b)) \\ &= (1+aH)(1+bH) \\ &= 1+(a+b)H+abH^2, \end{align*}$$
where $H=c_1(\O(1))$ is the class of a hyperplane.
On the other hand, we have the Euler sequence $$0\to \O\to \O(1)^3\to T\to 0$$ which tells us that $$\begin{align*} c(T)&=c(T)\cdot c(\O)=c(\O(1)^3)\\ &=c(\O(1))^3= 1+3H+3H^2. \end{align*}$$
Now observe that there do not exist integers $a$ and $b$ with $a+b=ab=3$ (such $a$ and $b$ would be roots of $t^2-3t+3$, which has negative discriminant), so $T$ cannot be an extension of line bundles.
## The Question
More generally, whenever we have an extension of vector bundles $0\to L\to E\to M\to 0$, we have $c(E)=c(L)\cdot c(M)$. So to show that $E$ has no sub-bundles (or equivalently, no quotient bundles), it suffices to show that $c(E)$ doesn't factor. The question is whether the converse is true:
Suppose $E$ is a rank $r$ vector bundle on a (smooth quasi-projective) scheme (or manifold) $X$ so that $c(E)=c(L)c(M)$ for vector bundles $L$ and $M$ of rank $i$ and $r-i$, respectively. Must $E$ have a sub-bundle of rank $i$ or $r-i$?
Remark 1: The phrasing is a bit strange compared to the natural-sounding "If the total Chern class of a vector bundle factors, does it have a sub-bundle?" The point is that knowing the rank of $E$ is very important. We showed that $T_{\mathbb P^2}$ has no sub-bundles, but $O(1)^3$ has the same total Chern class and clearly has lots of sub-bundles.
Remark 2: Does either $L$ or $M$ have to be a sub-bundle of $E$? NO! For example, on $\mathbb P^1$, we have that $$c(\O(1)\oplus \O(-1)) = (1+H)(1-H)=1 = c(\O)c(\O)$$ but $\O(1)\oplus \O(-1)$ doesn't have a sub-bundle isomorphic to $\O$ (because it has no non-vanishing sections).
Remark 3: What is the answer in the case $X=\mathbb P^n$?
-
## 2 Answers
The answer for projective spaces is negative. I think the simplest example is 2-bundles on $\mathbb{P}^3(\mathbb{C})$. In that case the Schwarzenberger condition is that $c_1c_2$ should be even. Atiyah and Rees have proved that for any pair $(c_1,c_2)$ satisfying this there are holomorphic vector bundles $\xi$ with $c_1(\xi)=c_1,c_2(\xi)=c_2$ (see Atiyah, Rees, Vector bundles on projective 3-space. Invent. Math. 35 (1976), 131–153.). The number of topologically distinct such bundles is 1 when $c_1$ is odd and 2 when $c_1$ is even. So e.g. there is a topologically nonsplit 2-bundle on $\mathbb{P}^3$ with total Chern class $(1+ka)(1-ka)$ where $a=c_1(\mathcal{O}(1))$.
The topological classification of 2-bundles on $\mathbb{P}^3$ and the existence of a holomorphic structure on them are also proved in Okonek, Schneider, Spindler, Vector bundles on complex projective spaces, chapter 1, 6.3.
-
If you are also asking about the case of topological complex vector bundles over manifolds, consider the case $X=S^5$. There are no nontrivial rank $1$ bundles, but there is a nontrivial rank $2$ bundle, and of course its Chern class $1$ factors as $1\times 1$.
-
http://mathoverflow.net/questions/77366?sort=votes
## Do K-equivalent rings have isomorphic Nil-Terms?
Let $R,S$ be rings and $f: R \rightarrow S$ a ring homomorphism such that $f$ induces an isomorphism on the $K$-theory of the rings. The map $f$ also induces a ring homomorphism $f[t]: R[t] \rightarrow S[t]$, which presumably does not have to induce an isomorphism on K-theory. Equivalently, $f$ induces a map $Nil_i(R) \rightarrow Nil_i(S)$, which presumably does not have to be an isomorphism either. However, I have not been able to find an example of such rings. So I am looking for two rings $R,S$ and a ring homomorphism $f: R \rightarrow S$ such that $K_{\star}(f)$ is an isomorphism, but $K_{\star}(f[t])$ is not an isomorphism. Does anybody know anything about this?
-
## 2 Answers
In
MR2657430 (2011g:19003) Cortiñas, G.; Haesemeyer, C.; Walker, Mark E.; Weibel, C. Bass' NK groups and cdh-fibrant Hochschild homology. Invent. Math. 181 (2010), no. 2, 421–448.
The authors exhibit a ring (actually an algebra over a field of characteristic 0) for which $K_{*}(R)= K_{*}(R[t])$ but $K_{*}(R)\neq K_{*}(R[t,x])$, so take just $S=R[t]$.
-
If $f:R\to S$ is a map of finitely generated commutative $k$-algebras for some field $k$ (maybe you should take $k$ of characteristic 0) and $K_{\ast}(f)$ is an isomorphism, then $K_{\ast}(f[t])$ is an isomorphism! Moreover, $i:R\to R[t]$ induces an isomorphism in $K$-theory.
Maybe I missed some condition on $k$, but this property is called "homotopy invariance" of algebraic $K$-theory.
-
2
Algebraic K-theory is not homotopy invariant – Fernando Muro Oct 6 2011 at 15:58
Hi Fernando, If you take the category of smooth schemes over a noetherian scheme then $K$-theory is actually $\mathbb{A}^{1}$-invariant!! That is a general statement! – Ilias Oct 6 2011 at 16:35
4
If $R$ is a regular ring, then the map $R \to R[t]$ induces an isomorphism on $K$-theory. – Ulrich Pennig Oct 6 2011 at 16:58
http://physics.stackexchange.com/questions/tagged/software?page=2&sort=newest&pagesize=50
# Tagged Questions
The software tag has no wiki summary.
0answers
39 views
### Does Celestia take the lunar orbit precession into account?
Does the lunar orbit plane rotate (8 year period precession)? Does Celestia take this into account while drawing the moon orbit?
2answers
723 views
### Software to simulate and visualize atoms?
Not sure if this is a physics or chemistry question. But if the motion of atoms and their particles can be described by quantum mechanics, then is there software that simulates full atoms and their ...
1answer
355 views
### Physics simulation software to perform this very specific experiment
I need a physics simulation software that allows me to perform the following experiment: 1. Create a frictionless ramp/terrain defined by a parametric function; 2. Create a ball in an arbitrary ...
3answers
297 views
### Numerical simulation of mechanics problem
Say I have a planet and shoot something with a given velocity, which is a significant portion of the escape velocity, in a given angle into the sky. It has some initial velocity and there is the force ...
1answer
18 views
### Code to sample from & integrate light of a cluster?
Are there any publicly available codes to sample from an initial mass function (common ones... Kroupa, Chabrier, Salpeter) to construct a cluster, then use stellar models to generate an integrated ...
2answers
122 views
### Intensity loss due to vignetting
I was trying to get an expression for the loss of intensity due to vignetting in a simple optical system, and got a fairly complex integral. I was wondering if there's an easier way, or any book that ...
0answers
170 views
### Is there any known weakness with GNUplot fits? (especially with gauss) [closed]
my faculty push pretty hard on us using GNUplot but it gives me somewhat unconvincing results while attempting to fit a gauss-like function. Is there any known weakness with GNUplot fits or should I ...
1answer
439 views
### Stellar evolution simulation engine or software
Is there any general purpose stellar evolution simulation engine or software? Something to throw in properties of the star and to watch how (and why) they change along the timeline - with or without ...
3answers
309 views
### Numerical software to manipulate a light beam in its plane wave representation?
Any light field can be expressed as a sum of plane waves. Such an ensemble of plane waves is called the plane wave spectrum of the light field. The plane wave spectrum is the Fourier transform of the ...
0answers
111 views
### Spin 1/2 finite-difference field simulator?
Is there a finite-difference field simulator for spin 1/2 fields, something like meep for electromagnetism (spin 1)? Looking for something free (GNU, MIT or other open/free style license) and easy ...
3answers
39 views
### How to get started in Astronomy (UK based) [closed]
I have always been interested in space and astronomy (in my youth - I wanted to be an astronaut). However for various reasons, I never quite got started. I now want to get started - small but ...
2answers
1k views
### Where can I find simulation software for electricity and magnets?
Is there easily-available* software to simulate coils, solenoids, and other magnetic and electromagnetic devices? I'd like to play around with some design ideas, such as Halbach arrays, but physics ...
1answer
7 views
### What is the current evaluation of a sky map application for mobile devices?
Which sky map application for mobile devices have the best "feature satisfaction"-to-investment ratio? I would like to have a comparison between "sky map" applications for mobile devices and it ...
1answer
48 views
### What is the format for “local catalog” files used by JSkyCat?
I am trying to use JSkyCat to mark a set of coordinates on a FITS image, and have found the dialog for choosing a catalog file to load. However, I am having trouble finding any documentation on what ...
3answers
153 views
### Software for Creating Custom Star Charts?
I need to produce custom star charts for my website. I want to be able to do the following: Specify a region, maybe a constellation or just an arbitrary region Specify what appears on the chart ...
4answers
166 views
### What free software is there for observing the sky (sky map software)
I used until now only stellarium.org and I'm curious if there is any other software that is better than stellarium. By better, I mean: doesn't have high system ...
1answer
54 views
### How to identify the objects in an astrophoto, and what portion of the sky it covers?
Given an astrophoto with a resolution between 0.5 and 5 arcseconds per pixel, which ways exist to identify the direction of view, field of view, and objects in the picture? I believe most amateur ...
2answers
295 views
### Software for simulating 3D Newtonian dynamics of simple geometric objects (with force fields)
I'm looking for something short of a molecular dynamics package, where I can build up simple geometric shapes with flexible linkages/etc and simulate the consequences of electrostatic repulsion ...
1answer
104 views
### Spectral energy distribution fitting tools or routines [closed]
I have observed magnitudes and fluxes for an object in different wavelengths from optical to mm. Now I need a tool, routine or something like that to fit a spectral energy distribution (SED) and ...
3answers
364 views
### How to convert a FITS file to .xls Excel file?
We are trying to determine the isophots in elliptical galaxies in order to check De-Vaucouleurs law. To do so, we want to convert the data from a FITS file to Excel and analyze it using Excel math ...
10answers
75 views
### Are there websites or programmes that permit a simulation of the night sky in the past and the future on an ordinary computer?
Are there websites or programs that permit a simulation of the night sky in the past and the future on an ordinary computer? (For the past, I would be content with objects visible to the naked eye.) ...
1answer
31 views
### When taking a sequence of exposures for stacking/coaddition, what dither patterns are most commonly desired? Why?
When taking a sequence of exposures for stacking/coaddition, what dither patterns are most commonly desired? When visiting a telescope, what default dither patterns would a visiting astronomer like to ...
2answers
199 views
### What open-source n-body codes are available and what are their features?
I'm interested in doing simulations with large numbers of particles and need a good n-body code. Are there any out there in the public domain that are open-source and what are their strengths and ...
1answer
1k views
### What programming languages would be helpful for a physicist to know? [closed]
From the vantage point of a physicist and the kind of problems he would like a computer program to solve, what are the essential programming languages that a physicist should know. I know C++ and I ...
1answer
331 views
### IRIS alternative on mac?
I need to process some images taken from a telescope to determine the intensity of an astroid. This way I can determine the rotation period of this asteroid. The pictures were taken the usual way ...
3answers
300 views
### Supergravity calculation using computer algebra system in early days
I was having a look at the original paper on supergravity by Ferrara, Freedman and van Nieuwenhuizen available here. The abstract has an interesting line saying that Added note: This term has now ...
5answers
473 views
### Online physics collaboration tools
I.e. online discussion with your friends. A forum is probably too overkill in this case. Yet so far nothing can beat direct communication. Important feature: the ability to archive discussions. We ...
2answers
2k views
### Software for geometrical optics
Is there any good software for construction optical path's in geometrical optics. More specifically I want features like: draw $k \in \mathbb{N}$ objects $K_1,\dots,K_n$ with indices of refraction ...
1answer
88 views
### Physics II Video Courseware Recommendations
I'm looking for something to supplement my Physics II class. Last year I started using these video lectures to supplement my Calculus class and it helped tremendously. I also turned to this ...
3answers
541 views
### Is there a nice tool to plot graphs of paper citations? [closed]
I would like a tool which allows me to enter some paper citation, and then will begin drawing a graph, where each paper is linked to other papers that cite the original paper or are cited by it. It ...
1answer
443 views
### Software for simulating supersonic aerodynamics [closed]
Could you please suggest the software, where I can load my 3D model and see how it behave on various conditions (speed - preferably including supersonic, temperature, pressure)? Both free & ...
2answers
511 views
### Trying to model pinball physics for game AI
I'm working on an AI for a pinball-related video game. The ultimate goal for the system is that the AI will be able to fire a flipper at the appropriate time to aim a pinball at a particular point on ...
1answer
285 views
### Software to calculate forces between magnets
I am working on a complex configuration of magnets and every time I make an experiment something unforseen happens. Now I believe I could speed up the development by sitting down and calculating the ...
2answers
218 views
### Isotope properties plotting tool?
I'm looking for something that will generate scatter plots comparing different properties of isotopes. Ideally I'd like some web page that lets me select axis and click go but a CSV file with lost of ...
2answers
234 views
### Searching books and papers with equations
Sometimes I may come up with an equation in mind, so I want to search for the related material. It may be the case that I learn it before but forget the name, or, there is no name for the equation ...
14answers
21k views
### What software programs are used to draw physics diagrams, and what are their relative merits?
People undoubtedly use a variety of programs to draw diagrams for physics, but I am not familiar with many of them. I usually hand-draw things in GIMP. GIMP is powerful in some regards, but it's ...
6answers
1k views
### Physics and Computer Science
Not sure if this is a 'real' question, but what is the relation between physics and computer science? A lot of physicists are also computer scientists and vice versa. My professor has a PhD in Physics ...
7answers
1k views
### Software for physics calculations
What is some good free software for doing physics calculations? I'm mainly interested in symbolic computation (something like Mathematica, but free).
1answer
484 views
### Why is GNUplot so pervasive in Physics when there are much more modern tools? [closed]
First, I want to say upfront that this question need not dissolve into arguments and discussion. This question can and should have a correct answer, please don't respond with your opinions. GNUplot ...
http://mathoverflow.net/questions/3455?sort=oldest
## Do convolution and multiplication satisfy any nontrivial algebraic identities?
For (suitable) real- or complex-valued functions f and g on a (suitable) abelian group G, we have two bilinear operations: multiplication -
(f.g)(x) = f(x)g(x),
and convolution -
(f*g)(x) = ∫_{y+z=x} f(y) g(z)
Both operations define commutative ring structures (possibly without identity) with the usual addition. (For that to make sense, we have to find a subset of functions that is closed under addition, multiplication, and convolution. If G is finite, this is not an issue, and if G is compact, we can consider infinitely differentiable functions, and if G is R^d, we can consider the Schwartz class of infinitely differentiable functions that decay at infinity faster than all polynomials, etc. As long as our class of functions doesn't satisfy any additional nontrivial algebraic identities, it doesn't matter what it is precisely.)
My question is simply: do these two commutative ring structures satisfy any additional nontrivial identities?
A "trivial" identity is just one that's a consequence of properties mentioned above: e. g., we have the identity
f*(g.h) = (h.g)*f,
but that follows from the fact that multiplication and convolution are separately commutative semigroup operations.
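(As an illustration of my own, not part of the question: on a finite cyclic group G = Z/n, where convolution is circular, the "trivial" consequences of commutativity and associativity are easy to check numerically.)

```python
import numpy as np

# Work over G = Z/n, with real-valued functions represented as length-n arrays.
rng = np.random.default_rng(0)
n = 7
f, g, h = rng.standard_normal((3, n))

def conv(u, v):
    """(u*v)(x) = sum over y+z=x (mod n) of u(y) v(z)."""
    w = np.zeros(n)
    for y in range(n):
        for z in range(n):
            w[(y + z) % n] += u[y] * v[z]
    return w

# Both operations are separately commutative and associative...
assert np.allclose(conv(f, g), conv(g, f))
assert np.allclose(conv(conv(f, g), h), conv(f, conv(g, h)))
# ...which already yields the "trivial" identity f*(g.h) = (h.g)*f:
assert np.allclose(conv(f, g * h), conv(h * g, f))
```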
Edit: to clarify, an "algebraic identity" here must be of the form "A(f1, ... fn) = B(f1, ..., fn)," where A and B are composed of the following operations:
• addition
• negation
• additive identity (0)
• multiplication
• convolution
(Technically, a more correct phrasing would be "for all f1, ..., fn: A(f1, ... fn) = B(f1, ..., fn)," but the universal quantifier is always implied.) While it's true that the Fourier transform exchanges convolution and multiplication, that doesn't give valid identities unless you could somehow write the Fourier transform as a composition of the above operations, since I'm not giving you the Fourier transform as a primitive operation.
Edit 2: Apparently the above is still pretty confusing. This question is about identities in the sense of universal algebra. I think what I'm really asking for is the variety generated by the set of abelian groups endowed with the above five operations. Is it different from the variety of algebras with 5 operations (binary operations +, *, .; unary operation -; nullary operation 0) determined by identities saying that (+, -, 0, *) and (+, -, 0, .) are commutative ring structures?
-
Minor comment - you may want to fix f*(g.h) = (h.g)*h. Sorry but I can't edit. – Alon Amit Oct 30 2009 at 17:38
I fixed it, hopefully you meant `f*(g.h) = (h.g)*f` rather then `f*(g.h) = (f.g)*h` (the last one is incorrect, of course). – Ilya Nikokoshev Oct 30 2009 at 19:18
Thanks, it was a typo. – Darsh Ranjan Oct 30 2009 at 21:24
## 6 Answers
I think the answer to the original question (i.e. are there any universal algebraic identities relating convolution and multiplication over arbitrary groups, beyond the "obvious" ones?) is negative, though establishing it rigorously is going to be tremendously tedious.
There are a couple steps involved. To avoid technicalities let's restrict attention to discrete finite fields G (so we can use linear algebra), and assume the characteristic of G is very large.
Firstly, given any purported convolution/multiplication identity relating a bunch of functions, one can use homogeneity and decompose that identity into homogeneous identities, in which each function appears the same number of times in each term. (For instance, if one has an identity involving both cubic expressions of a function f and quadratic expressions of f, one can separate into a cubic identity and a quadratic identity.) So without loss of generality one can restrict attention to homogeneous identities.
Next, by depolarisation, one should be able to reduce further to the case of multilinear identities: identities involving a bunch of functions f1, f2, ..., fn, with each term being linear in each of the functions. (I haven't checked this carefully but it should be true, especially since we can permit the functions to be complex valued.)
It is convenient just to consider evaluation of these identities at the single point 0 (i.e. scalar identities rather than functional identities). One can actually view functional identities as scalar identities after convolving (or taking inner products of) the functional identity with one additional test function.
Now (after using the distributive law as many times as necessary), each term in the multilinear identity consists of some sequence of applications of the pointwise product and convolution operations (no addition or subtraction), evaluated at zero, and then multiplied by a scalar constant. When one expands all of that, what one gets is a sum (in the discrete case) of the tensor product f1 ⊗ ... ⊗ fn of all the functions over some subspace of G^n. The exact subspace is given by the precise manner in which the pointwise product and convolution operators are applied.
The only way a universal identity can hold, then, is if the weighted sum of the indicator functions of these subspaces (counting multiplicity) vanishes. (Note that finite linear combinations of tensor products span the space of all functions on G^n when G is finite.) But when the characteristic of G is large enough, the only way that can happen is if each subspace appears in the identity with a net weight of zero. (Indeed, look at a subspace of maximal dimension in the identity; for G large enough characteristic, it contains points that will not be covered by any other subspace in the identity, and so the only way the identity can hold is if the net weight of that subspace is zero. Now remove all terms involving this subspace and iterate.)
So the final thing to do is to show that a given subspace can arise in two different ways from multiplication and convolution only in the "obvious" manner, i.e. by exploiting associativity of pointwise multiplication and of convolution. This looks doable by an induction argument but I haven't tried to push it through.
-
Thanks, this helps a lot. – Darsh Ranjan Nov 2 2009 at 18:00
I don't see how to carry out the depolarisation, though. How would that apply to something like, say, f.f or f*f? – Darsh Ranjan Nov 6 2009 at 8:49
A putative identity like $f \cdot f = f * f$ would depolarise to $f \cdot g + g \cdot f = f * g + g * f$ (apply the initial identity to $f+g$ and $f-g$, subtract, and divide by 4). – Terry Tao Nov 7 2009 at 0:55
Oh, duh. Yes, you're right: every homogeneous form can be depolarized that way into an equivalent multilinear one. – Darsh Ranjan Nov 7 2009 at 12:11
I think I have a proof now (for real vector spaces), which I've put in a separate community wiki post. Since getting it down to multilinear identities was the key, it's fair to mark this as "accepted." – Darsh Ranjan Nov 15 2009 at 6:55
In a finite abelian group, I think we have the relation
f*g = sum over pairs (x,y) in the group of ([y]*([x].f)).([x]*g)
Here [x] denotes the indicator function that takes the value 1 at x and zero elsewhere. On the trivial group this says f*g = f.g, so I don't think it reduces to 0 = 0. The relation can be deduced from the fact that ([u]*h)(v) = h(v-u) for all h, u, and v, and that summing the terms over just y gives f(x).([x]*g) = f(x).g(blank - x), and the definition of convolution.
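The relation does check out numerically; here is a sketch of my own on G = Z/5, writing the indicator [x] as a standard basis vector (note, as the comment below points out, that the [x] are extra nullary operations not permitted by the question):

```python
import numpy as np

n = 5
rng = np.random.default_rng(1)
f, g = rng.standard_normal((2, n))

def conv(u, v):
    """Circular convolution on Z/n."""
    w = np.zeros(n)
    for y in range(n):
        for z in range(n):
            w[(y + z) % n] += u[y] * v[z]
    return w

def ind(x):
    """Indicator function [x]: 1 at x, 0 elsewhere."""
    e = np.zeros(n)
    e[x] = 1.0
    return e

# f*g = sum over pairs (x, y) of ([y]*([x].f)).([x]*g)
rhs = np.zeros(n)
for x in range(n):
    for y in range(n):
        rhs += conv(ind(y), ind(x) * f) * conv(ind(x), g)

assert np.allclose(conv(f, g), rhs)
```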
-
It actually doesn't fit the requirements, since I'm not giving you the elements [x] for x in G. Those should be thought of as nullary operations; anything like that has to be explicitly allowed and is otherwise forbidden. – Darsh Ranjan Oct 31 2009 at 7:47
The very short answer is yes, provided that you also allow yourself a little linear algebra. But then again you rejected David's answer, so you may not be happy with mine. I'll try to convince you that my answer is both trivial and also deep, and that it doesn't depend on more structure than what you've allowed.
### The short answer
For the purposes of my answer, I will pretend that the group G is finite (I won't pretend it's abelian, because I don't need it to be). There are versions of what I'm going to say for, at least, compact groups and algebraic groups, but subtleties emerge which I will ignore. Let R be the ring of functions on G. Since G is finite, R is finite-dimensional. (If G is algebraic, R is like a polynomial ring, and if G is compact, R has a good topology, and any constructions must be completed. This is what I mean by "subtleties".)
The ring of functions on G has a canonical nondegenerate pairing: <f,g> = ∫ fg. Being nondegenerate, the pairing has an "inverse", which is an element of the tensor product R ⊗ R. Explicitly, pick any orthonormal basis of the pairing, e.g. the basis {δ_x : x ∈ G}, where δ_x(y) = 1 if y=x and 0 otherwise. Then the inverse is the sum over the basis of the tensor square of each item. So for my basis, it is Σ_{x∈G} δ_x ⊗ δ_x. But it should be emphasized that the inverse to the pairing does not actually depend on the basis. Since I don't have better notation, though, I'll work in the basis for my answer. The better description is in terms of the physicists' abstract index notation, or Penrose's birdtracks.
Then convolution and multiplication are related by the following:
< f1.f2 , f3*f4 > = Σ_{x1,x2,x3,x4} < f1, δ_{x1}*δ_{x2} > < f2, δ_{x3}*δ_{x4} > < δ_{x4}.δ_{x2}, f3 > < δ_{x3}.δ_{x1}, f4 >
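Unpacking the pairing as <u,v> = Σ_x u(x) v(x), this identity can be verified numerically; here is a small sketch of my own on the cyclic group Z/4:

```python
import numpy as np

n = 4
rng = np.random.default_rng(2)
f1, f2, f3, f4 = rng.standard_normal((4, n))

def conv(u, v):
    """Circular convolution on Z/n."""
    w = np.zeros(n)
    for y in range(n):
        for z in range(n):
            w[(y + z) % n] += u[y] * v[z]
    return w

def ind(x):
    """Basis function delta_x."""
    e = np.zeros(n)
    e[x] = 1.0
    return e

def pair(u, v):
    """<u, v> = sum_x u(x) v(x)."""
    return float(np.dot(u, v))

lhs = pair(f1 * f2, conv(f3, f4))
rhs = sum(
    pair(f1, conv(ind(x1), ind(x2)))
    * pair(f2, conv(ind(x3), ind(x4)))
    * pair(ind(x4) * ind(x2), f3)
    * pair(ind(x3) * ind(x1), f4)
    for x1 in range(n) for x2 in range(n)
    for x3 in range(n) for x4 in range(n)
)
assert np.isclose(lhs, rhs)
```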
### The long answer
Let X be a set (or more generally a "space"). Write C(X) for the ring of functions on X, and k[X] for the collection of linear combinations of points in X. (I'll write k for the ground field; everything I say will work over any field, but you can take it to be the reals or complexes if you want. The meaning of the word "space" probably depends on your ground field.)
What types of operations are these? Well, C() is a contravariant functor, and k[] is a covariant one, both from the category of SETS (or SPACES) to the category VECT. Let's start with k[] because it's covariant. It's actually better than a functor: it's a monoidal functor, meaning that if you start with a cartesian product you end up with a tensor product: k[X x Y] = k[X] ⊗ k[Y]. Actually, so is C(), although if you work with non-finite sets you have to complete the tensor product. In fact, these two operations are intimately related: for any space X, C(X) is naturally (i.e. functorially in X) the dual space to k[X], so that C() = k[]*. Thus, we can basically completely understand C() by understanding k[], or vice versa.
Since SETS (or SPACES) is cartesian, every object is a coalgebra in a unique way. I'll spell this out. An algebra in a monoidal category is an object X with a "multiplication" map X ⊗ X → X satisfying various conditions. The word is chosen so that in VECT, my notion of algebra and yours match. In SET, an algebra is a monoid. Anyway, a coalgebra is whatever you get by turning all the arrows around. For intuition, think about VECT, where a coalgebra is whatever the natural structure on the dual vector space to an algebra is. (Write the multiplication map as a big matrix from the tensor square of your algebra to your algebra, and think about its transpose.)
The canonical coalgebra structure on a set X, by the way, is given by the diagonal map Δ : X → X x X, where Δ(x) = (x,x).
Since k[] is a monoidal functor, it takes algebras to algebras and coalgebras to coalgebras. Thus for any set X, the vector space k[X] inherits a coalgebra structure. Thus, dually, C(X) inherits an algebra structure (you can say this directly: a monoidal contravariant functor turns coalgebras into algebras). In fact, this is precisely the canonical algebra structure you're calling "." on the ring of functions.
Well, let's say now that X is an algebra in SETS, i.e. a monoid (e.g. a group). Then k[X] inherits an algebra structure, and equally C(X) has a coalgebra structure. But actually it's a bit better than this. Since any set is a coalgebra in a unique way, the algebra and coalgebra structures on X get along. I'll write * for the multiplication in X. Then when I say "get along" what I mean is:
Δ(x) * Δ(y) = Δ(x*y)
where on the left-hand-side I mean the component-wise multiplication in X x X.
Well, k[] is a functor, so it preserves this equation, except that the coalgebra structure on k[X] is not trivial the way Δ is in SETS. Anything that is both a coalgebra and an algebra and that satisfies an equation like the one above is a bialgebra. You can check that the equation is well-behaved under dualizing, so that C(X) is also a bialgebra if X is an algebra.
Ok, so how does all this connect with your question? What's going on is that for sufficiently good spaces, e.g. finite sets, there is a canonical identification between the vector spaces k[X] and C(X) for any X. This identification breaks various functoriality properties, though. But anyway, if G is a finite group, then we can consider k[G] and C(G) to be the same vector space R, and pretend that it just has two separate ring structures on it.
But doing this obscures the bialgebra property. If I'm only allowed to reference the two multiplications, and not their dual maps, then to write the bialgebra property requires explicitly referring to the canonical pairing (what I called ∫ = <,> before) and its inverse. Then the bialgebra property becomes the long equation I wrote in the previous part.
### Final remarks
I should also mention that a group has not just a multiplication but also identities and inverses. These give another equation. In the basis from the first section, the unit in R for . is the function 1 = Σ_{x∈G} δ_x, and the unit for * is δ_e, where e is the identity in G. These satisfy the equation:
δ_e ⊗ 1 = Σ_{x1} Σ_{x2} (δ_{x1} * δ_{x2}) ⊗ (δ_{x1} . δ_{x2^{-1}})
where x_2^{-1} is the inverse element to x_2. You should be able to recognize the inverse to the canonical pairing in there. Again, the equation is simpler in better notation, e.g. indices or birdtracks, and does not depend on a choice of basis. A bialgebra satisfying an equation like the one above is a Hopf algebra.
Another thing I should mention is that there are similar stories at least for compact groups, but you have to think harder about what "the inverse to the canonical pairing" is. (On a compact group, there is a canonical pairing of functions, given by Haar measure.) In fact, I think a story like this can be told for other spaces, where you change what you mean by k[] and C(), in the first case expanding the notion of linear combination and in the second case restricting the type of function. Then you should put the word "quasi" in front of everything, because the coalgebra structure, the inverse to the pairing, the units, etc. all require completions of your vector spaces.
And there may be special equations for abelian groups. In abelian land, the Fourier/Pontryagin transform does the following: it recognizes the (now commutative) ring k[G] as a ring of functions on some other space: k[G] = C(G*).
But the overall moral is that convolution and multiplication are really going on in different vector spaces; it's just that the canonical pairing makes it hard to tell the spaces apart. And if you insist on conflating the two spaces, then you should allow the canonical pairing and its inverse as basic algebraic operations.
-
Theo, thanks for writing this up. I hadn't thought about it this way before. Your examples of identities don't work, though, since you're making use of distinguished elements of the algebra (i. e., nullary operators), namely δ_x for x in G. Actually, since, as you say, convolution and multiplication usually happen in different vector spaces, we probably shouldn't expect any nontrivial identities (and I don't), but it seems like something somebody ought to have proved... – Darsh Ranjan Nov 1 2009 at 4:17
So, I agree that I'm using slightly more structure than you: I'm using the canonical pairing ∫. But I am not using any distinguished elements of the algebra. Here's a better way to explain the construction. Let V be any finite-dimensional inner-product space. The inner product identifies V = V* (the dual space), and so V ⊗ V = V ⊗ V* = Hom(V,V). The identity map in Hom(V,V) is absolutely canonical. Pull it back to V ⊗ V, and you get the element I'm calling "the inverse to the pairing". To define it explicitly, I wrote it in a basis. But any basis will do. – Theo Johnson-Freyd Nov 1 2009 at 7:06
More generally, I don't think you've quite posed the question well. At least, if the question is: "Let R and S be commutative rings of the same dimension. Is it possible for R and S to be the multiplication and convolution algebras of the same abelian group?" then I'm sure the answer is "no." The reason is that for any abelian group, the two algebra structures along with the canonical pairing are coherent in that they form a Hopf algebra. And I don't believe that any two algebras (even if they individually come from groups) can be made to satisfy the bialgebra identity. – Theo Johnson-Freyd Nov 1 2009 at 7:11
I still don't see how to extract that extra structure (Hopf algebra, bialgebra, ...) if all you have are the two ring structures. I guess right now I'm more interested in what you can say about the operations themselves if you have nothing else to work with; on the other hand, the more general question of how the two ring (or k-algebra) structures interact is interesting as well. – Darsh Ranjan Nov 1 2009 at 9:06
With just the two ring structure, I don't think you can extract the rest of the structure. Indeed, I have a very hard time imagining identities given your restrictions. For example, imagine scrambling the elements of the group. This does not change the multiplication algebra (which depends only on the set structure), but picks out a different convolution product. – Theo Johnson-Freyd Nov 3 2009 at 2:25
I merely suggest to reformulate the question in a way which hopefully will avoid ambiguous interpretations: it asks about identities (in the universal algebra sense, like identities (laws) of groups, or polynomial identities of rings) of the algebra (again, in the universal algebra sense) defined on the set of all maps from the (fixed) abelian group G to R, subject to the two operations as defined above. I cannot help thinking that this may be related to the question of polynomial identities of group rings, though this relation is perhaps too superficial: the convolution operation is similar to multiplication in the group ring (and, if G is finite, they are essentially the same). Of course, if G is finite abelian, then the group ring, being a commutative algebra, satisfies a nontrivial polynomial identity. So we should somehow bring the second operation . into the picture. Perhaps the various generalizations of polynomial identities considered in the literature ("generalized identities", identities with fixed elements, identities in rings with involution, etc.) could be relevant here.
-
All right, I think I can finish the proof now. I will prove that there are no nontrivial identities on $\mathbb{R}^d$ for any $d$. This proof makes heavy use of the first part of Terry Tao's post (reducing to multilinear identities), but I'll use a different argument to finish it, since I guess I'm just more familiar and comfortable with real vector spaces than with finite groups. It should be possible to complete Terry's line of argument to get a proof for sufficiently large finite groups, which my proof won't cover. Moreover, as Theo pointed out in a comment to his answer, deforming the domain nonlinearly screws up convolution while leaving the other operations intact, and it should be easy to use that to show no identities can hold. In any case, this is a community wiki post, so anybody can make additions or simplifications.
First, by Terry Tao's observations, it suffices to consider multilinear identities of the form $$c_1F_1(f_1,\ldots,f_n) + \cdots + c_kF_k(f_1,\ldots,f_n) = 0$$ where each $F_i$ is a "multilinear monomial," i. e., a composition of multiplication and convolution in which each of $f_1,\ldots,f_n$ appears exactly once. (The original question didn't allow scalar multiplication, but it doesn't introduce any difficulty.) To summarize the argument: by applying the distributive laws as much as necessary and using an easy scaling argument, it suffices to consider identities that are homogeneous in each argument, i. e., sums of monomials in which each argument appears some fixed number of times. To reduce this further to the multilinear case, suppose we have some putative identity of the form $F(f_1,\ldots,f_m) = 0$ that is homogeneous of degree $n_i$ in $f_i$ for all $i$. For the moment, consider $f_2,\ldots,f_m$ to be fixed, so we have a homogeneous degree-$n_1$ functional $T(f_1)$ of $f_1$. The polarization identity states that if we define a new functional $S$ by $$S(g_1,\ldots,g_{n_1}) = \frac{1}{n_1!}\sum_{E\subseteq \{1,\ldots,n_1\}} (-1)^{n_1-|E|} T\big(\sum_{j\in E} g_j\big),$$ then $S$ is a (symmetric) multilinear function of $g_1,\ldots,g_{n_1}$ and $S(f_1,\ldots,f_1) = T(f_1)$. Thus, the identity $F(f_1,\ldots,f_m)=0$ is equivalent to the identity $G(g_1,\ldots,g_{n_1},f_2,\ldots,f_m) = 0$, where $G$ is obtained from $F$ by the polarization construction applied on the first argument. Repeating the construction for $f_2,\ldots,f_m$, we obtain an equivalent multilinear identity $H(g_1,\ldots,g_n)=0$ (where $n=n_1+\cdots+n_m$).
Let's fix a nomenclature for monomials: let $C(f_1,\ldots,f_n)=f_1*\cdots*f_n$ and $M(f_1,\ldots,f_n)=f_1\cdot \cdots \cdot f_n$. A monomial is a C-expression if convolution is the top-level operation or an M-expression if multiplication is the top-level expression. $f_1,\ldots,f_n$ are atomic expressions and are considered both M-expressions and C-expressions. We consider two monomials to be identical if they can be obtained from one another by applying the associative and commutative laws for multiplication and convolution. With this equivalence relation, each equivalence class of monomials can be written uniquely in the form $C(A_1,\ldots,A_n)$ or $M(B_1,\ldots,B_n)$ (up to permuting the $A$s or the $B$s), where the $A$s are M-expressions and the $B$s are C-expressions. At this point, we have made maximal use of the algebraic identities for the convolution algebra and the multiplication algebra, so now we have to prove that there are no identities whatsoever of the form $$c_1F_1(f_1,\ldots,f_n) + \cdots + c_kF_k(f_1,\ldots,f_n) = 0$$ where the $c_i$ are nonzero scalars and the $F_i$ are distinct multilinear monomials.
For all $a>0$, let $\phi_a:\mathbb{R}^d\to \mathbb{R}$ be the Gaussian function $\phi_a(x)=e^{-a||x||^2}$. We'll prove that if the $F_i$ are distinct and the $c_i$ are nonzero, then $$c_1F_1(\phi_{a_1},\ldots,\phi_{a_n}) + \cdots + c_kF_k(\phi_{a_1},\ldots,\phi_{a_n})= 0$$ cannot hold for all $a_1,\ldots,a_n>0$. It's easy to see that $\phi_a\cdot\phi_b = \phi_{a+b}$ and $\phi_a * \phi_b = (\pi/(a+b))^{d/2}\phi_{(a^{-1}+b^{-1})^{-1}}$. Therefore, if we define $S(a_1,\ldots,a_n)=a_1+\cdots +a_n$ and $P(a_1,\ldots,a_n)=(a_1^{-1}+\cdots+a_n^{-1})^{-1}$, and $F$ is a multilinear monomial, then $F(\phi_{a_1},\ldots,\phi_{a_n}) = R_F(a_1,\ldots,a_n)^{d/2}\exp(-Q_F(a_1,\ldots,a_n)||x||^2)$, where $R_F$ is a rational function and $Q_F$ is a rational function composed of $S$ and $P$. In fact, if $F$ is written as a composition of $C$ and $M$, then $Q_F(a_1,\ldots,a_n)$ is obtained from $F(\phi_{a_1},\ldots,\phi_{a_n})$ simply by replacing all the $C$s by $P$s, the $M$s by $S$s, and $\phi_{a_i}$ by $a_i$ for all $i$. Therefore, it makes sense to define P- and S-expressions analogously to C- and M-expressions. A PS-expression in $a_1,\ldots,a_n$ is a composition of $P$ and $S$ in which each of $a_1,\ldots,a_n$ appears exactly once. Equivalence of PS-expressions is defined exactly as for C/M monomials; in particular, equivalence of PS-expressions is apparently a stronger condition than equality as rational functions.
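(A numeric aside of my own, not part of the proof: the one-dimensional Gaussian identities can be sanity-checked on a grid; in dimension one the normalizing constant of the convolution is $\sqrt{\pi/(a+b)}$.)

```python
import numpy as np

a, b = 0.7, 1.3
phi = lambda t, x: np.exp(-t * x**2)

# Pointwise product: phi_a . phi_b = phi_{a+b} (exact; check at a sample point).
assert np.isclose(phi(a, 1.7) * phi(b, 1.7), phi(a + b, 1.7))

# Convolution at a sample point x0, approximated by a Riemann sum on a grid.
y = np.linspace(-30.0, 30.0, 200_001)
dy = y[1] - y[0]
x0 = 0.9
lhs = np.sum(phi(a, y) * phi(b, x0 - y)) * dy
c = 1.0 / (1.0 / a + 1.0 / b)   # the harmonic combination (a^-1 + b^-1)^-1
rhs = np.sqrt(np.pi / (a + b)) * phi(c, x0)
assert np.isclose(lhs, rhs, rtol=1e-6)
```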
The main lemma we need is that it actually isn't a stronger condition: if $F$ and $G$ are distinct multilinear monomials in $n$ arguments, then $Q_F$ and $Q_G$ are distinct rational functions. In other words, distinct PS-expressions define distinct rational functions. (Note that this is false if the adjective "multilinear" is dropped.) To prove this, first note that although $Q_F$ and $Q_G$ are initially defined as functions $(0,\infty)^n\to (0,\infty)$, they extend continuously $[0,\infty)^n\to [0,\infty)$. If $D=\{i_1,\ldots,i_k\}$ is a subset of $\{1,\ldots,n\}$ and $Q$ is a PS-expression in $n$ variables, then $D$ is called a prime implicant of $Q$ if $Q(a_1,\ldots,a_n) = 0$ when $a_{i_1},\ldots,a_{i_k}$ are all set to zero, but no proper subset of $D$ has this property. Let $I(Q)$ be the set of prime implicants of $Q$. It's easy to show that $I(P(Q_1,\ldots,Q_m))$ is the disjoint union of $I(Q_1),\ldots,I(Q_m)$, and $I(S(Q_1,\ldots,Q_m))$ is the set of all $D_1 \cup \cdots \cup D_m$, where $D_i\in I(Q_i)$. (It's important here that none of the variables $a_1,\ldots,a_n$ appears in more than one $Q_i$.) Define the implicant graph of $Q$ as the undirected graph with vertices $1,\ldots,n$ and an edge between $i$ and $j$ if some prime implicant of $Q$ contains both $i$ and $j$. It's easy to see that the implicant graph of an S-expression is connected, and if $Q_1,\ldots,Q_m$ are S-expressions, then the connected components of the implicant graph of $P(Q_1,\ldots,Q_m)$ are the implicant graphs of $Q_1,\ldots,Q_m$. This immediately implies that a P-expression cannot define the same function as an S-expression, so it suffices to show that distinct S-expressions induce distinct rational functions, and distinct P-expressions do. Actually, it suffices to show that distinct P-expressions define distinct rational functions, since $P$ and $S$ are exchanged by the involution $\sigma(a)=a^{-1}$: $\sigma(P(a,b))=S(\sigma(a),\sigma(b))$.
That different P-expressions induce different functions now follows by induction on the number of variables, since the implicant sets of the S-expressions $Q_i$ are uniquely determined by the implicant set of $P(Q_1,\ldots,Q_m)$ by considering connectivity as above.
The rest of the proof is easy: if the $F_i$ are distinct multilinear monomials, then the $Q_{F_i}$ are distinct rational functions. This implies that for some $a_1,\ldots,a_n$, the $Q_{F_i}(a_1,\ldots,a_n)$ are all distinct positive numbers, since distinct rational functions can't agree on a set of positive Lebesgue measure. To get a contradiction, suppose the $c_i$ are all nonzero and the identity $\sum_i c_i F_i(f_1,\ldots,f_n)=0$ holds. Then $$\sum_i c_i F_i(\phi_{a_1},\ldots,\phi_{a_n}) = \sum_i c_i R_{F_i}(a_1,\ldots,a_n)^{d/2} \exp(-Q_{F_i}(a_1,\ldots,a_n)||x||^2) = 0$$ for all $x$. Without loss of generality, the $Q_{F_i}(a_1,\ldots,a_n)$ are increasing as a function of $i$. But then for large enough $x$, the first term dominates all the others, so the sum can't be zero unless $c_1=0$: a contradiction. This completes the proof.
-
Late to the thread, but I wanted to quickly mention an identity that shows up for separable functions. Although this is a close cousin of your trivial identity and hardly theoretically deep, it turns out to be very useful in practice.
Let's take $\mathbb{R}^2$ as an example. If $f(x,y) = f_1(x)\ f_2(y)$ and $g(x,y) = g_1(x)\ g_2(y)$ then
$$f * g = (f_1\ f_2) * (g_1\ g_2) = (f_1 * g_1)\ (f_2 * g_2).$$
I abused notation a little to highlight the resemblance to distributivity.
This identity finds use in a folklore trick of image processing that is described here:
http://www.stereopsis.com/shadowrect/
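A small numeric check of the separable identity (a sketch of my own, using circular convolution on (Z/6)^2 so that the integrals become finite sums):

```python
import numpy as np

n = 6
rng = np.random.default_rng(3)
f1, f2, g1, g2 = rng.standard_normal((4, n))

def conv1(u, v):
    """Circular 1-D convolution on Z/n."""
    w = np.zeros(n)
    for y in range(n):
        for z in range(n):
            w[(y + z) % n] += u[y] * v[z]
    return w

def conv2(F, G):
    """Circular 2-D convolution on (Z/n)^2."""
    W = np.zeros((n, n))
    for y1 in range(n):
        for y2 in range(n):
            for z1 in range(n):
                for z2 in range(n):
                    W[(y1 + z1) % n, (y2 + z2) % n] += F[y1, y2] * G[z1, z2]
    return W

F = np.outer(f1, f2)   # f(x, y) = f1(x) f2(y)
G = np.outer(g1, g2)   # g(x, y) = g1(x) g2(y)

# (f1 f2) * (g1 g2) = (f1 * g1)(f2 * g2), as separable functions:
assert np.allclose(conv2(F, G), np.outer(conv1(f1, g1), conv1(f2, g2)))
```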
-
http://mathhelpforum.com/algebra/56234-integer-sequences.html
# Thread:
1. ## INTEGER SEQUENCES
the nth term of a sequence is given by this formula:
nth term = 62 - 5n
Find an expression, in terms of n, for the sum of the nth term and the (n+1)th term of the sequence. Thanks.
2. Hello, abey_27!
It seems simple enough . . . Exactly where is your difficulty?
The $n^{th}$ term of a sequence is given by: . $a_n \:=\:62-5n$
Find an expression, in terms of $n$,
for the sum of the $n^{th}$ term and $(n+1)^{th}$ term of the sequence.
We have: . $\begin{array}{ccc}a_n &=& 62 - 5n \\ a_{n+1} &=& 62-5(n\!+\!1) \end{array}$
Therefore: . $a_n + a_{n+1} \;=\;[62-5n] + [62-5(n+1)] \;=\;119 - 10n$
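(A quick check of the result for the first few values of n, just as a sanity check:)

```python
# nth term of the sequence, and the claimed sum of consecutive terms
a = lambda n: 62 - 5 * n

for n in range(1, 21):
    assert a(n) + a(n + 1) == 119 - 10 * n

print(a(1) + a(2))  # first pair: 57 + 52 = 109 = 119 - 10*1
```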
3. I didn't really understand what it meant by 'sum of the nth term', so is that simply just the formula nth term = 62 - 5n?
http://mathoverflow.net/questions/60750/profinite-completion-of-a-semidirect-product
## Profinite completion of a semidirect product
If we have two finitely generated residually finite groups $G$ and $H$, is there a relation between
the profinite completions $\hat{G},\hat{H}$ and the profinite completion of a semidirect product $\widehat{G \rtimes H}$
(and the analogous question for pro-$p$ completions)?
-
Semidirect product may not be even residually finite. – Mark Sapir Apr 5 2011 at 23:27
Sorry , I forgot to say that the groups are finitely generated. – Mustafa Gokhan Benli Apr 5 2011 at 23:37
## 3 Answers
Take a finite non-abelian simple group $A$ and consider the wreath product $G=A\wr \mathbb Z$. Let $N$ be any subgroup of finite index of $G$. Then $N\cap A^{\mathbb Z}\ne 1$. Let $g$ be a non-trivial element in the intersection. Suppose that the $i$-th coordinate $g_i$ of $g$ is not $1$. Since $A$ has trivial center, there exists $h\in A$ such that $[g_i,h]\ne 1$. Let $h'$ be the element of $A^{\mathbb Z}$ with $h$ on the $i$-th coordinate and trivial other coordinates. Then $[g,h']$ is in $N$ and has exactly one non-trivial coordinate (number $i$). Using the fact that $A$ is simple and the action of ${\mathbb Z}$ on $A^{\mathbb Z}$, we get that $N$ contains $A^{\mathbb Z}$. Hence the profinite (pro-p) completion of $G$ is the same as the profinite completion of $\mathbb Z$. Of course $G$ is a semidirect product of $A^{\mathbb Z}$ and $\mathbb Z$, both residually finite.
If $G, H$ are finitely generated, then $P=\hat G\rtimes \hat H=\hat{G\rtimes H}$. Indeed it is easy to see that the profinite completion of $G$ in $P$ is $\hat G$. That is because for every finite index subgroup $N$ of $G$ there exists a finite index subgroup $K$ in $G\rtimes H$ such that $K\cap G < N$.
-
I think you want $N$ to be normal which you can assume of course. I am not sure about the second part. I think it is more complicated. I need to think about it a bit. – Yiftach Barnea Apr 6 2011 at 7:40
Isn't $G$ meant instead of $A$ in the sentence "Of course $A$ is a semidirect product of $A^{\mathbb {Z}}$ and $\mathbb{Z}$ ? – Ralph Apr 6 2011 at 8:01
@Ralph: yes, thanks. – Mark Sapir Apr 6 2011 at 8:18
I convinced myself that Mark's second claim is true. Here is a detailed argument. Let us start by checking whether $\widehat{G} \rtimes \widehat{H}$ actually exists. We assume that $G$ is finitely generated. Let $\varphi:H \to \textrm{Aut}(G)$ be the map that defines the semidirect product.
First we need to check that $\varphi$ can be extended to $\textrm{Aut}(\widehat{G})$. As $G$ is finitely generated it has finitely many subgroups of index $n$; let $G_n$ be their intersection. Then $G_n$ is a characteristic subgroup of finite index in $G$. Moreover, every subgroup of finite index in $G$ contains one of the $G_n$'s. Thus, $\widehat{G}$ is the inverse limit of $G/G_n$. Now every automorphism of $G$ preserves $G_n$, hence $\textrm{Aut}(G)$ is embedded in $\textrm{Aut}(\widehat{G})$. We conclude that $\varphi$ can be extended to $\textrm{Aut}(\widehat{G})$.
We now need to recall the topology on $\textrm{Aut}(\widehat{G})$. The open neighborhoods of the identity are defined as $A(G_n)$, the kernel of the map from $\textrm{Aut}(\widehat{G})$ to $\textrm{Aut}(G/G_n)$. To extend $\varphi$ to $\widehat{H}$ we need $\varphi$ to be continuous in the profinite topology of $H$. Thus, we need $H_n$, the kernel of the map from $H$ to $\textrm{Aut}(G/G_n)$, to be of finite index. This is indeed the case as $\textrm{Aut}(G/G_n)$ is a finite group. So $\varphi$ can be extended.
That means we can define $\widehat{G} \rtimes \widehat{H}$. Moreover, from the above argument $\varphi$ is continuous on $\widehat{H}$, so $\widehat{G} \rtimes \widehat{H}$ is a profinite group. We notice that $\widehat{G} \rtimes \widehat{H}$ is the inverse limit of $(G \rtimes H)/(G_n \rtimes N)$, where $n \in \mathbb{N}$ and $N$ is a normal subgroup of finite index in $H$.
We always have a map from the profinite completion of a group onto any profinite completion with respect to some subgroups of finite index. So we get $\psi$ from $\widehat{G \rtimes H}$ onto $\widehat{G} \rtimes \widehat{H}$. Now, suppose $K$ is a subgroup of finite index in $G \rtimes H$. Let us look at $K \cap G$: it is a subgroup of finite index in $G$, and therefore it contains some $G_n$. Also, $K \cap H$ is of finite index in $H$. Now, $G_n \rtimes (K \cap H)$ is a subgroup, it is of finite index in $G \rtimes H$, and it is contained in $K$. We deduce that $\psi$ is an isomorphism.
Edit: I do not think it is necessary for $H$ to be finitely generated so I fixed the argument.
-
Thank you, this is a very nice answer. – Mustafa Gokhan Benli Apr 7 2011 at 4:47
Let $\mathscr{P}$ be any property such that whenever a group has $\mathscr{P}$ then all its subgroups also have $\mathscr{P}$. In [1] Theorem 3.1, K. W. Gruenberg has proved that if the wreath product $W= A \wr B$, is residually $\mathscr{P}$, then either $B$ is $\mathscr{P}$ or $A$ is abelian.
Consider $W= S_3 \wr \mathbb{Z}$, where $S_3$ is the symmetric group of degree 3. Since $S_3$ is not abelian, $\mathbb{Z}$ is not finite, and every subgroup of a finite group is finite, the group $W$ is not residually finite.
Clearly, $W= \prod_{i \in \mathbb{Z}} S_3 \rtimes \mathbb{Z}$, where $\mathbb{Z}$ and $\prod_{i \in \mathbb{Z}} S_3$ are residually finite.
[1] K. W. Gruenberg, Residual properties of infinite soluble groups, Proc. London Math. Soc., Ser. 3, 7 (1957), 29--62.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 125, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9399709701538086, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/16966/why-are-relations-of-degree-3-or-less-enough-in-a-presentation-of-the-polynomial/17271
|
## Why are relations of degree 3 or less enough in a presentation of the polynomial current Lie algebra g[t]?
Let $\mathfrak{g}$ be a finite dimensional simple Lie algebra over $\mathbb{C}$.
The polynomial current Lie algebra $\mathfrak{g}[t] = \mathfrak{g} \otimes \mathbb{C} [t]$ has the bracket $$[xt^r, yt^s] = [x,y] t^{r+s}$$ for $x,y \in \mathfrak{g}$. It is graded with deg$(t) = 1$.
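This bracket can be sanity-checked numerically for $\mathfrak{g} = \mathfrak{sl}_2$ by truncating $\mathbb{C}[t]$ at $t^N$ and modelling $t$ as a nilpotent shift matrix (an illustrative sketch; the truncation and the matrix model of $t$ are my own assumptions):

```python
import numpy as np

N = 5                                   # work in C[t]/(t^N)
T = np.zeros((N, N))
T[1:, :-1] = np.eye(N - 1)              # t acts as a nilpotent shift, t^N = 0

# Chevalley basis of sl_2
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])

def cur(x, r):
    """Represent x t^r as the matrix kron(T^r, x)."""
    return np.kron(np.linalg.matrix_power(T, r), x)

def br(a, b):
    return a @ b - b @ a

# [x t^r, y t^s] = [x, y] t^(r+s), valid in the truncation while r + s < N
assert np.allclose(br(cur(e, 1), cur(f, 2)), cur(br(e, f), 3))
assert np.allclose(br(cur(h, 0), cur(e, 1)), cur(2 * e, 1))
```

This works because $\mathrm{kron}(T^r, x)\,\mathrm{kron}(T^s, y) = \mathrm{kron}(T^{r+s}, xy)$, so the matrix commutator reproduces the graded bracket.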
If we set $h=0$ in Drinfeld's first presentation of the Yangian (given in Theorem 12.1.1 of Chari and Pressley's Guide to Quantum Groups) then we get a presentation of $U(\mathfrak{g}[t])$ where the generators are the elements $x \in \mathfrak{g}$ and $J(x) = xt$ of $\mathfrak{g}[t]$ with degree $=0,1$, and both sides of every relation have degree at most $3$.
Specifically we require that all the relations in $\mathfrak{g}$ are satisfied for the elements with degree 0, and (for all $x,y, x_i, y_i, z_i \in \mathfrak{g}$ and complex numbers $\lambda, \mu$):
$$\lambda xt + \mu yt = (\lambda x + \mu y)t$$ $$[x, yt] = [x,y]t,$$ $$\sum_i [x_i, y_i] = 0 \implies \sum_i [x_i t, y_i t ] = 0$$ $$\sum_i [[x_i, y_i], z_i] = 0 \implies \sum_i [[x_i t, y_i t], z_i t]=0$$ Then assuming that all the relations of degree less than or equal to $3$ hold is enough to get the remaining ones. The elements $xt^2, xt^3, \ldots$ are defined inductively. This can be proved by induction, using the Serre presentation of the finite-dimensional Lie algebra and then checking all the required relations in several cases. But even in the $\mathfrak{sl}_2$ case the argument is laborious.
Is there a better way of seeing that one needs only the relations of degree at most three in order to get the rest?
-
## 1 Answer
For a nilpotent Lie algebra L, generators could be described in terms of $H_1(L) = L/[L,L]$, and relations in terms of $H_2(L)$. While this is not applicable directly to $\mathfrak g \otimes \mathbb C[t]$, it is close enough: it could be decomposed, for example, as the semidirect sum $\mathfrak g \oplus (\mathfrak g \otimes t\mathbb C[t])$, or, better yet, as $(\mathfrak g_- \oplus \mathfrak h) \oplus (\mathfrak g_+ \oplus (\mathfrak g\otimes t\mathbb C[t]))$, where $\mathfrak g = \mathfrak g_- \oplus \mathfrak h \oplus \mathfrak g_+$ is the triangular decomposition. From here, I presume, one may glue the defining relations of the whole algebra from the defining relations of the second summand, and Serre defining relations for $\mathfrak g$. The second summand, $\mathfrak g\otimes t\mathbb C[t]$ or close to it, is, formally, still not nilpotent, but it is an $\mathbb N$-graded algebra with finite-dimensional components, and the isomorphism between the space of defining relations and $H_2(L)$ still applies. So the whole thing boils down to computation of $H_2(\mathfrak g\otimes t\mathbb C[t])$. The whole cohomology $H_*(\mathfrak g\otimes t\mathbb C[t])$ was computed in the celebrated 1976 paper by Garland and Lepowsky, or, one may use a more direct and pedestrian approach and derive it from the known formulae which describe the second (co)homology of the current Lie algebra $L\otimes A$ in terms of symmetric invariant bilinear forms of $L$, cyclic (co)homology of $A$, etc. I am not sure that, written accurately with all the details, this will provide a shorter way, but it is definitely a different one. The case of $sl(2)$ would be exceptional in a sense ($H_2(sl(2)\otimes t\mathbb C[t])$ is bigger than in the case of generic $\mathfrak g$).
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9428587555885315, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/90929/under-what-conditions-does-mathcalm-vdash-mathsfpa-and-mathcalk-vda
|
## Under what conditions does $\mathcal{M} \vDash \mathsf{PA}$ and $\mathcal{K} \vDash \mathsf{PA}$ such that $\mathcal{M} \ncong \mathcal{K}$?
I'm currently learning some introductory model theory from Marker's "Model Theory: An Introduction", Kaye's "Models of Peano Arithmetic", and Hodges' "Model Theory", and I am confused by the Wikipedia article on Peano Arithmetic. I am interested in the question I posed in the title and this article is really confusing me.
It states there that,
A model of the Peano axioms is a triple $(\mathbb{N}, 0, S)$, where $\mathbb{N}$ is an infinite set, $0 \in \mathbb{N}$ and $S : \mathbb{N} \rightarrow \mathbb{N}$ satisfies the axioms. Dedekind proved in his 1888 book, What are numbers and what should they do (German: Was sind und was sollen die Zahlen) that any two models of the Peano axioms (including the second-order induction axiom) are isomorphic. In particular, given two models $(\mathbb{N}_A, 0_A, S_A)$ and $(\mathbb{N}_B, 0_B, S_B)$ of the Peano axioms, there is a unique homomorphism $f : \mathbb{N}_A \rightarrow \mathbb{N}_B$ satisfying, $$f(0_A)=0_B$$ and $$f(S_{A}(n))=S_{B}(f(n))$$ and it is a bijection.
Doesn't Tennenbaum's theorem, and the existence of non-standard models, show that not all models of Peano arithmetic are isomorphic? If this is the case, then what result did Dedekind actually prove, and does anyone know where I can find a reference to the theorem he proved or a proof of it?
-
Regarding your reference question: Dedekind's 1888 Was sind und was sollen die Zahlen is the original source, and Dover has an English edition Essays on the Theory of Numbers (which contains that and also Dedekind's 1872 article where Dedekind cuts are introduced). – Ed Dean Mar 11 2012 at 21:13
@EdDean: Since you knew about the reference, do you happen to know if it is available online? I checked on JSTOR and MathSciNet and couldn't find it. – Samuel Reid Mar 11 2012 at 21:15
archive.org/details/essaysintheoryof00dedeuoft – Ed Dean Mar 11 2012 at 21:18
Hehe, by the way, I just noticed that your blockquote mentions Dedekind's source; I hadn't bothered reading that part of the question ... – Ed Dean Mar 11 2012 at 21:23
## 1 Answer
Your question is answered by the distinction between the first-order and second-order Peano axioms.
The categoricity result of Dedekind refers to the second-order Peano axioms rather than the first-order axiomatization PA that gives rise to the nonstandard models and other phenomena you mention.
The second order axiomatization includes the axiom that every subset of the model containing $0$ and closed under successor $S$ is equal to the entirety of the model. This axiom is second-order, because it refers to arbitrary subsets of the universe of the model. It is not difficult to see that any two models of the second order Peano axioms are isomorphic, since each initial segment of one maps uniquely to an initial segment of the other (proved by induction), and these maps union to an isomorphism.
Meanwhile, the first-order axioms of PA are usually stated in a larger language, with symbols for addition and multiplication, and one has the induction scheme only for subsets that are definable in this language. Meanwhile, the theorems of elementary model theory give rise to nonstandard models of this first-order version of PA. None of these nonstandard models is a model of the second-order axiomatization, since the standard cut of a nonstandard model is a subset containing $0$ and closed under successor, but is not the whole model.
There is quite an interesting interplay between first-order arithmetic, second-order arithmetic and first-order set theory here, because the second-order logic involved in the second-order Peano axioms used by Dedekind can be treated naturally in first-order set theory, such as ZFC (for the subsets of the model of arithmetic are just first-order objects, sets, in set theory). In short, one may formalize Dedekind's argument as a result in ZFC. So ZFC proves that there is a unique structure of arithmetic $\mathbb{N}$. But meanwhile, we know that different models of ZFC can have different non-isomorphic versions of this unique structure $\mathbb{N}$. So the situation is that there are many different models $M$ of ZFC, each insisting that its own version of arithmetic $\mathbb{N}^M$ is the one-and-only absolute concept of arithmetic, the unique structure of the second-order Peano axioms, but externally, we can see that these different $\mathbb{N}^M$'s are not all isomorphic to each other.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9382692575454712, "perplexity_flag": "head"}
|
http://mathhelpforum.com/statistics/206838-how-calculate-using-binomial-distribution.html
|
# Thread:
1. ## how to calculate this using binomial distribution
There are 5 balls in total, and 2 are black and 3 are red. If we choose with replacement, meaning we replace the ball we took out each time, what is the probability of choosing one ball of each color?
the binomial distribution says, for an experiment that can have either success or failure as an outcome,
$p(x)=\frac{n!}{x!(n-x)!}\,p^x q^{n-x}$
n is the number of trials, x is the number of successes, p is the probability of success, and q = 1 - p
How can I set that up to find the probability of choosing one ball of each color, or of choosing two of the same color?
2. ## Re: how to calculate this using binomial distribution
Hey kingsolomonsgrave.
You have two choices: black or red. You need to formalize your question when it comes to choosing one ball: is it exactly one? At least 1? When you do this then just calculate the probability.
3. ## Re: how to calculate this using binomial distribution
If I ask what is the probability of two red balls with replacement, then I would get 3/5 times 3/5. If I choose two black balls it would be 2/5 times 2/5, and one of each color is 2/5 times 3/5, I think.
To get that using the formula I would say the probability of red is 3/5, so P = 3/5 and Q = 2/5; also x = 2, so (n-x) = 0, and we have two trials so n = 2
so I get
2!/(2!0!) times (3/5)^2 times (2/5)^0
which is the same as 3/5*3/5 or 9/25
is that right?
4. ## Re: how to calculate this using binomial distribution
for 1 red and 1 black I would get 2/5*3/5
but with the binomial formula we have
x= number of successes and red= success so x=1
P=3/5 and Q=2/5 and n=2 and(n-x)=1
2!/(1!*1!)(3/5)^1 (2/5)^1
which would be 2 times (3/5)(2/5) [twice as much as I thought it would be]
is this one right?
5. ## Re: how to calculate this using binomial distribution
The binomial model is one where you have two outcomes per trial (each trial independent of the others) with the same probability, and the probability reflects getting x successes and n - x failures (successes and failures can be whatever you want them to be: that's just a label).
So in line with what I said earlier: you need to figure out what events you are looking at (and you didn't answer my question before).
The binomial distribution with n trials models getting x successes (0 to n) and n - x failures (0 to n) in any order: if this is not the right model then pick another one that is right.
6. ## Re: how to calculate this using binomial distribution
I assume this is the probability of one red and one black on two draws.
With replacement, the probability of red on the first draw is 3/5 and then the probability of black on the second draw is 2/5. The probability of black on the first draw is 2/5 and then the probability of red on the second is 3/5. The probability of "red then black" is (3/5)(2/5)= 6/25 and the probability of "black then red" is (2/5)(3/5)= 6/25. The probability of "red and black in either order" is 6/25+ 6/25= 2(6/25)= 12/25.
Without replacement, the probability of red on the first draw is 3/5 and then the probability of black on the second draw (because there are now only 4 balls) is 2/4. The probability of black on the first draw is 2/5 and then the probability of red on the second is 3/4. The probability of "red then black" is (3/5)(2/4)= 6/20= 3/10 and the probability of "black then red" is (2/5)(3/4)= 6/20= 3/10. The probability of "one red, one black in any order, without replacement" is 3/10+ 3/10= 2(3/10)= 6/10= 3/5.
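These numbers are easy to confirm in a couple of lines of Python (a sketch for checking, using only the probabilities worked out above):

```python
from math import comb

p_red, p_black = 3 / 5, 2 / 5

# With replacement: binomial with n = 2 trials, x = 1 "success" (red).
p_with = comb(2, 1) * p_red**1 * p_black**1
assert abs(p_with - 12 / 25) < 1e-12          # 2 * (3/5) * (2/5) = 12/25

# Without replacement: sum the two orders directly.
p_without = (3 / 5) * (2 / 4) + (2 / 5) * (3 / 4)
assert abs(p_without - 3 / 5) < 1e-12         # 3/10 + 3/10 = 3/5
```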
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9255625605583191, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/26931/infinite-hermitian-matrix
|
## Infinite hermitian matrix
Suppose we have a finite square n x n matrix of complex numbers H that is Hermitian and skew-symmetric:
$H^\dagger = H$ and $H^T = -H$.
(T denotes transpose, $\dagger$ denotes conjugate transpose. I know that these conditions mean that H is a purely imaginary skew-symmetric matrix.)
It is a textbook result that these two conditions ensure that its eigenvalues are real, its rank is even, and its eigenvalues appear in positive and negative pairs.
If the orthonormal eigenvectors associated to non-zero eigenvalues are denoted by $u_i$ and $v_i$ and the positive eigenvalues are denoted $\lambda_i$ then we have:
$H u_i = \lambda_i u_i$ and $H v_i = -\lambda_i v_i$
for i = 1, 2, ..., s, where 2s is the rank of H. The eigenvectors can be chosen to be complex conjugates of each other: $u_i = v_i^*$.
In terms of these eigenvectors and eigenvalues we can write the eigen-decomposition of H as:
$H = \sum_{i=1}^s \lambda_i u_i u_i^\dagger - \lambda_i v_i v_i^\dagger$
We can now define a new matrix Q as just the "positive eigenvalue part" of H:
$Q := \sum_{i=1}^s \lambda_i u_i u_i^\dagger$
This Q is Hermitian, positive semi-definite and satisfies $H = Q - Q^T = Q - Q^*$. Apparently this Q is also the "closest Hermitian positive semi-definite matrix" to H, as measured in the Frobenius norm (and possibly other norms too).
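For a concrete feel, the whole finite-dimensional construction can be reproduced numerically (a NumPy sketch; the eigenvalue cutoff `1e-12` is an arbitrary numerical tolerance of my choosing):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random purely imaginary skew-symmetric matrix: H^T = -H and H^dagger = H.
B = rng.standard_normal((6, 6))
H = 1j * (B - B.T)

# eigh: real eigenvalues (here in +/- pairs) and orthonormal eigenvectors.
w, V = np.linalg.eigh(H)

# Q keeps only the positive-eigenvalue part of the spectral decomposition.
pos = w > 1e-12
Q = (V[:, pos] * w[pos]) @ V[:, pos].conj().T

assert np.allclose(H, Q - Q.T)                    # H = Q - Q^T = Q - Q^*
assert np.all(np.linalg.eigvalsh(Q) >= -1e-10)    # Q is positive semi-definite
```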
This all goes through smoothly for finite n x n matrices H.
My question is, if H is now a countably infinite dimensional matrix $H_{xy}$ for x,y=1,2,...,$\infty$ which satisfies the same conditions as before:
$H_{xy} = H_{yx}^*$ and $H_{xy} = -H_{yx}$
can we do the same construction and obtain the matrix Q?
It seems like the rigorous way to do this requires treating the infinite matrix as an operator on the $\ell^2$ Hilbert space of infinite square-summable sequences. For general H it seems like this will be an unbounded operator that is probably not defined on the whole Hilbert space (due to issues of convergence when doing the infinite matrix multiplication).
In a best-case scenario we'd like H to define a self-adjoint operator on $\ell^2$. We could then (presumably) apply the spectral theorem and sum the positive eigenvalue part to get a Q operator/infinite-matrix.
What conditions do we have to impose on the infinite H matrix entries to ensure that the Q matrix exists? or to ensure that H defines a self-adjoint operator?
Can we sidestep the use of the eigenvectors and eigenvalues when defining Q and instead seek Q as the "closest positive semi-definite matrix" to H? Does this even make sense in the infinite matrix case?
Can we calculate Q from H if H satisfies the right conditions?
Any thoughts greatly appreciated.
-
## 3 Answers
Not an answer really, but a collection of several comments.
1. The "skew-symmetric" condition is not really natural for an operator on a complex Hilbert space, since it isn't preserved by unitary transformations.
2. Do you have a reference for the statement that Q is the closest Hermitian positive semidefinite matrix to H in Frobenius norm? Does this rely in an essential way on H being skew-symmetric?
3. The "positive part" construction would apply to any (possibly unbounded) self-adjoint operator, using an appropriate version of the spectral theorem; self-adjoint operators can be "diagonalized" in a certain general sense. (One version says that, up to a unitary transformation, your Hilbert space is $L^2(X,\mu)$ for some measure space $(X,\mu)$, and your operator is multiplication by some real-valued function $h$ on $X$. So the positive part corresponds to multiplication by the positive part of $h$.)
4. I am not aware of a condition on the entries of an infinite matrix that's equivalent to the corresponding operator being self-adjoint (though I'd be interested to know if there is). Self-adjointness is a fairly delicate property, in general; it requires the domain of the operator to be neither too large nor too small.
5. You could certainly seek the nearest positive semidefinite Hermitian operator to a given one, with respect to some norm. However you are restricting yourself to those operators for which that norm is finite. The Hilbert-Schmidt norm might be natural as it generalizes the Frobenius norm; the operator norm is another choice. The positive semidefinite Hermitian operators are closed under both norms, so looking for a "nearest" one makes sense. Also, Hilbert-Schmidt operators, being compact, are diagonalizable in the more usual sense (there is an orthonormal basis of eigenvectors).
-
The reference for Q being the closest Hermitian positive semidefinite matrix to H is Theorem 9, p. 324 in "The Electrical Engineering Handbook" by Richard C. Dorf, available at books.google.co.uk/… . I'm not an electrical engineer, I just came across this reference on Google. The skew-symmetric property is not needed explicitly, only the fact that H is Hermitian (the "negative part" is then the closest Hermitian negative semi-definite matrix to H). – StevenJ Jun 4 2010 at 9:27
The answer is "yes," provided you can make a good self-adjoint operator out of $H_{x,y}$. The skew-symmetric structure you mention is actually quite natural in some contexts. In quantum mechanics complex conjugation usually represents the time reversal symmetry. The structure here involves three things: a Hilbert space, a conjugation $J$ on the Hilbert space, and a symmetric operator $H$. Your Hilbert space is $\ell^2$ and the conjugation $J$ is $J\psi = \overline{\psi}$ where $\overline{\psi}$ is complex conjugation. (Note that $J$ is a real linear operator, but is complex "skew" linear that is $J\alpha \psi = \overline{\alpha} J\psi$ for a scalar $\alpha$.)
To begin suppose that the symmetric operator $H$ defines a bounded operator. That is, suppose $$H \psi(x) = \sum_{y} H_{x,y}\psi(y),$$ makes sense for all $\psi \in \ell^2$ and there is a $C <\infty$ such that $\|H\psi\|_2 \le C \|\psi\|_2$. Your assumption of skew-symmetry for $H$ shows that $$JHJ = -H.$$ It follows that $J p(H) J= \overline{p}(-H)$ for any polynomial $p$, where $\overline{p}$ is the polynomial with conjugate coefficients. (To see this it helps to note that $J^2=1$.) It follows then that $$Jf(H)J=\overline{f}(-H)$$ for any continuous function $f$, with $f(H)$ defined by the functional calculus. Now we can define $Q$, namely $$Q = g(H)$$ where $g(x)= x$ for $x >0$ and $g(x)=0$ for $x\le 0$. Note that $Q$ is positive, $Q$ and $JQJ$ have orthogonal ranges, and $$H= Q - JQJ.$$
However, $H$ as defined above may or may not make sense on the whole Hilbert space. As Jiri Lebl pointed out, we must at least assume that $$\sum_{x} |H_{x,y}|^2 <\infty$$ for each $y$. Then we may certainly define an operator on the space $C_f$ of sequences with finite support. Let us call this operator $H_f$ to remind ourselves that it is defined on $C_f$. To apply the functional calculus to get $Q$ we need a self-adjoint extension $H$.
The condition for a self-adjoint extension to exist is $$\dim \ker (H_f^\dagger +i)=\dim \ker (H_f^\dagger -i)$$ where $H_f^\dagger$ is the adjoint: $$(H_f^\dagger\phi, \psi)= (\phi, H_f\psi)$$ on the domain $\mathcal D^\dagger$ of sequences $\phi$ such that $|(\phi,H_f\psi)|\le C \| \psi \|$. If this condition holds you now have to pick a self-adjoint extension $H$, and pick it carefully so that $J H J =-H$, and then proceed as in the bounded case. (The extension is unique only if both dimensions are $0$.) I am not aware of a general condition in terms of $H_{x,y}$ for the existence of a self-adjoint extension, but there is a nice review article by Barry Simon from the late 90's in which, among other things, he analyzes under which conditions tri-diagonal matrices ("Jacobi matrices") have self-adjoint extensions and when they are unique.
-
See Akhiezer and Glazman's Theory of Linear Operators in Hilbert Space (it can be had cheaply as a Dover book). See in particular Section 47 on unbounded operators (and Section 26 for the everywhere-defined compact case). I am not touching the skew-symmetry thing; I'll just answer about the self-adjointness and the application of the spectral theorem in $\ell^2$.
So I'm assuming that the matrix $H$ is conjugate symmetric (Hermitian).
1) If the entries in the matrix are square summable (sufficient but not necessary) then $H$ defines an everywhere-defined compact self-adjoint operator on $\ell^2$. You can apply the spectral theorem and everything is wonderful; then you simply need to worry about that skew-symmetry thing. See page 53 for a necessary and sufficient condition on the matrix to define a bounded operator.
2) (Theorem 4 on page 102) If the columns of the matrix are square summable, then $H$ does define a closed self-adjoint operator. However: the matrix does not transform naturally under unitary operators. That is, if you take a unitary matrix and "do" the product formally, the new matrix may not define an operator, and even if it does define an operator it may turn out to be a different operator than when you compose the operators. So any sort of formal manipulations on this matrix are just that. They do not necessarily correspond to the same operations in $\ell^2$. Therefore it is unlikely that you can apply Hilbert space theory results to the matrices.
3) If the columns of the matrix are not square summable, there is no hope of using Hilbert spaces at all.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 73, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.912950873374939, "perplexity_flag": "head"}
|
http://mathhelpforum.com/math-topics/56283-conservation-energy.html
|
# Thread:
1. ## conservation of energy
an experimental rocket sled on a level, frictionless track has a mass of 1.4*10^4 kg. It propels itself by expelling gases from its rocket engines at a rate of 10 kg/s, at an exhaust speed of 2.5*10^4 m/s relative to the rocket. For how many seconds must the engines burn if the sled is to acquire a speed of 50 m/s starting from rest? You may ignore the small decrease in mass of the sled and the small speed of the rocket compared to the exhaust gas.
P.S. I am learning conservation of energy; this question confused me because of the wording. Somebody help, thanks.
2. ives,
If the subject is conservation of energy, then you are likely supposed to approach the problem by taking the gas and the rocket as an isolated system. Notice that when both are stopped mechanical energy is zero.
By the law of conservation, the energy of an isolated system stays the same if no external forces are applied. Since all forces involved are interactions between the rocket and the gas, all forces are internal and energy must be conserved.
So, what is the kinetic energy of the gas after it is propelled out? What is the relation of this energy to the kinetic energy of the rocket? Tip: you can take energy to be negative by convention in a specific situation.
3. I was confused by the wording. Could you list the equations and the procedure? That would be greatly appreciated.
Originally Posted by Rafael Almeida
ives,
If the subject is conservation of energy, then you are likely supposed to approach the problem by taking the gas and the rocket as an isolated system. Notice that when both are stopped mechanical energy is zero.
By the law of conservation, the energy of an isolated system stays the same if no external forces are applied. Since all forces involved are interactions between the rocket and the gas, all forces are internal and energy must be conserved.
So, what is the kinetic energy of the gas after it is propelled out? What is the relation of this energy to the kinetic energy of the rocket? Tip: you can take energy to be negative by convention in a specific situation.
4. OK let me walk you through a little closer:
1) Isolate the system
Consider the whole gas body and the rocket as an isolated system. Notice that no matter where the gas and the rocket are they are still part of the same system in your model.
Also, notice that all forces in the experiment are internal, as they are interactions between the rocket and the gas. Forces such as weight or friction can be disregarded because their resultant is null: they are either countered by opposite forces (such as the weight-normal pair) or ignored by assumption.
2) Conservation of energy
If no external forces are applied, then the energy of a given system remains constant. In our particular case:
$E_{system} = E_{gas} + E_{rocket} = constant$
Notice that in the starting scenario, when both are stopped, $E_{system} = 0$, since a body with no speed has no kinetic energy.
3) After the burn starts
When the gas is burned, it transforms chemical energy into kinetic via expansion. Obviously the details of how this transformation happens are totally irrelevant in the context, so don't worry about them(*).
The kinetic energy that the gas acquires can be measured from the data you have via the kinetic energy expression:
$E_{gas} = \frac{m_{gas}v_{gas}^2}{2}$
Let me suggest that you do not change the letters for numbers yet. (**)
But you are given the rate at which the gas burns, so you know the mass of gas burned (and because of this, in movement) after a time t:
$E_{gas}(t) = \frac{10t(v_{gas}^2)}{2} = 5t(v_{gas}^2)$
4) "Sign" of energy
Energy is a scalar, so strictly speaking it has no direction or sense. But it is easy to see (and somewhat overkill to justify in detail) that you can safely take a body moving to the right to have positive kinetic energy and one moving to the left to have negative kinetic energy. This makes our application of the conservation law easier.
Another approach would be to consider the rocket and the gas in separate systems, and notice that all forces of the gas act upon the rocket and vice versa.
So we can say that, since the energy within the system is constant, both bodies have the same energy, with reverse signs:
$E_{rocket} = -E_{gas}$
5) Conclusion
Since you want the speed you want the rocket to move by, then you know the energy you want it to have:
$E_{rocket} = \frac{m_{rocket}v_{rocket}^{2}}{2}$
Combining the facts we derived previously, I'll skip a step here and say that the energies are equal, forgetting about the minus sign. This is because it's a simple situation, and you can perfectly well visualize that if the gas moves in one direction then the rocket moves in the opposite one.
$E_{rocket} = E_{gas}$
$\frac{m_{rocket}v_{rocket}^{2}}{\rlap{/}2} = \frac{10t(v_{gas}^2)}{\rlap{/}2}$
$t = \frac{m_{rocket}v_{rocket}^{2}}{10v_{gas}^2}$
Now it's all joy: just sub in the info you have and you have the number of seconds desired.
(*) A teacher of mine said that lots of students tend to get confused in situations like this one, worrying about unnecessary details. So, focus on the data you are given, as you will 'measure' things from it. This is also how it's done in the real world: sometimes you choose to measure whatever is easiest. In this particular case, for example, you could determine the internal energy of the gas (and from that obtain lots of information about it, had you had more data) indirectly, by measuring its kinetic energy after it's burned and knowing how the two relate.
(**) Also a tip from the same teacher. Get used to doing this, because in physics problems the numbers often have many digits, making it tedious to carry them along. Also, leaving the letters in place makes it easier to spot simplifications. This tip also works for most computational exercises: it may be a bad idea to carry out a multiplication, or even an addition, in an intermediate step of the solution, because the terms may simplify later.
Hope I've helped,
5. Very detailed, thanks!
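A numerical footnote (not from the thread): the standard textbook route to this answer uses momentum conservation rather than energy — the thrust is the mass flow rate times the exhaust speed — and a minimal sketch gives the burn time directly:

```python
# Momentum-conservation estimate of the burn time. Note: textbook rocket
# problems use momentum, not kinetic energy, since the chemical energy
# released is not fully converted into the sled's kinetic energy.
m_sled = 1.4e4     # kg, mass of the sled (taken constant, per the problem)
v_target = 50.0    # m/s, desired final speed
burn_rate = 10.0   # kg/s, rate at which gas is expelled
v_exhaust = 2.5e4  # m/s, exhaust speed relative to the rocket

thrust = burn_rate * v_exhaust  # N: F = (dm/dt) * v_exhaust
t = m_sled * v_target / thrust  # s: impulse F*t equals momentum m*v
print(t)  # 2.8
```

With the given numbers the engines must burn for 2.8 seconds.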
http://en.wikipedia.org/wiki/Hyperplane
|
# Hyperplane
Look up hyperplane in Wiktionary, the free dictionary.
A hyperplane is a concept in geometry. It is a generalization of the plane to spaces of other dimensions.
A hyperplane of an n-dimensional space is a flat subset with dimension n − 1. By its nature, it separates the space into two half spaces.
## Technical description
In geometry, a hyperplane of an n-dimensional space V is a "flat" subset of dimension n − 1, or equivalently, of codimension 1 in V; it may therefore be referred to as an (n − 1)-flat of V. The space V may be a Euclidean space or more generally an affine space, or a vector space or a projective space, and the notion of hyperplane varies correspondingly; in all cases however, any hyperplane can be given in coordinates as the solution of a single (due to the "codimension 1" constraint) algebraic equation of degree 1 (due to the "flat" constraint). If V is a vector space, one distinguishes "vector hyperplanes" (which are subspaces, and therefore must pass through the origin) and "affine hyperplanes" (which need not pass through the origin; they can be obtained by translation of a vector hyperplane). A hyperplane in a Euclidean space separates that space into two half spaces, and defines a reflection that fixes the hyperplane and interchanges those two half spaces.
## Dihedral angles
The dihedral angle between two non-parallel hyperplanes of a Euclidean space is the angle between the corresponding normal vectors. The product of the reflections in the two hyperplanes is a rotation whose axis is the subspace of codimension 2 obtained by intersecting the hyperplanes, and whose angle is twice the angle between the hyperplanes.
## Special types of hyperplanes
Several specific types of hyperplanes are defined with properties that are well suited for particular purposes. Some of these specializations are described here.
### Affine hyperplanes
An affine hyperplane is an affine subspace of codimension 1 in an affine space. In Cartesian coordinates, such a hyperplane can be described with a single linear equation of the following form (where at least one of the $a_i$'s is non-zero):
$a_1x_1 + a_2x_2 + \cdots + a_nx_n = b.\$
In the case of a real affine space, in other words when the coordinates are real numbers, this affine space separates the space into two half-spaces, which are the connected components of the complement of the hyperplane, and are given by the inequalities
$a_1x_1 + a_2x_2 + \cdots + a_nx_n < b\$
and
$a_1x_1 + a_2x_2 + \cdots + a_nx_n > b.\$
As an example, a point is a hyperplane in 1-dimensional space, a line is a hyperplane in 2-dimensional space, and a plane is a hyperplane in 3-dimensional space. A line in 3-dimensional space is not a hyperplane, and does not separate the space into two parts (the complement of such a line is connected).
Any hyperplane of a Euclidean space has exactly two unit normal vectors.
Affine hyperplanes are used to define decision boundaries in many machine learning algorithms such as linear-combination (oblique) decision trees, and Perceptrons.
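The half-space test behind such decision boundaries is just the sign of $a \cdot x - b$; a minimal sketch (function name and data are illustrative):

```python
# Classify points by which half-space of the affine hyperplane a·x = b
# they fall into: +1 above, -1 below, 0 on the hyperplane itself.
def side(a, b, x):
    """Sign of a·x - b for coefficient vector a, offset b, point x."""
    s = sum(ai * xi for ai, xi in zip(a, x)) - b
    return (s > 0) - (s < 0)

# The line x + y = 1 is a hyperplane in 2-dimensional space:
a, b = (1.0, 1.0), 1.0
print(side(a, b, (2.0, 2.0)))  # 1  (in the half-space a·x > b)
print(side(a, b, (0.0, 0.0)))  # -1 (in the half-space a·x < b)
print(side(a, b, (0.5, 0.5)))  # 0  (on the hyperplane)
```

The same test works unchanged in any number of dimensions, which is why hyperplanes appear as decision boundaries for perceptrons.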
### Vector hyperplanes
In a vector space, a vector hyperplane is a linear subspace of codimension 1; it necessarily passes through the origin and is the solution set of a single homogeneous linear equation.
### Projective hyperplanes
Projective hyperplanes are used in projective geometry. Projective geometry can be viewed as affine geometry with vanishing points (points at infinity) added. An affine hyperplane together with the associated points at infinity forms a projective hyperplane. One special case of a projective hyperplane is the infinite or ideal hyperplane, which is defined as the set of all points at infinity.
In real projective space, a hyperplane does not divide the space into two parts; rather, it takes two hyperplanes to separate points and divide up the space. The reason for this is that in real projective space, the space essentially "wraps around" so that both sides of a lone hyperplane are connected to each other.
## References
• Charles W. Curtis (1968) Linear Algebra, page 62, Allyn & Bacon, Boston.
• Heinrich Guggenheimer (1977) Applicable Geometry, page 7, Krieger, Huntington ISBN 0-88275-368-1 .
• Victor V. Prasolov & VM Tikhomirov (1997,2001) Geometry, page 22, volume 200 in Translations of Mathematical Monographs, American Mathematical Society, Providence ISBN 0-8218-2038-9 .
http://mathhelpforum.com/discrete-math/53369-formal-modulus-proof-how-close-am-i.html
|
Thread:
1. Formal Modulus Proof: How close am I?
Show that if n is an odd positive integer then n^2 = 1(mod 8).
I see that any odd square has 1 as a remainder when calculated. Example: 49 = 7 * 7 = 1(mod 4), and any odd number squared equals an odd number. Let 2k represent all positive even integers. So n^2 = 1(mod 2k) for all odd positive integers.
Is this an acceptable proof?
2. Originally Posted by aaronrj
Show that if n is an odd positive integer then n^2 = 1(mod 8).
I see that any odd square has 1 as a remainder when calculated. Example: 49 = 7 * 7 = 1(mod 4), and any odd number squared equals an odd number. Let 2k represent all positive even integers. So n^2 = 1(mod 2k) for all odd positive integers.
Is this an acceptable proof?
no, this proof is not acceptable. especially since you are trying to prove it with examples.
Assume $n$ is odd. then we can write $n = 2k + 1$ for some integer $k$.
but that means $n^2 = (2k + 1)^2 = 4k^2 + 4k + 1$
now all you need to show is that you can write that expression in the form $8m + 1$ for some integer $m$. since all numbers equivalent to 1 mod 8 have that form
3. need more help
Sorry, I see where you're going but I can't fill in the steps.
We have 4k^2 + 4k + 1
How does 8m + 1 fit into that?
Sub in 8m+1 for k?
4. Originally Posted by aaronrj
Sorry, I see where you're going but I can't fill in the steps.
We have 4k^2 + 4k + 1
How does 8m + 1 fit into that?
Sub in 8m+1 for k?
no, here's a further hint: leave the +1 alone and factor the first two terms: 4k^2 + 4k = 4k(k + 1) = 8·[k(k + 1)/2]. you get
8·[k(k + 1)/2] + 1
now, obviously, your task is to find out whether k(k + 1)/2 is an integer
5. Correct Proof
Take 4k^2 + 4k + 1 = n^2
Factor: 4(k^2 + k) + 1 - 1 = n^2 - 1
Therefore: 4(k^2 + k) = n^2 - 1
This is how the book proves it.
I don't see where you were going with:
8k(k + 1)/2 + 1
Perhaps a bit more clarity is needed next time you attempt to offer aid.
6. Originally Posted by aaronrj
Take 4k^2 + 4k + 1 = n^2
Factor: 4(k^2 + k) + 1 - 1 = n^2 - 1
Therefore: 4(k^2 + k) = n^2 - 1
This is how the book proves it.
I don't see where you were going with:
8k(k + 1)/2 + 1
Perhaps a bit more clarity is needed next time you attempt to offer aid.
that proof is not correct. look up the definition for $a \equiv b \mod n$. you will see that it means $n \mid (a - b) \Longleftrightarrow a - b = nk$ for some $k \in \mathbb{Z}$.
thus, saying $4(k^2 + k) = n^2 - 1$ is saying $n^2 \equiv 1\mod {\color{red}4}$ not $\mod 8$
to show that $n^2 \equiv 1\mod 8$ you must show that $8 \mid (n^2 - 1)$, or in other words, $n^2 - 1 = 8k$ for some integer $k$. that is what i was telling you to do. we need $8m = n^2 - 1$ for some integer $m$, provided $n$ is odd.
we got to the point $n^2 - 1 = 8 \frac {k(k + 1)}2$
now, we complete the proof if we can show $\frac {k(k + 1)}2$ is an integer. that is what i leave it to you to do
7. Thanks.
Well, I guess I need to double check all of the proofs given in the book. Perhaps the author got lazy when writing the solutions. Thanks for clarifying the proof.
8. Originally Posted by aaronrj
Well, I guess I need to double check all of the proofs given in the book. Perhaps the author got lazy when writing the solutions. Thanks for clarifying the proof.
well, we're not done. how would you finish up? is that expression an integer or not?
9. Did you not answer your own question?
Doesn't the theorem explicitly state that the expression must be an integer?
Theorem:
Let m be a positive integer. The integers a and b are congruent modulo m if and only if there is an integer k such that a = b + km.
I definitely should have referred to the definition first before trying to solve the problem; I would have had a much easier time. Lesson learned.
10. Originally Posted by aaronrj
Doesn't the theorem explicitly state that the expression must be an integer?
Theorem:
Let m be a positive integer. The integers a and b are congruent modulo m if and only if there is an integer k such that a = b + km.
I definitely should have referred to the definition first before trying to solve the problem; I would have had a much easier time. Lesson learned.
we are to prove that it is an integer. assuming it is is begging the question. of course, if they ask you to prove something, they already know it is true. you are required to show why.
11. Predicate calculus
Let p represent the statement "The integers a and b are congruent modulo m."
Let q represent the statement "there is an integer k such that a = b + km."
If p $\Longleftrightarrow$ q
The domain is the set of all integers Z.
I don't think this is what you are looking for, but I thought I'd at least give it a shot.
12. Originally Posted by aaronrj
Let p represent the statement "The integers a and b are congruent modulo m."
Let q represent the statement "there is an integer k such that a = b + km."
If p $\Longleftrightarrow$ q
The domain is the set of all integers Z.
I don't think this is what you are looking for, but I thought I'd at least give it a shot.
aaronrj, pay attention. i left off exactly where you should pick up. i did everything for you except the last step. all i want you to tell me, is whether k(k + 1)/2 is an integer or not (and why). that is all. do that and you are done. stop beating around the bush. i told you these definitions already, what point is there bringing them up?
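Not part of the thread, but both facts being argued here — the congruence itself, and the final step that k(k+1)/2 is an integer — admit a quick brute-force sanity check:

```python
# (1) n^2 ≡ 1 (mod 8) for every odd n, and
# (2) k(k+1) is always even (one of k, k+1 is even), so k(k+1)/2 is an integer.
odd_square_ok = all((n * n) % 8 == 1 for n in range(1, 10_001, 2))
triangular_ok = all((k * (k + 1)) % 2 == 0 for k in range(-1000, 1001))
print(odd_square_ok, triangular_ok)  # True True
```

Of course this is only evidence, not a proof — the proof is exactly the factorization n^2 - 1 = 8·k(k+1)/2 discussed above.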
http://math.stackexchange.com/questions/97210/graph-coloring-problem-on-z3
|
# Graph coloring problem on Z3
I'm trying to solve the next problem:
Trying to prove that for every $k$ there is an integer $n=n(k)$ so that for any coloring of the set $\mathbb Z_3^n$ of all $n$-dimensional vectors with coordinates in $\mathbb Z_3$ by $k$ colors, there are three distinct vectors $X$, $Y$, $Z$ having the same color so that $X_i+Y_i+Z_i\equiv 0 \pmod 3$ for all $1 \le i \le n$.
I guess I need to use Schur's proof in a different way, but I don't know exactly how to define the coloring function. Any help will be appreciated. Thank you very much!
-
Groovy guy: I've tried to edit your post (by adding LaTeX) for better readability. Please, check, whether I did not unintentionally change meaning of your question. – Martin Sleziak Jan 7 '12 at 18:35
1
– yoyo Jan 7 '12 at 18:45
– joriki Jan 7 '12 at 18:46
More precisely, not all $\{X,Y,Z\}$ as described in the question are "combinatorial lines" as used in the Wikipedia article (a counterexample is $\{12,21,00\}$), but every "combinatorial line" is a valid $\{X,Y,Z\}$, and that is the direction that matters. – Henning Makholm Jan 7 '12 at 18:54
– Pavel Jan 8 '12 at 9:37
show 2 more comments
## 1 Answer
Since we are working modulo $3$, the condition $x+y+z=0$ is the same as $x+y=2z$, which happens if and only if we have an arithmetic progression of length $3$. (Specifically, $x,z,y$ form a progression since $z-x=y-z$, and all progressions of length three give rise to such an equation.)
Meshulam's theorem tells us that if $A\subset \mathbb{F}_3^n$ contains no three-term arithmetic progression, then $|A|\ll \frac{N}{\log N}$, where $N=3^n$ is the size of $\mathbb{F}_3^n$. This can be proven using some Fourier analysis, and it implies the coloring statement in your question: for $n$ large enough, the largest color class exceeds the bound, so it must contain a progression.
I can provide some more details if this interests you. The proof of Meshulam's is not long, but is lengthened considerably if Fourier transforms need to be defined and introduced.
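Not from the answer, but the smallest nontrivial case can be checked exhaustively: for $k=2$ colors, $n=2$ already suffices. A sketch running over all $2^9$ colorings of $\mathbb{Z}_3^2$:

```python
from itertools import product, combinations

# All 9 vectors of Z_3^2.
vectors = list(product(range(3), repeat=2))

def has_mono_triple(coloring):
    """True if some color class has distinct X, Y, Z with X+Y+Z ≡ 0 (mod 3) coordinatewise."""
    return any(
        coloring[x] == coloring[y] == coloring[z]
        and all((a + b + c) % 3 == 0 for a, b, c in zip(x, y, z))
        for x, y, z in combinations(vectors, 3)
    )

# Exhaustive check: every 2-coloring of Z_3^2 contains a monochromatic triple,
# so n(k) = 2 works for k = 2.
ok = all(
    has_mono_triple(dict(zip(vectors, colors)))
    for colors in product(range(2), repeat=len(vectors))
)
print(ok)  # True
```

This matches the cap-set bound: the larger color class has at least 5 of the 9 points, while a progression-free set in $\mathbb{F}_3^2$ has at most 4 points.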
-
I'll try, yet, searching for an easier answer. Thanks for the insight, though. – Pavel Jan 14 '12 at 10:13
http://mathoverflow.net/questions/33842?sort=oldest
|
## Suzuki and Ree groups, from the algebraic group standpoint
The Suzuki and Ree groups are usually treated at the level of points. For example, if $F$ is a perfect field of characteristic $3$, then the Chevalley group $G_2(F)$ has an unusual automorphism of order $2$, which switches long root subgroups with short root subgroups. The fixed points of this automorphism, form a subgroup of $G_2(F)$, which I think is called a Ree group.
A similar construction is possible, when $F$ is a perfect field of characteristic $2$, using Chevalley groups of type $B$, $C$, and $F$, leading to Suzuki groups. I apologize if my naming is not quite on-target. I'm not sure which groups are attributable to Suzuki, to Ree, to Tits, etc..
Unfortunately (for me), most treatments of these Suzuki-Ree groups use abstract group theory (generators and relations). Is there a treatment of these groups, as algebraic groups over a base field? Or am I being dumb and these are not obtainable as $F$-points of algebraic groups.
I'm trying to wrap my head around the following two ideas: first, that there might be algebraic groups obtained as fixed points of an algebraic automorphism that swaps long and short root spaces. Second, that the outer automorphism group of a simple simply-connected split group like $G_2$ is trivial (automorphisms of Dynkin diagrams mean automorphisms that preserve root lengths).
So I guess that these Suzuki-Ree groups are inner forms... so there must be some unusual Cayley algebra popping up in characteristic 3 to explain an unusual form of $G_2$. Or maybe these groups don't arise from algebraic groups at all.
Can someone identify and settle my confusion?
Lastly, can someone identify precisely which fields of characteristic $3$ or $2$ are required for the constructions of Suzuki-Ree groups to work?
-
## 2 Answers
It is not really a question of inner forms. What happens is that the algebraic group $G_2$ has an extra endomorphism $\varphi$ whose square is the Frobenius map (over the appropriate finite field). Just as, for any algebraic group over a finite field $F$, the rational points over $F$ are the fixed points of the Frobenius endomorphism, the Suzuki groups are, by definition, the fixed points of $\varphi$. Again, just like the Frobenius, $\varphi$ is an automorphism of the abstract group of points over the algebraic closure of $F$. However, that is misleading: the essential point is that it is an endomorphism (which definitely is not an automorphism) of the algebraic group. Most of the properties of the points over $F$ of a semi-simple algebraic group $G$ defined over $F$ follow from the algebro-geometric theory of $G$ and the properties of the Frobenius endomorphism. Similarly, most of the properties of the Suzuki groups follow from the algebro-geometric theory of $G_2$ together with the properties of $\varphi$. As $\varphi$ is very similar to the Frobenius endomorphism, this works almost the same way as if $\varphi$ were indeed a Frobenius endomorphism.
Addendum: As one simple example of the similarity of $\varphi$ to a Frobenius, consider the problem of computing the order of the Suzuki groups. As the square of $\varphi$ is the Frobenius, its action on the tangent space at any fixed point is nilpotent. This implies that each fixed point appears with multiplicity one in the Lefschetz fixed point formula, and the order of the group of fixed points is thus equal to the Lefschetz trace on the (étale) cohomology of the algebraic group $G_2$. That cohomology can be canonically expressed in terms of the action of the Weyl group on the character group of the maximal torus (see for instance the example in SGA 4 1/2), and how $\varphi$ acts on that character group is essentially part of the definition of $\varphi$.
-
Ah - so the Suzuki-Ree groups are not $F$-points of an algebraic group over $F$ after all then, I guess. I guess that the endomorphism $\phi$ cannot be used to define descent data as required. So weird... – Marty Jul 29 2010 at 20:42
1
Marty: The "Frobenius endomorphism" here incorporates the special isogeny, so its fixed points give the Suzuki or Ree group in question as Steinberg explained in uniform fashion. This broader notion of Frobenius morphism is now standard in the work on characters and such coming out of the Deligne-Lusztig construction in 1976. – Jim Humphreys Jul 29 2010 at 20:56
Thanks Jim! I'll look back at Deligne-Lusztig for more. I'm always a bit slow going between generalities (defining groups via descent data) and the special cases (using Frobenius morphisms over finite fields, and Cartan involutions over the reals). – Marty Jul 29 2010 at 21:13
2
Marty, you may be amused to hear that these purely inseparable special isogenies for types B, C, and F in characteristic 2 and type G_2 in characteristic 3 (i.e., types with a bond of multiplicity $p$ in characteristic $p$), which are bijective on rational points over a perfect field but not otherwise, underlie the "exceptional" pseudo-reductive groups. See Chapter 7 (especially section 7.1) of the book "Pseudo-reductive groups". – BCnrd Jul 29 2010 at 23:09
I am amused! I'll check out the P-Red book shortly. – Marty Jul 29 2010 at 23:19
To supplement Torsten's account, the original Suzuki groups of type `$C_2$` in characteristic 2 resulted from a purely group-theoretic investigation but were then recovered in the algebraic group setting. The Ree groups of types `$F_4, G_2$` in respective characteristics 2, 3 were constructed inside the Chevalley groups of these types but also recovered in a uniform way by Steinberg in Endomorphisms of algebraic groups (AMS Memoir). There is also a full account in my recent LMS Lecture Note volume Modular Representations of Finite Groups of Lie Type (Cambridge, 2006). The algebraic group viewpoint is outlined by Torsten. The Suzuki and Ree groups don't arise from the split vs. quasisplit classification over finite fields, but rather involve Chevalley's special isogenies which interchange root lengths while using a finite field automorphism. The orders of the finite fields one starts with are the odd powers of 2, 3 respectively. But notation is tricky, since some people like to express things in terms of square roots to make the finite group orders resemble those of the corresponding split groups.
Since the Suzuki and Ree groups have BN-pairs, it is popular with finite group theorists to use this viewpoint in studying them (simplicity, etc.).
-
http://physics.stackexchange.com/questions/30251/using-einsteins-relativity-who-is-younger?answertab=votes
|
# Using Einstein's Relativity: Who is younger?
Suppose we have a person A and a person B.
Person B travels very close to the speed of light and never returns; his speed is constant. Then we can say two things:
1. B is younger than A.
2. A is younger than B (since we can consider B's reference frame to be inertial).
Who is correct between the two?
-
– Sklivvz♦ Dec 24 '12 at 15:18
3
This is different from the classical twin paradox in that you don't have one of the twins turning around and returning to the starting point. – David Zaslavsky♦ Dec 24 '12 at 16:06
## 2 Answers
See the answers of this question: How is the classical twin paradox resolved?
The point is that the two will never be able to compare their ages at the same place without one of them experiencing acceleration, and acceleration makes a reference frame non-inertial, so the simple inertial-frame analysis of special relativity no longer applies to it directly.
For the case of comparing ages on purely inertial orbits (possible in a compact space, with no acceleration needed), this paper addresses the issue:
The twin paradox in compact spaces
Authors: John D. Barrow, Janna Levin
Phys.Rev. A63 (2001) 044104
Abstract: Twins travelling at constant relative velocity will each see the other's time dilate leading to the apparent paradox that each twin believes the other ages more slowly. In a finite space, the twins can both be on inertial, periodic orbits so that they have the opportunity to compare their ages when their paths cross. As we show, they will agree on their respective ages and avoid the paradox. The resolution relies on the selection of a preferred frame singled out by the topology of the space.
-
Interesting reference, nice way to be able to compare clocks without leaving an inertial frame. – twistor59 Jun 17 '12 at 12:51
Will clocks diverge if a stationary massive object were present in one. Side of an orbit? – Argus Jul 8 '12 at 5:40
OK, let's restate the problem just a bit for the sake of clarity.
Two persons, a & b, observe that they are moving uniformly with respect to each other and that their relative speed is close to $c$.
Both a & b observe that the other ages relatively slowly.
Now, your question: which person is correct, i.e., which person is absolutely aging more slowly?
Answer: There is no absolute time in SR.
However, there is an invariant time (proper time) associated with each person and all observers agree on the elapsed proper time for each person.
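As an illustrative sketch (the 0.8c speed is chosen here for round numbers, not taken from the thread), the factor by which each inertial observer judges the other's clock to run slow is the Lorentz factor:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gamma(v):
    """Lorentz factor for speed v in m/s."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# Each observer attributes the SAME slowdown factor to the other —
# the symmetry is exactly what makes the situation look paradoxical.
v = 0.8 * C
print(gamma(v))       # ≈ 1.667
print(10 / gamma(v))  # ≈ 6: in 10 years of a's coordinate time,
                      # a judges that 6 years elapse on b's clock (and vice versa)
```

Neither number is "the" absolute aging rate; only the proper time along each worldline is invariant.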
-
http://math.stackexchange.com/questions/tagged/toric-geometry
|
# Tagged Questions
### Toric Varieties from Cones
Consider the lattice $N=\Bbb{Z}^d$ spanned by $e_1,\dots,e_d$ and the cone $$\sigma=\text{Cone}\{e_1,\dots,e_k\}, \quad k<d.$$ I am trying to understand why the toric variety $V_\sigma$ obtained is ...
### Finding a toric variety of a cone
I'm trying to find the toric variety associated to the cone $\sigma_0$ which is the region in the real plane with $x\geq 0$ and $y-x\geq 0.$ I found that it's dual cone is $\check{\sigma_0}$ the ...
1answer
86 views
### If a group scheme $G$ operates on another scheme $X$, how do you define orbits?
In my specific case, $G=\mathrm{Spec}(k[M])$ is an algebraic torus acting on a toric variety $X_\Sigma$ corresponding to a fan $\Sigma$ when $k$ is not necessarily algebraically closed (or maybe even ...
1answer
51 views
### How to recognize a glueing
This is an exercise in chapter 1 of Fulton's book "Introduction to toric varieties". Let $\Delta$ be the fan consisting of the cones $\sigma_1=\langle e_1, e_2\rangle$ and \$\sigma_2=\langle ...
0answers
54 views
### Toric Variety from a fan with “identical” edges
I have this fan that I've been trying to construct a toric variety from. The problem is, it contains certain edges twice. These are the edges: $$(1,0,1)$$ $$(0,1,1)$$ $$(-1,-1,1)$$ $$(0,0,1)$$ ...
1answer
84 views
### Degree of Toric Divisors
Is it possible to calculate the degree of a toric divisor directly from the fan of the toric variety? If so, how is this done? Or is there some alternative way to calculate the degree of these ...
1answer
109 views
### Is locally free sheaf of finite rank coherent?
Let $\mathcal{F}$ be a locally free sheaf of finite rank of scheme $X$, is $\mathcal{F}$ coherent? By the definition of locally free sheaf, there exists an open cover {$U_i$} of $X$ such that ...
0answers
45 views
### Restriction of locally free sheaf associated projective modules
My question comes from the paper of Tamafumi's "On Equivariant Vector Bundles On An Almost Homogeneous Variety" (it can be downloaded freely in ...
0answers
50 views
### Transition functions of toric projective bundle (Proposition in [Cox, Toric Varieties])
My reference: David Cox's "Toric Varieties" My question is the proof of Proposition 7.3.3. Proposition 7.3.3. The cones {$\sigma_i$ | $\sigma \in \Sigma$, $i = 0,\dots,r$} and their faces form ...
2answers
131 views
### If $M$ and $N$ are graded modules, what is the graded structure on $\operatorname{Hom}(M,N)$?
Let $A$ be a graded ring. Note that the grading of $A$ may not be $\mathbb{N}$, for example, the grading of $A$ could be $\mathbb{Z}^n$. Actually, my question comes from the paper of Tamafumi's "On ...
0answers
85 views
### GIT quotient for a certain torus action on an affine space
I'm reading various books and some notes and here is my question. Let $(\mathbb{C}^*)^2$ act on $\mathbb{C}^4$ by (\lambda_1,\lambda_2).(x_1, x_2, y_1,y_2)=(\lambda_1 x_1, \lambda_2 x_2, ...
1answer
71 views
### Question about Tamafumi Kaneyama's Paper: “On Equivariant Vector Bundles On An Almost Homogeneous Variety”
My reference: http://projecteuclid.org/DPubS?verb=Display&version=1.0&service=UI&handle=euclid.nmj/1118795362&page=record I have two question about Proposition 3.3.: Proposition3.3. ...
0answers
73 views
### Exercise in David Cox “Toric Varieties”
I want to do an exercise in the book Toric Varieties (by David Cox) Exercise 3.3.5. Let $\overline{\phi}:N \rightarrow N'$ be a surjective $\mathbb{Z}$-linear mapping and let $\widehat{\sigma}$ ...
1answer
85 views
### How to find the canonical divisor on a nonsingular toric variety?
I am reading Fulton's "Toric Varieties." In it, he explains that if $X$ is a toric variety and if $D_1, \ldots, D_d$ are the irreducible divisors invariant under the big torus action, then ...
1answer
140 views
### Explicit example of a toric flip
I am looking for a toy example of a flip between toric projective 3-folds. More precisely, I would like to see their defining fans (or polytopes). Does anyone know where I can find something like ...
1answer
154 views
### Hodge theory for toric varieties
Say we are given a complex smooth projective toric variety $X$. How can one read off hodge theoretic information from combinatorial data? For example I would like to extract dimensions of the various ...
1answer
67 views
### differential with logarithmic poles
where can I find the computation of the groups $H^i(\mathbb{P}^n,\Omega_{\mathbb{P}^n}^j)$? Moreover, if $D$ is a divisor with normal crossing in $\mathbb{P}^n$, how can I compute the hypercohomology ...
1answer
61 views
### figure out a simple toric variety
consider the plane cones $s=\langle(1,0),(1,n)\rangle$ and $t=\langle(1,0),(1,-n)\rangle$. This produce a toric variety obtained glueing $k[x,xy^n]$ and $k[x,xy^{-n}]$ along $k[x]$. There is a more ...
1answer
213 views
### Cohomology of $\mathcal O_X$ for toric varieties
Motivated by my ignorance here, if $X$ is a projective toric variety, is $$H^m(X, \mathcal O_X) \cong \begin{cases} 0 & m > 0 \\ \mathbb C & m = 1 \end{cases}$$ as for \$\mathbb ...
1answer
205 views
### Equations of a projective toric variety
Given complete fan $\Delta$ defining a projective toric variety (so that $\Delta$ is the normal fan of some polytope). How do one go on to find a defining ideal of the toric variety in projective ...
http://unapologetic.wordpress.com/2011/10/12/the-curl-operator/
# The Unapologetic Mathematician
## The Curl Operator
Let’s continue our example considering the special case of $\mathbb{R}^3$ as an oriented, Riemannian manifold, with the coordinate $1$-forms $\{dx, dy, dz\}$ forming an oriented, orthonormal basis at each point.
We’ve already seen the gradient vector $\nabla f$, which has the same components as the differential $df$:
$\displaystyle\begin{aligned}df&=\frac{\partial f}{\partial x}dx+\frac{\partial f}{\partial y}dy+\frac{\partial f}{\partial z}dz\\\nabla f&=\begin{pmatrix}\displaystyle\frac{\partial f}{\partial x}\\\displaystyle\frac{\partial f}{\partial y}\\\displaystyle\frac{\partial f}{\partial z}\end{pmatrix}\end{aligned}$
This is because we use the metric to convert from vector fields to $1$-forms, and with respect to our usual bases the matrix of the metric is the Kronecker delta.
We will proceed to define analogues of the other classical differential operators you may remember from multivariable calculus. We will actually be defining operators on differential forms, but we will use this same trick to identify vector fields and $1$-forms. We will thus not usually distinguish our operators from the classical ones, but in practice we will use the classical notations when acting on vector fields and our new notations when acting on $1$-forms.
Anyway, the next operator we come to is the curl of a vector field: $F\mapsto\nabla\times F$. Of course we’ll really start with a $1$-form instead of a vector field, and we already know a differential operator to use on forms. Given a $1$-form $\alpha$ we can send it to $d\alpha$.
The only hangup is that this is a $2$-form, while we want the curl of a vector field to be another vector field. But we do have a Hodge star, which we can use to flip a $2$-form back into a $1$-form, which is “really” a vector field again. That is, the curl operator corresponds to the differential operator $*d$ that takes $1$-forms back to $1$-forms.
Let’s calculate this in our canonical basis, to see that it really does look like the familiar curl. We start with a $1$-form $\alpha=Pdx+Qdy+Rdz$. The first step is to hit it with the exterior derivative, which gives
$\displaystyle\begin{aligned}d\alpha=&dP\wedge dx+dQ\wedge dy + dR\wedge dz\\=&\left(\frac{\partial P}{\partial x}dx+\frac{\partial P}{\partial y}dy+\frac{\partial P}{\partial z}dz\right)\wedge dx\\&+\left(\frac{\partial Q}{\partial x}dx+\frac{\partial Q}{\partial y}dy+\frac{\partial Q}{\partial z}dz\right)\wedge dy\\&+\left(\frac{\partial R}{\partial x}dx+\frac{\partial R}{\partial y}dy+\frac{\partial R}{\partial z}dz\right)\wedge dz\\=&\frac{\partial P}{\partial y}dy\wedge dx+\frac{\partial P}{\partial z}dz\wedge dx\\&+\frac{\partial Q}{\partial x}dx\wedge dy+\frac{\partial Q}{\partial z}dz\wedge dy\\&+\frac{\partial R}{\partial x}dx\wedge dz+\frac{\partial R}{\partial y}dy\wedge dz\\=&\left(\frac{\partial R}{\partial y}-\frac{\partial Q}{\partial z}\right)dy\wedge dz\\&+\left(\frac{\partial P}{\partial z}-\frac{\partial R}{\partial x}\right)dz\wedge dx\\&+\left(\frac{\partial Q}{\partial x}-\frac{\partial P}{\partial y}\right)dx\wedge dy\end{aligned}$
Next we hit this with the Hodge star. We’ve already calculated how the Hodge star affects the canonical basis of $2$-forms, so this is just a simple lookup to find:
$\displaystyle*d\alpha=\left(\frac{\partial R}{\partial y}-\frac{\partial Q}{\partial z}\right)dx+\left(\frac{\partial P}{\partial z}-\frac{\partial R}{\partial x}\right)dy+\left(\frac{\partial Q}{\partial x}-\frac{\partial P}{\partial y}\right)dz$
which are indeed the usual components of the curl. That is, if $\alpha$ is the $1$-form corresponding to the vector field $F$, then $*d\alpha$ is the $1$-form corresponding to the vector field $\nabla\times F$.
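The computation above can be spot-checked symbolically. The following sketch (not part of the original post; it uses sympy) evaluates the three components of $*d\alpha$ from the formula just derived, for a sample field $F = (-y,\, x,\, xyz)$.

```python
# Symbolic check that *d(alpha) has the classical curl components.
import sympy as sp

x, y, z = sp.symbols('x y z')
# Sample vector field F = (P, Q, R); any smooth field would do.
P, Q, R = -y, x, x * y * z

# Components of *d(alpha), read off from the derivation above.
star_d_alpha = (sp.diff(R, y) - sp.diff(Q, z),
                sp.diff(P, z) - sp.diff(R, x),
                sp.diff(Q, x) - sp.diff(P, y))

print(star_d_alpha)  # (x*z, -y*z, 2)
```

These agree with the usual curl $\nabla\times F = (xz,\ -yz,\ 2)$ for this field, as one can verify by hand from the classical definition.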
Posted by John Armstrong | Differential Geometry, Geometry
## 7 Comments »
1. [...] fact that I didn’t mention when discussing the curl operator is that the curl of a gradient is zero: . In our terms, this is a simple consequence of the [...]
Pingback by | October 13, 2011 | Reply
2. [...] if we define as another -form then we know it corresponds to the curl . But on the other hand we know that in dimension we have , and so we find as well. Thus we [...]
Pingback by | November 23, 2011 | Reply
3. [...] The Curl Operator (unapologetic.wordpress.com) [...]
Pingback by | January 14, 2012 | Reply
4. [...] we flip over to the language of differential forms, we know that the curl operator on a vector field corresponds to the operator on -forms, while the gradient operator corresponds [...]
Pingback by | February 18, 2012 | Reply
5. [...] again with Maxwell’s equations, we see all these divergences and curls which, though familiar to many, are really heavy-duty equipment. In particular, they rely on the [...]
Pingback by | February 22, 2012 | Reply
6. How does the curl generalize to curved 3-space?
Comment by john archer | April 10, 2012 | Reply
7. It’s pretty much the same as in flat 3-space, except that the metric we use to translate back and forth to the language of differential forms — and to define the Hodge star — now varies.
Comment by | April 11, 2012 | Reply
http://mathoverflow.net/revisions/19648/list
## Return to Question
4 added 220 characters in body
Again from the Shepp and Lloyd paper "ordered cycle lengths in a random permutation", I found this puzzling equality. This one might require access to the paper itself since it's quite a mouthful:
In equation (15), they claimed it is straightforward that if there is an $F_r$ such that
$$\int_0^1 \exp(-y/\xi) dF_r(\xi) = \int_y^{\infty} \frac{E(x)^{r-1}}{(r-1)!} \frac{\exp(-E(x) -x)}{x} dx$$
then $F_r$ will have moments $G_{r,m}$.
Here
$$G_{r,m} = \int_0^{\infty} \frac{x^{m-1}}{m!} \frac{E(x)^{r-1}}{(r-1)!} \exp(-E(x)-x) dx$$
and
$$E(x) = \int_x^{\infty} \frac{e^{-y}}{y} dy$$
which is related to the thread http://mathoverflow.net/questions/19526/reference-request-for-a-well-known-identity-in-a-paper-of-shepp-and-lloyd
It looks to me like some sort of Laplace transform, but I can't manage to get the algebra to work, because of the inverse exponent $y/\xi$ with respect to $\xi$.
I will be happy enough if one can tell me why we are looking at the transform $\int_0^1 \exp(-y/\xi) dF_r(\xi)$ instead of the usual moment generating function $\int_0^1 \exp(-y \xi) dF_r(\xi)$, or maybe it's a typo?
3 deleted 1 characters in body
Again from the Shepp and Lloyd paper "ordered cycle lengths in a random permutation", I found this puzzling equality. This one might require access to the paper itself since it's quite a mouthful:
In equation (15), they claimed it is straightforward that if there is an $F_r$ such that
$$\int_0^1 \exp(-y/\xi) dF_r(\xi) = \int_y^{\infty} \frac{E(x)^{r-1}}{(r-1)!} \frac{\exp(-E(x) -x)}{x} dx$$
then $F_r$ will have moments $G_{r,m}$.
Here
$$G_{r,m} = \int_0^{\infty} \frac{x^{m-1}}{m!} \frac{E(x)^{r-1}}{(r-1)!} \exp(-E(x)-x) dx$$
and
$$E(x) = \int_x^{\infty} \frac{e^{-y}}{y} dy$$
which is related to the thread http://mathoverflow.net/questions/19526/reference-request-for-a-well-known-identity-in-a-paper-of-shepp-and-lloyd
It looks to me like some sort of Laplace transform, but I can't manage to get the algebra to work, because of the inverse exponent $y/\xi$ with respect to $\xi$.
2 Fixed LaTeX.
Again from the Shepp and Lloyd paper "ordered cycle lengths in a random permutation", I found this puzzling equality. This one might require access to the paper itself since it's quite a mouthful:
In equation (15), they claimed it is straightforward that if there is an $F_r$ such that
$$\int_0^1 \exp(-y/\xi) dF_r(\xi) = \int_y^{\infty} \frac{E(x)^{r-1}}{(r-1)!} \frac{\exp(-E(x) -x)}{x} dx$$
then $F_r$ will have moments $G_{r,m}$.
Here
$$G_{r,m} = \int_0^{\infty} \frac{x^{m-1}}{m!} \frac{E(x)^{r-1}}{(r-1)!} \exp(-E(x)-x) dx$$
and
$$E(x) = \int_x^{\infty} \frac{e^{-y}}{y} dy$$
which is related to the thread http://mathoverflow.net/questions/19526/reference-request-for-a-well-known-identity-in-a-paper-of-shepp-and-lloyd
It looks to me like some sort of Laplace transform, but I can't manage to get the algebra to work, because of the inverse exponent $y/\xi$ with respect to $\xi$.
1
# method of moments and Laplace transform from Shepp and Lloyd
Again from the Shepp and Lloyd paper "ordered cycle lengths in a random permutation", I found this puzzling equality. This one might require access to the paper itself since it's quite a mouthful:
In equation (15), they claimed it is straightforward that if there is an $F_r$ such that
\begin{align*} \int_0^1 \exp(-y/\xi) dF_r(\xi) = \int_y^{\infty} (E(x))^{r-1}/(r-1)! \exp(-E(x) -x)/x dx \end{align*}
then $F_r$ will have moments $G_{r,m}$.
Here
\begin{align*} G_{r,m} = \int_0^{\infty} x^{m-1}/m! (E(x))^{r-1}/(r-1)! \exp(-E(x)-x)dx \end{align*}
and
\begin{align*} E(x) = \int_x^{\infty} e^{-y}/y dy \end{align*}
which is related to the thread http://mathoverflow.net/questions/19526/reference-request-for-a-well-known-identity-in-a-paper-of-shepp-and-lloyd
It looks to me like some sort of Laplace transform, but I can't manage to get the algebra to work, because of the inverse exponent $y/\xi$ with respect to $\xi$.
http://math.stackexchange.com/questions/tagged/tensors?page=3&sort=newest&pagesize=15
# Tagged Questions
Use this tag for questions about specific tensors (curvature tensor, stress tensor), or questions regarding tensor computations as they appear in multivariable calculus and differential/Riemannian geometry (specifically, when it is amenable to be treated as objects with multiple indices that ...
1answer
209 views
### How to generate the inverse of a order 3 tensor
Is it possible to generate an inverse of an order 3 tensor? If so, how? I have been searching for a couple days, and cannot seem to find anything online to help with this.
0answers
63 views
### Is there a rigorous exposition of 'tensor methods' for finding lie group representations?
I've seen tensor methods in physics for finding lie group representations, as in Wu-Ki Tungs Group Theory in Physics, which uses tensors physics style, ie with indices; and Cvitonovics Birdtracks, ...
0answers
68 views
### Confusion with vectors and notation
Could someone please explain to me why $$\nabla (\dot{r}\cdot A)$$ take the following form in index notation? \left({\partial A_i\over \partial r^k}-{\partial A_k\over \partial ...
1answer
127 views
### Index notation clarification
Previously, I have seen matrix notation of the form $T_{ij}$ and all the indices have been in the form of subscripts, such that $T_{ij}x_j$ implies contraction over $j$. However, recently I saw ...
2answers
144 views
### Tensor operation on a vector spaces
From the various definitions provided in the article https://en.wikipedia.org/wiki/Tensor , the tensor seems always to be defined, even in the more abstract forms, as a multilinear map, from a product ...
0answers
83 views
### Changing along a tensor field, the Lie Derivative
I can find considerable information about how to use the Lie Derivative to measure the change of a tensor field along a vector field, but I can't seem to find anything for the converse. What if I ...
0answers
64 views
### Relationship between Tensors of Different Rank
Simple question. Can one write every second rank tensor $T^{ab}$ as some finite sum $\sum U^aV^b$ with $U^a$, $V^b$ tensors? Apologies if this is an incredibly standard result - I don't own a textbook ...
2answers
247 views
### $\det(A \otimes B - B \otimes A) = 0$ why? Why $rk(M) = n^2-n$ ? Why x and -x in Spec(M) ?
Let $A$, $B$ be $n\times n$ matrices. It seems $\det(A \otimes B - B \otimes A) = 0$. Moreover it seems that the kernel of $A \otimes B - B \otimes A$ contains $n$ vectors. Here is MatLab code to ...
1answer
71 views
### basic vector being hermitian
If the space has a mixed metric signature, not all the basis vectors are Hermitian. Nevertheless, they are defined to be self-adjoint under reversion. The vector transpose conjugate is, ...
1answer
480 views
### vector/tensor covariance and contravariance notation
As I looked over the Wikipedia article: http://en.wikipedia.org/wiki/Covariance_and_contravariance_of_vectors is said as a contravariant vector and is said as covariant vector (or covector). ...
2answers
339 views
### Tensor calculus - Christoffel symbol of the second kind
and I understand these parts up there, but I cannot understand how the second formula of the last equality leads to the third formula. Can anyone show me what relabeling indices rules are used ...
0answers
108 views
### Extending Tensor Fields defined on Manifolds to Ambient Space
I am currently reading about tensor fields on manifolds, and I came across two comments that sound contradictory to me. The first comment is made in the book by James Munkres "Analysis on Manifolds", ...
0answers
462 views
### Conversion of motion equation from Cartesian to Polar coordinates: Is covariant differentiation necessary?
Say I have the following equation of motion in the Cartesian coordinate system for a typical mass spring damper system: M \; \ddot{x} + C \; \dot{x} + K \; x = ...
1answer
144 views
### What's are these index objects called? And $\mathrm{\LaTeX}$ \sum question
I want to refer to $$A_iB_jC_k$$ using $$\psi(ijk) = A_iB_jC_k$$ So that I can write out quite overwhelming-looking sums of ABC terms as sums of terms that look like 123, 231, 113, etc. If I am not ...
0answers
132 views
### Better Tensor Notation
I am in a General Relativity class, and I am finding the usual tensor notation very difficult to think about -- it seems like there are too many names to express something simple. E.g., I think of ...
1answer
132 views
### Is this tensor question valid?
A tensor exercise in a text reads: If $T_i$ are the components of a covariant vector $T$, show that $S_{ij}:=T_iT_j-T_jT_i$ is an order 2 covariant tensor $S$. Am I missing something or is $S$ ...
2answers
207 views
### What is the difference between tensors and tensor products?
The tensor product $S\otimes_R T$ of $S$ and $T$ over $R$ is a module. A multilinear form $L:V^r \to R$ is called an $r$-tensor on $V$. On the one hand a tensor is a function sending elements of ...
2answers
217 views
### Unique symmetric covariant $k$-tensor satisfying $(\operatorname{Sym} T)(A,…,A)=T(A,…,A)$ for all $A \in V$
Let $T$ be a covariant $k$-tensor on a finite dimensional vector space $V$. I want to prove that the symmetrization of $T$ is the unique symmetric $k$-tensor satisfying the following condition: ...
0answers
325 views
### Inertia tensor transformation under coordinate change
Let $I(x)$ be an inertia tensor in matrix notation of a body in a coordinate system $x\in R^n$. Under a coordinate change $x=\phi(y)$, does the tensor transform as $Dx^TI(\phi(y))Dx$, where ...
2answers
175 views
### $e_1\otimes e_2 \otimes e_3$ cannot be written as a sum of an alternating tensor and a symmetric tensor
Let $(e_1,e_2,e_3)$ be the standard dual basis for $(\mathbb{R}^3)^\ast$. How can I show that $e_1\otimes e_2 \otimes e_3$ cannot be written as a sum of an alternating (or antisymmetric) tensor and a ...
1answer
240 views
### The Dimension of the Symmetric $k$-tensors
I want to compute the dimension of the symmetric $k$-tensors. I know that a covariant $k$-tensor $T$ is called symmetric if it is unchanged under permutation of arguments. Also, I know that the ...
2answers
128 views
### Reference for densities and pseudoforms and non-tensorial representations of $\operatorname{GL}(n)$ and associated vector bundles
I'm looking for a reference that will set me straight on a few things. It started out with densities. In John Lee's book, "Introduction to Smooth Manifolds", densities on vector spaces are functions ...
1answer
105 views
### Simple problem with the normal curvature tensor
If $M$ is a s-R(semi-Riemannian) submanifold of a s-R manifold $\overline{M}$ the function $R^{\perp}:\mathfrak{X}(M)\times\mathfrak{X}(M)\times\mathfrak{X}(M)^\perp\rightarrow\mathfrak{X}(M)^\perp$ ...
1answer
196 views
### Index/Einstein notation to derive Gibbs/Tensor relations
In a few continuum classes I have seen indicial notation used to derive relations in Gibbs notation. However, Gibbs notation is valid for all coordinates while indicial notation is valid only for ...
1answer
63 views
### Help needed with tensors [duplicate]
Possible Duplicate: An Introduction to Tensors Recently I came across the concept of tensors and heard it is very difficult to understand. Is there a ...
1answer
1k views
### What is the divergence of a matrix valued function?
According to Wikipedia: The divergence of a continuously differentiable tensor field $\underline{\underline{\epsilon}}$ is: ...
2answers
95 views
### Intuitive Examples of (r,0) Tensors
It's easy to find "intuitive" examples of $(0, r)$ tensors or even $(k, r)$ tensors $( k, r > 0)$. For the purposes of this question, I am considering a tensor in the "classical" sense as being ...
2answers
199 views
### Index notation for tensors: is the spacing important?
While reading physics textbooks I always come across notation like: $$J_{\alpha}^{\quad\beta},\ \Gamma_{\alpha \beta}^{\quad \gamma}, K^\alpha_{\quad \beta}.$$ Notice the spacing in indices. I can't ...
3answers
720 views
### Tensors, what should I learn before?
Here I will just be posting a simple question. I know about vectors but now I want to know about tensors. In a physics class I was told that scalars are tensors of rank 0 and vectors are tensors of ...
2answers
351 views
### is there a way to solve the following tensor equation?
I have the following tensor (takes a vector of length $m$ and returns a matrix $m \times m$): $C(y) = A \operatorname{diag}(A^T y ) A^{-1}$ for some invertible matrix $A$ of size $m \times m$ ($y$ ...
1answer
373 views
### Extracting angular velocity tensor from orthogonal matrices
Let us suppose we have two orthogonal rotation matrices representing a three-dimensional rotations $$\mathbf{R}(t)$$ and $$\mathbf{R}(t+\Delta t)$$ How is it possible to extract the angular velocity ...
2answers
200 views
### Tensors of order 3
I'm wondering what a tensor of order 3 looks like, and what it's purposes are. I've seen them written down before, but they look like matrices; I'm probably not understanding the concept well. How is ...
1answer
157 views
### general (asymmetric) real rank-2 tensor visualization in 3d
I have general rank-2 real tensor in 3d space represented as a 3x3 real matrix $M$ (it is gradient of a vector field). I am writing some code to visualize it in several isolated points, this is what I ...
0answers
525 views
### Invariant proof of the Contracted Bianchi Identity
In "Riemannian Manifolds: An Introduction to Curvature," John Lee states the following lemma: Lemma 7.7 (Contracted Bianchi Identity): The covariant derivatives of the Ricci and scalar curvatures ...
2answers
675 views
### Mathematically Precise Definition of Covariant and Contravariant Transformation
I am trying to understand the meanings of "covariant transformation" and "contravariant transformation" and how they are related. I have read the related Wikipedia article and still feel I cannot ...
2answers
448 views
### What is the definition of tensor contraction?
According to Wikipedia's page on tensor contraction: In general, a tensor of type $(m,n)$ (with $m \geq 1$ and $n \geq 1$) is an element of the vector space \$V \otimes \ldots \otimes V \otimes V^* ...
1answer
180 views
### How did the author of the following paper compute the curvature matrix?
I would like to be shown how the curvature matrix K on page 7 in the paper "Regularisation Theory Applied to Neurofuzzy Modelling" (Bossley) is computed. ...
0answers
36 views
### Is there a particular name for a'long-small-small' tensor/array?
I'm thinking of a 3D array, with dimensions small,small,large. I've taken to saying 'sausage' as shorthand (and I'm sure there are worse NSFW descriptions) but is there a 'legitimate' description for ...
2answers
282 views
### Is correct to say that every tensor is a spinor but not every spinor is a tensor?
Can spinors be seen as a generalization of tensors, but with complex numbers?
1answer
84 views
### Matrix representing $\Lambda^k$(A)
Let V , W be finite dimensional vector spaces over R. Let A : V->W be a linear map. Choose bases of V and W and the corresponding bases of $\Lambda^k$(V ) and of $\Lambda^k$(W). How to show that the ...
2answers
615 views
### Prove the determinant of a tensor is invariant
Given is a second-order tensor $T$, and three arbitrary vectors, $u$, $v$ and $w$, defined in Euclidean point space $\mathcal{E}$. Prove that the determinant of the tensor $T$ \$\det T=\frac{Tu.(Tv ...
1answer
252 views
### Relation between metric tensor and second fundamental form
I'm confused with these definitions. The metric of certain space and the second fundamental form seem to be the same object. I don't know what else to say, this is a pretty straight forward question. ...
1answer
148 views
### Trouble deriving the Harris Corner Detection
I just started studying a small paper about the Harris Corner Detection. The problem is I don't understand how step 7 is derived from step 6. In step 7 the expression is expanded in a way that we get ...
1answer
172 views
### gradient of row vector multiplied by scalar
I'm trying to re-write $v (u x)$ where $v$ and $u$ are row vectors and $x$ is a column vector as some expression $M x$ (or $\bar{v}x$, etc.). The motivation is because I'm trying to compute the ...
1answer
205 views
### Taylor expansion in time of the time component of a stress energy tensor
Perform a taylor expansion in 3 dimensions in time on the time compontent of of $T^{\alpha \beta}(t - r + n^{i} y_{i})$ given that $r$ is a contstant and $n^{i} y_{i}$ is the scalar product of a ...
2answers
281 views
### In tensor notation in Spivak's Calculus on Manifolds, what is that character that looks like a 3?
For example, saying that $T$ is a k-tensor one might see $T\in 3^k(V)$, of course it's not actually a 3. It looks somewhat like Fraktur font Z: $\frak{Z}$. I couldn't detexify it, and it doesn't ...
2answers
1k views
### Tensors as matrices vs. Tensors as multi-linear maps
So I read the answers in this question, and don't feel that much closer to an answer about how tensors as multi-linear maps and tensors as "multi-dimensional" matrices are truly related. For ...
5answers
3k views
### An Introduction to Tensors
As a physics student, I've come across mathematical objects called tensors in several different contexts. Perhaps confusingly, I've also been given both the mathematician's and physicist's definition, ...
2answers
2k views
### Intuitive way to understand covariance and contravariance in Tensor Algebra
I'm trying to understand basic tensor analysis. I understand the basic concept that the valency of the tensor determines how it is transformed, but I am having trouble visualizing the difference ...
1answer
354 views
### Einstein notation - difference between vectors and scalars
From Wikipedia: First, we can use Einstein notation in linear algebra to distinguish easily between vectors and covectors: upper indices are used to label components (coordinates) of ...
http://mathoverflow.net/questions/90467/extension-of-pointwise-convergence-of-a-sequence-of-uniformly-continuous-function
Extension of pointwise convergence of a sequence of uniformly continuous functions that converges on a dense set
It is known that a sequence of continuous functions on a metric space that converges pointwise on a dense subset need not converge pointwise on the full space. But what about if one assumes uniform continuity? Let me be more precise:
Let $X$ be a metric space and let $r_\alpha$ (for $\alpha=1,2,\ldots$) be a sequence of uniformly continuous functions $r_\alpha:X\to\mathbb{R}$. Furthermore, assume that $r:X\to\mathbb{R}$ is a uniformly continuous function such that $\lim_{\alpha\to\infty}r_\alpha(x)=r(x)$ for all $x$ in a dense subset $A\subseteq X$. Does this imply that $\lim_{\alpha\to\infty}r_\alpha(x)=r(x)$ for all $x\in X$?
-
3 Answers
If your sequence of functions $r_\alpha$ is uniformly equicontinuous, then this result should hold. That is, there should be one modulus of continuity for all functions in the sequence. Note that the sequence of @i707107 does not satisfy this stronger property. The proof goes along the same lines as the proof that C([0,1]) with supremum norm is a Banach (i.e. complete) space.
-
That's a good point. If you have equicontinuity on a compact metric space $X$, then the convergence holds at every point of $X$, and the sequence is uniformly convergent. If you don't have equicontinuity, then my example shows that pointwise convergence does not necessarily hold at every point. There is an even more extreme example, such as $f_n(x)=\sin(n!\,x)$ on $[-\pi,\pi]$. W. Rudin's "Principles of Mathematical Analysis", page 334, exercise 16 shows that the set of all points $x\in [-\pi,\pi]$ of pointwise convergence has measure zero, yet it contains all rational multiples of $\pi$. – i707107 Mar 7 2012 at 21:48
I don't know which example you have in mind in the first statement, but you can find such an example for your question on $X=[0,1]$: as pointed out by Peter, any continuous function on a compact $X$ is uniformly continuous. Consider $f_n(x)=0$ on $x\in [\frac{1}{n},1]$, and $f_n(x)=(-1)^nn(x-\frac{1}{n})$ on $x\in [0,\frac{1}{n}]$. This sequence converges pointwise to $0$ on $(0,1]$, which is a dense set in $[0,1]$, but $f_n(0)=(-1)^{n+1}$ does not converge.
-
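This boundary failure is easy to see numerically; here is a small sketch (my addition, not part of the original thread) encoding the sequence from this answer:

```python
# Sketch (my addition) of the counterexample above on X = [0, 1]:
# f_n(x) = 0 on [1/n, 1] and f_n(x) = (-1)^n * n * (x - 1/n) on [0, 1/n].
# The limit is 0 at every x in the dense set (0, 1], but f_n(0) oscillates.

def f(n: int, x: float) -> float:
    """n-th function of the counterexample sequence (n >= 2)."""
    if x >= 1.0 / n:
        return 0.0
    return (-1) ** n * n * (x - 1.0 / n)

# At any fixed x > 0 the tail of the sequence is identically zero:
print([f(n, 0.5) for n in range(2, 6)])  # [0.0, 0.0, 0.0, 0.0]
# At x = 0 the values alternate near -1, +1 (namely (-1)^(n+1)), so no limit:
print([f(n, 0.0) for n in range(2, 6)])
```

So pointwise convergence on a dense set, even with uniformly continuous $f_n$ and limit, does not extend to all of $X$; uniform equicontinuity (as in the other answer) is what rescues the conclusion.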
Since all continuous functions on a compact metric space are uniformly continuous, one can construct an easy counterexample on $X=[0,1]$.
-
http://math.stackexchange.com/questions/44326/solve-for-level-payment-with-a-twist?answertab=active
# Solve for level payment with a twist
Reworded trying to clarify. Also corrected example to correctly state 4 years and 150 days.
I'm struggling with how to solve for a level withdrawal that will reduce a starting balance to zero over n years and m days, assuming interest rate x. Withdrawals and interest happen annually, with one twist. The first withdrawal happens right away and earns no interest; the second withdrawal happens after m days, so it involves a partial year's worth of interest (if m is not 365); the withdrawals thereafter happen annually. The partial-year interest would be calculated as (1+i)^(m/365).
Is there a single formula or combination of formulas I can use that will give me the level withdrawal amount if I know the initial amount, the interest rate and the number of years and days?
Here's an example with everything known: Deposit 100,000, interest rate 3%, 4 years and 150 days, withdrawal \$17667.40.
* 100,000 starting deposit
* 82332.60 (subtract 17667.40 after 0 days)
* 83338.83 (add 150 days of interest on 82332.60)
* 65671.43 (subtract 17667.40 after 150 days)
* 67641.57 (add 1 year's interest on 65671.43, at 1 year and 150 days)
* 49974.17 (subtract 17667.40 after 1 year and 150 days)
* 51473.40 (add 1 year's interest on 49974.17, at 2 years and 150 days)
* 33805.99 (subtract 17667.40 after 2 years and 150 days)
* 34820.18 (add 1 year's interest on 33805.99, at 3 years and 150 days)
* 17152.78 (subtract 17667.40 after 3 years and 150 days)
* 17667.36 (add 1 year's interest on 17152.78, at 4 years and 150 days)
* -0.03 (subtract 17667.40 after 4 years and 150 days; close enough to zero, so the withdrawal is between 17667.39 and 17667.40)
-
To be clear: you have a starting bank balance $X_0$, and to get the balance in year $t$, you take $X_t$, add on the interest from that year, then subtract the withdrawal amount? And your question is how to calculate the withdrawal amount so that you reduce the balance to zero after $n$ years, with the additional complication that your 'year 1' might not be a full year? – Chris Taylor Jun 9 '11 at 14:23
Close but not quite I think, here is a sample order of operations to help explain my question better. deposit X0, subtract withdrawal amt y to yield X1, credit interest for partial year on principal of X1, subtract withdrawal amt y to yield X2, credit interest for full year on principal of X2, subtract withdrawal amt y to yield Xn, credit interest for full year on Xn... etc. Then knowing the interest rate, number of years n and initial deposit, how do I find withdrawal amount y? – brentj Jun 9 '11 at 15:34
## 2 Answers
First I'll solve the problem assuming there is no immediate withdrawal and no unscheduled partial-term second withdrawal, since we can account for those afterward by adding a few terms by hand and altering the deposit amount.
Here's a quick run through a specific example. Say you deposited \$100,000, you were going to withdraw \$5000 after every interest period, and you received 6% interest each period. At each stage, the current money is increased by the interest (multiply by $1.06$), and the withdrawal is subtracted, so your account would go like this:
$$\begin{array}{rl} \text{Deposit, withdrawal 0:}&100000 \\ \text{After withdrawal 1:}&(100000)(1.06) - 5000 \\ \text{After withdrawal 2:}&(100000)(1.06)^2 - (5000)(1.06) - 5000\\ \text{After withdrawal 3:}&(100000)(1.06)^3 - (5000)(1.06)^2 - (5000)(1.06) - 5000\\ &\vdots \\ \text{After withdrawal n:}&(100000)(1.06)^n - (5000)(1.06)^{n-1} - \dots - (5000)(1.06)^0\\ =&(100000)(1.06)^n - (5000)\sum_{k=0}^{n-1}(1.06)^k \\ \end{array}$$
This should suggest what happens in general, and in fact we can just replace those numbers with variables and see that the same thing happens. Let $d$ be the initial deposit, $r$ be the interest rate for each year-long-term as a decimal like 1.06 for 6%, and let $w$ be the withdrawal amount each term:
$$\begin{array}{rl} \text{Deposit, withdrawal 0:}&d \\ \text{After withdrawal 1:}&dr - w \\ \text{After withdrawal 2:}&dr^2 - wr - w\\ \text{After withdrawal 3:}&dr^3 - wr^2 - wr - w\\ &\vdots \\ \text{After withdrawal n:}&dr^n - wr^{n-1} - \dots - wr^0\\ =&dr^n - w\sum_{k=0}^{n-1}r^k \\ \end{array}$$
We can evaluate that sum since it's a geometric series, so after $n$ years and withdrawals, the amount of money is:
$$dr^n - w(\frac{r^n - 1}{r - 1})$$
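Before moving on, the closed form can be sanity-checked against the year-by-year recursion in the tables above; a quick sketch (my addition, not part of the original answer):

```python
# Sketch (my addition): verify the closed form d*r**n - w*(r**n - 1)/(r - 1)
# against the step-by-step rule "multiply by r, subtract w".

def balance_recursive(d: float, r: float, w: float, n: int) -> float:
    """Balance after n interest periods and n withdrawals."""
    b = d
    for _ in range(n):
        b = b * r - w
    return b

def balance_closed(d: float, r: float, w: float, n: int) -> float:
    """Closed-form balance via the geometric-series sum."""
    return d * r ** n - w * (r ** n - 1) / (r - 1)

# The 6%-interest example from the answer: deposit 100,000, withdraw 5,000.
for n in range(1, 11):
    assert abs(balance_recursive(100_000, 1.06, 5_000, n)
               - balance_closed(100_000, 1.06, 5_000, n)) < 1e-6
print("closed form agrees with the recursion")
```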
Now, in your weird not-the-same-frequency withdrawal case, we need to stick two extra terms on the front. The first two withdrawals can be interpreted as changing the initial deposit for this term we just came up with.
The first withdrawal makes our deposit $d-w$ instead of $d$. The $m$ days of interest correspond to letting our $d-w$ accrue $r^{(m/365)}$, so this changes the deposit to $(d-w) r^{(m/365)}$. The second withdrawal at this point changes it to $(d-w) r^{(m/365)} - w$. So we'll substitute this in for $d$ where it occurred in the earlier equation.
If you've been skipping all of this so far, here's the equation. Plug in the variables, set it to $0$, and solve for $w$.
$$((d-w) r^{(m/365)} - w)r^n - w(\frac{r^n - 1}{r - 1})$$
At this point we go to a computer algebra system and ask it to simplify this expression or solve it for $w$, because we're lazy (but not so lazy as to go find out whether the CAS or Excel has a built-in function for doing all of this for us).
Edit, actually did the above, and it pops out:
$$w = \frac{d (r-1) r^{\frac{m}{365}+n}}{-r^{\frac{m}{365}+n}+r^{\frac{m}{365}+n+1}+r^{n+1}-1}$$
In the particular case of your example in the OP, we want to solve for $w$ in
$$((100000-w) (1.03)^{(150/365)} - w)(1.03)^4 - w(\frac{(1.03)^4 - 1}{(1.03) - 1})=0$$
which by the above means we want
$$\frac{100000 (1.03-1) 1.03^{\frac{150}{365}+4}}{-1.03^{\frac{150}{365}+4}+1.03^{\frac{150}{365}+4+1}+1.03^{4+1}-1}$$
which gives $w = 17667.39411...$. (here is a link to wolframalpha for evaluating it http://bit.ly/lof47Q )
Note I used $n=4$ years here. In your example you seem to have only gone $4$ terms despite saying 150 days plus five years. If you use $n=5$ you get $15355.50$.
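For completeness, here is the same formula as code (my addition; a sketch with the numbers from the question, and `level_withdrawal` is a name I made up):

```python
# Sketch (my addition) of the closed form derived above: deposit d,
# annual rate, a first partial period of m days, then n further full years.

def level_withdrawal(d: float, rate: float, m: int, n: int) -> float:
    r = 1.0 + rate
    a = m / 365.0
    return (d * (r - 1) * r ** (a + n)
            / (r ** (a + n + 1) - r ** (a + n) + r ** (n + 1) - 1))

w = level_withdrawal(100_000, 0.03, 150, 4)
print(round(w, 2))  # 17667.39

# Replay the schedule from the question: withdraw at day 0, accrue
# 150 days of interest, withdraw, then four annual grow-and-withdraw steps.
bal = 100_000 - w
bal = bal * 1.03 ** (150 / 365) - w
for _ in range(4):
    bal = bal * 1.03 - w
print(abs(bal) < 1e-6)  # True
```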
-
@brentj I edited the post to add the actual formula for w; figured I'd point it out to you so you can double check it against however you're finding the roots yourself. – matt Jun 10 '11 at 5:18
Upvoted again for taking the extra steps of solving for W and providing link to Wolfram. – brentj Jun 10 '11 at 14:38
OK, suppose the initial balance is $B$, the annual interest rate is $r$, payment in the amount of $x$ is made every $m$ days, the total number of payments to be made is $s$. Let me write $v=1+(mr/365)$. At time zero, you pay $x$ and bring the balance down to $B-x$. After $m$ days, the balance has grown to $(B-x)v$, you pay $x$ to bring it down to $$(B-x)v-x=vB-(v+1)x$$ After $2m$ days it grows to $v^2B-(v^2+v)x$, and you bring it down to $$v^2B-(v^2+v+1)x$$ Now you can see what's happening; after $s$ payments, the balance is $$v^{s-1}B-(v^{s-1}+v^{s-2}+\cdots+1)x=v^{s-1}B-{v^s-1\over v-1}x$$ Set this equal to zero, and solve for $x$.
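This model can be sketched in code (my addition; as the follow-up comment notes, it assumes every payment is $m$ days apart, which is not quite the OP's schedule):

```python
# Sketch (my addition) of this answer's equal-spacing model: balance B,
# annual rate r, a payment x every m days, s payments in total, with
# per-period growth v = 1 + m*r/365.  Zero final balance gives
#   x = v**(s-1) * B * (v - 1) / (v**s - 1).

def equal_spacing_payment(B: float, r: float, m: int, s: int) -> float:
    v = 1 + m * r / 365
    return v ** (s - 1) * B * (v - 1) / (v ** s - 1)

# Verify by running the recursion: pay at time zero, then grow and pay.
B, r, m, s = 100_000, 0.03, 150, 6
x = equal_spacing_payment(B, r, m, s)
bal = B - x
for _ in range(s - 1):
    bal = bal * (1 + m * r / 365) - x
print(abs(bal) < 1e-6)  # True
```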
-
this is close, but not quite the answer I'm looking for. It computes a payment of 17922.09 for my example. I think where it gets off track is that for the first period $m$ can be between 1 and 365, and thereafter $m = 365$. Thanks though, it gave me some ideas to look at. – brentj Jun 10 '11 at 3:49
@brentj, sorry, didn't read closely enough, thought all the payments were $m$ days apart, missed that after the second they're annual. – Gerry Myerson Jun 10 '11 at 5:17
http://physics.stackexchange.com/questions/tagged/atomic-physics
# Tagged Questions
The atomic-physics tag has no wiki summary.
1answer
20 views
### Table of matrix elements of powers of r for radial functions in H atom
I'm looking for some references here. I hope this is the right place to ask. I need to find a table of (or a formula from which to extrapolate) the matrix elements of the radial functions of the ...
1answer
64 views
### Highest naturally occurring binding energy of electrons
I was wondering which element has the highest binding energy of an electron. Is it simply the 1s electron of the heaviest stable element? If so, can somebody tell me where I can find a table of ...
1answer
114 views
### Stark Effect on the 1st excited state of Hydrogen
I know the ground state of hydrogen is unaffected by the Stark effect to first order. And I also know that the 1st excited state is split from 4 degenerate states to 2 distinct, and 1 degenerate state ...
0answers
18 views
### Degeneracy of orbitals in a magnetic field
Why is it that in an external (uniform) magnetic field the degeneracy of d, f orbitals is lost but the degeneracy of p orbitals remains intact, assuming the main cause of losing degeneracy is the difference ...
1answer
53 views
### Energy required for ionizing Helium
The energy required to remove both electrons from the helium atom in its ground state is 79.0 eV. How much energy is required to ionize helium (i.e., to remove one electron)? ...
0answers
29 views
### Max wavelength of radiation to remove electron from $He^{+}$ [closed]
I am thinking, I use the formula $$E_n = -13.6 \frac{Z^2}{n^2}$$ $$E_n = -13.6 \frac{2^2}{1^2}$$ But this gives $54.4$; the correct answer is $22.8$. What's wrong?
0answers
27 views
### State of (orbit of) electron given wavelength [closed]
I was thinking I use the formula $$\frac{1}{\lambda} = R(\frac{1}{2^2} - \frac{1}{n^2})$$ But what I got looks something like: $\frac{1}{1.99\times 10^{-9}} = 1.097\times 10^7$ ...
1answer
83 views
### Finding the wavelength of an electron in its ground state?
To find the wavelength of an electron in its ground state in a hydrogen atom, would I or could I do the following? Use the ground state energy (-13.6eV) in $E^2 = m^2c^4 + p^2c^2$ Solve for $p$ Use ...
3answers
217 views
### How electricity, and generating electricity works on the atomic level?
I am trying to understand the basic physics of how electricity works. Unfortunately it seems most online material is either complex full-blown mathematical equations, or water pump analogies. I ...
0answers
34 views
### Is it reasonable to interpret the Lamb shift as vacuum induced Stark shifts?
This is a pretty hand-wavy question about interpretation of the Lamb shift. I understand that one can calculate the Lamb shift diagrammatically to get an accurate result, but there exist ...
1answer
40 views
### I want some information about population inversion in graphene & build laser with this theory
I have read the paper Theoretical Study of Population Inversion in Graphene under Pulse Excitation. A. Satou, T. Otsuji and V. Ryzhii. Jpn. J. Appl. Phys. 50 no. 7, pp. 070116-070116-4 (2011). ...
1answer
58 views
### how laser interact with atoms?
I am reading a book introducing basic concepts of lasers. It is pretty shocking to me that people can generate a beam with almost all photons in the same state. In the book, it said that two-level atoms ...
1answer
153 views
### Huge confusion with Fermions and Bosons and how they relate to total spin of atom
I am supremely confused about when something has spin and when it does not. For example, atomic Hydrogen has 4 fermions: three quarks to make a proton, and 1 electron. There is an even number of fermions, ...
0answers
27 views
### Recommend AMO physics news channel [closed]
I'm very interested in Atomic, molecular, and optical(AMO) physics. Could you recommend some good place to see AMO physics news(not too technical)? For example, I found some "SpaceRip" on youtube is ...
0answers
47 views
### Breaking of a covalent bond
When a bond between two atoms is broken, why is only one electron released, and not two? (As two electrons make up a covalent bond.)
1answer
48 views
### What does the Atomic Form Factor mean?
I was reading about Nuclear Physics and the author mentioned something about the Atomic form factor, something related to the Fourier Transform of the spatial distribution of the electric ...
1answer
157 views
### Two photons transition
If an atom in its ground state is coupled to an electromagnetic field, it can absorb a photon if the EM field contains one with the right frequency. These transitions depend on $⟨f|H_i|i⟩$ (from ...
0answers
29 views
### Thermionic emission, delayed emission and predissociation
In molecular photodissociation, are thermionic emission, delayed emission and predissociation the same? If not, what is the difference between them? My question is not about solids, but I ...
1answer
52 views
### How is a Rydberg Blockade Radius defined?
Rydberg blockade is a phenomenon in systems of three or more levels of Rydberg-dressed atoms.
0answers
33 views
### Where can I find the Bohr-Sommerfeld condition?
I need to solve the Hydrogen Atom using the phase integral [Bohr-Sommerfeld condition] but I don't know where I can find it. Help me please!
0answers
49 views
### Where do electrons get the energy to remain in orbit? [duplicate]
As we know, electrons continuously revolve around the nucleus at high velocity without falling into it, beating its force of attraction. My question is: where do electrons get the energy to revolve around ...
2answers
57 views
### Optical trapping problem
Can we make light slower by applying optical trapping (I mean applying laser beam to lower the speed of light)?
1answer
57 views
### Thermionic emission and delayed emission
I want to understand the concepts behind thermionic emission. In thermionic emission, energy randomization occurs and the energy may be split into electronic or roto-vibrational states. If this ...
0answers
92 views
### What is the height of the electron orbits of an atom?
What is the height of the electron orbits of an atom? (How far are the energy levels of the electron relative to the center of the atomic nucleus?) How fast do electrons move in their orbits?
2answers
512 views
### Frequency of an Electron
My question is very simple. If frequency is defined as the cycles per unit time, then what is meant by the "frequency of an electron"? If the rotation of an electron around a nucleus is considered then, ...
2answers
83 views
### Is the photon energy required to cause an atomic transition $\Delta E+\Delta KE$, where $\Delta E$ is the “transition energy”?
An atom "at rest" can absorb a photon, and while some of this energy goes into increasing the energy level of the electron, momentum must be conserved, and so some energy must also increase the ...
4answers
601 views
### If photon energies are continuous and atomic energy levels are discrete, how can atoms absorb photons?
If photon energies are continuous and atomic energy levels are discrete, how can atoms absorb photons? The probability of a photon having just the right amount of energy for an atomic transition is ...
0answers
49 views
### Where to find probability density plots for all elements?
Does anyone know where I can find something similar to this, but for all elements? I would love to find something with the same image quality. Also, is there any software that can produce images ...
1answer
99 views
### Why it is called a Newton Sphere? (Velocity map imaging)
In velocity map imaging (photo-dissociation and photo-emission), the ejected particles form a newton sphere. I didn't really get the concept why it is called a "newton sphere" and also why at the ...
0answers
96 views
### Does the electron have spin in its own reference frame?
In our atomic physics class, we saw that the spin-orbit coupling term arises from the scalar product of the magnetic moment of the electron (proportional to its spin), and the magnetic field created ...
3answers
365 views
### What is the physical meaning/concept behind Legendre polynomials?
In mathematical physics and other textbooks we find the Legendre polynomials are solutions of Legendre's differential equations. But I didn't understand where we encounter Legendre's differential ...
3answers
136 views
### Do electrons in multi-electron atoms really have definite angular momenta?
Since the mutual repulsion term between electrons orbiting the same nucleus does not commute with either electron's angular momentum operator (but only with their sum), I'd assume that the electrons ...
1answer
79 views
### Optimal methods for mapping out molecules, atoms and nuclei and their energy levels?
I'm wondering if it would be possible to map out all the different types of molecules, atoms and nuclei and their energy levels on one page (even if in a generalised way)? But perhaps I'm referring to ...
1answer
417 views
### Explanation of energy levels in molecules, atoms, nuclei and their relationship
Why are the energy levels of molecules, the atoms that form them and the nuclei inside the atoms considered separately? Or phrased in a different way- what is it that makes their energy levels so ...
1answer
141 views
### Rutherford's Gold Foil Experiment
Can anybody explain how Rutherford bombarded a 0.0004 cm thick gold foil? How did he put it in a photographic sheet? Wasn't the foil too thin to be held? How did he know that the atoms were deflected ...
2answers
130 views
### Causality in a gedanken experiment on the hydrogen atom
Consider a gedanken(=thought) experiment where I am tracking the motion of the electron in a hydrogen atom with a time resolution of (say) $\Delta t = 10^{-20}$ seconds. Further assume (for ...
1answer
133 views
### Explanation on Atomic Orbitals and Molecular Orbitals
We were reading about atomic structures and bond making, and my teacher told me that when two atoms are fused or when they make a bond, two orbitals are formed: 1- Bonding Molecular Orbital & 2- ...
2answers
159 views
### Is the artificial gauge field a gauge field?
The so-called artificial gauge fields are actually the Berry connection. They could be $U(1)$ or $SU(N)$ which depends on the level degeneracy. For simplicity, let's focus on $U(1)$ artificial gauge ...
0answers
93 views
### General question on aligning a quantization axis
I have a general question on how to work with quantization axis. Here is the setup: I am looking at a single two-level atom placed at the origin $(0, 0, 0)$, which is unperturbed in the sense that ...
1answer
123 views
### Spin-orbit coupling constant for rubidium
I have come across the following question in my course notes: The $5s\to 5p$ transition in rubidium is split into two components with wavelengths of 780nm and 795nm respectively. For the $5p$ state, ...
1answer
97 views
### About Efimov States and Halo-Nuclei
I read that Halo nuclei could be seen as special Efimov states, depending on the subtle definitions. (The last sentence in the second to last paragraph of this Wikipedia article.) This does ...
0answers
47 views
### Where can I find a complete list of metamaterials up to today?
Where might I find a list of all the metamaterials up-to-date?
2answers
164 views
### high spin atoms SU(2) representation
I am very confused that some atoms, called high-spin or magnetic atoms, have spin greater than $\frac{1}{2}$ but are still said to have $SU(2)$ symmetry. Why not $SU(N)$?
1answer
206 views
### Is the structural similarity between atoms (smallest) and the universe (biggest) a coincidence, or can there be a reason beyond imagination?
Is the structural similarity between atoms (smallest) and the universe (biggest) a coincidence, or can there be a reason for this beyond imagination? It seems like, if one starts travelling from atoms... ...
0answers
116 views
### What is the Landé g factor?
What is the Landé g factor? I know that it gives the relation between the magnetic moment and the angular momentum, but I wanted to know why those quantities are related to each other and why the magnetic ...
3answers
330 views
### Disproving a refutation of quantum mechanics (QM) via a calculation of the ground state of the helium atom
This website http://www7b.biglobe.ne.jp/~kcy05t/ appears to refute Quantum mechanics using some proof. An important paper involved is this 'Calculation of Helium Ground State Energy by Bohr's ...
2answers
231 views
### How robust is Kramers degeneracy in real material?
Kramers' theorem relies on an odd total number of electrons. In reality, the total number of electrons is about $10^{23}$. Can those electrons be so smart as to count the total number precisely and decide to form ...
1answer
79 views
### What are relativistic and radiative effects (in quantum simulation)?
I'm reading about Quantum Monte Carlo, and I see that some people are trying to calculate hydrogen and helium energies as accurately as possible. QMC with Green's function or Diffusion QMC seem to be ...
2answers
120 views
### Why is there a factor of 1/2 in the interaction energy of an induced dipole with the field that induces it?
In this paper, there's the following sentence: ...and the factor 1/2 takes into account that the dipole moment is an induced, not a permanent one. Without any further explanation. I looked ...
1answer
322 views
### What exactly is a Fluorescent lamp?
A fluorescent tube (home-based) works on the principle of discharge of electricity through gases, as far as I can tell (I don't know much about cathode rays or gas discharge) What happens inside the ...
http://mathoverflow.net/questions/54065/about-injective-hull/54200
## About injective hull
Let $M$ be an $A$-module. Is its injective hull affected by whether I regard $M$ as an $A$-module or as an $A/\mbox{Ann}(M)$-module?
-
## 3 Answers
I'll follow up on what Karl said with an example closer to my own experience. Let Z be the ring of integers and p a positive prime. Then Z/pZ is injective as a Z/pZ-module, being a vector space over a field, whence Z/pZ is its own injective envelope (hull) as a Z/pZ-module. However, the injective envelope of Z/pZ as an abelian group is the Prüfer group Z(p^∞), which gives witness to Karl's statement that the injective envelope over A can be much larger than the injective envelope over A/ann(M). You can play this game with A any commutative Noetherian ring with 1, ann(M) = any maximal ideal of A, and M = A/I where I is the chosen maximal ideal. Karl's example presents very limited choice for I since k[[x]] is local. I think Proposition 2.27 and Lemma 4.24 of "Injective Modules" by Sharpe and Vámos present enough to figure out what is going on in the general case.
-
Yes, take $A = k[[x]]$ and $M = A/(x)$. Then as a $k = A/(x) = A/\text{Ann}(M)$-module, the injective hull of $k$ is $k$. As an $A$-module, the injective hull is much much bigger.
-
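To spell out both sides of these examples (my addition, standard facts: over a discrete valuation ring $A$ with fraction field $K$, the injective hull of the residue field is $K/A$):

```latex
\[
  E_{\mathbb{Z}}(\mathbb{Z}/p\mathbb{Z}) \;\cong\; \mathbb{Z}[1/p]/\mathbb{Z}
    \;=\; \mathbb{Z}(p^{\infty}),
  \qquad
  E_{k[[x]]}(k) \;\cong\; k((x))/k[[x]],
\]
\[
  \text{while}\qquad
  E_{\mathbb{Z}/p\mathbb{Z}}(\mathbb{Z}/p\mathbb{Z}) = \mathbb{Z}/p\mathbb{Z},
  \qquad
  E_{k}(k) = k.
\]
```

So the answer to the question is yes: the hull can change drastically when $M$ is viewed over $A/\operatorname{Ann}(M)$.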
What? The injective hull of a field $K$ is $K$?
-
http://climateaudit.org/2008/03/24/pcs-in-a-linear-network-with-homogeneous-spatial-autocorrelation/?like=1&source=post_flair&_wpnonce=843d76250c
by Steve McIntyre
## PCs in a Linear Network with Homogeneous Spatial Autocorrelation
As I observed a couple of posts ago, the Stahle SWM network can be arranged so that its correlation matrix is closely approximated by a Toeplitz matrix, i.e. there is a "natural" linear order. I also noted results from matrix algebra proving that, under relevant conditions, all the coefficients in the PC1 are of the same sign; there is one sign change among the coefficients in the PC2, two in the PC3, and so on. These results yield contour maps for the eigenvector coefficients with very distinct geometries – here's an example of the PC2 in the SWM network showing the strong N-S gradient in this PC.
As soon as one plots the site locations and contours the eigenvector coefficients, the raggedness of the original site geometry is demonstrated. But how much does this raggedness matter? This leads us away from matrix algebra into more complicated mathematics, as the eigenvectors which contour out to the pretty gradients in the earlier diagrams now become eigenfunctions with an integral operator replacing the simpler matrix multiplication.
As an exercise, I thought that it would be interesting to see what happened in an idealized situation in which the network was a line segment with spatial autocorrelation as a negative exponential function of distance between sites.
Because all the circumstances are pretty simple, one presumes that the resulting eigenfunctions are simple, and that there's a simple derivation for them somewhere in these simple cases.
Since I don’t know these derivations, I went back to more or less brute force methods and did some calculations using N=51; N=101,… , assuming that the sites were uniformly spaced along the line segment, creating a correlation matrix for rho=0.8, carried out SVD on the constructed correlation matrix, plotted the eigenvectors and experimented with fitting elementary functions to the resulting eigenvectors. As shown below, I got excellent fits (with some edge effect) for the following eigenfunctions:
$V_k(t)= \sin (k \pi t)$, where $k = 1, 2, \ldots$ and $t$ ranges over $[0,1]$.
So it looks like the eigenfunctions are pretty simple. One can also see how the number of sign changes increases by 1 as k increases by 1.
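The brute-force experiment is easy to reproduce. Here is a sketch (my addition, with the assumptions stated in the post: uniformly spaced sites, correlation $C_{ij} = \rho^{|i-j|}$, $\rho = 0.8$) that also checks the sign-change pattern:

```python
# Sketch (my addition) of the brute-force calculation described above:
# Toeplitz correlation matrix C_ij = rho^|i-j| for N uniformly spaced
# sites, eigendecomposition, and a count of sign changes in each PC.
import numpy as np

N, rho = 51, 0.8
idx = np.arange(N)
C = rho ** np.abs(idx[:, None] - idx[None, :])  # Toeplitz correlation

vals, vecs = np.linalg.eigh(C)          # eigenvalues in ascending order
vals, vecs = vals[::-1], vecs[:, ::-1]  # reorder as PC1, PC2, ...

def sign_changes(v: np.ndarray) -> int:
    s = np.sign(v)
    return int(np.sum(s[:-1] * s[1:] < 0))

# PC k should have exactly k-1 sign changes, as in the contour maps.
changes = [sign_changes(vecs[:, k]) for k in range(4)]
print(changes)  # [0, 1, 2, 3]

# And PC1 is close in shape to sin(pi * t) on [0, 1], up to edge effects:
t = (idx + 0.5) / N
fit = abs(np.corrcoef(vecs[:, 0], np.sin(np.pi * t))[0, 1])
print(round(fit, 3))  # close to 1 in practice
```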
Even the plots in the network illustrated yesterday show elements of these idealizations. For example, here's the PC1 from the combined network. I think that I can persuade myself that there are elements of the $\sin(k \pi t)$, $k=1$ shape from the idealized PC1 coefficient curve illustrated in the top right panel.
Next here’s the PC2 from the combined network. Again I think that I can persuade myself that there are elements of the sin (kt) k=2 shape in this example.
Obviously the functions are elementary and I’m sure that there are any number of elegant derivations of the formulas. But I think that the results are at least a little bit pretty in the context of something as humdrum as the Stahle SWM network. As one goes from a 1-D to a 2-D situation, the geometry is somewhat more complicated, but we’re still going to see well-constrained relatively elementary functions making up the eigenfunctions for squarish and rectangular shaped regions.
Also, here’s a plot of the normalized eigenvalues for N=101. Again, I’m sure that there’s a known distribution for these eigenvalues somewhere in the mathematical literature and would welcome any references.
This entry was posted on Mar 24, 2008 at 1:13 PM, filed under General and tagged chladni, stahle, toeplitz.
### 12 Comments
1. chefen
My first guess would be that Hermite-Gaussians would be a typical basis for your eigenfunctions. They come up all the time in rectangular problems. See for instance gas laser transverse modes. If it were properly circular geometry you'd use Laguerre-Gaussians. But this is just a feeling.
2. Glacierman
Try this:
http://www.nd.edu/~networks/Publication%20Categories/03%20Journal%20Articles/Physics/Spectra_Physical%20Rev%20E%2063,%20026704%20(2001).pdf
Steve: It's about eigenvalues, but for circumstances quite different from the ones at hand, and it does not derive results that are on point for the present situation.
3. Posted Mar 24, 2008 at 3:00 PM
It looks like the PC transform has useful similarities with Fourier or Laplace transforms. Could you borrow some knowledge from these well-studied transforms?
4. Steve McIntyre
#3. This is a very specialized case. Quite different sort of results occur when you get inhomogeneous systems.
5. Ellis
http://climate.gsfc.nasa.gov/publications/fulltext/North-Sampling-1982.pdf
6. MattN
You lost me at “eigenvector coefficients”. Can anyone translate this entry into English for us common folk?
7. Al
English: Some of the crucial bits of the current “best temperature reconstruction” are essentially the mathematical equivalent of zero divided by zero. Not only did they start with problematic data (tree-rings are precipitation sensitive), but then they used a method that unduly weights individual sets of data by their geographic relation to other points. Four points in a perfect square would be equally weighted. But if there were three in an equilateral triangle with one dead center, the dead center data point is going to basically be counted twice. It is more complex with non-geometric figures.
This highlights and exacerbates something that is known as ‘cherry picking’. The “best temperature reconstruction” doesn’t use all tree-ring studies. Or all tree-ring studies that were actually taken with temperature-tree-ring studies in mind. Instead, they were chosen on how well they “worked”. And some of the non-included data was then not archived on the basis “Well, I didn’t use it.”
So you can have a lone hockeystick treering and emphasize that particular signal with completely flat treering signals by just making sure that the flat ones are spaced around the periphery and the ‘key’ treering is centrally located. I’d actually like to see this simplified scenario carried out, just to emphasize how much the choice can be affecting things.
8. MattN
Thanks Al. That’s much better.
9. Steve McIntyre
#7 is not a translation of the post, but raises different matters. I can’t translate everything into simpler terms; this is already a more plentifully illustrated exposition of the point than you’ll find anywhere. If you don’t get all the nuances, don’t worry about it.
10. Sam Urbinto
I get the entire eigen-thing as operators on vectors.
Vectors are arrows, if you will: a geometric magnitude and direction. Collections of these form vector spaces, which can be scaled and added. Linear transformations can be made on vectors. So if I take a vector and double its length but not its direction, the eigenvector has an eigenvalue of +2. If I reverse the vector and double its length, I have an eigenvalue of -2.
So eigenvectors are the already-transformed lines, which are described as being different from the original by looking at the eigenvalue.
Then you start getting into different ways you can change each vector in a vector space and combinations, and then you get into math using matrices. And that's where I stop.
These might help.
http://members.aol.com/jeff570/e.html
http://planetmath.org/?op=getobj&from=objects&id=4397
Or this online linear algebra lecture #21 (so it might require some prerequisite knowledge…):
And a calculator
http://www.arndt-bruenner.de/mathe/scripts/engl_eigenwert.htm
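Sam's +2/−2 scaling example above can also be checked numerically; a small sketch (my own illustration, not part of the original comment):

```python
import numpy as np

# A linear map that doubles lengths along x and reverses-and-doubles along y.
A = np.array([[2.0, 0.0],
              [0.0, -2.0]])

# The coordinate axes are the eigenvectors; eig recovers the scale factors.
vals, vecs = np.linalg.eig(A)
print(sorted(vals.tolist()))  # [-2.0, 2.0], the eigenvalues from the example
```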
11. Alexander Harvey
Hi, I am truly late for this party.
I am not sure whether this is the best place to post this as it also refers to a later posting but here will have to do. I am reasonably sure of the following but it is not well grounded in any literature of which I am aware.
In the later post a link to Chladni Patterns is drawn which I think might possibly mislead. The vibrational patterns on a disc are determined by real boundary conditions, for the plate has real edges.
The similar patterns that appear in autocorrelated data may be determined simply by the subjective choice of the observational area, window or aperture. I guess this is your point.
The same is true of an autocorrelated causal time series. The eigenvectors for a specified autocovariance and segment length are always the same relative to the segment being analysed. Move the segment or aperture to an earlier or later time whilst preserving its length N, and the eigenvectors move with the aperture. They are not in any normal sense real.
I say this as there might be a temptation to think that the real data elements in the middle of an aperture necessarily behave differently from those at the extremities. If you sum the total variance for any data element according to the eigenvector/eigenvalue pairs you will get the same value, but it will be distributed across the eigenvector/eigenvalue pairs differently. However a problem arises if you decide to discard any of the low-order eigenvectors (the ones with small eigenvalues), as you are then forcing a pattern on the now-filtered data that would tend to move if you moved the aperture.
The eigenvectors across an aperture in a time series are also either symmetric or antisymmetric, despite the asymmetry of the autocorrelation function for all causal functions.
Having said that the eigenvectors may be determined by observer choices, that is not to say that they are arbitrary to the analysis. They have the property that, up to any possible degeneracy, they form a unique basis for which the eigenvalues are all fully independent. I believe all other choices for an orthogonal basis will have weighting values drawn from distributions that are correlated. Also, filtering by removing the variance attributable to any vector other than an eigenvector will leave a residue that, when projected back on the existing eigenvectors (or perhaps those resulting from the analysis of the filtered system, which will have one less degree of freedom), will result in weights and eigenvalues that are correlated. Typically neither the mean nor a linear slope is an eigenvector, and similarly their removal results in a residual that is not fully independent of the values of the mean and gradient removed. This is normally very minor but worth mentioning, given that statistical tests tend to assume that the residuals are fully independent of the variance of the vectors removed.
As the eigenvalues act like fully independent random variables, each adds separately to the total expected variance. As each eigenvector/eigenvalue pair has a different expected variance, the total variance may have a distribution similar to, but definitely not, Chi-squared, even for normally distributed generating noise. This divergence from Chi-squaredness has, I think, the potential to cause problems, but probably less so than the difficulty in constraining model parameters to a small enough region to avoid a Bayesian analysis under uncertainty in the parameters.
The eigenvectors represent a basis where the expected variance attributable is maximised for each eigenvector in turn. That is the expected variance as approximated by PCA/EOF type analysis over a large number of independent samples. That is to say, if one analysed the mean temperatures for each day in August this year, its representation according to the eigenvectors of the 31-day aperture with its known or suspected autocovariance function would not be very revealing, but a compilation of many different years of August values would reveal the eigenfunctions. Move the aperture to, say, mid-July to mid-August, and the sample weights would change, yet for a large sample the sample EOFs would largely stay the same relative to the aperture, not to the data, unless some real effect existed; the underlying eigenvectors as determined by the aperture size and the autocorrelation would stay the same relative to the aperture.
With regard to time series, there is another group of eigenvector/eigenvalue pairs of interest, those that represent the effect of events prior to the start of the aperture chosen for observation or analysis, on the data in that aperture. I will call them the history eigenvectors if you will allow.
Famously AR(1) has a single history eigenvector, an exponential decay. ARMA(1,1) also has a single history eigenvector, and in general ARMA(p,q) has the greater of p and q eigenvectors, provided that is no greater than the aperture length N. So for most ARMA(p,q) functions the historic content of the data in the aperture will not have a large number of degrees of freedom, as compared to models that have longterm persistence, where the historic DOFs may only be limited by the length of the aperture. The historic eigenvectors are of course anything but symmetric across the aperture; typically they each have an initial offset and a trend that reverts to the mean, perhaps with damped oscillations. I believe that these will also suffer correlation in the expectation of the eigenvalues whenever the mean, as is almost universally the case, and/or the gradient have been filtered out. That is to suggest that to simulate possible history functions for observed data one might prefer to simulate the full history eigenvectors using suitable noise weighted by their eigenvalues and then filter out the mean and gradient as applicable. To make this clearer, to simulate ARMA(5,3) data one could calculate the 5 historic eigenvector/eigenvalue pairs for the following aperture, drive these by 5 of the random noise variables, and combine this with an aperture driven assuming all the prior noise variables were zero, which can save using the very long run-in necessary for models other than AR(1), and also facilitate querying what plausible prior history could have contributed to the observed data.
In a similar but not identical way the expected observational lagged covariance or correlation function must be determined for the particular aperture size whenever, as is generally the case, the mean and possibly the gradient has been removed. Loss of the mean gives rise to the typical tendency for the now-filtered sample covariance/correlation function to cross through to negative values for some lag even when the full covariance function doesn't. Again the observed sample covariance function can be decomposed into its expected part, which has a weight representing a variance which for normal generating noise has a pseudo Chi-Squared distribution in much the same way as above, plus I suspect some eigenvectors whose eigenvalues represent a product-normal distribution, but I am unsure of the details. The point here being that the expected part of the lagged covariance function for any arbitrary response function is computable according to the aperture size, but even so the observed covariance/correlation function is but one instance of the sum of the expected part with some unknown weight plus all the nuisance parts, and determining unique model parameters from a single observation is fraught in the well-known way that it is. However it may be possible to see when the model is inadequate in cases where particularly the long lagged covariance is much larger than plausible for models that lack true longterm persistence; this seems to me to be the case for typical sample temperature series, as ARMA(p,q) models are commonly quite well constrained at the long lag extremity of their covariance function.
To summarise, I believe that EOFs/PCAs may commonly be largely a function of the observing aperture, but not necessarily so, and this may not be obviously the case in particular circumstances. Any procedure akin to EOF/PCA analysis will tend to converge to the underlying eigenvectors whenever there is no real feature present in the data, and in that case moving the aperture will also move the EOFs/PCAs largely unchanged but will alter their weights. I think that the potentially phantasmagoric nature of all this is covered in the posts that follow.
Alex
• Steve McIntyre
I appreciate the comment. The Chladni posts are among my favorites in the entire CA corpus and I welcome any thoughts on the topic, especially well-considered ones like this.
I am convinced that Chaldni patterns recur in some borehole inversions as well, though I only have notes on the topic.
http://math.stackexchange.com/questions/88260/sum-of-derivatives-of-a-polynomial?answertab=active
# Sum of derivatives of a polynomial
Let $p(x)$ be a polynomial of degree $n$ satisfying $p(x)\geq 0$ for all $x$. That is, for all $x$, $p(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0 \geq 0$, $a_n\neq 0$.
Show that $p(x)+p'(x)+p''(x)+\cdots+p^{(n)}(x)\geq 0$ for all $x$ where $p^{(i)}(x)$ is the $i^\text{th}$ derivative.
My interest: I know that we can rewrite the sum as follows
$$p(x)+p'(x)+p''(x)+\cdots+p^{(n)}(x) = Lp(x)$$
where $L := I + D + D^2 + \cdots + D^n$. Can we say anything about a linear operator of this kind so that it does not change the sign of the input it takes? I can try to solve the question by writing out all the derivatives and factoring them using $p(x)$, but I think there should be a clever way of showing this by the properties of $L$. Can I figure out a solution by just looking at $L$ and the sign of $p(x)$ as in the question? Where should I look for that?
-
I seem to remember that there was a slick trick exploiting that $(I-D)Lp = p \geq 0$, but I don't remember how that worked. – t.b. Dec 4 '11 at 14:56
## 2 Answers
Instead of $L_n := (1 + D + D^2 + \dots + D^{n})$, use $L_{\infty} := (1 + D + D^2 + \dots)$, which comes to the same thing for polynomials of degree $n$ or less. If we let
$$\sigma(x) = p(x)+p'(x)+p''(x)+\dots+p^{(n)}(x) = L_{\infty}(p)(x)$$
then we can sum the geometric progression in $D$ to get
$$\sigma(x) = ((1 + D + D^2 + \dots)p)(x) = ((1-D)^{-1}p)(x)$$
Thus
$$p(x) = ((1-D)\sigma)(x)$$
This might look like trickery, but if you check it against the original expression for $\sigma$ you can see that it works:
$$((1-D)\sigma)(x) = p(x)+p'(x)+p''(x)+\dots+p^{(n)}(x) - (p'(x)+p''(x)+\dots+p^{(n)}(x))$$ $$= p(x)$$
Now define $\tau(x) = e^{-x}\sigma(x)$, so $\tau'(x) = e^{-x}(\sigma'(x) - \sigma(x)) = e^{-x}((D-1)\sigma)(x) = -e^{-x}p(x)$. By hypothesis, this is $\le 0$ for all $x \in \mathbb R$. Also, $\sigma$ is a polynomial, so $\tau(x) \to 0$ as $x \to \infty$. Therefore $\tau(x) \ge 0$ for all $x$.
Hence $\sigma(x) \ge 0$ for all $x$, which is what we wanted.
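A quick numerical sanity check of this conclusion, using a hypothetical nonnegative polynomial $p(x) = (x-1)^2(x^2+1)$ of my own choosing (not from the question):

```python
import numpy as np
from numpy.polynomial import polynomial as P

# p(x) = (x - 1)^2 (x^2 + 1) >= 0 for all real x; coefficients low-degree first.
p = P.polymul([1.0, -2.0, 1.0], [1.0, 0.0, 1.0])

# sigma = p + p' + p'' + ... + p^(n)
sigma = np.zeros(1)
d = p
for _ in range(len(p)):
    sigma = P.polyadd(sigma, d)
    d = P.polyder(d) if len(d) > 1 else np.zeros(1)

# Evaluate sigma on a grid; it should be nonnegative, as the answer proves.
xs = np.linspace(-10.0, 10.0, 2001)
print(P.polyval(xs, sigma).min())
```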
-
+1. A similar argument shows that $p(x)+tp'(x)+t^2p''(x)+\ldots+t^np^{(n)}(x)\gt0$ for every $x$ and every $t\geqslant0$. – Did Dec 4 '11 at 15:51
@Didier: Thank you for correcting my mistake. I have reverted a couple of your gratuitous edits, because it is, after all, my response. – TonyK Dec 4 '11 at 15:55
Of course. Sorry for the trouble. Nice answer. – Did Dec 4 '11 at 15:56
Very nice! Not that it matters much, but there is now a switch from $1$ to $I$ in the second displayed formula (meaning: before that you write $1$ consistently, afterwards $I$). – t.b. Dec 4 '11 at 15:59
Note however that using $1$ (one) to denote $I$ (the linear map identity) is (1) formally incorrect and (2) in contradiction with the question itself. – Did Dec 4 '11 at 16:03
I have to admit the following solution was proposed to me by a friend; I did not find it myself:
Let $$f(x) = p(x) + p^\prime(x) + ... + p^{(n)}(x)$$
Note that $$f^\prime = p^\prime + p^{\prime \prime} + ...+ p^{(n)}$$
that is, $$f = p + f^\prime$$
Clearly $n$ is even. Hence $f$ has even degree, too. This implies that $f$ attains its absolute minimum (it's not a maximum, as $f$ behaves like $p$ at infinity) at some point $z_0$, hence $f^\prime(z_0) = 0.$ Consequently, $\forall x$, $$f(x) \ge \min(f) = f(z_0) = p(z_0) + f^\prime (z_0) = p(z_0) \ge 0$$
-
That's nice! And it seems to be completely different from my answer. After getting $f = p + f'$, your (friend's) answer uses the fact that $f$ attains an absolute minimum; my answer uses the fact that $e^{-x}f(x) \to 0$ as $x \to \infty$. So between them, the two approaches cover more classes of function than either answer alone. – TonyK Dec 5 '11 at 20:54
http://mathhelpforum.com/calculus/109788-related-rates-print.html
# Related rates
• October 22nd 2009, 06:26 PM
scorpion007
Related rates
http://img203.imageshack.us/img203/2729/math13.png
-----------
Any pointers? I can't see how to relate $h$ to $\ell$.
• October 22nd 2009, 06:30 PM
Arturo_026
Notice that it is asking you to "minimize" therefore this is an optimization problem, not related rates.
• October 22nd 2009, 06:47 PM
scorpion007
Oops!
Yes, indeed. But c should eventually be a single variable function, so I need to find a way to eliminate one of the variables.
• October 24th 2009, 09:11 PM
scorpion007
Any tips on this? I'm kinda stuck.
• October 24th 2009, 09:30 PM
Arturo_026
I too find this problem hard but I can assist you with what I know:
Notice that they ask you to minimize, thus you have to find c'(h), then set c'(h)=0, and when you have found that h, use c''(h) to see which solutions are minima and which are maxima.
I don't think you have to relate h to l since l was given as a constant, and they also tell you that your answer for h will be in terms of l.
Again, I'm not completely sure, but I hope I helped.
• October 24th 2009, 09:41 PM
scorpion007
Oh. I was under the impression that $\ell$ was variable?
Also, don't we need to somehow use that information about the density, rho?
I do know about finding critical points of a function, but I'm certain that I first must eliminate one of the variables, since this is a single-variable calculus subject, and there are no partials here.
http://math.stackexchange.com/questions/19119/approximating-pi-using-monte-carlo-integration/19202
# Approximating $\pi$ using Monte Carlo integration
I need to estimate $\pi$ using the following integration:
$$\int_{0}^{1} \!\sqrt{1-x^2} \ dx$$
using Monte Carlo integration.
Any help would be greatly appreciated. Please note that I'm a student trying to learn this stuff, so please be indulgent and try to explain in depth.
-
Could it be a typographic error? You have $$\frac{\pi}{4} = \int_0^1 \sqrt{1-x^2}\, dx$$ – Esteban Crespi Jan 26 '11 at 19:58
Yes!! can you explain how you reached that answer? – Zapacila Jan 26 '11 at 20:03
A picture may help. $x^2 + y^2 = 1$ gives the unit circle; if $y \ge 0$ and $x \in [0, 1]$, then we can rewrite this as $y = \sqrt{1 - x^2}$. The unit circle has area $\pi * 1^2 = \pi$, and we're taking just the upper right quarter of it. – user4689 Jan 29 '11 at 4:37
## 4 Answers
Generate a sequence $U_1,U_2,\ldots$ of independent uniform$[0,1]$ random variables. Let $Y_i = f(U_i)$, where $f(x)=\sqrt{1-x^2}$, $0 \leq x \leq 1$. Then, for sufficiently large $n$, $$\frac{{\sum\nolimits_{i = 1}^n {Y_i } }}{n} \approx \int_0^1 {f(x)\,{\rm d}x = } \int_0^1 {\sqrt {1 - x^2 } \,{\rm d}x} = \frac{\pi }{4}.$$
EDIT: Elaborating.
Suppose that $U_1,U_2,\ldots$ is a sequence of independent uniform$[0,1]$ random variables, and $f$ is an integrable function on $[0,1]$, that is, $\int_0^1 {|f(x)|\,{\rm d}x} < \infty$. Then, the (finite) integral $\int_0^1 {f(x)\,{\rm d}x}$ can be approximated as follows. Let $Y_i = f(U_i)$, so the $Y_i$ are independent and identically distributed random variables, with mean (expectation) $\mu$ given by $$\mu = {\rm E}[Y_1] = {\rm E}[f(U_1)] = \int_0^1 {f(x)\,{\rm d}x}.$$ By the strong law of large numbers, the average $\bar Y_n = \frac{{\sum\nolimits_{i = 1}^n {Y_i } }}{n}$ converges, with probability $1$, to the expectation $\mu$ as $n \to \infty$. That is, with probability $1$, $\bar Y_n \to \int_0^1 {f(x)\,{\rm d}x}$ as $n \to \infty$.
To get a probabilistic error bound, suppose further that $f$ is square-integrable on $[0,1]$, that is $\int_0^1 {f^2 (x)\,{\rm d}x} < \infty$. Then, the $Y_i$ have finite variance, $\sigma^2$, given by $$\sigma^2 = {\rm Var}[Y_1] = {\rm E}[Y_1^2] - {\rm E}^2{[Y_1]} = {\rm E}[f^2{(U_1)}] - {\rm E}^2{[f(U_1)]} = \int_0^1 {f^2 (x) \,{\rm d}x} - \bigg[\int_0^1 {f(x)\,{\rm d}x} \bigg]^2 .$$ By linearity of expectation, the average $\bar Y_n$ has expectation $${\rm E}[\bar Y_n] = \mu.$$ Since the $Y_i$ are independent, $\bar Y_n$ has variance $${\rm Var}[\bar Y_n] = {\rm Var}\bigg[\frac{{Y_1 + \cdots + Y_n }}{n}\bigg] = \frac{1}{{n^2 }}{\rm Var}[Y_1 + \cdots + Y_n ] = \frac{n}{{n^2 }}{\rm Var}[Y_1 ] = \frac{{\sigma ^2 }}{n}.$$ By Chebyshev's inequality, for any given $\varepsilon > 0$, $${\rm P}\big[\big|\bar Y_n - {\rm E}[\bar Y_n]\big| \geq \varepsilon \big] \leq \frac{{{\rm Var}[\bar Y_n]}}{{\varepsilon ^2 }},$$ so $${\rm P}\big[\big|\bar Y_n - \mu \big| \geq \varepsilon \big] \leq \frac{{\sigma^2}}{{n \varepsilon ^2 }},$$ and hence $${\rm P}\bigg[\bigg|\bar Y_n - \int_0^1 {f(x)\,{\rm d}x} \bigg| \geq \varepsilon \bigg] \leq \frac{1}{{n \varepsilon ^2 }} \bigg \lbrace \int_0^1 {f^2 (x) \,{\rm d}x} - \bigg[\int_0^1 {f(x)\,{\rm d}x} \bigg]^2 \bigg \rbrace.$$ So if $n$ is sufficiently large, with high probability the absolute difference between $\bar Y_n$ and $\int_0^1 {f(x)\,{\rm d}x}$ will be smaller than $\varepsilon$.
Returning to your specific question, letting $f(x)=\sqrt{1-x^2}$ thus gives $${\rm P}\Big[\Big|\bar Y_n - \frac{\pi }{4} \Big| \geq \varepsilon \Big] \leq \frac{1}{{n \varepsilon ^2 }} \bigg \lbrace \int_0^1 {(1 - x^2) \,{\rm d}x} - \frac{\pi^2 }{16} \bigg \rbrace = \frac{1}{{n \varepsilon ^2 }} \bigg \lbrace \frac{2}{3} - \frac{\pi^2 }{16} \bigg \rbrace < \frac{1}{{20n\varepsilon ^2 }},$$ where $\bar Y_n = \frac{{\sum\nolimits_{i = 1}^n {\sqrt {1 - U_i^2 } } }}{n}$.
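A short Python version of this estimator (a sketch of my own, not from the answer; the seed and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
u = rng.random(n)             # U_i ~ uniform[0, 1]
y = np.sqrt(1.0 - u * u)      # Y_i = f(U_i) = sqrt(1 - U_i^2)
pi_hat = 4.0 * y.mean()       # 4 * Y-bar approximates pi
print(pi_hat)                 # close to 3.14159
```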
-
How did you get from that integration to $\frac{\pi}{4}$? My math is very rusty.. – Zapacila Jan 26 '11 at 20:13
– Shai Covo Jan 26 '11 at 20:20
In the link above they get $(x/2)\sqrt{1-x^2} + \arcsin(x)/2$; how did you get $\frac{\pi}{4}$? – Zapacila Jan 26 '11 at 20:37
Let $F(x)=(x/2)\sqrt{1-x^2} + \arcsin(x)/2$. Note that $F(1)-F(0) = \arcsin(1)/2 = \pi/4$. – Shai Covo Jan 26 '11 at 20:46
Concerning the evaluation of the integral, note that $\int_0^1 {\sqrt {1 - x^2 } \,{\rm d}x}$ gives the area under the curve $y = \sqrt{1-x^2}$ as $x$ goes from $0$ to $1$, hence equal to $\pi/4$. – Shai Covo Jan 26 '11 at 21:28
Let's also elaborate on Ross Millikan's answer, adapted to the case $f(x)=\sqrt{1-x^2}$, $0 \leq x \leq 1$. Suppose that $(X_1,Y_1),(X_2,Y_2),\ldots$ is a sequence of independent uniform vectors on $[0,1] \times [0,1]$, so that for each $i$, $X_i$ and $Y_i$ are independent uniform$[0,1]$ random variables. Define $Z_i$ as follows: $Z_i = 1$ if $X_i^2 + Y_i^2 \leq 1$, $Z_i = 0$ if $X_i^2 + Y_i^2 > 1$, so the $Z_i$ are independent and identically distributed random variables, with mean $\mu$ given by $$\mu = {\rm E}[Z_1] = {\rm P}[X_1^2 + Y_1^2 \leq 1] = {\rm P}\big[(X_1,Y_1) \in \lbrace (x,y) \in [0,1]^2 : x^2+y^2 \leq 1\rbrace \big] = \frac{\pi }{4},$$ where the last equality follows from ${\rm P}[(X_1,Y_1) \in A] = {\rm area}A$ ($A \subset [0,1]^2$).
By the strong law of large numbers, the average $\bar Z_n = \frac{{\sum\nolimits_{i = 1}^n {Z_i } }}{n}$ converges, with probability $1$, to the expectation $\mu$ as $n \to \infty$. That is, with probability $1$, $\bar Z_n \to \frac{\pi }{4}$ as $n \to \infty$.
To get a probabilistic error bound, note first that the $Z_i$ have variance $\sigma^2$ given by $$\sigma^2 = {\rm Var}[Z_1] = {\rm E}[Z_1^2] - {\rm E}^2{[Z_1]} = \frac{\pi }{4} - \Big(\frac{\pi }{4}\Big)^2 = \frac{\pi }{4} \Big(1 - \frac{\pi }{4}\Big) < \frac{10}{59}.$$ The average $\bar Z_n$ has expectation ${\rm E}[\bar Z_n] = \mu$ and variance ${\rm Var}[\bar Z_n] = \sigma^2 / n$; hence, by Chebyshev's inequality, for any given $\varepsilon > 0$, $${\rm P}\big[\big|\bar Z_n - \mu \big| \geq \varepsilon \big] \leq \frac{{\sigma^2}}{{n \varepsilon ^2 }},$$ and so $${\rm P}\bigg[\bigg|\bar Z_n - \frac{\pi }{4} \bigg| \geq \varepsilon \bigg] < \frac{{10}}{{59n\varepsilon ^2 }}.$$
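For comparison with the first answer, here is a sketch of this hit-or-miss version in Python (again my own illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
x = rng.random(n)
y = rng.random(n)
z = (x * x + y * y <= 1.0)    # Z_i = 1 when (X_i, Y_i) lands in the quarter disc
pi_hat = 4.0 * z.mean()       # 4 * Z-bar approximates pi
print(pi_hat)
```

Note that the per-sample variance here, $\pi/4\,(1-\pi/4) \approx 0.169$, is larger than the $\approx 0.05$ of the mean-value estimator above, so hit-or-miss needs more samples for the same accuracy.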
-
I'll definitely have to review my math!! – Zapacila Jan 28 '11 at 7:28
No longer applicable as the integral has been corrected: I don't understand how this integral gets you $\pi$ and if you pull the $x$ out you get $\int_0^1x\sqrt{10}dx=\frac{\sqrt{10}}{2}$
The general idea of a Monte Carlo integration of $\int_0^1 f(x)dx$ is to take random pairs $(x,y)$ with $0\le x\le 1$ and $0\le y \le y_{max}$ and check how many of the pairs satisfy $y\le f(x)$. The integral is then the number of pairs under the curve, divided by the number of trials, and multiplied by the area of the box (width $\times\ y_{max}$, here just $y_{max}$).
-
hey i am very sorry i misspelled (first time using the alfabet) it's actually sqrt (1-x^2) inside the integration – Zapacila Jan 26 '11 at 19:54
So i need to make a c++ program that simulates this. i would random generate x from 0 to 1 and y from 0 to lets say 10.000 (the higher the more accurate).. do the math for f(x) an check f(x)<= y. Good. bu how do i relate to pi? – Zapacila Jan 26 '11 at 19:58
y would run from 0 to 1-you just need it to be big enough to be greater than f(x) for all x. So you would generate lots (say 10,000) pairs (x,y) in the unit square and count how many have y<sqrt(1-x^2). Presumably this is about 7850. The area would be 7850/10000 and multiplying by 4 gives your measurement of pi. – Ross Millikan Jan 26 '11 at 20:06
i'm missing something here: int x = 0; int y = 0; int k = 0; for (int i = 0; i < 10000; i++) { x = new Random().Next(0, 1); y = new Random().Next(0, 1); if (y < Math.Sqrt(1 - x * x)) k++; } Console.WriteLine("Pi={0}", (k/10000)*4); – Zapacila Jan 26 '11 at 20:54
@Zapacila: I don't read C very well, but with k an int, doesn't dividing by 10000 give 0? – Ross Millikan Jan 26 '11 at 21:00
C# implementation. Thanks to all who contributed!! Ross, Esteban, Shai
````class Program
{
static void Main(string[] args)
{
double x = 0;
var rd = new Random();
double sum = 0;
for (int i = 0; i < 1000000; i++) // the higher the more precise
{
x = rd.NextDouble();
sum += Math.Sqrt(1 - x*x);
}
Console.WriteLine("Pi={0}", (sum/1000000)*4);
Console.Read();
}
}
````
-
Later on, I'll give a probabilistic error bound. I hope you'll find it interesting. – Shai Covo Jan 26 '11 at 21:46
Thanks Shai for you devotion to the subject. I'm most interested since i have a project based on this. Please don't forget to share lots of details.Again Thanks! – Zapacila Jan 26 '11 at 22:33
google searchable: sqrt(1-x^2) dx pi estimate integration algorithm – Zapacila Jan 26 '11 at 22:37
http://physics.stackexchange.com/questions/40788/how-to-measure-force-of-impact-inside-container/40840
# How to measure force of impact inside container?
I am in 7th grade and for my science fair project, I need a way to measure the force on a dropped object when it hits the ground. What I am trying to determine is which packing materials provide the best protection for an object in a collision. So I am planning on dropping containers filled with different packing materials surrounding some sort of force-measuring device in the middle. But I don't know how to either obtain or construct the force-measuring device.
Because I am measuring the effectiveness of the packing material, I need to measure the force inside. One method I thought about was having a metal ball sitting on top of clay. After hitting the ground, the ball will dig into the clay. I can measure how deep the impact is and assume that the deeper the hole, the greater the force. But I am not sure if this will work.
Does anyone have any suggestions on how to measure force (using either my idea or something else entirely)?
-
1
I am sorry, I don't understand your question. I am trying to measure the impact force from a collision. For instance, if an object in a box was surrounded by cotton or if it was surrounded by bubble wrap and it was dropped, which would protect the object better? I assume the packing material would absorb some of the force of the impact so I am trying to measure the remaining force. – user14040 Oct 14 '12 at 19:31
Would a spherical blob of playdoh do? Under loading it should flatten, and the more the force the more the flattening. – ja72 Oct 15 '12 at 12:41
## 3 Answers
That sounds like an excellent idea.
You could also test the depth idea by dropping a ball from different heights and seeing whether twice the height gives twice the depth into the clay.
Shipping stores sell shock indicators which are little plastic tubes with paint in them that will change color at a certain shock level - but your plan to make the shock sensor yourself would be a better way of showing a physical principle at work.
Good luck.
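For the write-up, the dent depth can even be turned into a rough average-force number via the energy balance $mgh = F\cdot d$. A back-of-envelope sketch (all numbers below are made-up example values, not from the answer):

```python
g = 9.81      # m/s^2, gravitational acceleration
m = 0.050     # kg: mass of the dropped ball (example value)
h = 1.0       # m: drop height (example value)
d = 0.004     # m: measured dent depth in the clay (example value)

# If all the fall energy goes into making the dent, then m*g*h = F_avg * d.
F_avg = m * g * h / d
print(f"average impact force ~ {F_avg:.0f} N")
```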
-
Perhaps have a pencil attached to the end of a spring inside the container, and a piece of paper the pencil can draw on. Then you only need to check the maximal extension (end of the plotted line) and use Hooke's law ($F=-kx$) with the spring constant (which you can measure by test weights)
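A sketch of the arithmetic this implies (calibration numbers are made up): hang a known test weight to find $k$, then convert the farthest pen mark into a force.

```python
g = 9.81          # m/s^2
m_test = 0.100    # kg: calibration weight (example value)
x_test = 0.020    # m: static extension it causes (example value)
k = m_test * g / x_test          # spring constant from Hooke's law, N/m

x_peak = 0.055    # m: farthest point of the pen's plotted line (example)
F_impact = k * x_peak            # peak force during the impact
print(f"k = {k:.1f} N/m, peak force = {F_impact:.2f} N")
```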
-
This sounds like a great project and I think you are on the right track.
There are hundreds of engineers working on these problems all over the world to estimate what forces crash test dummies experience in car crashes. A standard way to measure forces is using a spring, the more the spring extends or is compressed the higher the force. In your experiment this force will only act for a very short time, probably less than a second, so measuring the length of the spring in that time is hard to do.
There are at least two possible ways out, either attach a pen to the spring as Guy Ziv pointed out or use a 'spring' that once it is compressed does not go back to its original form. Clay might be a bit stiff, so I would experiment with different materials and balls. If the dent in clay is too small something like Jell-O might work better (the professional term is ballistic gelatin but that uses the same ingredients).
To support your argument an egg might also be useful but not for actual force measurements.
-
|
http://unapologetic.wordpress.com/2007/08/03/free-algebras/?like=1&source=post_flair&_wpnonce=c8d80d4293
|
The Unapologetic Mathematician
Free Algebras
Let’s work an explicit example from start to finish to illustrate these free monoid objects a little further. Consider the category $K\mathbf{-mod}$ of modules over the commutative ring $K$, with tensor product over $K$ as its monoidal structure. We know that monoid objects here are $K$-algebras with units.
Before we can invoke our theorem to cook up free $K$-algebras, we need to verify the hypotheses. First of all, $K\mathbf{-mod}$ is symmetric. Well, remember that the tensor product is defined so that $K$-bilinear functions from $A\times B$ to $C$ are in bijection with $K$-linear functions from $A\otimes B$ to $C$. So we’ll define the function $T(a,b)=b\otimes a$. Now there is a unique function $\tau:A\otimes B\rightarrow B\otimes A$ which sends $a\otimes b$ to $b\otimes a$. Naturality and symmetry are straightforward from here.
Now we need to know that $K\mathbf{-mod}$ is closed. Again, this goes back to the definition of tensor products. The set $\hom_K(A\otimes B,C)$ consists of $K$-linear functions from $A\otimes B$ to $C$, which correspond to $K$-bilinear functions from $A\times B$ to $C$. Now we can use the same argument we did for sets to see such a function as a $K$-linear function from $A$ to the $K$-module $\hom_K(B,C)$. Remember here that every module over $K$ is both a left and a right module because $K$ is commutative. That is, we have a bijection $\hom_K(A\otimes B,C)\cong\hom_K(A,\hom_K(B,C))$. Naturality is easy to check, so we conclude that $K\mathbf{-mod}$ is indeed closed.
Finally, we need to see that $K\mathbf{-mod}$ has countable coproducts. But the direct sum of modules gives us our coproducts (but not products, since our index set is infinite). Then since $K\mathbf{-mod}$ is closed the tensor product preserves all of these coproducts.
At last, the machinery of our free monoid object theorem creaks to life and says that the free $K$-algebra on a $K$-module $A$ is $\bigoplus\limits_n(A^{\otimes n})$. And we see that this is exactly how we constructed the free ring on an abelian group! In fact, that’s a special case of this construction because abelian groups are $\mathbb{Z}$-modules and rings are $\mathbb{Z}$-algebras.
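Spelled out (my addition, not in the original post): the multiplication on the free algebra $T(A)=\bigoplus_n(A^{\otimes n})$ is simply concatenation of tensors,

```latex
(a_1\otimes\cdots\otimes a_m)\cdot(b_1\otimes\cdots\otimes b_n)
  = a_1\otimes\cdots\otimes a_m\otimes b_1\otimes\cdots\otimes b_n
  \in A^{\otimes(m+n)},
```

with unit $1\in K=A^{\otimes 0}$.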
Posted by John Armstrong | Category theory
1 Comment »
1. [...] is exactly the free algebra on a vector space, and it’s just like we built the free ring on an abelian group. If we [...]
Pingback by | October 26, 2009 | Reply
|
http://mathoverflow.net/questions/16495?sort=votes
|
## Applications of homotopy groups of spheres
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
The study of the homotopy groups of spheres $\pi_i(S^n)$ is a major subject in algebraic topology. One knows for example that nearly all of them are finite groups. Some are explicitly known. There is a 'stable range' of indices which one understands better than the unstable part.
I think that there is an analogy (have to be careful with that word) to the distribution of primes: It seems that there exists a general pattern but no one has found it yet. It is a construction producing an infinite list of numbers (or groups) but no numbers were put into it. Such a thing always fascinates me.
The largely unknown prime pattern leads to applications in cryptography for example. Are there similar applications of the knowledge (or not-knowledge) of the homotopy groups of spheres? Are there applications to real natural sciences or does one study the homotopy groups of spheres only for their inherent beauty?
-
2
I don't think there's an analogy between homotopy groups of spheres and the distribution of primes in any way. Yes, they both yield some numbers which are not very well understood -- but you can't say that any two poorly understood sequences are analogous, just because they're both poorly understood! So I think these questions are not that well-motivated, at least by the middle paragraph above. On the other hand, I too wonder about questions like "what are homotopy groups of spheres good for, outside of the easy and obviously useful cases?" – Marty Feb 26 2010 at 22:56
## 3 Answers
A few comments on applications that aren't covered by the above Wikipedia article.
I don't know any applications to cryptography. Most cryptosystems require some kind of one-way lossless function, and it's not clear how to build that out of the complexity of the homotopy groups of spheres. Moreover, the homotopy groups of spheres have a lot of redundancy; there are many patterns.
There's work by Fred Cohen, Jie Wu and John Berrick in which they relate Brunnian braid groups to the homotopy groups of the 2-sphere. It's not clear if that has any cryptosystem potential, but it's an interesting aspect of how the homotopy groups of a sphere appear in a natural way in what might otherwise appear to be a completely disjoint subject.
Homotopy groups of spheres and orthogonal groups appear in a natural way in Haefliger's work on the group structure (group operation given by connect sum) on the isotopy-classes of smooth embeddings $S^j \to S^n$. I suppose that shouldn't be seen as a surprise though. Moreover, it's not clear to me that this is always the most efficient way of computing these groups. But I think all techniques that I know of ultimately would require some input in the form of computations of some relatively simple homotopy groups of spheres.
I think one of the most natural applications of homotopy groups of spheres, Stiefel manifolds and orthogonal groups would be obstruction-theoretic constructions. Things like Whitney classes, Stiefel-Whitney classes and general obstructions to sections of bundles. Not so much the construction of the individual classes, more just the understanding of the general method.
-
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
Wikipedia already gives a list of some examples.
-
In many physical problems the $S^n$ symmetry group is important, either as an internal symmetry or, for small $n$, even as a physical symmetry of the system. For such systems it is convenient to check for topological invariants, which give rise to conserved quantities during evolution. The Wikipedia article on solitons says:
A topological soliton, or topological defect, is any solution of a set of partial differential equations that is stable against decay to the "trivial solution." Soliton stability is due to topological constraints, rather than integrability of the field equations. The constraints arise almost always because the differential equations must obey a set of boundary conditions, and the boundary has a non-trivial homotopy group, preserved by the differential equations.
If the internal symmetry is the symmetry of a sphere, then the number of windings is one example of such a property (and in fact the simplest one). You may find such creatures (topological solitons) not only in string theory (too speculative for physics, but interesting and great for mathematics) but also in liquid crystal physics, solid state theory (in Ising models, for example), etc. You may even build a model of such a physical system yourself at home by gluing matches or sticks to a thread and twisting the resulting "chain". This model is used in demonstrations for physicists, and it is related to the sine-Gordon equation, which appears in the theory of Josephson junctions, for example.
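The winding number mentioned here is easy to compute for a discrete loop; a minimal sketch (the function and sample loop are mine): sum the signed turn angles between consecutive position vectors and divide by $2\pi$.

```python
import math

def winding_number(points):
    """Number of times a closed planar loop winds around the origin:
    accumulate the signed angle between consecutive position vectors."""
    total = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        # signed angle from (x0, y0) to (x1, y1): atan2(cross, dot)
        total += math.atan2(x0 * y1 - y0 * x1, x0 * x1 + y0 * y1)
    return round(total / (2 * math.pi))

# A discretized loop wrapping the circle twice:
loop = [(math.cos(4 * math.pi * k / 100), math.sin(4 * math.pi * k / 100))
        for k in range(100)]
print(winding_number(loop))  # 2
```

This is exactly the invariant classifying maps $S^1 \to S^1$, i.e. $\pi_1(S^1)\cong\mathbb{Z}$.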
-
2
I don't see much connection to the question, besides some word-association. The question asks specifically about the homotopy groups of spheres (beyond $\pi_1$, of course!). I don't see how this response could be useful. – Marty Feb 26 2010 at 12:18
I do not understand. Could you point me to where in the question there is any requirement that $i>1$? veit79 asks for applications of homotopy groups, which I gave. Do you not agree that the winding number is a topological invariant and appears in the sine-Gordon equation, which may be applied in the theory of Josephson junctions (and also liquid crystals)? For the sine-Gordon equation the configuration space for the field is $S^1$; for liquid crystals it is in some cases $S^3$. There are also models in field theory where you use $S^n$ directly, even in the limit $n\to\infty$, but this is purely theoretical. – kakaz Feb 26 2010 at 13:19
1
While the question does not make the $i>1$ requirement, it is generally understood that "homotopy groups of spheres" refers to the higher homotopy groups, the ones that are hard to compute and therefore those for which one would love to have applications which justify the effort! The computation of $\pi_1$ of the spheres can be done one or two classes after having defined homotopy in a topology course, so, while the knowledge of $\pi_1(S^1)$ is surely a nice and very useful example, it is not the kind of example the question has in mind (in all likelihood...) – Mariano Suárez-Alvarez Feb 26 2010 at 15:09
3
Conversely, the fact that the higher homotopy groups of $S^1$ are trivial has more applications than I can count. – Ryan Budney Feb 26 2010 at 15:13
|
http://mathoverflow.net/questions/73726/generating-r-regular-random-graph-in-parallel/73735
|
## Generating r-Regular Random Graph in Parallel
Goal:
I want to generate an $r$-regular graph with $n$ vertices; note $rn = 2m$, where $m$ is the number of edges.
Current best:
````(1) take n vertices; randomly pick a vertex v of degree < r.
(2) S = set of all vertices of degree < r, and not a neighbor of v.
(3) create an edge between v and a random element of S.
(4) repeat.
````
Question:
Is there a more parallel way to do this?
Clarification:
Suppose I wanted to randomly pick an element in [1...n]. I could do it sequentially like:
````take 1 w/ prob 1/n
else take 2 w/ prob 1/(n-1)
else take 3 w/ prob 1/(n-2)
...
````
Or I could do it "one shot" by generating a random element between [1...n].
Similarly, I want to generate an r-regular graph "one shot" rather than a single edge at a time.
Goal:
This is to build mental intuition of what it means to "uniformly pick an r-regular graph."
Thanks!
-
What sorts of random entities are available? Specifically, can you generate a random permutation? Part of my reason for asking is that the special case $r=2$ of your question is pretty close to asking for a random permutation. Another part is that conversely, for general $r$, by thinking of each vertex as having $r$ half-edges already attached, your problem becomes one of pairing up these half-edges, which looks similar to finding a random permutation of the set of half-edges (though you'd have to do something to avoid loops and multiple edges). – Andreas Blass Aug 26 2011 at 3:43
## 2 Answers
This question is more difficult than it seems.
Firstly, there is a difference between picking edges of a graph uniformly, and picking a $r$-regular graph uniformly.
Let $G_{r,n}$ be the set of $r$-regular graphs on $n$ nodes. By "uniformly pick a $r$-regular graph", you need to create an algorithm that chooses $G \in G_{r,n}$ with probability $1/|G_{r,n}|$. There are probabilistic methods to do this, perhaps they even lend themselves to parallelization.
See the section on algorithms for the generation of random regular graphs here. In particular: B. McKay and N. Wormald, Uniform Generation of Random Regular Graphs of Moderate Degree, Journal of Algorithms, Vol. 11 (1990), pp. 52-67.
-
The Wormald method (alluded to in @Daniel's answer) is an application of the "configuration model" introduced by Bollobas (look in his "Random Graphs"). It does not work (not in finite time) for graphs of degree greater than around 4. There has been more recent work on this by Kim and Vu (see http://dl.acm.org/citation.cfm?id=780576; there is presumably an arxiv version also), but be forewarned that it is of more theoretical than practical value (it is not clear how large a graph needs to be before the distribution is "close enough" to uniform). Interestingly, while the Wormald algorithm is morally parallel (you generate a configuration at once, then throw it out if it is not a simple graph), the algorithm Kim/Vu analyze (the algorithm is not due to them, and goes back to the ancients; the analysis is theirs) does it one vertex at a time, so "parallelism" is quite expensive, to get back to the OP's original question.
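A minimal sketch of that "morally parallel" configuration-model step: generate a whole pairing of half-edges at once, then reject it if it is not a simple graph (function name and retry cap are my own).

```python
import random

def random_regular_graph(n, r, max_tries=1000):
    """Configuration model: give each vertex r half-edges, pair them up
    uniformly at random, and reject pairings with loops or multi-edges."""
    if (n * r) % 2 != 0:
        raise ValueError("n*r must be even")
    for _ in range(max_tries):
        stubs = [v for v in range(n) for _ in range(r)]  # r half-edges each
        random.shuffle(stubs)
        edges = set()
        simple = True
        for i in range(0, len(stubs), 2):
            u, v = stubs[i], stubs[i + 1]
            if u == v or (min(u, v), max(u, v)) in edges:
                simple = False  # loop or repeated edge: reject the pairing
                break
            edges.add((min(u, v), max(u, v)))
        if simple:
            return edges
    raise RuntimeError("no simple pairing found; try a larger max_tries")

print(sorted(random_regular_graph(8, 3)))
```

For small degree the rejection rate is modest, which is why this works in practice only up to degree around 4, as noted above.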
-
1
This answer is really good and deserves to be upvoted (but I can't upvote), and since I could only accept one answer; accepted the one with the user with lower reputation score to encourage more MO participation. – random graphs Aug 26 2011 at 11:03
|
http://math.stackexchange.com/questions/186810/what-type-of-graph-problem-is-this
|
# What type of graph problem is this?
Let's say I have four groups:
A [0, 4, 9], B [2, 6, 11], C [3, 8, 13], D [7, 12]
Now I need a number from each group (i.e., a new group E [num in A, num in B, num in C, num in D]) such that the difference between the maximum number in E and the minimum number in E is as low as possible. What type of problem is this? Which graph algorithm would be best suited to solving this kind of problem? Thanks in advance.
-
2
Why do you think this has anything to do with graphs? – Chris Eagle Aug 25 '12 at 17:03
@ChrisEagle shortest path that travels only one node in every group. P.S: I'm asking here for help only because I'm not sure about it. – user1624525 Aug 25 '12 at 17:10
What is the type of each element ? is it an integer ? – Belgi Aug 25 '12 at 17:37
I guess if max num in E = min num in E, the problem reduces down to a combinatorial search problem that has non-polynomial complexity. – tatterdemalion Aug 25 '12 at 17:55
@Joe - from what I understand you have to find an option to get the minimum, not all options – Belgi Aug 25 '12 at 18:13
## 1 Answer
I can't think of any reasonable way to represent this problem using graphs.
I suggest the following to deal with your problem:
1) Keep the elements of each group sorted.
Denote by $n$ the total number of elements ($n=|A|+\dots+|D|$), and for each $M\in\{A,B,C,D\}$ denote by $U_{M}$ the union of the groups of higher alphabetical order (for example $U_{B}=C\cup D$).
2) Since we have to choose one element from each group, we have to choose one from $M$; for every element in $M$ find an element in $U_{M}$ such that the difference is minimal (since the elements are sorted you can do this in $O(\log|U_{M}|)$ using binary search).
Save this minimum in a variable and update it when you find a new minimum (also save the elements that gave you the minimum and the groups they came from, so you know which elements in which groups give this minimum).
At the end of this procedure you will know which two elements in which groups give you your minimum; of course you can take the other two elements for $E$ in an arbitrary way without it affecting the minimum.
The time complexity is $O(n\log n)$, since for every $M$ it holds that $|M|\leq n$ and $|U_{M}|\leq n$.
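For comparison (my addition; this is the standard "smallest range covering one element from each of $k$ sorted lists" sweep, a different technique from the procedure above), a heap-based sketch that treats all four groups jointly:

```python
import heapq

def min_range_pick(groups):
    """Smallest-range sweep over k sorted lists: returns (range, lo, hi)
    for a choice of one element per group minimizing max - min."""
    groups = [sorted(g) for g in groups]
    heap = [(g[0], i, 0) for i, g in enumerate(groups)]
    heapq.heapify(heap)
    cur_max = max(g[0] for g in groups)
    best = None
    while True:
        lo, i, j = heapq.heappop(heap)  # current minimum of the selection
        if best is None or cur_max - lo < best[0]:
            best = (cur_max - lo, lo, cur_max)
        if j + 1 == len(groups[i]):     # this group is exhausted: done
            return best
        nxt = groups[i][j + 1]
        cur_max = max(cur_max, nxt)
        heapq.heappush(heap, (nxt, i, j + 1))

# The groups from the question:
print(min_range_pick([[0, 4, 9], [2, 6, 11], [3, 8, 13], [7, 12]]))  # (3, 6, 9)
```

Here the optimum picks 9, 6, 8, 7 (one per group), giving range $9-6=3$.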
-
|
http://quant.stackexchange.com/questions/3960/equivalent-true-martingale-measures-and-no-arbitrage-conditions?answertab=oldest
|
# Equivalent (true) Martingale Measures and no-arbitrage conditions
I hope this is the correct site for this question, as it is rather theoretical...
In their famous paper, Delbaen and Schachermayer proved that the No Free Lunch with Vanishing Risk condition is equivalent to the existence of the Equivalent Local Martingale Measure. Are there any stronger no-arbitrage type conditions that guarantee that this measure is a true martingale measure (i.e. that all discounted asset prices are true martingales as opposed to merely local ones)?
I would be grateful for (academic) references.
-
## 1 Answer
(If I remember well,) the local nature of the equivalent measure in the NFLVR theory comes from the fact that the market $S$ is a locally bounded semi-martingale. If it is bounded, you obtain an equivalent martingale measure.
Should be in A general version of the fundamental theorem of asset pricing, by Freddy Delbaen and Walter Schachermayer (thanks to Richard's remark, your answer seems to be Theorem 1.1 of the paper).
-
You can get the paper here, taken from Prof. Schachermayer's homepage – Richard Aug 18 '12 at 19:47
|
http://math.stackexchange.com/questions/158746/intersection-between-two-functions
|
# Intersection between two functions?
How do I find the intersection between $f(x) = 3(x-2)^2$ and $g(x) = 3\sin(x-4)$? Since $g$ is a sinusoidal function, I couldn't solve it like a normal intersection.
-
Are you looking for $x$ such that $f(x) = g(x)$? – user20266 Jun 15 '12 at 16:36
Yes. I would assume to find two $x$s though because it's a quadratic. – user26649 Jun 15 '12 at 16:39
You should then maybe ask for the intersection of the graphs of two functions. To make even clearer what your question is about, you could write something like 'Intersection of graph of polynomial and sine function'. – user20266 Jun 15 '12 at 16:45
## 1 Answer
This leads to a transcendental equation. Typically, you can find approximate solutions for these using numerical root-finding algorithms, like Newton's method or the bisection method.
Occasionally an analytical solution to such an equation can be expressed using special functions, but these don't have a nice closed form.
However, in your case, it is easy to prove that in fact no solution exists at all in the real numbers (there are complex ones, though), since $f(x)$ is always above $g(x)$.
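A quick numerical sanity check of that claim (my sketch; the grid resolution is arbitrary). Outside $[1,3]$ we have $f(x)\ge 3\ge g(x)$, so only that interval needs scanning:

```python
import math

def h(x):
    """f(x) - g(x); any real intersection would be a root of h."""
    return 3 * (x - 2) ** 2 - 3 * math.sin(x - 4)

# On [1, 3], sin(x - 4) is negative, so h = f - g stays well above zero.
lo = min(h(1 + 2 * i / 10_000) for i in range(10_001))
print(lo > 0)  # True: h never reaches zero, so there is no real intersection
```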
-
|
http://stats.stackexchange.com/questions/46988/interpretation-of-expression-in-ordinal-regression
|
# Interpretation of Expression in ordinal regression
Suppose we are looking at an ordinal variable with $4$ categories. So there are three threshold coefficients $b_1, \dots b_3$ and one probit slope $b_4$. What is the interpretation of the following expressions: $$\frac{b_{1}}{b_{4}^{2}}, \frac{b_{2}}{b_{4}^{2}}, \frac{b_{3}}{b_{4}^2}, \frac{1}{b_{4}}$$
assuming a multivariate normal distribution?
-
3
In what sense are you assuming a multivariate response distribution? Typically we think there is a univariate normal response distribution that has been categorized by replacing the value of the latent variable with an ordinal category based on where the value is relative to the thresholds. – gung Jan 4 at 20:16
– whuber♦ Jan 4 at 21:36
Why are you looking at these expressions? The threshold divided by the square of the slope? Why would that be interesting? – Peter Flom Jan 4 at 23:23
|
http://mathoverflow.net/questions/57761/character-group-of-frobenius-kernels
|
## Character group of Frobenius kernels
Let $G$ be a semisimple algebraic group over an algebraically closed field $k$ of characteristic $p$ (e.g., $G=SL_n(k)$). Then $G$ is equal to its derived subgroup $[G,G]$. Consequently, the character group $X(G)$ of all algebraic group homomorphisms $G \rightarrow \mathbb{G}_m$ is trivial, because any character $\chi \in X(G)$ will vanish on the derived subgroup $[G,G]$. (Here $\mathbb{G}_m$ is the multiplicative group of units in $k$.)
Now I want to think of $G$ as an algebraic group scheme. Thus, $G$ is a representable functor from the category of commutative $k$-algebras to the category of groups. Given a commutative $k$-algebra $A$, $G(A) = \textrm{Hom}_{k-alg}(k[G],A)$, where $k[G]$ is the (usual) coordinate ring of $G$. For the example $G=SL_n$, we can be more explicit and say $G(A) = SL_n(A)$.
Since the characteristic of $k$ is positive, the group $G$ comes equipped with its Frobenius morphism $F: G \rightarrow G$. This is induced by a certain map of $k$-algebras $k[G] \rightarrow k[G]$, which, roughly speaking, is just the $p$-th power map $f \mapsto f^p$. In our example $G(A) = SL_n(A)$, the image of a matrix $(a_{ij}) \in SL_n(A)$ under $F$ is the matrix $(a_{ij}^p)$.
We can consider the scheme-theoretic kernel $G_1$ of $F$, and, more generally, the kernel $G_r$ of the $r$-th iterate $F^r$. These are the Frobenius kernels of $G$. They are normal subgroup schemes of $G$. They are not interesting algebraic groups in the classical sense (e.g., if $A=k$, then $(a_{ij}^p)=1$ only if $(a_{ij})=1$ and the kernel is trivial), but they are interesting as algebraic group schemes.
Let $G_r$ be the $r$-th Frobenius kernel of $G$. What is the structure of the character group $X(G_r)$ of algebraic group homomorphisms $G_r \rightarrow \mathbb{G}_m$? If $G$ is semisimple and simply-connected, is $X(G_r)$ trivial?
-
I suspect that the Frobenius kernels are all equal to their derived subgroups, and that the natural first attempt to prove this should be to mimic the proof that G is equal to its derived subgroup. – Peter McNamara Mar 8 2011 at 5:29
An addendum to my previous comment, via the in-exile Bcnrd: the notion of derived group is somewhat problematic for non-smooth group schemes (but the question has an affirmative answer for G 1-connected, by reduction to SL_2 subgroups). – Peter McNamara Mar 8 2011 at 20:17
## 2 Answers
It's probably most natural to consider this as a question about the (rational) representations of Frobenius kernels, in the spirit of Jantzen's book Representations of Algebraic Groups (Chapter II.3). Given a connected, simply connected semisimple group `$G$`, the irreducible representations of its Frobenius kernel `$G_r$` are parametrized naturally by `$p^r$` of the highest weights for `$G$` relative to a fixed maximal torus. Only the zero weight corresponds to a 1-dimensional representation (i.e., character of `$G_r$`) because `$G$` is semisimple.
-
Nice answer. I feel silly for not seeing that approach myself. By the way, I assume you mean that the irreducible representations of $G_r$ are parametrized naturally by the $p^r$-restricted highest weights for $G$, not by $p^r$ of the highest weights for $G$. – Christopher Drupieski Mar 8 2011 at 14:38
Incidentally, this argument would also work for the finite group of Lie type $G(\mathbb{F}_{p^r})$ (the fixed points in $G$ under $F^r$). – Christopher Drupieski Mar 8 2011 at 14:48
Yes, I'm using the relevant restricted weights here, though technically the "weights" for the Frobenius kernel are taken mod `$p^r$` in the weight lattice (character group of maximal torus). It's more straightforward here for the finite groups, since you only have to restrict the `$p^r$` irreducible representations of `$G$`, whereas passing to Frobenius kernels imitates in an enriched way passage to the Lie algebra in characteristic 0 theory. – Jim Humphreys Mar 8 2011 at 15:13
Historical question: it was Curtis who first described the simples for the Frobenius kernel when $r=1$ (or rather, he described the simple restricted Lie(G) representations). I guess the "analogous" description of simples for $G(\mathbf{F}_p)$ was done earlier? Maybe by Steinberg? I've somehow never known the sequence of events... – George McNinch Mar 8 2011 at 15:32
@George: In his 1960 papers Curtis studied restricted Lie algebra representations and started to work on representations of finite Chevalley groups over the prime field. But Steinberg's 1963 paper covered all groups of Lie type more systematically, including the twisted tensor product theorem. Curtis did work in the BN-pair setting in the early to mid-1960s (inspiring Carter-Lusztig), but algebraic group methods went much farther. The rank one case was studied concretely and much earlier by Brauer, with steps toward higher ranks by his students; this inspired both Curtis and Steinberg. – Jim Humphreys Mar 8 2011 at 16:45
The Frobenius kernel can never be trivial, except for g = -1. But at such g, the complement H completely loses its meaning as a complement, and thus so does the kernel. Tomas Perna (Pecik)
-
http://stats.stackexchange.com/questions/31442/calculating-the-confidence-interval-for-simple-linear-regression-coefficient-est
# Calculating the confidence interval for simple linear regression coefficient estimates
I have a data set of paired measurements $(x_1,y_1),(x_2,y_2),...,(x_n,y_n)$. I need to fit a linear regression line $y=ax+b$ to this data. Therefore, I have to estimate the parameters $a$ and $b$.
How can I then calculate the confidence interval for these estimated parameters?
I referred to the Wikipedia article on simple linear regression (http://en.wikipedia.org/wiki/Simple_linear_regression), which says:
Normal assumption
Under the first assumption above, that of the normality of the error terms, the estimator of the slope coefficient will itself be normally distributed with mean $\beta$ and variance $\sigma^2/\sum(x_i-\bar{x})^2$.
I didn't get how this formula was derived.
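A sketch of the derivation, as an editorial aside (not part of the original question): write $y_i=\alpha+\beta x_i+\varepsilon_i$ with independent $\varepsilon_i\sim N(0,\sigma^2)$. Since $\sum_i(x_i-\bar{x})=0$, the least-squares slope is a linear combination of the errors:

```latex
\hat{\beta}
  = \frac{\sum_i (x_i-\bar{x})\, y_i}{\sum_i (x_i-\bar{x})^2}
  = \beta + \frac{\sum_i (x_i-\bar{x})\,\varepsilon_i}{\sum_i (x_i-\bar{x})^2},
\qquad
\operatorname{Var}\big(\hat{\beta}\big)
  = \frac{\sigma^2 \sum_i (x_i-\bar{x})^2}{\left(\sum_i (x_i-\bar{x})^2\right)^2}
  = \frac{\sigma^2}{\sum_i (x_i-\bar{x})^2}.
```

Being a linear combination of independent normals, $\hat{\beta}$ is itself normal, which is exactly the statement quoted above.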
-
If this is homework, please tag it as such. – JohnRos Jun 30 '12 at 19:05
I referred to the wiki article. But I didn't get some of the stuff. I have reposted in the question – user34790 Jun 30 '12 at 19:59
## 2 Answers
Under normality assumptions for the error term in the model, the formulas for the least-squares estimates are:

$\hat{\beta}_0=\bar{y}-\hat{\beta}_1\bar{x}$ (where $\bar{x}$ is the mean of the $x_i$s and $\bar{y}$ is the mean of the $y_i$s) and $\hat{\beta}_1=(\sum x_iy_i-n\bar{x}\bar{y})/(\sum x_i^2-n\bar{x}^2)$.

Both $\hat{\beta}_0$ and $\hat{\beta}_1$ are then normally distributed, and when divided by their estimated standard deviations they have $t$ distributions under the null hypothesis that the true value is 0. Given this you can construct confidence intervals based on the $t$ distribution.
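As a concrete sketch of these formulas (my own illustration, not part of the answer), here is a small Haskell computation of the estimates and the slope's standard error; multiplying the standard error by a $t$ critical value from a table (not computed here) gives the half-width of the confidence interval:

```haskell
-- Least-squares fit y = b0 + b1 * x, plus the estimated standard error of b1.
-- Sketch only: look up the t critical value separately to build the interval.
mean :: [Double] -> Double
mean zs = sum zs / fromIntegral (length zs)

fitWithSE :: [(Double, Double)] -> (Double, Double, Double)
fitWithSE pts = (b0, b1, seB1)
  where
    (xs, ys) = unzip pts
    n    = fromIntegral (length pts)
    xm   = mean xs
    ym   = mean ys
    sxx  = sum [ (x - xm) ^ 2 | x <- xs ]          -- sum of (x_i - xbar)^2
    b1   = sum [ (x - xm) * y | (x, y) <- pts ] / sxx
    b0   = ym - b1 * xm
    rss  = sum [ (y - (b0 + b1 * x)) ^ 2 | (x, y) <- pts ]
    s2   = rss / (n - 2)                           -- unbiased estimate of sigma^2
    seB1 = sqrt (s2 / sxx)                         -- se(b1) = s / sqrt(sxx)
```

For points lying exactly on a line the residual sum of squares is zero, so the standard error comes out 0 and the interval collapses onto the fitted slope.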
-
How are you fitting the regression equation? Knowing that will help us to help you.
If you are doing the regression by hand, then use the confidence-interval formula from the same book that gave you the regression formula (or use the Wikipedia article in the comments above). Or better yet, get a real statistical software package to help you.
If you are using Excel or another spreadsheet then you should really switch to a real statistics program.
If you are using a statistics program then it may have an option or command that will compute the interval for you (but we don't know what program you are using, so we can't tell you what that command is). Even if the software does not compute the interval for you, it may give you the proper standard errors that you just need to multiply by the proper table value and add and subtract from the coefficient estimate (that formula should be in your textbook or on the wikipedia article).
-
http://unapologetic.wordpress.com/2009/08/11/invariant-subspaces-of-self-adjoint-transformations/?like=1&source=post_flair&_wpnonce=3fdfac1636
# The Unapologetic Mathematician
## Invariant Subspaces of Self-Adjoint Transformations
Okay, today I want to nail down a lemma about the invariant subspaces (and, in particular, eigenspaces) of self-adjoint transformations. Specifically, the fact that the orthogonal complement of an invariant subspace is also invariant.
So let’s say we’ve got a subspace $W\subseteq V$ and its orthogonal complement $W^\perp$. We also have a self-adjoint transformation $S:V\rightarrow V$ so that $S(w)\in W$ for all $w\in W$. What we want to show is that for every $v\in W^\perp$, we also have $S(v)\in W^\perp$.
Okay, so let’s try to calculate the inner product $\langle S(v),w\rangle$ for an arbitrary $w\in W$.
$\displaystyle\langle S(v),w\rangle=\langle v,S(w)\rangle=0$
since $S$ is self-adjoint, $S(w)$ is in $W$, and $v$ is in $W^\perp$. Then since this is zero no matter what $w\in W$ we pick, we see that $S(v)\in W^\perp$. Neat!
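To see the lemma in action on a concrete example (my own illustration): take the symmetric, hence self-adjoint, matrix $S=\left(\begin{smallmatrix}2&1\\1&2\end{smallmatrix}\right)$ and the invariant subspace $W$ spanned by the eigenvector $(1,1)$. The lemma predicts that $S$ sends $v=(1,-1)\in W^\perp$ back into $W^\perp$, i.e. that $\langle S(v),w\rangle=0$:

```haskell
-- Numeric check of the lemma for one self-adjoint map (illustration only).
inner :: [Double] -> [Double] -> Double
inner u v = sum (zipWith (*) u v)

apply :: [[Double]] -> [Double] -> [Double]
apply m v = map (`inner` v) m

s :: [[Double]]
s = [[2, 1], [1, 2]]   -- symmetric, so self-adjoint for the dot product

check :: Double
check = inner (apply s [1, -1]) [1, 1]   -- <S(v), w>, which should be 0
```

Indeed apply s [1,-1] is [1,-1] again (it is an eigenvector for eigenvalue 1), so check evaluates to 0.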
Posted by John Armstrong | Algebra, Linear Algebra
http://math.stackexchange.com/questions/78822/do-online-lecture-recordings-hurt-or-help-math-students-at-university?answertab=active
# Do online lecture recordings hurt or help math students at university? [closed]
Continuing with my series of soft questions on teaching practice:
My university uses a system whereby all lectures (given via computer slides or hand-writing on a sort of overhead projector called a visualiser) can be recorded and placed online with zero effort by the lecturer. We just have to tick a box to make it happen.
The question is: for a large first-year Calculus unit (500+ students) taken mostly by students who are intending to major in Engineering, is it academically better for the students to HAVE online access to recorded lectures or to NOT HAVE online access to recorded lectures?
Our department debates this endlessly: the people in the "PRO recording" camp mention that many students have part-time jobs or lecture clashes or live very far from campus and this allows them to participate, while the people in the "ANTI recording" camp say that it encourages absenteeism and "putting off" the work until just before the exam when it is far too late.
So I guess the question is:
• on average do recordings help more students than they hinder, or vice versa?
Again, all opinions welcomed, though those backed with documented research especially so.
-
Anything special about the fact that this is a Calculus course? Both the arguments you mention hold for all courses across disciplines... – Srivatsan Nov 4 '11 at 2:12
This should not be a matter of opinion and debate. It's not too hard to do a controlled experiment to shed light on this question and perhaps even settle it. – Hans Engler Nov 4 '11 at 2:16
Personal observation suggests that having recorded lectures (or even just class notes) encourages absenteeism to some extent, and more so in large classes (such as the $500+$ students probably taught in a large lecture hall) than in smaller ones where the instructor can get to know all the students by name and face in a very short time. – Dilip Sarwate Nov 4 '11 at 2:18
Dear Gordon, please try not to continue your series of soft questions much further... The topic of this site is math, and while your questions are certainly interesting, they do not quite belong here. – Mariano Suárez-Alvarez♦ Nov 4 '11 at 12:00
## closed as not constructive by Henning Makholm, Srivatsan, Asaf Karagila, Grigory M, Alex B.Nov 5 '11 at 16:36
As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or specific expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, see the FAQ for guidance.
## 6 Answers
I'm a university student as well. This is my last semester though. With that said, how you should treat your students really depends on the class. I find that videos are helpful because you can always go back and review the lecture in case you forgot something (I'm a bad note taker).
This is where the "how you should treat your students" part comes in. If it's a lower level class, this is the time where students are still adjusting to the college life. I would say you kind of need to force them to come to class because most just don't know any better. So, if you assign small in class assignments or quizzes (This is what one of my professor does in a 100 level class, this class has quite a few people too), then people are still forced to come and even participate.
When you get to the high level classes (300 are junior level and 400 are senior level at my college), then this should basically stop. This is the process in which they need to prepare themselves for the real world and by now they should learn what they need to do to study and get a good grade if they wish. I'm the type of person that can learn through just lecture videos, so I may never show up myself. I can see how this can be offensive, but the work that the professors put in, is not lost. I just happen to go through the lecture at a different time than most people. If it's the student's choice not to come the class, then that's their decision. But like I said, if it's a lower level class then you kind of have to push them to come.
My professors a lot of the times will drop quizzes or assignments in case you do miss a few days of class.
For this class in particular I would personally assign quizzes and assignments since I would consider it a freshman or sophomore class.
-
My feelings as a graduate student, having both done some teaching and having done tremendous amounts of class-taking in my day: Recorded lectures help.
Why?
1. As some people have mentioned, most of the arguments against recorded lectures boil down to "students are lazy and shiftless". I prefer to treat students like adults. That means respecting their own decision-making ability, and also letting them fail if they so choose. If they want to risk their grade by not showing up to class and relying on the online lectures? So be it.
2. As above, I'd much rather help someone who missed a lecture for some unforeseen, unavoidable reason than hurt someone blowing off my class. The former could use a hand, the latter will take care of itself in time.
3. It helps students in class. I can't tell you the number of times I wish I could rewind 30 seconds to hear an offhand remark by a professor that contains some bit of insight, or a parenthetical aside I'm really going to want to know later. Yes, if I'm taking flawless notes, I might capture it, but I much prefer setups that let students pay attention in class, and outsource transcriptionist duties to technology.
4. They're a tremendous benefit to remote students. Let's say there's someone at a different university who would like to audit your class. Recorded lectures essentially mean they can.
The one place I don't like them is for student presentations. I think students should have room to make bold steps, be wrong occasionally, and not have that uploaded onto iTunes U.
-
I second tards' answer: I would make the videos available, while also making students explicitly aware of the risks that they run in misusing the video resource.
The main negatives, as you mention, are that some students will use the videos as a replacement for lecture, and that some students will use the psychological comfort that the existence of a video archive gives them to postpone serious study until right before the exams. In this respect, videos are actually no different from printed textbooks. Both of these negatives are perfectly good arguments against using a textbook in a course, but presumably your colleagues all do that.
In my own experience I have found it very difficult to convey caveats to the students who most need to hear them. Some students hear warnings and it goes in one ear and out the other (or they remember it perfectly well, but assume that it applies only to people who aren't them). It is a bit like trying to phrase the language of your syllabus so that students won't misunderstand it: some students will always misunderstand it. Probably, the same habits of mind that are responsible for missing lectures and misunderstanding instructions, are responsible for failing to heed warnings. There is really no way around that. To deny the entire class access to a useful resource, simply because it may cause this subset of students to do even more poorly than they might otherwise have done, seems silly to me.
Having said that, I think it is a matter of opinion, and reasonable people can disagree. I would not want a department to require faculty to make video lectures available.
On the positive side: hardworking students may find that their performance increases with the videos, because the existence of a video record means they can spend less brain time in lecture focusing on copying down what you write and say, and more time actually thinking about it. Also, if you are at a stage in your life where you may need teaching recommendations for future job applications, the existence of a video record of your lectures would really help people who write you letters of recommendation. (Frankly, if you are really good at lecturing and your university allows it, I would consider adding links to these videos on your homepage when you go on the job market.)
-
I'm a university student, so I'll try to give the perspective from this side (we have this tool in a few of our classes).
First let me say I agree with tards that students should be treated as adults. The level of independence of university life suggests that students are quite capable of making important decisions for themselves, and how they treat academics is a significant part of it. Of course, you can take this how you like since it's a university student saying it.
The main point I want to make is something that I believe applies to many such situations; implementing a change like this will not fundamentally change the way people behave. Students that value education highly will continue to do so, and those that don't will continue as well. I know many students who always go to class, despite there being videos of the lectures; and I know others who often skip class despite the fact that there are no videos of lectures. The more important consideration is that for students who need extra review outside of class, this will help them. For the students that miss class - whether it's because they're sick or have a conflict or simply don't want to go - this will help them too.
In fact, I've witnessed an interesting phenomenon as a result of these video recorded lectures. Some students are not satisfied with the way their professor teaches the material; perhaps he skips over too many things, or doesn't explain clearly, or has too much of an accent. These students, in addition to attending class, watch the video lectures from another section of the same class - taught by a different professor! So they can have two lectures on the material from two different styles of teaching. This may not apply if there is only one professor for any course, but I thought it well worth mentioning as an example of the power of video lectures.
So you can see where my preference lies. Video lectures help students much more than they hurt.
-
I like the point you make that it will not fundamentally change the students' behaviour. I haven't "accepted" this as the answer quite yet as I expect more replies as Europe and the US wake up for their day, but you got an upvote. – Gordon Nov 4 '11 at 4:06
I was going to write an answer, but you've summed up everything I wanted to say :) – tom Nov 4 '11 at 4:33
I think a short to-the-point summary would help this: (1) videos may hurt students who don't care but (2) they help students who do care, and (3) students who do care shouldn't be punished for the apathy of other students. – Brendan Long Nov 4 '11 at 17:42
Just a little addition: many uni students do need to work for a living in parallel to working hard for their BSc or MSc. It's unfortunate, but for some this is the only way. They (we) are greatly helped by the existence of video lectures. So please make them if possible :) – fish Nov 4 '11 at 20:11
I love the point about seeing different lectures from different professors - different viewpoints can help you cement your knowledge and fill in any gaps you didn't know you even had! – DMan Nov 5 '11 at 1:24
I have had some success with the following format. I doubt that it's feasible to use everyday in class with 500+ students, but it would certainly be reasonable if you have an extra recitation day or something of the like.
1. Prior to a given class day, I post a reading or video on the topic to be discussed that day.
2. For homework, the student is given a few simple exercises, nearly identical to those covered in the video. This assignment is due at the beginning of the related class period and is never accepted late.
3. During the class period, I remark on the topic very briefly, just to jog their memories or add something I think the reading/video did not adequately address.
4. After I've said what I want, the students break into pre-defined groups of four and work on some related, but more involved, problems.
5. Students are encouraged to help each other within their groups, asking for my help only when a consensus cannot be reached. To give them some incentive to be helpful, students evaluate their group members once in a while. The results of a student's evaluation is part of his/her grade.
With this format, the student makes use of the video to attain some level of mastery (though not complete mastery) before class. They are expected to "more or less get it" before class, and hopefully they have some specific questions already forming.
Since the student has spent some time thinking about the topic, my brief lecture becomes far more effective, and students are prepared to ask more meaningful questions and provide answers in groups. The time during class is thus intended to build on the foundation provided by the video and to untangle any misconceptions that may still be lingering.
I've heard this referred to as the "flipped classroom model", and so there may be some pertinent research under that name. All I can offer personally is anecdotal evidence that this method generally helps those students that take it seriously and seems to combine the best parts of "watch a video on your own time" and "attend daily lecture".
-
@ austin - that's pretty much the way I work as well, or at least try to. It will require some serious modification if you have large lectures. Or perhaps it makes large lectures pointless, which is a good thing. – Hans Engler Nov 5 '11 at 13:27
Unless there is a university/college wide policy about attendance my preference is to treat students as adults and not require students to come to classes. If they can master the material on their own then that should be fine.
However, it should be communicated clearly that you are treating them as adults and that students should take responsibility if they suffer any negative consequences of not attending classes (falling behind on work, failing the class etc).
As long as expectations are set appropriately and requiring students to attend is not a must, it seems to me that recording lectures is the way to go.
-
2
This sounds great, but imagine (if you can...) a lot of the students are irresponsible or unmotivated. So the grades in the course are bad. The dean comes to you and asks what's going on. "I'm treating the students like adults, so the poor grades are only their fault," you say. That's not going to fly! Also, students might complain that the low grades cause many of them to lose scholarships -- but blaming them isn't feasible. As the course instructor, you are, at some level, responsible for how well the students do -- even if they choose to be lazy. – Dan Drake Nov 4 '11 at 7:40
...that said, I really like the idea of the "inverted classroom" and would love to try it. – Dan Drake Nov 4 '11 at 7:43
http://stochastix.wordpress.com/2011/09/05/circular-convolution-in-haskell/
# Rod Carvalho
## Circular convolution in Haskell
Given two finite sequences, $x$ and $y$, of length $n$, their $n$-point circular convolution can be written in matrix form [1] as follows
$\left[\begin{array}{c} z_0\\ z_1\\ z_2\\ \vdots\\ z_{n-2}\\ z_{n-1}\end{array}\right] = \left[\begin{array}{cccccc} y_0 & y_{n-1} & y_{n-2} & \ldots & y_2 & y_1\\ y_1 & y_0 & y_{n-1} & \ldots & y_3 & y_2\\ y_2 & y_1 & y_0 & \ldots & y_4 & y_3\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ y_{n-2} & y_{n-3} & y_{n-4} & \ldots & y_0 & y_{n-1}\\ y_{n-1} & y_{n-2} & y_{n-3} & \ldots & y_1 & y_0\end{array}\right] \left[\begin{array}{c} x_0\\ x_1\\ x_2\\ \vdots\\ x_{n-2}\\ x_{n-1}\end{array}\right]$
where $z$ is also a sequence of length $n$. Note that the $n \times n$ matrix above is a circulant matrix, as each row is a circularly right-shifted version of the previous one. Each $z_k$ is obtained by taking the inner product of the $k$-th row of the matrix with the $x$ vector.
In Digital Signal Processing (DSP), we often think of finite sequences as vectors, which allows us to compute convolutions using matrix operations. Taking a more Computer Science-ish approach, one can think of finite sequences as finite lists, and compute convolutions using list operations. The matrix above can be thought of as a list of lists (where each row is a list). We know how to multiply a matrix by a vector, but how do we “multiply” a list of lists by a list?
I started teaching myself Haskell this Summer, and I am (obviously) looking for interesting problems at which to throw some functional firepower. Solutions looking for a problem. But, first, let us “begin at the beginning”…
__________
Reverse
Take another look at the matrix above and note that its last row, when viewed as a list, is the reversal of list $y$, which means that we need a function that reverses lists. Fortunately, we do not need to implement such a function, as there is the function reverse in Prelude that uses foldl and flip:
```reverse :: [a] -> [a]
reverse = foldl (flip (:)) []```
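To see why this one-liner reverses a list, it may help to unroll the fold by hand (my own aside); `flip (:)` conses each element onto the front of the accumulator:

```haskell
-- reverse [1,2,3]
--   = foldl (flip (:)) [] [1,2,3]
--   = foldl (flip (:)) (1 : []) [2,3]     -- accumulator: [1]
--   = foldl (flip (:)) (2 : [1]) [3]      -- accumulator: [2,1]
--   = 3 : [2,1]
--   = [3,2,1]
```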
__________
Circular shifts
Yet once again, let us look at the matrix at the top of this post. Note that the 2nd row is a circularly right-shifted version of the 1st one, and that the 3rd row is a circularly right-shifted version of the 2nd one, et cetera. Also, note that the $(n-1)$-th row is a circularly left-shifted version of the $n$-th row, and so on. Thus, we need to implement functions that perform circular shifts.
The following function circularly right-shifts a list:
```circShiftR :: [a] -> [a]
circShiftR [] = []
circShiftR x = last x : init x```
This function merely takes the last element of the list and moves it to the beginning. We can circularly left-shift a list using the function:
```circShiftL :: [a] -> [a]
circShiftL [] = []
circShiftL xs = (tail xs) ++ [head xs]```
which moves the first element of a list to the end. Let us now test these functions on some simple lists using the GHCi interpreter:
```*Main> circShiftR [0..9]
[9,0,1,2,3,4,5,6,7,8]
*Main> circShiftR (circShiftR [0..9])
[8,9,0,1,2,3,4,5,6,7]
*Main> (circShiftR . circShiftR) [0..9]
[8,9,0,1,2,3,4,5,6,7]
*Main> circShiftL [0..9]
[1,2,3,4,5,6,7,8,9,0]
*Main> (circShiftL . circShiftL) [0..9]
[2,3,4,5,6,7,8,9,0,1]```
Note that in the 3rd and 5th inputs we composed the circular shifting functions to perform double right- and left-shifts.
__________
Iterate
Since each row of the matrix is a right-shifted version of the previous one or a left-shifted version of the next one, we can generate the matrix from its first or last row. Putting it in terms of lists, we can generate a list of lists that represents the matrix from a single list using the iterate function in Prelude:
```iterate :: (a -> a) -> a -> [a]
iterate f x = x : iterate f (f x)```
and the circular shifting functions we implemented before. To make things concrete, we can generate all four circular right-shifts of a list of length 4, as follows:
```*Main> take 4 (iterate circShiftR [0..3])
[[0,1,2,3],[3,0,1,2],[2,3,0,1],[1,2,3,0]]```
A word of caution is in order. Note that we take the first four lists, as
`iterate f x`
does generate an infinite list containing $x$, $f (x)$, $(f \circ f ) (x)$, $(f \circ f \circ f ) (x)$, et cetera. We do not need an infinite list of lists.
We now know how to generate a list of lists representing the matrix.
__________
Inner product
To obtain the circular convolution of $x$ and $y$, we multiply a matrix by a (column) vector, which consists of computing $n$ inner products. However, in this post we are thinking in terms of lists. We thus need to implement a function that computes the inner product of two (finite) lists, as follows:
```innerProd :: Num a => [a] -> [a] -> a
innerProd xs ys = sum(zipWith (*) xs ys)```
where we first use zipWith to obtain a list of the $x_i y_i$ products, and then use sum to compute the sum $\sum_i x_i y_i$. Let us test the inner product function:
```*Main> let xs = [1..4]
*Main> let ys = [0..3]
*Main> zipWith (*) xs ys
[0,2,6,12]
*Main> sum(zipWith (*) xs ys)
20```
__________
Circular convolution
We finally have all we need to compute the circular convolution.
For example, suppose we want to compute the circular convolution of two 4-point sequences, say, $x = (1,1,1,1)$ and $y = (0,1,2,3)$. Using matrix-vector multiplication, we obtain:
$\left[\begin{array}{cccc} 0 & 3 & 2 & 1\\ 1 & 0 & 3 & 2\\ 2 & 1 & 0 & 3\\ 3 & 2 & 1 & 0\end{array}\right] \left[\begin{array}{c} 1\\ 1\\ 1\\ 1\end{array}\right] = \left[\begin{array}{c} 6\\ 6\\ 6\\ 6\end{array}\right]$
Alternatively, we can compute the circular convolution by representing sequences by lists and using the functions we have implemented before:
```*Main> -- define lists
*Main> let xs = [1,1,1,1]
*Main> let ys = [0,1,2,3]
*Main> -- reverse and right-shift ys
*Main> let ys' = (circShiftR . reverse) ys
*Main> ys'
[0,3,2,1]
*Main> -- compute list of lists representing the matrix
*Main> let yss = take 4 (iterate circShiftR ys')
*Main> yss
[[0,3,2,1],[1,0,3,2],[2,1,0,3],[3,2,1,0]]
*Main> -- compute inner product of xs with each list in yss
*Main> map (innerProd xs) yss
[6,6,6,6]```
Note that I wrote comments on the GHCi command line in order to improve readability. We are now ready to implement the function that computes the circular convolution:
```circConv :: Num a => [a] -> [a] -> [a]
circConv xs ys = map (innerProd xs) yss
where
n = length xs
ys' = (circShiftR . reverse) ys
yss = take n (iterate circShiftR ys')```
How does it work? We first take sequence $y$, reverse and right-shift it (to obtain the 1st row of the matrix), generate the remaining rows by right-shifting the previous ones, and finally compute the inner product of each row with sequence $x$ using map. Note that
`innerProd xs`
is a partial application of the inner product function, as we provide only the first argument. We could define a function $g$ as follows:
```*Main> let g = innerProd [1,1,1,1]
*Main> :type g
g :: [Integer] -> Integer
*Main> g [0..3]
6```
Note that $g$ takes a list of integers and returns its inner product with the list of four ones. Putting it all together in a .hs file, we have:
```-- circular right-shift
circShiftR :: [a] -> [a]
circShiftR [] = []
circShiftR x = last x : init x
-- inner product of two lists
innerProd :: Num a => [a] -> [a] -> a
innerProd xs ys = sum(zipWith (*) xs ys)
-- circular convolution of two lists
circConv :: Num a => [a] -> [a] -> [a]
circConv xs ys = map (innerProd xs) yss
where
n = length xs
ys' = (circShiftR . reverse) ys
yss = take n (iterate circShiftR ys')```
We use several higher-order functions in this implementation: map, iterate, zipWith. We also use function composition and partial function application. A lot of beautiful ideas in only a dozen lines of code!
I am a Haskell neophyte, and I make no claims that the code above cannot be improved. If you have any constructive suggestions, I would be happy to hear them. Please use the HTML tags <pre> and </pre> to post code in the comments.
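One classical cross-check (not in the original post) is the circular convolution theorem: circular convolution in the time domain corresponds to pointwise multiplication of DFTs. Here is a self-contained sketch, with a naive $O(n^2)$ DFT, that recomputes the $(1,1,1,1) \circledast (0,1,2,3)$ example from above; the function names `dft`, `idft` and `circConvDFT` are my own:

```haskell
import Data.Complex

-- naive DFT and inverse DFT (O(n^2)) -- plenty for a small cross-check
dft :: [Complex Double] -> [Complex Double]
dft xs = [ sum [ x * cis (-2 * pi * fromIntegral (j * k) / n) | (x, j) <- zip xs [0 ..] ]
         | k <- [0 .. length xs - 1] ]
  where n = fromIntegral (length xs)

idft :: [Complex Double] -> [Complex Double]
idft xs = [ sum [ x * cis (2 * pi * fromIntegral (j * k) / n) | (x, j) <- zip xs [0 ..] ] / (n :+ 0)
          | k <- [0 .. length xs - 1] ]
  where n = fromIntegral (length xs)

-- circular convolution via the convolution theorem:
-- IDFT of the pointwise product of the two DFTs
circConvDFT :: [Double] -> [Double] -> [Double]
circConvDFT xs ys =
  map realPart (idft (zipWith (*) (dft (map (:+ 0) xs)) (dft (map (:+ 0) ys))))

main :: IO ()
main = print (map round (circConvDFT [1, 1, 1, 1] [0, 1, 2, 3]))
-- rounds to [6,6,6,6], matching the matrix-vector computation above
```

This route is how one would actually compute large circular convolutions in practice (with an FFT instead of the naive DFT), but for checking a handful of values the quadratic version is fine.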
__________
Testing
Last but not least, let us carry out a couple of tests using examples from Mitra’s book [1]:
```*Main> -- problem 5.2 b)
*Main> let xs = [-1,5,3,0,3]
*Main> let hs = [-2,0,5,3,-2]
*Main> circConv xs hs
[1,-1,-2,16,26]
*Main> -- problem 5.45 b)
*Main> let gs = [2,-1,3,0]
*Main> let hs = [-2,4,2,-1]
*Main> circConv gs hs
[3,7,-6,8]```
It appears to be working! Please let me know if you find any bugs.
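One more cheap sanity check, beyond comparing against the textbook: circular convolution is commutative, so `circConv` applied to its arguments in either order must produce the same sequence. A self-contained sketch (repeating the definitions above so it compiles on its own):

```haskell
-- definitions repeated from the post so that this file is self-contained
circShiftR :: [a] -> [a]
circShiftR [] = []
circShiftR x  = last x : init x

innerProd :: Num a => [a] -> [a] -> a
innerProd xs ys = sum (zipWith (*) xs ys)

circConv :: Num a => [a] -> [a] -> [a]
circConv xs ys = map (innerProd xs) yss
  where
    n   = length xs
    ys' = (circShiftR . reverse) ys
    yss = take n (iterate circShiftR ys')

main :: IO ()
main = do
  let xs = [-1, 5, 3, 0, 3] :: [Int]
      hs = [-2, 0, 5, 3, -2]
  -- commutativity: swapping the arguments reproduces the same sequence
  print (circConv xs hs == circConv hs xs)
  print (circConv xs hs)   -- [1,-1,-2,16,26], as in problem 5.2 b)
```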
__________
References
[1] Sanjit K. Mitra, Digital Signal Processing: a computer-based approach, 3rd edition, McGraw-Hill, 2005.
Tags: Circular Convolution, Digital Signal Processing, Discrete-Time Signal Processing, Functional Programming, Haskell, Higher-Order Functions, List Processing
This entry was posted on September 5, 2011 at 22:51 and is filed under Haskell, Signal Processing.
### 2 Responses to “Circular convolution in Haskell”
1. logdiff Says:
September 7, 2011 at 07:49
Really nice solution explaining each component part and bringing it together to produce the correct result.
2. eng.lamitta Says:
January 19, 2013 at 11:50
really THANK YOU so much :D
I have an exam tomorrow and you’ve saved me :)
http://physics.stackexchange.com/questions/tagged/antimatter+dirac-equation
# Tagged Questions
### What was missing in Dirac's argument to come up with the modern interpretation of the positron?
When Dirac found his equation for the electron $(-i\gamma^\mu\partial_\mu+m)\psi=0$ he famously discovered that it had negative energy solutions. In order to solve the problem of the stability of the ...
http://www.physicsforums.com/showthread.php?p=1216315
Physics Forums
Thread Closed
## CIRCUIT ANALYSIS: 7 resistors, 2 Indep. Volt Source, V.C.C.S, V.C.V.S. - find I
1. The problem statement, all variables and given/known data
For the circuit below find $I_1$ and $I_2$:
2. Relevant equations
KVL
KCL
Ohm's Law
3. The attempt at a solution
I tried the problem many times, but I always get crazy answers. It seems that every time I need one new equation to have a system of solvable equations, I have to add a new variable and hence I need another equation. It's a vicious cycle: by the time I get up to 13 variables for all of the V's at the resistors and the different I's at nodes 1-4, I get a crazy answer like -1.35 mA for $I_1$. Does that seem right?
Any suggestion on what to do about the one-more variable, one-more equation problem? I tried a super-node between nodes 2 and 3. Didn't help though.
Start by finding the voltages. You can write down the voltage at nodes 4, 3, 2 (two different ways) and 1 (in that order) without knowing any currents. The two ways of getting the voltage at node 2 give you a relation between Va and Vb, so eliminate one of them. Then start finding the currents in terms of the voltages. You don't need to add any more nodes.
Ok, so I get these for the node voltages (4, 3, 2, 2, 1):

$$V_4\,-\,0$$ $$V_3\,+\,5V$$ $$V_2\,-\,0$$ $$V_2\,-\,2\,-\,V_4\,=\,0$$ $$V_2\,=\,V_4\,+\,2$$ $$V_1\,-\,0$$

Are these right? If not, how am I supposed to make these voltage equations? Also, I am stuck again, I don't know where to go from here (even if the voltage EQs are correct)! Should I use KVL or KCL?

I tried to add 2 current variables at Node 2. I used KCL there and I made some EQs for the currents there (that I had an R for) using $i\,=\,\frac{V}{R}$. I am seriously stuck now though!!! Can someone walk me through the most logical way to proceed from here, I am really confused. Thanks
I don't understand exactly what those expressions are.
The way I would do this is by looking at the circuit and thinking about what you know, not trying to apply the K. laws in a mechanical way.
It's "obvious" from the circuit diagram that
V4 = VA
V3 = V4 = VA (the current source has no internal resistance so no voltage across it)
V2 = V4 - 2 = VA - 2 (from the 2V voltage source)
and also V2 = VB so VB = VA - 2
V1 = V2 - 2VA = -2 - VA
Now try and find a node which doesn't have the unknown currents I1 and I2 flowing into it: there is one, node 3. Find all the currents flowing into node 3 by Ohms law. By KCL they add up to zero. That will give you an equation for VA.
Now you know all the voltages, you can use Ohms law and KCL to find the other currents.
Okay Vinny, we'll use nodal analysis or KCL for the problem if that is fine with you. And for the moment, let's avoid supernodes but stick with the conventional nodes.
To get you started, do this for me: Note down the 4 equations corresponding to the 4 nodes using KCL, i.e., the sum of currents entering/leaving the node equals zero. And do that with only the variables V1, V2, V3, V4, I1, I2 and no other variables.
Quote by AlephZero ... V3 = V4 = VA (the current source has no internal resistance so no voltage across it)
Not true. The current source could present a voltage drop/gain without any internal resistance.
OK, I have added 3 currents to the diagram (in green).

$$V_4\,=\,V_a\,=\,V_3$$ $$V_2\,=\,V_4\,-\,2\,=\,V_b$$ $$V_b\,=\,V_a\,-\,2$$ $$V_1\,=\,V_2\,-\,2\,V_a$$

Just what you said above. Now, I use $i\,=\,\frac{V}{R}$ to get the new currents in green. KCL:

$$I_3\,+\,I_4\,+\,I_5\,+\,\frac{V_b}{4K\Omega}\,=\,0$$ $$I_3\,=\,\frac{(5\,V)}{4000\Omega}\,=\,0.00125\,A\,=\,1.25\,mA$$ $$I_4\,=\,\frac{V_2\,-\,V_3}{2000\Omega}$$ $$I_5\,=\,\frac{V_1\,-\,V_3}{4000\Omega}$$

Now if I combine those four equations above:

$$\frac{V_2\,-\,V_3}{2000\Omega}\,+\,\frac{V_1\,-\,V_3}{4000\Omega}\,+\,\frac{V_4\,-\,2}{4000\Omega}\,=\,-1.25\,A$$

How do you proceed?
Ok, lets try NODE 1 first: $$\left(\frac{-V_1}{1000\Omega}\right)\,+\,\left[\left(\frac{V_3\,-\,V_1}{4000\Omega}\right)\,+\,\left(\frac{V_4\,-\,V_1}{3000\Omega}\right)\right]\,=\,I_1$$ Is that correct?
Quote by VinnyCee $$V_4\,=\,V_a\,=\,V_3$$
As I have said earlier, to claim that V3 = V4 is incorrect. Read my earlier post.
Quote by VinnyCee Just what you said above. Now, I use $i\,=\,\frac{V}{R}$ to get the new currents in green. KCL: $$I_3\,+\,I_4\,+\,I_5\,+\,\frac{V_b}{4K\Omega}\,=\,0$$
Well yes, that is right. But you do know that the above expression gives only 1 nodal equation, that is, the nodal equation for node 3. You would have to do the same thing for the other 3 nodes.
Quote by VinnyCee $$I_3\,=\,\frac{(5\,V)}{4000\Omega}\,=\,0.00125\,A\,=\,1.25\,mA$$
This is not right- you're forgetting V3.
Quote by VinnyCee $$I_4\,=\,\frac{V_2\,-\,V_3}{2000\Omega}$$ $$I_5\,=\,\frac{V_1\,-\,V_3}{4000\Omega}$$
These are correct.
And yes, combine these terms into an equation. Note that Vb = V2. Also, at the moment, let's forget about V2 = V4-2. Now write down again the nodal equation for node 3.
Quote by VinnyCee Ok, lets try NODE 1 first: $$\left(\frac{-V_1}{1000\Omega}\right)\,+\,\left[\left(\frac{V_3\,-\,V_1}{4000\Omega}\right)\,+\,\left(\frac{V_4\,-\,V_1}{3000\Omega}\right)\right]\,=\,I_1$$ Is that correct?
That's right. :)
Cool! NODE 2 now: $$I_1\,+\,\left(\frac{-V_2}{2000\Omega}\right)\,+\,\left(\frac{V_2\,-\,V_3}{2000\Omega}\right)\,=\,I_2$$ Is that right? Is $V_b$ still equal to $V_2$?
Quote by VinnyCee Cool! NODE 2 now: $$I_1\,+\,\left(\frac{-V_2}{2000\Omega}\right)\,+\,\left(\frac{V_2\,-\,V_3}{2000\Omega}\right)\,=\,I_2$$ Is that right? Is $V_b$ still equal to $V_2$?
A small mistake here, check the equation again. Yes, Vb = V2. Two more nodal equations to go (nodes 3 and 4). Keep it up!
The mistake fixed? $$I_1\,+\,\left(\frac{-V_2}{2000\Omega}\right)\,+\,\left(\frac{V_3\,-\,V_2}{2000\Omega}\right)\,=\,I_2$$
That's right, now move on to the other 2 equations.
OK, for NODE 3: $$\left(\frac{V_1\,-\,V_3}{4000\Omega}\right)\,+\,\left(\frac{V_2\,-\,V_3}{2000\Omega}\right)\,+\,\left(\frac{5\,-\,V_3}{4000\Omega}\right)\,=\,\left(\frac{-V_2}{4000\Omega}\right)$$ And for NODE 4: $$\left(\frac{-V_4}{4000\Omega}\right)\,+\,\left(\frac{V_1\,-\,V_4}{3000\Omega}\,+\,I_2\right)\,=\,\left(\frac{V_2}{4000\Omega}\right)$$ Are those right? Or did I mess up the path with the 5V independent voltage source and 4Kohm resistor?
Very good. Now, I would like to have these equations simplified a bit. As an example, for node 4, you wrote: $$\left(\frac{-V_4}{4000\Omega}\right)\,+\,\left(\frac{V_1\,-\,V_4}{3000\Omega}\right)\,+\,I_2\,=\,\left(\frac{V_2}{4000\Omega}\right)$$ I want it simplified to become: $$\frac{1}{3}V_1 - \frac{1}{4}V_2 - \frac{7}{12}V_4 + I_2 = 0$$ Specifically, I have ignored the '000 in the R's (the V's are still as before but the I's are now in milliamperes) and arranged the equations such that on the left side are the unknowns, ordered V1, V2, V3, V4, I1, I2 and on the right side, the constants. There's a reason for doing all these of course. :) Do the same for the other 3 equations and we will proceed from there.
NODE3: $$V_1\,+\,3\,V_2\,-\,4\,V_3\,=\,-5$$ NODE2: $$2000\,I_1\,-\,2000\,I_2\,-\,2\,V_2\,+\,V_3\,=\,0$$ NODE1: $$-4000\,I_1\,-\,\frac{19}{3}\,V_1\,+\,V_3\,+\,\frac{4}{3}\,V_4\,=\,0$$ Right?
Okay, that's close enough. Let me edit a bit... Node 1: $$-\frac{19}{12}V_1 + \frac{1}{4}V_3 + \frac{1}{3}V_4 - I_1 = 0$$ Node 2: $$-V_2 + \frac{1}{2}V_3 + I_1 - I_2 = 0$$ Node 3: $$V_1 + 3V_2 - 4V_3 = -5$$ Node 4: $$\frac{1}{3}V_1 - \frac{1}{4}V_2 - \frac{7}{12}V_4 + I_2 = 0$$ I prefer to have the equations in the manner above, with the I's in milliamperes. Now, note that there's a voltage source across nodes 1 and 2, similarly a voltage source between nodes 2 and 4. Due to that, we ought to form a supernode, that is, to combine nodes 1, 2 and 4 into a single supernode. The result is the elimination of the unknown variables I1 and I2. To do that, try combining equations 1, 2 and 4 above into a single equation such that the unknowns I1 and I2 disappear.
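The point of the supernode step can be checked mechanically: summing the coefficient rows of nodal equations 1, 2 and 4 must cancel the $I_1$ and $I_2$ columns. Here is a small Haskell sketch of that arithmetic (the names are mine; coefficients are transcribed from the post above, rows ordered [V1, V2, V3, V4, I1, I2], I's in mA):

```haskell
import Data.Ratio ((%))

-- Coefficient rows for nodal equations 1, 2 and 4, as listed above
eq1, eq2, eq4 :: [Rational]
eq1 = [-19 % 12,      0, 1 % 4,   1 % 3, -1,  0]
eq2 = [       0,     -1, 1 % 2,       0,  1, -1]
eq4 = [  1 % 3, -1 % 4,      0, -7 % 12,  0,  1]

-- forming the supernode amounts to summing the three equations
supernode :: [Rational]
supernode = zipWith3 (\a b c -> a + b + c) eq1 eq2 eq4

main :: IO ()
main = do
  print supernode
  -- the last two entries (the I1 and I2 columns) are zero,
  -- so the unknown currents drop out of the combined equation
  print (drop 4 supernode == [0, 0])
```

The surviving coefficients are $-\frac{5}{4}$, $-\frac{5}{4}$, $\frac{3}{4}$, $-\frac{1}{4}$ for $V_1, V_2, V_3, V_4$, i.e. a single supernode equation in the voltages alone.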
http://cms.math.ca/Reunions/hiver12/abs/cp
2012 CMS Winter Meeting
Fairmont Le Reine Elizabeth (Montréal), December 7 - 10, 2012
Contributed Papers
Org: Odile Marcotte (CRM)
FABRICE COLIN, Université Laurentienne / Laurentian University
Generalized Fountain Theorem and Application to the Semilinear Schrödinger Equation [PDF]
Fountain theorems and their variants have proven to be effective tools in studying the existence of infinitely many solutions of partial differential equations. By using the degree theory and the $\tau$-topology of Kryszewski and Szulkin, we establish a version of the Fountain Theorem for strongly indefinite functionals. This abstract result will be applied for studying the existence of infinitely many solutions of two strongly indefinite semilinear problems including the semilinear Schrödinger equation.
IBRAHIMA DIONE, Université Laval
Penalty/finite element approximations of slip boundary conditions and Babuska's paradox [PDF]
The penalty method is a classical and widespread method for the numerical treatment of constrained problems such as unilateral contact problems and problems with Dirichlet boundary conditions. It provides an alternative approach to constrained optimization problems which avoids the necessity of introducing additional unknowns in the form of Lagrange multipliers. In the case of slip boundary conditions for fluid flows or elastic deformations, one of the main obstacles to their efficiency and to their mathematical analysis is that a Babuska-type paradox occurs.
Observed first by Sapondzyan [2] and Babuska [1] on the plate equation in a disk with simple support boundary conditions, Babuska's paradox can be stated as follows: on a sequence of polygonal domains converging to the domain with a smooth boundary, the solutions of the corresponding problems do not converge to the solution of the problem on the limit domain.
Our presentation will focus on the finite element approximation of Stokes equations with slip boundary conditions imposed with the penalty method in two and three space dimensions. For a polygonal or polyhedral boundary, we prove convergence estimates in terms of both the penalty and discretization parameters. In the case of a smooth curved boundary, we show through a numerical example that convergence may not hold due to a Babuska-type paradox. Finally, we propose and test numerically several remedies.
[1] Babuska I. and Pitkaranta J., SIAM J. Math. Anal., 21 (1990), 551-576.
[2] Sapondzyan O.M. Akad. Nauk Armyan. SSR. Izv. Fiz.-Mat. Estest. Tehn. Nauki, 5:29–46, 1952.
SAFOUHI HASSAN, University of Alberta
New Formulae for Differentiation and Techniques in Numerical Integration [PDF]
We present new formulae, called the Slevinsky-Safouhi's formulae (SSF) I and II [1] for the analytical development of derivatives. The SSF, which are analytic and exact, represent the derivative as a discrete finite sum involving coefficients that can be computed recursively and they are not subject to any computational instability.
There are numerous applications in science and engineering for special functions and higher order derivatives. As an example, the nonlinear G transformation has proven to be a very powerful tool in numerical integration [3]. However, this transformation requires higher order derivatives of the integrands for the calculation, which can be a severe computational impediment.
As examples of applications of the SSF, we present higher order derivatives of Bessel functions which are prevalent in oscillatory integrals and provide tables illustrating our results. We also present an efficient recursive algorithm for the implementation of the G transformation. The incomplete Bessel function is presented as an example of application. Lastly, we present a generalized and formalized integration by parts to create equivalent representations to some challenging integrals. As an example of application, we present the Twisted tail.
[1] R. M. Slevinsky and H. Safouhi. New formulae for higher order derivatives and applications. J. Comput. App. Math., 233:405–419, 2009.
[2] H. L. Gray and S. Wang. A new method for approximating improper integrals. SIAM J. Numer. Anal., 29:271–283, 1992.
[3] R. M. Slevinsky and H. Safouhi. The S and G transformations for computing three-center nuclear attraction integrals. Int. J. Quantum Chem., 109:1741–1747, 2009.
PATRICK LACASSE, Université Laval
Solving an elasticity problem with frictionless contact by an active-set strategy and an iterative algorithm [PDF]
We consider the problem of computing the deformation of an elastic body coming into contact with a rigid body. This amounts to optimizing a nonlinear functional under an inequality constraint that is itself nonlinear. First, these constraints are transferred to a space of Lagrange multipliers. An active-set strategy then makes it possible to transform the problem into a sequence of problems with equality constraints. Following the classical Newton method, this system is linearized, leading to a block matrix system. The latter is finally solved by an iterative approach that takes advantage of the factorization of this particular system.
SOPHIE LÉGER, Université Laval
An updated Lagrangian method for very large deformation problems [PDF]
The use of the finite element method is quite widespread for the analysis of large deformation problems, notably for the calculation of tire deformation. In this case and in many others, a good numerical method is essential. Industrial partners expect accurate, efficient and robust methods, and all of this preferably at a low computational cost.
When using a Lagrangian point of view in the finite element method for the resolution of large deformation problems, the mesh elements can become severely distorted over time. This can lead to numerical instabilities and slow convergence. To avoid this problem, frequent remeshing of the domain during the computation becomes necessary in order to optimize the quality of the mesh and thus improve convergence. In an updated Lagrangian framework, the deformation gradient tensor, which is key for the calculation, has to be transfered from the old mesh to the new mesh after each remeshing step. In this presentation, we will compare different transfer techniques and show which one seems to be more efficient and give the best results.
Numerical continuation methods have proved to be very powerful tools when dealing with very nonlinear problems. When combining both a good remeshing algorithm and a good transfer method for the deformation gradient tensor with the Moore-Penrose continuation method, we will show that very large levels of deformation can be attained and that the combination of all these tools leads to a very stable and efficient updated Lagrangian algorithm.
TRUEMAN MACHENRY, York University
Permanents, Determinants, Integer Sequences and Isobaric Polynomials [PDF]
ABSTRACT. In this paper we construct two types of Hessenberg matrices with the properties that every weighted isobaric polynomial (WIP) appears as a determinant of one of them, and as the permanent of the other. Every integer sequence which is linearly recurrent is representable by (an evaluation of) some linearly recurrent sequence of WIPs. WIPs are symmetric polynomials written on the elementary symmetric polynomial basis. Among them are the generalized Fibonacci polynomials and the generalized Lucas polynomials, which already have these sweeping representing properties. Among the integer sequences discussed are the Chebychev polynomials of the 2nd kind, the Stirling numbers of the 1st and 2nd kind, the Catalan numbers, and the triangular numbers, as well as all sequences which are either multiplicative arithmetic functions or additive arithmetic functions.
ODILE MARCOTTE, CRM- UQAM
On the maximum orders of an induced forest, an induced tree, and a stable set [PDF]
Let G be a connected graph, n the order of G, and f (resp. t) the maximum order of an induced forest (resp. tree) in G. We give upper bounds for f-t (depending upon the value of n) and show that these bounds are tight. We give similar results for the difference between the stability number of G and the maximum order of an induced tree in G.
STEPHANIE PORTET, University of Manitoba
Dynamics of length distributions of in vitro intermediate filaments [PDF]
Intermediate filaments are one of the cytoskeleton components. The cytoskeleton is an intracellular structure made of proteins polymerized in filaments that are organized into networks in the cytoplasm. Here a general method is given to study the dynamics of length distributions of filaments described as linear macromolecules. An aggregation model with explicit expression of association rate constants depending on the properties of interacting objects is considered. A set of hypotheses on the geometry and properties of interacting macromolecules is considered, leading to a collection of models. Fitting of model responses to experimental data yields the best-fit for each model in the collection. By using model selection, the more appropriate model to represent the assembly at a given time point is identified. Hence, conclusions on the object properties can be drawn.
ANTHONY SHAHEEN, California State University, Los Angeles
A Brief Introduction to Expanders and Ramanujan Graphs [PDF]
Think of a graph as a communications network. Putting in edges (e.g., fiber optic cables, telephone lines) is expensive, so we wish to limit the number of edges in the graph. At the same time, we would like the communications network to be as fast and reliable as possible. We will see that the quality of the network is closely related to the eigenvalues of the graph's adjacency matrix. Essentially, the smaller the eigenvalues are, the better the communications network is. It turns out that there is a bound, due to Alon, Serre, and others, on how small the eigenvalues can be. This gives us a rough sense of what it means for graphs to represent "optimal" communications networks; we call these Ramanujan graphs. Families of k-regular Ramanujan graphs have been constructed in this manner by Lubotzky, Sarnak, and others whenever k-1 equals a power of a prime number. No one knows whether families of k-regular Ramanujan graphs exist for all k.
CHUNHUA SHAN, York University
Finite cyclicity of hh-graphics with a triple nilpotent singularity of codimension 4 [PDF]
In 1994, Dumortier, Roussarie and Rousseau launched a program aiming at proving the finiteness part of Hilbert's 16th problem for the quadratic system. For the program, 121 graphics need to be proved to have finite cyclicity. In this presentation, I will report our effort to show that some hh-graphics through a triple nilpotent singularity of codimension 4 have finite cyclicity. This is an in progress joint work with professor Christiane Rousseau and professor Huaiping Zhu.
IICKHO SONG, Korea Advanced Institute of Science and Technology
An Extension of the Vandermonde Convolution Formula [PDF]
As an extension of the Vandermonde Convolution $\sum\limits_{m=0}^{\gamma} {{\alpha}\choose{\gamma-m}} {{\beta}\choose{m}} = {{\alpha+\beta}\choose{\gamma}}$, an explicit expression for the sum $\sum\limits_{m=0}^{\gamma} m (m-1) \cdots (m-\zeta+1) {{\alpha}\choose{\gamma-m}} {{\beta}\choose{m}}$ is obtained, where ${{n}\choose{r}} = \frac{n!}{(n-r)!r!}$ denotes the binomial coefficient. Some examples for the application of the result are considered.
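The classical Vandermonde convolution quoted at the start of this abstract is easy to spot-check numerically. A small Haskell sketch (function names are my own), valid for nonnegative integer arguments:

```haskell
-- binomial coefficient for nonnegative integer arguments;
-- evaluates to 0 when k > n, as the convolution requires
choose :: Integer -> Integer -> Integer
choose n k = product [n - k + 1 .. n] `div` product [1 .. k]

-- left-hand side of the Vandermonde convolution
lhs :: Integer -> Integer -> Integer -> Integer
lhs a b g = sum [choose a (g - m) * choose b m | m <- [0 .. g]]

main :: IO ()
main = print (and [ lhs a b g == choose (a + b) g
                  | a <- [0 .. 6], b <- [0 .. 6], g <- [0 .. 8] ])
-- True: the identity holds on every triple checked
```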
http://mathoverflow.net/questions/106645/holomorphic-extension-of-functions-closed
## holomorphic extension of functions [closed]
Hello,
I have a problem that I don't understand. Let $U \subset \mathbb{C}$ be an open neighbourhood of zero. Let furthermore $f,g : U \subset \mathbb{C} = \mathbb{R}^{2} \rightarrow \mathbb{R}$ be two analytic functions defined by $f(x,y) = x + y$ and $g(x,y) = x$. Since these functions are analytic, we can extend them holomorphically to some open neighbourhood $W$ of zero in $\mathbb{C}^{2}$ just by $f(z_{1}, z_{2}) = z_{1} + z_{2}$ and $g(z_{1}, z_{2}) = z_{1}$. It's obvious that these functions are holomorphic on $W$ since they are polynomials. Furthermore $f$ and $g$ agree on $W \cap U \cap \mathbb{R}$. By the identity theorem they have to be the same, since they are holomorphic. But obviously they are NOT! I am wondering where the mistake is? Does anybody know? I would be very thankful for some answers.
Greetings bruno
Try stating the Identity Principle for holomorphic functions in two variables... – Yemon Choi Sep 8 at 7:09
Several complex variables is a very different world from single variable. Check out Krantz's or Hormander's book. – Steven Gubkin Sep 24 at 13:12
http://mathhelpforum.com/statistics/187737-odds-1-6-straight.html
# Thread:
1. ## Odds of 1 to 6 straight
What are the odds of rolling 1,2,3,4,5,6 (any order) with one throw of six standard dice?
2. ## Re: Odds of 1 to 6 straight
are you tossing the die six times?
I didn't understand the one throw comment.
$6!/6^6$
3. ## Re: Odds of 1 to 6 straight
6 dice are thrown once?
1/6*1/6*1/6*1/6*1/6*1/6
as each outcome is independent and you have a 1/6 chance of each die obtaining the number you want
4. ## Re: Odds of 1 to 6 straight
BUT then any order, hence the 6!
Or use conditional probabilities....
(1)(5/6)(4/6)(3/6)(2/6)(1/6)
where you can get any number, then any number except the first...
5. ## Re: Odds of 1 to 6 straight
Matheagle and RHandford are using the fact that the probability of getting a 1 on a die is 1/6, the probability of a 2 is also 1/6, etc. so the probability of getting 1, 2, 3, 4, 5, 6, in that order is $\frac{1}{6}\frac{1}{6}\frac{1}{6}\frac{1}{6}\frac{1}{6}\frac{1}{6}= \frac{1}{6^6}$. And then because you said "in any order", matheagle multiplied by 6!, the number of different orders of 6 things.
Here is another way to get the same answer: the probability of throwing any value from 1 to 6 on the first throw is, of course, 6/6= 1. Once we have that, the probability of getting any number except that first number is 5/6 since now any of the 5 numbers left will work. The probability of throwing, on the third die, any number except those two is 4/6, etc.
Since nothing is said there about the specific numbers, the probability of throwing 1, 2, 3, 4, 5, 6 in any order is $\frac{6}{6}\frac{5}{6}\frac{4}{6}\frac{3}{6}\frac{2}{6}\frac{1}{6}= \frac{6!}{6^6}$.
6. ## Re: Odds of 1 to 6 straight
To err is human- to really screw up requires a computer!
To save typing all of those fractions, I typed $\frac{1}{6}$ and then "copied and pasted" the rest. Of course, I had accidentally put an extra "{" in the first fraction and then copied that error into all of them!
And I should have read Matheagles second response before posting!
7. ## Re: Odds of 1 to 6 straight
Sorry. I'm having trouble with the math notation. Earlier, Plato said the odds of throwing six of any kind on a single throw of six dice is 6 to the 6th power. Does the answer of HallsofIvy mean that throwing a six-die straight (1,2,3,4,5,6) any order - is six times more unlikely than six of a kind? Thanks.
8. ## Re: Odds of 1 to 6 straight
Originally Posted by wshore
Sorry. I'm having trouble with the math notation. Earlier, Plato said the odds of throwing six of any kind on a single throw of six dice is 6 to the 6th power. Does the answer of HallsofIvy mean that throwing a six-die straight (1,2,3,4,5,6) any order - is six times more unlikely than six of a kind? Thanks.
Actually the probability of tossing 123456 in any order is $6!=720$ times the probability of tossing 123456 in that order.
The first is $\frac{6!}{6^6}$ the second is $\frac{1}{6^6}$.
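For anyone who prefers a brute-force confirmation of the $\frac{6!}{6^6}$ figure, the outcome space is small enough to enumerate completely. A short Haskell sketch:

```haskell
import Control.Monad (replicateM)
import Data.List (sort)
import Data.Ratio ((%))

main :: IO ()
main = do
  let outcomes  = replicateM 6 [1 .. 6 :: Int]        -- all 6^6 = 46656 rolls
      favorable = filter ((== [1 .. 6]) . sort) outcomes
  print (length favorable)                             -- 720, i.e. 6!
  print (toInteger (length favorable) % 6 ^ 6)         -- 6!/6^6 = 5/324
```

Sorting each roll and comparing with [1..6] is exactly the "any order" condition from the question.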
9. ## Re: Odds of 1 to 6 straight
Please, disregard the "order" of the dice roll! When six dice are thrown, they fall and the faces are read. There is no order. The question means that all six numbers (1,2,3,4,5,6) appear on the dice - however they lie. Also, while I understand this is a math forum, I don't understand what ! means in a formula.
Can someone PLEASE supply an answer in English prose?
10. ## Re: Odds of 1 to 6 straight
hi
the final answer is 6!/6^6 where the 6! means 6*5*4*3*2*1.
btw n! (pronounced n factorial) is the product of all natural numbers from 1 to n.
11. ## Re: Odds of 1 to 6 straight
Thank you. I gather that is 720 times more unlikely to throw a six-die straight than six of a kind. Do I have that right?
12. ## Re: Odds of 1 to 6 straight
Originally Posted by wshore
Please, disregard the "order" of the dice roll! When six dice are thrown, they fall and the faces are read. There is no order. The question means that all six numbers (1,2,3,4,5,6) appear on the dice - however they lie. Also, while I understand this is a math forum, I don't understand what ! means in a formula.
Can someone PLEASE supply an answer in English prose?
You are the one who is mathematically challenged here.
You are receiving help from professional mathematicians; we cannot be faulted for using the language of the field.
First, the correct way to ask about this is to use the word probability, not odds.
Secondly, the outcome space is no different in tossing six dice at one time and tossing one die six times.
So the probability of getting each of the six numbers is $\frac{6!}{6^6}$.
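The closed form $\frac{6!}{6^6}$ is easy to confirm numerically. A quick sketch (the trial count for the Monte Carlo check is an arbitrary choice):

```python
from math import factorial
import random

# Exact probability that six dice show all six faces: 6!/6^6
exact = factorial(6) / 6**6
print(exact)  # ~0.01543

# Monte Carlo check: roll six dice many times and count the straights
trials = 200_000
hits = sum(
    1
    for _ in range(trials)
    if {random.randint(1, 6) for _ in range(6)} == {1, 2, 3, 4, 5, 6}
)
print(hits / trials)  # close to the exact value
```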
13. ## Re: Odds of 1 to 6 straight
and I thought I solved this twice yesterday
14. ## Re: Odds of 1 to 6 straight
Originally Posted by matheagle
and I thought I solved this twice yesterday
As did I. But see reply #7.
http://math.stackexchange.com/questions/87687/showing-piax-pibx-sim-a-b-as-x-to-infty
# Showing $\pi(ax)/\pi(bx) \sim a/b$ as $x \to \infty$
I'm having a bit of a problem with exercise 4.12 in Apostol's "Introduction to Analytic Number Theory". I don't think it's supposed to be a very hard exercise; it's the first one in its section (they're usually a bit like warm-ups). I'm supposed to show that
If $a>0$ and $b>0$, then $\pi(ax)/\pi(bx) \sim a/b$ as $x \to \infty$.
It also says I'm allowed to use the prime number theorem. Is it just something like (a rough sketch): $$\frac{\pi(ax)}{\pi(bx)} \sim \frac{ax \log bx}{bx \log ax} \sim \frac{a}{b}, \quad \text{since the logs $\to 1$ as $x \to \infty$?}$$ I don't know, maybe I'm heading in the wrong direction... It would be very nice if someone could show me how to do this properly!
-
Looks good to me, although instead of saying "the logs go to 1" I would say the ratio of the logs goes to 1. – Gerry Myerson Dec 2 '11 at 9:53
@Gerry: Oh, ok, I thought I was a long way from being done! Thanks! – Carolus Dec 2 '11 at 10:13
## 3 Answers
All you're missing is the identity $\log(bx) = \log(b) + \log(x)$; the fraction is then $\sim \frac{ax\log(x)}{bx\log(x)} \sim \frac{a}{b}$.
-
You have $\frac{ax \log bx}{bx \log ax} = \frac ab \frac{\log x+\log b}{\log x+\log a}$.
Since $\log x\to\infty$, you have $$\lim\limits_{x\to\infty} \frac{\log x+\log b}{\log x+\log a}=1.$$ (The constants $\log a$ and $\log b$ are "small" compared to $\log x$.)
-
Of course! Thank you! – Carolus Dec 2 '11 at 10:14
It might be interesting to note that the initial statement is what guarantees that the fractions of the form $p/q$ with $p,q$ primes, form a dense subset of $[0,\infty).$
-
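The limit can also be watched converging numerically with a simple sieve. A sketch (the bound $3\times 10^6$ is an arbitrary choice; note that the convergence is only log-speed):

```python
def sieve(limit):
    """Bytearray sieve of Eratosthenes: is_prime[n] == 1 iff n is prime."""
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return is_prime

a, b, x = 2, 3, 10**6
is_prime = sieve(b * x)
pi_ax = sum(is_prime[: a * x + 1])   # pi(ax)
pi_bx = sum(is_prime)                # pi(bx)
ratio = pi_ax / pi_bx
# At x = 10^6 the ratio is still ~0.687, versus a/b = 2/3 ~ 0.667
print(pi_ax, pi_bx, ratio)
```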
http://math.stackexchange.com/questions/127995/jacobi-symbol-and-invertibility-of-m-for-an-odd-n
# Jacobi symbol and invertibility of $m$ for an odd $n$
I have asked a similar question here before, and received a nice answer. I think that the next question here is equivalent, but can't seem to be able to prove it. Here goes:
Given an odd $n$, I want to find an $m$ with $0\leq m\leq n-1$ such that $m\in\left(\mathbb{Z}/n\mathbb{Z}\right)^{*}$ (i.e., $m$ is invertible modulo $n$), and also $\left(1-m^{-1}\right)\in\left(\mathbb{Z}/n\mathbb{Z}\right)^{*}$ and $\left(\frac{m}{n}\right)=1$, where $\left(\frac{m}{n}\right)$ is the Jacobi symbol.
I can prove that if we disregard the Jacobi symbol requirement, this is equivalent to finding two consecutive invertible numbers modulo $n$. But when we throw the Jacobi symbol into the equation, I'm not sure if this is the same as my earlier question. If it is, I would greatly appreciate a proof. If not, a new answer :).
Thanks a lot!
-
## 2 Answers
For any $m \in (\mathbb{Z}/n\mathbb{Z})^\star$, since $m^{-1} = m \times (m^{-1})^2$, $\left(\frac{m}{n}\right)=\left(\frac{m^{-1}}{n}\right)$. Furthermore, $-1$ is invertible, so it is the same thing to find a square residue $m$ with $1-m^{-1}$ invertible as to find a square residue $m^{-1}$ with $m^{-1}-1$ invertible.
-
$1 - m^{-1} = (m-1)/m$. $m$ and $1 - m^{-1}$ are units if and only if $m$ and $m-1$ are units.
-
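Both reductions above are easy to check by brute force. A minimal sketch (the `jacobi` routine is a standard binary-reciprocity implementation, and `find_m` is a name I made up for illustration):

```python
from math import gcd

def jacobi(m, n):
    """Jacobi symbol (m/n) for odd n > 0 (standard binary algorithm)."""
    assert n > 0 and n % 2 == 1
    m %= n
    result = 1
    while m != 0:
        while m % 2 == 0:          # pull out factors of 2
            m //= 2
            if n % 8 in (3, 5):
                result = -result
        m, n = n, m                # quadratic reciprocity flip
        if m % 4 == 3 and n % 4 == 3:
            result = -result
        m %= n
    return result if n == 1 else 0

def find_m(n):
    """Smallest m with m and m-1 invertible mod n and (m/n) = 1."""
    for m in range(1, n):
        if gcd(m, n) == 1 and gcd(m - 1, n) == 1 and jacobi(m, n) == 1:
            return m
    return None

print(find_m(15))  # 2: indeed 2*8 = 16 = 1 (mod 15), and 1 - 8 = 8 is a unit
```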
http://mathhelpforum.com/math-topics/169318-time-period-proton-magnetic-field.html
# Thread:
1. ## Time Period of a Proton in a Magnetic Field
Spent half an hour looking at this problem, but I can't see any way to get any of the multiple choice answers:
Originally Posted by AQA Specimen Paper
Protons, each of mass m and charge e, follow a circular path when travelling perpendicular to a magnetic field of uniform flux density B. What is the time period for one complete orbit?
• A $\frac{2 \pi e B}{m}$
• B $\frac{m}{2 \pi e B}$
• C $\frac{eB}{2 \pi m}$
• D $\frac{2 \pi m}{eB}$
All I know is that it should have something to do with:
$F = m \omega^2 r = m \frac{v^2}{r} = BQv \: , \: \omega = \frac{2 \pi}{T}$
Thanks
Edit: Oh and the answer is D
2. You're missing an equation.
Remember that the tangential velocity of an object in circular motion is: $v = \omega r$ so that:
$F = QvB = m\frac{v^2}{r}$
$QB = m\frac{v}{r} = m\omega$
$\omega = \frac{QB}{m}$
We know the formula for period, which is:
$T = \frac{2\pi}{\omega}$
$T = \frac{2\pi m}{QB}$
Since we know the charge, we can fill that in to get:
$T = \frac{2\pi m}{eB}$
I'm not sure why D doesn't match up to this, but I'm almost 95% sure this is right.
3. Thanks very much for the reply, will look again when I got some caffeine.
You were right also, I just had a typo in my LaTeX (\pim instead of \pi m).
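As a sanity check on $T = \frac{2 \pi m}{eB}$, here is a quick numeric sketch (the constants are rounded CODATA values, and B = 1 T is an arbitrary choice):

```python
import math

# Proton cyclotron period T = 2*pi*m/(e*B)
m = 1.673e-27   # proton mass, kg (rounded)
e = 1.602e-19   # elementary charge, C (rounded)
B = 1.0         # magnetic flux density, T (arbitrary choice)

T = 2 * math.pi * m / (e * B)
print(T)   # ~6.6e-8 s; note the period is independent of the proton's speed
```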
http://math.stackexchange.com/questions/14523/how-can-i-find-int-tan-x-cos-2x-mathrm-dx
# How can I find $\int\tan\;x\;\cos\;2x\;\mathrm dx$?
My question is: how can I solve the following integral?
$$\int\tan\;x\;\cos\;2x\;\mathrm dx$$
Thanks in advance,
-
$\cos\;2x=2\cos^2\;x-1$ – J. M. Dec 16 '10 at 15:03
## 3 Answers
HINT
(1) $\cos 2x = \cos^2x-\sin^2x=2\cos^2x-1$.
(2) $2\sin x\cos x = \sin 2x$.
(3) $\frac{d}{dx}\log(f(x))=?$
-
Thanks for your answer. But can you write a step-by-step solution? – MAxcoder Dec 16 '10 at 15:09
@MAx: Why not try replacing the $\cos\;2x$ first with one of AD's suggestions and see where it leads you? – J. M. Dec 16 '10 at 15:11
@MAxcoder: I do not want to spoil the fun parts. – AD. Dec 16 '10 at 21:27
The answer, for checking purposes, is the following: $-\frac{1}{2}\cos(2x)+\ln(\cos(x))$. – night owl Jul 5 '11 at 12:03
Suppose I gave you an integral of the form
$\displaystyle \int \cot x \ \ f(\sin x) \ \text{dx}$
Can you think of a substitution to get rid of the $\cot x$ term?
For a concrete example, can you try evaluating
$\displaystyle \int \cot x \ \ (1 + \sin^5 x) \ \ \text{dx}$ ?
-
I'm going to tell you that integration by parts done directly isn't the way to approach this:
$$\int \tan(x)\cos(2x)dx = -\ln(\cos(x))\cos(2x) - 2\int \ln(\cos(x))\sin(2x)dx$$
As you can see, this expression is not likely to become any more manageable by solving the next integral.
In short, your problem comes down to simplifying the expression $\tan(x)\cos(2x)$; that is the big hint, and the other answers have shown you how to do this. Once you simplify it, you will have a much easier job of integrating the expression, and you most certainly won't need integration by parts.
-
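For the skeptical, the antiderivative quoted in the comments can be checked numerically with nothing but the standard library. A sketch using a central difference (the step size and sample points are arbitrary choices):

```python
import math

def F(t):
    # Candidate antiderivative from the comments: -cos(2t)/2 + ln(cos t)
    return -math.cos(2 * t) / 2 + math.log(math.cos(t))

def integrand(t):
    return math.tan(t) * math.cos(2 * t)

# Central-difference check that F'(t) matches the integrand
h = 1e-6
for t in (0.1, 0.5, 1.0):
    deriv = (F(t + h) - F(t - h)) / (2 * h)
    print(round(abs(deriv - integrand(t)), 6))  # 0.0 at each point
```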
http://mathoverflow.net/questions/60579/gromov-witten-invariants-of-singular-spaces
## Gromov-Witten invariants of singular spaces
I wonder if there is any situation where one can talk about Gromov-Witten invariants or quantum multiplication for singular varieties. Ideally, I would like have a situation where for a singular variety $X$ one can define quantum multiplication operators by elements of ORDINARY cohomology of $X$ on the INTERSECTION cohomology of $X$ (I have some examples where I know what I want the answer to be, but I don't know how to ask the question).
In fact, I will be ready to start with the following simple example: assume that $X$ just has quotient singularities, i.e. locally it looks like $Y/G$ where $Y$ is smooth and $G$ is a finite group. In this case the intersection cohomology coincides with the ordinary cohomology, so my question is whether in this case one can define quantum multiplication. One warning: I am talking about quantum cohomology of $X$ itself, not about what is called "orbifold quantum cohomology" (which in many cases coincides with the quantum cohomology of a good resolution of $X$).
-
## 1 Answer
Since no one else has said anything, let me make two naive comments.
1. The only case I know of where there is a well developed notion of Gromov-Witten invariants for a singular variety is the very special case when the target admits a gluing $X \cup_D Y$ where X and Y are smooth and projective and D is a smooth divisor.
2. You write that you are not interested in the orbifold Gromov-Witten invariants, but perhaps it is possible to define something similar to what you want in terms of the orbifold invariants. Let $X = Y/G$ and set $\mathscr X = [Y/G]$. The Gromov-Witten invariants of $\mathscr X$ are given by maps $$I_{g,n,\beta} : H^\ast(\overline I \mathscr X)^{\otimes n} \to H^\ast(\overline M_{g,n}),$$ where $\overline I \mathscr X$ is the rigidified inertia stack of $\mathscr X$. However there is also a natural map $H^\ast(X) \to H^\ast(\overline I \mathscr X)$ induced by: (i) the rigidification morphism $I \mathscr X \to \overline I \mathscr X$, in particular the isomorphism it induces on cohomology; (ii) the forgetful map $I \mathscr X \to \mathscr X$; and (iii) the coarse moduli space map $\mathscr X \to X$. So by precomposition we get a collection of maps $H^\ast(X)^{\otimes n} \to H^\ast(\overline M_{g,n})$, and even though it seems they will not satisfy the axioms required of GW invariants maybe they are what is needed in your situation.
-
Thanks. In fact I only care about genus 0 case. I have a question: suppose that $X$ has a crepant resolution $\widetilde{X}$. Then $H^*(X)$ maps to $H^*(\widetilde{X})$ in the obvious way. Question: if one assumes the crepant resolution conjecture, will one get the same construction from this embedding as from yours? – Alexander Braverman Apr 6 2011 at 0:13
http://mathoverflow.net/questions/96068/ergodicity-of-the-group-of-transformations-preserving-a-partition/96075
## ergodicity of the group of transformations preserving a partition
Let $X=\{0,1\}^{\mathbb{N}}$ and $\theta$ be the partition of $X$ induced by the equivalence relation $x \sim x'$ when $x$ and $x'$ differ only at a finite number of coordinates (see this related question).
Given a Bernoulli measure $m$ on $X$, let ${\cal H}$ be the group of transformations $S$ of $X$ satisfying $\theta(x)=\theta(S(x))$ for almost all $x$ and let ${\cal G}$ be the subgroup of ${\cal H}$ consisting of measure-preserving transformations.
Is it possible to explicitly describe ${\cal G}$? Under which conditions on $m$ is the group ${\cal G}$ ergodic?
EDIT: I am also interested in the case when $m$ is a stationary Markov probability on $X$.
-
## 1 Answer
It's easy to see that the action is always ergodic, since $\mathcal G$ contains the group of finite permutations on the indices, which acts ergodically. In fact, the group $\mathcal G$ (which equals $\mathcal H$ in the case $m = \mu^{\mathbb N}$ with $\mu(0) = 1/2$) that you are describing is the full group of the ergodic hyperfinite measurable equivalence relation. It, and other full groups, are discussed in Sections I.3 and I.4 in the book by Alexander Kechris: Global aspects of ergodic group actions, Mathematical Surveys and Monographs, 160, American Mathematical Society, 2010.
-
Are you sure you're right when $m=\mu^{\mathbb{N}}$ with $\mu(0) \in ]0, \frac{1}{2}[$ ? (I had been trying to understand an ergodic theoretic paper and that would contradict my understanding) – Stéphane Laurent May 5 2012 at 17:01
Oops sorry, in the case I am interested in, $m$ is Markov. I'm going to add this question to my initial question. – Stéphane Laurent May 5 2012 at 17:14
Yes, you are right. In the case when $m = \mu^{\mathbb N}$ with $\mu(0) \in (0, 1/2)$ you don't have $\mathcal G = \mathcal H$. But $\mathcal G$ still acts ergodically and gives the full group of the ergodic hyperfinite measurable equivalence relation. – Jesse Peterson May 5 2012 at 18:18
Thank you Jesse. This is clearer to me now. In a stationary Markov non-Bernoulli case, I think there are only two transformations: the identity and $x \mapsto -x$ (with $\{-1,1\}$ instead of $\{0,1\}$), hence the full group is not ergodic in this case. – Stéphane Laurent May 6 2012 at 7:16
http://medlibrary.org/medwiki/Bicubic_interpolation
# Bicubic interpolation
Bicubic interpolation on the square $[0,3] \times [0,3]$ consisting of 9 unit squares patched together. Bicubic interpolation as per MATLAB's implementation. Colour indicates function value. The black dots are the locations of the prescribed data being interpolated. Note how the color samples are not radially symmetric.
Bilinear interpolation on the same dataset as above. Derivatives of the surface are not continuous over the square boundaries.
Nearest-neighbor interpolation on the same dataset as above. Note that the information content in all these three examples is equivalent.
In mathematics, bicubic interpolation is an extension of cubic interpolation for interpolating data points on a two-dimensional regular grid. The interpolated surface is smoother than corresponding surfaces obtained by bilinear interpolation or nearest-neighbor interpolation. Bicubic interpolation can be accomplished using either Lagrange polynomials, cubic splines, or the cubic convolution algorithm.
In image processing, bicubic interpolation is often chosen over bilinear interpolation or nearest neighbor in image resampling, when speed is not an issue. In contrast to bilinear interpolation, which only takes 4 pixels (2x2) into account, bicubic interpolation considers 16 pixels (4x4). Images resampled with bicubic interpolation are smoother and have fewer interpolation artifacts.
## Bicubic interpolation
Suppose the function values $f$ and the derivatives $f_x$, $f_y$ and $f_{xy}$ are known at the four corners $(0,0)$, $(1,0)$, $(0,1)$, and $(1,1)$ of the unit square. The interpolated surface can then be written
$p(x,y) = \sum_{i=0}^3 \sum_{j=0}^3 a_{ij} x^i y^j.$
The interpolation problem consists of determining the 16 coefficients $a_{ij}$. Matching $p(x,y)$ with the function values yields four equations,
1. $f(0,0) = p(0,0) = a_{00}$
2. $f(1,0) = p(1,0) = a_{00} + a_{10} + a_{20} + a_{30}$
3. $f(0,1) = p(0,1) = a_{00} + a_{01} + a_{02} + a_{03}$
4. $f(1,1) = p(1,1) = \textstyle \sum_{i=0}^3 \sum_{j=0}^3 a_{ij}$
Likewise, eight equations for the derivatives in the $x$-direction and the $y$-direction
1. $f_x(0,0) = p_x(0,0) = a_{10}$
2. $f_x(1,0) = p_x(1,0) = a_{10} + 2a_{20} + 3a_{30}$
3. $f_x(0,1) = p_x(0,1) = a_{10} + a_{11} + a_{12} + a_{13}$
4. $f_x(1,1) = p_x(1,1) = \textstyle \sum_{i=1}^3 \sum_{j=0}^3 a_{ij} i$
5. $f_y(0,0) = p_y(0,0) = a_{01}$
6. $f_y(1,0) = p_y(1,0) = a_{01} + a_{11} + a_{21} + a_{31}$
7. $f_y(0,1) = p_y(0,1) = a_{01} + 2a_{02} + 3a_{03}$
8. $f_y(1,1) = p_y(1,1) = \textstyle \sum_{i=0}^3 \sum_{j=1}^3 a_{ij} j$
And four equations for the cross derivative $xy$.
1. $f_{xy}(0,0) = p_{xy}(0,0) = a_{11}$
2. $f_{xy}(1,0) = p_{xy}(1,0) = a_{11} + 2a_{21} + 3a_{31}$
3. $f_{xy}(0,1) = p_{xy}(0,1) = a_{11} + 2a_{12} + 3a_{13}$
4. $f_{xy}(1,1) = p_{xy}(1,1) = \textstyle \sum_{i=1}^3 \sum_{j=1}^3 a_{ij} i j$
where the expressions above have used the following identities,
$p_x(x,y) = \textstyle \sum_{i=1}^3 \sum_{j=0}^3 a_{ij} i x^{i-1} y^j$
$p_y(x,y) = \textstyle \sum_{i=0}^3 \sum_{j=1}^3 a_{ij} x^i j y^{j-1}$
$p_{xy}(x,y) = \textstyle \sum_{i=1}^3 \sum_{j=1}^3 a_{ij} i x^{i-1} j y^{j-1}$.
This procedure yields a surface $p(x,y)$ on the unit square $[0,1] \times [0,1]$ that is continuous and has continuous derivatives. Bicubic interpolation on an arbitrarily sized regular grid can then be accomplished by patching together such bicubic surfaces, ensuring that the derivatives match on the boundaries.
If the derivatives are unknown, they are typically approximated from the function values at points neighbouring the corners of the unit square, e.g. using finite differences.
Grouping the unknown parameters $a_{ij}$ in a vector,
$\alpha=\left[\begin{smallmatrix}a_{00}&a_{10}&a_{20}&a_{30}&a_{01}&a_{11}&a_{21}&a_{31}&a_{02}&a_{12}&a_{22}&a_{32}&a_{03}&a_{13}&a_{23}&a_{33}\end{smallmatrix}\right]^T$
and letting
$x=\left[\begin{smallmatrix}f(0,0)&f(1,0)&f(0,1)&f(1,1)&f_x(0,0)&f_x(1,0)&f_x(0,1)&f_x(1,1)&f_y(0,0)&f_y(1,0)&f_y(0,1)&f_y(1,1)&f_{xy}(0,0)&f_{xy}(1,0)&f_{xy}(0,1)&f_{xy}(1,1)\end{smallmatrix}\right]^T$,
the problem can be reformulated into a linear equation $A\alpha=x$, where the inverse of $A$ is:
$A^{-1}=\left[\begin{smallmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -3 & 3 & 0 & 0 & -2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 2 & -2 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -3 & 3 & 0 & 0 & -2 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & -2 & 0 & 0 & 1 & 1 & 0 & 0 \\ -3 & 0 & 3 & 0 & 0 & 0 & 0 & 0 & -2 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -3 & 0 & 3 & 0 & 0 & 0 & 0 & 0 & -2 & 0 & -1 & 0 \\ 9 & -9 & -9 & 9 & 6 & 3 & -6 & -3 & 6 & -6 & 3 & -3 & 4 & 2 & 2 & 1 \\ -6 & 6 & 6 & -6 & -3 & -3 & 3 & 3 & -4 & 4 & -2 & 2 & -2 & -2 & -1 & -1 \\ 2 & 0 & -2 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 2 & 0 & -2 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ -6 & 6 & 6 & -6 & -4 & -2 & 4 & 2 & -3 & 3 & -3 & 3 & -2 & -1 & -2 & -1 \\ 4 & -4 & -4 & 4 & 2 & 2 & -2 & -2 & 2 & -2 & 2 & -2 & 1 & 1 & 1 & 1 \end{smallmatrix}\right]$.
## Bicubic convolution algorithm
Bicubic spline interpolation requires the solution of the linear system described above for each grid cell. An interpolator with similar properties can be obtained by applying a convolution with the following kernel in both dimensions:
$W(x) = \begin{cases} (a+2)|x|^3-(a+3)|x|^2+1 & \text{for } |x| \leq 1 \\ a|x|^3-5a|x|^2+8a|x|-4a & \text{for } 1 < |x| < 2 \\ 0 & \text{otherwise} \end{cases}$
where $a$ is usually set to -0.5 or -0.75. Note that $W(0)=1$ and $W(n)=0$ for all nonzero integers $n$.
This approach was proposed by Keys who showed that $a=-0.5$ (which corresponds to cubic Hermite spline) produces the best approximation of the original function.[1]
If we use the matrix notation for the common case $a=-0.5$, we can express the equation in a more friendly manner:
$p(t) = \tfrac{1}{2} \begin{bmatrix} 1 & t & t^2 & t^3 \\ \end{bmatrix} \begin{bmatrix} 0 & 2 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ 2 & -5 & 4 & -1 \\ -1 & 3 & -3 & 1 \\ \end{bmatrix} \begin{bmatrix} a_{-1} \\ a_0 \\ a_1 \\ a_2 \\ \end{bmatrix}$
for $t$ between 0 and 1 in one dimension. For two dimensions, it is applied first in $x$ and again in $y$:
$\textstyle b_{-1} = p(t_x, a_{(-1,-1)}, a_{(0,-1)}, a_{(1,-1)}, a_{(2,-1)})$
$\textstyle b_{0} = p(t_x, a_{(-1,0)}, a_{(0,0)}, a_{(1,0)}, a_{(2,0)})$
$\textstyle b_{1} = p(t_x, a_{(-1,1)}, a_{(0,1)}, a_{(1,1)}, a_{(2,1)})$
$\textstyle b_{2} = p(t_x, a_{(-1,2)}, a_{(0,2)}, a_{(1,2)}, a_{(2,2)})$
$\textstyle p(x,y) = p(t_y, b_{-1}, b_{0}, b_{1}, b_{2})$
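The convolution kernel above is easy to implement directly. A minimal one-dimensional sketch (the function names are mine, not from any particular library), illustrating that $W(0)=1$ and $W(n)=0$ make the interpolant reproduce the sample values:

```python
def W(x, a=-0.5):
    """Keys' cubic convolution kernel."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def interp1d(samples, t):
    """Interpolate at fractional offset t in [0, 1), given the four
    samples taken at integer positions -1, 0, 1, 2."""
    return sum(s * W(t - k) for s, k in zip(samples, (-1, 0, 1, 2)))

# W(0) = 1 and W(n) = 0 at the other integers, so node values are reproduced:
print(interp1d((3.0, 5.0, 7.0, 9.0), 0.0))  # 5.0
# Linear data stays linear at the midpoint:
print(interp1d((3.0, 5.0, 7.0, 9.0), 0.5))  # 6.0
```

The two-dimensional version simply applies `interp1d` along each row and then once along the resulting column, exactly as in the $b_{-1},\dots,b_2$ equations above.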
## Use in computer graphics
Bicubic interpolation causes overshoot, which increases acutance.
The bicubic algorithm is frequently used for scaling images and video for display (see bitmap resampling). It preserves fine detail better than the common bilinear algorithm.
However, due to the negative lobes on the kernel, it causes overshoot (haloing). This can cause clipping, and is an artifact (see also ringing artifacts), but it increases acutance (apparent sharpness), and can be desirable.
## See also
• Spatial anti-aliasing
• Bézier surface
• Bilinear interpolation
• Cubic Hermite spline, the one-dimensional analogue of bicubic spline
• Lanczos resampling
• Natural neighbor interpolation
• Sinc filter
• Spline interpolation
• Tricubic interpolation
## References
1. R. Keys, (1981). "Cubic convolution interpolation for digital image processing". IEEE Transactions on Signal Processing, Acoustics, Speech, and Signal Processing 29 (6): 1153–1160. doi:10.1109/TASSP.1981.1163711.
Content in this section is authored by an open community of volunteers and is not produced by, reviewed by, or in any way affiliated with MedLibrary.org. Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, using material from the Wikipedia article on "Bicubic interpolation", available in its original form here:
http://en.wikipedia.org/w/index.php?title=Bicubic_interpolation
http://mathhelpforum.com/algebra/170266-logarithmic-identities-problem-solves-one-way-but-not-another.html
# Thread:
1. ## Logarithmic identities problem - solves one way but not another
Hi,
I was working through a log problem using logarithmic identities. The problem is worded as follows:
log(4) = 0.703x
log(1/4) = z
solve for z in terms of x
To solve this, I tried one approach, which is simply:
z = log(1/4) = log(4^-1) = -1*log(4) = -0.703x.
This seems to be the correct answer. Then I also tried solving the same question by adding the two equations together:
log(4) + log(1/4) = 0.703x + z
so: z = log(4*(1/4)) - 0.703x = -0.703x. The solution in this case is also in agreement.
My question is... why does it not work when you subtract one equation from the other? My work comes out like this:
log(4) - log(1/4) =0.703x - z
log(4/(1/4)) = .703x - z
log(16) = .703x - z
z = .703x - log(16)
The z in this case doesn't agree with z in the other cases. Can anyone tell me what I am doing wrong in the last case? Thanks.
2. You've not done anything wrong, just didn't finish it off.
$\log(16) = \log(4^2) = 2\log(4)$ and since you have $\log(4) = 0.703x$ then your equation becomes $z = 0.703x - 2 \cdot 0.703x = -0.703x$
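The identity that resolves the puzzle can also be verified numerically; a quick sketch (base-10 logarithms, though the same holds in any base):

```python
import math

# The identities used in the thread, checked numerically
log4 = math.log10(4)
print(math.log10(1 / 4), -log4)   # the two values agree: log(1/4) = -log(4)
print(math.log10(16), 2 * log4)   # the step that resolves the third case
```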
3. Thank you so much! Had me confused for a while...
http://www.reference.com/browse/ruffini's+corpuscle
Definitions
# Ruffini's rule
In mathematics, Ruffini's rule allows the rapid division of any polynomial by a binomial of the form x − r. It was described by Paolo Ruffini in 1809. Ruffini's rule is a special case of long division when the divisor is a linear factor. Ruffini's rule is also known as synthetic division. The Horner scheme is a fast algorithm for dividing a polynomial by a linear polynomial with Ruffini's rule. See also polynomial long division for related background.
## Algorithm
The rule establishes a method for dividing the polynomial
$P(x)=a_nx^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0$
by the binomial
$Q(x)=x-r$
to obtain the quotient polynomial
$R(x)=b_{n-1}x^{n-1}+b_{n-2}x^{n-2}+\cdots+b_1x+b_0$
and a remainder s.
The algorithm is in fact the long division of P(x) by Q(x).
To divide P(x) by Q(x):
1. Take the coefficients of P(x) and write them down in order. Then write r at the bottom left edge, just over the line:
| an an-1 ... a1 a0
|
r |
----|---------------------------------------------------------
|
|
2. Pass the leftmost coefficient (an) to the bottom, just under the line:
| an an-1 ... a1 a0
|
r |
----|---------------------------------------------------------
| an
|
| = bn-1
|
3. Multiply the rightmost number under the line by r and write it over the line and one position to the right:
| an an-1 ... a1 a0
|
r | bn-1r
----|---------------------------------------------------------
| an
|
| = bn-1
|
4. Add the two values you've just put in the same column
| an an-1 ... a1 a0
|
r | bn-1r
----|---------------------------------------------------------
| an an-1+(bn-1r)
|
| = bn-1 = bn-2
|
5. Repeat steps 3 and 4 until you've run out of numbers
| an an-1 ... a1 a0
|
r | bn-1r ... b1r b0r
----|---------------------------------------------------------
| an an-1+(bn-1r) ... a1+b1r a0+b0r
|
| = bn-1 = bn-2 ... = b0 = s
|
The b values are the coefficients of the result (R(x)) polynomial, the degree of which is one less than that of P(x). The final value obtained, s, is the remainder. As shown in the polynomial remainder theorem, this remainder is equal to P(r), the value of the polynomial at r.
There is a numerical example below.
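The tableau above translates into a few lines of code. This is an illustrative sketch (the function name is ours, not from the article): bring down the leading coefficient, then repeatedly multiply by r and add the next coefficient.

```python
def ruffini_divide(coeffs, r):
    """Divide a polynomial (coefficients highest-degree first) by (x - r).

    Returns (quotient_coefficients, remainder). The last value produced
    by the tableau is the remainder s = P(r).
    """
    values = [coeffs[0]]
    for a in coeffs[1:]:
        values.append(a + values[-1] * r)
    remainder = values.pop()
    return values, remainder

# The article's example: (2x^3 + 3x^2 - 4) / (x + 1), i.e. r = -1
q, s = ruffini_divide([2, 3, 0, -4], -1)
print(q, s)  # [2, 1, -1] -3, i.e. quotient 2x^2 + x - 1, remainder -3
```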
## Uses of the rule
Ruffini's rule has many practical applications; most of them rely on simple division (as demonstrated below) or the common extensions given still further below.
### Polynomial division by x − r
A worked example of polynomial division, as described above.
Let:
$P(x)=2x^3+3x^2-4$
$Q(x)=x+1$
We want to divide P(x) by Q(x) using Ruffini's rule. The main problem is that Q(x) does not appear to be a binomial of the form x − r, but rather x + r. We must rewrite Q(x) in this way:
$Q(x)=x+1=x-(-1)$
Now we apply the algorithm:
1. Write down the coefficients and r. Note that, as P(x) didn't contain a coefficient for x, we've written 0:
| 2 3 0 -4
|
-1 |
----|----------------------------
|
|
2. Pass the first coefficient down:
| 2 3 0 -4
|
-1 |
----|----------------------------
| 2
|
3. Multiply the last obtained value by r:
| 2 3 0 -4
|
-1 | -2
----|----------------------------
| 2
|
4. Add the values:
| 2 3 0 -4
|
-1 | -2
----|----------------------------
| 2 1
|
5. Repeat steps 3 and 4 until we've finished:
| 2 3 0 -4
|
-1 | -2 -1 1
----|----------------------------
| 2 1 -1 -3
|{result coefficients}{remainder}
So, since dividend = divisor × quotient + remainder, we have
$P(x)=Q(x)R(x)+s$, where
$R(x)=2x^2+x-1$ and $s=-3.$
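The result can be checked by multiplying back: (x + 1)(2x² + x − 1) + (−3) should reproduce the original polynomial. A small sketch (an editorial addition; `poly_mul` is our own helper):

```python
def poly_mul(a, b):
    # Multiply two polynomials given as coefficient lists, highest degree first.
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

Q = [1, 1]          # x + 1
R = [2, 1, -1]      # 2x^2 + x - 1
product = poly_mul(Q, R)
product[-1] += -3   # add the remainder s = -3 to the constant term
print(product)      # [2, 3, 0, -4], i.e. 2x^3 + 3x^2 - 4
```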
### Polynomial root-finding
The rational root theorem tells us that for a polynomial f(x) = anxn + an−1xn−1 + ... + a1x + a0 all of whose coefficients (an through a0) are integers, the real rational roots are always of the form p/q, where p is an integer divisor of a0 and q is an integer divisor of an. Thus if our polynomial is
$P(x)=x^3+2x^2-x-2=0,$
then the possible rational roots are all the integer divisors of a0 (−2):
$\text{Possible roots: }\{+1, -1, +2, -2\}$
(This example is simple because the polynomial is monic (i.e. an = 1); for non-monic polynomials the set of possible roots will include some fractions, but only a finite number of them since an and a0 only have a finite number of integer divisors each.) In any case, for monic polynomials, every rational root is an integer, and so every integer root is just a divisor of the constant term. It can be shown that this remains true for non-monic polynomials, i.e. to find the integer roots of any polynomials with integer coefficients, it suffices to check the divisors of the constant term.
So, setting r equal to each of these possible roots in turn, we will test-divide the polynomial by (x − r). If the resulting quotient has no remainder, we have found a root.
You can choose one of the following three methods: they will all yield the same results, with the exception that only through the second method and the third method (when applying Ruffini's rule to obtain a factorization) can you discover that a given root is repeated. (Remember that none of these methods will discover irrational or complex roots.)
#### Method 1
We try to divide P(x) by the binomial (x − each possible root). If the remainder is 0, the selected number is a root (and vice versa):
| +1 +2 -1 -2 | +1 +2 -1 -2
| |
+1 | +1 +3 +2 -1 | -1 -1 +2
----|---------------------------- ----|---------------------------
| +1 +3 +2 0 | +1 +1 -2 0
| +1 +2 -1 -2 | +1 +2 -1 -2
| |
+2 | +2 +8 +14 -2 | -2 0 +2
----|---------------------------- ----|---------------------------
| +1 +4 +7 +12 | +1 0 -1 0
$x_1=+1$
$x_2=-1$
$x_3=-2$
#### Method 2
We start just as in Method 1 until we find a valid root. Then, instead of restarting the process with the other possible roots, we continue testing the possible roots against the result of the Ruffini on the valid root we've just found until we only have a coefficient remaining (remember that roots can be repeated: if you get stuck, try each valid root twice):
| +1 +2 -1 -2 | +1 +2 -1 -2
| |
-1 | -1 -1 +2 -1 | -1 -1 +2
----|--------------------------- ----|---------------------------
| +1 +1 -2 | 0 | +1 +1 -2 | 0
| |
+2 | +2 +6 +1 | +1 +2
------------------------- -------------------------
| +1 +3 |+4 | +1 +2 | 0
|
-2 | -2
-------------------
| +1 | 0
$x_1=+1$
$x_2=-1$
$x_3=-2$
#### Method 3
• Determine the set of the possible integer or rational roots of the polynomial according to the rational root theorem.
• For each possible root r, instead of performing the division P(x)/(x -r), apply the polynomial remainder theorem, which states that the remainder of this division is P(r), i.e. the polynomial evaluated for x = r.
Thus, for each r in our set, r is actually a root of the polynomial if and only if P(r) = 0
This shows that finding integer and rational roots of a polynomial neither requires any division nor the application of Ruffini's rule.
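Method 3 in code: test each candidate by evaluating P(r) directly, with no division at all. The evaluation below uses Horner's scheme, which is the same arithmetic as one pass of Ruffini's tableau (an illustrative sketch; the names are ours):

```python
def eval_poly(coeffs, x):
    # Horner's scheme: the same multiply-and-add steps as Ruffini's rule.
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

P = [1, 2, -1, -2]           # x^3 + 2x^2 - x - 2
candidates = [1, -1, 2, -2]  # divisors of the constant term
roots = [r for r in candidates if eval_poly(P, r) == 0]
print(roots)  # [1, -1, -2]; eval_poly(P, 2) == 12, so 2 is not a root
```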
However, once a valid root has been found, call it r1: you can apply Ruffini's rule to determine
Q(x) = P(x)/(x-r1).
This allows you to partially factorize the polynomial as
P(x) = (x − r1)·Q(x)
Any additional (rational) root of the polynomial is also a root of Q(x) and, of course, is still to be found among the possible roots determined earlier which have not yet been checked (any value already determined not to be a root of P(x) is not a root of Q(x) either; more formally, P(r)≠0 → Q(r)≠0 ).
Thus, you can proceed evaluating Q(r) instead of P(r), and (as long as you can find another root, r2) dividing Q(x) by (x − r2).
Even if you're only searching for roots, this allows you to evaluate polynomials of successively smaller degree, as the factorization proceeds.
If, as is often the case, you're also factorizing a polynomial of degree n, then:
• if you've found p = n rational solutions you end up with a complete factorization (see below) into n linear factors;
• if you've found p < n rational solutions you end up with a partial factorization (see below) into p linear factors and one factor of degree n − p that has no rational roots.
Remember to check out the limitations of the whole procedure.
Examples:
##### Finding roots without applying Ruffini's Rule
P(x) = x³ +2x² -x -2
Possible roots = {1, -1, 2, -2}
• P(1) = 0 → x1 = 1
• P(-1) = 0 → x2 = -1
• P(2) = 12 → 2 is not a root of the polynomial
and the remainder of (x³ +2x² -x -2)/(x-2) is 12
• P(-2) = 0 → x3 = -2
##### Finding roots applying Ruffini's Rule and obtaining a (complete) factorization
P(x) = x³ +2x² -x -2
Possible roots = {1, -1, 2, -2}
• P(1) = 0 → x1 = 1
Then, applying Ruffini's Rule:
(x³ +2x² -x -2) / (x -1) = (x² +3x +2) →
→ x³ +2x² -x -2 = (x-1)(x² +3x +2)
Here, r1 = 1 and Q(x) = x² +3x +2
• Q(-1) = 0 → x2 = -1
Again, applying Ruffini's Rule:
(x² +3x +2) / (x +1) = (x +2) →
→ x³ +2x² -x -2 = (x-1)(x² +3x +2) = (x-1)(x+1)(x+2)
As it was possible to completely factorize the polynomial, it's clear that the last root is -2 (the previous procedure would have given the same result, with a final quotient of 1).
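The whole procedure — peel off one rational root at a time with Ruffini's rule, then continue on the smaller quotient — can be sketched as follows (an editorial addition; names are ours, and repeated roots would require re-trying candidates):

```python
def ruffini_divide(coeffs, r):
    # One pass of the tableau: returns (quotient, remainder).
    q = [coeffs[0]]
    for a in coeffs[1:]:
        q.append(a + q[-1] * r)
    return q[:-1], q[-1]

coeffs = [1, 2, -1, -2]      # x^3 + 2x^2 - x - 2
candidates = [1, -1, 2, -2]  # divisors of the constant term
roots = []
for r in candidates:
    q, rem = ruffini_divide(coeffs, r)
    if rem == 0:
        roots.append(r)
        coeffs = q           # keep factoring the quotient
print(roots, coeffs)         # [1, -1, -2] [1]  ->  (x-1)(x+1)(x+2)
```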
### Polynomial factoring
Having used the "p/q" result above (or, to be fair, any other means) to find all the real rational roots of a particular polynomial, it is but a trivial step further to partially factor that polynomial using those roots. As is well-known, each linear factor (x − r) which divides a given polynomial corresponds with a root r, and vice versa.
So if
$P(x)=a_nx^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0$ is our polynomial; and
$R=\{\text{roots of }P(x)\text{ in }\mathbb{Q}\}$ are the roots we have found, then consider the product
$R(x)=a_n\prod_{r\in R}(x-r)$.
By the fundamental theorem of algebra, R(x) should be equal to P(x), if all the roots of P(x) are rational. But since we have been using a method which finds only rational roots, it is very likely that R(x) is not equal to P(x); it is very likely that P(x) has some irrational or complex roots not in R. So consider
$S(x)=\frac{P(x)}{R(x)}$, which can be calculated using polynomial long division.
If S(x) = 1, then we know R(x) = P(x) and we are done. Otherwise, S(x) will itself be a polynomial; this is another factor of P(x) which has no real rational roots. So write out the right-hand-side of the following equation in full:
$P(x)=R(x)\cdot S(x)$
We can call this a complete factorization of P(x) over Q (the rationals) if S(x) = 1. Otherwise, we only have a partial factorization of P(x) over Q, which may or may not be further factorable over the rationals; but which will certainly be further factorable over the reals or at worst the complex plane. (Note: by a "complete factorization" of P(x) over Q, we mean a factorization as a product of polynomials with rational coefficients, such that each factor is irreducible over Q, where "irreducible over Q" means that the factor cannot be written as the product of two non-constant polynomials with rational coefficients and smaller degree.)
#### Example 1: no remainder
Let
$P(x)=x^3+2x^2-x-2$
Using the methods described above, the rational roots of P(x) are:
$R=\{+1, -1, -2\}$
Then, the product of (x − each root) is
$R(x)=1(x-1)(x+1)(x+2).$
And P(x)/R(x):
$S(x)=1.$
Hence the factored polynomial is P(x) = R(x) · 1 = R(x):
$P(x)=(x-1)(x+1)(x+2)$
#### Example 2: with remainder
Let
$P(x)=2x^4-3x^3+x^2-2x-8$
Using the methods described above, the rational roots of P(x) are:
$R=\{-1, +2\}$
Then, the product of (x − each root) is
$R(x)=(x+1)(x-2)$
And P(x)/R(x):
$S(x)=2x^2-x+4$
As $S(x)\ne 1$, the factored polynomial is P(x) = R(x) · S(x):
$P(x)=(x+1)(x-2)(2x^2-x+4)$
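A quick spot-check of Example 2 (an editorial addition): two degree-4 polynomials that agree at more than 4 points are identical, so evaluating both sides at a handful of integers confirms the factorization.

```python
# P(x) = 2x^4 - 3x^3 + x^2 - 2x - 8 versus its factored form R(x)*S(x).
P = lambda x: 2*x**4 - 3*x**3 + x**2 - 2*x - 8
F = lambda x: (x + 1) * (x - 2) * (2*x**2 - x + 4)
print(all(P(x) == F(x) for x in range(-5, 6)))  # True
```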
#### Factoring over the complexes
To completely factor a given polynomial over C, the complex numbers, we must know all of its roots (and that could include irrational and/or complex numbers). For example, consider the polynomial above:
$P(x)=2x^4-3x^3+x^2-2x-8$.
Extracting its rational roots and factoring it, we end with:
$P(x)=(x+1)(x-2)(2x^2-x+4)$.
But that is not completely factored over C. If we need to factor our polynomial into a product of linear factors, we must deal with the quadratic factor
$2x^2-x+4=0.$
The easiest way is to use the quadratic formula, which gives us
$x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}=\frac{1\pm\sqrt{(-1)^2-4\cdot 2\cdot 4}}{2\cdot 2}=\frac{1\pm\sqrt{-31}}{4}$
and the solutions
$x_1=\frac{1+\sqrt{-31}}{4}$
$x_2=\frac{1-\sqrt{-31}}{4}$.
So the completely-factored polynomial over C will be:
$P(x)=2(x+1)(x-2)\left(x-\frac{1+i\sqrt{31}}{4}\right)\left(x-\frac{1-i\sqrt{31}}{4}\right)$.
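The complex roots can be computed and verified numerically with Python's standard `cmath` module (an editorial sketch, not part of the original article):

```python
import cmath

# Roots of the remaining quadratic 2x^2 - x + 4 via the quadratic formula.
a, b, c = 2, -1, 4
disc = cmath.sqrt(b * b - 4 * a * c)   # sqrt(-31) = i*sqrt(31)
x1 = (-b + disc) / (2 * a)             # (1 + i*sqrt(31)) / 4
x2 = (-b - disc) / (2 * a)             # (1 - i*sqrt(31)) / 4

# Each root should satisfy the quadratic, up to floating-point error.
print(abs(2 * x1**2 - x1 + 4) < 1e-12)  # True
```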
However, it should be noted that we cannot in every case expect things to be so easy; the analogue of the quadratic formula for fourth-degree polynomials is very messy, and no such analogue exists for polynomials of degree five or higher. See Galois theory for a theoretical explanation of why this is so, and see numerical analysis for ways to approximate roots of polynomials numerically.
#### Limitations
It is entirely possible that, in looking for a given polynomial's roots, we might obtain a messy higher-order polynomial for S(x) which is further factorable over the rationals even before considering irrational or complex factorings. Consider the polynomial x⁵ − 3x⁴ + 3x³ − 9x² + 2x − 6. Using Ruffini's method we will find only one rational root (x = 3); factoring it out gives us P(x) = (x⁴ + 3x² + 2)(x − 3).
As explained above, if our assignment was to "factor into irreducibles over C" we know that would have to find some way to dissect the quartic and look for its irrational and/or complex roots. But if we were asked to "factor into irreducibles over Q", we might think we are done; but it is important to realize that this might not necessarily be the case.
For in this instance the quartic is actually factorable as the product of two quadratics (x² + 1)(x² + 2). These, at last, are irreducible over the rationals (and, indeed, the reals as well in this example); so now we are done; P(x) = (x² + 1)(x² + 2)(x − 3). In this instance it is in fact easy to factor our quartic by treating it as a biquadratic equation; but finding such factorings of a higher degree polynomial can be very difficult.
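The claimed factorization of the Limitations example can be spot-checked the same way (an editorial addition): two degree-5 polynomials agreeing at more than 5 points are equal, and x = 3 is indeed the only rational root among the candidates.

```python
# P(x) = x^5 - 3x^4 + 3x^3 - 9x^2 + 2x - 6 versus (x^2+1)(x^2+2)(x-3).
P = lambda x: x**5 - 3*x**4 + 3*x**3 - 9*x**2 + 2*x - 6
F = lambda x: (x**2 + 1) * (x**2 + 2) * (x - 3)
print(all(P(x) == F(x) for x in range(-4, 4)))  # True
print([r for r in (1, -1, 2, -2, 3, -3, 6, -6) if P(r) == 0])  # [3]
```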
## External links
• Synthetic Division, an article by Elizabeth Stapel on Purple Math
• Synthetic Division Tutorial An article explaining Synthetic Division
http://mathoverflow.net/revisions/55380/list
2 deleted 7 characters in body; deleted 4 characters in body; added 7 characters in body
If $G$ is a $p$-group, this is almost never true (and I believe but may be wrong that for general $G$, it's completely governed by the $p$-Sylow). In that case, we can understand modules in terms of the support variety $V={\rm Proj}\, H^{2*}(G,k)$, the projective variety associated to the (even-dimensional) cohomology ring of $G$. To a finite $kG$-module $M$ we can associate its support in $V$, namely the support of $\mathrm{Ext}(M,M)$ as a graded $H^{2*}(G,k)$-module. This support is a closed subset of $V$, and conversely every closed subset is the support of some finite $kG$-module. Finally, the support of $M\otimes N$ is the intersection of the support of $M$ and the support of $N$, and a module is projective iff its support is empty. Thus $M\otimes N$ is projective iff $M$ and $N$ have disjoint support.
Thus unless $V$ is just a single point, it is possible to have non-projective modules whose tensor product is projective. For $V$ to be a point, the cohomology of $G$ must be a polynomial ring in one variable, up to nilpotent elements. By Quillen's theorem, this is the case iff all elementary abelian subgroups of $G$ are conjugate and rank 1.
In particular, it is true for cyclic groups, but otherwise it is almost always false. There's a simple argument to see directly that it holds for cyclic groups of order $p$: in that case, $kG$ can be identified with $k[x]/x^p$, and every indecomposable module is of the form $M_i=k[x]/x^i$ for some $1\leq i\leq p$. Such a module $M_i$ is projective iff $i=p$. If $M_i$ and $M_j$ are not projective, then $M_i\otimes M_j$ has dimension $ij$, which is not divisible by $p$. Thus $M_i \otimes M_j$ cannot be a sum of copies of $M_p$ and is hence not projective.
1
If $G$ is a $p$-group, this is almost never true (and I believe but may be wrong that for general $G$, it's completely governed by the $p$-Sylow). In that case, we can understand modules in terms of the support variety $V={\rm Proj} H^{2*}(G,k)$, the projective variety associated to the (even-dimensional) cohomology ring of $G$. To a finite $kG$-module $M$ we can associate its support in $V$, namely the support of $Ext^(M,M)$ as a graded $H^{2}(G,k)$-module. This support is a closed subset of $V$, and conversely every closed subset is the support of some finite $kG$-module. Finally, the support of $M\otimes N$ is the intersection of the support of $M$ and the support of $N$, and a module is projective iff its support is empty. Thus $M\otimes N$ is projective iff $M$ and $N$ have disjoint support.
Thus unless $V$ is just a single point, it is possible to have non-projective modules whose tensor product is projective. For $V$ to be a point, the cohomology of $G$ must be a polynomial ring in one variable, up to nilpotent elements. By Quillen's theorem, this is the case iff all elementary abelian subgroups of $G$ are conjugate and rank 1.
In particular, it is true for cyclic groups, but otherwise it is almost always false. There's a simple argument to see directly that it holds for cyclic groups of order $p$: in that case, $kG$ can be identified with $k[x]/x^p$, and every indecomposable module is of the form $M_i=k[x]/x^i$ for some $n\leq p$. Such a module $M_i$ is projective iff $i=p$. If $M_i$ and $M_j$ are not projective, then $M_i\otimes M_j$ has dimension $ij$, which is not divisible by $p$. Thus $M_i \otimes M_j$ cannot be a sum of copies of $M_p$ and is hence not projective.