| url (17-172 chars) | text (44-1.14M chars) | metadata (820-832 chars) |
|---|---|---|
http://math.stackexchange.com/questions/205795/factorization-of-a-polynomial-using-zeros/205802
|
# Factorization of a Polynomial Using Zeros
I am reading a definition in my Pre-Calculus book but I am a little bit confused; the definition states:
Suppose $p$ is a nonzero polynomial with at least one (real) zero. Then
*There exist real numbers $r_1, r_2, \dots, r_m$ and a polynomial $G$ such that $G$ has no (real) zeros and $p(x)=(x-r_1)(x-r_2)\cdots(x-r_m)G(x)$ for every real number $x$;
*each of the numbers $r_1, r_2, \dots, r_m$ is a zero of $p$;
*$p$ has no zeros other than $r_1, r_2, \dots, r_m$.
I understand why $p(x)=(x-r_1)(x-r_2)\cdots(x-r_m)$ makes sense but I am having trouble understanding why there is a polynomial $G(x)$ that has no (real) zeros at the end. Could someone please explain this to me? I am really confused.
-
There are irreducible quadratics in the reals. For example, you can be left over with a factor of $x^2 + 1$. – EuYu Oct 2 '12 at 3:32
## 2 Answers
For example: $$x^4 - 1 = (x-1)(x+1)(x^2+1)$$ The real zeroes of $x^4-1$ are 1 and -1. What's left is $x^2+1$, which has no real zeroes.
If $G$ still had another real zero at a point $x=a$, then you could pull out another factor $(x-a)$. So after you factor out all the real zeroes of $p$, what's left cannot have any real zeroes.
-
In general, given a polynomial with real coefficients, you will have a set of real roots and a set of complex conjugate pair roots. For example, you have the polynomial $$x^2 + 1 = (x-i)(x+i)$$ which factors with the conjugate root pair $\pm i$ but the polynomial is irreducible over the real numbers. To see this more clearly, suppose you factor the polynomial over the complex numbers. You will end up with something akin to $$p(x) = [(x-r_1)\cdots(x-r_m)]\cdot[(x-z_1)(x-\overline{z_1})\cdots(x-z_k)(x-\overline{z_k})]$$ The $r_i$ are your real roots and the $z_i, \overline{z_i}$ are your complex conjugate root pairs (that complex roots always come in such pairs for real coefficient polynomials is a consequence of the conjugate root theorem). Each of the pairs $$(x-z_i)(x-\overline{z_i}) = x^2 - 2\,\mathrm{Re}(z_i)\,x + |z_i|^2$$ will then multiply to produce an irreducible real quadratic. The collection of these irreducible quadratics will then form your remaining polynomial $G$.
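To see the decomposition numerically, here is a small sketch (assuming numpy is available) that recovers the real roots and the irreducible quadratic factor of the $x^4-1$ example from the first answer:

```python
import numpy as np

# p(x) = x^4 - 1, the example from the first answer
roots = np.roots([1, 0, 0, 0, -1])

# Split the roots into real ones and one representative per conjugate pair
real_roots = sorted(r.real for r in roots if abs(r.imag) < 1e-9)
upper_half = [r for r in roots if r.imag > 1e-9]

print("real roots:", real_roots)  # [-1.0, 1.0]
for z in upper_half:
    # (x - z)(x - conj(z)) = x^2 - 2 Re(z) x + |z|^2, irreducible over R
    print("factor: x^2 + (%.3f)x + (%.3f)" % (-2 * z.real, abs(z) ** 2))
```

The printed quadratic is $x^2 + 0x + 1 = x^2 + 1$, matching the factorization above.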
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9127422571182251, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/36422/why-do-objects-with-different-masses-fall-at-the-same-rate
|
# Why do objects with different masses fall at the same rate? [duplicate]
Possible Duplicate:
Confused about the role of mass
Today we were in our Literature class talking about the Renaissance and the Enlightenment, and our teacher mentioned that scientific experiments were being conducted at the time. As an example she gave the experiment in which objects with different masses were dropped from a tower to see which would land first. She then said that she herself didn't know the outcome, and since I'm known in my class as the #1, they asked me, and I said that they land at the same time (to my shock many classmates even disagreed with me about this fact). The teacher asked me to explain why. We haven't covered inertia in our physics class, so I was forced to use my own self-taught knowledge of physics: I said gravity does pull harder on heavier objects, but heavier objects have more resistance to motion, in other words, they have more inertia. The entire class except for the teacher disagreed with me, even though they usually don't. One fairly annoying kid asked me to explain why a feather falls way slower than a bowling ball, and I explained why; I also made the claim that in a vacuum they would fall at the same rate. Was I right, or was my class? Of course I know my explanation is lacking, but it has truth to it, right?
-
– Qmechanic♦ Sep 14 '12 at 17:39
2
You were right, they were wrong... the novelty of this type of situation will wear quickly; stick with it anyway. – AdamRedwine Sep 14 '12 at 18:49
## marked as duplicate by Qmechanic♦, David Zaslavsky♦ Sep 14 '12 at 19:29
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
## 2 Answers
Your teacher was referring to an experiment attributed to Galileo, which most people agree is apocryphal; Galileo actually arrived at the result by performing a thought experiment. Your answer to the feather vs. the bowling ball question is also basically correct.
Two other things to be said here:
In order to answer a question on physics or any other subject, there has to be a minimum of shared knowledge and terminology between the person asking the question and the person answering; otherwise it boils down to a useless back and forth. I suggest watching Feynman's famous answer to see a good example.
The second point is the question of why the extra pull of gravity is exactly cancelled by the extra "resistance" of the object, as you put it. This leads to the question of why the $m$ in $F=GMm/r^2$ is the same as the one in $F=ma$. This is known as the Equivalence Principle.
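To spell out the cancellation in symbols (a one-line sketch, writing $m_i$ for inertial mass and $m_g$ for gravitational mass): $$m_i a = \frac{GMm_g}{r^2} \quad\Longrightarrow\quad a = \frac{GM}{r^2}\cdot\frac{m_g}{m_i},$$ so the acceleration is independent of the falling body precisely because, experimentally, $m_g/m_i = 1$.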
-
In very simple terms, the force pulling the "heavy" object down is greater BUT it also takes more force to accelerate a heavy object. These two effects cancel out.
In complex terms, why this is true - i.e. the reason why gravitational mass and inertial mass are the same - is still a puzzle to physics.
-
Well, GR explains it quite nicely, so I wouldn't consider it to be a puzzle... – David Zaslavsky♦ Sep 14 '12 at 19:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9800835251808167, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/tagged/acceleration?sort=votes&pagesize=15
|
# Tagged Questions
The rate of change of a body's velocity with respect to time.
9answers
7k views
### Don't heavier objects actually fall faster because they exert their own gravity?
The common understanding is that, setting air resistance aside, all objects dropped to Earth fall at the same rate. This is often demonstrated through the thought experiment of cutting a large object ...
4answers
376 views
### Is there an easy way to show that $x^2-t^2=1/g^2$ for a (relativistic) body undergoing acceleration g?
A professor asked me about the (c=1) equation: $$x^2 - t^2 = 1/g^2$$ which I used in a paper. Or with $c$: $$x^2 - (ct)^2 = c^4/g^2.$$ I told him that it was the exact equation of motion for a ...
7answers
2k views
### Would it help if you jump inside a free falling elevator?
Imagine you're trapped inside a free falling elevator. Would you decrease your impact impulse by jumping during the fall? When?
4answers
967 views
### Acceleration of two falling objects with identical form and air drag but different masses
I have a theoretical question that has been bugging me and my peers for weeks now - and we have yet to settle on a concrete answer. Imagine two balloons, one is filled with air, one with concrete. ...
1answer
596 views
### How do I calculate the (apparent) gravitational pull with General Relativity?
Assume a static metric with (known) components $g_{\mu\nu}$. I'd like to know what is the gravitational pull $g$ of a test particle placed on an arbitrary point $X$. The gravitational pull being ...
1answer
631 views
### What came first, Rice Crispy or “Snap,” “Crackle,” and “Pop”? [closed]
The fourth, fifth, and sixth derivatives of position are called "Snap" "Crackle" and "Pop". What came first, the rice crispy characters, or the physics units?
3answers
429 views
### How do we explain accelerated motion in Newtonian physics and in modern physics?
Maybe my question will seem stupid, but I am not a physicist so I have some problems understanding a classic Newtonian experiment: in the bucket experiment, why does he have to introduce the absolute ...
3answers
363 views
### Whether $m$ in $E=mc^{2}$ and $F=ma$ are both relativistic mass?
I know that $m$ in $E=mc^{2}$ is the relativistic mass, but can $m$ in $F=ma$ also be relativistic? If the answer is yes, then can you tell me whether this equation is valid: $E=\frac{F}{a}c^{2}$? ...
4answers
311 views
### Are smaller soap bubbles more accelerated by wind?
If you blow a bunch of soap bubbles outside, and a gust of wind hits them, will the bigger ones be more or less accelerated by the wind than the smaller ones? Intuitively, and maybe from remembered ...
4answers
2k views
### Acceleration in special relativity
I am currently studying the motion of relativistic charged particles in electromagnetic fields. More exactly, we first derived the equation of motion in the 4-vector formalism. I was a bit confused ...
1answer
248 views
### Is acceleration an average?
Background I'm new to physics and math. I stopped studying both of them in high-school, and I wish I hadn't. I'm pursuing study in both topics for personal interest. Today, I'm learning about ...
2answers
803 views
### Relativistic centripetal force
The thought randomly occurred to me that a circular particle accelerator would have to exert a lot of force in order to maintain the curvature of the trajectory. Many accelerators move particles at ...
2answers
786 views
### Should acceleration be included in state vector of a Kalman filter?
I'm developing (actually adopting existing solution) a Kalman filter to model motion of a vehicle (UAV or automobile). The state vector will include position, velocity, and, possibly, acceleration. ...
8answers
475 views
### Can you completely explain acceleration to me?
I understand what acceleration is, and I know the formula, and I understand it's a vector. I just don't understand how the equation works exactly. I'm kind of picky, I know, but bear with me. ...
4answers
460 views
### How to brake 'beautifully'?
Sometimes when I'm driving my car, I play a "game" against myself in which I try to minimize the deceleration felt by passengers (including myself) while still braking over a reasonably short distance. I ...
8answers
695 views
### Reactionless Drives
According to the third law of motion, you can't have a mass move in a particular direction unless there is a proportional mass/acceleration in the opposite direction. No-one has been ...
4answers
458 views
### Why do we weigh less when falling?
I don't want to go to Science World to find out, because it would be a long round-trip. I understand that acceleration/deceleration would affect the weight, and I can also imagine that someone at ...
4answers
114 views
### Integrating radial free fall in Newtonian gravity
I thought this would be a simple question, but I'm having trouble figuring it out. Not a homework assignment btw. I am a physics student and am just genuinely interested in physics problems involving ...
2answers
284 views
### relativistic acceleration equation
A starship is going to accelerate from 0 to some final four-velocity, but it cannot accelerate faster than $g_M$, otherwise it will crush the astronauts. What is the appropriate equation to constrain ...
1answer
113 views
### Slinky base does not immediately fall due to gravity
Why does the base of this slinky not fall immediately due to gravity? My guess is that the tension in the spring is a force greater than mass times gravity, but even then it is dumbfounding.
1answer
99 views
### Measuring acceleration of a bus using water between two sheets of glass
I was riding a bus one day and noticed that the double windows had some water between them. As the bus accelerated, the water collected to the sides, first forming a trapezoid and then a right ...
2answers
123 views
### In mechanics, is shock really better expressed as jerk instead of acceleration?
Some expensive electronics or mechanical devices are designed to be shock-resistant. However, the manufacturers often market the level of shock-resistance in units of g-force (I know g-force is really ...
5answers
4k views
### Distance when acceleration not constant
I have a background in calculus but don't really know anything about physics. Forgive me if this is a really basic question. The equation for distance of an accelerating object with constant ...
3answers
419 views
### Infinite acceleration?
Why is acceleration regulated by mass? In a frictionless environment, why doesn't an object move at infinite acceleration if force is applied on it? Force causes movement, so unless there is an ...
3answers
759 views
### Is acceleration relative?
A while back in my Dynamics & Relativity lectures my lecturer mentioned that an object need not be accelerating relative to anything - he said it makes sense for an object to just be accelerating. ...
4answers
219 views
### Gravity from a singularity as distance approaches zero
If you had a singularity (that had mass but took up no space), what would happen to the acceleration of an object as it approached this singularity? I would assume that it would be infinite, since as ...
4answers
699 views
### Is acceleration an absolute quantity?
I would like to know if acceleration is an absolute quantity, and if so why?
1answer
81 views
### How did Newton find out force has something to do with acceleration?
It's about Newton's second law of motion, $$F=ma.$$ It says the acceleration of an object is directly proportional to the net force and inversely proportional to the object's mass. Yes I can ...
5answers
553 views
### Confused about the role of mass
I'm far from being a physics expert and figured this would be a good place to ask a beginner question that has been confusing me for some time. According to Galileo, two bodies of different masses, ...
5answers
372 views
### If a space ship accelerated constantly, would its astronauts constantly feel the forward movement?
I know that if a space ship suddenly traveled very fast, its astronauts would fly against the back wall, potentially getting hurt. If the space ship suddenly stopped, they would also fly against ...
2answers
601 views
### How can an object's instantaneous speed be zero and its instantaneous acceleration be nonzero?
I'm studying for my upcoming physics course and ran across this concept - I'd love an explanation.
2answers
502 views
### Does a moving escalator make it easier to walk up the steps?
I was discussing with my colleagues why it feels easier to walk up an escalator when it is moving. My natural assumption was that the movement of the escalator imparts some extra acceleration on the ...
3answers
3k views
### Ion Drive Propulsion Top Speed
I would like to know if there is some formula / graph which would provide / show the efficiency of a certain type of propeller in space. Specifically, I'm interested in the acceleration attainable at ...
2answers
66 views
### Sign of acceleration
I'm developing an application using accelerometer sensor. I'm not good at physics so forgive me if the question is trivial. If I have 3 values of acceleration: $x$, $y$, $z$, I find acceleration ...
1answer
600 views
### Is acceleration due to gravity constant?
I was taught in school that acceleration due to gravity is constant. But recently, when I checked Physics textbook, I noted that $$F = \dfrac{G \cdot m_1 \cdot m_2}{r^2}$$ So, as the body falls ...
2answers
485 views
### Classical car collision
I'm having a very confusing discussion with a friend of mine. Two cars ($\mathrm{car}_a$ and $\mathrm{car}_b$) of the same mass $m$ are on a collision course. Both cars travel at $50\,\frac{\mathrm{km}}{\mathrm{h}}$ towards each other. They ...
6answers
157 views
### Is acceleration $a = s/t^2$, or $a = 2s/t^2$, or something third?
I'm having trouble understanding some of the stuff regarding movement in my introductory physics class (I never thought I'd say that...) Acceleration is defined as $a = \frac{s}{t^2}.$ Distance can ...
2answers
220 views
### A simple thought experiment about traversable wormholes
Let's say I have a tube, of large radius (about 5 - 7 meters in diameter), with traversable wormholes at the ends. The wormholes are arranged as such that if something falls inside one hole from ...
1answer
164 views
### What does it mean to find acceleration in terms of g?
I'm having trouble understanding what a problem I have is seeking. To simplify the problem: A particle reaches a speed of 1.6 m/s in a 5.0 micrometer launch. The speed is reduced to zero in 1.0 ...
1answer
140 views
### A practical deceleration question
My friend is a U.S. Army paratrooper. Today, through an unfortunate series of events, he was jerked out of a C-17 traveling at 160 knots by his reserve parachute. First-hand accounts describe it as he ...
1answer
103 views
### Why is cosmological acceleration expressed in terms of an energy density?
In the articles that I have (tried to) read, acceleration ends up being expressed as a dimensionless constant (omega-lambda) or else occasionally in terms of a "dark" energy density. Presumably one ...
2answers
960 views
### Formulas for ball rolling in a bowl?
I'm developing a program where I have a ball/sphere rolling in a bowl from the side at the top down to the center at the bottom, and I'm trying to get the formulas for: the rotation angle and the position of ...
2answers
158 views
### Nonuniform acceleration due to rubber rope
What I want: I have a rubber rope which is $5m$ in length when not stressed and is able to stretch about $100\%$ (to $10m$ long). I want to accelerate a constant mass horizontally, which has ...
5answers
434 views
### Why doesn't an electron accelerate in a circuit?
Why don't electrons accelerate when a voltage is applied between two points in a circuit? All the textbooks I've referred to conveyed the meaning that when an electron traveled from negative potential ...
2answers
826 views
### Gravitational pull vs. acceleration due to gravity
It might seem obvious, but I can't imagine how gravitational pull is different from acceleration due to gravity.
4answers
868 views
### Physics behind Wheel Slipping
Let's say that I'm in a car and I apply full acceleration suddenly. Now, the wheels would slip and hence the car doesn't displace much. But if I start with some constant acceleration, slipping doesn't ...
3answers
772 views
### What is the velocity area method for estimating the flow of water?
Can anyone explain to me what the Velocity Area method for measuring river or water flow is? My guess is that the product of the cross sectional area and the velocity of water flowing in a pipe is ...
2answers
582 views
### How much effect does the mass of a bicycle tire have on acceleration?
There are claims often made that, eg, "An ounce of weight at the rims is like adding 7 ounces of frame weight." This is "common knowledge", but a few of us are skeptical, and our crude attempts at ...
3answers
2k views
### Convert acceleration as a function of position to acceleration as a function of time?
Suppose I have acceleration defined as a function of position, $a(x)$. How do I convert it into a function of time, $a(t)$? Please give an example for the case $a(x) = x/s^2$.
2answers
145 views
### Fictitious forces confusion
I have a hard time understanding the subject of fictitious forces. Let's discuss a few examples: 1) I'm sitting inside a vehicle which is accelerating in a straight line. I feel like someone is ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9545449614524841, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/tagged/sun?sort=faq&pagesize=15
|
# Tagged Questions
The Sun is an almost perfectly symmetric yellow dwarf star [spectral class G2V] which is at the center of our Solar System.
2answers
202 views
### Why don't we see solar and lunar eclipses often?
Since we see a new moon at least once a month, when the Moon gets in between the Sun and the Earth, and as far as I know if this happens during the day you'll get to see a solar ...
2answers
316 views
### Nuclear decay rate affected by sun and quantum randomness
If the nuclear decay rate were affected by the sun, then emission probabilities would be subject to the sun's state and its influence, so quantum randomness would depend on it. Would it still be truly random? One ...
4answers
2k views
### Why is a new moon not the same as a solar eclipse?
Forgive the elementary nature of this question: Because a new moon occurs when the moon is positioned between the earth and sun, doesn't this also mean that somewhere on the Earth, a solar eclipse ...
2answers
229 views
### Is dark matter really present around the sun?
Recently I read an article saying that there is dark matter around the sun, but if that is so, then why can we see it clearly? If it is called matter, then it should show some hindrance to the radiation we receive ...
1answer
276 views
### What is actually meant by 'sun set' and 'sun rise' times, when taking into account the mirage due to light bending in the atmosphere
I’ve heard from the likes of Brian Cox that what we see of the sun during a sunset and sun rise is actually the mirage of the sun. The Sun has actually set/risen and we see it due to the way light is ...
2answers
2k views
### Why is the sky not purple?
I realise the question of why this sky is blue is considered reasonably often here, one way or another. You can take that knowledge as given. What I'm wondering is, given that the spectrum of ...
1answer
130 views
### The transit of Venus and solar neutrino rates
The following question was posed at the end of Maury Goodman's June 2012 long-baseline neutrino newsletter. During the Venus transit of the sun, were more solar neutrinos absorbed in Venus, or ...
0answers
52 views
### Could Voyager 1 have entered a solar radiation belt?
We currently believe that the Sun has no radiation belts because its magnetic field, which flips every 11 years, is not stable enough to sustain a solar radiation belt. But observations from ...
1answer
43 views
### Is there an Algorithm to find the time when the sun is X degrees above the horizon for a given latitude B at date C
Is there an accurate algorithm / method to determine the precise time of day/night when the sun is X degrees above (or below) the horizon for a given latitude B at date C? Is this the same question ...
3answers
464 views
### Energy of the electron-muon reaction
Let's see the reaction: $e^- \mu^- \to e^- \pi^- \nu_\mu \;\;\;\;\;\;\;\;\;\;\; {(1)}$ I suppose that this reaction occurs as follows: $e^- \mu^- \to e^- \mu^- \pi^+ \pi^- \to e^- \pi^- \nu_\mu$ Is ...
2answers
234 views
### Is the length of the day increasing?
In Frontiers of Astronomy, Fred Hoyle advanced an idea from E.E.R.Holmberg that although the Earth's day was originally much shorter than it is now, and has lengthened owing to tidal friction, that ...
3answers
492 views
### Is a water world possible, and for how long could it be stable?
I have several questions regarding this topic. First, could a water world be stable for thousands of years with most of its surface remaining covered in water. What would it take for this to be ...
3answers
2k views
### How fast will the sun become a red giant?
I've read many accounts of our sun's distant fate, but what I've never heard is on what time scale these events occur. For instance, when the sun runs out of hydrogen, I presume it doesn't just WHAM! ...
1answer
699 views
### What is the relationship between mass, speed and distance of a planet orbiting the sun?
After reading this fascinating story about a new exoplanet, I was wondering about how mass, speed and distance determine a circular orbit of a planet around a star. Given the mass of the sun and ...
1answer
689 views
### Why don't we see solar and lunar eclipses often? [duplicate]
Since we see a new moon at least once a month, when the Moon gets in between the Sun and the Earth, and as far as I know if this happens during the day you'll get to see a solar ...
1answer
490 views
### How to determine day/night based on latitude, longitude and a date/time?
Is there a simple method of determining, given a UTC date/time, whether it is day or night at a given lat/long coordinate? I am currently using a formula based on a Sunrise/Sunset Algorithm from the ...
3answers
240 views
### Sunspots formula
I used the package 'EUREQA', version Formulize, to analyse the monthly smoothed sunspot time series from 1750 till 2010. It gives me a simple formula, with 8 coefficients, that matches the data with a ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9422178268432617, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/47441/a-sine-product-with-almost-integer-values
|
# A sine product with (almost) integer values
Let $n, k$ be integers, $n>1$, let $k \perp n$ denote that $k, n$ are coprime, and let $S_n = \{1 \le k \le \lfloor n / 2 \rfloor : k \perp n \}.$ Then $$n \left( \prod_{k \in S_{n}} \sin \left( k \frac {\pi}{n} \right) \right)^{-2} \in \mathbb{Z}.$$
I think this is surprising but I have no proof.
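For what it's worth, a quick numerical sanity check (a Python sketch; `math.prod` requires Python 3.8+):

```python
from math import gcd, pi, prod, sin

# Check that n / (prod of sin(k*pi/n) over S_n)^2 is an integer (numerically)
for n in range(2, 31):
    S = [k for k in range(1, n // 2 + 1) if gcd(k, n) == 1]
    value = n / prod(sin(k * pi / n) for k in S) ** 2
    print(n, round(value), abs(value - round(value)) < 1e-6)
```

Every value printed is an integer up to floating-point error, consistent with the claim.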
-
## 1 Answer
There are a number of results for products that are closely connected to your product. There are some inessential differences (inverse is not taken), and generally products are over all $k$ from $1$ to $n-1$ relatively prime to $n$, but that would be taken care of by your squaring. Here is a link to a fully available paper by Steven Galovich.
-
Thank you very much for this nice and highly relevant paper. Yes, I think the above formula can be justified by Galovich's theorems. – Jonas Alomo Jun 24 '11 at 21:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.968766450881958, "perplexity_flag": "head"}
|
http://mathoverflow.net/revisions/32743/list
|
Your requirements aren't rigorously stated, so it's hard to say what you can prove exists or doesn't exist. In the strictest sense, a pseudo-random number generator cannot possibly be "roughly uniformly distributed". Every PRNG is an expansion of entropy from its settings to the set of possible sequences, and it is easy to see for reasons similar to your comments about memory requirements that the set of possible sequences has vastly more entropy than the settings. What a PRNG really does is passes certain computationally feasible tests of randomness and not necessarily other tests.
There are conjectures in computer science that "one-way functions exist". Some of these conjectures would imply that there are PRNGs that look random for any polynomial-time test, and for some conjectures, permutations that look random for any polynomial-time test. However, these conjectures are harder than the P vs NP problem, so no one is about to prove them. In any case, if you just want a PRNG for your own practical use, it's overkill to look for one that has been analyzed cryptographically.
It is known that modular exponentiation is a pretty good PRNG. If you want something that looks like a permutation, let $p$ be a prime number, and let $a$ be a carefully chosen residue mod $p$. (Carefully chosen means that $a$ should be a primitive residue far away from $0$.) Then the function $$f_a(k) = a^k \bmod p$$ is already statistically okay. This is a permutation of the numbers $1 \le k \le p-1$.
Now, the most common way to compute $f_a$ is to store $f_a(k)$ and then multiply by $a$ to get the next power. (As Richard Borcherds mentions, the iteration $x \mapsto ax+b \bmod n$ is a similar idea and a major standard, including in Knuth's book and in the Unix RNG "drand48".) However, these days that level of efficiency isn't so important, and it is interesting that you can compute $f_a$ directly by repeated squaring. So you can improve the strength of $f_a$ by making a composition such as $f_b(f_a(k)+c)$, where the addition is taken mod $p-1$. Or you could insert a more creative transformation. For instance, if $p$ is a Mersenne prime, then permuting the bits of $k$ is a simple transformation that can be inserted between applications of $f_a$.
If you want a permutation of some $n$ that is not of the form $p-1$, then you can find some prime $p > n$ that is not much larger and use the above same tricks. You can just skip values that are out of range.
Decades ago, I wanted a pseudo-random permutation for a scrambled screen fade in a computer game. I just used consecutive values of $f_a$ for some convenient modulus (which doesn't have to be prime; there are other variations) and it looked fine.
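A minimal Python sketch of the construction above (the constants are purely illustrative: $p=101$ is prime and $2$ is a primitive root mod $101$; for serious use pick a large prime and choose $a$ carefully, as described):

```python
def f(k, a=2, p=101):
    # f_a(k) = a^k mod p; Python's three-argument pow uses repeated squaring
    return pow(a, k, p)

def permuted_range(n, a=2, p=101):
    """Pseudo-random permutation of 1..n (n < p), skipping out-of-range values."""
    assert n < p
    for k in range(1, p):
        v = f(k, a, p)
        if v <= n:
            yield v

print(sorted(permuted_range(50)) == list(range(1, 51)))  # True: a permutation
```

Composing with a second exponentiation, as in $f_b(f_a(k)+c)$, strengthens the statistics at the cost of one more modular exponentiation per value.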
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 39, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.951867401599884, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/51682/reasons-for-the-importance-of-planarity-and-colorability/51700
|
## Reasons for the importance of planarity and colorability?
Could it have been foreseen that, for example, planarity and colorability would turn out to be such important concepts in graph theory (there's almost no textbook on graphs without two chapters devoted to these concepts), or did time have to come and show it? The latter might imply that it's just a historical and evolutionary accident, because there are loads of conceivable x-arities, y-abilities and other graph properties. (Maybe the importance of planarity and colorability just has to do with the contingent fact that we live on a (locally) two-dimensional plane and our need of maps?)
But maybe there are more objective reasons internal to mathematics that are formulable?
Related questions: Why-are-planar-graphs-so-exceptional; generalizations-of-planar-graphs; why-is-edge-coloring-less-interesting-than-vertex-coloring
-
2
*Post-delivered motivation*: I wonder why questions for "importance" - of concepts and theorems - are in general not taken as seriously as questions for hard facts. Wouldn't this - taking them equally important - be a real step further in the venture of mathematics? (I admit that questions for importance may be trivial - because importance may be obvious in special cases - or mistaken, because there is no importance at all of trivial concepts.) – Hans Stricker Jan 10 2011 at 20:54
5
I think half of your question is asked (and answered) here: mathoverflow.net/questions/7114/… – Tony Huynh Jan 10 2011 at 21:28
2
As for your question why "importance" questions enthuse people less than actual mathematics: maybe importance is just not the right kind of measure in pure maths. What does importance mean, anyway? If I talked to the man on the street, I would have a hard time convincing him that anything I do is important. I don't even believe it myself, when I view it in the context of global challenges that humanity is facing. If you are simply asking for a [big-list] of examples of applications of these concepts, then maybe you should say so. – Alex Bartel Jan 11 2011 at 8:57
2
I guess what I am getting at is: can you give me an example of what a good answer to "are there objective reasons internal to mathematics for why concept XYZ is important" would look like? I am having a hard time imagining such an answer that wouldn't ultimately confuse mathematics and physics. – Alex Bartel Jan 11 2011 at 8:59
3
That seems like a prototypical "time had to come and show it and it's just a historical and evolutionary incident" type of explanation, since it is ultimately a consequence of the order in which things were discovered, so I am afraid I am still confused by what the question is really asking. – Alex Bartel Jan 11 2011 at 9:30
show 6 more comments
## 6 Answers
A few reasons for the importance of planarity having little to do with the need for maps:
• A matroid is both graphic and co-graphic if and only if it is the graphic matroid of a planar graph
• Planar graphs are the graphs with Colin de Verdière invariant ≤ 3. As such they form a sequence with the trees, outerplanar graphs, planar graphs, and linklessly embeddable graphs.
• The graphs of three-dimensional convex polyhedra are exactly the 3-connected planar graphs (Steinitz's theorem).
• A minor-closed graph family has bounded treewidth if and only if it does not include all the planar graphs.
-
David, did you mean "a planar graph as a minor" rather than "all planar graphs"? – Gil Kalai Jan 12 2011 at 19:48
@Gil: I think David does indeed mean all planar graphs. The point is that the family $\mathcal{F}$ is minor-closed. Obviously if $\mathcal{F}$ contains all planar graphs, then it does not have bounded tree-width. Conversely, if $H \notin \mathcal{F}$ for some planar graph $H$, then since $\mathcal{F}$ is minor-closed, no graph in $\mathcal{F}$ can have an $H$-minor. But this implies graphs in $\mathcal{F}$ cannot have arbitrarily large grid-minors (since large enough grids contain $H$-minors). By the Grid Theorem, $\mathcal{F}$ does have bounded tree-width. – Tony Huynh Jan 12 2011 at 21:09
oops sorry I had a mistake in exchanging the "does not" and the quntifier "all". – Gil Kalai Jan 13 2011 at 7:24
I think that planarity figures more heavily in typical undergraduate courses than its importance at research level would warrant. I'm not saying that it's not important at research level, but I do think that it is noticeably less important than graph colouring. In particular, there are whole areas of graph theory where planarity doesn't come up, whereas colouring is fairly ubiquitous.
Why the emphasis at the undergraduate level? This is easily explained: the 4-colour theorem is a very famous result, and the 5-colour theorem has a nice proof that is suitable for an undergraduate course. There is nothing wrong with this at all: not everything that goes into an undergraduate course is there as direct training for research.
-
1
I guess even if one viewed the undergraduate syllabus as exclusively direct training for research, there would be some value in teaching irrelevant but beautiful mathematics: it would make people want to go into research in the first place, which in the early stages seems at least as important to me as equipping them with the necessary knowledge and skills. – Alex Bartel Jan 11 2011 at 8:50
1
It's also as good a time as any to introduce Euler's formula. A nice bridge to algebraic topology via triangulations of surfaces. – Qiaochu Yuan Jan 11 2011 at 19:38
3
I don't think that in resaerch level mathematics planarity is less important than colorability. – Gil Kalai Jan 12 2011 at 19:39
1
I thought that statement might be disputable and don't want to go to the wall to defend it. A much much weaker statement is true though, which is that I personally come across colouring a lot more often than planarity. (I realize that this proves nothing.) – gowers Jan 12 2011 at 21:46
I certainly agree that "coloring" is sort of a very general concept that one comes accross more often and can be used to formulate various things. (Especially, if you do not restrict yourself to graph coloring in the restricted sense.) – Gil Kalai Jan 13 2011 at 7:29
The main feature of Kuratowski's theorem on planar graphs at the time was that it connected two seemingly unrelated aspects of graphs, one topological in nature and the other combinatorial. Since then different authors have proved many theorems about planar graphs, and they are now a fairly well-understood family. So I guess the reasons for the importance of planarity have changed a bit with time.
Since many hard problems simplify a lot in the planar case, this makes planar graphs a very good pedagogical tool to introduce when teaching/motivating various results in graph theory. As far as I know, planar graphs were the main reason to support Tutte's flow conjectures, for example. Other problems/conjectures where the planar case makes an interesting toy model are Fleischner's conjecture, circuit decomposition (Hajós), the strong perfect graph conjecture, the strong embedding conjecture, the strong cycle double cover conjecture, etc.
Part of motivation comes from topological graph theory, too. If you are interested in graphs as discretized version of surfaces, the planar case is probably the first you want to look into. Here the subject may jump to fields other than graph theory, though, and you may start to ask about properties under scalings and subdivisions (think of dimer models for example), but then there are more reasons that enter the picture for why the planar case is interesting, such as conformality for instance.
-
I agree with the points of David Eppstein's answer on the importance of planarity. I can add also my answer to a similar problem. We still need a good answer on why colorability is so important. As Tim Gowers said, it is studied in many areas of graph theory and also outside graph theory. Colorability is computationally intractable, yet it is mathematically more tractable compared to other computationally intractable questions like Hamiltonicity.
Let me try to suggest some answers on why graph colorability is important (I think it is a tentative and partial list):
1) It is very easy to define and a very natural concept.
2) It is related to real life questions like scheduling and map coloring.
3) It is related to an important algebraic notion that arose from its study: the chromatic polynomial (a small computational sketch follows this list).
4) It led to many important generalizations, like choosability.
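Regarding point 3, here is a small deletion-contraction sketch of the chromatic polynomial (assuming sympy is available; it runs in exponential time, as expected since computing the chromatic polynomial is #P-hard):

```python
import sympy

k = sympy.symbols('k')

def chromatic_polynomial(vertices, edges):
    """P(G, k) via deletion-contraction: P(G) = P(G - e) - P(G / e)."""
    edges = {frozenset(e) for e in edges}
    if not edges:
        return k ** len(vertices)  # empty graph: k colours per vertex
    e = next(iter(edges))
    u, v = tuple(e)
    rest = edges - {e}
    # Contract e: redirect v's edges to u, dropping any loops that appear
    contracted = set()
    for f in rest:
        g = frozenset(u if x == v else x for x in f)
        if len(g) == 2:
            contracted.add(g)
    return sympy.expand(chromatic_polynomial(vertices, rest)
                        - chromatic_polynomial(vertices - {v}, contracted))

# Triangle: k*(k-1)*(k-2) = k**3 - 3*k**2 + 2*k
print(chromatic_polynomial(frozenset({1, 2, 3}), [{1, 2}, {2, 3}, {1, 3}]))
```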
-
Mesh analysis, a method used to solve circuits, only applies to planar circuits.
-
There is the circle packing theorem: every connected simple planar graph is the tangency graph of a circle packing.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9582175612449646, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/173095/tangent-space-of-schemes-and-manifolds
|
Tangent space of schemes and manifolds
Suppose $X$ is the set of $\mathbb R$-points of a scheme $\mathfrak X$. Let $x\in \mathfrak X$ be a simple point with a smooth neighbourhood $\mathfrak U$. I presume the (scheme-theoretic) tangent space at $x\in\mathfrak U$ coincides with the (differential-geometric) tangent space at $x\in U$, where $U$ denotes the set of $\mathbb R$-points of $\mathfrak U$ with a manifold structure.
Do you know of a reference for this? (If the statement is true, that is, and I haven't failed to assume algebraic closure or something like that.)
-
1
It is not clear to me what you mean by smooth neighborhood in general. What is the smooth structure on $X$ if the scheme is, say, $\text{Spec } \mathbb{R}[x_1, x_2, ... ]$? – Qiaochu Yuan Jul 20 '12 at 4:59
Does it help to assume that $\mathfrak X$ is reduced? I am thinking of the situation when $x\in\mathfrak X(\mathbb R)$ has a neighbourhood of a smooth algebraic variety and I would like to consider this neighbourhood as a manifold... – user1205935 Jul 20 '12 at 5:14
1
The example I gave is reduced. Since this is a local question, you are free to restrict to affine schemes, in which case I think you want to restrict to a finitely-generated reduced ring and a non-singular point (maybe this is what you mean by simple). In that case you have an embedding into $\mathbb{R}^n$ ($n$ the number of generators) so you can talk about the induced smooth structure away from the singular locus, and you should now be able to write down explicitly what the two tangent spaces are and verify that they're the same. – Qiaochu Yuan Jul 20 '12 at 5:20
– user1205935 Jul 20 '12 at 5:30
It's not a duality (the correspondence, to the degree that it exists, is covariant). Most people prefer to talk about the complex points in which case you want to look up the analytification functor (en.wikipedia.org/wiki/Algebraic_geometry_and_analytic_geometry). The study of real points is more subtle and, as I understand it, has a very different flavor (en.wikipedia.org/wiki/Real_algebraic_geometry), but in this case you can get away with a relatively straightforward computation. – Qiaochu Yuan Jul 20 '12 at 14:08
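To make the suggested computation concrete in the simplest case (a hypersurface; only a sketch, under the smoothness assumptions discussed above): take $\mathfrak X = \operatorname{Spec} \mathbb R[x_1,\dots,x_n]/(f)$ and a point $x \in X = \{p \in \mathbb R^n : f(p) = 0\}$ with $\nabla f(x) \neq 0$. The Zariski tangent space $(\mathfrak m_x/\mathfrak m_x^2)^\vee$ is cut out by the linear part of $f$ at $x$, giving $T_x \mathfrak X \cong \{v \in \mathbb R^n : \nabla f(x) \cdot v = 0\}$, while the differential-geometric tangent space of the smooth level set $f^{-1}(0)$ at $x$ is $\ker Df(x)$, which is the same subspace.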
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9482905268669128, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/12358/how-to-determine-the-mass-of-a-quark
|
# How to determine the mass of a quark?
As far as I know quarks are never found in isolation, so how can we determine their rest mass?
-
1
@David: I think a good answer to the question would cover both of our interpretations. I would attempt to do so but I am sure that, before long, someone else will answer it more clearly than I could. – qftme Jul 17 '11 at 15:22
1
– Marek Jul 17 '11 at 20:53
1
Sorry, I'm a newbie in the physics world. But since you said that it's always with other stuff, can't we just weigh the whole bundle of 'em (quark with other stuff), and then, if we know the weight of the other stuff, do simple math? – Saeed Neamati Jul 18 '11 at 12:25
3
@Saeed Neamati: One would think that the mass of the proton would just be the mass of the three quarks that make it up. But the quarks are moving at high speeds so they have a lot of kinetic energy, and there is energy in the strong force that binds together the quarks. By $E=mc^2$ all this energy contributes to the mass, and in fact it makes up 98% of the mass. So from an experimental point of view, measuring the mass of the proton doesn't tell you anything directly about the mass of the quarks: they could even be massless and still the protons would have their mass. – BebopButUnsteady Jul 18 '11 at 14:16
3
Poor wikipedia articles which might nevertheless point in the right directions: 1. en.wikipedia.org/wiki/Current_quark_mass 2. en.wikipedia.org/wiki/Constituent_quark_mass @pipsi: reason this question is not being answered more fully is that a complete answer would need to explain quantum field theory, renormalisation and confinement (at least, and even more from the experimental side). Perhaps there are people who are working on such an enormously long answer --- perhaps people should offer bounties to encourage... – genneth Jul 18 '11 at 16:57
show 9 more comments
## 1 Answer
For the light quarks, one can use chiral perturbation theory to relate the mass of the light hadrons to the mass of the light quarks. These two links give details and caveats of the procedures, as well as the most precise determinations:
http://pdg.lbl.gov/2011/reviews/rpp2011-rev-quark-masses.pdf
http://pdg.lbl.gov/2011/listings/rpp2011-list-light-quarks.pdf
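To give a flavour of the kind of relation involved: at leading order in chiral perturbation theory, the Gell-Mann-Oakes-Renner relation (quoted schematically here; sign and normalization conventions vary, see the PDG reviews above) ties the pion mass to the light quark masses, $$m_\pi^2 f_\pi^2 = -(m_u + m_d)\langle \bar q q \rangle + O(m_q^2),$$ so measured values of $m_\pi$ and $f_\pi$, together with a determination of the quark condensate $\langle \bar q q \rangle$, constrain $m_u + m_d$.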
-
1
And the heavier quarks' masses are established by resonances, I presume? – BebopButUnsteady Jul 18 '11 at 14:16
+1 for the links to the pdg. – qftme Jul 18 '11 at 16:08
@BebopButUnsteady Only for the top you can do that (and it is basically what is done). – Rafael Jul 19 '11 at 12:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9342436194419861, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/65045?sort=newest
|
Example of an amenable finitely generated and presented group with a non-finitely generated subgroup
I'm looking for an example of a finitely presented and finitely generated amenable group that has a subgroup which is not finitely generated.
The question is easy for finitely generated amenable groups; an example is the lamplighter group $C_2\wr \mathbb{Z}$.
An Abelian finitely generated group has no such subgroups. Is there a bigger class of groups with this property?
-
3 Answers
I don't know much about amenable groups, I am afraid, but according to the Wikipedia article, all solvable groups are amenable. So we can take the Baumslag-Solitar group
$B(1,n) = \langle x,y \mid y^{-1}xy = x^n \rangle.$
If we let $N$ be the normal closure of the subgroup generated by $x$, then $N$ is abelian with $G/N$ cyclic, but $N$ is not finitely generated when $n > 1$. Note also that $B(1,n)$ is isomorphic to the subgroup of ${\rm GL}(2, \mathbb{Q})$ generated by
$x = \left(\begin{array}{cc}1&0\\1&1\end{array}\right)$ and $y = \left(\begin{array}{cc}n&0\\0&1\end{array}\right).$
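A quick computational check of these relations (a sympy sketch, with the illustrative choice $n = 2$):

```python
from sympy import Matrix

n = 2
x = Matrix([[1, 0], [1, 1]])
y = Matrix([[n, 0], [0, 1]])

assert y.inv() * x * y == x**n  # the defining relation of B(1, n)

# y^k x y^{-k} has lower-left entry 1/n^k, so the normal closure of <x>
# contains [[1, 0], [m/n^k, 1]] for all m, k: for n = 2, the dyadic rationals.
for k in range(4):
    print((y**k * x * y.inv()**k)[1, 0])  # 1, 1/2, 1/4, 1/8
```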
-
2
The subgroup generated by $x$ is not normal! – Steve D May 15 2011 at 20:08
1
But the normal closure of the subgroup generated by $x$ is normal and infinitely generated. It is isomorphic to the additive group of dyadic rational numbers. – Mark Sapir May 16 2011 at 5:59
By the way, you may enjoy the fact, due to G. Baumslag, that a standard wreath product $W\wr G$ with $W\neq 1$ and $G$ infinite, is never finitely presented; see Gilbert Baumslag. Wreath products of finitely presented groups. Math. Z. 75 , 22-28, 1961. For finite presentability of permutational wreath products, see a paper by Cornulier: http://www.normalesup.org/~cornulier/wrea_fp.pdf
-
There are finitely presented metabelian groups containing the lamplighter groups. One of them was constructed by Baumslag: $\langle a,b,c \mid a^2=1, [b,c]=1, [a^b,a]=1, a^c=a^ba\rangle$.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9337502717971802, "perplexity_flag": "head"}
|
http://www.newton.ac.uk/programmes/SAS/seminars/2012051016001.html
|
# SAS
## Seminar
### Consistency, physics, and coinduction
Setzer, A (Swansea University)
Thursday 10 May 2012, 16:00-16:30
Seminar Room 1, Newton Institute
#### Abstract
In the first part of this talk we discuss the consistency problem of mathematics. Although we have very well verified mathematical proofs of $\Pi_1$-statements such as Fermat's last theorem, we cannot, because of Gödel's second incompleteness theorem, exclude the existence of a counterexample with absolute certainty -- a counterexample would, unless there was a mistake in the proof, prove the inconsistency of the mathematical framework used. This uncertainty has similarities with physics, where we cannot exclude that nature does not follow the laws of physics as determined up to now. All it would imply is that the laws of physics need to be adjusted. Whereas physicists openly admit this as a possibility, in mathematics this fact is not discussed very openly.
We discuss how physics and mathematics are in a similar situation. In mathematics we have no criterion to check with absolute certainty that mathematics is consistent. In physics we cannot conduct an experiment which determines any law of physics with absolute certainty. All we can do is to carry out experiments to test whether nature follows in one particular instance the laws of physics formulated by physicists. In fact we know that the laws of physics are incomplete, and therefore not fully correct, and we could see the laws of physics change over the course of the history of physics. Changes of the laws of physics did not affect most calculations made before, because these had been thoroughly checked by experiments. The changes had to be made only in extreme cases (high speed, small distances). In the same way, we know by reverse mathematics that most mathematical theorems can be proved in relatively weak theories, and therefore would not be affected by a potential inconsistency, which would probably make use of proof-theoretically very strong principles. In both mathematics and physics we can carry out tests for the axiom systems used. In physics these tests are done by experiments as well as theoretical investigations. In mathematics this is done by looking for counterexamples to theorems which have been proved, and by applying the full range of metamathematical investigations, especially proof-theoretic analysis, normalisation proofs and the formation of constructive foundations of mathematics.
As the laws of physics are empirical, so are $\Pi_1$-theorems (phrasing it like this is due to an informal comment by Peter Aczel). As one tries in physics to determine the laws of physics and draw conclusions from them, in logic one tries to determine the laws of the infinite, and derive conclusions from those laws. We can obtain a high degree of certainty, but no absolute certainty, and still can have trust in the theorems derived.
In the second part we investigate informal reasoning about coinductive statements.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.951232373714447, "perplexity_flag": "head"}
|
http://mathhelpforum.com/calculus/101449-extremal.html
|
# Thread:
1. ## extremal
find the extremal for
$\int^{T}_{0} (\dot{x}^2 + 2x\dot{x} + 2x^2) dt$
when $x(0) = 1$ and $T = 2$ i.e. $x(2)$ free.
i get $x = Ae^{\sqrt{2}t} + Be^{-\sqrt{2}t}$
and the first condition $x(0) = 1$ gives $A + B = 1$
then when i apply the transversality condition $T = 2$
i.e. $\left.\frac{\partial f}{\partial \dot{x}}\right|_{t=2} = 0$
it gets really complicated. Can someone please tell me if I have the correct equation for $x$?
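For reference, here is a sketch of the check (assuming the standard free-endpoint transversality condition). The Euler-Lagrange equation for $f = \dot{x}^2 + 2x\dot{x} + 2x^2$ is
$\frac{d}{dt}(2\dot{x} + 2x) = 2\dot{x} + 4x$, i.e. $\ddot{x} = 2x$,
so $x = Ae^{\sqrt{2}t} + Be^{-\sqrt{2}t}$ is indeed correct. At $T = 2$ the condition $\left.\frac{\partial f}{\partial \dot{x}}\right|_{t=2} = 0$ reads $2\dot{x}(2) + 2x(2) = 0$, i.e. $\dot{x}(2) + x(2) = 0$, which works out to
$A(1+\sqrt{2})e^{2\sqrt{2}} + B(1-\sqrt{2})e^{-2\sqrt{2}} = 0,$
and together with $A + B = 1$ this determines $A$ and $B$.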
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7713159322738647, "perplexity_flag": "middle"}
|
http://cstheory.stackexchange.com/questions/10055/algebraic-formulation-for-packing-problem
|
# Algebraic formulation for packing problem
My question is regarding the algebraic formulation for packing problems in graphs.
Taking an example, suppose I am interested in the problem of finding if there is a packing of k edge disjoint triangles in a given input (undirected) graph. I am aware of techniques that approach this problem combinatorially. Is there a way to frame this problem algebraically?
Also, in context of graph theory, are algebraic methods less powerful when compared to combinatorial methods? Or is there any particular intuition that governs which approach is more effective in certain problems?
I apologize if my question sounds too general. The purpose of this question is to try and form an understanding of algebraic vs combinatorial methods in the context of graph theory.
-
I do not know what you mean by algebraic methods, but linear programming is used very, very often (approximation, branch and bound, branch and cut, …) to solve various combinatorial optimization problems, and I would be surprised if it is not used for the particular problem you mentioned. – Tsuyoshi Ito Feb 4 '12 at 0:32
Probably, it'd be helpful if you gave us an example of algebraic methods and combinatorial methods in the context of graph theory. – Yoshio Okamoto Feb 4 '12 at 1:35
Consider the kernelization of vertex cover problem (FPT version). Using a Linear programming approach, we can obtain a linear kernel of size 2k. At the same time, a technique like Crown Decomposition of Graphs can be used to get a kernel of 3k. I think I used the term "algebraic" to refer to the former techniques (LP, ILP) and the term "combinatorial" for techniques like Crown Decompositions. That was my notion of algebraic and combinatorial techniques which might be very loose and not precise. – Nikhil Feb 4 '12 at 2:04
(Continued from previous comment): Apart from the packing problem formulation, I am interested in knowing if there is some intuition governing that why would I want to use one approach over another. Or does it depend mainly on the properties/structure of the concerned problem? – Nikhil Feb 4 '12 at 2:06
## 3 Answers
It is possible to prove the fixed-parameter tractability of various graph-theoretical problems (e.g., finding a path of length k or finding k disjoint triangles) using algebraic techniques by reducing the problem to questions about certain polynomials. See for example the following very readable paper by Ryan Williams:
http://arxiv.org/abs/0807.3026v3
and some other related papers:
http://www.cs.cmu.edu/~jkoutis/papers/MultilinearDetection.pdf
http://doi.ieeecomputersociety.org/10.1109/FOCS.2010.24
http://arxiv.org/abs/0912.2371
About your meta question: as the formulation of graph-theoretical problems is usually "combinatorial," it is not surprising to me that most of the natural approaches are "combinatorial" rather than "algebraic." The "algebraic" approaches work only if the problem has some unexpected connection with algebra, which is probably quite rare for natural "combinatorial" problems. In the above-mentioned papers, such a connection is discovered, allowing us to solve the problem using algebraic techniques. Interestingly, for these problems the algebraic algorithms are faster (have better bounds) than the combinatorial ones.
-
Thanks for the references and for clarifying on my doubt related to the algebraic-combinatorial approaches. This answer was really helpful. – Nikhil Feb 5 '12 at 21:28
I looked at the RW paper & it seems to have nothing specifically on k disjoint triangles. Is there some connection that's not outlined in the paper? Do any of the others talk about k disjoint triangles in particular? (They don't in the abstracts.) – vzn Feb 6 '12 at 17:26
The link for the last paper was wrong, sorry (fixed now). The RW paper deals with finding paths, which was generalized to finding trees and then to finding bounded-treewidth subgraphs in the last paper. The graph formed by the union of k disjoint triangles has treewidth 2, thus in particular finding k disjoint triangles is covered by the last paper. – Daniel Marx Feb 6 '12 at 20:39
There are algebraic formalizations of combinatorial problems: a combinatorial problem can be translated into the question of whether a system of polynomial equations is solvable. For instance, Bayer was able to model 3-colorability using a system of polynomial equations.
Here is an excerpt of the abstract of Hilbert’s Nullstellensatz and an Algorithm for Proving Combinatorial Infeasibility:
"Systems of polynomial equations over an algebraically-closed field K can be used to concisely model many combinatorial problems. In this way, a combinatorial problem is feasible (e.g., a graph is 3-colorable, hamiltonian, etc.) if and only if a related system of polynomial equations has a solution over K."
-
Thanks for the answer. This was helpful. – Nikhil Feb 5 '12 at 21:47
In exact exponential algorithmics, the subset convolution is a particularly useful algebraic technique for solving covering, packing and partitioning problems. It generally works very well when the objects to be packed are 'hard', like dominating or independent sets. For example, the best known $k$-colouring (i.e. partition into independent sets) algorithm uses subset convolution. See e.g. Exact Exponential Algorithms by Fomin and Kratsch or this paper by Björklund et al.
The subset convolution is most often used in the context of exponential algorithms, but there are some useful applications in polynomial side of things. In particular, see this paper for the so-called 'counting in halves' approach to packing problems.
My intuition is that since subset convolution-type methods count the number of solutions instead of finding just one, they usually cannot be used to obtain fixed parameter algorithms. Also, they are rather space-intensive; their space complexity often equals their time complexity.
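To make the operation concrete, here is a minimal sketch (a naive $O(3^n)$ version; the fast algorithm replaces the inner loop with ranked zeta/Möbius transforms, and the function names here are just labels):

```python
def subset_convolution(f, g, n):
    # Naive subset convolution over the subsets of {0, ..., n-1}:
    #   h(S) = sum over all T subseteq S of f(T) * g(S \ T).
    # f and g are lists of length 2**n indexed by bitmask.
    h = [0] * (1 << n)
    for s in range(1 << n):
        t = s
        while True:  # standard enumeration of all submasks t of s
            h[s] += f[t] * g[s ^ t]  # s ^ t is the complement of t inside s
            if t == 0:
                break
            t = (t - 1) & s
    return h

# Example: f = g = indicator of independent sets of a single edge {0, 1};
# then h[0b11] = 2 counts the ordered partitions ({0},{1}) and ({1},{0}).
ind = [1, 1, 1, 0]
print(subset_convolution(ind, ind, 2))  # [1, 2, 2, 2]
```

This is exactly the covering/packing flavour mentioned above: convolving indicator functions counts the ways of splitting a ground set into pieces with a prescribed property.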
-
Though I have not much idea about the "subset convolution" technique but thanks for mentioning it and the paper references. I will give it a look. – Nikhil Feb 7 '12 at 20:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9258183836936951, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/29847?sort=oldest
|
## An obstruction theory for promoting homotopy equivalences that are equivariant maps to equivariant homotopy equivalences?
Say I have a map of $G$-spaces $f : X \to Y$ and I know it is a homotopy-equivalence in the plain sense that there exists a map (maybe not equivariant) $g : Y \to X$ such that the two composites are homotopic to the identity $f\circ g \simeq Id_Y$, $g\circ f \simeq Id_X$.
Is there something of an obstruction theory that tells you when you can promote $f$ to a homotopy-equivalence in the category of $G$-spaces?
Ideally the level of generality I care for is $X$ and $Y$ manifolds and $G$ a compact Lie group. But anything in that ballpark interests me.
-
## 3 Answers
When you say $f$ is a map of $G$-spaces I am guessing you mean a morphism in the category of $G$-spaces, i.e. a continuous map satisfying $f(ax)=af(x)$ for $a\in G$. If so, then there is a good answer. (But "promoting" the map to a strong $G$-equivalence is a funny way to say it, because it suggests a piece of extra structure rather than a property.)
The fixed point spaces of closed subgroups are the key to much of equivariant homotopy theory. In the most important model structure for $G$-spaces, a morphism $X\to Y$ is called a weak equivalence (resp. fibration) iff for every subgroup $H$ the induced map $X^H\to Y^H$ of fixed point spaces is a weak homotopy equivalence (resp. Serre fibration). Cofibrations are generated by attaching orbits of cells. For cofibrant objects (and this includes manifolds with smooth action) weak equivalences are then the same as maps that actually have an inverse up to $G$-homotopy.
-
Dear Tom, is this identical to the projective model structure on the diagram category $\mathcal{CG}^G$, where we consider $G$ as a one-object category? – Harry Gindi Jun 29 2010 at 2:14
No. But it is equivalent to the projective model structure on the category of diagrams indexed by the opposite of the orbit category of $G$, i.e. the category with one object for each subgroup of $G$ with morphisms the $G$-maps $G/H\to G/K$. – Tom Goodwillie Jun 29 2010 at 2:54
Thanks! Good to know. – Harry Gindi Jun 29 2010 at 2:56
The simplest interesting case would be when G is Z/p, p prime. In this case the main issue is that by Smith Theory (applied to the mapping cylinder rel domain, say) you will only know that the induced map of fixed point sets is a Z/pZ equivalence, and definitely not necessarily a homotopy equivalence. If the G-map that is a homotopy equivalence induces a homotopy equivalence of fixed point sets, then I believe one can prove the desired result by bare hands, assuming, say, the spaces are G-CW complexes, or smooth G-manifolds. For more general groups G one would need to assume or arrange that the induced maps on fixed point sets for all subgroups are themselves homotopy equivalences. The idea is the same, but the details seem more daunting.
-
A facetious answer would be: sure, but one of the ingredients is going to be the group of homotopy equivalences between $X^G$ and $Y^G$. Since $G$-spaces are diagrams of fixed spaces, the Dwyer-Kan obstruction theory of diagrams (c 1984) probably describes the space of maps, in principle allowing you to answer this question and decide which maps of the given fixed sets are realized. This doesn't sound remotely practical to me.
If you don't understand the fixed points to begin with, Smith theory or bust. The map on fixed sets is going to be an isomorphism on homology with coefficients (possibly $0$) depending on the group, and exotic fixed sets satisfying that condition are possible. If you're willing to assume everything is simply connected and the group is the circle, it's probably automatically an equivalence. On the opposite extreme is (binary?) $A_5$ acting on high dimensional manifolds. It has PL actions on the disk with the fixed set arbitrary simplicial complexes. I forget what happens in the smooth category; I think empty fixed set is possible, but I forget how. In between are $p$-groups, where you'll keep control of the mod $p$ homology but not the general homology.
-
Yes, I'm in the situation you describe in your 2nd paragraph. My space is a Frechet manifold with an action of $O(n)$ -- the simplest non-trivial case that I care about is the group $O(2)$. But for the fixed point sets there's little technology out there that will help to describe their homotopy-type. – Ryan Budney Jun 29 2010 at 3:00
Smith theory is about finite dimensional spaces, so you are in trouble. If $V$ is an infinite dimensional complex Frechet space, it has a natural $U(1)$-action, and $V-\{0\}\to V$ is an equivariant map that is an equivalence on total spaces, though the fixed sets are empty on the left and a point on the right. Maybe you can use some other finiteness assumptions, like control at infinity or finite codimension of the fixed sets, but this is not an off-the-shelf situation. – Ben Wieland Jun 29 2010 at 3:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 39, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9299836158752441, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/99912/almost-complex-structure-approach-to-deformation-of-compact-complex-manifolds/99926
|
Almost Complex Structure approach to Deformation of Compact Complex Manifolds
I don't know much about the deformation of compact complex manifolds, I've only read chapter 6 of Huybrechts' book Complex Geometry: An Introduction. There are two parts to this chapter. The second goes through the standard approach, that is, considering a family of compact complex manifolds as a proper holomorphic submersion between two connected complex manifolds. My question is about the approach taken in the first section, which I will briefly outline.
One can instead consider a deformation of complex structures on a fixed smooth manifold, as opposed to deformations of complex manifolds – by Ehresmann's result, a deformation over a connected base is nothing but a deformation of complex structure on a fixed smooth manifold. This point of view is difficult to work with because a complex structure is a complicated object, so we instead consider almost complex structures – by the Newlander-Niremberg Theorem, complex structures correspond to integrable almost complex structures.
Fix a smooth even-dimensional manifold $M$. Now Huybrechts considers a continuous family of almost complex structures $I(t)$. He does not say where $t$ comes from, but I have interpreted it to be an open neighbourhood of $0$ in $\mathbb{C}$. Now, let $I(0) = I$. The complexified tangent bundle to $M$ splits with respect to $I$. That is, $TM\otimes_{\mathbb{R}}\mathbb{C} = T^{1,0}M\oplus T^{0,1}M$. But this is true of each almost complex structure $I(t)$. Denote the corresponding decompositions by $TM\otimes_{\mathbb{R}}\mathbb{C} = T^{1,0}M_t\oplus T^{0,1}M_t$ – this is deliberately suggestive notation; we can consider the compact (soon-to-be) complex manifold $(M, I(t))$ as the fibre of a complex family over a point $t$ in the base.
For small $t$, we can encode the given information by a map $\phi(t) : T^{0,1}M \to T^{1,0}M$ where, for $v \in T^{0,1}M$, $v + \phi(t)v \in T^{0,1}M_t$. Huybrechts then says:
Explicitly, one has $\phi(t) = -\text{pr}_{T^{1,0}M_t}\circ j$, where $j : T^{0,1}M \subset TM\otimes_{\mathbb{R}}\mathbb{C}$ and $\text{pr}_{T^{1,0}M_t} : TM\otimes_{\mathbb{R}}\mathbb{C} \to T^{1,0}M_t$ are the natural inclusion respectively projection.
According to this, the codomain of $\phi(t)$ is $T^{1,0}M_t$, not $T^{1,0}M$. Is this a typo or am I missing something? Added later: As Peter Dalakov points out in his answer, it is a typo.
Anyway, Huybrechts continues with this approach. Enforcing the integrability condition $[T^{0,1}M_t, T^{0,1}M_t] \subset T^{0,1}M_t$ ensures that each almost complex structure is induced by a complex structure. Under the assumption that $I$ is integrable, $[T^{0,1}M_t, T^{0,1}M_t] \subset T^{0,1}M_t$ is equivalent to the Maurer-Cartan equation $\bar{\partial}\phi(t) + [\phi(t), \phi(t)] = 0$, where $\bar{\partial}$ is the natural operator on the holomorphic vector bundle $T^{1,0}M$, and $[\bullet, \bullet]$ is an extension of the Lie bracket.
I like this approach because if you expand $\phi(t)$ as a power series $\sum_{i=1}^{\infty}\phi_i t^i$ you can deduce:
1. $\phi_1$ defines the Kodaira-Spencer class of the deformation;
2. all the obstructions to finding the coefficients $\phi_i$ lie in $H^2(M, T^{1,0}M)$.
Does anyone know of some other places where I would be able to learn about this approach, or is there some reason why this approach is not that common?
Just for the record, I have looked at Kodaira's Complex Manifolds and Deformation of Complex Structures, but I haven't been able to find anything resembling the above.
-
Another book that discusses this approach in detail is "Calabi-Yau Manifolds and Related Geometries" by Gross-Huybrechts-Joyce, chapter 2, from page 73. – YangMills Jun 18 at 16:09
1 Answer
This approach to deformations is taken, for instance, in all of the original papers of Kodaira-Spencer and Nirenberg. You can have a look at On the existence of deformations of complex analytic structures, Annals, Vol.68, No.2, 1958
http://www.jstor.org/discover/10.2307/1970256?uid=3737608&uid=2129&uid=2&uid=70&uid=4&sid=47699092130607
but there are many other papers by the same authors.
For a nice and compact exposition, you can look at these class notes of Christian Schnell: http://homepages.math.uic.edu/~cschnell/pdf/notes/kodaira.pdf
Of course, the Maurer-Cartan equation and deformations (of various structures) via dgla's have been used by many other people since the late 1950s: Goldman & Millson, Gerstenhaber, Stasheff, Deligne, Quillen, Kontsevich.
Regarding the formula: that's a typo, indeed. You have two eigen-bundle decompositions, for $I$ and $I_t$:
$$T_{M, \mathbb{C}} = T^{1,0}\oplus T^{0,1}\simeq T^{1,0}_t\oplus T^{0,1}_t$$
and you write $T^{0,1}_{t}=\textrm{graph }\phi$, where $\phi: T^{0,1}_M\to T^{1,0}_M$. So actually
$$\phi = \textrm{pr}^{1,0}\circ \left.\left(\textrm{pr}^{0,1}\right)\right|_{T^{0,1}_t}^{-1}.$$
In local coordinates, $$\phi = \sum_{j,k=1}^{\dim_{\mathbb{C}} M}h_{jk}(t,z)d\overline{z}_j\otimes \frac{\partial}{\partial z_k},$$ and $T^{0,1}_t$ is generated (over the smooth functions) by
$$\frac{\partial}{\partial \overline{z_j}} + \sum_{k=1}^{\dim_{\mathbb{C}}M}h_{jk}\frac{\partial}{\partial z_k}.$$
Regarding the question "where does $t$ come from?", the answer is "From Ehresmann's Theorem": given a proper holomorphic submersion $\pi:\mathcal{X}\to \Delta$, you can choose a holomorphically transverse trivialisation $\mathcal{X}\simeq X\times \Delta$, $X=\pi^{-1}(0)= (M,I)$. In this way you get yourself two (almost) complex structures on $X\times \Delta$, which you can compare.
ADDENDUM I second YangMills' suggestion to have a look at Chapter 2 of Gross-Huybrechts-Joyce. You can also try Chapter 1 of K. Fukaya's book "Deformation Theory, Homological algebra, and Mirror Symmetry", as well as the Appendix to Homotopy invariance of the Kuranishi Space by Goldman and Millson (Illinois J. of Math, vol.34, No.2, 1990). In particular, you'll see how one uses formal Kuranishi theory to avoid dealing with the convergence of the power series for $\phi(t)$. For deformations of compact complex manifolds, the convergence was proved by Kodaira-Nirenberg-Spencer. Fukaya says a little bit about the convergence of this series in general, i.e., for other deformation problems.
-
Thanks. I will definitely take a look at these references. – Michael Albanese Jun 20 at 7:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9099789261817932, "perplexity_flag": "head"}
|
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Reed's_law
|
# Reed's law
Reed's law is the assertion of David P. Reed that the utility of large networks, particularly social networks, can scale exponentially with the size of the network.
The reason for this is that the number of possible sub-groups of network participants is $2^N - N - 1 \,$, where N is the number of participants. This grows much more rapidly than either
• the number of participants, N, or
• the number of possible pair connections, $N (N - 1) / 2\,$ (which follows Metcalfe's law)
so that even if the utility of groups being available to be joined is very small on a per-group basis, eventually the network effect of potential group membership can dominate the overall economics of the system.
## Derivation of the number of possible subgroups
Given a set A which represents a group of people, and whose members are persons, then the number of people in the group is the cardinality of set A.
The set of all subsets of A is the power set of A, denoted as $\mathcal{P} (A)$:
$\mathcal{P}(A) = \{B : B \subseteq A\}$.
It is known in set theory that the cardinality of $\mathcal{P}(A)$ is equal to 2 to the power of the cardinality of A, i.e.
$\mbox{card} \, \mathcal{P}(A) = 2^{\mbox{card} \, A}$.
This is not difficult to see, since we can form each possible subset by simply choosing for each element of A one of two possibilities: whether to include that element, or not.
However, A itself belongs to its own power set $\mathcal{P}(A)$ but if A is considered as a group of people, then A is not a proper "subgroup" of itself:
$\mbox{card} \, \left( \mathcal{P}(A) - \{A\} \right) = 2^N - 1$,
where $N = \mbox{card} \, A$.
Next, any members of $\mathcal{P}(A)$ which are singletons are not considered "groups of people". Since each individual in the group forms a singleton, the number of singletons in A is equal to the cardinality of A:
$\mbox{card} \{C : C \in \mathcal{P}(A) \wedge \mbox{card} \, C = 1 \} = N,$
$\mbox{card} \, \left( \mathcal{P}(A) - \{A\} - \{C : C \in \mathcal{P}(A) \wedge \mbox{card} \, C = 1 \} \right) = 2^N - N - 1.$
But notice that — using Big O notation — the function $2^N - N - 1 \,$ is $O(2^N) \,$ as $N \rightarrow \infty \,$, so that it is exponential.
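As a small illustration (added in plain Python; the function names are just labels), the group-forming term dwarfs the pairwise term even for modest N:

```python
def reed(n):
    # Reed's law: number of nontrivial subgroups of n participants.
    return 2**n - n - 1

def metcalfe(n):
    # Metcalfe's law: number of possible pair connections.
    return n * (n - 1) // 2

for n in (10, 20, 30):
    print(n, metcalfe(n), reed(n))
# 10   45        1013
# 20  190     1048555
# 30  435  1073741793
```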
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 15, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9119923710823059, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/261564/how-can-every-triangle-have-a-circumcircle
|
# How can every triangle have a circumcircle
Let's take for example $\triangle ABC$ with $\angle A = \angle B = 1^\circ$. How can a triangle like this have a circumcircle? My confusion is with triangles like this in general, with very long sides.
-
How many non-collinear points do you need so that a unique circle passes through them? :) – Isomorphism Dec 18 '12 at 17:02
No matter how long you make the sides, I can make a circle big enough to circumscribe your triangle. We have all of infinity to play with, so size matters not (to paraphrase Yoda). – Todd Wilcox Dec 18 '12 at 17:04
Maybe visualise it like this: imagine any triangle with a circumcircle, with one vertex at the top. Now imagine moving the other two points round the circle - getting nearer to the top. That way you can get a triangle with 2 angles of just one degree. Then just enlarge the diagram. – John Wordsworth Dec 18 '12 at 17:06
Oh, I think I visualize it now.. so those points would be relative close together on the circle? – geometry Dec 18 '12 at 17:06
Yes, they would need to be comparatively close together. – John Wordsworth Dec 18 '12 at 17:10
## 4 Answers
Here's a link to an interactive webpage: triangle circumscribed by a circle, where you can manipulate the shape and size of the triangle, any which way you want, and each manipulation yields the unique circle that circumscribes that particular triangle.
The circumcircle always passes through all three vertices of a triangle. Its center is at the point where all the perpendicular bisectors of the triangle's sides meet (intersect). This center is called the circumcenter.
Note that the center of the circle can be inside or outside of the triangle.
The radius of the circumcircle is also called the triangle's circumradius.
Below you'll see a screen-shot from the site linked above showing the arc of a huge circle (radius 68.25) circumscribing a triangle of the sort you asked, with two sides at length 19.5 units, one side at 38.6 units.
-
I think this is what the OP needed - a visualisation, rather than a proof. – Daniel Littlewood Dec 18 '12 at 19:57
Take one side of the triangle, say $AB$, and construct the perpendicular bisector $L$. Any point $P$ on $L$ will be equidistant from A and B, because $\triangle PAB$ will be isosceles by construction.
Do the same with side $BC$ and perpendicular bisector $M$.
Then the lines $L$ and $M$ are not parallel since $\triangle ABC$ is non-degenerate. Hence they intersect in a point $Q$.
$Q$ lies on $L$ so is equidistant from $A$ and $B$. $Q$ also lies on $M$ so it is equidistant from $B$ and $C$. Hence $\bar{AQ}=\bar{BQ}=\bar{CQ}$ and $Q$ is the centre of a circle which passes through the points $A, B \text{ and } C$.
-
Note that if the sides are "nearly parallel" the perpendicular bisectors will be "nearly parallel" too, but they will eventually meet. – Mark Bennet Dec 18 '12 at 17:14
For your particular triangle, let $C$ be the remaining vertex. Draw the line $\ell$ through $C$ perpendicular to $AB$. Suppose that this line meets $AB$ at $M$.
Travel on $\ell$, away from $M$, in the direction opposite to $C$. Always when $P$ is on $\ell$, $P$ is at equal distances from $A$ and $B$.
At first $P$ is much closer to $C$ than to $A$ or $B$. But if you travel to a point $P'$ that makes $\angle CAP'$ equal to $90$ degrees, $P'$ will be closer to $A$ than it is to $C$. This is because then $P'C$ is the hypotenuse of a right triangle, and $P'A$ is one of the legs of that triangle.
So $P$ somewhere between $M$ and $P'$ is "just right," it makes $PA=PB=PC$. This is the centre of our circumcircle.
Probably your discomfort is due to the fact that the centre of the circumcircle is outside the triangle. That happens whenever one of the angles of the triangle is $\gt 90^\circ$.
-
It might be a little easier to understand to the problem backwards. Imagine that you constructed the circumcircle of $ABC$.
Now, by the inscribed angle theorem, $\angle A=1^\circ$ means the arc $BC$ measures $2^\circ$. Thus the arc $BC$ represents a 180-th of the circle. Same for the arc $AC$. Thus, the arc $ACB$ represents the 90-th part of the circle, so it is extremely tiny compared with the circle.
Moreover, starting with any circle, picking an arc $AB$ which is the 90-th part of the circle, and then picking $C$ to be the midpoint of this arc, creates a triangle $ABC$ similar to your triangle.
This explains the issue which you probably have: for such a triangle, the circumcircle has to be extremely (maybe unreasonably) big compared with the triangle...
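To put a number on this, by the law of sines a side of length $c$ opposite the angle $C$ satisfies $c = 2R \sin C$, where $R$ is the circumradius. A short check for the $1^\circ$-$1^\circ$-$178^\circ$ triangle (a sketch; the long side is normalized to 1):

```python
import math

A = B = math.radians(1.0)   # the two 1-degree base angles
C = math.pi - A - B         # apex angle: 178 degrees

AB = 1.0                    # long side, opposite C
R = AB / (2 * math.sin(C))  # law of sines: AB / sin(C) = 2R
short_side = 2 * R * math.sin(A)

print(round(R, 2), round(short_side, 3))  # 14.33 0.5
```

The circumradius is about fourteen times the longest side, which is precisely the "unreasonably big" circle described above.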
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 63, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9265069961547852, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/60590/category-theoretic-limit-related-to-topological-limit
|
# Category-theoretic limit related to topological limit?
Is there any connection between category-theoretic term 'limit' (=universal cone) over diagram, and topological term 'limit point' of a sequence, function, net...?
To be more precise, is there a category-theoretic setting of some non-trivial topological space such that these different concepts of term 'limit' somehow relate?
This question came to me after I saw ( http://www.youtube.com/watch?v=be7rx29eMr4 ) a surprising fact that generalised metric spaces can be seen as categories enriched over preorder $([0,\infty],\leq)$.
-
Similar question on MO (without satisfactory answer). With some fleshing out, this answer in a linked question might lead to something. – t.b. Aug 29 '11 at 22:51
It seems that suggestion wouldn't work. If my objects are open sets and morphisms inclusions, then the limit of any family of objects is just the interior of the intersection of that family. – rafaelm Aug 29 '11 at 23:10
It struck me as naive as well... However I think that what one really should consider would be filters modulo some equivalence relation. A filter then should have a limit point $p$ if and only if it is equivalent to the associated principal ultrafilter (the neighborhood filter of $p$). – t.b. Aug 29 '11 at 23:15
## 2 Answers
The connection is well-known (in particular I'm claiming no originality; I don't recall where I found this, though!). Let $(X,\mathcal O)$ be a topological space and $\mathcal F(X)$ the poset of filters on $X$ with respect to inclusion, considered as a (small, thin) category in the usual way. Given $x\in X$ and $F\in\mathcal F(X)$, let $\mathcal U_X(x)$ denote the neighbourhood filter of $x$ in $(X,\mathcal O)$, and let $\mathcal F_{x,F}(X)$ be the full subcategory of $\mathcal F(X)$ generated by $\{G\in\mathcal F(X):F\cup\mathcal U_X(x)\subseteq G\}$. Let $E:\mathcal F_{x,F}\hookrightarrow\mathcal F(X)$ be the obvious (embedding) diagram, let $\Delta$ be the usual diagonal functor, and let $\lambda:\Delta(F)\rightarrow E$ be the natural transformation where $\lambda(G):F\hookrightarrow G$ is the inclusion for each $G\in\mathcal F_{x,F}$. It is not hard to see that $F$ tends to $x$ in $(X,\mathcal O)$ iff $\lambda$ is a limit of $E$. Kind regards - Stephan F. Kroneck.
-
Ah, yes, thanks for spelling it out! It is essentially what I thought it should be (I don't know why I mentioned an equivalence relation in my comment above) only excuse: it was late :) – t.b. Sep 8 '11 at 11:22
Tnx, I'll accept now and check later :) don't know much about filters yet. – rafaelm Sep 8 '11 at 18:36
This construction works out perfectly. Tnx again. :) – rafaelm Sep 10 '11 at 20:20
@ rafaelm: no problem; as written, I cannot claim any dues; kind regards - Stephan F. Kroneck. – bonnbaki Sep 11 '11 at 19:56
Let me think. So $\lambda$ is a limit of $E$ iff $F$ is the intersection of all filters $G$ with $F \cup U_X(x) \subseteq G$ iff $U_X(x) \subseteq F$ iff $F$ converges to $x$. So this is really just a trivial reformulation. But quite amusing. – Martin Brandenburg Jan 29 at 0:10
Let $\rm X$ and $\rm Y$ be $\rm T_1$ topological spaces. Let $f : \rm X \to Y$ be any function and let $x \in \rm X$.
Then because $\rm X$ is a $\rm T_1$ topological space, if $\mathcal V_x$ is the filter of neighborhoods of $x$, we have $$\lim_{\mathcal V_x} \mathrm V = \bigcap_{\mathrm V \in \mathcal V_x} \mathrm V = \{x\}.$$
Now suppose $f$ is continuous at $x$. That means that the filter $f(\mathcal V_x)$ is finer than $\mathcal V_{f(x)}$. This implies that $$\{f(x)\} \subset \lim_{\mathcal V_x} f(\mathrm V) \subset \bigcap_{\mathrm W \in \mathcal V_{f(x)}} \mathrm W = \{f(x)\}$$
Conclusion: if $f$ is continuous at $x$ then $$\lim_{\mathcal V_x} f(\mathrm V) = f(\lim_{\mathcal V_x}\mathrm V).$$
More generally, if $\mathfrak F$ is any ultrafilter converging to $x$:
• if $\bigcap \mathfrak F = \emptyset$, then $\bigcap f(\mathfrak F) = \emptyset$ ;
• or $\bigcap \mathfrak F = \{x\}$ and because $\mathcal V_{f(x)} \subset f(\mathcal V_x) \subset f(\mathfrak F)$, we have $\bigcap f(\mathfrak F) = \{f(x)\}$.
So in both cases, $$\lim_{\mathfrak F}f(\mathrm V) = f(\lim_{\mathfrak F} \mathrm V).$$
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 54, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9328374862670898, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/9449?sort=votes
|
## A conjecture of Montesinos
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Not every orientable 3-manifold is a double cover of $S^3$ branched over a link. For example, the 3-torus isn't. However, in 1975 Montesinos conjectured (Surgery on links and double branched covers of $S^3$, in: "Knots, groups and 3-manifolds", papers dedicated to the memory of R. Fox) that every orientable 3-manifold is a double branched cover of a sphere with handles, i.e. the connected sum of a certain number of copies of $S^1\times S^2$ (this number can be zero, in which case we get $S^3$). Notice that this time $T^3$ does not provide a counter-example since if we take the quotient of $T^3$ by the involution $(x,y,z)\mapsto (x^{-1}, y^{-1},z)$ we get $S^2\times S^1$.
I was wondering what the status of this conjecture is.
-
## 2 Answers
It is false. For example, there are closed, orientable, aspherical 3-manifolds that admit no nontrivial action of a finite group whatsoever. The first examples were due to F. Raymond and J. Tollefson in the 1970s, I believe.
-
Thanks, Allan! I've found their papers. – algori Dec 21 2009 at 2:29
I don't understand the answer. Why can a manifold without a nontrivial action not be a branched cover of $S^1\times S^2$? Compare this with the fact that every 3-manifold is a 3-fold branched cover of $S^3$.
-
It's because the question asks for a 2-fold cover, which would have to be regular. – Richard Kent May 11 2010 at 14:59
I see. Thank you. – Wolffo May 11 2010 at 15:09
Then is the conjecture true when restricted to manifolds with a $\mathbb{Z}$ action? – Wolffo May 11 2010 at 15:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.933479905128479, "perplexity_flag": "middle"}
|
http://mathematica.stackexchange.com/questions/10180/plotting-a-complex-valued-function-over-a-circular-region/10184
|
Plotting a complex valued function over a circular region
I would like to apply some complex valued function to some region in the plane, say, a circle of radius $R$ centred at $k$.
How can I do this?
-
4 Answers
Under the interpretation of OP's question as
How do I apply the transformation $w=f(z)$ to a region (e.g. a disk) in the complex plane?
I'd say `ParametricPlot[]` (which now incorporates the functionality from the old ``Graphics`ComplexMap`` package) would be what you can use:
````With[{f = # + 1/# &, center = 1/3 + 3 I/2, radius = 4/3},
ParametricPlot[
Through[{Re, Im}[f[center + r Exp[I θ]]]], {r, 0, radius}, {θ, -π, π},
PlotPoints -> 45, PlotRange -> All]]
````
-
That is awesome!!! – Fawkes5 Sep 4 '12 at 3:23
I thought the question was to apply a translation to the circle.
Including the option for radius size and location of center of circle:
````f[radius_, {x1_, y1_}] :=
DensityPlot[Sin[x]*Sin[y], {x, -12, 12}, {y, -12, 12},
RegionFunction -> Function[{x, y}, (x - x1)^2 + (y - y1)^2 < radius^2],
ColorFunction -> "SunsetColors", GridLines -> {{x1}, {y1}}]
f[2, {5, -3}]
````
Making this interactive,
````Manipulate[ DensityPlot[Sin[x]*Sin[y], {x, -12, 12}, {y, -12, 12},
RegionFunction -> Function[{x, y}, (x - x1)^2 + (y - y1)^2 < radius^2],
ColorFunction -> "SunsetColors", GridLines -> {{x1}, {y1}},
PlotRange -> 15],{{x1, 0}, -10, 10},{{y1, 0}, -10, 10, Slider},{{radius, 1}, 0, 4, Slider}]
````
-
That is wonderful, thank you!!! – Fawkes5 Sep 4 '12 at 3:24
Use `RegionFunction`, like so:
````DensityPlot[Sin[3*x]*Cos[4*y], {x, -2, 2}, {y, -2, 2},
RegionFunction -> Function[{x, y}, x^2 + y^2 < 4]]
````
-
Also awesome!! Thanks! – Fawkes5 Sep 4 '12 at 3:26
There are many ways to represent visually a complex-valued function of a complex variable. Here's another one. (Instead of superimposing the plots of real and imaginary parts, you could make a `Row` or `GraphicsRow` of them.)
````Plot3D[Through[{Re, Im}[ArcSin[x + y I]]], {x, -Pi, Pi}, {y, -Pi, Pi},
RegionFunction -> Function[{x, y, z}, x^2 + y^2 <= Pi^2],
MeshFunctions -> {Re[Sqrt[#1 + I #2]] &, Im[Sqrt[#1 + I #2]] &},
BoxRatios -> {1,1,0.7}]
````
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7984048128128052, "perplexity_flag": "middle"}
|
http://mathforum.org/mathimages/index.php?title=Parabolic_Reflector&diff=5791&oldid=5784
|
# Parabolic Reflector
### From Math Images
Parabolic Reflector Dish
Solar Dishes such as the one shown use a paraboloid shape to focus the incoming light into a single collector.
# Basic Description
Figure 1: Incoming beams of light perpendicular to the directrix bounce off the dish directly towards the focus.
Figure 2: Note that incoming beams reflect 'over' the line perpendicular to the parabola at the point of contact.
The geometry of a parabola makes this shape useful for solar dishes. If the dish is facing the sun, beams of light coming from the sun are essentially parallel to each other when they hit the dish. Upon hitting the surface of the dish, the beams are reflected directly towards the focus of the parabola, where a device to absorb the sun's energy would be located.
We can see why beams of light hitting the parabola-shaped dish will reflect towards the same point. A beam of light reflects 'over' the line perpendicular to the parabola at the point of contact. In other words, the angle the light beam makes with the perpendicular when it hits the parabola is equal to the angle it makes with the same perpendicular after being reflected, as shown in Figure 2.
Near the bottom of the parabola the perpendicular line is nearly vertical, meaning an incoming beam barely changes its angle after being reflected, allowing it to reach the focus above the bottom part of the parabola. Further up the parabola the perpendicular becomes more horizontal, allowing a light beam to undergo the greater change in angle needed to reach the focus.
Parabolic reflectors can also work in reverse: if a light emitter is placed at the focus and shined inward towards the parabola, the light will be reflected straight out of the parabola, with the beams of light traveling parallel to each other. Headlights on cars often use this effect to shine light directly forward.
# A More Mathematical Explanation
Figure 3
Figure 4: A represents equal angles: the line normal to the parabola has the same slope relative to the y-axis as the line tangent to the parabola has relative to the x-axis.
The fact that a parabolic reflector can collect light in this way can be proven. We can show that any beam of light coming straight down into a parabola will reflect at exactly the angle needed to hit the focus, as follows:
Step 1
We begin with the equation of a parabola with focus at (0,p):
• $x^2=4py$
Step 2
We take the derivative with respect to x, giving the slope of the tangent at any point on the parabola:
• $\frac{x}{2p} = \frac{dy}{dx}$
The slope of this tangent line is relative to the x-axis: when the slope is zero, the tangent line is parallel to the x-axis. The line normal to the parabola has the same slope relative to the y-axis as the line tangent to the parabola has relative to the x-axis, as shown in Figure 4.
Step 3
We use this slope to find the angle between the normal and the y-axis, which is the same as the angle between the normal and an incoming beam of light. The desired angle $\theta$ can be expressed as:
• $\tan\theta= \frac{x}{2p}$ (Equation 1)
As mentioned previously, a beam of light is reflected 'over' the normal. This means that the angle the beam of light takes relative to a vertical line is equal to two times the angle the normal makes with the same vertical line.
Step 4
We now must show that the direction the light takes after being reflected is exactly the angle needed to hit the focus.
Notice from Figure 3 that geometrically, the angle needed to hit the focus is equal to $2\theta$, and satisfies the relationship
• $\tan2\theta= \frac{x}{p-x^2/4p}$
Step 5
We use a trigonometric identity to rewrite the equation in Step 4:
• $\tan2\theta= \frac{x}{p-x^2/4p} =\frac{2\tan\theta}{1-\tan^2\theta}$ (Equation 2)
Step 6
We now manipulate Equation 1's expression for $\tan\theta$ to show its equivalence to Equation 2 (that is, to show the angle $\theta$ in Equation 1 is the same as the angle $\theta$ in Equation 2):
• $2\tan\theta = 2\cdot\frac{x}{2p} = \frac{x}{p}$, and
• $1 - \tan^2\theta = 1 - \left(\frac{x}{2p}\right)^2 = 1 - \frac{x^2}{4p^2}$.
Step 7
We combine the two expressions in Step 6, giving:
• $\frac{2\tan\theta}{1-\tan^2\theta} = \frac{\frac{x}{p}}{1 - \frac{x^2}{4p^2}} = \frac{x}{p-x^2/4p}$
Which is the same as the expression in Equation 2.
Therefore, a beam of light will hit the parabola's focus after being reflected. ■
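A quick numerical sanity check of the identity (a sketch; the focal length $p$ and the sample points are arbitrary choices):

```python
import math

p = 1.5  # parabola x^2 = 4*p*y with focus at (0, p)
for x in (0.3, 1.0, 2.0):
    theta = math.atan(x / (2 * p))     # Equation 1: angle of the normal
    lhs = math.tan(2 * theta)          # reflected beam: double the angle
    rhs = x / (p - x**2 / (4 * p))     # Step 4: angle needed to hit the focus
    assert math.isclose(lhs, rhs), (x, lhs, rhs)
print("reflection property holds at the sample points")
```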
Field: Geometry
Created By: Energy Information Administration
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 13, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8736799359321594, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/geometry/186240-solving-side-square-given-diagonal.html
|
# Thread:
1. ## Solving for side of square given diagonal
Can someone correct me?
Problem: The diagonal of a square quilt is 4 times the square root of 2. What is the area of the quilt in square feet?
The answer is 4. My book suggests that we use the property of a 45 45 90 triangle to solve this. I understand that method. However, I initially attempted to solve it by using Pythagorean theorem with the diagonal being the hypotenuse. I came up with 4 x 2^(1/2) = 2 x (a^2) I proceeded to divide the left side by 2, and I did not come up with 4 for the answer. Why is this?
2. ## Re: Solving for side of square given diagonal
If you want to use Pythagoras, then let $a$ be the side of the square quilt; therefore:
$a^2+a^2=(4\sqrt{2})^2$
Solve this equation for $a$.
3. ## Re: Solving for side of square given diagonal
Originally Posted by benny92000
Can someone correct me?
Problem: The diagonal of a square quilt is 4 times the square root of 2. What is the area of the quilt in square feet?
The answer is 4. <--- That is the side-length and not the area
My book suggests that we use the property of a 45 45 90 triangle to solve this. I understand that method. However, I initially attempted to solve it by using Pythagorean theorem with the diagonal being the hypotenuse. I came up with 4 x 2^(1/2) = 2 x (a^2) I proceeded to divide the left side by 2, and I did not come up with 4 for the answer. Why is this?
1. Draw a sketch.
2. The quilt is placed inside a square of the side-length d.
3. You can prove (by congruent triangles) that the area of the quilt is
$A = \frac12 \cdot d^2$
4. Plug in the value for d and you'll get the given result.
5. If you want to calculate the side-length then $a = \sqrt{A}=d \cdot \sqrt{\frac12}$
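(For the record, plugging in $d = 4\sqrt{2}$ gives $A = \frac12 \cdot 32 = 16$ square feet, and hence the side length $a = \sqrt{16} = 4$.)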
4. ## Re: Solving for side of square given diagonal
Siron has calculated the sides of the square when what was asked for was the area, s^2.
5. ## Re: Solving for side of square given diagonal
Originally Posted by cathectio
Siron has calculated the sides of the square when what was asked for was the area, s^2.
no kidding ...
Siron most probably assumed the OP was capable of determining the area of a square once he/she determined the side length.
Effective tutoring sometimes requires providing just enough information to let the OP solve the problem (or screw it up) on their own.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.953706681728363, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/pre-calculus/200701-f-x-1-x-continuous.html
|
# Thread:
1. ## f(x)=1/x continuous?
Is f(x)=1/x considered continuous? A practice test I was doing said it was, but I figured it would have to be discontinuous at x=0. Does it possibly have to do with 0 not being in the domain? The simple definition of continuity given to me was if the graph could be drawn without picking up your pencil; this is why I do not see how 1/x can be continuous.
2. ## Re: f(x)=1/x continuous?
Originally Posted by Jzon758
Is f(x)=1/x considered continuous? A practice test I was doing said it was, but I figured it would have to be discontinuous at x=0. Does it possibly have to do with 0 not being in the domain? The simple definition of continuity given to me was if the graph could be drawn without picking up your pencil; this is why I do not see how 1/x can be continuous.
On what set? Not on $\mathbb{R}$. It is continuous on $(-\infty,0)\cup (0,\infty)$
3. ## Re: f(x)=1/x continuous?
$f(x) = \frac{1}{x}$ is continuous over its domain.
4. ## Re: f(x)=1/x continuous?
Okay good. I thought I was missing out on something regarding the definition of continuity. I think the practice test must be flawed. Thanks!
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9797453880310059, "perplexity_flag": "middle"}
|
http://terrytao.wordpress.com/2008/12/27/tricks-wiki-use-basic-examples-to-calibrate-exponents/
|
What’s new
Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao
# Tricks Wiki: Use basic examples to calibrate exponents
27 December, 2008 in math.CA, math.CO, tricks | Tags: additive combinatorics, calibration, Cauchy-Schwarz, Fourier transform, NLS, scale invariance, Sobolev embedding, test cases
Title: Use basic examples to calibrate exponents
Motivation: In the more quantitative areas of mathematics, such as analysis and combinatorics, one has to frequently keep track of a large number of exponents in one’s identities, inequalities, and estimates. For instance, if one is studying a set of N elements, then many expressions that one is faced with will often involve some power $N^p$ of N; if one is instead studying a function f on a measure space X, then perhaps it is an $L^p$ norm $\|f\|_{L^p(X)}$ which will appear instead. The exponent $p$ involved will typically evolve slowly over the course of the argument, as various algebraic or analytic manipulations are applied. In some cases, the exact value of this exponent is immaterial, but at other times it is crucial to have the correct value of $p$ at hand. One can (and should) of course carefully go through one’s arguments line by line to work out the exponents correctly, but it is all too easy to make a sign error or other mis-step at one of the lines, causing all the exponents on subsequent lines to be incorrect. However, one can guard against this (and avoid some tedious line-by-line exponent checking) by continually calibrating these exponents at key junctures of the arguments by using basic examples of the object of study (sets, functions, graphs, etc.) as test cases. This is a simple trick, but it lets one avoid many unforced errors with exponents, and also lets one compute more rapidly.
Quick description: When trying to quickly work out what an exponent p in an estimate, identity, or inequality should be without deriving that statement line-by-line, test that statement with a simple example which has non-trivial behaviour with respect to that exponent p, but trivial behaviour with respect to as many other components of that statement as one is able to manage. The “non-trivial” behaviour should be parametrised by some very large or very small parameter. By matching the dependence on this parameter on both sides of the estimate, identity, or inequality, one should recover p (or at least a good prediction as to what p should be).
General discussion: The test examples should be as basic as possible; ideally they should have trivial behaviour in all aspects except for one feature that relates to the exponent p that one is trying to calibrate, thus being only “barely” non-trivial. When the object of study is a function, then (appropriately rescaled, or otherwise modified) bump functions are very typical test objects, as are Dirac masses, constant functions, Gaussians, or other functions that are simple and easy to compute with. In additive combinatorics, when the object of study is a subset of a group, then subgroups, arithmetic progressions, or random sets are typical test objects. In graph theory, typical examples of test objects include complete graphs, complete bipartite graphs, and random graphs. And so forth.
This trick is closely related to that of using dimensional analysis to recover exponents; indeed, one can view dimensional analysis as the special case of exponent calibration when using test objects which are non-trivial in one dimensional aspect (e.g. they exist at a single very large or very small length scale) but are otherwise of a trivial or “featureless” nature. But the calibration trick is more general, as it can involve parameters (such as probabilities, angles, or eccentricities) which are not commonly associated with the physical concept of a dimension. And personally, I find example-based calibration to be a much more satisfying (and convincing) explanation of an exponent than a calibration arising from formal dimensional analysis.
When one is trying to calibrate an inequality or estimate, one should try to pick a basic example which one expects to saturate that inequality or estimate, i.e. an example for which the inequality is close to being an equality. Otherwise, one would only expect to obtain some partial information on the desired exponent p (e.g. a lower bound or an upper bound only). Knowing the examples that saturate an estimate that one is trying to prove is also useful for several other reasons – for instance, it strongly suggests that any technique which is not efficient when applied to the saturating example, is unlikely to be strong enough to prove the estimate in general, thus eliminating fruitless approaches to a problem and (hopefully) refocusing one’s attention on those strategies which actually have a chance of working.
Calibration is best used for the type of quick-and-dirty calculations one uses when trying to rapidly map out an argument that one has roughly worked out already, but without precise details; in particular, I find it particularly useful when writing up a rapid prototype. When the time comes to write out the paper in full detail, then of course one should instead carefully work things out line by line, but if all goes well, the exponents obtained in that process should match up with the preliminary guesses for those exponents obtained by calibration, which adds confidence that no exponent errors have been committed.
Prerequisites: Undergraduate analysis and combinatorics.
Example 1. (Elementary identities) There is a familiar identity for the sum of the first n squares:
$\displaystyle 1^2 + 2^2 + 3^2 + \ldots + n^2 = ???$ (1)
But imagine that one has forgotten exactly what the RHS of (1) was supposed to be… one remembers that it was some polynomial in n, but can’t remember what the degree or coefficients of the polynomial were. Now one can of course try to rederive the identity, but there are faster (albeit looser) ways to reconstruct the right-hand side. Firstly, we can look at the asymptotic test case $n \to \infty$. On the LHS, we are summing n terms of size at most $n^2$, so the LHS is at most $n^3$; thus, if we believe the RHS to be a polynomial in n, it should be at most cubic in n. We can do a bit better by approximating the sum in the LHS by the integral $\int_0^n x^2\ dx = n^3/3$, which strongly suggests that the cubic term on the RHS should be $n^3/3$. So now we have
$\displaystyle 1^2 + 2^2 + 3^2 + \ldots + n^2 = \frac{1}{3} n^3 + a n^2 + b n + c$
for some coefficients a,b,c that we still have to work out.
We can plug in some other basic cases. A simple one is n=0. The LHS is now zero, and so the constant coefficient c on the RHS should also vanish. A slightly less obvious base case is n=-1. The LHS is still zero (note that the LHS for n-1 should be the LHS for n, minus $n^2$), and so the RHS still vanishes here; thus by the factor theorem, the RHS should have both n and n+1 as factors. We are now looking at
$\displaystyle 1^2 + 2^2 + 3^2 + \ldots + n^2 = n(n+1) ( \frac{1}{3} n + d )$
for some unspecified constant d. But now we just need to try one more test case, e.g. n=1, and we learn that $d = 1/6$, thus recovering the correct formula
$\displaystyle 1^2 + 2^2 + 3^2 + \ldots + n^2 = \frac{n(n+1) (2n+1)}{6}$. (1′)
Once one has the formula (1′) in hand, of course, it is not difficult to verify by a textbook use of mathematical induction that the formula is in fact valid. (Alternatively, one can prove a more abstract theorem that the sum of the first n $k^{th}$ powers is necessarily a polynomial in n for any given k, at which point the above analysis actually becomes a rigorous derivation of (1′).)
Note that the optimal strategy here is to start with the most basic test cases ($n \to \infty, n = 0, n = -1$) first before moving on to less trivial cases. If instead one used, e.g. n=1, n=2, n=3, n=4 as the test cases, one would eventually have obtained the right answer, but it would have been more work.
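In executable form (a quick sketch; the exact-arithmetic check via `Fraction` is my own choice, not part of the original argument):

```python
from fractions import Fraction

def lhs(n):
    # direct computation of 1^2 + 2^2 + ... + n^2
    return sum(k * k for k in range(1, n + 1))

# Calibrated ansatz from the test cases above: RHS = n(n+1)(n/3 + d).
# The remaining test case n = 1 pins down d: 1 = 1*2*(1/3 + d).
d = Fraction(1, 2) - Fraction(1, 3)  # = 1/6

def rhs(n):
    return n * (n + 1) * (Fraction(n, 3) + d)

assert all(lhs(n) == rhs(n) for n in range(100))  # sanity check
```

The point of the calibration is that only one cheap test case (n=1) is needed once the asymptotic and root test cases have fixed the shape of the answer.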
Exercise 1. (Partial fractions) If $w_1,\ldots,w_k$ are distinct complex numbers, and P(z) is a polynomial of degree less than k, establish the existence of a partial fraction decomposition
$\displaystyle \frac{P(z)}{(z-w_1) \ldots (z-w_k)} = \frac{c_1}{z-w_1} + \ldots + \frac{c_k}{z-w_k},$
(Hint: use the remainder theorem and induction) and use the test cases $z \to w_j$ for $j=1,\ldots,k$ to compute the coefficients $c_1,\ldots,c_k$. Use this to deduce the Lagrange interpolation formula. $\diamond$
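For the record, here is what the hinted test cases give (a worked answer to the calibration step, under the stated hypotheses): multiplying both sides by $z - w_j$ and letting $z \to w_j$ kills every term on the right except the $j$th, so

$\displaystyle c_j = \frac{P(w_j)}{\prod_{i \neq j} (w_j - w_i)}, \qquad j = 1, \ldots, k,$

and applying this with $P$ chosen to interpolate prescribed values at the $w_j$ yields the Lagrange interpolation formula.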
Example 2. (Counting cycles in a graph) Suppose one has a graph G on N vertices with an edge density of $\delta$ (thus, the number of edges is $\delta \binom{N}{2}$, or roughly $\delta N^2$ up to constants). There is a standard Cauchy-Schwarz argument that gives a lower bound on the number of four-cycles $C_4$ (i.e. a circuit of four vertices connected by four edges) present in G, as a function of $\delta$ and N. It only takes a few minutes to reconstruct this argument to obtain the precise bound, but suppose one was in a hurry and wanted to guess the bound rapidly. Given the “polynomial” nature of the Cauchy-Schwarz inequality, the bound is likely to be some polynomial combination of $\delta$ and N, such as $\delta^p N^q$ (omitting constants and lower order terms). But what should p and q be?
Well, one can test things with some basic examples. A really trivial example is the empty graph (where $\delta = 0$), but this is too trivial to tell us anything much (other than that p should probably be positive). At the other extreme, consider the complete graph on N vertices, where $\delta = 1$; this renders p irrelevant, but still makes q non-trivial (and thus, hopefully, computable). In the complete graph, every set of four points yields a four-cycle $C_4$, so the number of four-cycles here should be about $N^4$ (give or take some constant factors, such as 4! – remember that we are in a hurry here, and are ignoring these sorts of constant factors). This tells us that q should be at most 4, and if we expect the Cauchy-Schwarz bound to be saturated for the complete graph (which is a good bet – arguments based on the Cauchy-Schwarz inequality tend to work well in very “uniformly distributed” situations) – then we would expect q to be exactly 4.
To calibrate p, we need to test with graphs of density $\delta$ less than 1. Given the previous intuition that Cauchy-Schwarz arguments work well in uniformly distributed situations, we would want to use a test graph of density $\delta$ that is more or less uniformly distributed. A good example of such a graph is a random graph G on N vertices, in which each edge has an independent probability of $\delta$ of lying in G. By the law of large numbers, we expect the edge density of such a random graph to be close to $\delta$ on the average. On the other hand, each one of the roughly $N^4$ four-cycles $C_4$ connecting the N vertices has a probability about $\delta^4$ of lying in the graph, since the $C_4$ has four edges, each with an independent probability of $\delta$ of lying in the graph. The events that each of the four-cycles lies in the graph G aren't completely independent of each other, but they are still close enough to being so that one can guess using the law of large numbers that the total number of 4-cycles should be about $\delta^4 N^4$ on the average (up to constants). [Actually, linearity of expectation will give us this claim even without any independence whatsoever.] So this leads one to predict p=4, thus the number of 4-cycles in any graph on N vertices of density $\delta$ should be $\geq c \delta^4 N^4$ for some absolute constant $c>0$, and this is indeed the case (provided that one also counts degenerate cycles, in which some vertices are repeated).
If one is nervous about using the random graph as the test graph, one could try a graph at the other end of the spectrum – e.g. the complete graph on about $\sqrt{\delta} N$ vertices, which also has edge density about $\delta$. Here one quickly calculates that the number of 4-cycles is about $\delta^2 N^4$, which is a larger quantity than in the random case (and this fits with the intuition that this graph is packing the same number of edges into a tighter space, and should thus increase the number of cycles). So the random graph is still the best candidate for a near-extremiser for this bound. (Actually, if the number of 4-cycles is close to the Cauchy-Schwarz lower bound, then the graph becomes pseudorandom, which roughly speaking means any randomly selected small subgraph of that graph is indistinguishable from a random graph.)
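Both predictions are easy to sanity-check numerically. The sketch below is mine (the parameter values are arbitrary, and `trace(A^4)` counts closed 4-walks, i.e. it includes the degenerate cycles mentioned above):

```python
import numpy as np

rng = np.random.default_rng(0)
N, delta = 300, 0.2

# Random graph G(N, delta): symmetric 0/1 adjacency matrix.
A = (rng.random((N, N)) < delta).astype(float)
A = np.triu(A, 1)
A = A + A.T

# trace(A^4) counts closed walks of length 4 (degenerate cycles included).
walks4 = np.trace(np.linalg.matrix_power(A, 4))
print(walks4 / (delta**4 * N**4))    # O(1): matches the delta^4 N^4 prediction

# Complete graph on ~sqrt(delta)*N vertices, with edge density about delta.
M = int(np.sqrt(delta) * N)
K = np.ones((M, M)) - np.eye(M)
walks4_K = np.trace(np.linalg.matrix_power(K, 4))
print(walks4_K / (delta**2 * N**4))  # O(1): matches the delta^2 N^4 count
```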
One should caution that sometimes the random object is not the extremiser, and so does not always calibrate an estimate correctly. For instance, consider Szemerédi’s theorem, that asserts that given any $0 < \delta < 1$ and $k > 1$, that any subset of $\{1,\ldots,N\}$ of density $\delta$ should contain at least one arithmetic progression of length k, if N is large enough. One can then ask what is the minimum number of k-term arithmetic progressions such a set would contain. Using the random subset of $\{1,\ldots,N\}$ of density $\delta$ as a test case, we would guess that there should be about $\delta^k N^2$ (up to constants depending on k). However, it turns out that the number of progressions can be significantly less than this (basically thanks to the old counterexample of Behrend): given any constant C, one can get significantly fewer than $\delta^C N^2$ k-term progressions. But, thanks to an averaging argument of Varnavides, it is known that there are at least $c(k,\delta) N^2$ k-term progressions (for N large enough), where $c(k,\delta) > 0$ is a positive quantity. (Determining the exact order of magnitude of $c(k,\delta)$ is still an important open problem in this subject.) So one can at least calibrate the correct dependence on N, even if the dependence on $\delta$ is still unknown.
Example 3. (Sobolev embedding) Given a reasonable function $f: {\Bbb R}^n \to {\Bbb R}$ (e.g. a Schwartz class function will do), the Sobolev embedding theorem gives estimates such as
$\displaystyle \| f \|_{L^q({\Bbb R}^n)} \leq C_{n,p,q} \|\nabla f\|_{L^p({\Bbb R}^n)}$ (2)
for various exponents p, q. Suppose one has forgotten the exact relationship between p, q, and n and wants to quickly reconstruct it, without rederiving the proof of the theorem or looking it up. One could use dimensional analysis to work out the relationship (and we will come to that shortly), but an equivalent way to achieve the same result is to test the inequality (2) against a suitably basic example, preferably one that one expects to saturate (2).
To come as close to saturating (2) as possible, one wants to keep the gradient of f small, while making f large; among other things, this suggests that unnecessary oscillations in f should be kept to a minimum. A natural candidate for an extremiser, then, would be a rescaled bump function $f(x) = A\phi(x/L)$, where $\phi \in C^\infty_0({\Bbb R}^n)$ is some fixed bump function, $A > 0$ is an amplitude parameter, and $L > 0$ is a length scale parameter, thus f is a rescaled bump function of bounded amplitude O(A) that is supported on a ball of radius O(L) centred at the origin. [As the estimate (2) is linear, the amplitude A turns out to ultimately be irrelevant here, but the amplitude plays a more crucial role in nonlinear estimates; for instance, it explains why nonlinear estimates typically have the same number of appearances of a given unknown function f in each term. Also, it is sometimes convenient to carefully choose the amplitude in order to attain a convenient normalisation, e.g. to set one of the norms in (2) equal to 1.]
The ball that f is supported on has volume about $O(L^n)$ (allowing implied constants to depend on n), and so the $L^q$ norm of f should be about $O(A L^{n/q})$ (allowing implied constants to depend on q as well). As for the gradient of f, since f oscillates by O(A) over a length scale of O(L), one expects $\nabla f$ to have size about $O(A/L)$ on this ball (remember, derivatives measure “rise over run“!), and so the $L^p$ norm of $\nabla f$ should be about $O( \frac{A}{L} L^{n/p} )$. Inserting this numerology into (2), and equating powers of L (note A cancels itself into irrelevance, and could in any case be set to equal 1), we are led to the relation
$\displaystyle \frac{n}{p} - 1 = \frac{n}{q}$ (3)
which is indeed one of the necessary conditions for (2). (The other necessary conditions are that p and q lie strictly between 1 and infinity, but these require a more complicated test example to establish.)
One can efficiently perform the above argument using the language of dimensional analysis. Giving f the units of amplitude A, and giving space the units of length L, we see that the n-dimensional integral $\int_{{\Bbb R}^n}\ dx$ has units of $L^n$, and thus $L^p({\Bbb R}^n)$ norms have units of $L^{n/p}$. Meanwhile, from the rise-over-run interpretation of the derivative, $\nabla f$ has units of $A/L$, thus the LHS and RHS of (2) have units of $A L^{n/q}$ and $\frac{A}{L} L^{n/p}$ respectively. Equating these dimensions gives (3). Observe how this argument is basically a shorthand form of the argument based on using the rescaled bump function as a test case; with enough practice one can use this shorthand to calibrate exponents rapidly for a wide variety of estimates.
Exercise 2. Convert the above discussion into a rigorous proof that (3) is a necessary condition for (2). (Hint: exploit the freedom to send L to zero or to infinity.) What happens to the necessary conditions if ${\Bbb R}^n$ is replaced with a bounded domain (such as the unit cube $[0,1]^n$, assuming Dirichlet boundary conditions) or a discrete domain (such as the lattice ${\Bbb Z}^n$, replacing the gradient with a discrete gradient of course)? $\diamond$
Exercise 3. If one replaces (2) by the variant estimate
$\displaystyle \| f \|_{L^q({\Bbb R}^n)} \leq C_{n,p,q} (\|f\|_{L^p({\Bbb R}^n)} + \|\nabla f\|_{L^p({\Bbb R}^n)})$ (2′)
establish the necessary condition
$\displaystyle \frac{n}{p} - 1 \leq \frac{n}{q} \leq \frac{n}{p}$. (3′)
What happens to the dimensional analysis argument in this case? $\diamond$
Remark 1. There are many other estimates in harmonic analysis which are saturated by some modification of a bump function; in addition to the isotropically rescaled bump functions used above, one could also rescale bump functions by some non-isotropic linear transformation (thus creating various “squashed” or “stretched” bumps adapted to disks, tubes, rectangles, or other sets), or modulate bumps by various frequencies, or translate them around in space. One can also try to superimpose several such transformed bump functions together to amplify the counterexample. The art of selecting good counterexamples can be somewhat difficult, although with enough trial and error one gets a sense of what kind of arrangement of bump functions are needed to make the right-hand side small and the left-hand side large in the estimate under study. $\diamond$
Example 4 (Scale-invariance in nonlinear PDE) The model equations and systems studied in nonlinear PDE often enjoy various symmetries, notably scale-invariance symmetry, that can then be used to calibrate various identities and estimates regarding solutions to those equations. For sake of discussion, let us work with the nonlinear Schrödinger equation (NLS)
$\displaystyle i u_t + \Delta u = |u|^{p-1} u$ (4)
where $u: {\Bbb R} \times {\Bbb R}^n \to {\Bbb C}$ is the unknown field, $\Delta$ is the spatial Laplacian, and $p > 1$ is a fixed exponent. (One can also place some other constants in (4), such as Planck’s constant $\hbar$, but we have normalised this constant to equal 1 here, although it is sometimes useful to reinstate this constant for calibration purposes.) If u is one solution to (4), then we can form a rescaled family $u^{(\lambda)}$ of such solutions by the formula
$\displaystyle u^{(\lambda)}(t,x) := \frac{1}{\lambda^a} u( \frac{t}{\lambda^b}, \frac{x}{\lambda} )$ (5)
for some specific exponents a, b; these play the role of the rescaled bump functions in Example 3. The exponents a,b can be worked out by testing (4) using (5), and we leave this as an exercise to the reader, but let us instead use the shorthand of dimensional analysis to work these exponents out. Let’s give u the units of amplitude A, space the units of length L, and time the units of duration T. Then the three terms in (4) have units $A/T$, $A/L^2$, and $A^p$ respectively; equating these dimensions gives $T=L^2$ and $A=L^{-2/(p-1)}$. (In particular, time has “twice the dimension” of space; this is a feature of many non-relativistic equations such as Schrödinger, heat, or viscosity equations. For relativistic equations, of course, time and space have the same dimension with respect to scaling.) On the other hand, the scaling (5) multiplies A, T, and L by $\lambda^{-a}$, $\lambda^b$, and $\lambda$ respectively; to maintain consistency with the relations $T=L^2$ and $A=L^{-2/(p-1)}$ we must thus have $a=2/(p-1)$ and $b=2$.
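Carrying out the exercise mentioned above, substituting (5) into (4) confirms the same exponents: the three terms pick up the factors

$\displaystyle i\partial_t u^{(\lambda)} = \lambda^{-a-b}\,(iu_t)\Big(\frac{t}{\lambda^b},\frac{x}{\lambda}\Big), \quad \Delta u^{(\lambda)} = \lambda^{-a-2}\,(\Delta u)\Big(\frac{t}{\lambda^b},\frac{x}{\lambda}\Big), \quad |u^{(\lambda)}|^{p-1}u^{(\lambda)} = \lambda^{-ap}\,\big(|u|^{p-1}u\big)\Big(\frac{t}{\lambda^b},\frac{x}{\lambda}\Big),$

and matching the three exponents ($-a-b=-a-2=-ap$) forces $b=2$ and $a=2/(p-1)$, in agreement with the dimensional analysis.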
Exercise 4. Solutions to (4) (with suitable smoothness and decay properties) enjoy a conserved Hamiltonian $H(u)$, of the form
$\displaystyle H(u) = \int_{{\Bbb R}^n} \frac{1}{2} |\nabla u|^2 + \alpha |u|^q\ dx$
for some constants $\alpha, q$. Use dimensional analysis (or the rescaled solutions (5) as test cases) to compute q. (The constant $\alpha$, unfortunately, cannot be recovered from dimensional analysis, and other model test cases, such as solitons or other solutions obtained via separation of variables, also turn out not to be sensitive enough to $\alpha$ to calibrate this parameter.) $\diamond$
Remark 2. The scaling symmetry (5) is not the only symmetry that can be deployed to calibrate identities and estimates for solutions to NLS. For instance, we have a simple phase rotation symmetry $u \mapsto e^{i\theta} u$ for such solutions, where $\theta \in {\Bbb R}$ is an arbitrary phase. This symmetry suggests that in any identity involving u and its complex conjugate $\bar{u}$, the net number of factors of u, minus the factors of $\bar{u}$, in each term of the identity should be the same. (Factors without phase, such as |u|, should be ignored for this analysis.) Other important symmetries of NLS, which can also be used for calibration, include space translation symmetry, time translation symmetry, and Galilean invariance. (While these symmetries can of course be joined together, to create a large-dimensional family of transformed solutions arising from a single base solution u, for the purposes of calibration it is usually best to just use each of the generating symmetries separately.) For gauge field equations, gauge invariance is of course a crucial symmetry, though one can make the calibration procedure with respect to this symmetry automatic by working exclusively with gauge-invariant notation (see also my earlier post on gauge theory). Another important test case for Schrödinger equations is the high-frequency limit $|\xi| \to \infty$, closely related to the semi-classical limit $\hbar \to 0$, that allows one to use classical mechanics to calibrate various identities and estimates in quantum mechanics. $\diamond$
Exercise 5. Solutions to (4) (again assuming suitable smoothness and decay) also enjoy a virial identity of the form
$\displaystyle \partial_{tt} \int_{{\Bbb R}^n} x^2 |u(t,x)|^2\ dx = \int_{{\Bbb R}^n} ???\ dx$
where the right-hand side only involves u and its spatial derivatives $\nabla u$, and does not explicitly involve the spatial variable x. Using the various symmetries, predict the type of terms that should go on the right-hand side. (Again, the coefficients of these terms are unable to be calibrated using these methods, but the exponents should be accessible.) $\diamond$
Remark 3. Einstein used this sort of calibration technique (using the symmetry of spacetime diffeomorphisms, better known as the general principle of relativity, as well as the non-relativistic limit of Newtonian gravity as another test case) to derive the Einstein equations of gravity, although the one constant that he was unable to calibrate in this fashion was the cosmological constant. $\diamond$
Example 5 (Fourier-analytic identities in additive combinatorics). Fourier analysis is a useful tool in additive combinatorics for counting various configurations in sets, such as arithmetic progressions $n, n+r, n+2r$ of length three. (It turns out that classical Fourier analysis is not able to handle progressions of any longer length, but that is a story for another time – see e.g. this paper of Gowers for some discussion.) A typical situation arises when working in a finite group such as ${\Bbb Z}/N{\Bbb Z}$, and one has to compute an expression such as
$\displaystyle \sum_{n, r \in {\Bbb Z}/N{\Bbb Z}} f(n) g(n+r) h(n+2r)$ (6)
for some functions $f,g,h: {\Bbb Z}/N{\Bbb Z} \to {\Bbb C}$ (for instance, these functions could all be the indicator function of a single set $A \subset {\Bbb Z}/N{\Bbb Z}$). The quantity (6) can be expressed neatly in terms of the Fourier transforms $\hat f, \hat g, \hat h: {\Bbb Z}/N{\Bbb Z} \to {\Bbb C}$, which we normalise as $\hat f(\xi) := \frac{1}{N} \sum_{x \in {\Bbb Z}/N{\Bbb Z}} f(x) e^{-2\pi i x \xi/N}$. It is not too difficult to compute this expression by means of the Fourier inversion formula and some routine calculation, but suppose one was in a hurry and only had a vague recollection of what the Fourier-analytic expression of (6) was – something like
$\displaystyle N^p \sum_{\xi \in {\Bbb Z}/N{\Bbb Z}}\hat f( a \xi ) \hat g( b \xi ) \hat h( c \xi )$ (7)
for some coefficients p, a, b, c, but the precise values of which have been forgotten. (In view of some other Fourier-analytic formulae, one might think that some of the Fourier transforms $\hat f, \hat g, \hat h$ might need to be complex conjugated for (7), but this should not happen here, because (6) is linear in f,g,h rather than anti-linear; cf. the discussion in Example 4 about factors of u and $\bar{u}$.) How can one quickly calibrate the values of p,a,b,c without doing the full calculation?
To isolate the exponent p, we can consider the basic case $f \equiv g \equiv h \equiv 1$, in which case the Fourier transforms are just the Kronecker delta function (e.g. $\hat f(\xi)$ equals 1 for $\xi=0$ and vanishes otherwise). The expression (6) is just $N^2$, while the expression (7) is $N^p$ (because only one of the summands is non-trivial); thus p must equal 2. (Exercise: reinterpret the above analysis as a dimensional analysis.)
Next, to calibrate a,b,c, we modify the above basic test case slightly, modulating the f,g,h so that a different element of the sum in (7) is non-zero. Let us take $f(x) := e^{2\pi i a x \xi/N}$, $g(x) := e^{2\pi i b x \xi/N}$, $h(x) := e^{2\pi i c x \xi/N}$ for some fixed frequency $\xi$; then (7) is again equal to $N^p=N^2$, while (6) is equal to
$\displaystyle \sum_{n,r \in {\Bbb Z}/N{\Bbb Z}} e^{2\pi i [ a n + b (n+r) + c(n+2r)] \xi / N}.$
In order for this to equal $N^2$ for any $\xi$, we need the linear form $an+b(n+r)+c(n+2r)$ to vanish identically, which forces a=c and b=-2a. We can normalise a=1 (by using the change of variables $\xi \mapsto a \xi$), thus leading us to the correct expression for (7), namely
$\displaystyle N^2 \sum_{\xi \in {\Bbb Z}/N{\Bbb Z}}\hat f( \xi ) \hat g( -2 \xi ) \hat h( \xi )$.
Once one actually has this formula, of course, it is a routine matter to check that it actually is the right answer.
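For instance, here is a quick numerical verification (my own sketch; the size $N=64$ and the random test functions are arbitrary choices, and the normalisation of the Fourier transform matches the one above):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)
h = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Left-hand side (6): sum over n, r of f(n) g(n+r) h(n+2r).
direct = sum(np.sum(f * np.roll(g, -r) * np.roll(h, -2 * r)) for r in range(N))

# Right-hand side: N^2 * sum_xi fhat(xi) ghat(-2 xi) hhat(xi),
# where fhat(xi) = (1/N) sum_x f(x) e^{-2 pi i x xi / N} = fft(f)/N.
F, G, H = np.fft.fft(f) / N, np.fft.fft(g) / N, np.fft.fft(h) / N
xi = np.arange(N)
fourier = N**2 * np.sum(F[xi] * G[(-2 * xi) % N] * H[xi])

assert np.allclose(direct, fourier)
```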
Remark 4. One can also calibrate a,b,c in (7) by observing the identity $n - 2(n+r) + (n+2r)=0$ (which reflects the fact that the second derivative of a linear function is necessarily zero), which gives a modulation symmetry $f(x) \mapsto f(x) e^{2\pi i \alpha x}$, $g(x) \mapsto g(x) e^{-4\pi i \alpha x}$, $h(x) \mapsto h(x) e^{2\pi i \alpha x}$ to (6). Inserting this symmetry into (7) reveals that $a=c$ and $b=-2a$ as before. $\diamond$
Remark 5. By choosing appropriately normalised conventions, one can avoid some calibration duties altogether. For instance, when using Fourier analysis on a finite group such as ${\Bbb Z}/N{\Bbb Z}$, if one expects to be analysing functions that are close to constant (or subsets of the group of positive density), then it is natural to endow physical space with normalised counting measure (and thus, by Pontryagin duality, frequency space should be given non-normalised counting measure). [Conversely, if one is analysing functions concentrated on only a bounded number of points, then it may be more convenient to give physical space counting measure and frequency space normalised counting measure.] In practical terms, this means that any physical space sum, such as $\sum_{x \in {\Bbb Z}/N{\Bbb Z}} f(x)$, should instead be replaced with a physical space average ${\Bbb E}_{x \in {\Bbb Z}/N{\Bbb Z}} f(x) = \frac{1}{N} \sum_{x \in {\Bbb Z}/N{\Bbb Z}} f(x)$, while keeping sums over frequency space variables unchanged; when one does so, all powers of N “miraculously” disappear, and there is no longer any need to calibrate using the constant function 1 as was done above. Of course, this does not eliminate the need to perform other calibrations, such as that of the coefficients a,b,c above. $\diamond$
## 7 comments
Hi,
In example 1 the integral of x^2 from 0 to n should be n^3/3. [Corrected, thanks. - T.]
In example 1, can you please elaborate on how you can take n=-1 as an example? Can you take n=-2?
Ok, I understand now. You extended the definition of LHS for negative n as well.
30 December, 2008 at 1:18 am
daoguozhou
The beautiful article clarifies something in PDE. In my opinion, Tricks Wiki is of great help to graduate students like me.
30 December, 2008 at 5:55 am
\sqrt[4]{3}
It would be better to clarify the Riemann Hypothesis, e.g.,
by finding only two zeros which are not on 1/2.
Happy New Year!
Symbolic Logic
Lovely! I realized some time ago that the test case in Example 1 applied to 1/(1 – x^n) can also be used to extract the coefficients of the DFT matrix of order n, but using it to extract the coefficients of the Lagrange interpolating polynomial is sneaky.
http://mathhelpforum.com/calculus/79145-newton-s-law-cooling.html
# Thread:
1. ## Newton's law of cooling
The rate at which a substance cools in moving air is proportional to the difference between the temperature of the substance and that of the air. If the temperature of the air is 300K and the substance cools from 370K to 340K in 25 minutes, find when the temperature will be 310K.
2. Originally Posted by cazimi
The rate at which a substance cools in moving air is proportional to the difference between the temperature of the substance and that of the air. If the temperature of the air is 300K and the substance cools from 370K to 340K in 25 minutes, find when the temperature will be 310K.
$\frac{dT}{dt} = k(T - 300)$

$\frac{dT}{T-300} = k\, dt$
Integrate both sides
$\ln|T-300| = kt + h$

At $t = 0$, $T = 370$ K, hence $h = \ln 70$.

At $t = 25$, $T = 340$ K:

$\ln|340-300| - \ln 70 = 25k$

$\ln(4/7) = 25k$

$k = \frac{\ln(4/7)}{25}$

Put $T = 310$ in the equation to get the value of $t$: $\ln|310-300| - \ln 70 = kt$, so $t = \frac{25\ln(1/7)}{\ln(4/7)} \approx 87$ minutes.
-----------------------------
Another thing: the sign of k has come out negative, so we can see that the temperature is decreasing (something you can also observe by reading the question).
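A quick numerical check of the numbers above (a sketch in Python, using the closed form $T(t) = 300 + 70e^{kt}$ that follows from the integration):

```python
import math

k = math.log(40 / 70) / 25   # from T(25) = 340, i.e. 340 - 300 = 40
t = math.log(10 / 70) / k    # solve 310 - 300 = 70 * exp(k * t)
print(k, t)                  # k ≈ -0.0224 per minute, t ≈ 86.9 minutes
```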
http://physics.stackexchange.com/questions/47064/basic-question-about-probability-and-measurements?answertab=votes
# Basic question about probability and measurements
Say I have a Galton box, i.e. a ball dropping on a row of solid bodies. Now I want to calculate the probability distribution of the movement of the ball based on the properties of the body (case A). For instance if I change the position of the ball the distribution might change (case B). I want to know the distribution of the ball after it has hit the body, as a function of various properties. Does it even make sense to speak of a probability distribution here? Basically what I had in mind before, is that quantum mechanics is probabilistic and that classical mechanics is deterministic. Does this mean one can actually calculate where each ball will end up, if the measurements are precise enough?
-
## 1 Answer
In classical physics, all the motion of the objects and the behavior after all the recoils is predictable in principle. In practice, there's always some dependence on tiny errors in the knowledge of the initial state; tiny velocities that the elements may have and the motion and rotation of the marble in particular; tiny non-uniformities in the shape of the elements, and so on. The required accuracy in the knowledge of all these things is "exponential" (the error has to be at most $\exp(-X)$ where X is a number much greater than one) if we want to be able to predict the evolution for a long time, e.g. in a tall enough Galton box.
So in practice, the motion isn't predictable, much like it's not predictable when one is throwing dice. The precise behavior of the balls etc. may be described by probabilistic distributions whose width takes the "possible errors" produced in the real world into account. When several such probability distributions are convoluted, one gets some random distribution for the result. The Galton box is an example of that.
In quantum mechanics, one can't even imagine that there exists some sharp, non-probabilistic answer. Even if one knew everything about the objects perfectly, the final state would be undetermined – ambiguous – and only probabilities of different answers could be predicted. While this difference between classical physics and quantum mechanics seems "qualitative" in principle, it is very modest in the operational sense. Even in classical physics, due to the error margins etc., one should have adopted the fact that only probabilistic predictions were possible. Quantum mechanics takes this observation seriously and creates a framework for physics in which precise deterministic predictions are non-existent, not only in practice but even in principle. However, the ways one calculates the probabilities are pretty much analogous.
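An idealised numerical illustration of this convolution picture (a sketch; it assumes each peg deflects the ball left or right independently with probability 1/2, which is exactly the idealisation a real board only approximates):

```python
import numpy as np

rng = np.random.default_rng(0)
rows, balls = 12, 100_000

# Final bin index = number of rightward deflections over 'rows' pegs,
# i.e. a sum of independent coin flips.
bins = rng.integers(0, 2, size=(balls, rows)).sum(axis=1)
freq = np.bincount(bins, minlength=rows + 1) / balls
print(freq)  # close to Binomial(12, 1/2), i.e. nearly Gaussian
```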
-
Thanks. Is there a definition of measurement error? I suppose that comes from information theory.. – RParadox Dec 17 '12 at 16:37
– Luboš Motl Dec 20 '12 at 7:05
http://mathhelpforum.com/advanced-algebra/183820-proof-similar-matrices.html
# Thread:
1. ## proof of similar matrices
$A$ is a $2\times 2$ matrix with

$A^2=-I$

and there is a transformation $T$ for which $T^2=-I$. There is a basis $B=\{v,T(v)\}$ for which

$[T]_B=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$

Prove that $A$ is similar to the matrix $\begin{pmatrix}0&1\\-1&0\end{pmatrix}$.

How I tried: I got $T$ represented in the basis $B$,

$[T]_B=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$

and I got $T$ represented in the standard basis, which is the matrix $A$. So why is $A$ similar to $\begin{pmatrix}0&1\\-1&0\end{pmatrix}$? Because of what? Is it correct?
2. ## Re: proof of similar matrices
Originally Posted by transgalactic
$A$ is a $2\times 2$ matrix with $A^2=-I$ [...] prove that $A$ is similar to $\begin{pmatrix}0&1\\-1&0\end{pmatrix}$ [...] Because of what? Is it correct?
Sadly, this question makes no sense. Is there some way you can reword it?
3. ## Re: proof of similar matrices
Originally Posted by Drexel28
Sadly, this question makes no sense. Is there some way you can reword it?
I agree. If $B=\{v,T(v)\}$ is a basis of $\mathbb{R}^2$ and the coordinates on $B$ are expressed by rows, then $[T]_B=\begin{bmatrix}{\;\;0}&{1}\\{-1}&{0}\end{bmatrix}$. So, it seems we are saying that $A$ is similar to $A$.
4. ## Re: proof of similar matrices
Originally Posted by FernandoRevilla
I agree. If $B=\{v,T(v)\}$ is a basis of $\mathbb{R}^2$ and the coordinates on $B$ are expressed by rows, then $[T]_B=\begin{bmatrix}{\;\;0}&{1}\\{-1}&{0}\end{bmatrix}$. So, it seems we are saying that $A$ is similar to $A$.
This question is the third part of a bigger question, and I solved the first two parts.

$V$ is a linear space, $\dim V=n$, $n>1$, and $T:V\to V$ is a transformation for which $T^2=-I$.

I proved that for the basis $B=\{v,T(v)\}$,

$[T]_B=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$

In the third part I was asked: $A$ is a $2\times 2$ matrix with $A^2=-I$; prove that $A$ is similar to

$\begin{pmatrix}0&1\\-1&0\end{pmatrix}$

I am used to proving that matrices are similar by showing $B=P^{-1}AP$, but here the book does something like "two representations of the same transformation in different bases are similar". I can't understand this. Why is that?
5. ## Re: proof of similar matrices
Originally Posted by transgalactic
in the third part i was asked:
A is 2x2 matrices $A^2=-I$ prove that A is similar to
(0 1)
(-1 0)
That is false. Choose for example $A=\begin{bmatrix}{i}&{0}\\{0}&{i}\end{bmatrix}$ , we have $A^2=-I$. However $\det A=-1$ and $\det \begin{bmatrix}{\;\;0}&{1}\\{-1}&{0}\end{bmatrix}=1$ so, the matrices are not similar.
6. ## Re: proof of similar matrices
Originally Posted by FernandoRevilla
That is false. Choose for example $A=\begin{bmatrix}{i}&{0}\\{0}&{i}\end{bmatrix}$ , we have $A^2=-I$. However $\det A=-1$ and $\det \begin{bmatrix}{\;\;0}&{1}\\{-1}&{0}\end{bmatrix}=1$ so, the matrices are not similar.
But the result is true for matrices with real entries. Let T be the linear transformation whose matrix (with respect to the standard basis) is A. Then $T^2=-I$, so T has no real eigenvalues and hence no (real) eigenvectors. It follows that if x is any nonzero vector then the vectors x and –Tx are linearly independent. If B is the basis consisting of those two vectors then the matrix of T with respect to B is $\bigl[{\scriptstyle{0\atop-1}\:{1\atop0}}\bigr].$ So that matrix is similar to A.
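Spelling out the last step of this argument: with $e_1 = x$ and $e_2 = -Tx$,

$T e_1 = Tx = -e_2, \qquad T e_2 = -T^2 x = x = e_1,$

so the columns of $[T]_B$, i.e. the coordinates of $Te_1$ and $Te_2$ in the basis $B=\{e_1,e_2\}$, are $(0,-1)^T$ and $(1,0)^T$, which is exactly the matrix above.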
7. ## Re: proof of similar matrices
Why if $A^2=-I$ then $T^2=-I$?
8. ## Re: proof of similar matrices
Originally Posted by Opalg
But the result is true for matrices with real entries. Let T be the linear transformation whose matrix (with respect to the standard basis) is A. Then $T^2=-I$, so T has no real eigenvalues and hence no (real) eigenvectors. It follows that if x is any nonzero vector then the vectors x and –Tx are linearly independent. If B is the basis consisting of those two vectors then the matrix of T with respect to B is $\bigl[{\scriptstyle{0\atop-1}\:{1\atop0}}\bigr].$ So that matrix is similar to A.
Of course. These things happen when the formulation of the problem is not clear. For example in the answer #2 we already commented that for a basis $B$ satisfying $B=\{v,T(v)\}$ the matrix of $T$ is $\begin{bmatrix}{\;\;0}&{1}\\{-1}&{0}\end{bmatrix}$ (row representation) or equivalently, for $B'=\{v,-T(v)\}$ is $\begin{bmatrix}{\;\;0}&{1}\\{-1}&{0}\end{bmatrix}$ (column representation). Now in the third part of the problem it seems we have a $2\times 2$ matrix without a reference to the field $\mathbb{K}$ .
9. ## Re: proof of similar matrices
Originally Posted by transgalactic
why if $A^2=-I$ then $T^2=-I$ ?
If $A$ is the matrix of $T$ with respect to a determined basis $B$ and we write the coordinates of the vectors in columns then, by a well known result, $Y=AX$ where $X$ are the coordinates of $x\in V$ with respect to $B$ and $Y$ the coordinates of $T(x)\in V$ with respect to $B$ . Conclude.
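A small numerical illustration of Opalg's construction (a sketch; conjugating $J$ by a random matrix is just one way to manufacture a real $A$ with $A^2=-I$):

```python
import numpy as np

rng = np.random.default_rng(2)
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

# A real 2x2 matrix with A^2 = -I, built by conjugating J.
Q = rng.standard_normal((2, 2))
A = Q @ J @ np.linalg.inv(Q)
assert np.allclose(A @ A, -np.eye(2))

# Basis {x, -Ax} from the argument above (generically these two vectors
# are independent); P is the change-of-basis matrix.
x = np.array([1.0, 0.0])
P = np.column_stack([x, -A @ x])
assert np.allclose(np.linalg.inv(P) @ A @ P, J)
```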
http://unapologetic.wordpress.com/2009/02/02/upper-triangular-matrices/
# The Unapologetic Mathematician
## Upper-Triangular Matrices
Until further notice, I’ll be assuming that the base field $\mathbb{F}$ is algebraically closed, like the complex numbers $\mathbb{C}$.
What does this assumption buy us? It says that the characteristic polynomial of a linear transformation $T$ is — like any polynomial over an algebraically closed field — guaranteed to have a root. Thus any linear transformation $T$ has an eigenvalue $\lambda_1$, as well as a corresponding eigenvector $e_1$ satisfying
$T(e_1)=\lambda_1e_1$
So let’s pick an eigenvector $e_1$ and take the subspace $\mathbb{F}e_1\subseteq V$ it spans. We can take the quotient space $V/\mathbb{F}e_1$ and restrict $T$ to act on it. Why? Because if we take two representatives $v,w\in V$ of the same vector in the quotient space, then $w=v+ce_1$. Then we find
$T(w)=T(v+ce_1)=T(v)+cT(e_1)=T(v)+c\lambda_1e_1$
which represents the same vector as $T(v)$.
Now the restriction of $T$ to $V/\mathbb{F}e_1$ is another linear endomorphism over an algebraically closed field, so its characteristic polynomial must have a root, and it must have an eigenvalue $\lambda_2$ with associated eigenvector $e_2$. But let’s be careful. Does this mean that $e_2$ is an eigenvector of $T$? Not quite. All we know is that
$T(e_2)=\lambda_2e_2+c_{1,2}e_1$
since vectors in the quotient space are only defined up to multiples of $e_1$.
We can proceed like this, pulling off one vector $e_i$ after another. Each time we find
$T(e_i)=\lambda_ie_i+c_{i-1,i}e_{i-1}+c_{i-2,i}e_{i-2}+...+c_{1,i}e_1$
The image of $e_i$ in the $i$th quotient space is a constant times $e_i$ itself, plus a linear combination of the earlier vectors. Further, each vector is linearly independent of the ones that came before, since if it weren’t, then it would be the zero vector in its quotient space. This procedure only grinds to a halt when the number of vectors equals the dimension of $V$, for then the quotient space is trivial, and the linearly independent collection $\{e_i\}$ spans $V$. That is, we’ve come up with a basis.
So, what does $T$ look like in this basis? Look at the expansion above. We can set $t_i^j=c_{i,j}$ for all $i<j$. When $i=j$ we set $t_i^i=\lambda_i$. And in the remaining cases, where $i>j$, we set $t_i^j=0$. That is, the matrix looks like
$\displaystyle\begin{pmatrix}\lambda_1&&*\\&\ddots&\\{0}&&\lambda_d\end{pmatrix}$
Where the star above the diagonal indicates unknown matrix entries, and the zero below the diagonal indicates that all the entries in that region are zero. We call such a matrix “upper-triangular”, since the only nonzero entries in the matrix are on or above the diagonal. What we’ve shown here is that over an algebraically-closed field, any linear transformation has a basis with respect to which the matrix of the transformation is upper-triangular. This is an important first step towards classifying these transformations.
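A numerical illustration (a sketch; it uses SciPy's Schur decomposition, which in fact produces the upper-triangular form with respect to an orthonormal basis, a stronger statement than the one proved above):

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(3)
T = rng.standard_normal((4, 4))

# Over an algebraically closed field (here C), T is similar to an
# upper-triangular matrix whose diagonal holds the eigenvalues.
U, Q = schur(T.astype(complex), output='complex')
assert np.allclose(np.tril(U, -1), 0)      # everything below the diagonal is 0
assert np.allclose(Q @ U @ Q.conj().T, T)  # U is (unitarily) similar to T
print(np.diag(U))                          # the eigenvalues of T
```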
Posted by John Armstrong | Algebra, Linear Algebra
## 19 Comments »
11. no need of such material
Comment by shasha | May 21, 2009 | Reply
12. Oh I’m so sorry that I chose to cover a topic you see as unnecessary. I’ll be sure to run all my future topics by you first.
Comment by | May 21, 2009 | Reply
13. Surely in the first equation you meant to write
T(e_1) = lambda_1 * e_1
(instead of T(v) on LHS)
Comment by A Khan | June 17, 2009 | Reply
14. Yes, sorry. Thanks for catching that.
Comment by | June 17, 2009 | Reply
http://mathoverflow.net/questions/117420
## Hamiltonian actions and contractible loops
Let $(M, \omega)$ be a symplectic manifold and $G$ be a compact Lie group. Suppose we have a Hamiltonian $G$-action on $M$, with moment map $\mu: M \to {\mathfrak g}^*$.
We assume that the moment map is proper in case $M$ is noncompact.
The question is: for any loop $\gamma: S^1 \to G$, and a point $x\in M$, is the loop $t\mapsto \gamma(t) x$ a contractible loop in $M$? So we assume neither $G$ or $M$ is simply-connected.
We may assume that $\gamma$ is actually a 1-parameter subgroup of $G$, generated by a vector $\xi \in {\mathfrak g}$. In the case $M$ is compact, we restrict the moment map to this subgroup, which is equivalent to a real valued function $\mu_\gamma$. Then the gradient flow of this function should push the loop to a critical point, which is a fixed point of this subgroup. Hence this shows that the loop is contractible.
Now if $M$ is noncompact, the gradient flow doesn't necessarily converge to a critical point (could escape to $\pm \infty$). Note that the real valued function $\mu_\gamma$ is not necessarily proper. So the above method fails. But I still guess that the loop should be contractible.
Is there any proof or counter-example? Or should we add some conditions to guarantee this?
-
## 1 Answer
There are counter-examples; I hope they answer your question completely. Just take any non-simply-connected $G$ and consider its action on $T^*G$. The simplest case is:
Let $M$ be the cylinder $S^1\times \mathbb R$ with the symplectic form $ds \wedge dt$. Then the Hamiltonian $H=t$ defines an $S^1$-action on the cylinder.
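To spell that out (using the sign convention $\iota_{X_H}\omega = dH$; the other convention just flips the direction of the flow):

$\omega = ds \wedge dt, \quad H = t \;\Longrightarrow\; X_H = \partial_s, \qquad \phi_\theta(s,t) = (s+\theta,\, t),$

so every orbit is a horizontal circle $S^1 \times \{t\}$, a generator of $\pi_1(S^1 \times \mathbb{R}) \cong \mathbb{Z}$, hence non-contractible.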
One more counterexample. Consider the action of $SO(3)$ on its cotangent bundle. Clearly this action is Hamiltonian. Let us take the subgroup $S^1\subset SO(3)$ that represents the non-zero element of $\pi_1(SO(3))$. Obviously all the orbits of the action of this $S^1$ on $T^*(SO(3))$ will not be contractible.
So we see that in the case the Lie group is not simply-connected it always admits a "bad" action.
-
Thanks. But if we consider moment maps which are like "quadratic functions", for general compact Lie group, is there still counter-examples? Or is there some other conditions to guarantee the similar situation in the compact case (i.e., the convergence of gradient flow)? – Guangbo Xu Dec 29 at 15:35
Dear Guangbo, unfortunately I don't understand what you mean by ""if we consider moment maps which are like "quadratic functions", for general compact Lie group"". What do you mean by "general compact Lie group"? What is "moment maps which are like "quadratic functions""? Probably you should give the example you have in mind. – Dmitri Dec 30 at 11:27
http://mathhelpforum.com/advanced-algebra/114811-cartesian-product-cyclic-groups.html
# Thread:
1. ## Cartesian product and cyclic groups
A pressing question:
If we are given two cyclic groups, and we take a cartesian product of the two, is the result also a cyclic group?
$\mathbb{Z}_2 \times \mathbb{Z}_2$ is a group of order 4 in which all non-identity elements have order 2
3. Excellent! Thank you very much
In fact, $\mathbb{Z}_n\oplus\mathbb{Z}_m$ is cyclic iff $(m,n)=1$. To see why, note that we need some element of $\mathbb{Z}_n\oplus\mathbb{Z}_m$ to be of exactly order $mn$, but it is clear that the largest order of an element of $\mathbb{Z}_n\oplus\mathbb{Z}_m$ is $\text{lcm}(m,n)$. Therefore, lastly noting that $\text{lcm}(m,n)=\frac{|mn|}{(m,n)}$, we conclude that $(m,n)=1$. And since all cyclic groups of order $\ell$ are isomorphic to $\mathbb{Z}_{\ell}$, this is the case for any cyclic groups.
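A brute-force check of the criterion for small cases (a sketch; the helper names are my own, and `order` just iterates until it finds the identity):

```python
from math import gcd
from itertools import product

def is_cyclic_product(n, m):
    # Z_n (+) Z_m is cyclic iff some element has order n*m.
    def order(a, b):
        k = 1
        while (k * a % n, k * b % m) != (0, 0):
            k += 1
        return k
    return max(order(a, b) for a, b in product(range(n), range(m))) == n * m

for n, m in [(2, 2), (2, 3), (4, 6), (3, 5)]:
    print((n, m), is_cyclic_product(n, m), gcd(n, m) == 1)  # columns agree
```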
5. Originally Posted by Drexel28
In fact, $\mathbb{Z}_n\oplus\mathbb{Z}_m$ is cyclic iff $(m,n)=1$. [...]
thank you for further elucidating the problem. We had recently learned about this part: " $\text{lcm}(m,n)=\frac{|mn|}{(m,n)}$ we conclude that $(m,n)=1$" So it was pretty clear from the previous post with Z2 x Z2, but your explanation was definitely something that went into my notes. Thank you!
http://en.wikipedia.org/wiki/Knuth%E2%80%93Morris%E2%80%93Pratt_algorithm
# Knuth–Morris–Pratt algorithm
In computer science, the Knuth–Morris–Pratt string searching algorithm (or KMP algorithm) searches for occurrences of a "word" `W` within a main "text string" `S` by employing the observation that when a mismatch occurs, the word itself embodies sufficient information to determine where the next match could begin, thus bypassing re-examination of previously matched characters.
The algorithm was conceived in 1974 by Donald Knuth and Vaughan Pratt, and independently by James H. Morris. The three published it jointly in 1977.
## Background
A string matching algorithm wants to find the starting index `m` in string `S[]` that matches the search word `W[]`.
A straightforward algorithm simply tries successive values of `m` until it finds a match or fails. Each trial involves using a loop that checks `S[m+i] = W[i]` for each character `W[i]` in the search word.
Usually, the trial check will quickly reject the trial match. If the strings are uniformly distributed random letters, then the chance that characters match is 1 in 26. In most cases, the trial check will reject the match at the initial letter. The chance that the first two letters will match is 1 in 26^2 (1 in 676). So if the characters are random, then the expected complexity of searching string `S[]` of length k is on the order of k comparisons or O(k). The expected performance is very good. If `S[]` is 1 billion characters and `W[]` is 1000 characters, then the string search should complete after about 1 billion character comparisons (which might take a few seconds).
That expected performance is not guaranteed. If the strings are not random, then checking a trial `m` may take many character comparisons. The worst case is if the two strings match in all but the last letter. Imagine that the string `S[]` consists of 1 billion characters that are all A, and that the word `W[]` is 999 A characters terminating in a final B character. The simple string matching algorithm will now examine 1000 characters at each trial position before rejecting the match and advancing the trial position. The simple string search example would now take about 1000 character comparisons times 1 billion positions for 1 trillion character comparisons (which might take an hour). If the length of `W[]` is n, then the worst case performance is O(k⋅n).
The KMP algorithm does not have the horrendous worst case performance of the straightforward algorithm. KMP spends a little time precomputing a table (on the order of the size of `W[]`, O(n)), and then it uses that table to do an efficient search of the string in O(k). KMP would search the previous example in a dozen seconds.
The difference is that KMP makes use of previous match information that the straightforward algorithm does not. In the example above, when KMP sees a trial match fail on the 1000th character (`i`=999) because `S[m+999]≠W[999]`, it will increment `m` by 1, but it will know that the first 998 characters at the new position already match. KMP matched 999 A characters before discovering a mismatch at the 1000th character (position 999). Advancing the trial match position `m` by one throws away the first A, so KMP knows there are 998 A characters that match `W[]` and does not retest them; that is, KMP sets `i` to 998. KMP maintains its knowledge in the precomputed table and two state variables. When KMP discovers a mismatch, the table determines how much KMP will increase `m` and where it will resume testing (`i`).
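As a concrete reference point, here is the straightforward algorithm described above as a short Python function (an illustration added alongside the article's pseudocode, not part of the original text):

```python
# Naive search: try every start position m and compare character by
# character; worst case O(k*n) for |S| = k and |W| = n.
def naive_search(S, W):
    for m in range(len(S) - len(W) + 1):
        i = 0
        while i < len(W) and S[m + i] == W[i]:
            i += 1
        if i == len(W):
            return m              # match begins at index m
    return len(S)                 # not found (same convention as KMP below)

print(naive_search("ABC ABCDAB ABCDABCDABDE", "ABCDABD"))   # 15
```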
## KMP algorithm
### Worked example of the search algorithm
To illustrate the algorithm's details, we work through a (relatively artificial) run of the algorithm. At any given time, the algorithm is in a state determined by two integers:
• `m` which denotes the position within `S` which is the beginning of a prospective match for `W`
• `i` the index in `W` denoting the character currently under consideration.
In each step we compare `S[m+i]` with `W[i]` and advance if they are equal. This is depicted, at the start of the run, like
```
             1         2
m: 01234567890123456789012
S: ABC ABCDAB ABCDABCDABDE
W: ABCDABD
i: 0123456
```
We proceed by comparing successive characters of `W` to "parallel" characters of `S`, moving from one to the next if they match. However, in the fourth step, we get `S[3]` is a space and `W[3] = 'D'`, a mismatch. Rather than beginning to search again at `S[1]`, we note that no `'A'` occurs between positions 0 and 3 in `S` except at 0; hence, having checked all those characters previously, we know there is no chance of finding the beginning of a match if we check them again. Therefore we move on to the next character, setting `m = 4` and `i = 0`. (m will first become 3 since `m + i - T[i] = 0 + 3 - 0 = 3` and then become 4 since `T[0] = -1`)
```
             1         2
m: 01234567890123456789012
S: ABC ABCDAB ABCDABCDABDE
W:     ABCDABD
i:     0123456
```
We quickly obtain a nearly complete match `"ABCDAB"` when, at `W[6]` (`S[10]`), we again have a discrepancy. However, just prior to the end of the current partial match, we passed an `"AB"` which could be the beginning of a new match, so we must take this into consideration. As we already know that these characters match the two characters prior to the current position, we need not check them again; we simply reset `m = 8`, `i = 2` and continue matching the current character. Thus, not only do we omit previously matched characters of `S`, but also previously matched characters of `W`.
```
             1         2
m: 01234567890123456789012
S: ABC ABCDAB ABCDABCDABDE
W:         ABCDABD
i:         0123456
```
This search fails immediately, however, as the pattern still does not contain a space, so as in the first trial, we return to the beginning of `W` and begin searching at the next character of `S`: `m = 11`, reset `i = 0`. (m will first become 10 since `m + i - T[i] = 8 + 2 - 0 = 10` and then become 11 since `T[0] = -1`)
```
             1         2
m: 01234567890123456789012
S: ABC ABCDAB ABCDABCDABDE
W:            ABCDABD
i:            0123456
```
Once again we immediately hit upon a match `"ABCDAB"` but the next character, `'C'`, does not match the final character `'D'` of the word `W`. Reasoning as before, we set `m = 15`, to start at the two-character string `"AB"` leading up to the current position, set `i = 2`, and continue matching from the current position.
```
             1         2
m: 01234567890123456789012
S: ABC ABCDAB ABCDABCDABDE
W:                ABCDABD
i:                0123456
```
This time we are able to complete the match, whose first character is `S[15]`.
### Description of and pseudocode for the search algorithm
The above example contains all the elements of the algorithm. For the moment, we assume the existence of a "partial match" table `T`, described below, which indicates where we need to look for the start of a new match in the event that the current one ends in a mismatch. The entries of `T` are constructed so that if we have a match starting at `S[m]` that fails when comparing `S[m + i]` to `W[i]`, then the next possible match will start at index `m + i - T[i]` in `S` (that is, `T[i]` is the amount of "backtracking" we need to do after a mismatch). This has two implications: first, `T[0] = -1`, which indicates that if `W[0]` is a mismatch, we cannot backtrack and must simply check the next character; and second, although the next possible match will begin at index `m + i - T[i]`, as in the example above, we need not actually check any of the `T[i]` characters after that, so that we continue searching from `W[T[i]]`. The following is a sample pseudocode implementation of the KMP search algorithm.
```algorithm kmp_search:
input:
an array of characters, S (the text to be searched)
an array of characters, W (the word sought)
output:
an integer (the zero-based position in S at which W is found)
define variables:
an integer, m ← 0 (the beginning of the current match in S)
an integer, i ← 0 (the position of the current character in W)
an array of integers, T (the table, computed elsewhere)
while m+i is less than the length of S, do:
if W[i] = S[m + i],
if i equals the (length of W)-1,
return m
let i ← i + 1
otherwise,
let m ← m + i - T[i],
if T[i] is greater than -1,
let i ← T[i]
else
let i ← 0
(if we reach here, we have searched all of S unsuccessfully)
return the length of S
```
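The same algorithm transcribes almost line for line into Python (my translation of the pseudocode above; as in the pseudocode, the table `T` is computed elsewhere, by the table-building algorithm in the next section):

```python
# Direct translation of the kmp_search pseudocode. T is the partial match
# table for W, e.g. as produced by kmp_table() defined below.
def kmp_search(S, W, T):
    m, i = 0, 0
    while m + i < len(S):
        if W[i] == S[m + i]:
            if i == len(W) - 1:
                return m          # full match starting at index m
            i += 1
        else:
            m += i - T[i]         # slide the window forward
            i = T[i] if T[i] > -1 else 0
    return len(S)                 # searched all of S unsuccessfully
```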
### Efficiency of the search algorithm
Assuming the prior existence of the table `T`, the search portion of the Knuth–Morris–Pratt algorithm has complexity O(n), where `n` is the length of `S` and the `O` is big-O notation. Except for the fixed overhead incurred in entering and exiting the function, all the computations are performed in the `while` loop. To bound the number of iterations of this loop, observe that `T` is constructed so that if a match which had begun at `S[m]` fails while comparing `S[m + i]` to `W[i]`, then the next possible match must begin at `S[m + (i - T[i])]`. In particular the next possible match must occur at a higher index than `m`, so that `T[i] < i`.
This fact implies that the loop can execute at most `2n` times. For, in each iteration, it executes one of the two branches in the loop. The first branch invariably increases `i` and does not change `m`, so that the index `m + i` of the currently scrutinized character of `S` is increased. The second branch adds `i - T[i]` to `m`, and as we have seen, this is always a positive number. Thus the location `m` of the beginning of the current potential match is increased. Now, the loop ends if `m + i = n`; therefore each branch of the loop can be reached at most `n` times, since they respectively increase either `m + i` or `m`, and `m ≤ m + i`: if `m = n`, then certainly `m + i ≥ n`, so that, since it increases by unit increments at most, we must have had `m + i = n` at some point in the past, and therefore either way we would be done.
Thus the loop executes at most `2n` times, showing that the time complexity of the search algorithm is `O(n)`.
Here is another way to think about the runtime: say we begin to match `W` against `S` at positions `i` and `p`; if `W` occurs as a substring of `S` at `p`, then `W[0 through m] == S[p through p+m]`. Upon success, that is, when the word and the text match at the current positions (`W[i] == S[p+i]`), we increase `i` by 1. Upon failure, that is, when they do not match (`W[i] != S[p+i]`), the text pointer is kept still while the word pointer rolls back a certain amount (`i = T[i]`, where `T` is the jump table), and we attempt to match `W[T[i]]` with `S[p+i]`. The total amount by which `i` can roll back is bounded by the total amount by which it has advanced: at any failure, we can only roll back as much as we have progressed up to that failure. It is then clear that the runtime is at most `2n`.
## "Partial match" table (also known as "failure function")
The goal of the table is to allow the algorithm not to match any character of `S` more than once. The key observation about the nature of a linear search that allows this to happen is that in having checked some segment of the main string against an initial segment of the pattern, we know exactly at which places a new potential match which could continue to the current position could begin prior to the current position. In other words, we "pre-search" the pattern itself and compile a list of all possible fallback positions that bypass a maximum of hopeless characters while not sacrificing any potential matches in doing so.
We want to be able to look up, for each position in `W`, the length of the longest possible initial segment of `W` leading up to (but not including) that position, other than the full segment starting at `W[0]` that just failed to match; this is how far we have to backtrack in finding the next match. Hence `T[i]` is exactly the length of the longest possible proper initial segment of `W` which is also a segment of the substring ending at `W[i - 1]`. We use the convention that the empty string has length 0. Since a mismatch at the very start of the pattern is a special case (there is no possibility of backtracking), we set `T[0] = -1`, as discussed above.
### Worked example of the table-building algorithm
We consider the example of `W = "ABCDABD"` first. We will see that it follows much the same pattern as the main search, and is efficient for similar reasons. We set `T[0] = -1`. To find `T[1]`, we must discover a proper suffix of `"A"` which is also a prefix of `W`. But there are no proper suffixes of `"A"`, so we set `T[1] = 0`. Likewise, `T[2] = 0`.
Continuing to `T[3]`, we note that there is a shortcut to checking all suffixes: suppose we discovered a proper suffix which is also a prefix, ending at `W[2]` and of length 2 (the maximum possible); then dropping its last character leaves a proper suffix of length 1 ending at `W[1]` which is also a prefix, and we already determined in computing `T[2]` that no such suffix exists. Hence at each stage, the shortcut rule is that one needs to consider checking suffixes of a given size m+1 only if a valid suffix of size m was found at the previous stage (e.g. T[x] = m).
Therefore we need not even concern ourselves with substrings having length 2, and as in the previous case the sole one with length 1 fails, so `T[3] = 0`.
We pass to the subsequent `W[4]`, `'A'`. The same logic shows that the longest substring we need consider has length 1, and although in this case `'A'` does work, recall that we are looking for segments ending before the current character; hence `T[4] = 0` as well.
Considering now the next character, `W[5]`, which is `'B'`, we exercise the following logic: if we were to find a subpattern beginning before the previous character `W[4]`, yet continuing to the current one `W[5]`, then in particular it would itself have a proper initial segment ending at `W[4]` yet beginning before it, which contradicts the fact that we already found that `'A'` itself is the earliest occurrence of a proper segment ending at `W[4]`. Therefore we need not look before `W[4]` to find a terminal string for `W[5]`. Therefore `T[5] = 1`.
Finally, we see that the next character in the ongoing segment starting at `W[4] = 'A'` would be `'B'`, and indeed this is also `W[5]`. Furthermore, the same argument as above shows that we need not look before `W[4]` to find a segment for `W[6]`, so that this is it, and we take `T[6] = 2`.
Therefore we compile the following table:
```
i     0  1  2  3  4  5  6
W[i]  A  B  C  D  A  B  D
T[i] -1  0  0  0  0  1  2
```
Another, more interesting and complex example:
```
i     0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
W[i]  P  A  R  T  I  C  I  P  A  T  E     I  N     P  A  R  A  C  H  U  T  E
T[i] -1  0  0  0  0  0  0  0  1  2  0  0  0  0  0  0  1  2  3  0  0  0  0  0
```
### Description of pseudocode for the table-building algorithm
The example above illustrates the general technique for assembling the table with a minimum of fuss. The principle is that of the overall search: most of the work was already done in getting to the current position, so very little needs to be done in leaving it. The only minor complication is that the logic which is correct late in the string erroneously gives non-proper substrings at the beginning. This necessitates some initialization code.
```algorithm kmp_table:
input:
an array of characters, W (the word to be analyzed)
an array of integers, T (the table to be filled)
output:
nothing (but during operation, it populates the table)
define variables:
an integer, pos ← 2 (the current position we are computing in T)
an integer, cnd ← 0 (the zero-based index in W of the next character of the current candidate substring)
(the first few values are fixed but different from what the algorithm might suggest)
let T[0] ← -1, T[1] ← 0
while pos is less than the length of W, do:
(first case: the substring continues)
if W[pos - 1] = W[cnd], let cnd ← cnd + 1, T[pos] ← cnd, pos ← pos + 1
(second case: it doesn't, but we can fall back)
otherwise, if cnd > 0, let cnd ← T[cnd]
(third case: we have run out of candidates. Note cnd = 0)
otherwise, let T[pos] ← 0, pos ← pos + 1
```
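A matching Python translation (again mine, not part of the original article), checked against the table for `"ABCDABD"` above and against the search routine from the earlier section:

```python
# Direct translation of the kmp_table pseudocode; T[0] = -1 and T[1] = 0
# are fixed, and the loop fills in the remaining entries.
def kmp_table(W):
    T = [0] * max(len(W), 1)
    T[0] = -1
    pos, cnd = 2, 0
    while pos < len(W):
        if W[pos - 1] == W[cnd]:      # first case: the substring continues
            cnd += 1
            T[pos] = cnd
            pos += 1
        elif cnd > 0:                 # second case: fall back in the pattern
            cnd = T[cnd]
        else:                         # third case: out of candidates, cnd = 0
            T[pos] = 0
            pos += 1
    return T

W = "ABCDABD"
print(kmp_table(W))                                             # [-1, 0, 0, 0, 0, 1, 2]
print(kmp_search("ABC ABCDAB ABCDABCDABDE", W, kmp_table(W)))   # 15
```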
### Efficiency of the table-building algorithm
The complexity of the table algorithm is `O(n)`, where `n` is the length of `W`. Since, except for some initialization, all the work is done in the `while` loop, it is sufficient to show that this loop executes in `O(n)` time, which will be done by simultaneously examining the quantities `pos` and `pos - cnd`. In the first branch, `pos - cnd` is preserved, as both `pos` and `cnd` are incremented simultaneously, but naturally, `pos` is increased. In the second branch, `cnd` is replaced by `T[cnd]`, which we saw above is always strictly less than `cnd`, thus increasing `pos - cnd`. In the third branch, `pos` is incremented and `cnd` is not, so both `pos` and `pos - cnd` increase. Since `pos ≥ pos - cnd`, this means that at each stage either `pos` or a lower bound for `pos` increases; therefore since the algorithm terminates once `pos = n`, and `pos - cnd` begins at `2`, it must terminate after at most `2n` iterations of the loop. Therefore the complexity of the table algorithm is `O(n)`.
## Efficiency of the KMP algorithm
Since the two portions of the algorithm have, respectively, complexities of `O(k)` and `O(n)` (where `k` is the length of `S` and `n` is the length of `W`), the complexity of the overall algorithm is `O(n + k)`.
These complexities are the same, no matter how many repetitive patterns are in `W` or `S`.
## Variants
A real-time version of KMP can be implemented using a separate failure function table for each character in the alphabet. If a mismatch occurs on character $x$ in the text, the failure function table for character $x$ is consulted for the index $i$ in the pattern at which the mismatch took place. This will return the length of the longest substring ending at $i$ matching a prefix of the pattern, with the added condition that the character after the prefix is $x$. With this restriction, character $x$ in the text need not be checked again in the next phase, and so only a constant number of operations are executed between the processing of each index of the text. This satisfies the real-time computing restriction.
The Booth algorithm uses a modified version of the KMP preprocessing function to find the lexicographically minimal string rotation. The failure function is progressively calculated as the string is rotated.
## References
• Knuth, Donald; Morris, James H., jr; Pratt, Vaughan (1977). "Fast pattern matching in strings". SIAM Journal on Computing 6 (2): 323–350. doi:10.1137/0206024. Zbl 0372.68005.
• Cormen, Thomas; Lesiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001). "Section 32.4: The Knuth-Morris-Pratt algorithm". (Second ed.). MIT Press and McGraw-Hill. pp. 923–931. ISBN 0-262-03293-7. Zbl 1047.68161.
• Crochemore, Maxime; Rytter, Wojciech (2003). Jewels of stringology. Text algorithms. River Edge, NJ: World Scientific. pp. 20–25. ISBN 981-02-4897-0. Zbl 1078.68151.
• Szpankowski, Wojciech (2001). Average case analysis of algorithms on sequences. Wiley-Interscience Series in Discrete Mathematics and Optimization. With a foreword by Philippe Flajolet. Chichester: Wiley. pp. 15–17,136–141. ISBN 0-471-24063-X. Zbl 0968.68205.
|
http://physics.stackexchange.com/questions/38707/physics-behind-wheel-slipping/38881
|
Physics behind Wheel Slipping
Let's say that I'm in a car and I apply full acceleration suddenly. The wheels then slip, and the car doesn't move forward much.
But if I start with some gentle, constant acceleration, no slipping appears and the car moves normally. I think it's related to some friction mechanism.
But I don't understand why the wheels slip in one case and not in the other. It's as if the rules change when the demanded acceleration is high.
Also, at each step shouldn't the friction force $F_s$ be equal to the force $F$ in the other direction? Any physical explanations?
-
The law of inertia. – Keegan McCarthy Oct 2 '12 at 3:58
4 Answers
It's hard to make the wheels spin at high speeds because you're in a higher gear, so the torque at the wheels is less. So I assume you are only asking about wheel spin in first gear i.e. it's quite easy to spin the wheels when pulling away in first gear but much harder if e.g. you're travelling at 10 mph in first gear.
The reason is that if you're stationary and drop the clutch, the angular momentum of the engine contributes to the torque. That is, the torque at the wheels is the torque from the engine plus the torque from angular momentum stored in the flywheel, crankshaft etc. This happens because the engine is spinning faster than it would if the clutch were engaged, so engaging the clutch slows the engine speed. The extra torque is given by:
$$\tau = I\frac{d\omega}{dt}$$
where $I$ is the moment of inertia of the spinning bits of the engine and $\omega$ is the engine speed, so $d\omega/dt$ is the rate of change of engine speed. If you drop the clutch the engine speed changes rapidly so $d\omega/dt$ is large and the extra torque is large. If you ease the clutch out $d\omega/dt$ is small so the extra torque is small and the wheels won't spin.
When you're driving at (e.g.) a steady 10 mph the engine speed matches the wheel speed, so if you now suddenly stamp on the accelerator it's only the torque from the engine that's available to spin the wheels. You don't get the contribution from $d\omega/dt$.
To see this try driving at 5 mph, then disengage the clutch, rev the engine and drop the clutch. As the clutch bites the wheels will spin just as they do when the car is stationary.
It's worth noting that a powerful car can spin the wheels in first gear even without playing with the clutch. In fact an old sports car I had many years ago would spin the wheels in second gear in the dry and in third gear if the road was wet!
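To get a feel for the size of this $I\,d\omega/dt$ term, here is a back-of-envelope calculation in Python. Every number in it (inertia, rpm drop, clutch engagement time) is an assumption chosen purely for illustration, not data from the answer:

```python
# Extra torque tau = I * domega/dt for an assumed flywheel-plus-crank
# inertia and an assumed rpm drop as the clutch engages.
import math

I = 0.25                                # kg m^2 (assumed)
rpm_to_rad = 2 * math.pi / 60
domega = (1000 - 3000) * rpm_to_rad     # engine drops 3000 -> 1000 rpm

for dt in (0.2, 2.0):                   # dropping vs easing out the clutch
    tau = abs(I * domega / dt)
    print(f"clutch engaged over {dt} s -> extra torque ~ {tau:.0f} N m")
# over 0.2 s: ~260 N m extra; over 2 s: ~26 N m -- an order of magnitude less
```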
-
The car's engine can only control the angular acceleration provided to the wheels. The more you press on gas, the more is the angular acceleration of the wheels.
It is friction's responsibility to convert the angular acceleration of the wheels into linear acceleration of the car.
Now consider a wheel of radius $r$, rotating with an angular acceleration $\alpha$. If there is no slippage, that means the wheel is moving forward with a linear acceleration of $\alpha r$. And if the car's mass is $m$, that would mean there must be friction of amount $m\alpha r$ acting on the car. However, if this quantity is larger than the maximum friction the ground can supply (roughly $\mu N$, where $\mu$ is the coefficient of static friction and $N$ is the normal load on the driven wheels), the wheels will slip.
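As a rough illustration of that last condition (a sketch of mine with assumed numbers; real traction limits depend on the load carried by the driven wheels, the tyres and the surface):

```python
# Compare the friction demanded by m * alpha * r against a crude estimate
# mu * m * g of what the ground can supply. All numbers are assumptions.
g = 9.81       # m/s^2
m = 1200.0     # kg, car mass (assumed)
r = 0.3        # m, wheel radius (assumed)
mu = 0.9       # static friction coefficient, dry road (assumed)

def slips(alpha):                    # alpha: wheel angular acceleration, rad/s^2
    needed = m * alpha * r           # friction needed for a = alpha * r
    available = mu * m * g           # crude traction limit
    return needed > available

print(slips(10.0))   # False: demands a = 3 m/s^2, well within grip
print(slips(50.0))   # True:  demands a = 15 m/s^2, beyond mu*g ~ 8.8 m/s^2
```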
-
1
My whole question shrinks to your last (unexplained) sentence: "if this quantity is larger than what the ground can support" – Royi Namir Sep 30 '12 at 15:33
Your answer is actually explaining OP's question... You just explained how the vehicle moves..! – Ϛѓăʑɏ βµԂԃϔ Sep 30 '12 at 17:39
I'm sorry, perhaps I misunderstood the question. OP seems to be confused about why the rules change with speed. I just pointed out that the rules are the same for all speeds. The ground does not really behave differently when the car is moving fast. It's just that it is not able to provide the amount of friction needed to support a high acceleration. Anyway, I apologise if this is not what OP was asking. – Vinayak Pathak Oct 1 '12 at 0:47
Traction (The friction between a moving body relative to the surface) plays an important role here, 'cause one or more tires in the car lose traction and lead to Wheel Spin (i.e.) The car remains slipping until it attains some stable Traction. This is best explained via Starting Tractive Effort. It is an important factor that is given some higher priority in Railway Engineering. They use "Locomotive Wheel-slip" instead of our "Wheel-spin"..!
This is because the weight of a car is too much to pull immediately at a given period of time (i.e.) Power-to-weight ratio should also be noticed. But, it's less for vehicles and great for Locomotives and it's calculated using the Curb Weight of vehicles.
Surface conditions: This slipping is more common in winters because the coefficient of friction is too low with lubricants like water, oil, grease, mud, etc. in the way. Hence, the cold water between the road and the tires prevents them from sticking to the road. In a more specific manner, the differentials provide enough torque for the wheels to spin. A similar thing applies to the puck in Ice Hockey..!
Rotational inertia also plays a role here, because the engine and its regulator wheel (flywheel) are at a higher RPM than the transmission shaft of the heavy vehicle, which the gear tries to bring up to the same velocity starting from rest..! (which makes the situation more complicated...)
You could see this in most common Drag races and it's called a Burnout where those racers release the clutch and accelerate while holding the brakes. They even use reserved wet tracks as Burnout boxes for proving their freestyle. But, the only difference is that those guys are doing it in a purpose, while here - It happens when you have no experience regarding this..!
-
Slipping happens when the force applied to move the car is larger than the friction between the wheel and the ground can resist. Friction is needed to stop the wheel from slipping.
There is a difference between static and kinetic friction. As long as the wheels are not slipping, the contact point is stationary with respect to the ground and static friction applies. Once the wheels slip, the contact point actually moves on the ground, i.e. kinetic friction. While the contact point is stationary, the applied force exactly matches the friction force - up to the point where the force is greater than the maximum friction possible between tyre and ground.
Kinetic friction is lower than static friction so, once the force applied is sufficient to overcome the friction, the required force drops, which makes it even easier for the wheels to spin. At that point you need to reduce the force significantly to stop the slipping.
If you reduce the friction (e.g. on ice) slipping is much easier, but the same rules still apply. The same rules also apply in the reverse situation, when you are braking. If you apply too much force the wheels will lock up, and you're into kinetic friction. At that point you need to release the brakes and try again. This is what ABS systems do automatically - hence the "rattling" when you brake hard.
|
http://crypto.stackexchange.com/questions/5528/exhausting-the-entropy-of-a-hash-function/5531
|
Exhausting the entropy of a hash function
In the case of password storage, consider the following:
I have an idea that one can exhaust the entropy of the input to the MD5 function by using a 128-bit random value as the password (indeed, this holds for any hash function, using its output length as the input length). Is this a correct assumption, or is the entropy exhausted at 123.4 bits, this being the best attack to date? Or does this only apply to hash functions that, for every value in the interval $[0, 2^{L}]$, provide another unique value in the same interval?
I hope you understand what I'm trying to ask here - I see that I have a hard time explaining it clearly. What I want to do with this idea is argue that in the case of MD5 stored passwords, there is no reason to use passwords with a higher entropy than the hash itself.
-
3
MD5 restricted to 128-bit inputs is likely not injective (and thus also not surjective), independently of any attacks on the hash function. A hash function models a random function, not a random permutation. But you are right, there is no point in having passwords with higher entropy than the hash output size. – Paŭlo Ebermann♦ Nov 30 '12 at 8:34
1 Answer
Entropy is not gas -- you do not "consume" it.
In the case of hashing passwords, entropy is a measure of what the password could have been. A password with "$n$ bits of entropy" is a password such that breaking it by dictionary attack (trying potential passwords until the right one is found) has average cost $2^{n-1}$.
It is useless to have a password entropy much beyond the output length of the employed hash function, because if you hash to $k$ bits, then trying random passwords will succeed with probability $2^{-k}$, hence average cost $2^{k}$. Thus, there is no need to go beyond $k+1$ bits of entropy for the password.
It is also useless to have a password entropy beyond the point where dictionary attacks are ludicrously expensive, regardless of the hash function output size. With today's technology, an 80-bit password entropy is already enough to defeat such endeavours. Actually, if the password hashing is done properly (with a slow password processing function, like bcrypt), then lower entropies are already fine (that's the point of slow hashing: to make low password entropy more tolerable).
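As a toy illustration of these two caps (my sketch, not a standard API; the 12-bit figure below stands for an assumed bcrypt-style work factor of $2^{12}$ hash iterations):

```python
# The cheapest attack costs (in bits) the minimum of: a dictionary attack
# on the password (entropy plus any slow-hashing work factor) and blind
# guessing against the hash output.
def effective_bits(password_entropy, hash_output_bits, slow_hash_cost_bits=0):
    dictionary_attack = password_entropy + slow_hash_cost_bits
    random_guessing = hash_output_bits
    return min(dictionary_attack, random_guessing)

print(effective_bits(200, 128))      # 128: entropy beyond the output size is wasted
print(effective_bits(40, 128, 12))   # 52: slow hashing props up a weak password
```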
-
Would you agree that the max. reasonable entropy of a password is equal to that of the best preimage attack? This would be the threshold where dictionary and preimage resistance meet. – Henning Klevjer Nov 30 '12 at 15:32
1
There are details. There are also online dictionary attacks, which are performed against the live target server, without knowing the hash value. Specific preimage resistance of the hash function is irrelevant for that. But if the attacker knows the hash value, then yes, there is no need, even academic, to push the password entropy beyond the preimage resistance of the hash function. – Thomas Pornin Nov 30 '12 at 16:28
|
http://mathhelpforum.com/calculus/85205-taylor-series-convergence.html
|
# Thread:
1. ## Taylor series convergence
I have a function $f(x) = 1/\sqrt{1-x}$ defined for $x$ in
$]-1,1[$ and centered at $a = 0$. The Taylor polynomial for this would be $T_{n}(x) = \sum_{k=0}^n \frac {f^{(k)}(0)x^k}{k!}$.
Now for every $x \in ]0,1[$, there is $\xi = \xi(x) \in ]0,x[$ such that the error would be $R_{n}(x) = \frac {xf^{(n+1)}(\xi)(x-\xi)^n}{n!}$.
Using this, how do I prove that the Taylor polynomials $T_{n}$ converge uniformly over $[0,x]$?
By doing a ratio test I get $\lim_{n\rightarrow\infty} \left|\frac {R_{n+1}(x)}{R_{n}(x)}\right| = \frac{2n+3}{2n+2}$. How is this supposed to converge? Can somebody please explain this?
2. Originally Posted by chainrule
I have a function $f(x) = 1/\sqrt{1-x}$ defined for $x$ in
$]-1,1[$ and centered at $a = 0$. The Taylor polynomial for this would be $T_{n}(x) = \sum_{k=0}^n \frac {f^{(k)}(0)x^k}{k!}$.
Now for every $x \in ]0,1[$, there is $\xi = \xi(x) \in ]0,x[$ such that the error would be $R_{n}(x) = \frac {xf^{(n+1)}(\xi)(x-\xi)^n}{n!}$.
Using this, how do I prove that the Taylor polynomials $T_{n}$ converge uniformly over $[0,x]$?
By doing a ratio test I get $\lim_{n\rightarrow\infty} \left|\frac {R_{n+1}(x)}{R_{n}(x)}\right| = \frac{2n+3}{2n+2}$. How is this supposed to converge? Can somebody please explain this?
I am sure you learned this long ago- perhaps too long ago! Divide both numerator and denominator by n to get $\frac{2+ \frac{3}{n}}{2+ \frac{2}{n}}$. Now what is the limit of that as n goes to infinity?
3. I am sure you learned this long ago- perhaps too long ago! Divide both numerator and denominator by n to get $\frac{2+ \frac{3}{n}}{2+ \frac{2}{n}}$. Now what is the limit of that as n goes to infinity?
Yes, but this limit would give me a value of 1, whereas convergence would mean that the limit has to tend to 0 as $n$ tends to infinity. Also this series is supposed to converge uniformly over $\xi \in ]0,x[$ for every $x \in ]0,1[$. How is that possible?
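A numerical sanity check (my addition, not from the thread): the Taylor coefficients of $1/\sqrt{1-x}$ about $0$ have the closed form $f^{(k)}(0)/k! = \binom{2k}{k}/4^k$, so one can watch $T_n(x)$ approach $f(x)$ even though the ratio computed above tends to $1$ — a ratio of successive remainder terms tending to $1$ is inconclusive, not a proof of divergence:

```python
# Partial sums of 1/sqrt(1-x) = sum_k binom(2k, k) (x/4)^k at x = 0.5.
from math import comb, sqrt

def T(n, x):
    return sum(comb(2 * k, k) * (x / 4) ** k for k in range(n + 1))

x = 0.5
f = 1 / sqrt(1 - x)
for n in (5, 10, 20, 40):
    print(n, abs(f - T(n, x)))   # the error keeps shrinking, roughly like x^n
```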
|
http://math.stackexchange.com/questions/tagged/characters?sort=votes&pagesize=15
|
Tagged Questions
2answers
769 views
exercise in Isaacs's book on Character Theory
I'm stuck on an exercise in Isaacs's book "Character Theory of Finite Groups" - it relates to something I'm looking at as part of ongoing research, but I guess it belongs here rather than on ...
2answers
368 views
Formula for number of solutions to $x^4+y^4=1$, from Ireland and Rosen #8.18.
There is a sequence of three exercise in Ireland and Rosen's Introduction to Modern Number Theory, Chapter 8, page 106. I can do the first two, but can't finish the third. I can include the proofs to ...
5answers
185 views
Applications of Character Theory
Some of the applications of character theory are the proofs of Burnside $p^aq^b$ theorem, , Frobenius theorem and factorization of the group determinant (the problem which led Frobenius to character ...
1answer
178 views
Do all algebraic integers in some $\mathbb{Z}[\zeta_n]$ occur among the character tables of finite groups?
The values of irreducible characters of a finite groups are always sums of roots of unity; do all sums of roots of unity (i.e. algebraic integers in the maximal abelian extension of $\mathbb{Q}$) ...
0answers
224 views
Subgroups as isotropy subgroups and regular orbits on tuples
Is there some natural or character-theoretic description of the minimum value of d such that G has a regular orbit on Ωd, where G is a finite group acting faithfully on a set Ω? Motivation: In ...
1answer
198 views
An exercise involving characters
Suppose $p$ is a prime, $\chi$ and $\lambda$ are characters on $\mathbb{F}_p$. How can I show that $\sum_{t\in\mathbb{F}_p}\chi(1-t^m)=\sum_{\lambda}J(\chi,\lambda)$ where $\lambda$ varies over all ...
0answers
198 views
Character theory of $2$-Frobenius groups.
Edit Summary: I've posted this on MO and received a partial answer there. Can anybody help me expand on this? Definition. Let $G$ be a finite group and $F_1=\text{Fit}\,G$ and ...
3answers
348 views
Character Table From Presentation
I've recently learned about character tables, and some of the tricks for computing them for finite groups (quals...) but I've been having problems actually doing it. Thus, my question is (A) how to ...
2answers
181 views
Functional equation of irreducible characters
I am preparing to an exam in representations of finite groups. I am trying to tackle a problem regarding a characterization of irreducible characters: Let $f$ be a complex-valued function on a finite ...
0answers
101 views
Where does this elliptic curve come from?
In Zeta functions of an infinite family of K3 surfaces, Scott Alhgren, Ken Ono and David Penniston compute the zeta functions (given a good reduction restriction mentioned below) of the K3 surfaces ...
1answer
165 views
What is the relationship between Mackey's theorem in character theory and Mackey's theorem in transfer theory?
Here are the statements of the two theorems. The first statement I took from a paper I have been reading, but I believe can also be found in Isaacs' Character Theory of Finite Groups as an exercise. ...
1answer
74 views
Formula for evaluation of character on a transposition
Let $\lambda\vdash n$ be a partition of $n\in\mathbb N$ and $\chi=\chi_\lambda$ the corresponding irreducible character of the symmetric group $S_n$. Denote by $\lambda^t$ be the transpose of ...
1answer
92 views
Some irreducible characters of the Symmetric group $S_n$
I want to have characters of some irreducible $S_n$-modules corresponding to certain partitions $\lambda$ of $n$, the computations using Frobenius formula get complicated and I am unable to find in ...
1answer
129 views
How does Pontryagin duality fit into the general cohomology theory framework?
Pontryagin duality implies the isomorphic relation of the function space $C(G)$ on a locally compact group $G$ to the function space on it's dual group $\hat G \overset{\sim}{=}\text{Hom}(G,T)$, ...
2answers
211 views
What is an irrreducible character of a finite group?
Let $S_n$ be the group of permutations of $\{1, 2, \ldots, n\}$. A “character” for $S_n$ is a function $\chi\colon S_n \to \mathbb{C} \setminus \{0\}$ with $\chi(ab) = \chi(a)\chi(b)$ for all $a, b$ ...
2answers
116 views
Estimates on conjugacy classes of a finite group.
In Character Theory Of Finite Groups by I Martin Issacs as exercise 2.18, on page 32. Theorem: Let $A$ be a normal subgroup of $G$ such that $A$ is the centralizer of every non-trivial element ...
1answer
51 views
characters of a $C^*$-algebra
I have read that a state $\rho$ on a unital $C^*$-algebra $A$ is a character (i.e. multiplicative) if and only if, for all unitary $u\in A$, $|\rho(u)| = 1$. Is there an easy proof, or can someone ...
2answers
225 views
How to generalise $(\wedge^2 \chi)(g) = \frac{1}{2}(\chi(g)^2-\chi(g^2))$?
One can decompose $\bigotimes^2 V = \bigvee^2 V \oplus \bigwedge^2 V$, getting a corresponding decomposition for representations, say when $V$ is a module for some finite group $G$. One then has the ...
1answer
254 views
Are two groups isomorphic if they have the same character table and each $|\chi| \leq 1$?
Suppose two groups have the same character table of complex representations. Also, all the entries in this character table have absolute value at most $1$. Does this imply that the two groups are ...
2answers
153 views
Two non-isomorphic groups with the same complex character table
Could you give me an example of two non-isomorphic groups with the same complex character table?
1answer
184 views
Question about Weyl character formula
In the book of Humphreys, page 139, the Weyl character formula is $\left(\sum_{w\in W} \operatorname{sn}(w)\epsilon_{w\delta}\right) * \operatorname{ch}_{\lambda} = \sum_{w\in W} \operatorname{sn}(w) \ldots$
1answer
80 views
Character theory exercises [closed]
I'm doing the exercises from chapter 2 of M.Isaacs' Character theory of finite groups, and I'm having problems with some of them. In particular, I would need help with these ones. Thank you very much ...
0answers
43 views
Characters of the symmetric group corresponding to partitions into two parts
Let $n\in\mathbb N$ be a natural number and $\lambda=(a,b)\vdash n$ a partition of $n$ into two parts, i.e. $a\ge b$ and $a+b=n$. In this special case, is there a simple description of the character ...
0answers
55 views
Character of $\text{Hom}_{\mathbb{C}}(\mathbb{C}G,\mathbb{C})$
I've been given the following two questions, and for both I'm really unsure what to do (I'm a beginner with the theory of characters). Let $G$ be a finite group, $\mathbb{C}G$ its group algebra over ...
3answers
139 views
Orthogonality relations of Characters
Could somebody please help me understand the jump from Proposition 10 to Proposition 11 in the following http://www.ms.uky.edu/~pkoester/research/charactersums.pdf Note: The orthogonality relations ...
2answers
322 views
Condition for abelian subgroup to be normal
Sorry for any mistakes I make here, this is my first post here. I have a group $G$ which has an abelian subgroup $A<G$. I also know there is a irreducible character $\chi$ with the degree of ...
1answer
43 views
Character of a permutation representation
I am self-studying representation theory, and I would like to make sure my proofs are complete. Following Serre's notation, let $X$ be a finite set, and let $G$ be a group that acts on $X$. Let ...
3answers
90 views
Character of $S_3$
I am trying to learn about the characters of a group but I think I am missing something. Consider $S_3$. This has three elements which fix one thing, two elements which fix nothing and one element ...
1answer
62 views
Exercise 2.15 M.Isaacs' Character theory of finite groups
I'm beggining to study character theory, and i'm doing some problems from Isaacs' Character theory book. I would need some help with this one: (2.15): Let $\chi\in \operatorname{Irr}(G)$ be ...
1answer
51 views
Subspace spanned by powers of a faithful character
The following well known theorem can be found in many books on character theory: Let $\chi$ be a faithful character of a finite group $G$ and suppose that $\chi(g)$ takes on exactly $m$ different ...
1answer
185 views
Notation: Character of a Finite Field
This is my first post on StackExchange. I had a quick question about notation (appearing in research literature) that I was unable to find by repeated searches, and I was hoping that someone would be ...
1answer
43 views
Characters of subrepresentation
Given an algebra $A$ with finite dimensional representation $V$ with action $\rho$, I want to prove the following statement: If $W\subset V$ are finite dimensional representations of A, then ...
1answer
80 views
Is there some relation between characters in representation theory and multiplicative characters?
A character of a group representation is obtained by taking trace of each matrix in this representation. The word character is often used in the sense that it is a homomorphism from a group to ...
1answer
75 views
Generalizing Artin's theorem on independence of characters
Artin's theorem says that for any field $K$ and any (semi) group $G$, the set of homomorphisms from $G$ into the multiplicative group $K^*$ is linearly independent over $K$. Can this theorem be ...
1answer
144 views
Why unitary characters for the dual group in Pontryagin duality if $G$ is not compact?
In harmonic analysis, for any locally compact abelian group, one constructs the dual group as the group of homomorphisms into the unit circle with the compact open topology. In other words, unitary ...
1answer
128 views
The natural inclusion of an infinite abelian group $G$ into $\widehat{\widehat{G}}$
I was recently trying to think of a simple example that demonstrates that the natural inclusion of an abelian group $G$ into ...
0answers
288 views
Weyl Character formula applied to Sp$(4,\mathbb{C})\cap$ U$(4)$.
I posted a question a short while ago on this but got no response. I have worked on this more and so now have a more specific question. To start with we work with the $\mathbb{Q}$ version of ...
0answers
74 views
Character formula for $S_n$ and $GL(V)$
In a set of lecture notes I'm reading, we consider representations of the symmetric group $S_n$ via treatment of Young tableaux, partitions of $n$ etc. (in what I believe is the standard approach) - ...
1answer
49 views
Characters of affine algebraic groups and the determinant
Let $G$ be an affine algebraic group (i.e. a $k$-variety which is also a group and the group multiplication and inversion are morphisms of varieties). A character of $G$ is a morphism of algebraic ...
1answer
206 views
Setting up Brauer character theory
My question relates to p. 147 of Serre's Linear Representations of Finite Groups, where he is setting up the definitions relevant to Brauer character theory. Having fixed an algebraically closed ...
2answers
91 views
Extending a homomorphism $f:\left<a \right>\to\Bbb T$ to $g:G\to \Bbb T$.
Suppose $G$ is an abelian group and $a\in G$ and $$f:\left<a \right>\to\Bbb T$$ is a homomorphism. Can $f$ be extended to a homomorphism on $G$: $$g:G\to \Bbb T$$ ? $\Bbb T$ is the circle ...
2answers
70 views
Equality Involving Character Sums
I'm reading through a paper involving character sums, and I have run into an equality that I am unsure how to justify. Here is the set-up: Suppose $\chi$ is a multiplicative character of ...
1answer
60 views
How to bound the order of a finite group under the following hypotheses?
In the book Character Theory Of Finite Groups by I.Martin Issacs as exercise 2.14 Let $G$ be a finite group with commutator subgroup $G'$. Let $H \subset G' \cap Z(G)$ be cyclic of order n and ...
1answer
61 views
Intuition on characters of topological groups
I am coming to the end of a series of lecture notes on representations of $S_n$ and $GL(V)$. Near the end, it attempts to introduce the notion of the "character of a topological group", but doesn't ...
1answer
221 views
character tables for groups of order $pq^2$
What is the character table for groups of order $pq^2$? The classification of order $pq^2$ groups has already been discussed in relation to Sylow theory. For the Abelian groups, $\mathbb{Z}_p$ ...
1answer
46 views
Some questions about representation theory in the modular case
I'm working on a paper which uses representation theory in order to compute some characters and deduce arithmetical statements about certain field extensions. Let $\Delta$ be a group of order prime ...
1answer
57 views
Vantage point of character theory
I am not sure whether I can frame my question properly, or whether at this point my understandings permit me to comprehend the perspectives of the answers to come, but somehow I find it pretty amazing ...
1answer
67 views
Sum of squares of dimensions of irreducible characters.
For anyone familiar with Artin's Algebra book, I just worked through the proof of the following theorem, which can be seen here: (5.9) Theorem Let $G$ be a group of order $N$, let ...
1answer
133 views
What is a rational character?
Let $G$ be the group of $F$-points of a connected, reductive group over a $p$-adic field $F$. The unramified character of $G$ are $\chi\circ\psi$ where $\chi$ is an unramified character of ...
1answer
94 views
Is the estimation of number's name's length and comma-grouping feasible?
I am thinking in a mathematical problem that probably is already formulated and even solved. It is about big integers and someting else. Let n be an integer positive number. For n := 1,000 we have ...
|
http://icml.cc/discuss/2012/473.html
|
Efficient Active Algorithms for Hierarchical Clustering
Advances in sensing technologies and the growth of the internet have resulted in an explosion in the size of modern datasets, while storage and processing power continue to lag behind. This motivates the need for algorithms that are efficient, both in terms of the number of measurements needed and running time. To combat the challenges associated with large datasets, we propose a general framework for active hierarchical clustering that repeatedly runs an off-the-shelf clustering algorithm on small subsets of the data and comes with guarantees on performance, measurement complexity and runtime complexity. We instantiate this framework with a simple spectral clustering algorithm and provide concrete results on its performance, showing that, under some assumptions, this algorithm recovers all clusters of size $\Omega(\log n)$ using $O(n \log^2 n)$ similarities and runs in $O(n \log^3 n)$ time for a dataset of $n$ objects. Through extensive experimentation we also demonstrate that this framework is practically alluring.
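To make the framework concrete, here is a deliberately naive toy sketch of the recipe the abstract describes — cluster a small random subsample with an off-the-shelf method, then place the remaining points. This is not the paper's algorithm: the paper instantiates the framework with spectral clustering and comes with guarantees, whereas this sketch substitutes k-means and fixed, arbitrary parameter choices purely for illustration:

```python
# Toy "subsample then assign" clustering: only the subsample is clustered
# directly; every other point is placed with the nearest learned centroid.
import numpy as np
from sklearn.cluster import KMeans

def subsample_cluster(X, n_clusters, subsample=200, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(subsample, len(X)), replace=False)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X[idx])
    return km.predict(X)    # assigning all n points is cheap after the small fit

X = np.vstack([np.random.randn(500, 2) + c for c in ([0, 0], [6, 6], [0, 6])])
print(np.bincount(subsample_cluster(X, n_clusters=3)))   # roughly 500 per cluster
```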
|
http://math.stackexchange.com/questions/223011/what-is-the-probability-that-n-dice-tie-on-successive-rolls
|
# What is the probability that $n$ dice tie on successive rolls?
## The Question
What is the probability, rolling $n$ six-sided dice twice, that their sum each time totals to the same amount? For example, if $n = 4$, and we roll $1,3,4,6$ and $2,2,5,5$, adding them gives
$$1+3+4+6 = 14 = 2+2+5+5$$
What is the probability this happens as a function of $n$?
## Early Investigation
This problem is not too hard for $n = 1$ or $n = 2$ via brute force...
For $n = 2$:
Tie at a total of $2$: $$\frac{1}{36} * \frac{1}{36} = \frac{1}{1296}$$
Tie at a total of $3$: $$\frac{2}{36} * \frac{2}{36} = \frac{4}{1296}$$
etc.
so the answer is $$\frac{1^2 + 2^2 + 3^2 + ... + 6^2 + 5^2 + ... + 1^2}{1296} = \frac{\frac{(6)(7)(13)}{6} + \frac{(5)(6)(11)}{6}}{1296} = \frac{146}{1296}$$
Note that I use the formula: $\sum_{k=1}^{n}k^2=\frac{(n)(n+1)(2n+1)}{6}$.
Is there a way to do this in general for $n$ dice? Or at least a process for coming up with a reasonably fast brute force formula?
## The Difficulty
The problem arises that the sum of squares is not so simple when we get to three dice.
Using a spreadsheet, I figured out we need to sum these squares for 3 dice:
$$1, 3, 6, 10, 15, 21, 25, 27, 27, 25, 21, 15, 10, 6, 3, 1$$
For a brute force answer of $\frac{4332}{46656}$. Note how we can no longer use the sum of squares formula, as the squares we need to sum are no longer linear.
## Some Thoughts
I am no closer to figuring out an answer for $n$ dice, and obviously the question becomes increasingly more difficult for more dice.
One thing I noticed: I see a resemblance to Pascal's Triangle here, except we start with the first row being six $1$s, not one $1$. So we have:
```
1 1 1 1 1 1
1 2 3 4 5 6 5 4 3 2 1
1 3 6 10 15 21 25 27 27 25 21 15 10 6 3 1
1 4 10 20 35 56 80 104 125 140 146 140 125 104 80 56 35 20 10 4 1
...
```
but that's still a process, not a formula. And still not practical for $n = 200$.
I know how to prove the formula for any cell in Pascal's Triangle to be $C(n,r) = \frac{n!}{r!(n-r)!}$... using induction; that doesn't really give me any hints to deterministically figuring out a similar formula for my modified triangle. Also there is no immediately obvious sum for a row of this triangle like there is (powers of 2) in Pascal's Triangle.
Any insight would be appreciated. Thanks in advance!
-
Do you know about generating function? – Jean-Sébastien Oct 28 '12 at 21:01
1
You have a small typo, for $n=2$ it should be $146$ on the numerator. You did $2n+1=12$ but with $n=5,$ its $11$ – Jean-Sébastien Oct 28 '12 at 21:47
That's two typos actually :) Fixed them thanks – durron597 Oct 28 '12 at 21:50
– Jean-Sébastien Oct 28 '12 at 21:59
## 3 Answers
I don't know whether you're interested in approximate and asymptotic answers – there's a straightforward estimate for large $n$. The distribution for the sum tends to a normal distribution. The variance for one die is
$$\langle x^2\rangle-\langle x\rangle^2=\frac{1+4+9+16+25+36}6-\left(\frac{1+2+3+4+5+6}6\right)^2=\frac{35}{12}\;,$$
so the variance for $n$ dice is $n$ times that. The probability of a tie is the sum over the squares of the probabilities, which we can approximate by the integral over the square of the density, so this is
$$\int_{-\infty}^\infty\left(\frac{\exp\left(-x^2/\left(2\cdot\frac{35}{12}n\right)\right)}{\sqrt{2\pi\frac{35}{12}n}}\right)^2\,\mathrm dx=\int_{-\infty}^\infty\frac{\exp\left(-x^2/\left(\frac{35}{12}n\right)\right)}{2\pi\frac{35}{12}n}\,\mathrm dx=\sqrt{\frac3{35\pi n}}\approx\frac{0.1652}{\sqrt n}\;.$$
The approximation is already quite good for $n=3$, where it yields about $0.095$ whereas your exact answer is about $0.093$.
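Evaluating the closed form $\sqrt{3/(35\pi n)}$ numerically (a one-off check added here, not part of the original answer):

```python
# Normal-approximation tie probability for n dice rolled twice.
from math import pi, sqrt

for n in (2, 3, 8, 200):
    print(n, sqrt(3 / (35 * pi * n)))
# n=2: 0.1168 vs exact 146/1296 = 0.1127;  n=3: 0.0954 vs 4332/46656 = 0.0928
```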
-
I am looking for an exact answer, but this is pretty great. Upvote for you! – durron597 Oct 28 '12 at 21:44
This is not as explicit as you may want, but it's a first step. I encourage you to read about generating functions; one possible answer to your question lies in them. (Plus they are really fun!)
One can show that the number of ways to write $k=x_1+x_2+\cdots +x_n$ where $1\leq x_i\leq 6$ for all $i$ is the coefficient of $x^k$ in $$(x+x^2+x^3+x^4+x^5+x^6)^n.$$ Call this coefficient $N(n,k)$; it counts exactly the rolls of $n$ dice with total $k$.
Now what you want to do is sum the squares of these coefficients and divide by $36^n$. We can write this as $$\frac{1}{36^n}\sum_{k=n}^{6n} N(n,k)^2.$$ Substituting $n=2$ we get $\frac{146}{1296}$ and for $n=3$ we get $\frac{4332}{46656}.$
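A short self-contained check of this (my code, not part of the answer): expand the generating polynomial by repeated convolution and sum the squares of its coefficients. This also reproduces the triangle rows from the question.

```python
# Coefficients of (x + x^2 + ... + x^6)^n via repeated convolution;
# coeffs[i] is the number of ways to roll a total of n + i.
def dice_coeffs(n):
    coeffs = [1]
    for _ in range(n):
        new = [0] * (len(coeffs) + 5)
        for i, c in enumerate(coeffs):
            for j in range(6):
                new[i + j] += c
        coeffs = new
    return coeffs

print(dice_coeffs(3))                         # [1, 3, 6, 10, 15, 21, 25, 27, ...]
print(sum(c * c for c in dice_coeffs(3)))     # 4332, out of 6^6 = 46656
```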
-
It is possible to calculate this exactly if you are willing to use arbitrary precision integer arithmetic.
You can use the recursion $$f(n,k)=\sum_{j=1}^6 f(n-1,k-j)$$ starting at $f(0,0)=1$ and $f(0,k)=0$ when $k\not =0$ to find the number of ways of scoring $k$ from $n$ dice. Your result is then $$\sum_{i=n}^{6n} f(n,i)^2 / 6^{2n}$$ which is the division of two very large integers: for $n=200$ the numerator will be about $2.1\times 10^{309}$ and the denominator will be $6^{400}\approx 1.8\times 10^{311}$.
More practically, using a spreadsheet and only looking for several decimal places, you can use $$g(n,k)=\sum_{j=1}^6 g(n-1,k-j) / 6$$ starting at $g(0,0)=1$ and $g(0,k)=0$ when $k\not =0$ to find the probability of scoring $k$ from $n$ dice. Your result is then $$\sum_{i=n}^{6n} g(n,i)^2.$$
With $n=200$ this latter method needs just over 200 columns and 1200 rows of the spreadsheet, plus an extra column for the squares of the final column, so it is not difficult. In practice it gives a value of about $0.0116752$ for the probability of matched sums rolling 200 dice twice.
This compares with about $0.0116798$ from joriki's approximation, a relative difference of around 0.04%.
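The counting recursion $f(n,k)$ takes only a few lines of Python (a sketch of mine, not Henry's spreadsheet; exact big-integer arithmetic makes even $n=200$ quick), and lets us compare against joriki's approximation directly:

```python
# f(n, k) = sum_{j=1..6} f(n-1, k-j), with f(0, 0) = 1 and f(0, k) = 0 otherwise.
from fractions import Fraction
from math import pi, sqrt

def match_probability(n):
    f = {0: 1}
    for _ in range(n):
        new = {}
        for k, c in f.items():
            for j in range(1, 7):
                new[k + j] = new.get(k + j, 0) + c
        f = new
    return Fraction(sum(c * c for c in f.values()), 36 ** n)

print(float(match_probability(8)))     # 0.0578185..., i.e. 163112472594/2821109907456
print(float(match_probability(200)))   # 0.0116752... (exact dynamic programming)
print(sqrt(3 / (35 * pi * 200)))       # 0.0116798... (normal approximation)
```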
-
For what it's worth, the original question (that inspired the post) was actually $n=8$, I simply mentioned $n=200$ as an example of an arbitrarily large $n$ that is conceivably meaningful. – durron597 Oct 29 '12 at 1:06
@durron597 for $n=8$ the exact answer is $\dfrac{163112472594}{2821109907456} \approx 0.0578185$ – Henry Oct 29 '12 at 6:33
isn't your answer basically exactly the same as my "Pascalish triangle"? – durron597 Oct 29 '12 at 13:24
@durron597 yes: but if you do it with probabilities [my $g(n,i)$] then it is practical for $n=200$ – Henry Oct 29 '12 at 13:31
http://mathoverflow.net/questions/74135/computational-methods-for-dealing-with-geometrically-complicated-solid-boundaries/74219
## Computational methods for dealing with geometrically complicated solid boundaries in fluid-air interface problems
Hello,
I am a PhD student who does not have extensive computational experience seeking advice from those experienced with computational modelling as to which method would be most appropriate for solving my particular problem.
Background
Physical Scenario
The Salvinia is a small floating fern. Its leaves have upon them a forest-like structure of fronds, with a particular shape and particular regions of hydrophobicity and hydrophilicity. This structure, also found elsewhere in nature, allows the Salvinia to maintain a persistent, stable air layer on its surface due to the phenomenon of surface tension.
For those familiar with the phenomenon of capillary action, this physical scenario is closely related, and involves many of the same considerations.
Mathematics
In brief, the interface between the air and the water is often constructed according to the method of Gauss. Operating on a variational principle, this method involves writing the free surface energy, the "wetting energy" due to contact with the solid boundaries (fronds) and gravitational potential (if desired) as action functionals to be minimised. It is also usually desirable to impose a condition on the volume bounded by the surface, specifying that it should not change under the variation, in order to fix a unique surface with respect to translations.
Classical formulations have viewed the surface as a height function over a euclidean domain. This can cause problems when the surface curves back on itself, as in the case of some sessile drops, for example. Thus, I have written the action functionals in terms of functions $X^A$ (where $A=1,2,3$) which define the embedding of the surface into three-dimensional euclidean space.
Why is this a problem?
The difficulty I have encountered is that the solid boundaries (fronds) do not have a simple geometry. Take, for example, the case where the fronds are cylinders. It is then possible to define the surface as a function $u(x,y)$ over a domain $\Omega$. The functional can then be decomposed into an interior term and a boundary term using the divergence theorem, and solved using a finite element method.
Now consider the (still relatively simple) case where the frond is a cone, rather than a cylinder. Now, as the surface moves up and down vertically, the location and shape of the boundary as viewed in the $(x,y)$ plane changes, depending on the height and curvature of the surface.
What I have already tried
I have produced solutions for the cylinder case using FEMLAB, and replicated those results with COMSOL. However, I was unable to think of a way to incorporate more complicated boundary structures (even simple ones such as a cone).
I have had slightly more success with the Surface Evolver, developed by Ken Brakke. This is also a finite-element-style scheme, which works by evolving an initial surface using a gradient descent method. The software is stable and well-written, and I have been able to produce results for a cylinder, a cone and hyperboloid. However, as the solid boundaries must be defined as level-set constraints, I assume that building more complex solid boundaries would require overlapping level-set constraints and some criteria for switching between them appropriately.
Notes
I am aware of several different methods which may be applicable, including: Volume-of-Fluid methods, Level-Set Methods (Osher & Sethian), Finite Element Methods for PDEs and the Dorfmeister-Pedit-Wu algorithm. I have been endeavouring to determine for myself whether any or all of these might be appropriate, but due to my limited computational experience, I am quite unsure as to what method might be appropriate.
Important Comment
I am not attempting, in any way, to avoid the long and possibly laborious process of learning the ins and outs of a computational method. If referred by consensus or expert advice to an appropriate method, I will most happily plow into every piece of material I can find on the subject until I am able to address my problem. At this stage, I simply do not have the breadth of knowledge necessary to investigate every possible method and assess each for its strengths and weaknesses with respect to my problem.
Summary
Is there a computational method, or already-existing software package, which is appropriate for modelling fluid-air interfaces with solid boundaries of complicated geometry?
With thanks in anticipation,
Christopher Laing
-
Some level-set work of Fedkiw and collaborators seems to be similar to what you want, but I do not know about software availability. You can look at physbam.stanford.edu/~fedkiw – S. Carnahan♦ Aug 31 2011 at 5:56
Dear Dr. Carnahan, Thank you very much for your response. I have just today taken Prof. Fedkiw's book (co-authored with level set pioneer Stanley Osher) from the library, with the intention of investigating it fully should the level-set method be recommended here. Regards, Christopher Laing – Christopher Laing Aug 31 2011 at 6:05
## 2 Answers
In addition to level set methods, there a couple other things you might want to look into.
One possibility is isogeometric analysis (T.J.R. Hughes and collaborators), which is designed for taking complicated smooth surfaces and discretizing them for finite-element-like computations.
Another thing to think about for fluid-structure interaction is the immersed boundary method (Charles Peskin and collaborators), which models the effect of the solid structure as a "force" on the fluid.
There are also arbitrary Lagrangian-Eulerian methods that allow you to track a moving surface with a moving mesh by modifying some terms in the underlying equations appropriately.
-
Dear Mr Barker, Thank you for your reply. I will go and do some basic reading on each of your suggestions. I have seen some problems which have been solved with ALE methods, but I am still somewhat in the dark as to how complicated solid geometries might be expressed. It seems to me at first glance that such methods rely on the computation of boundary integrals to represent the energy contributed by the solid boundary, however perhaps I do not have a good mental picture of how such solid bodies could be dealt with. Do you perhaps have a good reference/example? – Christopher Laing Aug 31 2011 at 23:01
Usually you discretize the geometry (say, with finite elements) so that the boundary integral is approximated by a sum of integrals, where each one can be represented nicely (say, with a polynomial). The standard reference for ALE is Donea, Giuliani, Halleux, Comp. Meth. Appl. Mech. Engrg. 33 (1982) pp. 689-723, but I'm not sure it will be quite what you're looking for. – Andrew T. Barker Sep 1 2011 at 13:47
You may also consider the immersed boundary method, and its variants. It was specifically developed for situations involving complex fluid/structure interactions. The method is quite successful for solid structures (complexity is not a problem); it's a bit trickier for porous media. In essence one writes down the interaction forces experienced by the particles in the solid body, and integrates.
The methods do indeed rely on the computation of integrals, but fast quadratures work quite well here.
A really great starting point for this field is the Acta Numerica paper by Charles Peskin (2002). He also has some nice course notes, and code, online:
http://math.nyu.edu/faculty/peskin/ib_lecture_notes/index.html
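For concreteness, here is a minimal 2D sketch of the core step of the immersed boundary method: spreading Lagrangian boundary forces onto an Eulerian grid through Peskin's standard four-point regularized delta function. The function names, periodic wrapping, and grid layout are my own illustrative assumptions, not code from the cited references:

````python
import numpy as np

def phi(r):
    # Peskin's 4-point regularized delta kernel in 1D
    r = abs(r)
    if r < 1:
        return (3 - 2*r + np.sqrt(1 + 4*r - 4*r*r)) / 8
    if r < 2:
        return (5 - 2*r - np.sqrt(-7 + 12*r - 4*r*r)) / 8
    return 0.0

def spread_forces(X, F, ds, h, nx, ny):
    """Spread Lagrangian point forces F at positions X onto a periodic
    nx-by-ny Eulerian grid of spacing h: f(x) = sum_k F_k delta_h(x - X_k) ds."""
    f = np.zeros((nx, ny, 2))
    for (x, y), Fk in zip(X, F):
        i0, j0 = int(np.floor(x / h)), int(np.floor(y / h))
        # the 4-point kernel is supported on a small stencil around (x, y)
        for i in range(i0 - 2, i0 + 3):
            for j in range(j0 - 2, j0 + 3):
                w = phi(x/h - i) * phi(y/h - j) / h**2
                f[i % nx, j % ny] += w * np.asarray(Fk) * ds
    return f
````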
-
http://mathoverflow.net/questions/74464/the-fundamental-groupoid-and-a-pushout-in-the-category-of-groupoids/74559
## The fundamental groupoid and a pushout in the category of groupoids.
Hi, recently I've been looking at the more general version of Van Kampen's theorem, or R. Brown's version of it, for the fundamental groupoid. It mentions that if a space $X$ is the union of the interiors of $X_1$ and $X_2$, then the square of morphisms induced by the inclusions,
$$\begin{array}{ccc} \pi_1(X_1\cap X_2) & \longrightarrow & \pi_1(X_1) \\ \downarrow & & \downarrow \\ \pi_1(X_2) & \longrightarrow & \pi_1(X), \end{array}$$
is a pushout in the category of groupoids. In the category of groups, we have the concrete description that the pushout is just the free product with amalgamation. Does something similar hold here? Is there any explicit description, like free product with amalgamation?
-
1
"push-out" and "free product with amalgamation" are pretty much synonymous, so it's not clear to me there's anything happening to generalize. – Ryan Budney Sep 3 2011 at 22:13
I think Ryan is saying that the "free product with amalgamation" interpretation of a pushout of groups is just a general categorical fact: in any category with finite coproducts and coequalizers, the pushout of $A \rightarrow B$ along $A \rightarrow C$ is the coequalizer the two maps into the coproduct of $B$ and $C$. – Dylan Wilson Sep 3 2011 at 22:28
3
I would use "free product with amalgamation" only when the upper corner group injects into the other two groups. This is also the case when you have a really nice description of the result. Otherwise you only get a presentation (given presentations for the three groups) and we all know how difficult such can be to handle... – Torsten Ekedahl Sep 5 2011 at 10:38
## 4 Answers
I agree with the comments above: being a pushout is a categorical property. What is useful is to be able to compute such pushouts explicitly and, as you say, free/amalgamated products do so in the category of groups.
In his paper Le théorème de Van Kampen (Cahiers de Topologie et Géométrie Différentielle Catégoriques, 33 no. 3 (1992), p. 237-251. Available on Numdam, http://www.numdam.org/item?id=CTGDC_1992__33_3_237_0), André Gramain gives (part of) an explicit recipe to compute the isotropy groups of a coequalizer of a pair $(\phi,\psi)$ of morphisms of groupoids. This recipe applies to your case by considering (as in van Kampen's theory) the disjoint sum of the groupoids $\pi_1(X_1,A)$ and $\pi_1(X_2,A)$ and the two morphisms from $\pi_1(X_0,A)$ to this disjoint union.
In SGA 1 (Revêtements étales et groupe fondamental, Exposé IX, §5), Grothendieck had given the same recipe for the fundamental group of schemes. However, his proof is more categorical and based on the correspondence between coverings and sets with action of the fundamental groups, and on descent theory for coverings.
-
Andre Gramain's 1992 exposition of van Kampen's statement is in fact covered explicitly in "Topology and Groupoids", as it was in the 1988 version of that book; it was just an exercise in the 1968 edition.
The point of this type of exposition is to say that sometimes a group can be better explicitly described in terms of groupoids. A basic example is the group $\mathbb Z$ of integers! This is obtained from the groupoid $\mathcal I$ which has two objects 0,1 and exactly one arrow between them by identifying 0 and 1. This is rather analogous to the way the circle is obtained from the unit interval $[0,1]$ by identifying 0 and 1 !
Another aspect is that sometimes a groupoid is a better object to deal with than a group. For example, a homotopy colimit of a diagram of groups is really a groupoid. This is analogous in topology to taking double mapping cylinders rather than a pushout of maps of CW-complexes.
I should also say that Higgins' monograph has results on groups, for example a generalisation of Grushko's theorem, that have not been equalled by other methods.
Another aspect of Topology and groupoids is Chapter 11 on "Orbit spaces, orbit groupoids", which allows some computation of the fundamental groupoid and hence group of an orbit space.
-
I don't fully agree: while it is obvious that van Kampen theorems for groupoids have been published long ago, notably in your book, I am not able to recognize there the formulae of which Gramain gives a preview (beyond the case of quotient by a groupp actions, as in your Chapter 11). But I would love to learn I'm wrong ! – ACL Nov 23 2011 at 13:30
Another comment: I like to compute the fundamental group of the circle as follows. The circle is the interval with endpoints 0 and 1 identified. A covering of the circle is a covering of the interval together with an identification of the fibers at 0 and 1. A covering of the interval is trivial, so is just a set ; identifying the fibers means precisely giving oneself a bijection of this set. So coverings of the circle are classified by a set together with a bijection of this set, that is, by an action of the group $\mathbf Z$. – ACL Nov 23 2011 at 13:32
With respect to the second sentence of the previous comment, note that the group of integers can be presented, in the category of groupoids, as the groupoid $$\mathbf I=\pi_1([0,1],\{0,1\})$$ with $0,1$ identified. – Ronnie Brown Dec 22 2011 at 14:42
2
Re ACL's comment on coverings: One reason I got involved with groupoids was that in writing the first edition of my topology text in the 1960s I got annoyed at having to divert into covering theory to get this basic example of the fundamental group of the circle. After finding the use of groupoids, I also rewrote the basic theory of coverings using not actions, but that a covering {\bf map} of spaces is modelled by a covering {\bf morphism} of groupoids. I invite people to compare and contrast this exposition in "Topology and Groupoids" with those in other texts. – Ronnie Brown Mar 31 2012 at 22:05
You mention Ronnie Brown, but have you looked up his book on topology and groupoids? I think what you ask for is there. The other very early source for this sort of calculation is P. J. Higgins' little monograph, which is a TAC reprint at (http://www.tac.mta.ca/tac/reprints/articles/7/tr7abs.html). There is a lot of stuff in there which you do not find in most other places.
-
I think the relevant formula is 8.4.1 in T&G. This is applied in section 9.2 to the Phragmen-Brouwer property and the Jordan Curve Theorem.
My original motivation for the investigation was to avoid a detour to compute the fundamental group of the circle: a basic theorem should compute THE basic example! I like the view of the integers (an infinite set) as an identification of a groupoid $\mathbf I$ with 4 arrows, identifying 0 and 1.
Also I tend to see covering spaces in terms of covering morphisms of groupoids, since then a covering map is algebraically modelled by a covering morphism, whereas an action is one step further.
In the new book `Nonabelian algebraic topology', published by the EMS, the van Kampen style arguments are used to compute relative homotopy groups as modules, and second relative homotopy groups as crossed modules, using colimit calculations.
-
http://mathoverflow.net/questions/112417?sort=votes
## Why Donaldson’s Four-Six Conjecture?
Simon Donaldson apparently made the following conjecture: Two closed symplectic 4-manifolds $(X_1,\omega_1)$ and $(X_2,\omega_2)$ are diffeomorphic if and only if $(X_1\times S^2,\omega_1\oplus\omega)$ is deformation-equivalent to $(X_2\times S^2,\omega_2\oplus\omega)$. Here $\omega$ is a symplectic structure on $S^2$, and a deformation-equivalence is a diffeomorphism $\phi:X_1\times S^2\to X_2\times S^2$ such that $\omega_1\oplus\omega$ and $\phi^*(\omega_2\oplus\omega)$ can be joined by a path of symplectic forms.
However, where I read this did not contain any background or the original source. Where did Donaldson make this claim? And why did he make this claim? What is the motivation / are there good examples where this holds? Ivan Smith showed (through examples) that this conjecture fails when we replace $S^2$ by $\mathbb{T}^2$, so the statement itself seems pretty rigid.
-
2
And... why is it called "four-six"? (ok, if $X_1$, $X_2$ are symplectic $4$-folds, then $X_i\times S^2$ is a $6$-fold, but the theorem as in the question doesn't seem to impose any dimension restriction on $X_i$) – Qfwfq Nov 14 at 22:37
Oops. It's because I forgot to write "$4-$manifolds" :-) – Chris Gerig Nov 15 at 0:42
But my typo then begs the question, why just 4-manifolds and not $2n$-manifolds? – Chris Gerig Nov 15 at 1:42
1
I think you should definitely edit it to say 4-manifolds: Donaldson never made the conjecture for 2n-manifolds, which is wrong (e.g. there exist diffeomorphic, non-symplectomorphic six manifolds which give eight manifolds which are distinguished by Gromov-Witten invariants). This is because it's much easier for high-dimensional manifolds to be diffeomorphic than for four-manifolds. – Jonny Evans Nov 15 at 9:25
1
@Chris: no it doesn't. begthequestion.info – HW Nov 15 at 12:52
## 2 Answers
I think that YangMills is probably right that Donaldson never wrote the conjecture down. But there are some interesting circles of ideas surrounding the conjecture which deserve mention which, again he probably never wrote down, but I think motivated some of his work on symplectic manifolds: namely, the idea that one could define invariants of symplectic manifolds inductively by dimension. For instance, take a 4-dimensional symplectic (Donaldson) hypersurface in a symplectic 6-manifold. There is a sense (only an asymptotic sense) in which you can do this uniquely. Is that enough to use smooth invariants of the 4-manifold to define symplectic invariants of the six-manifold? No-one has ever succeeded, due to the complicated nature of the asymptotic uniqueness.
The question about 4/6-manifolds which Chris Gerig is asking about is probably motivated by a more concrete phenomenon: smooth (i.e. Seiberg-Witten) invariants of symplectic 4-manifolds see the same information as symplectic (i.e. Gromov-Witten) invariants; after crossing with a sphere, homeomorphic but non-diffeomorphic symplectic 4-manifolds become diffeomorphic 6-manifolds, however symplectically you can still detect their Gromov-Witten invariants by counting curves in the 6-manifold (see the early papers of Ruan). The classic example is to compare the Barlow surface (a surface of general type) and a (homeomorphic) blow-up of the projective plane. One is minimal, the other has many -1-curves and you can still see these after crossing with a sphere.
This also explains why 4 and 6 are the relevant dimensions: smooth geometry in dimension 4 and symplectic geometry in dimension 6 are both "hard" in the Gromov sense. There are elliptic PDEs whose moduli spaces can be used to distinguish exotic pairs. By contrast there's no hard smooth invariants for 6-manifolds, so the question doesn't generalise.
I guess the conjecture Chris mentions is the most optimistic extrapolation of this observation, designed to encourage people to think about the circle of ideas.
-
About your first question, of where did Donaldson make this claim, you can look at Smith's paper here. On page 3 where he states this conjecture he says "According to ([5]; p.437) Simon Donaldson has formulated..."
His reference [5] is the book of McDuff-Salamon, "Introduction to symplectic topology" (2nd edition). There the authors say "Indeed, inspired by this fact and his results on the existence of symplectic submanifolds, Donaldson made the following conjecture".
Judging from these sources, it seems quite likely that Donaldson did not put this conjecture in writing, and that it was indeed publicized by the book of McDuff-Salamon.
And about motivation and examples, these two sources provide quite a lot of information.
-
http://math.stackexchange.com/questions/183598/master-theorem-solving/183753
# Master theorem solving
I'm starting to study the master theorem. Why does something like
$$T(n) = aT(n/b)+f(n)$$
solve to
$$\Theta(n^{\log_b a})$$
(in the case where this term dominates)? I'm a bit confused about the resolution.
-
1
What is confusing you exactly? A proof of this fact can be found in the book Introduction to Algorithms. – Daniel Pietrobon Aug 17 '12 at 12:13
It is important when asking a question to actually ask a question. Can you give us an example of what is confusing you? – Thomas Andrews Aug 17 '12 at 12:44
I don't know where the second formula came from. @DanielPietrobon, is it explained where the second formula comes from in the book you cited? – John Smith Aug 17 '12 at 12:59
1
Again, "where the second formula came from" is not a clear question. Are you looking for the history of the formula, a proof of why the formula works, or some other question? – Thomas Andrews Aug 17 '12 at 13:16
1
– mixedmath♦ Aug 17 '12 at 18:30
## 3 Answers
The right way to think about this is that we reduce the original problem, of size $n$, to $a$ separate problems of size $n/b$, and we do this recursively. At stage $i$, there are $a^i$ problems of size $n b^{-i}$. For each such problem, we need to do something which costs $f(x)$ on a problem of size $x$. Hence, the total cost at level $i$ is $a^i f(n b^{-i})$.
This recursion has $\log_b n$ levels, so the total work consumed at all levels is $$\sum_{i=0}^{\log_b n} a^i f(n b^{-i})$$
Depending on the growth of $f$, this is basically a geometric series.
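As a quick numerical illustration of this level-by-level sum (a sketch; the particular $a$, $b$ and $f$ are arbitrary choices of mine), one can watch the dominant-leaf case of the master theorem emerge:

````python
import math

def recursion_tree_cost(n, a, b, f):
    # total work summed over the levels of the recursion tree
    # for T(n) = a T(n/b) + f(n): sum_i a^i f(n / b^i)
    levels = round(math.log(n, b))
    return sum(a**i * f(n / b**i) for i in range(levels + 1))

# a = 4, b = 2, f(n) = n: here log_b a = 2, the geometric series is
# dominated by its last (leaf) term, and the total grows like n^{log_b a} = n^2
for n in [2**8, 2**10, 2**12]:
    print(n, recursion_tree_cost(n, 4, 2, lambda m: m) / n**2)
# the ratio settles near 2, consistent with Theta(n^2)
````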
-
For an in depth explanation and proof of the "Master Theorem" or "Master Method" of finding asymptotic solutions to recurrences, see MIT OCW's Introduction to Algorithms, Lecture 2.
-
Master Theorem. In the last section, we saw three different kinds of behavior for recurrences of the form
$$T(n) = \begin{cases} aT(n/2) + n & \text{if } n > 1,\\ d & \text{if } n = 1.\end{cases}$$
These behaviors depended upon whether $a < 2$, $a = 2$, or $a > 2$. Remember that $a$ was the number of subproblems into which our problem was divided. Dividing by 2 cut our problem size in half each time, and the $n$ term said that after we completed our recursive work, we had $n$ additional units of work to do for a problem of size $n$. There is no reason that the amount of additional work required by each subproblem needs to be the size of the subproblem. In many applications it will be something else, and so in Theorem 5.1 we consider a more general case. Similarly, the sizes of the subproblems don't have to be $1/2$ the size of the parent problem. We then get the following theorem, our first version of a theorem called the Master Theorem. (Later on we will develop some stronger forms of this theorem.)

Theorem 5.1. Let $a$ be an integer greater than or equal to 1 and $b$ be a real number greater than 1. Let $c$ be a positive real number and $d$ a nonnegative real number. Given a recurrence of the form
$$T(n) = \begin{cases} aT(n/b) + n^c & \text{if } n > 1,\\ d & \text{if } n = 1,\end{cases}$$
then for $n$ a power of $b$,

1. if $\log_b a < c$, $T(n) = \Theta(n^c)$,
2. if $\log_b a = c$, $T(n) = \Theta(n^c \log n)$,
3. if $\log_b a > c$, $T(n) = \Theta(n^{\log_b a})$.

Proof: In this proof, we will set $d = 1$, so that the bottom level of the tree is equally well computed by the recursive step as by the base case. It is straightforward to extend the proof for the case when $d \ne 1$. Let's think about the recursion tree for this recurrence. There will be $\log_b n$ levels. At each level, the number of subproblems will be multiplied by $a$, and so the number of subproblems at level $i$ will be $a^i$. Each subproblem at level $i$ is a problem of size $n/b^i$. A subproblem of size $n/b^i$ requires $(n/b^i)^c$ additional work, and since there are $a^i$ problems on level $i$, the total number of units of work on level $i$ is
$$a^i \left(\frac{n}{b^i}\right)^c = n^c \left(\frac{a^i}{b^{ci}}\right) = n^c \left(\frac{a}{b^c}\right)^i.$$
Recall from above that the different cases for $c = 1$ were when the work per level was decreasing, constant, or increasing. The same analysis applies here. From our formula for work on level $i$, we see that the work per level is decreasing, constant, or increasing exactly when $\left(\frac{a}{b^c}\right)^i$ is decreasing, constant, or increasing. These three cases depend on whether $\frac{a}{b^c}$ is 1, less than 1, or greater than 1. Now observe that
$$\frac{a}{b^c} = 1 \iff a = b^c \iff \log_b a = c \log_b b \iff \log_b a = c.$$
Thus we see where our three cases come from. Now we proceed to show the bound on $T(n)$ in the different cases. In the following paragraphs, we will use the facts (whose proof is a straightforward application of the definition of logarithms and rules of exponents) that for any $x$, $y$ and $z$, each greater than 1, $x^{\log_y z} = z^{\log_y x}$, and that $\log_x y = \Theta(\log_2 y)$. (See Problem 3 at the end of this section and Problem 4 at the end of the previous section.)

In general, we have that the total work done is
$$\sum_{i=0}^{\log_b n} n^c \left(\frac{a}{b^c}\right)^i = n^c \sum_{i=0}^{\log_b n} \left(\frac{a}{b^c}\right)^i.$$
In Case 1 (part 1 in the statement of the theorem) this is $n^c$ times a geometric series with a ratio of less than 1. Theorem 4.4 tells us that
$$n^c \sum_{i=0}^{\log_b n} \left(\frac{a}{b^c}\right)^i = \Theta(n^c).$$

Exercise 5.2-1: Prove Case 2 of the Master Theorem.

Exercise 5.2-2: Prove Case 3 of the Master Theorem.

In Case 2 we have that $\frac{a}{b^c} = 1$, and so
$$n^c \sum_{i=0}^{\log_b n} \left(\frac{a}{b^c}\right)^i = n^c \sum_{i=0}^{\log_b n} 1^i = n^c(1 + \log_b n) = \Theta(n^c \log n).$$

In Case 3, we have that $\frac{a}{b^c} > 1$. So in the series
$$\sum_{i=0}^{\log_b n} n^c \left(\frac{a}{b^c}\right)^i = n^c \sum_{i=0}^{\log_b n} \left(\frac{a}{b^c}\right)^i,$$
the largest term is the last one, so by Theorem 4.4 the sum is $\Theta\left(n^c \left(\frac{a}{b^c}\right)^{\log_b n}\right)$. But
$$n^c \left(\frac{a}{b^c}\right)^{\log_b n} = n^c \,\frac{a^{\log_b n}}{(b^c)^{\log_b n}} = n^c \,\frac{n^{\log_b a}}{n^{\log_b b^c}} = n^c \,\frac{n^{\log_b a}}{n^c} = n^{\log_b a}.$$
Thus the solution is $\Theta(n^{\log_b a})$. We note that we may assume that $a$ is a real number with $a > 1$ and give a somewhat similar proof (replacing the recursion tree with an iteration of the recurrence), but we do not give the details here.
-
http://math.stackexchange.com/questions/128165/what-is-a-vanishing-moment?answertab=active
# What is a “vanishing moment”?
In this paper, Sweldens says about desireable properties of wavelets:
To analyze and represent such signals we need wavelets that are local in space and frequency. Typically this is achieved by building wavelets which have compact support (localization in space), which are smooth (decay towards high frequencies), and which have _vanishing moments_ (decay towards low frequencies).
I understand that compact support means the function is non-zero on a limited portion of its domain. "Smoothness" I believe means that there are no corners in the basis function, ie it would not take infinitely high frequencies to represent the curve (using Fourier analysis).
I do not know what a "vanishing moment" means, intuitively, though. Why would you call a "decay towards low frequencies" a vanishing moment?
-
## 1 Answer
The basic idea is that a wavelet has $p$ vanishing moments if and only if the wavelet scaling function can generate polynomials up to degree $p-1$. The "vanishing" part means that the wavelet coefficients are zero for polynomials of degree at most $p-1$, that is, the scaling function alone can be used to represent such functions. More vanishing moments means that the scaling function can represent more complex functions. Loosely, you can think of it as
$$\textrm{more vanishing moments } \rightarrow \textrm{ complex functions can be represented with a sparser set of wavelet coefficients.}$$
The "moments" part comes from the fact that this is all equivalent to saying that the first $p$ derivatives of the Fourier transform of the wavelet filter all are zero when evaluated at 0. This is perfectly analogous to the probabilistic idea of a "moment generating function" of a random variable, which is basically the Fourier transform, and the $n$-th derivative evaluated at zero gives the $n$-th moment of the variable (i.e. the expected value, the expected value of the square, of the cube, etc.) So these Fourier transform derivative-zeros correspond to integrals back in the time/space domain that must be zero for the wavelet. In a sense, these conditions mean that the wavelet is "unbiased." It doesn't skew the function that is being transformed because the wavelet itself has no expected effect on a function until that function has a non-trivial $p+1$ order derivative.
Added: Section 5.2.1 at this link shows the integrals that I'm referring to and, I think, does a good job illustrating why you might refer to this as a "decay toward infinity" kind of property.
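As a concrete illustration of these vanishing discrete moments, here is a small numerical check for the Daubechies db2 wavelet (a sketch; I hardcode the standard D4 filter coefficients rather than relying on a particular wavelet library):

````python
import numpy as np

# Daubechies db2 (D4) low-pass filter coefficients
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))

# quadrature-mirror high-pass (wavelet) filter: g_k = (-1)^k h_{N-1-k}
g = np.array([(-1)**k * h[len(h) - 1 - k] for k in range(len(h))])

# discrete moments sum_k k^p g_k: the first two vanish (p = 0 and p = 1),
# so db2 has two vanishing moments and annihilates constant and linear signals
for p in range(4):
    print(p, np.sum(np.arange(len(g))**p * g))
````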
-
A couple of questions: 1) Is the link broken? 2) When you say "equivalent to saying that the first p derivatives of the Fourier transform of the wavelet filter all are zero when evaluated at 0", do you mean evaluated at zero frequency? Isn't that then just the dot-product of the wavelet with the function in question?... – Mohammad Nov 13 '12 at 16:27
– endolith Mar 6 at 18:40
http://www.impan.pl/cgi-bin/dict?random
## random
The random variable $X$ has the Poisson distribution with mean $v$.
In this and the other theorems of this section, the $X_n$ are any independent random variables with a common distribution.
To calculate (2), it helps to visualize the $S_n$ as the successive positions in a random walk.
The proof shows that if the points are drawn at random from the uniform distribution, most choices satisfy the required bound.
http://mathoverflow.net/revisions/86146/list
# Minimum dimension for sphere packing a graph in Euclidean space

Igor Pak suggested I ask this as a separate question. In http://mathoverflow.net/questions/85547/extensions-of-the-koebeandreevthurston-theorem-to-sphere-packing it was asked whether there were simple conditions to decide whether a finite graph could be expressed by a bunch of spheres in $\mathbb R^3,$ two spheres touching if and only if the relevant vertices shared an edge.

Scott Carnahan and I are of the opinion that any graph on $n$ vertices can be placed in $\mathbb R^{n-1}$ in the manner described. It is proved in Igor's book that the complete graph can be placed in $\mathbb R^{n-2}$ and no smaller dimension, one regular simplex with unit radii and then one extra sphere in the center. Of course, the complete graph can also be placed as a regular simplex with all unit spheres in $\mathbb R^{n-1}.$ But the varying radius question seems more fortunate, we get the same answer, if it works, in $\mathbb H^{n-1}$ and $\mathbb S^{n-1}.$

So, that is the initial question, can anyone prove that any graph on $n$ vertices, however many or few edges, can be placed in $\mathbb R^{n-1}$ as a set of spheres, if we allow varying radius?

Secondarily, and I have not the slightest idea, is there any sort of expected value of the minimum dimension, or, at least, some sort of "normal behavior" for this, meaning that "most" graphs on $n$ vertices need a minimum dimension of about __?
http://physics.stackexchange.com/questions/115/mathematica-to-help-for-an-hamiltonian-problem/612
# Mathematica to help for a Hamiltonian problem

I have a Hamiltonian problem whose 2D phase space exhibits islands of stability (elliptic fixed points).

I can calculate the area of these islands in some cases, but for other cases I would like to use Mathematica (or anything else) to compute it numerically.

The phase space looks like this:

This is a contour plot made with Mathematica. Could anyone with some knowledge of Mathematica provide a way to achieve this?
-
Calculate the area or plot the graph? – KennyTM Nov 3 '10 at 6:35
I did the plot, from the plot, or from the function I want to calculate the area. – Cedric H. Nov 3 '10 at 7:01
2
This actually seems like a computing question that happens to arise in a physical application, not really a physics question. Or is it just me? – David Zaslavsky♦ Nov 3 '10 at 7:41
Asking here, maybe I'll find someone using Mathematica for physics, I don't think asking in SO is a better idea. – Cedric H. Nov 3 '10 at 8:02
6
I think this one is grey area, leaning towards computing rather than physics. But not voting to close because you attached a nice graph. Always like a good graph. – Alasdair Allan Nov 3 '10 at 10:29
## 1 Answer
There is a rather nice function in Mathematica 7 which allows one to integrate over an arbitrarily complicated region. It is Boole: [True, False]$\to${1, 0}. Below is just an example taken from the Mathematica Documentation Center. If you have a 2D area defined by the inequality $4 x^4-4 x^2+y^2\leq 0$,
you can integrate any function $f(x,y)$ over this domain as follows:
````Integrate[f[x,y] Boole[y^2 - 4 x^2 + 4 x^4 <= 0], {x, -Infinity,Infinity},
{y, -Infinity,Infinity}]
````
For example, if $f(x,y)$ is unity then it gives you the total area of the integration domain:
````In[1]:= Integrate[Boole[y^2 - 4 x^2 + 4 x^4 <= 0], {x, -Infinity,Infinity},
{y, -Infinity,Infinity}]
Out[1]= 8/3
````
In fact, you can use any condition you want, including that is determining your islands of stability. Numerical integration is also possible:
````In[1]:= NIntegrate[Boole[y^2 - 4 x^2 + 4 x^4 <= 0], {x, -Infinity,Infinity},
{y, -Infinity,Infinity}]
Out[1]= 2.66667
````
-
By the way, Mathematica is a great tool widely used by physicists in various areas. I think it is worth collecting questions about it here (by using the new tag "Wolfram Mathematica", not just "Mathematica"). – Grisha Kirilin Nov 12 '10 at 2:20
Thanks, it is not exactly what I was looking for but it might do the trick. About the tags: these two can be made synonyms. – Cedric H. Nov 12 '10 at 11:37
http://physics.stackexchange.com/questions/tagged/lie-algebra?sort=faq&pagesize=30
# Tagged Questions
### How does non-Abelian gauge symmetry imply the quantization of the corresponding charges?
I read an unjustified treatment in a book, saying that in QED charge is not quantized by the gauge symmetry principle (which is totally clear to me: $Q$, the generator of $U(1)$, can be anything in ...

### Why is the value of spin +/- 1/2?
I understand how spin is defined in analogy with orbital angular momentum. But why must electron spin have magnetic quantum numbers $m_s=\pm \frac{1}{2}$? Sure, it has to have two values in ...

### Lie bracket for Lie algebra of $SO(n,m)$
How does one show that the bracket of elements in the Lie algebra of $SO(n,m)$ is given by $$[J_{ab},J_{cd}] ~=~ i(\eta_{ad} J_{bc} + \eta_{bc} J_{ad} - \eta_{ac} J_{bd} - \eta_{bd}J_{ac}),$$ ...

### Is the G2 Lie algebra useful for anything?
Seems like all the simpler Lie algebras have a use in one or another branch of theoretical physics. Even the exceptional E8 comes up in string theory. But G2? I've always wondered about that one. ...

### Why does a transformation to a rotating reference frame NOT break temporal scale invariance?
Naively, I thought that transforming a scale invariant equation (such as the Navier-Stokes equations for example) to a rotating reference frame (for example the rotating earth) would break the ...

### Charge of a field under the action of a group
What does it mean for a field (say, $\phi$) to have a charge (say, $Q$) under the action of a group (say, $U(1)$)?

### Similar masses and lifetimes of the $\Delta$ baryons
Why do the four spin 3/2 $\Delta$ baryons have nearly identical masses and lifetimes despite their very different $u$ and $d$ quark compositions?

### Killing vector fields
I am facing some problems in understanding the importance of a Killing vector field. I will be grateful if anybody provides an answer, or refers me to some review or books.
http://www.physicsforums.com/showthread.php?p=1075741
## "Brute force" quantization..
Let's suppose we have the equations of motion for a particle:

$$F(y'',y',y,x)=0.$$ My question is whether there exists a "direct" method to apply quantization rules, for example by simply stating that

$$F(y'',y',y,x)\,|\psi (x)\rangle=0$$ or something similar.

- I'm not talking about the usual method (where you use the Hamiltonian operator to get the wave function) but a method to "quantize" everything without using Hamiltonians or Lagrangians, only with the equation of motion and similar... thanks.
The Hellmann-Feynman theorem deals with the quantum mechanics of forces, but no, there is no procedure which involves quantised forces.
Quote by Epicurus: The Hellmann-Feynman theorem deals with the quantum mechanics of forces, but no, there is no procedure which involves quantised forces.
Well, I don't know about that theorem, but I was once thinking about quantum mechanics as a probability fluidum (the Madelung interpretation), something I never published since I cannot believe it has not been done yet. In this case, let $$R^2$$ be the "mass" density and $$\partial_{\mu} S$$ be the integrable fluid velocity field. Then, the traditional Navier-Stokes equation is:
$$R^2 \partial_t \partial_{\alpha}S + R^2 \partial_{\beta} S \partial_{\beta} \left( \partial_{\alpha} S \right) = \frac{R^2}{m} F_{\alpha} - \partial_{\alpha} p + \partial^{\beta} T_{\beta \alpha}$$ and the usual continuity equation
$$\partial_t R^2 + \partial^{\alpha} \left( R^2 \partial_{\alpha} S \right) = 0$$
Now, let the pressure $$p = - \frac{1}{2m^2} \left( R \partial_{\beta} \partial^{\beta }R - \frac{1}{3} \partial_{\beta} R \partial^{\beta} R \right)$$ and the stress tensor
$$T_{\alpha \beta} = - \frac{1}{m^2} \left( \partial_{\alpha}R \partial_{\beta} R - \frac{1}{3} \delta_{\alpha \beta} \partial_{\gamma} R \partial^{\gamma} R \right)$$ then it is easy to prove that
with $$F_{\alpha} = - \partial_{\alpha} V$$, the Navier-Stokes equation gives rise to the Hamilton-Jacobi equation of Bohmian mechanics. Hence, this provides a general scheme for quantization of particles in general force fields. If you definitely know this has not been done yet, give me a sign and I will post the "paper" on the arXiv.
It seems to me you cannot quantize general force fields (in the case of instantaneous action at a distance, there are no travelling waves, hence no particles), only those which can be derived from a (eventually distributional) field theory seem to be meaningful.
Careful
http://mathhelpforum.com/calculus/201946-differential-equation-separable.html
1. ## Is this differential equation separable?
Is this separable? I'm trying, but I'm not sure if I'm on the right track:
y' = (x+y)/(x+2y)
dy/dx = (x+y)/(x+2y)
multiply both sides by dx:
dy = (x+y)/(x+2y) dx
Now I am not sure what to do because if I get (x+2y) to the other side, it'd still have an x and would be attached to the dy as well.
2. ## Re: Is this differential equation separable?
I'd do it like this... Make the substitution $\displaystyle \begin{align*} v = \frac{y}{x} \implies y = v\,x \implies \frac{dy}{dx} = v + x\,\frac{dv}{dx} \end{align*}$, then
$\displaystyle \begin{align*} \frac{dy}{dx} &= \frac{x + y}{x + 2y} \\ \frac{dy}{dx} &= \frac{1 + \frac{y}{x}}{1 + 2\left(\frac{y}{x}\right)} \\ v + x\,\frac{dv}{dx} &= \frac{1 + v}{1 + 2v} \\ x\,\frac{dv}{dx} &= \frac{1 + v}{1 + 2v} - v \\ x\,\frac{dv}{dx} &= \frac{1 - 2v^2}{1 + 2v} \\ \frac{1 + 2v}{1 - 2v^2}\,\frac{dv}{dx} &= \frac{1}{x} \end{align*}$
Go from here.
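If you want to sanity-check the substitution, here is a quick symbolic sketch (assuming SymPy is available; `dsolve` should recognize this as a first-order equation with homogeneous coefficients):

````python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
ode = sp.Eq(y(x).diff(x), (x + y(x)) / (x + 2*y(x)))

print(sp.classify_ode(ode))  # includes the homogeneous-coefficient hints
print(sp.dsolve(ode))        # implicit solution from the v = y/x substitution
````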
http://math.stackexchange.com/questions/151611/simplify-a-1-a-2-a-3-a-nm/151615
# simplify $(a_1 + a_2 +a_3+… +a_n)^m$
How to simplify this best $(a_1 + a_2 +a_3+... +a_n)^m$ for $m=n, m<n, m>n$
I could only get $\sum_{i=0}^{m}\binom{m}{i}a_i^i\sum_{j=0}^{m-i}\binom{m-i}{j}a_j ...$
-
It is already as simple as it gets if you don't have other information on the $a_i$s. – Phira May 30 '12 at 15:06
they are just variable ... let's say like a and b in binomial expansion – experimentX May 30 '12 at 15:07
Note that Pascal's Tetrahedron, then its $4$th-dimensional analogue, then the $5$th, is a graphical way to visualise the multinomial theorem. – Alyosha May 5 at 16:59
## 2 Answers
The simplification for this type of expansion is done through the multinomial theorem. The multinomial theorem is a generalization of the binomial case to any arbitrary number of terms in the sum to be exponentiated.
The multinomial theorem is written as follows:
$$(x_1 + x_2 + \cdots + x_m)^n = \sum_{k_1+k_2+\cdots+k_m=n} {n \choose k_1, k_2, \ldots, k_m} \prod_{1\le t\le m}x_{t}^{k_{t}}\$$
Where the multinomial co-efficient is defined as:
$${n \choose k_1, k_2, \ldots, k_m} = \frac{n!}{k_1!\, k_2! \cdots k_m!}$$
It may also be useful to you to note that the multinomial co-efficient is always expressible as products of binomial co-efficients [Graham, Knuth, Patashnik, Concrete Mathematics (2nd edition)]:
$${n \choose k_1, k_2, \ldots, k_m} = {k_1+k_2+\cdots+k_m \choose k_2+\cdots+k_m}\cdots{k_{m-1}+k_m \choose k_m}$$
A fuller explanation can be found on Wikipedia, or Wolfram MathWorld
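As a small numerical sanity check of the theorem (a sketch with arbitrary values of my own choosing):

````python
from math import factorial
from itertools import product

def multinomial(n, ks):
    # n! / (k_1! k_2! ... k_m!), assuming sum(ks) == n
    out = factorial(n)
    for k in ks:
        out //= factorial(k)
    return out

# compare (a_1 + a_2 + a_3)^m with the expanded sum over k_1 + k_2 + k_3 = m
a, m = [2.0, 3.0, 5.0], 4
lhs = sum(a)**m
rhs = sum(multinomial(m, ks) * a[0]**ks[0] * a[1]**ks[1] * a[2]**ks[2]
          for ks in product(range(m + 1), repeat=3) if sum(ks) == m)
print(lhs, rhs)  # both 10000.0
````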
-
is there any relation between $k_1, k_2, ...$ isn't $k_1+k_2+ ...+k_m=n$ again going to be combination of k's?? – experimentX May 30 '12 at 15:18
The only relation is that the sum of all $k_i$ is equal to n. I do not believe there is an equality relationship such as $k_1>k_2$ or anything like that. – Shaktal May 30 '12 at 15:20
Oh ... thank you both!! – experimentX May 30 '12 at 15:21
This is called the Multinomial theorem: $$(a_1 + a_2 +a_3+... +a_n)^m=\sum_{k_1+k_2+...+k_n=m}\frac{m!}{k_1!\cdot...\cdot k_n!}a_1^{k_1}\cdot...\cdot a_n^{k_n}$$
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9252668023109436, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/calculus/25476-two-problems-dealing-linear-approximation-inflection-points.html
|
1. Two problems dealing with linear approximation and inflection points
Okay here are two problems that have my friends and I stumped as we can't figure out a correct answer.
Problem #1:
At the point where x=3 on the curve y = (ax+1)/(x-2), where a is a constant, the slope of the normal is 1/5. Find the value of a.
Its multiple choice and the answers they give us are 1/2, 2, 0, 3, 1/3.
However, I keep getting -5/3 for a.
Problem #2:
The function y=x^4 - 4x^3 + 14 has a horizontal inflection at (A, B). The value of A+B is what?
Yet again it is another multiple choice question. The answers given are -10, 0, -39, 14, or none of these. I have a question though. In class we talk about points of inflection but in this question it talks about horizontal inflection. Is there a difference? We haven't touched base at all in class if it is different.
We found two points of inflection and they were (-2, 2) and (0,14) but we don't know which one is the correct answer if any. Its either 0 or 14 or maybe not if we screwed up in our calculations.
Any help would be appreciated.
2. Originally Posted by forkball42
Problem #1:
At the point where x=3 on the curve y = (ax+1)/(x-2), where a is a constant, the slope of the normal is 1/5. Find the value of a.
It's multiple choice and the answers they give us are 1/2, 2, 0, 3, 1/3.
However, I keep getting -5/3 for a.
The slope of the tangent to the curve can be found by
$y(x) = \frac{ax + 1}{x - 2}$
$y^{\prime}(x) = \frac{a(x - 2) - (ax + 1)(1)}{(x - 2)^2}$
$y^{\prime}(x) = -\frac{2a + 1}{(x - 2)^2}$
So at x = 3, the tangent to the curve has the value
$y^{\prime}(3) = -\frac{2a + 1}{(3 - 2)^2} = -(2a + 1)$
Now, the normal to the curve at this point is a line with a slope perpendicular to the tangent line. So the slope of the normal line is
$-\frac{1}{-(2a + 1)} = \frac{1}{2a + 1} = \frac{1}{5}$
So solving this I get that a = 2.
-Dan
3. For #1 the slope is given by $y'=\frac{-(2a+1)}{(x-2)^{2}}$
The normal has slope 1/5; therefore, the tangent at x=3 must have slope -5.
So, when x=3, $\frac{-(2a+1)}{(3-2)^{2}}=-5$
Solving for a we see a=2
4. Originally Posted by forkball42
Problem #2:
The function y=x^4 - 4x^3 + 14 has a horizontal inflection at (A, B). The value of A+B is what?
Yet again it is another multiple-choice question. The answers given are -10, 0, -39, 14, or none of these. I have a question though. In class we talk about points of inflection, but in this question it talks about horizontal inflection. Is there a difference? We haven't touched on it at all in class if it is different.
We found two points of inflection and they were (-2, 2) and (0,14), but we don't know which one is the correct answer, if any. It's either 0 or 14, or maybe not if we screwed up in our calculations.
This is my best guess after doing a search: A "horizontal inflection point" is likely what you would simply call an inflection point. Now, recall that a critical point exists when the first derivative is either 0 or undefined. If the critical point is one where the first derivative is undefined and the second derivative at this point is also undefined, we have what you might call a "vertical inflection point."
In any event, all the inflection points here have a horizontal "tangent" so I would imagine they fall under the category of horizontal inflection points.
$y(x) = x^4 - 4x^3 + 14$
$y^{\prime}(x) = 4x^3 - 12x^2$
So we have critical points at
$y^{\prime}(x) = 4x^3 - 12x^2 = 0 \implies x = 0, 3$
The second derivative is
$y^{\prime \prime}(x) = 12x^2 - 24x$
$y^{\prime \prime}(0) = 0$
$y^{\prime \prime}(3) = 36$
Note that $y^{\prime \prime}(x) = 12x(x - 2)$ changes sign at x = 0, so x = 0 really is an inflection point, and since $y^{\prime}(0) = 0$ the tangent there is horizontal. So the only horizontal inflection is at x = 0. The y value at x = 0 is y = 14, so the point (A, B) = (0, 14), which implies that A + B = 0 + 14 = 14.
-Dan
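For readers who want to verify both answers mechanically, here is a short sympy sketch (mine, not part of the original thread):

```python
import sympy as sp

x, a = sp.symbols('x a')

# Problem 1: the normal has slope 1/5, so the tangent at x = 3 has slope -5.
y1 = (a*x + 1) / (x - 2)
print(sp.solve(sp.Eq(sp.diff(y1, x).subs(x, 3), -5), a))  # [2]

# Problem 2: y = x^4 - 4x^3 + 14.
y2 = x**4 - 4*x**3 + 14
print(sp.solve(sp.diff(y2, x), x))     # critical points: [0, 3]
print(sp.solve(sp.diff(y2, x, 2), x))  # y'' = 0 at: [0, 2]
print(y2.subs(x, 0))                   # 14, so (A, B) = (0, 14)
```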
http://mathhelpforum.com/number-theory/128918-sequence.html
# Thread:
1. ## Sequence...
Find $a_n$ for this sequence:
5, 6, 7, 5, 6, 7, 5, 6, 7, ...
2. Originally Posted by bearej50
Find $a_n$ for this sequence:
5, 6, 7, 5, 6, 7, 5, 6, 7, ...
you can use a piece-wise definition: $a_n = \left \{ \begin{matrix} 5 & \text{ if } n \equiv 1 \bmod 3 \\ 6 & \text{ if } n \equiv 2 \bmod 3 \\ 7 & \text{ if } n \equiv 0 \bmod 3 \end{matrix} \right.$
Are you familiar with the notation $n \equiv m \bmod 3$? It means $n = m + 3k$ for some integer $k$.
3. I am familiar with this notation, but I was looking for a more generalized formula. For example, for the sequence 5, 7, 5, 7, ..., $a_n = 6 - (-1)^{n+1}$.
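For the record, the 5, 6, 7, 5, 6, 7, ... sequence also has a single closed form in the spirit the poster wanted, using the remainder mod 3; a small sketch (mine, not from the thread):

```python
def a(n: int) -> int:
    # a_n = 5 + ((n - 1) mod 3) reproduces 5, 6, 7, 5, 6, 7, ...
    return 5 + (n - 1) % 3

print([a(n) for n in range(1, 10)])  # [5, 6, 7, 5, 6, 7, 5, 6, 7]
```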
http://mathoverflow.net/questions/47442/diophantine-equation-with-no-integer-solutions-but-with-solutions-modulo-every-i/47528
Diophantine equation with no integer solutions, but with solutions modulo every integer
It's probably common knowledge that there are Diophantine equations which do not admit any solutions in the integers, but which admit solutions modulo $n$ for every $n$. This fact is stated, for example, in Dummit and Foote (p. 246 of the 3rd edition), where it is also claimed that an example is given by the equation $$3x^3 + 4y^3 + 5z^3 = 0.$$ However, D&F say that it's "extremely hard to verify" that this equation has the desired property, and no reference is given as to where one can find such a verification.
So my question is: Does anyone know of a readable reference that proves this claim (either for the above equation or for others)? I haven't had much luck finding one.
-
Duplicate: mathoverflow.net/questions/2779/… – Qiaochu Yuan Nov 26 2010 at 17:35
The example in your question is a famous example of Selmer. An explanation is given by Keith Conrad here: math.uconn.edu/~kconrad/blurbs/gradnumthy/… Note also that the example given below by Qiaochu are reducible. One can show that if $f(x)$ is an integer polynomial in one variable that is irreducible, then it cannot have a solution modulo every prime. The essential point of Selmer's example is that it is irreducible. – Emerton Nov 26 2010 at 18:52
A proof that there are no solutions over Q is in Cassels' book on elliptic curves. An elementary proof that there are p-adic solutions for all p, using Hensel's lemma and an argument very much in the spirit of Qiaochu's answer (i.e. using a "cube" analogue of the statement that if $a$ and $b$ aren't squares mod $p$ then $ab$ is) can be found at www2.imperial.ac.uk/~buzzard/maths/teaching/10Aut/… : I just set it as homework for my students in fact :-) – Kevin Buzzard Nov 26 2010 at 20:30
Dear A. Pacetti, Here is the argument I had in mind (it is not original to me, though); hopefully I am not butchering it: consider the Galois group $G$ of the splitting field of the polynomial. If the poly. $f$ is irred., then $G$ acts transitively on the roots of $f$, and so group theory shows that (if $f$ has degree $> 1$) there is a conjugacy class of $G$ with no fixed point. The $p$ whose Frobenius elements are equal to this conjugacy class then have the property that $f$ has no root modulo $p$. (Such a root would give a fixed point for the Frobenius of $p$.) – Emerton Nov 27 2010 at 1:38
Another elementary example is $x^2+y^2+z^2+w^2=-1$. Because every positive integer is a sum of 4 squares, this has solutions modulo every $n\ge 2$. – Tim Dokchitser Nov 27 2010 at 11:51
## 7 Answers
It is actually quite straightforward to write down examples in one variable where this occurs. For example, the Diophantine equation $(x^2 - 2)(x^2 - 3)(x^2 - 6) = 0$ has this property: for any prime $p$, at least one of $2, 3, 6$ must be a quadratic residue, so there is a solution $\bmod p$, and by Hensel's lemma (which has to be applied slightly differently when $p = 2$) there is a solution $\bmod p^n$ for any $n$. We conclude by CRT. (Edit: As Fedor says, there are problems at $2$. We can correct this by using, for example, $(x^2 - 2)(x^2 - 17)(x^2 - 34)$.)
Hilbert wrote down a family of quartics with the same property. There are no (monic) cubics or quadratics with this property: if a monic polynomial $f(x) \in \mathbb{Z}[x]$ with $\deg f \le 3$ is irreducible over $\mathbb{Z}$ (which is equivalent to not having an integer solution), then by the Frobenius density theorem there are infinitely many primes $p$ such that $f(x)$ is irreducible $\bmod p$.
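As a sanity check (my own sketch, not from the answer), a brute-force search confirms that the corrected polynomial has a root modulo every prime power in a small range:

```python
from sympy import primerange

def has_root(m: int) -> bool:
    # Does (x^2 - 2)(x^2 - 17)(x^2 - 34) = 0 have a solution mod m?
    return any((x*x - 2) * (x*x - 17) * (x*x - 34) % m == 0
               for x in range(m))

for p in primerange(2, 50):
    q = p
    while q < 10**4:
        assert has_root(q), (p, q)
        q *= p
print("roots found modulo every prime power checked")
```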
-
@Qiaochu : CRT ? – Andres Caicedo Nov 26 2010 at 17:43
I am afraid that this specific example does not have solution modulo 8, but if one multiples by $(x^2-17)$ then it becomes ok. – Fedor Petrov Nov 26 2010 at 17:47
@Andres: Chinese remainder theorem. @Fedor: oops! I think there is a simpler example along those lines, though. – Qiaochu Yuan Nov 26 2010 at 18:00
Qiaochu: Out of curiosity, what is Hilbert's family of examples? – Faisal Nov 26 2010 at 19:42
I approve ! I gave the same example $(x^2-2)(x^2-3)(x^2-6)=0$ thirty-three years ago, at the oral examination of the French teaching contest "Agrégation". – Denis Serre Nov 26 2010 at 19:46
Here is another example, which is easy to verify by hand: $x^2+23y^2=41$. Note it has rational solutions (e.g. $(1/3,4/3)$). This provides solutions modulo $m$ if $(m,3)=1$. For $m$ a power of $3$, there is always a solution with $x=0$. Verifying that it doesn't have integral solutions is trivial.
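This one is also easy to machine-check (my own sketch, not from the answer): the integer search space is tiny, and the rational point $(1/3, 4/3)$ yields a solution mod $m$ whenever $3 \nmid m$:

```python
# No integral solutions: |x| <= 6 and |y| <= 1 exhaust x^2 + 23 y^2 = 41.
print([(x, y) for x in range(-6, 7) for y in range(-1, 2)
       if x*x + 23*y*y == 41])  # []

# A solution mod m from the rational point (1/3, 4/3); here m = 100:
m = 100
inv3 = pow(3, -1, m)          # requires gcd(3, m) = 1 (Python 3.8+)
x, y = inv3 % m, (4 * inv3) % m
assert (x*x + 23*y*y - 41) % m == 0
```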
-
The example I usually give my students is $x^2-37y^2=3$. The rational solution is $7/2,1/2$ and there's a 2-adic integer solution as well. However it's much harder to prove there's no integer solutions! (I can do it without too much trouble on a computer but...) So I'm very grateful for your example Felipe! – Kevin Buzzard Nov 26 2010 at 20:43
This is a great example! Thanks! – Faisal Nov 26 2010 at 22:14
An easy way to see that there are $3$-adic solutions is to note that $(81/13, 4/13)$ is a solution. – David Speyer Dec 2 2010 at 18:30
$(9/4,5/4)$ is a solution too. I should have looked for other rational solutions. – Felipe Voloch Dec 2 2010 at 18:55
The equation $2x^2 + 7y^2 = 1$ has two rational solutions with small relatively prime denominators (hence as a congruence mod $m$ it is solvable for all $m$ by CRT) but it visibly has no integral solutions. Look for a rational solution with denominator 3 and also for one with denominator 5 (small numerators in both cases).
-
KConrad: it would have been shorter to say "$(1/3,1/3)$ and $(3/5,1/5)$" than "Look for a rational solution....both cases)." :-) – Kevin Buzzard Nov 28 2010 at 0:45
Yours seems to me like the "best" answer so far though (simplest equation, smallest coefficients). – Kevin Buzzard Nov 28 2010 at 0:45
A Weierstrass equation in this spirit is y^2 = x^3 - 51 (which BCnrd told me about, and I think he got it from Venkatesh). It has the rational solution (1375/9,50986/27) and it can be solved mod 3^r for any r using x = 1 and for y a square root of -50 mod 3^r (which exists since -50 = 1 mod 3). Thus as a congruence this equation has a solution mod m for any m. That there is no Z-solution follows from Q(sqrt(-51)) having class number 2, which is rel. prime to 3, by the same kind of method used to find the Z-solutions to y^2 = x^3-2. – KConrad Nov 28 2010 at 7:19
Consider the equation $(2x - 1)(3x - 1) = 0$. This equation has no integer solutions. But modulo $n$, it always has a solution. If $n$ is not a multiple of $2$, we can make $2x -1$ a multiple of $n$. If $n$ is not a multiple of $3$, we can make $3x - 1$ a multiple of $n$. Using the Chinese Remainder Theorem, we can handle every other $n$ by piecing together these two solutions.
-
There is an easier example in
http://zakuski.math.utsa.edu/~jagy/papers/Experimental_1995.pdf
where Kap disposed of the concern with the brief "(it is easy to see that the assumption of no congruence obstructions is satisfied)."
The example is, given a positive prime $p \equiv 1 \pmod 4,$ there is no solution in integers $x,y,z$ to $$x^2 + y^2 + z^9 = 216 p^3$$
Robert C. Vaughan wrote to Kap (prior to publication) in appreciation; there was something involved that "could not be detected p-adically." I forget what, it has been years. But we did well: Vaughan got an early draft in time to include the example in the second edition of
The Hardy-Littlewood Method.
Later for some reason I looked at negative targets, with the same primes I believe it turned out that there were no integer solutions to $$x^2 + y^2 + z^9 = -8 p^3.$$
The significance of the example is not so much as a single Diophantine equation, but rather as a Diophantine representation problem in the general vicinity of the Waring problem, but with mixed exponents: given nonnegative integer variables $x,y,z$ and exponents $a,b,c \geq 2,$ and given the polynomial $f(x,y,z) =x^a + y^b + z^c,$ if $f(x,y,z)$ represents every positive integer $p$-adically and if $$\frac{1}{a} + \frac{1}{b} + \frac{1}{c} > 1,$$ does $f(x,y,z)$ integrally represent all sufficiently large integers? The answer is no for the problem as stated, but the counterexamples depend heavily on factorization, and in the end upon composition of binary forms. As this is also the mechanism underlying the simplest examples of spinor exceptional integers for positive ternary quadratic forms, it is natural to ask whether there is some relatively easy formalism that adds "factorization obstructions" to the well-studied "congruence obstructions."
See:
http://zakuski.math.utsa.edu/~jagy/Vaughan.pdf
http://en.wikipedia.org/wiki/Waring's_problem
-
Is there a Brauer-Manin obstruction to these examples? – Felipe Voloch Nov 27 2010 at 19:08
Hi, Felipe. I really would not know. To be more precise, I do not know what the phrase means. But there is such an example with $x^2 + y^2 + z^n$ where $n$ is any odd and composite number. It is known that $x^2 + y^2 + z^3$ integrally represents all integers, and modest evidence suggests the same for any $n$ odd prime. Meanwhile, warranted or not, I think this is related to my first question in January 2010, $2 x^2 + x y + 3 y^2 + z^3 - z,$ answered with some real effort by Kevin Buzzard. There are about 25 of those, one for each discriminant of positive binary forms with class number 3. – Will Jagy Nov 27 2010 at 20:15
Felipe, the integral Brauer-Manin obstruction has been systematically invstigated by Colliot-Thélène and Xu (math.u-psud.fr/~colliot/CTXuCompositio2009.pdf). – Chandan Singh Dalawat Nov 28 2010 at 11:00
@Chandan: Yes, I know about this paper. My question was whether the lack of integral solutions to Will's examples can be explained by an integral Brauer-Manin obstruction. – Felipe Voloch Nov 29 2010 at 18:57
Felipe and Chandan, thank you for at least discussing this a little. Kap's proof by factorization is in the short paper with the link already given in my answer above. The phenomenon, in my comment above, in positive integral ternary quadratic forms is in a 1995 letter by Kap to J. S. Hsia and Rainer Schulze-Pillot, pdf link attempted here: zakuski.math.utsa.edu/~kap/Forms/… and the January 2010 question Kevin answered at: mathoverflow.net/questions/12486/… Will – Will Jagy Nov 29 2010 at 19:51
An example even easier than Jagy and Kaplansky's
$x^2+y^2+z^9 = 216p^3$, for $p \equiv 1 \pmod 4$, is given in:
Sums of two squares and one biquadrate, by R. Dietmann and C. Elsholtz,
Funct. Approx. Comment. Math. Volume 38, Number 2 (2008), 233-234.
http://www.math.tugraz.at/~elsholtz/WWW/papers/papers26de08.pdf
Here we showed:
$x^2+y^2+z^4=p^2$ has no positive solutions when $p \equiv 7 \pmod 8$, $p$ prime. Once the example is known, it's trivial to prove.
The Jagy-Kaplansky example can be generalized to odd composite exponent, instead of 9. It seems the example above was overlooked for quite a while.
-
This is nice, Christian. – Will Jagy Dec 2 2010 at 20:43
See 6.4.1 in my paper with Rudnick http://www.springerlink.com/content/l1t0071152537186/, page 62. The equation is: $$-9x^2+2xy+7y^2+2z^2=1.$$ This equation has a rational solution $(-\frac{1}{2}, \frac{1}{2},1)$, hence it has solutions modulo $p^n$ for all $p\neq 2$ and all $n$. In addition, it has a solution $(4,1,1)$ modulo $2^7$, and using Hensel's lemma one can easily check that the equation has solutions modulo $2^n$ for all $n$. The elementary proof that this equation has no integral solutions is due to Don Zagier and is based on (a supplementary formula to) the quadratic reciprocity law.
-
http://dsp.stackexchange.com/questions/7688/multidimensionnal-wavelet-admissibility-criteria
# Multidimensional wavelet admissibility criteria
I've read there are two admissibility criteria for wavelets, both of which are designed to preserve total power of the signal (source: http://en.wikipedia.org/wiki/Wavelet#Mother_wavelet, as well as various scientific papers)
1. Condition for zero-mean: $$\int_{-\infty}^{\infty}\psi(t)dt=0$$
2. Condition for square norm one: $$\int_{-\infty}^{\infty}|\psi(t)|^2dt=1,$$
where $\psi(t)$ is the wavelet kernel. This leads to having a normalization factor of $\frac{1}{\sqrt{2\pi}}$ for the Gabor/Morlet wavelet.
Now, my question is: how do these admissibility criteria apply to 2D wavelets? (Let's say $\psi(x,y)$.)
My guess would be:
1. $$\int_{x=-\infty}^{\infty}\int_{y=-\infty}^{\infty}\psi(x,y)dx dy=0$$
2. $$\int_{x=-\infty}^{\infty}\int_{y=-\infty}^{\infty}|\psi(x,y)|^2dxdy=1,$$
but I get inconsistent results in my computations.
Is my guess right? If not, what are they? Can you provide a reliable source?
-
What exactly is being inconsistent in your computations? From what I understand your admissibility criterions are correct, but I will have to double check when I get home. Still, what is inconsistent in your calculations? – Mohammad Jan 29 at 23:31
When comparing wavelets coefficients to Fourier coefficients, I'm off by about 5-6 orders of magnitude (beside that, everything is fine: the slope and overall shape is good). It's just a matter of getting my code straight (which is a completely different problem!) If I can just confirm my normalisation constant is good, it'll save me a lot of time! – PhilMacKay Jan 30 at 14:43
## 1 Answer
It turns out the admissibility criterion does not dictate that the energy of the wavelet must be unity. To be classified as a wavelet, a function must satisfy the following criteria:
• The wavelet must have finite energy, so $E = \int_{-\infty}^{+\infty} |\psi(t)|^{2} dt < \infty$
• The second condition is that the wavelet must have zero mean. (The intuition behind this is that the wavelet is acting like a matched filter, and we care only about how nicely the shape of the received signal matches the wavelet, and not the energy of the received signal.) So if $\hat\psi(f)$ is the Fourier transform of your wavelet, then: $$C_g = \int_{0}^{\infty} \frac{|\hat\psi(f)|^2}{f} df < \infty$$ The $C_g$ here is called the admissibility constant, and the above property is called the admissibility criterion. This implies that $\hat\psi(0)=0$, because if it didn't, the above integral would blow up as you can see.
• Finally, for complex wavelets, the last criterion states that the Fourier transform must be both real, and vanish for negative frequencies.
Now notice, nowhere does it state that the square norm of the wavelet must be equal to unity. It only states that $E < \infty$.
In your case, it sounds as though the energy of your signal in the wavelet domain is not matching the energy of your signal in the Fourier domain. My best guess without further information is that you should not normalize your wavelet to unit energy, since that is not demanded by the admissibility criterion. (First check, though, whether the energy of your signal in the spatial domain actually matches the energy of your signal in the wavelet domain.) I would start there.
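To make the point concrete, here is a small numerical illustration (my own sketch; the Morlet-like kernel and the numbers are arbitrary): the energy only needs to be finite, and unit-energy normalization is an optional extra step.

```python
import numpy as np

t = np.linspace(-20, 20, 400001)
dt = t[1] - t[0]
psi = np.exp(-t**2 / 2) * np.cos(5 * t)   # Morlet-like kernel, illustrative only

energy = np.sum(np.abs(psi)**2) * dt      # finite, but not 1
mean = np.sum(psi) * dt                   # ~ 0, consistent with psi_hat(0) = 0
print(energy, mean)

psi_unit = psi / np.sqrt(energy)          # optional unit-energy normalization
print(np.sum(np.abs(psi_unit)**2) * dt)   # ~ 1.0
```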
-
@PhilMacKay There also appears to be a typo in the paper: Equation (5) states that "they must have finite energy", and then goes on to say that: $$\int\int_{-\infty}^{\infty}|\psi(\bf{x})|^2d^2x = \int\int_{-\infty}^{\infty}|\hat\psi(k)|^2d^2k =0$$ It seems as though it should read "$< \infty$" instead of "$=0$" – Mohammad Jan 30 at 21:58
@BruceZenone Thanks again Bruce, I'll have to add that to my ever-expanding list! :-) – Mohammad Feb 3 at 22:51
@Mohammad I did notice this odd definition... In practice, I usually normalise such that it equals unity, because then the result is of the same order of magnitude as the initial data. – PhilMacKay Feb 5 at 16:42
http://mathhelpforum.com/calculus/73019-show-harmonic-representing-composition-maps.html
# Thread:
1. ## show harmonic by representing by composition of maps
show $\tan^{-1}(\frac{2x}{x^{2}+y^{2}-1})$ is harmonic by considering $h(w(z))=\log(\frac{i+z}{i-z})$
I found that h(w(z)) is just the same as the function required, but how can I show it is harmonic by writing it as a composition of maps?
2. Originally Posted by szpengchao
show $\tan^{-1}(\frac{2x}{x^{2}+y^{2}-1})$ is harmonic by considering $h(w(z))=\log(\frac{i+z}{i-z})$
I found that h(w(z)) is just the same as the function required, but how can I show it is harmonic by writing it as a composition of maps?
Remember that the real and imaginary parts of an analytic function on an open set S are harmonic on S.
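One can also confirm the harmonicity directly by computing the Laplacian symbolically; a quick sympy sketch (mine, separate from the intended composition-of-maps argument):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = sp.atan(2*x / (x**2 + y**2 - 1))
laplacian = sp.diff(u, x, 2) + sp.diff(u, y, 2)
print(sp.simplify(laplacian))  # should print 0: u is harmonic where defined
```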
http://mathoverflow.net/questions/53571/operator-compression-preserving-lowest-energy-eigenspace
## Operator compression preserving lowest energy eigenspace.
I have a large ($10^6$ by $10^6$) sparse ($0.4$% nonzero) hermitian matrix $H$ arising from the discretization of an elliptic PDE. I would like to approximate $H$ with a smaller matrix $H'$ in such a way that $H'$ and $H$ have nearly identical eigenvectors and eigenvalues for the lowest 5 or 6 eigenvalues.
This could be done if I knew the lowest eigenvectors of $H$, as I could simply restrict $H$ to the space spanned by these vectors. However, I would like to be able to find an approximation before solving the eigenvalue problem (in fact my eventual goal is to make solving the eigenvalue problem more efficient).
I know of the recent work on approximating SDD systems with graph sparsification as well as multilevel operator compression. What else is out there?
The application is a full-CI treatment of a multiparticle quantum system.
-
## 1 Answer
Isn't this what the Lanczos algorithm was designed for? http://en.wikipedia.org/wiki/Lanczos_algorithm
-
Sorta, but that's not what I want here. I am trying to compress the operator without first computing an invariant subspace. It would be great if one could identify which entries of the matrix contribute most to the eigenvectors without explicitly forming the matrix (it's about 90GB in RAM). If you think this sounds crazy that makes two of us. Ideally my advisor would be the third but no luck there. – rcompton May 6 2011 at 4:39
You do not need to form the matrix to run Lanczos, you just need a "black-box" function that computes $Av$ given a vector $v$. If you cannot even do this efficiently, please specify how you can interact with the matrix, or which special structure it has. – Federico Poloni May 6 2011 at 7:03
Yes, you only need to form $Av$ to do Lanczos. My matrix is exactly this $\mathbb{H}$: en.wikipedia.org/wiki/Configuration_interaction . I would like it to be smaller while still preserving the lowest energy states. A similar problem arises in finite element simulations. Consider coarsening the discretization in certain regions in order to work with a smaller stiffness matrix. Normally this is done using geometric intuition but in my problem the space is high dimensional so I would like to somehow automate the coarsening. – rcompton May 6 2011 at 21:35
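A minimal sketch of that matrix-free usage (illustrative only; the diagonal matvec below is a stand-in for the application's $Hv$):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

N = 2000
diag = np.arange(1.0, N + 1.0)      # stand-in spectrum

def matvec(v):
    return diag * v                 # replace with the application's H @ v

H = LinearOperator((N, N), matvec=matvec, dtype=np.float64)
vals, vecs = eigsh(H, k=6, which='SA')  # 6 smallest eigenvalues via Lanczos
print(vals)                             # ~ [1. 2. 3. 4. 5. 6.]
```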
http://en.wikipedia.org/wiki/Lyman_series
# Lyman series
In physics and chemistry, the Lyman series is the series of transitions and resulting ultraviolet emission lines of the hydrogen atom as an electron goes from n ≥ 2 to n = 1 (where n is the principal quantum number referring to the energy level of the electron). The transitions are named sequentially by Greek letters: from n = 2 to n = 1 is called Lyman-alpha, 3 to 1 is Lyman-beta, 4 to 1 is Lyman-gamma, etc. The series is named after its discoverer, Theodore Lyman.
## History
The first line in the spectrum of the Lyman series was discovered in 1906 by Harvard physicist Theodore Lyman, who was studying the ultraviolet spectrum of electrically excited hydrogen gas. The rest of the lines of the spectrum (all in the ultraviolet) were discovered by Lyman from 1906-1914. The spectrum of radiation emitted by hydrogen is non-continuous. Here is an illustration of the first series of hydrogen emission lines:
Historically, explaining the nature of the hydrogen spectrum was a considerable problem in physics. Nobody could predict the wavelengths of the hydrogen lines until 1885 when the Balmer formula gave an empirical formula for the visible hydrogen spectrum. Within five years Johannes Rydberg came up with an empirical formula that solved the problem, presented first in 1888 and in final form in 1890. Rydberg managed to find a formula to match the known Balmer series emission lines, and also predicted those not yet discovered. Different versions of the Rydberg formula with different simple numbers were found to generate different series of lines.
On December 1, 2011, it was announced that Voyager 1 detected the first Lyman-alpha radiation originating from the Milky Way galaxy. Lyman-alpha radiation had previously been detected from other galaxies, but due to interference from the Sun, the radiation from the Milky Way was not detectable.[1]
## The Lyman series
The version of the Rydberg formula that generated the Lyman series was:[2]
${1 \over \lambda} = R_\text{H} \left( 1 - \frac{1}{n^2} \right) \qquad \left( R_\text{H} = 1.0968{\times}10^7\,\text{m}^{-1} = \frac{13.6\,\text{eV}}{hc} \right)$
where n is a natural number greater than or equal to 2 (i.e. n = 2, 3, 4, ...).
Therefore, the lines seen in the image above are the wavelengths corresponding to n = 2 on the right, to $n = \infty$ on the left (there are infinitely many spectral lines, but they become very dense as they approach $n = \infty$, the Lyman limit, so only some of the first lines and the last one appear).
The wavelengths (nm) in the Lyman series are all ultraviolet:
| $n$ | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | $\infty$ |
|-----|---|---|---|---|---|---|---|---|----|----|----------|
| Wavelength (nm) | 121.6 | 102.6 | 97.3 | 95 | 93.8 | 93.1 | 92.6 | 92.3 | 92.1 | 91.9 | 91.18 (Lyman limit) |
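A short script (not part of the article) reproduces the table from the formula above:

```python
R_H = 1.0968e7  # Rydberg constant for hydrogen, m^-1

def lyman_nm(n: int) -> float:
    # 1/lambda = R_H (1 - 1/n^2), converted to nanometres
    return 1e9 / (R_H * (1 - 1 / n**2))

for n in range(2, 12):
    print(n, round(lyman_nm(n), 1))   # 121.6, 102.6, 97.3, ...
print("limit:", round(1e9 / R_H, 2))  # ~91.17 nm (Lyman limit)
```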
## Explanation and derivation
In 1913, when Niels Bohr produced his Bohr model theory, the reason why hydrogen spectral lines fit Rydberg's formula was explained. Bohr found that the electron bound to the hydrogen atom must have quantized energy levels described by the following formula,
$E_n = - \frac{me^4}{2(4\pi\varepsilon_0\hbar)^2}\,\frac{1}{n^2} = - \frac{13.6\,\text{eV}}{n^2}.$
According to Bohr's third assumption, whenever an electron falls from an initial energy level $E_\text{i}$ to a final energy level $E_\text{f}$, the atom must emit radiation with a wavelength of
$\lambda = \frac{hc}{E_\text{i} - E_\text{f}}.$
There is also a more comfortable notation when dealing with energy in units of electronvolts and wavelengths in units of angstroms,
$\lambda = \frac{12398.4\,{\rm \AA}\,\text{eV}}{E_\text{i} - E_\text{f}}.$
Replacing the energy in the above formula with the expression for the energy in the hydrogen atom where the initial energy corresponds to energy level n and the final energy corresponds to energy level m,
$\frac{1}{\lambda} = \frac{E_\text{i} - E_\text{f}}{12398.4\,{\rm \AA}\,\text{eV}} = \left(\frac{12398.4}{13.6}\,{\rm \AA}\right)^{-1} \left(\frac{1}{m^2} - \frac{1}{n^2} \right) = R_\text{H} \left(\frac{1}{m^2} - \frac{1}{n^2} \right)$
where $R_\text{H}$ is the same Rydberg constant for hydrogen from Rydberg's long-known formula.
For the connection between Bohr, Rydberg, and Lyman, one must replace m by 1 to obtain
$\frac{1}{\lambda} = R_\text{H} \left( 1 - \frac{1}{n^2} \right)$
which is Rydberg's formula for the Lyman series. Therefore, each wavelength of the emission lines corresponds to an electron dropping from a certain energy level (greater than 1) to the first energy level.
## References
1. "Voyager Probes Detect "invisible" Milky Way Glow". National Geographic. December 1, 2011. Retrieved 2013-03-04.
http://mathhelpforum.com/advanced-algebra/169008-pseudo-inverse-matrix-print.html
# pseudo inverse of a matrix
• January 21st 2011, 08:43 PM
nadia321
pseudo inverse of a matrix
Hi
Let A be an $m \times n$ matrix. An $n\times m$ matrix $A^{\oplus}$ is a pseudoinverse of A if there are matrices $U$ and $V$ such that
$AA^{\oplus} A= A,$
$A^{\oplus}=UA^{T}=A^{T}V.$
Show that $A^{\oplus}$ exists and is unique.
I know the pseudoinverse is obtained from the singular value decomposition (SVD), as any $m \times n$ matrix can be decomposed as
$A=UDV^T$
where $D$ is a diagonal matrix and $U, V$ are orthogonal matrices.
So we can find $A^{\oplus}=VD^{\oplus}U^T,$ and this satisfies the first property $AA^{\oplus}A = A$,
where $D^{\oplus}$ is the matrix of reciprocals of the nonzero diagonal entries of $D$.
My questions are:
1. How can I prove the other property $A^{\oplus}=UA^{T}=A^{T}V$ using this SVD decomposition?
2. In the stated problem the matrices $U$ and $V$ are not required to be orthogonal; is that OK?
Thanks in advance
• January 21st 2011, 09:01 PM
Drexel28
Quote:
Originally Posted by nadia321
Hi
Let A be an $m \times n$ matrix. An $n\times m$ matrix $A^{\oplus}$ is a pseudoinverse of A if there are matrices $U$ and $V$ such that
$AA^{\oplus} A= A,$
$A^{\oplus}=UA^{T}=A^{T}V.$
Show that $A^{\oplus}$ exists and is unique.
I know the pseudoinverse is obtained from the singular value decomposition (SVD), as any $m \times n$ matrix can be decomposed as
$A=UDV^T$
where $D$ is a diagonal matrix and $U, V$ are orthogonal matrices.
So we can find $A^{\oplus}=VD^{\oplus}U^T,$ and this satisfies the first property $AA^{\oplus}A = A$,
where $D^{\oplus}$ is the matrix of reciprocals of the nonzero diagonal entries of $D$.
My questions are:
1. How can I prove the other property $A^{\oplus}=UA^{T}=A^{T}V$ using this SVD decomposition?
2. In the stated problem the matrices $U$ and $V$ are not required to be orthogonal; is that OK?
Thanks in advance
Do you have to use the singular value decomposition (I hate it! Bad experience)? Just in case you want to google for information: this is often called the Moore-Penrose pseudoinverse.
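Whichever route the proof takes, the SVD construction itself is easy to sanity-check numerically; a quick numpy sketch (mine, not part of the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
s_plus = np.where(s > 1e-12, 1 / s, 0.0)   # reciprocals of nonzero singular values
A_plus = Vt.T @ np.diag(s_plus) @ U.T      # A^+ = V D^+ U^T

assert np.allclose(A @ A_plus @ A, A)          # A A^+ A = A
assert np.allclose(A_plus, np.linalg.pinv(A))  # matches Moore-Penrose
```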
http://quant.stackexchange.com/questions/4260/combining-covariances/4263
# Combining covariances?
Consider an economy with assets with return processes $A$, $B$, $C$, $D$. Consider a weighted index with return process $I=aA + bB + cC + dD$ where $a,b,c,d$ are coefficients, and $a+b+c+d = 1$.
Suppose I want to find $cov(I,A)$. Is this possible given that I know the covariance between all possible pairs of $A,B,C,D$?
Also, suppose I have some asset $E$. Suppose I know $cov(A,E),cov(B,E),cov(C,E),cov(D,E)$. How do I find $cov(I,E)$.
-
## 1 Answer
Wikipedia gives:
$\sigma(x,y) = E[xy] - E[x]E[y]$
and
$\sigma(ax+by,cz) = ac\, \sigma(x,z) + bc\, \sigma(y,z)$
(paraphrasing the $\sigma(ax+by,cW+dV)$ rule).
So
$\sigma(I,A) = \sigma([aA+bB+cC+dD],A)$
$\sigma(I,A) = a\,\sigma(A,A) + b\,\sigma(B,A) + c\,\sigma(C,A) + d\,\sigma(D,A)$
$\sigma(I,A) = a\,\sigma^2(A) + b\,\sigma(B,A) + c\,\sigma(C,A) + d\,\sigma(D,A)$
Since you know the covariances between all the pairs and presumably the variance ($\sigma^2$) of A, you can thus calculate $\sigma(I,A)$.
The same holds for $\sigma(I,E)$, only you won't get a variance term: $\sigma(I,E) = a\,\sigma(A,E) + b\,\sigma(B,E) + c\,\sigma(C,E) + d\,\sigma(D,E)$
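A quick Monte Carlo check of this bilinearity rule (my own sketch, not part of the answer):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C, D = rng.standard_normal((4, 100000))
a, b, c, d = 0.4, 0.3, 0.2, 0.1
I = a*A + b*B + c*C + d*D

S = np.cov([A, B, C, D])          # 4x4 sample covariance matrix
lhs = np.cov(I, A)[0, 1]          # cov(I, A) directly
rhs = a*S[0, 0] + b*S[1, 0] + c*S[2, 0] + d*S[3, 0]
assert np.isclose(lhs, rhs)       # exact up to floating-point error
```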
-
http://physics.stackexchange.com/questions/47808/change-in-intensity-of-electric-field-with-constant-velocity
# Change in intensity of electric field with constant velocity
Consider a +Q charged particle travelling towards another test charge +Q. Now what would be the difference in the electric field experienced by the test charge (ignoring the gradual decrease in distance between them)? Would the field lines look compressed and the effective field strength increased for the test charge?
-
## 2 Answers
If you are looking for an effect separate from the particle's position, at classical velocities there isn't one. The electric field is \begin{equation} \mathbf{E}=\mathbf{E}(\mathbf{r},t), \end{equation} that is, the electric field is only a function of the position $\mathbf{r}$ and the time $t$. At any given instant in time, the force a test charge 'feels' due to another charge depends only on its position $\mathbf{r}$, and not on its velocity.
This velocity independence breaks down when the charges' relative velocities approach the speed of light. If a reference frame has an electric field, a frame boosted with the respect to the reference appears to have some magnetic field. For a frame boosted by a velocity $\mathbf{v}=v_x \mathbf{\hat{x}}$ where the separation $\mathbf{r}$ between the charges is given by $\mathbf{r}=r\mathbf{\hat{x}}$ (in other words, the charges are moving directly toward each other), so that \begin{equation} \mathbf{\beta}=\beta_x=v_x/c, \end{equation} and \begin{equation} \gamma=\left[1-\left(\frac{v_x}{c}\right)^2\right]^{-1/2}, \end{equation} then for an electric field in the frame of the stationary charge $\mathbf{E}=E_x\mathbf{\hat{x}}+E_y\mathbf{\hat{y}}+E_z\mathbf{\hat{z}}$ with a background magnetic field ($\mathbf{B}$), the test charge will 'see' fields $\mathbf{E}'$ and $\mathbf{B}'$ given by \begin{equation} \mathbf{E}'=\gamma(\mathbf{E}+\beta_x \mathbf{\hat{x}}\times\mathbf{B}) - \frac{\gamma^2\beta_x^2}{\gamma+1}(\mathbf{\hat{x}}\cdot\mathbf{E})\mathbf{\hat{x}}\\ \mathbf{B}'=\gamma(\mathbf{B}-\beta_x \mathbf{\hat{x}}\times\mathbf{E}) - \frac{\gamma^2\beta_x^2}{\gamma+1}(\mathbf{\hat{x}}\cdot\mathbf{B})\mathbf{\hat{x}} \end{equation}
(Source, J.D. Jackson 1999, section 11.10.)
The end result is that electric fields in a rest frame look like magnetic fields from a moving frame.
Interestingly, if the test particle is moving directly toward the charge, the electric and magnetic fields along its trajectory will always be the classical ones and relativity will have no effect. It is only when the boost has a component perpendicular to the rest-frame fields that the boost-frame fields are different.
There is compression of the field lines at relativistic velocities, but again, only for field lines that are not parallel to the velocity. If you picture the field lines radiating out of a stationary charge, then a moving charge looks similar, but with the field lines perpendicular to the boost velocity more tightly bunched together.
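Both special cases are easy to verify numerically from the boost formulas above; a small sketch (mine, in units with $c = 1$):

```python
import numpy as np

def boosted_fields(E, B, beta_x):
    """Fields seen from a frame boosted by v = beta_x * x_hat (c = 1)."""
    gamma = 1.0 / np.sqrt(1.0 - beta_x**2)
    xhat = np.array([1.0, 0.0, 0.0])
    bvec = beta_x * xhat
    corr = gamma**2 * beta_x**2 / (gamma + 1.0)
    Ep = gamma * (E + np.cross(bvec, B)) - corr * (xhat @ E) * xhat
    Bp = gamma * (B - np.cross(bvec, E)) - corr * (xhat @ B) * xhat
    return Ep, Bp

E_par = np.array([1.0, 0.0, 0.0])                # field parallel to the boost
print(boosted_fields(E_par, np.zeros(3), 0.9))   # unchanged: E' = E, B' = 0

E_perp = np.array([0.0, 1.0, 0.0])               # field perpendicular to the boost
print(boosted_fields(E_perp, np.zeros(3), 0.9))  # E' grows by gamma, B' != 0
```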
-
A test charge "feels" only an external field. If another charge is approaching, it will experience a stronger field due to $1/R^2$ field strength dependence.
-
So other than this distance dependent field strength no other possibilities are there? – Inquisitive Dec 28 '12 at 20:28
Well, you can consider retarded fields as more exact, if necessary (a retarded near field and radiation). – Vladimir Kalitvianski Dec 28 '12 at 20:30
Not understood, please explain it little more? – Inquisitive Dec 28 '12 at 20:32
http://mathhelpforum.com/calculus/148853-adding-up-percentages-over-continuous-function.html
# Thread:
1. ## Adding up percentages over a continuous function
Hi,
I'm trying to figure out how to add percentages along a continuous curve--that is to say for any function f(x) where for a given x, f(x) is a percentage consumed or expended.
To make it easier to conceptualize, consider the function $f(t)=\cos{t}+1$. Say this gives us the percentage of a birthday cake eaten by a given time t over the course of three days.
$0<t<\frac{\pi}{3}$ is the birthday on which the eating begins and $t=\pi$ is the end of the third day when the cake is finished. The curve reveals that the largest percentage of the cake is eaten on the birthday. A smaller portion eaten on the second, and just a few bites are left to consume on the third day.
Is there a function that can sum up the percentages over the time period $\pi$ so that they equal 100%?
Thanks
2. Originally Posted by rainer
Hi,
I'm trying to figure out how to add percentages along a continuous curve--that is to say for any function f(x) where for a given x, f(x) is a percentage consumed or expended.
To make it easier to conceptualize, consider the function $f(t)=\cos{t}+1$. Say this gives us the percentage of a birthday cake eaten by a given time t over the course of three days.
Actually, that example doesn't make a whole lot of sense. cos(t) is a decreasing function. f(0)= 2 and $f(\pi/2)= 1$. How is that "the percentage of a birthday cake eaten by a given time t"?
In any case, you "sum" a continuous function by integrating. Given a function af(t) where a is an as yet undetermined constant, you can make the "sum" equal to 100%= 1.0 "over time period $\pi$" by choosing a so that $a\int_0^\pi f(t)dt= 1.0$ or $a= \frac{1}{\int_0^\pi f(t) dt}$.
$0<t<\frac{\pi}{3}$ is the birthday on which the eating begins and $t=\pi$ is the end of the third day when the cake is finished. The curve reveals that the largest percentage of the cake is eaten on the birthday. A smaller portion eaten on the second, and just a few bites are left to consume on the third day.
Is there a function that can sum up the percentages over the time period $\pi$ so that they equal 100%?
Thanks
3. Originally Posted by HallsofIvy
Actually, that example doesn't make a whole lot of sense. cos(t) is a decreasing function. f(0)= 2 and $f(\pi/2)= 1$. How is that "the percentage of a birthday cake eaten by a given time t"?
Ah yes, forgive me. It should be at a given time t.
The problem then is this: based on piecewise measurements of "cake eaten" taken at various points during the time period 0-pi, it is found that the continuous function f(t) gives the % of cake dished up and eaten at any given time t (0<t<pi). Not the cumulative by that time. So, the fact that f(t) is decreasing just means that as time progresses smaller and smaller portions of the whole cake are being dished up and eaten.
My confusion is that the individual percentages at each time t must add up to 100%, but if you actually sum these individual percentages you get--it seems to me--way too big a number. Is the area under the curve from 0 to pi really 100%? The piecewise measurements add up to 100%, sure, but then when you are dealing with a continuous curve it had seemed to me you're adding infinitessimals which--so it had seemed to me--will give an unbounded result.
Thanks for your solution. Just to make sure I understand...
$\int_{0}^{\pi}\cos{t}+1\; dt=\pi$
So you're saying I have to set $\pi\equiv1.0 \;(or\; 100\%)$?
And given that at t=0 $f(0)=\cos{0}+1=2$, you're saying that the largest percentage of the cake dished up (call it "L") at any time t is $L=\frac{2}{\pi}=0.6366197724\;or\;63.66\%$?
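For what it's worth, a quick numerical check (mine, not from the thread) confirms the normalization: with $a = 1/\pi$ the rates integrate to 100%, and the peak rate is $2/\pi$ per unit time.

```python
import numpy as np

t = np.linspace(0.0, np.pi, 100001)
dt = t[1] - t[0]
f = np.cos(t) + 1.0

total = np.sum(f) * dt        # ~ pi
a = 1.0 / total
print(np.sum(a * f) * dt)     # ~ 1.0, i.e. 100%
print(a * f[0])               # peak rate ~ 2/pi ~ 0.6366 per unit time
```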
http://gowers.wordpress.com/2012/05/08/a-look-at-a-few-tripos-questions-v/
# Gowers's Weblog
Mathematics related discussions
## A look at a few Tripos questions V
Here is the final analysis question from 2003.
12C. State carefully the formula for integration by parts for functions of a real variable.
Let $f:(-1,1)\to\mathbb{R}$ be infinitely differentiable. Prove that for all $n\geq 1$ and for all $t\in(-1,1)$,
$\displaystyle f(t)=f(0)+f'(0)t+\frac 1{2!}f''(0)t^2+\dots$
$\displaystyle \dots+\frac 1{(n-1)!}f^{(n-1)}(0)t^{n-1}+\frac 1{(n-1)!}\int_0^tf^{(n)}(x)(t-x)^{n-1}dx$.
By considering the function $f(x)=\log(1-x)$ at $x=1/2$, or otherwise, prove that the series
$\displaystyle \sum_{n=1}^\infty\frac 1{n2^n}$
converges to $\log 2$.
What is implied by “state carefully”? It probably means that more is required than just writing
$\displaystyle \int_a^bf(x)g'(x)dx=[f(x)g(x)]_a^b-\int_a^bf'(x)g(x)dx$.
What else can one put? The main thing is the conditions under which the formula is valid. So I think what is required is something like this.
Let $f$ and $g$ be differentiable functions on the interval $[a,b]$. Hmm … I have to confess that I’m not sure what the precise conditions are, or rather what a standard set of precise conditions is. I could go for continuously differentiable since that would guarantee that all the integrals exist. A quick check — that’s the formulation used by Wikipedia, so it’s probably fairly standard. So here’s what you probably need to say (unless you’ve been given some more general statement in lectures, in which case obviously you should use that).
Let $f$ and $g$ be continuously differentiable functions on the closed interval $[a,b]$. Then
$\displaystyle \int_a^bf(x)g'(x)dx=[f(x)g(x)]_a^b-\int_a^bf'(x)g(x)dx$.
Now we have to prove Taylor’s theorem with the integral form of the remainder. I remember that at least one version of Taylor’s theorem always gives me trouble, but I think it’s the one with the mean-value-theorem-ish remainder, and a quick look at this one suggests that all we have to do is integrate the remainder term by parts, which is an obvious enough thing to try even without the huge clue that we have just been told to state the formula for integration by parts.
Obviously, integrating the remainder term by parts is done in order to produce a new term, and therefore to prove the statement by induction. So let’s write down the $n=1$ statement first. Here is what I would actually write.
When $n=1$, the statement we are asked to prove is that $f(t)=f(0)+\int_0^tf'(x)dx$. This is true by the fundamental theorem of calculus.
Now to the rest of the answer.
Let us now use integration by parts to rewrite the remainder term. Setting $u=f^{(n)}$ and $v=-(t-x)^n/n!$, we have that both $u$ and $v$ are continuously differentiable. Also, $v'(x)=(t-x)^{n-1}/(n-1)!$. Therefore our integral is $\int_0^tu(x)v'(x)dx$, which equals
$\displaystyle -[f^{(n)}(x)(t-x)^n/n!]_0^t+\frac 1{n!}\int_0^tf^{(n+1)}(x)(t-x)^ndx$.
The first term is equal to $f^{(n)}(0)t^n/n!$, which proves the inductive step.
It’s obvious what the last part is asking us to do: we must simply plug in $f(x)=\log(1-x)$. That requires us to differentiate $\log(1-x)$ infinitely many times. Fortunately, it’s a function where the result is extremely nice. The first derivative is $-1/(1-x)$. Then we get $-1/(1-x)^2$, then $--2/(1-x)^3$, then $-3!/(1-x)^4$. OK, the pattern is clear now, so let’s do a proper proof by induction.
I claim that $f^{(n)}(x)=-(n-1)!/(1-x)^n$. This is true when $n=1$, since then the derivative is $-1/(1-x)$. If it is true for $n$, then it is true for $n+1$, since the derivative of $-(n-1)!/(1-x)^n$ is $-n!/(1-x)^{n+1}$. [It looks like a bit of a cheat to write that, since I haven't shown my working -- things like noticing that two minus signs cancelled out -- but it's hard to see how I could have reached this answer by accident, so the examiner couldn't reasonably remove marks for that. Maybe it would have been better to write slightly more.]
The question really is holding our hand here. Let’s apply Taylor’s theorem with $x=1/2$. The one thing not to do is just calculate the infinite series. The whole point of the question is to prove that you understand that estimating the remainder term is necessary if you want a rigorous proof. Let’s underline that by doing it first.
We shall prove this result by applying Taylor’s theorem. First let us obtain a bound for the remainder term, which is
$\displaystyle \frac 1{(n-1)!}\int_0^{1/2}\frac{(n-1)!\,(1/2-x)^{n-1}}{(1-x)^{n}}dx$.
How are we going to estimate that? Well, that $(1/2-x)^{n-1}$ looks a lot smaller than $(1-x)^n$, and the $(n-1)!$s cancel out. How can we make that thought precise? Well, $1/2-x\leq (1-x)/2$. OK, here goes.
Since $1/2-x\leq(1-x)/2$ and $1-x\geq 1/2$ for all $x$ in the range $[0,1/2]$, the integrand is at most $2(n-1)!/2^{n-1}$, which implies that the term is at most $1/2^{n-1}$, which tends to zero. Therefore, by Taylor's theorem,
$f(1/2)=f(0)+\sum_{n=1}^\infty f^{(n)}(0)(1/2)^n/n!$
By our earlier calculation, $f^{(n)}(0)=-(n-1)!$, and $f(0)=0$, so
$\displaystyle f(1/2)=\sum_{n=1}^\infty -\frac 1{n2^n}$
But $f(1/2)=\log(1/2)=-\log 2$, so we are done.
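As a quick sanity check of the two facts just used (not part of the original post; the series tail after $n$ terms sits comfortably inside the $1/2^{n-1}$ remainder bound):

```python
from math import log

partial = 0.0
for n in range(1, 31):
    partial += 1.0 / (n * 2**n)
    # the tail is in fact below 1/2^n, well inside the bound above
    assert abs(log(2) - partial) <= 1.0 / 2**n

print(partial, log(2))  # agree to ~10 decimal places
```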
Not much to say about that question, since it was an easy one. But as I've already said, if a question is easy, then the examiner wants you to do it properly, and if you don't then you may well lose an alpha. In this case, doing it properly means stating some conditions on functions that appear in the formula for integration by parts, and more importantly it means bothering to prove that the remainder term tends to zero when you apply Taylor's theorem. People who didn't do the latter would not have got alphas.
Is there a reasonable “or otherwise” option? That’s a difficult one. If you’re allowed to differentiate a power series term by term, then you can differentiate $\sum_{n=1}^\infty x^n/n$ to get $\sum_{n=1}^\infty x^{n-1}$, which is a geometric series (when $|x|<1$ as it will be here) that sums to $1/(1-x)$. So the original function is, up to a constant, $-\log(1-x)$. Looking at what happens when $x=0$ we see that the constant is 0, and we can now plug in $x=1/2$.
But was it reasonable to assume that a power series can be differentiated term by term inside its radius of convergence? It's certainly a different part of the course. My guess is that this proof — written out a bit less sketchily than I have written it — would have been accepted even if that result had been merely stated and not itself proved, simply because the examiner would have given some credit for independent thought. But I can't say that with total certainty, because it is a fairly substantial result to assume, whereas the intended approach doesn't ask you to assume anything more than you've just proved.
### 6 Responses to “A look at a few Tripos questions V”
1. Paul Weemaes Says:
May 9, 2012 at 10:37 am | Reply
Dear Mr. Gowers,
I really enjoy your writings, and I hope to see many more of them in the future!
However, I’d like bring to your attention the fact that the math formulas (images of a rather poor quality, if I may say so) are almost unreadable when printed on paper. I’m not sure if you are aware of this (or if you care about it), but I thought I’d mention it just in case.
I’m not very technical and I’m sorry that I cannot provide you with a solution or be of any help otherwise. Perhaps the people at WordPress can help you.
Best Regards,
Paul Weemaes
2. barry kiernan Says:
May 10, 2012 at 9:22 am | Reply
Hello,
there is a minor typo, in the inductive step. “The first term is equal to …” should evaluate the derivative at 0, not t, and the integral just above this line should have dx, not dt.
I hope you keep up this series of exam questions – it’s extremely insightful, though I’m a little surprised at how much energy is devoted to second guessing the examiner’s intentions.
Cheers
Thanks — corrected now.
3. Thursday Highlights | Pseudo-Polymath Says:
May 10, 2012 at 2:30 pm | Reply
[...] Wrapping up our maths fun for the last week or so. [...]
4. Stones Cry Out - If they keep silent… » Things Heard: e220v4 Says:
May 10, 2012 at 2:30 pm | Reply
[...] Wrapping up our maths fun for the last week or so. [...]
5. Donkey_2009 Says:
May 10, 2012 at 11:10 pm | Reply
Just to reassure you – the proof that power series can be differentiated term by term inside their radius of convergence is listed in our notes as being “non-examinable”, though I suppose that might not have been the case in 2003.
6. Dhwanit agarwal Says:
May 14, 2012 at 7:56 am | Reply
great writings !!
http://rjlipton.wordpress.com/2012/02/15/nature-does-not-conspire/?like=1&_wpnonce=93ffd21953
## Nature Does Not Conspire

a personal view of the theory of computation

tags: BQP, Einstein, Machine, quantum, randomness

by Dick Lipton and Ken Regan
Second response by Aram Harrow
Albert Einstein was a great observer of science, as well as doer of science. Most of his quotations as well as theories have proved their staying power over the past century. We, Dick and Ken, feel that some of them are better when understood narrowly within science rather than taken broadly as they usually are.
Today our guest poster Aram Harrow opens his second response to Gil Kalai’s conjectures of impediments to quantum computation by applying one of Einstein’s quotations in such a focused manner.
The quotation, and Einstein’s own clarification of it, are conveyed in this article on him and the mathematician Oswald Veblen, regarding the latter’s request in 1930 to inscribe the quotation on a plaque in the new Princeton mathematics building:
Einstein consented to the inscription of the phrase: “God is subtle, but he is not malicious,” even though he wrote to Veblen that he had meant to say “Nature conceals her mystery by means of her essential grandeur not by her cunning.”
The narrow meaning is that in physics and other natural sciences, solutions may be hard to find but they will not evade theory in perverse ways. Even in cases like the ${N}$-body problem where closed-form solutions are unavailable and behavior may become unstable, the underlying elements are smooth enough that one’s expectations will not be wantonly deceived. This goes especially in a classical geometric theory like relativity—whether Einstein thought this of quantum we don’t know.
By contrast, those who work in number theory must cope with having expectations deceived all the time. For instance, the Mertens conjecture is true for the first zillions of cases but known to be false, though a concrete counterexample is yet to be constructed. (See also this about expectations.) Nastiness may be as Platonic and existential as niceness—in math. If Max Tegmark is right that “all structures that exist mathematically also exist physically,” then Nature must get arbitrarily nasty somewhere. So Einstein and Tegmark cannot both be right, though we could live in a nice “pocket” of Tegmark’s cosmos.
Anyway Gil and Aram are talking about our world, and we give the matter back to Aram. This is Part 2 of his three-part response, and addresses Gil’s contention that quantum errors are subject to correlations that defeat the independence assumptions of fault-tolerant quantum computation (FTQC). There will be a Part 3 by Aram and then (at least) a longer rebuttal by Gil.
## Second Response by Aram Harrow: Nature is Subtle, But Not Malicious
The possibility of a high rate of correlated errors seems reasonable in many ways. Claire Mathieu points out that independence of random events is the sort of thing that often gets blithely assumed, when actually it’s a major assumption. Independence is attractive because it lets us extrapolate from easily observable probabilities to infinitesimally small probabilities, but for this reason it is also dangerous.
Analyzing FTQC is more complicated than analyzing error correction, but the big picture is similar. If an error-correcting code on ${n}$ qubits corrects ${k}$ errors, then independent error rate ${p}$ will be transformed into an error rate that looks like ${\binom{n}{k+1} p^{k+1}}$. If this is an improvement, then we can iterate this process via concatenation and drive the effective error rate down. In all cases, our goal is to apply Chernoff-type bounds that guarantee an overall low error rate.
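To make the recursion concrete, here is a toy numerical sketch (my own illustration; n=7, k=1 evokes the distance-3 Steane code, and the cap at 1 just keeps the numbers interpretable as probabilities):

```python
from math import comb

# Toy model of concatenated error correction: a code on n qubits that
# corrects k errors maps physical error rate p to ~comb(n, k+1) * p^(k+1),
# and each level of concatenation repeats the map.
def logical_rate(p, n=7, k=1, levels=5):
    for _ in range(levels):
        p = min(1.0, comb(n, k + 1) * p ** (k + 1))
    return p

for p in (1e-1, 1e-2, 1e-3):
    print(f"p = {p:.0e} -> rate after 5 levels: {logical_rate(p):.3e}")
# Below the threshold p* = 1/comb(7,2) ~ 0.048 the logical rate collapses
# doubly exponentially; above it, concatenation only makes things worse.
```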
But what if errors are not independent? Then the answer still depends on how likely we are to encounter clusters of ${k+1}$ simultaneous errors. Gil points out (e.g. Lemma 1 of this 2008 arXiv paper) that if errors are pairwise correlated, then the probability of ${k+1}$ errors can be significantly higher. But Chernoff-type bounds still hold. Indeed mere pairwise, or ${O(1)}$-wise, correlation of errors is not enough to derail FTQC. Gil mentions that one of his models of correlations corresponds to the Curie-Weiss model, which describes classical magnets. But here too, the chance of ${k}$ errors decays exponentially with ${k}$ (see Lemma 1 of these notes).
Fundamentally this is because the basic interactions in physics involve two or three particles, for instance when a photon bounces off an electron. (The Standard Model Lagrangian, displayed in the original post, describes all known interactions.) To get interactions with more particles, you need to go to higher levels of perturbation theory, but as you do so, amplitudes go down exponentially. This is also what we observe in experiments, when we entangle 7 atomic nuclei in NMR or 14 electronic states in trapped ions. Of course, one can imagine dangerously correlated noise models. One such model is that with probability ${1-p}$ nothing happens, and with probability ${p}$, the user trips over the power cord, and all bits simultaneously fail. But are these more likely in quantum computers? Let’s look at the two ways that Gil claims these are more likely.
## Does Entanglement Cause Correlated Errors?
Quantum computing (probably) requires producing large entangled states to be interesting. Gil suggests that this entanglement may be self-limiting, by increasing the rate at which correlated errors occur. This is one of the routes he proposes for generating highly correlated errors in quantum computers.
The key reason I regard this as unlikely is that quantum mechanics is linear, in the same way that stochastic maps act linearly on probability distributions. This means that there is no physical process that can distinguish an arbitrary entangled state from an arbitrary non-entangled state, just as no test can determine whether a probability distribution is correlated, given only a single sample. Specific states can be distinguished, but there is no “entanglement” or “correlation” observable that can be measured. In particular, when “noise” is modeled in the usual manner as a trace-preserving completely-positive linear map, then linearity forbids noise depending on whether the host system is entangled.
More generally, error processes are defined in terms of Kraus operators that do not depend on the state being acted on. The way this works is that the state is a vector, and the Kraus operators are matrices that act on this vector. Each matrix-vector product is a possible outcome, with probability given by its squared length.
For example, suppose our qubit is stored in the electronic state of a trapped ion. Spontaneous emission is a type of error with two Kraus operators: one that sends the excited state to the ground state, and one that merely reduces the amplitude of the excited state. These can be represented by 2×2 matrices. They act as a physical process on any state, entangled or otherwise. For the Kraus operators to change as a function of entanglement, quantum mechanics would have to become nonlinear, which would mean utter catastrophe, comparable to what would result from being able to act nonlinearly on the probability distribution of the universe.
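For concreteness, these two Kraus operators are the standard amplitude-damping pair; with decay probability $\gamma$ and basis order (ground, excited),
$\displaystyle K_0 = \begin{pmatrix} 1 & 0 \\ 0 & \sqrt{1-\gamma} \end{pmatrix}, \qquad K_1 = \begin{pmatrix} 0 & \sqrt{\gamma} \\ 0 & 0 \end{pmatrix},$
and they satisfy $K_0^\dagger K_0 + K_1^\dagger K_1 = I$. Neither matrix depends on the state it acts on, which is exactly the linearity point above.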
## How Classical Synchrony Emerges Linearly
But don’t we observe synchronization in real-world systems? Gil gives an example of metronomes: if you have a bunch of mechanical metronomes sitting on a table, then the weak coupling they experience via the common tabletop will cause them to eventually synchronize. There are also quantum analogues in which spin qubits interact with a common bath of bosonic modes. But for the metronomes, what we are really observing is the result of pairwise interactions together with dissipation.
In fact, the process is similar to the error-correction procedure for a repetition code. The differential equation has the form
$\displaystyle \ddot x_i = -\omega^2 x_i + \lambda \sum_j (x_j-x_i) - \epsilon \dot x_i.$
These dynamics will synchronize the metronomes, and thus will generate correlations. But this process has nothing to do with the question of whether the metronomes were correlated in the first place. No physical process will make the ${\lambda}$ term larger if the state is correlated/entangled.
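For readers who want to see this numerically: in the linear equation as written, the friction term $\epsilon \dot x_i$ damps the common mode and the difference modes at the same rate, so a cleaner demonstration of locking puts the dissipation in the coupling itself (relative motion through the shared tabletop). The sketch below uses that variant; it is my illustration, not Aram’s exact equation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Variant of the metronome equation above: dissipation acts on *relative*
# velocities (motion through the shared tabletop), so the difference modes
# damp out while the common-mode oscillation survives.
n, omega, lam, eta = 5, 2 * np.pi, 0.3, 0.05

def rhs(t, y):
    x, v = y[:n], y[n:]
    coupling = lam * (x.sum() - n * x)   # lambda * sum_j (x_j - x_i)
    friction = eta * (v.sum() - n * v)   # eta    * sum_j (v_j - v_i)
    return np.concatenate([v, -omega**2 * x + coupling + friction])

rng = np.random.default_rng(1)
y0 = np.concatenate([rng.uniform(-1, 1, n), np.zeros(n)])  # random phases
sol = solve_ivp(rhs, (0, 60), y0, dense_output=True)

for t in (0, 30, 60):
    x = sol.sol(t)[:n]
    print(f"t={t:2d}  spread across metronomes = {x.max() - x.min():.4f}")
# The spread shrinks toward zero: the metronomes lock onto a common swing.
```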
Moreover, these types of correlations can be handled by FTQC. What it means for the error rate to be, say, ${10^{-3}}$, is that our control is 1000 times faster/stronger than the error processes. So even if the system would synchronize if left alone, sufficiently fast control can still prevent this by acting before multi-qubit correlations can build up. This argument is developed in detail in a paper by Aharonov, Kitaev and Preskill.
One subtlety concerns the semi-classical approximation. For example, suppose we apply a laser to the trapped ion. If it is the right frequency, the laser will cause X rotations. This can be viewed as applying a controllable Kraus operator. But now the Kraus operator still depends on the state of something; namely, the classical system controlling the laser. This appears nonlinear, but really the true Kraus operators involve two-body interactions between the ion and individual photons in the laser. Each such interaction is very weak, and when we add up the many photons comprising the laser beam, we get what looks like a Kraus operator acting only on the ion, but with parameters that depend on the settings of the laser. There is fundamentally no nonlinearity, and more important, nothing that is even approximately nonlinear in the state of the ion, which is what is relevant to the quantum computer.
## Does Computation Cause Correlated Errors?
Another route to correlated errors is by piggy-backing on our computation. For example, suppose that we control our ion-trap quantum computer by blasting it with lasers. The lasers are macroscopic objects, and if there were other ions, or systems that behaved like ions, lurking near the ions we use for qubits, then these would be similarly affected by the lasers. If we were unlucky, these “shadow qubits” might interact with our computational qubits in ways that caused errors, and now these errors would exhibit complicated correlation patterns that would depend on the history of laser pulses we have used. Thus, even though there is no direct way for errors to depend on whether our states are entangled or not, errors could depend on shadow qubits, whose entanglement/correlation could be produced at the same rate as entanglement/correlation in our quantum computer.
This type of correlated noise is possible, and is indeed devastating for some types of error-correction.
But general situations are better behaved. For ion traps, we really have a single ion floating in space, with other ions far away. There are other quantum systems besides the electron, such as the nucleus, or the electrons in the inner shell, or modes of the electromagnetic field. They all look different from the qubits we are using for the computation, and they respond to different frequencies, have different interactions, and so on. The issue is whether these ambient factors can induce correlated errors into the computation qubits.
## Shadow Errors Cannot Set the Pace
The errors may be correlated among themselves in various malicious ways. For example, if we move the ion to bring it closer to another ion, then the nucleus (whose spin could be called a shadow qubit) moves right along with the electron that we are using for our computation. But the point is that these effects can be brought within the modeling, and thus dealt with.
In fact, experimental physicists have spent a long time studying decoherence, and a large amount of it is due to what you could call shadow qubits. In many processes, it’s understood how this works, and often there are known methods of reducing decoherence with pulses that decouple shadow and computational qubits. So shadow qubits are something we need to be aware of, but in practice, when people look at implementations, they usually are aware of them.
Could shadow qubits generate the kinds of highly correlated errors that are dangerous for quantum computing? It seems unlikely. In general, they may cause extra two-qubit errors, but for them to track the computation precisely enough to cause multi-qubit errors, their task would be as difficult as that of the computation itself. This may be true of the stock market or international espionage, but atoms and photons are not intelligent or malicious enough to defeat us in this way.
In fact, nobody knows how to even design a hypothetical Hamiltonian for an environment in which single-qubit error rates are below, say, ${10^{-6}}$, but correlated decoherence dooms fault-tolerant quantum computing. A weaker goal would be to design such a Hamiltonian in which FTQC may be possible, but our proofs fail; see Preskill’s comment for thoughts along these lines. One reason for this may be that it is hard to know what a computer is actually doing.
Finally, to bring in a point of my first response, shadow qubits are already present in classical computers. For instance, 0s and 1s produce different heating patterns in classical computer chips. This heat, in turn, might cause errors in the circuitry. But for the heat to cause bit flips in just the right way to overcome software error correction would require an incredibly unlikely conspiracy of events—a conspiracy that doesn’t have a plausible mechanism, and that isn’t observed in practice.
## Brief Rejoinder by Gil Kalai on Correlated Errors and Entanglement
Correlated errors are not caused by entanglement but by the entire computing process that also leads to the entanglement. Just as in Aram’s example with lasers: when we isolate all other ingredients and examine how the error operation relates to the entanglement of the qubits, the relation is nonlinear, but there is nothing about it that violates QM linearity.
Let me propose a thought experiment. It is so imaginary that I ask you to close your eyes.
Imagine a quantum mechanical world where in order to prepare or emulate a quantum state you need to create a special machine. Each state requires a different kind of machine. Of course, there are some quantum states you can design but cannot prepare at all, and for others it may take decades to build the machine.
The states you build are not perfect. Compared to the ideal pure state you intended to build they are noisy. The noise depends on the machine. And now, since the machine is specific to the state that you want to build, there is some systematic relation between the state and the noise. You would like to understand and find the laws for this relation. There is no reason to believe that the noise will be the same for all prepared states (or “linear,” as people refer to such independence). Of course, it is not the state that causes the noise. It is the common properties of all the machines needed to create this specific state that lead to systematic relations between the state and the noise.
And now open your eyes. This is the world we live in.
Possible relations between the created state and the noise are part of our reality. When we build universal quantum computers we will change this reality, but our attempts to build them should be based on this reality. My conjectures attempt to draw such a systematic relation between states and noise.
## Updates From Comments
The comments sections of Gil’s post and Aram’s first response have engaged the above points in more technical detail. Here is a digest:
• Joe Fitzsimons noted that particle interactions are known to great detail, and those found in nature obey a restriction on their Hamiltonians called 2-locality, so as to limit the scope for correlated noise. Kalai agreed but noted that the latter applies only to cases where the states along the evolution themselves obey 2-locality.
• Commenter matt pointed out that Kalai’s conjecture 1 would be vacuously true if it concerned the problem of encoding an initially unprotected single qubit, as opposed to a full FTQC scheme with data qubits initialized, operated on, and measured always from within the protection of an error-correction code. Gil clarified that his conjecture states that even with fault-tolerant initialization schemes, the rate of logical errors will remain above some universal constant.
• Back in the first post’s comments, Robert Alicki gave a detailed argument on how the environment could maliciously interfere with a computation. This included remarks on Kraus operators that Aram acknowledged but do not seem to refute the defense given above. The key point going further than Aram’s sections above on shadow effects is that the environment can add friction to the system’s dynamics. This confers stability on classical systems, at the cost of heat dissipation and irreversibility, but quantum systems cannot afford to pay this piper. Aram, however, replied that quantum error correction works as friction in this beneficial manner while avoiding the problems.
Further discussion in this long comment thread led John Preskill to adjudicate some of the contentions based on mathematical modeling. This shows that cooling considerations do not single out difficulties for quantum over classical computation, and that some of Alicki’s objections fail to apply to non-Markov noise models. He then referenced his first comment, with a 6-page paper giving a less-restrictive Hamiltonian noise model under which a threshold theorem conditionally supporting FTQC can be proven. He allowed it may not be realistic, but it contradicts the objections insofar as they apply to it. The same was observed for a case where a long computation could drive the environmental “bath” into a highly adversarial state.
• In reply to Preskill’s first comment, Cristopher Moore allowed that “issues of correlated noise are subtle and important,” but called out Kalai’s conjectures as lacking sufficient motivation or detail to see why the errors should conspire against QCs, and failing to explain how noise is to be treated as separate from the computation’s physical process when it is really part of that process.
• Meanwhile from the ringside, Boaz Barak asked for an update on what is currently obstructing progress on building QC’s. This drew detailed replies by Joe and Aram.
• Gil showed how the “roadmap” has expanded as a result of these discussions, and tried to answer Aram’s open problem about constructing even a single physical system where (some of) his conjectures hold. Although the initial attempt was headed off by Matt, this led to interesting further exchanges about possible concrete physical tests in that thread, and a separate discussion beginning here.
## Open Problems
To you, dear reader, what appears to be the core of the dispute? Is there enough specific mathematical modeling of how errors could correlate in harmful ways? Or do the high-level arguments by Aram and others suffice to reason that quantum computations themselves—in at least some reasonable systems supporting them—can out-pace any special error-causing effects, so that the standard conditions for fault tolerance assuming low dependence confer enough protection for them?
### 119 Comments
1. February 15, 2012 3:41 pm
I am trying to understand Gil’s rejoinder, but I find it a bit difficult to read the description of his thought experiment with my eyes closed.
• February 15, 2012 5:03 pm
OK—I guess you’re commenter “matt”. Thanks for the comments!
2. February 15, 2012 11:23 pm
Dear Aram,
I don’t understand this part of your argument:
“Gil points out (e.g. Lemma 1 of this 2008 arXiv paper) that if errors are pairwise correlated, then the probability of k+1 errors can be significantly higher. But Chernoff-type bounds still hold. Indeed mere pairwise, or O(1)-wise, correlation of errors is not enough to derail FTQC.”
• February 16, 2012 12:29 am
Yeah, I realize this point is a little unclear, since pairwise correlations are just one property of distributions among many.
Here’s another way of saying my point. What kills error-correction is not the failure of independence, but the failure of the Chernoff bound. The exception is that if you have predictably correlated errors, those are especially easy to correct. But in general, you need the amplitude of k simultaneous errors to scale like alpha^k for alpha<1.
What I meant about pairwise not being bad is that it's ok if that scaling doesn't kick in until k is larger than some constant. FTQC can still work then, but with larger overheads.
• February 16, 2012 5:29 am
My claim is the following. Suppose you have n qubits. Suppose you know that the probability for every qubit to be faulty is small, say 1/10,000. (We both agree that if the errors are independent then the probability that more than n/5000 bits will fail at once is exponentially small with n.)
Now suppose that you know that the pairwise correlation between qubit i being faulty and qubit j being faulty is >0.4 for every i and j. So you have information only about pairwise correlation. I think that this suffices to imply that there will be error synchronization. Do you disagree?
• February 16, 2012 7:17 am
“Now suppose that you know that the pairwise correlation between qubit i being faulty and qubit j being faulty is >0.4 for every i and j. So you have information only about pairwise correlation. I think that this suffices to imply that there will be error synchronization.”
Gil, I don’t think this is enough information to be very meaningful. A simple example of a noise which satisfies your description is that with probability 1/10,000 we flip all qubits simultaneously. This gives completely correlated errors. However it is very easy to correct this particular noise.
As Aram has already pointed out, correlated errors are easy to correct if the noise structure is known.
This area is not one in which I would consider myself an expert, but it seems to me that a much more specific model for the proposed noise is needed in order to have a meaningful discussion. But of course part of the discussion aims to show that there cannot be noise of the conjectured type.
One question which looks relevant is whether or not you think that using a quantum error-correcting code to transfer states which are completely classical will be impossible. This is a question which is only about the code, its preparation, and the final decoding, and does not involve any additional quantum algorithms.
• aram
February 16, 2012 2:05 pm
Well, certainly if you make the correlations large enough then you eventually get the “trip over the power cord” model, in which all bits fail simultaneously with a non-negligible probability.
Maybe the best way to make this concrete is with the Curie-Weiss model, or something else that produces distributions of the form P(x_1, ..., x_n) = e^{-f(x)} for f a low-degree polynomial of the bits. These definitely exhibit phase transitions.
But for sufficiently low constant values of the coefficients of these polynomials, things would still be ok for a fault-tolerant quantum computer.
Also, as for interactions, I don’t think it’s reasonable to assume that the interaction graph is a complete graph. Realistically, it should be something that can be embedded in two or three dimensions. (Yes, there is a long-range Coulomb interaction, but this can still be modeled by local interactions between electrons and photons.)
• February 16, 2012 8:48 pm
Dear Aram, good, it looks like we are in some agreement on this technical issue. (Of course, the readers should be warned that even if we agree we can both be wrong.) Let me say how I see it:
Suppose that you can represent the distribution of errors by a distribution of 0-1 vectors of length n for the n qubits: 0 means no error and 1 means error. (Indeed, when you know the noise operation you can represent the errors in this way based on expansions via tensor products of Pauli operators.)

If the probability for every bit to be 1 is 1/1000 (say) and you have independence, then the probability that 1/500 (say) or more of the bits have errors is exponentially small with n.

If the probability for every bit to be 1 is again 1/1000 but the pairwise correlations between the events x_i=1 and x_j=1 are high, say above 0.4, then the probability that even 1/50 (say) of the bits will have errors is already substantial (and does not go to 0 with n). So we do witness error synchronization.

(Actually this goes on with more than 2 bits: if you have pairwise independence (or almost independence), then dependencies for triples may lead to error synchronization, and so on.)
It would be interesting to find a good mathematical description for these types of observations.
In any case large pairwise positive correlations do imply error synchronization.
On a related matter you wrote: “Also, as for interactions, I don’t think it’s reasonable to assume that the interaction graph is a complete graph. Realistically, it should be something that can be embedded in two or three dimensions.”
Well, this is indeed an important point. My conjecture 3 says that when a pair of qubits is entangled (or even has a large “emergent entanglement”), the errors on them will be substantially positively correlated. So, assuming that this conjecture holds, the relevant “graph of interaction” will be complete for most pairs of qubits for interesting states that we use for quantum computation and quantum error correction. (And again, Conjecture 3, like all its sisters, does not depend on the geometry of the computer.)
You say that you regard Conjecture 3 as unrealistic, and this is perfectly fine. If I understand you correctly, it is precisely this relation between the error correlations and the pair of qubits being entangled that you do not regard as realistic.
• February 17, 2012 3:14 pm
Dear Klas, you wrote: “I don’t think this is enough information to be very meaningful. A simple example of a noise which satisfies your description is that with probability 1/10,000 we flip all qubits simultaneously. This gives completely correlated errors. However it is very easy to correct this particular noise.”
The type of noise where a large fraction of all qubits is hit simultaneously is sufficient to defeat fault-tolerance schemes. I don’t think it is easy to correct this type of noise.
• February 17, 2012 5:31 pm
Dear Gil; “The type of noise where a large fraction of all qubits is hit simultaneously is sufficient to defeat fault-tolerance schemes. I don’t think it is easy to correct this type of noise.”
This depends strongly on the type of noise considered. If the noise is highly structured it is quite easy to modify a code to take that structure into account. The basic observation is that saying “a large fraction of all qubits will be hit simultaneously” is not sufficient. In my simple example where all qubits are flipped simultaneously, one can simply say that every codeword and its negation are mapped onto the same word by the decoding procedure. That will lead to perfect protection against this particular noise type, no matter how high the error rate is.

In general, a highly structured noise will mean that the volume of the set of words which an original codeword can be mapped into by the noise is smaller than for an uncorrelated noise, and typically a smaller volume means that it is easier to construct an error-correcting code.

My point is that in order to be able to say whether error correction will work or not, it is not sufficient to say that a qubit is hit by the noise; one must also say in which way it is hit and how the type of error on that qubit is correlated with the type of error at the other qubits.
This in itself does not rule out your conjectures, but I believe that more information than simple correlations is needed.
• matt
February 17, 2012 5:48 pm
Gil and Aram: if you have a model in which there is a large pairwise correlation between certain pairs of bits then we can, as you note, describe this by an interaction graph, where edges on the graph indicate which pairs of bits are correlated. What happens then depends a little on what further assumptions you make. Let’s consider the simplest setting, in which each bit is 0 or 1 with probability 1/2 (yes, I know that this is a much higher error rate, but it is easier to talk about). A natural guess then is that there is a Boltzmann-like weight for different configurations, and, as Aram notes, in two dimensions there will be a phase transition in which the errors synchronize once the correlations are large enough. But here is the point I want to make, since Gil likes to think about worst cases (though perhaps in this setting it is a “best case”…): it is possible to have all those local correlations in two dimensions without having global correlation. It is an easy exercise to make up probability distributions which have all those local correlations without global correlation (i.e., in which the correlation between two far-separated bits is small) on a two-dimensional lattice. However, on an expander graph this is not possible for sufficiently high correlations: once the pairwise correlations are big enough, local correlation implies global on an expander graph. (This comment has no bearing on actual quantum computers, just an interesting point of stat mech/comp sci.)
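One way to see the numbers in Gil’s example in action is the toy mixture model below (an editorial sketch, not anything from the thread): with small probability q a global bad event hits every qubit independently at rate r, and otherwise nothing happens. Taking r = 0.4 and q = p/r gives a marginal rate of about 1/1000 and pairwise correlations near 0.4, yet a constant (n-independent) chance that a large fraction of qubits fail at once.

```python
import numpy as np
rng = np.random.default_rng(2)

# Mixture model: with probability q a "bad event" hits every qubit
# independently at rate r; otherwise no qubit fails.  For small p this
# gives marginal rate ~q*r and pairwise correlation ~r.
p, r = 1e-3, 0.4
q = p / r                                     # = 0.0025

def sample(n, trials):
    errs = np.zeros((trials, n), dtype=bool)
    bad = rng.random(trials) < q              # which runs hit the bad event
    errs[bad] = rng.random((bad.sum(), n)) < r
    return errs

n, trials = 1000, 200_000
errs = sample(n, trials)
print("marginal error rate:", errs.mean())    # ~1e-3
print("corr(qubit 0, qubit 1):",              # noisy estimate, ~0.4
      np.corrcoef(errs[:, 0], errs[:, 1])[0, 1])
print("P(>2% of qubits fail at once):",       # ~q, does not -> 0 with n
      (errs.sum(axis=1) > 0.02 * n).mean())
```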
3. February 16, 2012 2:04 am
Aram argues very eloquently that errors which piggy-back on the computation are unlikely. However, it seems like such piggy-backing is essentially impossible in general if it must only affect entangled states and not separable states, due to the existence of blind quantum computing protocols. These allow information about the current state of the machine to be hidden from its operator, and known only to some remote party.
A trivial example of this is where two parties share N pairs of qubits in the anti-symmetric state. These are only entangled states of 2 qubits, and our ability to produce such states is well documented. The first party (Alice) chooses a random bit R and measures all of her qubits in either the Z basis (if R=0) or the X basis (if R=1). The second party (Bob) then applies the gate diag(1, 1, 1, i) between nearest neighbours.
If R=0 then Alice’s measurements project Bob’s qubits onto eigenstates of the gate Bob applies, and hence the result is a separable state. If R=1, however, Alice’s measurements project Bob’s qubits onto X eigenstates, which lead to a highly entangled state (according to Gil’s definition).
In this case although Bob is performing the entangling operations, it is impossible for them to tell whether they are producing an entangled state or not. Only Alice knows whether the result is entangled or not.
A loophole would be to suggest that Alice’s device malfunctions when R=1. However, since its operation should be independent of Bob’s device, it is possible to check this by, instead of performing an entangling operation, performing single-qubit measurements on Bob’s qubits.

While it is possible to come up with ways to have Alice’s and Bob’s devices collaborate which defeat this argument, if you make the rather plausible assumption that the action of the devices involved is independent of whatever is done to the remote qubits, this can be ruled out.
• February 16, 2012 5:49 am
Joe, I am not sure what you mean by piggy-back. In any case, the conjectures only say this: every quantum computation comes with a noise (perhaps on top of other standard noises) with the following property: if your computation leads to a state where qubits i and j are entangled, then the events of i being faulty and of j being faulty will be positively correlated.

I cannot guarantee you that this harmful noise will act independently on pairs of qubits which are not entangled. This depends on the computation process, the device, etc.

Perhaps one thing we can discuss and perhaps agree about is that if we have a quantum computer without quantum fault tolerance, then this property is likely to be satisfied.
• February 16, 2012 6:08 am
Hi Gil,
Sorry, I was using “piggy-back” in the sense Aram used it in the “Does Computation Cause Correlated Errors?” section.
Your response to Aram seemed to suggest that you believed that the design of your device and operations you apply necessarily determine the entanglement of the state you produce. I’m referring specifically to statements like this: “Imagine a quantum mechanical world where in order to prepare or emulate a quantum state you need to create a special machine. Each state requires a different kind of machine……This is the world we live in.”
Such a belief would be incorrect, since we know ways to avoid there being enough information present in the (quantum) instructions the device receives to be able to determine virtually anything about the state it has produced.
However, I could be misunderstanding your meaning. Perhaps you simply meant that if you try to build a universal quantum computer you will always get massive correlated errors independent of what you try to do with it (including classical computation for example).
• Jon
February 16, 2012 2:22 pm
“Perhaps one thing we can discuss and perhaps agree about is that if we we have a quantum computer without quantum fault tolerance then this property is likely to be satisfied.”
If you consider Joe’s example, then it does not satisfy this property even if there is no quantum fault tolerance.
• February 19, 2012 4:53 pm
Joe wrote: “A trivial example of this is where two parties share N pairs of qubits in the anti-symmetric state. These are only entangled states of 2 qubits, and our ability to produce such states is well documented. The first party (Alice) chooses a random bit R and measures all of her qubits in either the Z basis (if R=0) or the X basis (if R=1). The second party (Bob) then applies the gate diag(1, 1, 1, i) between nearest neighbours.

If R=0 then Alice’s measurements project Bob’s qubits onto eigenstates of the gate Bob applies, and hence the result is a separable state. If R=1, however, Alice’s measurements project Bob’s qubits onto X eigenstates, which lead to a highly entangled state (according to Gil’s definition).”
Joe, I don’t understand the specific example. The ability to produce a pair of entangled states is well documented, but it is also well known (and we always assume) that such entangled states produced by quantum gates are noisy and the noise is not necessarily independent. (In fact, it is reasonable to assume that gated entangled states will be subject to measurement-noise where errors are positively correlated.) A similar remark may apply to the gates you propose to perform at the second stage (although I am not sure what they are, precisely).
The whole question is if we can use quantum error-correction to create pairs of entangled qubits subject to independent errors when we assume that the errors for gated qubits are dependent.
There is the more general issue of whether “blind computation” can be used as an argument “against” my conjectures. Let’s return to it.
• February 20, 2012 10:09 pm
Hi Gil,
I don’t quite follow your comment. Surely you accept that it is possible to produce maximally entangled pairs such that, even though these states may be somewhat noisy, there are no correlated errors across pairs?
My point was that if you can do this (and, really, we know that you can) then you can construct a non-local computation in such a way that no local system can tell whether the resultant state is entangled or a product state. In my example I only limited the information for the party performing the entangling gates, but it is pretty trivial to extend this to the second party. So you can construct protocols whereby it is impossible for the noise to adversely affect preferentially (or only) entangled states without violating linearity, since the noise is presumably a local process.
• February 20, 2012 10:38 pm
Hi Joe, OK, let me try to understand your example step by step. Suppose Alice always chooses R=1 and measures her qubits accordingly, without any random bit. Is this simplified version also a counterexample to my conjecture (and if not, how do we see it)?
• February 20, 2012 10:41 pm
No, because then the protocol dictates the final state of the system.
• February 20, 2012 11:00 pm
OK, specifically to this case: the strong version of Conjecture 3 talks about emergent entanglement (which is what you get by measuring other qubits and looking at the results), so in your scenario the two qubits will always have large emergent entanglement and hence, by the conjecture, positively correlated errors.

More generally: when I say “if A then B” it does not mean “if and only if A then B,” and it also does not mean “always B.” It seems that you allow only two ways to understand my conjecture: either I say “the errors are correlated precisely when the qubits are entangled” or I simply say “the errors are always correlated.” No; what I say is: the errors are correlated when the qubits are entangled.
• February 20, 2012 11:23 pm
Yes, I am clear on the difference between “if” and “if and only if.” However, there are a number of things which have led me to make my last few comments on this. One such example is the context in which these conjectures are made, which attempts to rule out quantum computation while allowing classical computation. The alternative interpretation seems to rule out more than quantum computation, since your conjecture would seem to suggest that even in the case where the computation results in classical states, they must contain these disastrous errors. This seemed a stronger statement about noise than I thought was your intention to make.
• February 21, 2012 12:06 am
Well, you are right, Joe, that a constant worry in formulating the conjectures is that they do not fall into the trap of applying to classical correlations. Of course, if the conjectures as stated fall into this trap then they should be sent back to the drawing board or eliminated altogether. When it comes to Conjecture 1, which we discussed last time, it looks to me that it does not have any consequence for classical error-correction. Conjecture 3 (even in its strong form regarding emergent entanglement) was also tailored not to apply to classical correlation. This was clearly my intention, but of course maybe I missed something, and if so I will be happy to know it. So if this particular example, or another one, supports what you say (that they “seem to suggest that even in the case where the computation results in classical states, they must contain these disastrous errors”), please let me know.
4. February 16, 2012 8:58 am
Please let me say that I am very grateful for all of these wonderful posts. In (slow) preparation for a longer post next week, here is a concrete bit/qubit system that I am thinking about, that has given me considerable respect for the plausibility of Gil Kalai’s conjectures.
As a warm-up exercise, let’s consider a dynamical system of nine classical bits, each subject to thermal noise. Consulting the Wikipedia page “Hamming Codes,” we observe that these nine classical bits support a (7,4) Hamming code, plus one parity bit, yielding a 4-bit SECDED memory, encoded in 8 classical bits (with one bit left over).
An instructive next step is to read at least the introductory chapter of an engineering textbook about fault-tolerant computing: for example, Jangwoo Kim’s Two-Dimensional Memory System Protection (2008). The lesson-learned is the humility-promoting reality — which Nature already teaches with respect to biological error-protection dynamics — that classical large-scale memory devices, both engineered and evolved, make maximal use of error-correction principles; this solidifies our appreciation that classical error correction is routine, and that multiply-nested levels of classical error correction are common practice.
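For anyone who wants to poke at this classical warm-up, here is a minimal sketch of the (7,4) Hamming code plus an overall parity bit (SECDED); the conventions and helper names are my own:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column j is the binary
# expansion of j+1, so a nonzero syndrome names the position of an error.
H = np.array([[(j >> i) & 1 for j in range(1, 8)] for i in range(3)])
DATA = [2, 4, 5, 6]     # positions 3, 5, 6, 7 carry data
PARITY = [0, 1, 3]      # positions 1, 2, 4 carry parity

def encode(bits4):
    cw = np.zeros(7, dtype=int)
    cw[DATA] = bits4
    for r, p in enumerate(PARITY):
        cw[p] = (H[r] @ cw) % 2
    return np.append(cw, cw.sum() % 2)        # overall parity bit -> SECDED

def decode(word8):
    cw, par = word8[:7].copy(), word8.sum() % 2
    syn = int("".join(str(b) for b in (H @ cw % 2)[::-1]), 2)
    if syn and par:                            # single error: correct it
        cw[syn - 1] ^= 1
    elif syn:                                  # syndrome fires, parity clean
        raise ValueError("two errors detected, not correctable")
    return cw[DATA]

word = encode([1, 0, 1, 1])
word[4] ^= 1                                   # flip one bit in transit
print(decode(word))                            # -> [1 0 1 1]
```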
Now let’s promote our nine classical bits to nine quantum qubits, and apply (per its Wikipedia page) “Shor’s algorithm” to protect one of those qubits. To simulate noise in a fully quantum manner, let’s introduce nine more qubits (making eighteen in all), and introduce small random dipole couplings pairwise among all eighteen qubits. Now we have two ${9}$-qubit systems, and each ${9}$-qubit system serves as a generator of noise to the other. We will call these two ${9}$-qubit systems ${A}$ and ${B}$.
Gil Kalai’s general conjecture (as I read it) then predicts the following: Shor’s algorithm can either protect ${A}$‘s qubit from ${B}$‘s noise, or else it can protect ${B}$‘s qubit from ${A}$‘s noise, but when Shor’s algorithm simultaneously protects both ${A}$‘s qubit and ${B}$‘s qubit from each other’s noise, the error rate is severely (even non-scalably?) increased.
The physical intuition is that the Shor error-correction process itself, when applied to ${A}$‘s qubits, alters the dynamics of the noise generated by the ${A}$ qubits, in such a fashion that the now-correlated ${A}$-qubit noise becomes fatal to the ${B}$-qubit error correction process. And vice-versa.
More broadly, within this coupled-qubit model of quantum noise, we can postulate that Shor-type error correction works for individual qubits, but does not scale to large ensembles of qubits, for the concrete reason that error-correction in any one portion of a QC memory generically acts to propagate adverse forms of correlated noise to the remaining qubits, thus disrupting their ongoing error-correction.
The primary intent of this post is to convey an appreciation that the Kalai conjectures are more than philosophical speculations: these conjectures can be associated to specific predictions regarding quantum computing that are physically plausible and practicably testable, both immediately in terms of numerical simulation, and possibly even in small systems of physical qubits.
New-fangled methods for “geometrizing” the preceding dynamical intuitions, with a view toward proving rigorous theorems, via the powerful proof technologies that algebraic geometry provides, are a present interest of mine. The above multi-qubit example was conceived within this algebraic/geometric context, and in the next week or two I hope to post here on GLL and/or MOF about the capabilities of these algebraic/geometric proof technologies.
In the meantime, we can all be appreciative of and thankful to Gil and Aram (and everyone else posting here on GLL) for this wonderfully thought-provoking QM/QC/QIT debate. Surely there are many more good posts to come. Please sirs, we’d like some more!
• February 16, 2012 11:04 am
To summarize the above toy-example aphoristically:
A Kalai-type Conjecture: The transformation of classical noise that is induced by classical error-correction processes is generically innocuous, but the transformation of quantum noise that is induced by quantum error-correction processes is generically adverse.
• February 16, 2012 1:50 pm
As the first of two final remarks, the intended Wikipedia reference was not to the “Shor algorithm”, but rather to the “Shor Code” (both of which of course are truly seminal innovations).
The second remark is a suggestion: anyone (maybe a student?) who contemplates testing Kalai-type conjectures in numerical experiments would be well-advised to consult both John Preskill’s article “Fault Tolerant Quantum Computation” (arXiv:quant-ph/9712048) and (for reasons of numerical efficiency) consider implementing the (shorter) 7-qubit code of Steane, and/or the (shortest) 5-qubit code of eq. 10.104 of Nielsen and Chuang’s textbook Quantum Computation and Quantum Information. In particular, the existence of 5-qubit error-correcting codes would seem to render Kalai-type correlated-noise conjectures reasonably amenable to numerical experiment on an ordinary laptop computer.
• February 21, 2012 12:57 am
As a further update, working through the details of the above computational program, with a variety of more-or-less realistic noise models, teaches great respect for the generality and scope of the theorem proved in John Preskill’s earlier GLL comment and note titled “Sufficient condition on noise correlations for scalable quantum computing”.
Moreover Preskill’s theorem dovetails nicely within the student-friendly multimedia content that he has provided on the UCSB-hosted web page “Pedagogical Lecture: Fault Tolerant Quantum Computation” (which a Google search will find).
A natural question is: Do we live in a world in which the quantum assumptions of Preskill’s theorem do not apply, even in idealized cases?
Assuming that we do live in such a world, it may be a world in which either: (1) essential aspects of the dynamics of thermal reservoirs and measurement devices are nonperturbative (because Preskill’s sufficient conditions assume a convergent perturbative dynamical expansion), and/or (2) the state-space geometry of quantum dynamical systems is locally Hilbert, but globally has some other geometry (because Preskill’s expansions assume a global Hilbert geometry for the expansion).
Alternative (1) leads straight to the well-known complexities that are associated to renormalization, quantum field theory, and cavity QED, whereas Alternative (2) leads straight to the well-known complexities that are associated to spatial localization, geometric quantum mechanics, and general relativity.
Both alternatives thus lead to tough classes of open quantum questions, that refreshingly are seen from new perspectives. That’s why this is a great topic!
5. February 17, 2012 3:43 pm
Dear all, let me just mention a few technical and conceptual issues that were raised here:
1) Joe raised the possibility of applying “blind” quantum computation, which allows information on the state of the computer to be hidden from its operator. So according to Joe, this possibility is in direct conflict with my conjectures and with the possibility of having a relation between the state of the computer and the noise.
2) Jon asserted that Joe’s example does not even require quantum fault tolerance.
3) Aram raised the question of whether pairwise correlation alone can lead to error synchronization. (I think we are now in agreement that the answer is yes, but it will be useful to understand the connection quantitatively.)
4) Aram pointed out that we can assume that the “graph of interaction” is sparse. Perhaps it will be useful to elaborate on that, since I am not sure I understood it.
5) There is also some debate regarding a possible loophole Joe suggested for Conjectures 3 and 4 based on moving to another basis.
6) Of course, the main issue raised by Aram is to what extent it is reasonable to expect highly correlated noise.
7) Yet another question that Aram raised, is if there is tension between my conjectures and linearity of quantum mechanics.
8) I raised in my reply the following point: we can think about a quantum computer as a universal machine for creating complicated quantum evolutions and quantum states. The situation today (before quantum computers) is that we need special-purpose machines for each quantum state that we want to create; much of the intuition people have regarding the hypothetical behavior of quantum computers, and relevant models, implicitly assumes such a general-purpose machine.
6. February 18, 2012 12:21 pm
Worthwhile discussion on how D-Wave’s machine actually works. The more technical discussion starts at 29:50. When they say in the discussion that it is literally a physical analogue of a math problem, they aren’t kidding. The circuit board really is built to simulate a matrix. Is it a quantum computer? I think that depends on what the technical definition of quantum is. Does it solve IP problems quickly? It looks like it can.
It looks like what they are doing is building a magnetic field that can be tuned, and then adding some energy and seeing how it distributes across the chip. The chip then approaches some final state that represents an optimal distribution. Because all the elements are coupled, quadratic effects are accounted for.
It is literally a physical analogue of illustration I.1.1 on page four of A.Zee’s book “Quantum Field Theory In A Nutshell”.
http://webcasterms1.isi.edu/mediasite/SilverlightPlayer/Default.aspx?peid=a0ada8c3b0f9448f8692a8046534f99c1d
7. Gil Kalai
February 18, 2012 6:44 pm
Hi everyone, let me add to the list as point number 9) Klas’s interesting comment that if the noise is very structured, errors can be corrected even in cases of error synchronization. This is related to a point raised by Chris in the first post about the heuristic/unclear nature of the term “noise” to start with. Great points.
Matt wrote “This comment has no bearing on actual quantum computers, just an interesting point of stat mech/comp sci.” Matt, it goes without saying that from my point of view such comments are most welcome.
• Gil Kalai
February 29, 2012 4:52 pm
Perhaps one more question is this: 10) Is a nature which satisfies my conjectures regarding correlated errors, the relation between signal and noise, noisy codes, etc., really malicious? I realize that such behavior would be considered malicious to the idea of efficient factoring, but nature also has other things to worry about.
• February 29, 2012 6:30 pm
I’d say that correlated noise would be shocking not because we have a right to fast factoring algorithms, but because it would cast doubt on the scientific process and the idea of reductionism. We believe we can understand the properties of electrons by studying one or two of them, and that the same principles apply when we have 10 or 10^23. In principle, this doesn’t have to hold, but if it didn’t then scientific induction would be a less useful tool.
• Gil Kalai
March 1, 2012 8:45 am
Aram, I don’t see the relation with reductionism; the phenomenon I considered, that two entangled qubits have positive correlation of errors (or specifically of information-leaks), is most probably true for gated qubits, and for quantum computers with few qubits. This phenomenon just talks about two qubits. (I think we are, more or less, in agreement that high positive pairwise correlation suffices to imply error synchronization.) So I don’t see how scientific induction is relevant here. It is true that if quantum fault tolerance is possible we can create two entangled qubits with almost independent errors, but if quantum fault tolerance is not possible, or until it is possible, we cannot expect such a pair.
• March 1, 2012 9:20 am
I wouldn’t say we can create two entangled qubits with independent errors; the interaction itself could always be faulty (and this is a standard assumption of FTQC).
But I would say that whatever errors there are on those interactions, we should be able to repeat the interaction many times in parallel, with little-to-no correlation among these parallel entanglement-creation experiments. If this were false, scientific induction would be thrown into doubt.
Further, I think we should be able to repeat the same faulty interaction on the same entangled qubits that have been moved around and matched up in different ways. And here I believe that generically the new errors should have little-to-no correlation with the past errors, based on the principle that particles generally don’t remember where they’ve been and what has happened to them in the past. This principle definitely doesn’t apply to the stock market, but it’s been (somewhat) experimentally demonstrated for things like photons and buckyballs via things like the two-slit experiment, in which creating an external record of a particle’s path changes observable quantities.
• Gil Kalai
March 1, 2012 11:06 am
Dear Aram, all your beliefs are reasonable and this is what we try to discuss. I just referred to the reductionism claim. But I realize that Point 10 was unclear so let’s forget it.
• March 1, 2012 12:29 pm
It seems to me that Aram’s physical assumptions are reasonable, and in particular that all of the great FTQC theorems are rigorously applicable, in an ideally reductionist world in which no two qubits are in thermalizing interaction with a shared vacuum (that is, a shared zero-temperature bath).
So have Aram’s arguments prevailed? This is less clear (to me).
We all appreciate that, to a degree that has been challenging to quantum experimentalists, Nature seemingly requires of qubits that (1) they generically couple to vacua (of one sort or another), and that (2) qubit vacua cannot readily be reductionally isolated from one another to the degree required for FTQC.
In short, the reductionist program of independent qubit noise associated to independent qubit vacua has been surprisingly hard to achieve.
So is qubit vacuum design solely a practical problem, to be overcome by creative quantum systems engineering, or does it point also to fundamental gaps in our appreciation of quantum vacua? It seems hardly likely that the Harrow/Kalai debate can reach any robust conclusion, until these vacuum-related issues are clarified, by arguments that are more nearly rigorous than any that have yet been presented.
8. February 19, 2012 5:16 pm
While there are various important technical and conceptual issues that were raised by Aram’s post and by the following comments, I would like to have a better understanding of Aram’s overall logic:
First Aram describes my conjecture:
“Gil suggests that this entanglement may be self-limiting, by increasing the rate at which correlated errors occur.”
Aram’s reply is: “The key reason I regard this as unlikely is that quantum mechanics is linear,”
Later Aram described shadow qubits and said:
“Thus, even though there is no direct way for errors to depend on whether our states are entangled or not, errors could depend on shadow qubits, whose entanglement/correlation could be produced at the same rate as entanglement/correlation in our quantum computer.”
(And then he explains why he thinks that we can deal with shadow qubits.)

The way I see it, Aram’s interesting shadow-qubit story is precisely an example of my claim of how a systematic relation between the state created by the computation and the noise can occur. Doesn’t this example suffice to put to rest the claim that such a relation is necessarily in conflict with the linearity of QM?

Also, Aram, are you claiming that the shadow-qubit story is the only way that may lead to a systematic relation between the state and the noise?
• February 19, 2012 8:17 pm
“Aram, are you claiming that the shadow qubit story is the only way, that may lead to a systematic relation between the state and the noise?”
yes.
I think the original conjecture needs to be modified in this sort of way to make it possible. Linear QM implies that there is nothing intrinsic to entanglement that can enhance/change noise, so any such effects must be extrinsic, i.e. must result from other systems, which could be called shadow qubits.
The problem is that once you make this modification, or make the conjecture more concrete in any way, it becomes less plausible, because the types of shadow qubits we see in actual implementations are totally different in NMR than in ion traps than in optical experiments. So it’s hard to imagine what a unified law of correlated decoherence could look like.
9. February 19, 2012 8:10 pm
Here’s a thought on a route to keep a quantum state in coherence indefinitely, using quantum teleportation. In the standard scheme, one has an unknown quantum state (A) and wants to transfer that state to another location. So one creates an entangled pair, and one member of the pair interacts with A. A is then measured and destroyed, and the measurement information is transmitted over a classical channel to B. B combines the second member of the entangled pair with the classically transmitted information and takes on the unknown state of the original A.
Now B by itself doesn’t want to sit around too long because of decoherence, so what does one do? Well, one can teleport it back to its original location (which we will now call C). As long as one can reasonably protect the entangled particles en route, classical fault-protection protocols can be used to protect the information in the classical channel. The quantum information is “hidden” from the environment during the transfer process.
One can think of this along the lines of creating an oscillating quantum state that moves back and forth from point to point.
This might protect the quantum state, but the question is whether it can be used for any meaningful calculation.
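As a sanity check on the first leg of this scheme, here is a minimal statevector simulation of one teleportation round (numpy only, noiseless; it shows the state surviving the transfer, not the repeated bouncing Hal proposes):

```python
import numpy as np
rng = np.random.default_rng(0)

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])
Hd = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Random qubit to teleport (qubit 1) plus a shared Bell pair (qubits 2, 3).
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(psi, bell)

state = kron(CNOT, I2) @ state        # Alice: CNOT from qubit 1 to qubit 2,
state = kron(Hd, I2, I2) @ state      # then Hadamard on qubit 1.

# Measure qubits 1 and 2, drawing outcome (a, b) with the Born rule.
probs = np.array([np.linalg.norm(state[2*k:2*k+2])**2 for k in range(4)])
k = rng.choice(4, p=probs / probs.sum())
a, b = divmod(int(k), 2)
bob = state[2*k:2*k+2] / np.linalg.norm(state[2*k:2*k+2])

# Bob's classical correction, then fidelity with the original state.
bob = np.linalg.matrix_power(Z, a) @ np.linalg.matrix_power(X, b) @ bob
print(np.abs(np.vdot(bob, psi))**2)   # -> 1.0
```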
• February 19, 2012 8:25 pm
This sort of technique is a part of many fault-tolerant schemes, but isn’t enough by itself. If a qubit starts to decohere, then teleporting it won’t slow down that decoherence at all. In fact, it’ll even result in the qubit being affected by any noise that had affected the entangled pairs used for the teleportation.
• February 19, 2012 10:35 pm
Hmm… I am thinking that one has to look at this in terms of probabilities. Conjecture 3 says that errors involving entangled qubits will be correlated. Assuming that’s the case, one then has to ask what the sources of errors are. We say it’s the environment. OK, so is it reasonable that we can say something about the statistical distribution of measurements performed by the environment? If we can, then can we account for those errors over the course of several runs of the computer program?
So for instance, using the teleportation scheme outlined above, we know that there is some amount of decoherence that the system will undergo within a period of time, and we know that there is some amount of noise that might affect entangled pairs. So I run the program and get some result that I know is potentially wrong. So I run the program again, and get another answer, etc. (like running multiple stochastic simulations). Shouldn’t the distribution of results take on some of the characteristics of the noise in the environment? Suppose the noise is Gaussian in nature; shouldn’t my result be Gaussian as well?
This might not be the ideal setup for some applications, but since the randomness is purer than what one finds in a classical machine, it might be useful in certain applications.
• February 21, 2012 10:19 am
Hi Hal, one of the interesting features of quantum noise, and, in particular, the noise anticipated by my conjectures, is that it does not “average away” by repetitions. (This is indeed different from various cases of classical noise, where highly structured noise will be averaged away by repetition; this is also related to Klas’s question.) So if you look at Conjecture 1, which anticipates a cloud of codewords around the intended codeword: when you average you will just increase that cloud. Conjecture 3 talks about positive correlation for information leaks for two entangled qubits, and again this property will remain under averaging.
There is another aspect of repeating the same experiment many times, and this is post-selection, which is actually used in various quantum fault-tolerance schemes. I expect that if you achieve low error rates by massive post-selection then you will typically get a noise respecting Conjecture 3. Even if you eliminate pairwise correlations by even more massive post-selection, I expect that you will still get error-synchronization because of higher-order correlations.
• aram
February 21, 2012 11:35 am
Gil, can you explain this a little more?
“One of the interesting features of quantum noise, and, in particular, the noise anticipated by my conjectures, is that it does not “average away” by repetitions. (This is indeed different from various cases of classical noise, where highly structured noise will be averaged away by repetition;”
It sounds like you’re talking about properties of error-correcting codes rather than noise; is that right?
Also, there are quantum codes that are fairly repetition-like, like the Shor code (which is basically like two 1 to sqrt(n) repetition codes concatenated with each other), and the toric/surface code (which is obtained by tiling a local pattern over a 2-d region).
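For concreteness, the nine-qubit Shor code mentioned above has logical codewords

$|0_L\rangle = \tfrac{1}{2\sqrt{2}}\,(|000\rangle + |111\rangle)^{\otimes 3}, \qquad |1_L\rangle = \tfrac{1}{2\sqrt{2}}\,(|000\rangle - |111\rangle)^{\otimes 3}:$

an inner 3-qubit repetition code protects against bit flips, and the outer repetition across the three blocks protects against phase flips.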
10. February 19, 2012 10:35 pm
I am really trying hard to understand why Aram regards the shadow qubit description as the only way that may lead to correlated errors
of the kind I am describing.
Maybe it, and the issue of linearity raised here, are related to the following point of John Preskill from the first post. I do hope to get to John’s full comment and manuscript a bit later, but let me just relate to this part:
JP: “Gil says that while he is skeptical of quantum computing he is not skeptical of quantum mechanics. Therefore, I presume he takes for granted that a quantum computer (the “system”) and its environment (the “bath”) can be described by some Hamiltonian, and that the joint evolution of the system and bath is determined by solving the time-dependent Schroedinger equation.”
Yes, of course!!
(continued) “If Gil’s conjectures are true, then, no matter how we engineer the system, the Hamiltonian and/or the initial quantum state of the joint system must impose noise correlations that overwhelm fault-tolerant quantum protocols as the system scales up.”
Right. As a matter of fact, I even present a class of time-dependent evolutions (a certain subclass of all time-dependent evolutions) which are meant to describe the kind of noise I am talking about.
Maybe thinking about evolutions involving the computer’s qubits and the environment is what is behind Aram’s comment that the “shadow qubits” scenario is the only scenario that may support my conjectures. But this is unclear to me. (On my side, while I am committed to the relevant mathematics, including, of course, the linearity of QM, I don’t always subscribe to all the intuitions and mental images that may come with it. Those can be misleading. In particular the issue “what is the environment?” for controlled quantum evolutions is interesting.)
• aram
February 21, 2012 12:02 pm
Yes, my “shadow qubits” are part of what John calls the bath, or environment.
You said ” I even present a class of time-dependent evolutions (a certain subclass of all time-dependent evolutions) “.
Are you talking about the one where qubits fail in a way that has large pairwise correlation? Because what you’ve described isn’t a full description of the noise, but merely some properties of it: namely, its one- and two-qubit marginals. I think that once you work it out, this model either has small enough errors to be qualitatively like uncorrelated noise, or is equivalent to the “trip over the power cord” model. In either case, I don’t see how it would separate the feasibility of classical and quantum computing.
There is one tricky part of my argument, which emerges even when you have i.i.d. noise, let’s say at rate p. (I’m not sure about exact numbers in what follows, and they depend in any case on the exact model.)
If p < 1/100 then we can prove that FTQC is possible.
If p > 2/3 then we can prove that FTQC is impossible.
If p > 3/4 then we can prove that FT classical computing is impossible.
This leaves open the (likely) possibility that for some values of p we can do FT classical computing but not FTQC. Even more intriguingly, we might have complexity classes intermediate between BPP and BQP.
But it’s pretty farfetched to imagine that noise *always* must be in this range. After all, people have pushed noise far below the threshold in systems where the only thing stopping quantum computing is the difficulty of putting many qubits in the same place, such as liquid NMR, where qubits are nuclei and getting more qubits means bigger molecules, or in superconducting circuits, where running more control lines into the circuit introduces more noise.
• February 24, 2012 7:29 am
“Are you talking about the one where qubits fail in a way that has large pairwise correlation? Because what you’ve described isn’t a full description of the noise, but merely some properties of it.”
No, my paper contains a description of a subclass of all time-dependent noisy evolutions, which I refer to as “smoothed Lindblad evolutions,” and which I propose as a model of noisy non-fault-tolerant QC.
11. February 20, 2012 11:58 am
One more interesting point from Chris Moore’s first remark: “Building a quantum computer will certainly require huge advances in our ability to control quantum states, and shield them from unwanted interactions. But we have built exotic states of matter before (e.g. semiconductors) and I see no fundamental reason why we can’t make these advances.”
The purpose of my conjectures is to describe fundamental differences between what we can do without quantum fault tolerance and what we can do with quantum fault tolerance. The conjectures are not aimed at giving a “proof” that quantum computers are impossible. I think that we are in some sort of agreement that there are fundamental differences between the type of states we can achieve now and the type of states that we will be able to achieve with universal quantum computers (though giving a technical formulation of this difference is a tricky business), and, as we discussed last time, between classical fault tolerance and quantum fault tolerance. So trying to study such fundamental differences is important and interesting.
Actually, the first version of the post that I had sent to Aram did not include the paragraph about my own beliefs. I only mentioned the conjectures and their weak, medium and strong interpretations. Aram asked me to say in which interpretation I believe and why, which I gladly did. Indeed I tend to believe that universal quantum computers are impossible, and I find that discussing our beliefs and motivations adds value to the debate. However, seeking fundamental differences between the reality without quantum fault tolerance and a reality with quantum fault tolerance is of independent value.
12. February 20, 2012 2:07 pm
*Pairwise positive correlation.*
It looks like we were in agreement regarding the qualitative aspects of pairwise positive correlation of errors, but the quantitative aspects have not been discussed yet.
Regarding our sub-discussion on pairwise correlation, Aram wrote: “Well, certainly if you make the correlations large enough then you eventually get the “trip over the power cord” model, in which all bits fail simultaneously with a non-negligible probability.
Maybe the best way to make this concrete is with the Curie-Weiss model, or something else that produces distributions of the form P(x_1, .., x_n) = e^{-f(x)} for f a low-degree polynomial of the bits. These definitely exhibit phase transitions.
But for sufficiently low constant values of the coefficients of these polynomials, things would still be ok for a fault-tolerant quantum computer.”
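(For readers unfamiliar with it, the Curie-Weiss measure on spins $x_i \in \{-1,+1\}$ is $P(x_1,\dots,x_n) = \frac{1}{Z}\exp\big(\frac{\beta}{n}\sum_{i<j} x_i x_j\big)$, a degree-2 instance of the family Aram describes; in this normalization it exhibits a phase transition at $\beta = 1$.)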
We are left with the quantitative issue of what amount of pairwise positive correlation will be harmful to FTQC. My thinking about it is as follows: Suppose that for a certain architecture the threshold is 0.001, and suppose that the error rate for each of the qubits is 0.0001, so it is comfortably below the threshold. Suppose that the number n of qubits is large. It looks to me (by rather naive calculations; I did not apply the Curie-Weiss model) that if the pairwise correlation between every pair of qubits is 0.01, then this will already be damaging for FTQC. In fact, it looks like the required “threshold” for the pairwise correlation is similar to the threshold for the error rate itself. With pairwise correlation of 0.01 you do not get precisely the “trip over the power cord” effect, but you get a substantial probability that at least 1/100 of the qubits will be faulty.
Here I assumed that the error rate will remain 0.0001 and discussed only the error-synchronization effect. As I already mentioned, the most devastating aspect of correlated errors is that the error-rate itself (in terms of qubit errors) scales up.
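To make the naive calculation concrete, here is a minimal Monte Carlo sketch (in Python); the exchangeable two-level model in it is just the simplest construction I could think of that realizes these marginals, and it is not offered as a physical noise model:

    import numpy as np

    # Simplest exchangeable model with marginal error rate p = s*a = 1e-4
    # and pairwise correlation close to a = 0.01: with probability s a latent
    # "bad event" fires, and every qubit then fails independently at rate a.
    rng = np.random.default_rng(0)
    n, s, a, runs = 10_000, 0.01, 0.01, 200_000
    bad = rng.random(runs) < s
    fails = np.where(bad, rng.binomial(n, a, runs), 0)

    print(fails.mean() / n)            # marginal qubit error rate, about 1e-4
    print((fails >= n // 100).mean())  # P(at least 1% of qubits fail), about 0.005

So even with a single-qubit error rate of 0.0001, roughly once in every 200 rounds about one percent of all the qubits fail together, which is exactly the error-synchronization effect described above.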
• aram
February 21, 2012 12:08 pm
Gil writes:
“We are left with the quantitative issue of what amount of pairwise positive correlation will be harmful to FTQC. My thinking about it is as follows: Suppose that for a certain architecture the threshold is 0.001, and suppose that the error rate for each of the qubits is 0.0001, so it is comfortably below the threshold. Suppose that the number n of qubits is large. It looks to me (by rather naive calculations; I did not apply the Curie-Weiss model) that if the pairwise correlation between every pair of qubits is 0.01, then this will already be damaging for FTQC. In fact, it looks like the required “threshold” for the pairwise correlation is similar to the threshold for the error rate itself. With pairwise correlation of 0.01 you do not get precisely the “trip over the power cord” effect, but you get a substantial probability that at least 1/100 of the qubits will be faulty.”
Such noise is fine for FTQC. Well, 1/100 is right at the edge, but if you said “In every time step a random set of n/200 qubits are randomized” then codes like the surface code (see http://arxiv.org/abs/0803.0272 ) would have no problem. (And this is worse than what you are saying, which is more like “In every 100 time steps, n/100 qubits are randomized.”)
More generally, if your pairwise correlation model meant that a random set of n/200 qubits were randomized in each time step (or one in every 100 time steps), then this could be treated the same as i.i.d. noise at the rate of 1/200. Yes, perhaps this is a higher rate than the single-qubit error rate, but that just says that the single-qubit error rate is not the relevant parameter.
• February 21, 2012 2:39 pm
Aram,
I am just referring to a small fragment of the discussion regarding the effect of pairwise positive correlation. (This was point 3 on my list, and it referred to your claim that 2-qubit correlations and even O(1)-qubit correlations are not enough to cause FTQC to fail.)
My intuition is that if you do not assume independence then you have to impose on the pairwise correlation a threshold similar to the threshold for the error rate itself. So even if the rate is 1/10000, a pairwise correlation of 0.01 will be roughly as damaging as a 1/100 error rate, and a pairwise correlation of 0.1 will be roughly as damaging as a 1/10 error rate. *In a case* where the required threshold is 1/1000, even a pairwise correlation of 0.01 between every pair of qubits will be damaging.
• February 21, 2012 4:04 pm
Gil,
In the example I gave there is complete correlation between the errors, not just 0.01: either no qubits fail or all fail, and the errors are still easy to correct. In that example an error rate of 0.5 for the individual qubits is perfectly fine together with complete error correlations.
This is why I claimed that more information than just the size of the correlations and the error rates must be given in order to find something truly problematic.
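To spell the example out, here is a classical toy version, assuming, as above, that the noise structure is known exactly:

    import numpy as np

    # Fully correlated noise (all bits flip together with some probability)
    # is trivially correctable once we store one reference bit known to be 0.
    rng = np.random.default_rng(1)
    data = rng.integers(0, 2, size=1000)
    stored = np.concatenate(([0], data))     # reference bit + payload

    if rng.random() < 0.5:                   # even an error rate of 0.5 is fine
        stored = 1 - stored                  # the correlated "all flip" event

    if stored[0] == 1:                       # reference flipped: undo globally
        stored = 1 - stored

    print(np.array_equal(stored[1:], data))  # True: recovery always succeeds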
13. February 21, 2012 11:11 am
Again on the general theme of renormalization effects, the (excellent) recent article by Hui Khoon Ng and John Preskill “Fault-tolerant quantum computation versus Gaussian noise” (arXiv:0810.4953) suggests (in effect) a concrete path toward proving a theorem of Kalai-Alicki type, in the following passages:
Each qubit has a time-independent coupling to its own independent oscillator bath (though admittedly these are dubious assumptions when multi-qubit gates are executed) […] It might be interesting to see if further technical assumptions (which one would hope to justify a posteriori) about the system-bath state ${|\boldsymbol{\psi}_{\text{SB}}(t)\rangle}$ during the course of the computation would lead to a less ultraviolet-sensitive threshold condition, but we have not yet succeeded in finding useful results with this character.
Here we might hope to construct a Kalai-Alicki theorem along the lines of
Conjecture (Shared Ohmic baths obstruct fault-tolerant quantum computations) For multiple qubits simultaneously coupled to a single zero-temperature Ohmic bath of (possibly non)-independent oscillators, fault-tolerant quantum computation methods generically fail in consequence of long-range qubit couplings that are induced by the bath.
Key to this line of inquiry would be the choice of a technical means for regulating the ultraviolet divergences associated to Ohmic couplings. In this regard, a simple regulatory method that has proved effective in our own practical calculations (of spin-entangled nanoscale mechanical oscillators whose dynamics is “dressed” by Casimir interactions) has been to expand “bare” quantum operators as a power series in “dressed” quantum operators (rather than the reverse) and then pass to the Ohmic limit. Conceivably these reverse-renormalization techniques might usefully be extended to the cavity QED regime in which vacuum fluctuations are Ohmic to an excellent approximation.
Here the broad mathematical intuition is that, generically in quantum computing, obstructions and solutions alike are associated to non-perturbative dynamics, and this suggests the concrete physical intuition that Ohmic (nonperturbative) quantum dynamics — as present in thermal baths, sensors, and photon sources — may be associated to a class of quantum errors that is particularly challenging to correct.
• aram
February 21, 2012 11:42 am
John, is there a reference that describes where non-perturbative dynamics apply in a quantum-computing-related setting?
I should also have mentioned that my post was in some ways too much in a defensive crouch. Often correlated errors are *easier* to deal with. Techniques like spin echo, composite pulses and decoherence-free subspaces all exploit knowledge about correlations in noise to correct errors much better than we could hope to do with small QECCs. So it’s a giant leap to go from “errors exhibit long-range correlation” to “these make FTQC impossible.” Do you have a model of long-range-correlated errors that looks like it would prevent FTQC?
• February 22, 2012 10:05 am
Just to be clear, our result applies in a more general setting than the case where each qubit has a time-independent coupling to its own independent oscillator bath, though we considered that special case as an illustration of the more general result. It is okay for the qubits to share a common bath. What we require is that each qubit couples to a bath variable obeying Gaussian statistics, so that the higher-point correlation functions of the bath are all determined by the two-point correlation function.
Our condition for scalability is that the modulus of this two-point function, when we sum one of the points over all system qubits and integrate that point over all times during the execution of the quantum circuit, is sufficiently small. If the bath two-point correlation function decays too slowly in space or in time this sum/integral may not converge, so in this setting one sees explicitly how our argument breaks down if the noise is too strongly correlated.
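Schematically (the paper has the precise statement), the requirement is that

$\sum_{x'} \int_0^T dt'\, \big|\Delta(x,t;x',t')\big| \;\le\; \varepsilon$

uniformly in $(x,t)$, where $\Delta$ is the bath two-point correlation function, the sum runs over all system qubits, $T$ is the duration of the circuit, and $\varepsilon$ is a sufficiently small constant.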
The argument is rigorous in the context of a Gaussian noise model, but as John Sidles remarks, ultraviolet divergences can be an issue; for Ohmic noise in particular the scalability criterion depends on the ultraviolet cutoff. I don’t see a direct connection, though, between that ultraviolet problem and the long-range correlations referenced in John’s conjecture formulated above.
• February 22, 2012 10:56 am
John, please let me express appreciation and thanks (that many GLL readers share) for the wonderful constellation of articles that you and your colleagues have produced regarding quantum computing.
For me, one illumination in particular came when reading “Fault-tolerant quantum computation with long-range correlated noise” (quant-ph/0510231) in appreciating that the noise model, in accommodating pairwise mutual qubit interactions with a bath, included as a special case the general pairwise mutual interaction of qubits with a trivial (identity) bath interaction (so that the bath is a passive spectator).
Then the analysis that you posted here on GLL, titled “Sufficient condition on noise correlations for scalable quantum computing,” extended this model to multiple interacting qubits, such that the resulting threshold theorem formally encompasses the multiple/mutual qubit interactions that concern Robert Alicki and Gil Kalai, insofar as those multiple/mutual qubit interactions are sufficiently weak, localized, and perturbative.
It’s not easy to step outside the broad scope of these beautiful and powerful theorems, and simultaneously retain any realistic expectation of obtaining concrete results. The point of my post above was that the Ohmic noise model of Ford, Lewis and O’Connell possibly offers one such avenue, via a strategy of investigating within their tractable Langevin noise model the nature of the quantum entanglement that is associated to the simultaneous interaction of two (or more) qubits with a shared Ohmic bath.
Would this approach yield any useful insights? Don’t ask me! Yet each of the above articles is worth reading in its own right, and perhaps some GLL reader will perceive (more clearly than me) the possibilities for uniting their physical insights and proof technologies.
14. February 21, 2012 12:41 pm
Aram, there is considerable literature on the “independent oscillator” bath model that Ng and Preskill use, and our QSE group is particularly fond of an old-but-good review by Ford, Lewis, and O’Connell, “Quantum Langevin equation” (Phys Rev A, 1988), which analyzes issues associated to ultraviolet divergences in depth. Our own reverse-renormalization application of this formalism is in “The Classical and Quantum Theory of Thermal Magnetic Noise: with Applications in Spintronics and Quantum Microscopy” (Proc IEEE, 2003; see Appendix III.C, “Oscillator renormalization”). The resulting theory works well (for example) in statistically characterizing the quantum dynamics of the continuous experimental observation of one qubit by an Ohmic-damped mechanical oscillator.
As for the (tougher) coupling and observing of multiple qubits, that is a topic that our UW seminar Natural Elements Of Separatory Transport (NEST) will cover starting April 17. Although the NEST Seminar will emphasize polarization transport dynamics rather than computational dynamics, the seminar’s point-of-view is that the former may usefully be regarded as the large-$N$ high-temperature limit of the latter, such that the main math-and-physics intuitions and techniques are the same. All are welcome, needless to say!
15. February 21, 2012 6:03 pm
Joe’s point regarding blind computation.
Let me discuss in some detail Joe’s important point regarding blind computation. Joe’s remark started with a quotation from my response to Aram followed by a refutation:
” (Quoting me) ‘Imagine a quantum mechanical world where in order to prepare or emulate a quantum state you need to create a special machine. Each state requires a different kind of machine……This is the world we live in.’
(Joe’s refutation) Such a belief would be incorrect, since we know ways to avoid there being enough information present in the (quantum) instructions the device receives to be able to determine virtually anything about the state it has produced.”
Joe’s reference to blind computation has bearing on other aspects of my argument. An important aspect of all my conjectures is the assertion that there can be (and there is) a systematic relation between the state we create and the noise. Would such “blind computation” contradict my claim? As a matter of fact, isn’t it the case that such blind computation leads to quantum computing without entanglement? (We also had a specific discussion of whether a certain version of this blind-computation argument leads to a loophole in the formulation of my Conjecture 3.)
First, let me repeat my remark that in today’s world or, if you wish, in the pre-quantum-computers era, we need carefully designed special purpose machines to create different types of quantum evolutions and quantum states. It is not clear if the correct way to think about universal quantum computers is as analogs of our digital computers, or, perhaps, as universal machines for creating quantum states analogous to a very sophisticated model of a James Bond car that can simultaneously serve in many different functions.
When Joe says “we know ways to avoid…” he is correct, but these ways assume that we have a quantum computer at our disposal to start with.
Let me make two comments. My first comment is that it does not seem reasonable that the theoretical possibility of making “blind computation” is really relevant to the reality of emulating quantum states that I was referring to.
Here is an example: In the previous thread Aram challenged me to present a physical situation where my conjectures hold. I proposed the prediction given by Conjecture 1 for Bose-Einstein states created in the laboratory. Matt asserted that while my prediction is correct, the phenomenon, described already by Anderson, has nothing to do with my Conjecture 1 and has everything to do with the fact that Bose-Einstein states, considered as a code, form a very bad code. Very well. We agreed that Conjecture 1 can be tested when certain abelian and nonabelian anyons are created, something that is anticipated in the next couple of years. If I understood Matt correctly, for these states he does not expect that the prediction given by Conjecture 1 is correct.
Of course, if Conjecture 1′s prediction holds for one such experiment it may still fail for another, so it may take a while to be convinced. In any case, the fact that the experimental procedures that Matt described do not apply quantum fault tolerance makes Conjecture 1 especially appealing in my eyes.
Now, is it reasonable to believe that if the group of researchers preparing these anyons split into two groups, where one group makes the experiment according to some secret instructions of the other group, then this can somehow have an effect on the type of noise these anyons have? Or perhaps that the mere possibility of the scientists splitting into two such groups gives enough evidence that my Conjecture 1 will not be satisfied? This seems to me unlikely.
The second comment is that blind computation talks about a different, more general, model of computation. While in my work the basic noiseless model is of pure evolutions (of course, with noise and FT it is not pure anymore), for the blind computation model the basic noiseless computation is of mixed states. Understanding what is going on and translating matters to this more general model requires some work.
To a large extent this work was done. The main issue which supplied the motivation for the research I will now mention is that you can have a quantum computation with a quantum computer controlled by a classical computer where at all times the state of the quantum computer is a maximally mixed state. This seems to be a quantum computation without entanglement. This was carefully studied by Dorit Aharonov (with no connection to QC skepticism), and while the paper has not been written yet, there is a talk by Dorit that was given at the Perimeter Institute and can be found on the Internet. When you consider the quantum computer and the classical control device as one large quantum computer (which brings us back to the pure-state model for the intended evolution) you recover the lost entanglement. In particular, it will be manifested, I believe, by emergent entanglement of the qubits in the quantum device.
(Klas, I did not forget you.)
• February 22, 2012 5:34 am
Gil, when we translate the noise models that appear in threshold theorems (both classical and quantum) into geometric language, we see a substantial difference between the canonical assumptions of classical noise and the canonical assumptions of quantum noise.
We specify that our state-spaces (both classical and quantum) are equipped with both a symplectic structure (so that Hamiltonian dynamics is entropy-conserving) and a metric structure (such that thermal noise is entropy-increasing). Then when we integrate stochastic trajectories (both classical and quantum), it is natural to consider the invariance properties of the $\binom{0}{2}$ tensor $\mathcal{N}$ that characterizes the second moments of the noise increment.
Then the following difference becomes apparent:
(A) Classical dynamical simulations commonly specify $\mathcal{N} \propto \text{Id}$, that is, classical simulations commonly have no preferred noise basis.
(B) Quantum dynamical simulations commonly specify that $\mathcal{N}$ has polynomial rank, that is, thermal noise is restricted to a set of preferred basis vectors having only polynomially many dimensions.
It seems (to me) that the non-classical ubiquity of preferred basis vectors in standard QM is intimately bound up with the viability (or not) of the Kalai conjectures. Because if (1) hypothesis (B) is accepted as physically correct, and (2) the noise is made suitably weak by skilled engineering, such that (3) perturbative expansions work, and moreover (4) the state-space metric is assumed flat, then it seems to me that standard theorems of FTQC are rigorously correct.
Conversely, we discern four curmudgeonly grounds for QC skepticism:
(1) Maybe quantum noise ain’t sparse? and/or
(2) Maybe quantum noise ain’t weak? and/or
(3) Maybe quantum noise ain’t perturbative? and/or
(4) Maybe quantum state-spaces ain’t flat?
We become mired in a quantum Groundhog Day when dialogs endlessly repeat: Yes, quantum noise is sparse, but is it sparse enough? Yes, quantum noise can be made weak, but can it be made weak enough? Yes, quantum noise effects can be nonperturbative, but can they be regarded as perturbative for practical purposes? Yes, non-Hilbert state-space geometries are useful for practical computations, but is the state-space of Nature strictly Hilbert?
As Phil Connors (played by Bill Murray) showed us in the wonderful comedy Groundhog Day, the key to escaping Groundhog Day is to embrace the hard questions. That is, in the 30++ years that we have been pursuing quantum computing, have we been asking these four questions in the most enjoyable way? Have we been testing them in the most ingenious way? Can we conceive enjoyable new ways to embrace these four curmudgeonly questions?
• February 22, 2012 7:06 am
As a followup, please let me commend Elizabeth Allman’s delightful Salmon Prize as one example of a non-Groundhog non-Bœotian approach to a broad class of curmudgeonly 21st century questions (in both QM and biology).
• February 22, 2012 6:39 pm
Dear John,
I don’t understand (A) and (B) well enough, so I can only guess what (1)-(4) stand for. The issue of whether the noise level can be made weak enough is intimately related to the issue of correlations. (One way to think about the questions I raise would be: “How do quantum evolutions look above the threshold for quantum fault tolerance?”)
Regarding geometry: my very basic approach is geometry-free. The models/requirements/axioms for decoherence that I am looking at are geometry-free. (I agree that the issue of FTQC may have bearing on geometry, and this was one of the items on my long list that I planned to comment about at some point.)
About your suggestion that quantum state-space isn’t flat: the truth is that I don’t understand the non-flat evolutions you refer to. (And a sadder truth is that probably referring me to papers won’t help, although a face-to-face or very, very self-contained description might.) On the conceptual level, it is not clear to me if what you say is: “non-flat evolutions are a useful way to describe (approximately), in an efficient way, quantum evolutions which really take place on some high-dimensional Hilbert space”. Or perhaps you regard it as a genuine alteration of the QM formulation.
(You may regard the distinction itself as unimportant, but this will also be interesting to know.)
I am also not sufficiently familiar with perturbative methods used to study noisy quantum systems. This is a place where my conjectures may indeed be in some conflict with some standard physical computations. Of course, I will be happy to know if this is the case.
For example, look at Kitaev’s toric code that encodes 2 qubits, or the Kitaev-Preskill 4D code that encodes (to the best of my memory) 6 qubits. My Conjecture 1 says that what we should expect is a mixture of an intended pure state (represented by a codeword) with a cloud of unintended codewords (in addition to standard noise given by excited qubits). If this is indeed in contrast with what perturbation theory says, I will be very interested to know it. (I suppose that both Robert and John P. studied these specific states.) A similar question can be asked about the anyons that Matt and I talked about.
• February 22, 2012 7:00 pm
Dear Gil — all of your remarks are very good, and in fact, my own remarks are based largely upon your posts (as I understand them).
Regarding (A) and (B), the intuition is direct and simple: in classical simulations it makes physical sense (and is very common) to increment the position and/or momentum of a particle, or the direction of a classical spin, by a small amount in some randomly chosen direction. But when we model quantum noise, we *never* increment the state by a small randomly chosen state. Heck, we can’t even write down such a random state in any tractably short representation. For this pragmatic reason, all numerical quantum simulations are “low-noise” in a dimensional sense. Hopefully this low-dimension property of quantum noise expresses a deep truth about nature, since otherwise all error-correction schemes fail.
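One standard way to see this in formulas: a Lindblad master equation

$\dot\rho = -i[H,\rho] + \sum_{k=1}^{m}\Big(L_k \rho L_k^\dagger - \tfrac{1}{2}\{L_k^\dagger L_k, \rho\}\Big)$

injects noise only along its $m$ preferred jump operators $L_k$, and in every model that anyone actually simulates, $m$ is polynomial in the system size; the noise increment never points in a “random direction” of the exponentially large state-space.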
Regarding state-space geometry, your post expresses my own views very nicely. After all, wooden-ship sailors understood spherical geometry long before Gauss and Riemann formalized their methods. Similarly, simulationists today understand reasonably well how to do Hamiltonian dynamics on curved state-spaces (both classical and quantum), even though those methods are not (yet) fully formalized for the quantum case.
As for nonperturbative and/or ultraviolet-divergent and/or infrared-divergent noise models, they are located at the fuzzy boundary of my comprehension, and moreover toric/topological codes lie well beyond that boundary. So all I can do is share your interest in them … and hope that other GLL readers will post about them!
• aram harrow
February 22, 2012 6:50 pm
Gil, that Dorit talk looks interesting; thanks for mentioning it!
I’m confused about how BECs could be a counterexample, though. As I see it, they aren’t quite the right format.
For an example of a system satisfying conjecture 1, I assume you want a system that appears to have all the prerequisites for FTQC—individually addressable qubits, fast control, low single-qubit error rates, etc.—but nevertheless fails to permit effective error correction because of some sort of noise process that is dangerously correlated either with itself, or more likely, the computation.
By contrast, plenty of systems just fail to even come close to meeting the requirements for FTQC. Not only do BECs qualify, but so do ordinary superconductors, and objects like computers, cups of coffee, etc.
But maybe I’m putting words into your mouth.
Can you say what a positive demonstration of conjecture 1 would look like? Is such a thing ever possible, given that it’s formulated as a statement about what is *impossible*?
• February 22, 2012 9:42 pm
Hi Aram,
As you may remember, Conjecture 1 talks about a code which uses n qubits to encode a single qubit. The pure-state codewords form a 2-dimensional Hilbert space inside the huge 2^n-dimensional Hilbert space which corresponds to the individual qubits. My conjecture is about what mixed states will necessarily be created when you create such a code. It asserts that we will necessarily obtain a mixture over a “cloud” of codewords (on top of ordinary noise where each qubit can be excited independently).
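Schematically, the conjectured form is

$\rho \;\approx\; \mathcal{E}\Big(\sum_i p_i\, |\psi_i\rangle\langle\psi_i|\Big),$

where the $|\psi_i\rangle$ are codewords concentrated around the intended codeword and $\mathcal{E}$ denotes ordinary independent qubit noise. (This is a schematic rendering; the formal statement is in my papers.)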
This conjecture and small extensions of it apply to rather general situations. The BEC states can be regarded as a code, although it is not the state of a single qubit that is encoded. The BEC state can be regarded as encoding a state of one atom (say) in a huge number of atoms. The type of noisy BEC states my conjecture (adapted to this case) predicts is a mixture of different BEC states.
The cases I mentioned of anyons of various kinds, toric codes, etc., are actually defined as codes.
“For an example of a system satisfying conjecture 1, I assume you want a system that appears to have all the prerequisites for FTQC—individually addressable qubits, fast control, low single-qubit error rates, etc.—but nevertheless fails to permit effective error correction because of some sort of noise process that is dangerously correlated either with itself, or more likely, the computation.”
No, not necessarily; the conjecture is supposed to be rather general.
There are many possible examples where Conjecture 1 can be tested. In my first post I asked about noisy cluster states and noisy anyons. We can try to think about other examples. The conjecture proposes how noisy states near pure encoded states look, so it has a fairly positive nature.
• aram harrow
February 22, 2012 9:49 pm
But I guess your conjecture has a big \forall quantifier: for all quantum codes, and for all attempts at implementing them. And implicitly this is followed (or maybe preceded) by a \exists quantifier: “there exists a nonnegligible source of noise that exists in Nature, and is not corrected by the code.”
I can give examples of quantum codes and attempts at implementing them for which this conjecture holds, but these are just examples of bad codes, e.g. codes with low distance, or non-fault-tolerant preparation procedures. This is not at all surprising. E.g., BECs are a bad code, since their distance is 1, and the “preparation procedure” of cooling the system is known to put only a constant fraction of atoms into the ground state (albeit with overwhelming probability).
But there’s an unlimited number of bad codes and bad preparation procedures. A positive demonstration of your conjecture must somehow look different.
• February 22, 2012 9:47 pm
In particular, I would regard a noisy BEC state where the noise consists just of independent excitations of the individual atoms as a counterexample.
• February 23, 2012 1:09 pm
Gil, sorry for taking so long to respond. I must admit I don’t understand your counter-argument. My point was that you can use blind computation to prepare one of two states (one entangled, one not) in such a way that the information about whether the state is entangled or not is not local to the entangling system. Blind QC does not allow quantum computing without entanglement, but the reduced density matrix of the system present in the QC is mixed and separable, whereas when the state of the other party is taken into account entanglement is recovered in one of these cases.
You seem to be talking like this is some theoretical oddity which can never be realised in practice, but blind quantum computing *has* now been demonstrated experimentally in 4-qubit systems, with the information leakage bounded via tomography. See for example http://www.sciencemag.org/content/335/6066/303.abstract and the related perspectives.
• February 23, 2012 11:26 pm
Hi Joe, there is no need to be sorry and no hurry whatsoever. I think we discussed the blind computation issue quite a bit (and it is also mentioned in my papers). I tried to express my thoughts about it as well as I could, and perhaps it is time for us to keep thinking about it off-line (and hear what others have to say) and discuss other things. Thanks for your always interesting remarks and participation, which I hope will continue.
16. February 22, 2012 10:32 pm
“I can give examples of quantum codes and attempts at implementing them for which this conjecture holds, but these are just examples of bad codes, e.g. codes with low distance, or non-fault-tolerant preparation procedures.”
Aram, does this mean that we are in agreement in expecting Conjecture 1 to hold when codes are created with non-fault-tolerant preparation procedures?
• aram harrow
February 22, 2012 10:39 pm
“Aram, does this mean that we are in agreement in expecting Conjecture 1 to hold when codes are created with non-fault-tolerant preparation procedures?”
Yeah, I generally agree, since every gate we ever do will have some nonzero constant error rate.
Conversely, I think the conjecture is false because fault-tolerant preparation is, in principle, possible. (Note that there’s no such thing as fault-tolerant encoding, or more precisely, when converting qubits from code A to code B, the process can offer no error-protection greater than what is offered by A or B separately. Thus encoding qubits that are initially unencoded is an inherently non-fault-tolerant procedure. Instead, FTQC prepares encoded |0> states and then does encoded gates on them.)
• February 22, 2012 10:55 pm
“‘Aram, does this mean that we are in agreement in expecting Conjecture 1 to hold when codes are created with non-fault-tolerant preparation procedures?’
Yeah, I generally agree, since every gate we ever do will have some nonzero constant error rate.”
In this case, Aram, would you tend to agree that this makes both topological quantum computing and measurement-based quantum computing nonviable approaches to quantum computing, as they both seem to depend on creating, with “ordinary experimental procedures,” states which are supposed to violate Conjecture 1?
• aram harrow
February 22, 2012 11:06 pm
“In this case, Aram, would you tend to agree that this makes both topological quantum computing and measurement-based quantum computing nonviable approaches to quantum computing, as they both seem to depend on creating, with “ordinary experimental procedures,” states which are supposed to violate Conjecture 1?”
I think I disagree here, although I’m not totally sure what “ordinary experimental procedures” are. FTQC merely relies on the right sequence of ordinary experimental procedures, so the two are not mutually exclusive.
Anyway, neither topological QC nor MBQC (measurement-based QC) is intrinsically fault-tolerant (well, there might exist versions of topological QC that are intrinsically FT, but none are yet known in 3 dimensions). But they are both compatible with fault tolerance. For example, the surface code can be actively error-corrected, and this gives a memory threshold of about 1%. (Here’s a recent paper arguing that this has all the necessary ingredients for FTQC: http://arxiv.org/abs/1111.4022 )
MBQC is nonviable if you do it without any error-correction. But it too can be combined with quantum error correction in a way that allows threshold theorems to be proven. Here’s one reference:
http://arxiv.org/abs/quant-ph/0405134
17. February 23, 2012 8:59 pm
Gil Kalai asks: “On the conceptual level, it is not clear to me if what you say is: ‘Non-flat [quantum dynamical] evolutions are a useful way to describe (approximately), in an efficient way, quantum evolutions which really take place on some high-dimensional Hilbert space.’ Or perhaps you regard it as a genuine alteration of the QM formulation. (You may regard the distinction itself as unimportant, but this will also be interesting to know.)”
Gil, you have asked a very interesting question, and it seems to me that it is perfectly consistent to hold both opinions.
The practical utility of non-flat (that is, Kählerian) quantum state-spaces is evident already: there are tens of thousands of articles that compute quantum dynamics on these state-spaces. Regarding the physical reality of non-flat quantum state-spaces, my survey of the literature — which is posted on the ComplexityTheory StackExchange as an answer to Joshua Herman’s question [what is a] “Physical realization of nonlinear operators for quantum computers“ — suggests to me that (somewhat contrary to a widespread folk belief among experimental physicists) there is at present no very strong experimental evidence either way regarding the flatness of Hilbert space.
Thus, my answer will be supported solely by frail strands of philosophy and hope … which amounts to asking: Which alternative is more aesthetically pleasing, flat versus nonflat quantum state-spaces?
One such hope-driven philosophical answer is this: We appreciate that Nature has made space curved and time relativistic, perhaps so as to economize in creating a universe that is finite in spacetime. It is therefore plausible that Nature has made quantum state-space non-flat, and possibly dynamic in its geometry, so as to similarly economize in creating a universe that is polynomial in dimensionality. Moreover, in both cases we may presume that Nature realizes her economy via wonderful mathematics.
With regard to classical physics, the slow emergence of the wonderful mathematics of general relativity eventually catalyzed dozens of subtle, sensitive, and beautiful experiments (perihelion precession, stellar parallax, Gravity Probe B, the cosmic microwave background, gravitational lensing, etc.), not to mention amazing technologies (GPS!). Similarly in the quantum case, the wonderful mathematics of curved quantum state-spaces is slowly coming into focus, and in coming decades we have every reason to be confident that this mathematics similarly will catalyze dozens of subtle, sensitive, and beautiful experiments, not to mention amazing technologies (spin microscopes!).
By deliberate intent, this answer embodies the fondest hopes of the world’s mathematicians, scientists, engineers, and even medical researchers … and to me, this seems like a good idea!
• February 28, 2012 8:59 pm
“Gil, you have asked a very interesting question, and it seems to me that it is perfectly consistent to hold both opinions.”
Thanks for the answer, John,
If I understand you correctly (but do correct me otherwise), the non-flat quantum dynamical evolutions you mention are accommodated by “ordinary” quantum mechanics, so they cannot in any way falsify QM or show that the QM formulation is not sufficient to describe our physical world. Is a fair description of your view (which includes as perfectly consistent the two perspectives I asked about earlier) that you regard the relation between “ordinary” (flat Hilbert space) QM and the non-flat versions as the relation between machine language and, say, FORTRAN?
(Unfortunately I am not familiar with your suggested theories on the technical level.)
• February 29, 2012 8:45 am
Gil, what I have in mind is less like FORTRAN versus machine code, and more like the difference between arbitrary-precision arithmetic (sometimes called “bignum” arithmetic) and IEEE floating-point arithmetic.
We all know that floating-point arithmetic can be severely criticized on formal mathematical grounds. Heck, floating-point operations are not even associative … formally, “float” operations don’t even form an algebra!
And yet, for most practical purposes floats work just as well as bignums — for example, floating-point operations are approximately associative — and of course floating-point calculations generically are exponentially faster and more economical of storage.
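For example, in Python (whose numbers are IEEE doubles):

    a, b, c = 0.1, 0.2, 0.3
    print((a + b) + c == a + (b + c))  # False: float addition is not associative
    print((a + b) + c, a + (b + c))    # 0.6000000000000001 0.6

and yet the discrepancy is about one part in $10^{16}$, which is why floats serve so well in practice.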
So it is natural to ask: Is there an analog of floating-point arithmetic for quantum simulation, that for most practical purposes yields exponential speedups? The answer is “yes.”
As researchers we are all of us uniquely fortunate that in the late decades of the 20th century the tools of algebraic geometry and geometric dynamics evolved to a point that we can map quantum “bignum” calculations onto quantum “float” calculations, by procedures that can be naturalized, formalized, dualized, localized, and linearized … which are all of them “izes” that provide solid foundations for mathematical narratives!
Therefore, (in my view) the following three reasons for studying “quantum floats” are equally valid, and are wholly compatible, such that we need not privilege any one of them over the other two:
(1) For systems engineers the practical advantages of “quantum floats” are sufficiently great that a working knowledge of them has become essential to the social and technical process of designing and producing any product that presses against the quantum limits to speed, size, sensitivity, and power efficiency;
(2) For mathematicians “quantum floats” are fun and beautiful and are linked to numerous mathematical questions that are open, broad-ranging, and considered deep;
(3) For physicists there is the great and still-unanswered experimental question of whether Nature conducts her calculations via the exact but immensely slow and memory-intensive “quantum bignum” algorithms of Hilbert space, or whether she economizes by computing with infinitesimally imprecise yet exponentially more efficient “quantum floats”!
It is evident that exploring the mathematical nature and the practical applications of “quantum floats,” for any or all of the preceding three reasons, requires that we embrace much the same engineering, math, and science challenges as constructing quantum computers, in service of objectives and narratives that are broader and (in the end) more relevant to the challenges of our 21st century.
This is why I am very grateful to you (Gil) and Aram for your debate, and to Dick and Ken for hosting it!
And that is why (it seems to me) the Harrow-Kalai debate here on GLL is only superficially about the feasibility of quantum computers, because the real debate here on GLL inextricably concerns a broad class of 21st century engineering enterprises that are comparably urgent, and mathematical questions that are comparably wonderful, and scientific challenges that are comparably exciting, to those of any preceding century.
18. February 23, 2012 11:11 pm
Let me remark on Conjecture 1 which is at the heart of the matter, and was the center of my recent exchange with Aram. As a reminder, Conjecture 1 asserts that a noisy encoded qubit is a mixture of a “cloud” of codewords around the intended encoded state (in addition to some standard noise). This conjecture has wide applicability and it can be extended to cases where we talk about more complicated “qudits” (like BEC states).
Aram wrote: “I can give examples of quantum codes and attempts at implementing them for which this conjecture holds, but these are just examples of bad codes, e.g. codes with low distance, or non-fault-tolerant preparation procedures. This is not at all surprising.”
I agree that this phenomenon is not surprising, and the conjecture is that this phenomenon always holds. A weaker conjecture is that this phenomenon holds for quantum devices which do not apply quantum fault tolerance based on a qubits-and-gates architecture. The weaker conjecture is already in contrast with some researchers’ hopes for anyonic systems that are expected to be built in the next few years.
Conjecture 1 is relevant to topological quantum computing and measurement-based quantum computing as ideas to shortcut (or replace) the standard qubits and gates models. (Of course, these forms of quantum computing are important also in the framework of qubits and gates computers.) Such “shortcut” ideas are based on preparing certain complicated states, not by an ordinary quantum computer, but by special purpose experimental processes, which by themselves do not involve fault tolerance protocols. The concern is that the resulting noisy states will satisfy Conjecture 1 and will not enable universal quantum computing.
19. February 24, 2012 8:02 am
Gil, also relevant to conjecture 1 is Dick’s statement:
The basic interactions in physics involve two or three particles, for instance when a photon bounces off an electron. … To get interactions with more particles, you need to go to higher levels of perturbation theory, but as you do so, amplitudes go down exponentially.
Here emphasis has been added to the assertion that in higher orders of perturbation theory “amplitudes go down exponentially.” Generically speaking, Dick’s assertion is simply not true, and it would be more fair to say: “In higher orders of quantum perturbation theory amplitudes generically increase exponentially.”
It is striking that noise models for which FTQC is feasible typically are associated to microscopic physics for which perturbative expansions diverge; this includes (for example) Ohmic baths, unit-efficiency single-photon sources, and unit-efficiency photon detectors … one wonders whether a more rigorous noise & renormalization analysis would tend to affirm Conjecture 1?
More broadly, one of the virtues of Kalai Conjecture 1 (it seems to me) is that it brings a fresh perspective to a broad class of long-standing open problems in quantum physics. These problems center upon the non-perturbative quantum dynamics that for decades has obstructed our understanding of the vacuum considered as a zero-temperature bath.
E.g., perhaps Conjecture 1 motivates us to conceive the divergences associated to the physical vacuum as Nature’s fundamental mechanism for obstructing the physical universe from violating the (extended) Church-Turing thesis. And from a pragmatic point-of-view, all of the technological roadmaps to practical QC that ever have been proposed, in pushing against quantum limits to spatial localization, thermal isolation, and signal detection, require a firmer grasp of nonperturbative quantum dynamics than we presently possess.
• February 24, 2012 8:50 am
John, it is Aram’s statement, not Dick’s.
• February 24, 2012 10:35 am
You are right about the attribution (and I apologize too for the unmatched quoting). To the extent that there is any accessible textbook on these renormalization/divergence issues, Tony Zee’s well-reviewed Quantum Field Theory in a Nutshell is perhaps that book. And in Zee’s book we read:
I believe that any self-respecting physicist should learn about the history of physics, and the history of quantum field theory is among the most fascinating …[skip 461 pages!] … In all previous revolutions in physics, a formerly cherished concept has to be jettisoned. If we are poised before another conceptual shift, something else might have to go.
Quantum computing research in general, and the Kalai conjectures in particular, are valuable partly for their Zee-style hints regarding “what has to go.” Thank you for posting them, Gil!
• February 24, 2012 3:53 pm
“…is among the most fascinating …[skip 461 pages!] … In all previous” John, one reason for skepticism about “open science” of various kinds, scientific blog discussions, etc., is that we cannot skip these 461 pages if we wish to seriously study the matter.
• February 24, 2012 4:40 pm
Gil, it was never my intention to imply that one should skip the middle pages of Zee’s Quantum Field Theory in a Nutshell … the intent was rather to link Zee’s introductory themes to his concluding themes.
In fact, among many hundreds of field theory texts, Zee’s is acknowledged to stand out for the well-crafted unity of its math-and-physics narrative, such that Zee’s central chapters are those that one is *least* well-advised to skip!
A specific example of particular relevance to quantum computing is Zee’s integrative presentation of the non-perturbative dynamical origins of Anderson localization, which provides a starting point for reading the topical QIT literature like Wootton and Pachos “Bringing order through disorder: Localization of errors in topological quantum memories” (arXiv:1101.5900, 2011) … not to mention 2,824 more arxiv preprints, spanning many disciplines, that similarly discuss Anderson localization.
Does this foretell a blurring of the boundaries between QM/QC/QIT and disciplines like condensed matter physics and elementary particle physics? The Magic Eight Ball seems to answer “signs point to yes“!
• February 25, 2012 9:12 am
John, no critique intended, and your reference to Zee’s book is surely appreciated; I was just reminded how difficult it is to read 461 pages (or even 46 pages) in a text-book, even an excellent one, especially in an unfamiliar area.
21. February 24, 2012 12:57 pm
Klas wrote: “Gil, I don’t think this enough information to be very meaningful. A simple example of a noise which satisfies your description is that with probability 1/10,0000 we flip all qubits simultaneously. This gives completely correlated errors. However it is very easy to correct this particular noise.
As Aram has already pointed out, correlated errors are easy to correct if the noise structure is known.”
Klas, perhaps the short answer is this: we talk about errors which amount to information leaks or measurements. In this context it is a reasonable assumption (especially in the quantum case) that if a large fraction of your memory is erased, e.g. replaced by random data, you will not be able to correct it. So for example, a noise for which, with some small probability, all qubits (or bits) of your computer are replaced by random qubits (bits) is assumed not to be correctable. You are right that (both in the classical and the quantum case) this assumption is not always correct (especially if you put the noise in yourself and keep track of it), but it is a reasonable assumption.
• March 1, 2012 7:10 am
Well, if you replace all qubits by random ones there will certainly be a problem, but it looks to me like even this must be tuned in a nontrivial way in order to stop QC from being efficient.
One of the effects of having highly correlated errors is that if the error rate for the individual qubits is fixed and we increase the correlation, then whenever we have one error we are in fact likely to have many of them, as you have pointed out. However, since the individual error rate is fixed, this also means that most of the time we must have a lot fewer errors, perhaps even none, since otherwise the expected number of errors would contradict the individual error rates, by linearity of expectation.
So if we have no errors most of the time it looks like we could simply repeat the computation and compensate for the errors by taking the majority answer.
22. February 25, 2012 7:01 pm
One thing I am curious about is the predictions (based on perturbation methods, I believe) regarding Kitaev’s 4D model. (I believe that both John Preskill and Robert Alicki, who participated in our discussions, studied this model.) If I understand correctly, the behavior described by perturbative computations suggests that you will reach a pure codeword up to a small level of standard (independent) noise. Is this indeed the case, and what is involved in this computation? My Conjecture 1 (which is supposed to hold in every dimension) asserts that you can only reach a mixture of pure states near a single intended pure state.
• February 25, 2012 7:07 pm
The predictions for the 4D toric code are qualitatively like what you’d see in the 2-D classical Ising model with no applied field. That is, consider bits in a plane subject to dynamics where, at each time step, a bit flips with probability 0.99*eps to equal the value of the majority of its neighbors, and with probability 0.01*eps to equal the opposite value.
The resulting state would by no means be pure, but a bit would still be protected for an amount of time exponential in the size of the system. This is because the errors created in this process would w.h.p. always be correctable.
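To make the toy dynamics above concrete, here is a minimal simulation sketch (my own illustration, not from the original comment; the lattice size, eps = 0.1, and the tie-breaking rule are assumptions):

```python
import numpy as np

def step(bits, eps, rng):
    """One round of the toy dynamics: each bit is set to the majority value
    of its 4 neighbors with probability 0.99*eps, and to the opposite value
    with probability 0.01*eps; otherwise it is left unchanged."""
    # Sum of the four nearest neighbors on a periodic (toroidal) lattice.
    nbr = (np.roll(bits, 1, 0) + np.roll(bits, -1, 0) +
           np.roll(bits, 1, 1) + np.roll(bits, -1, 1))
    majority = (nbr >= 3).astype(int)
    tie = (nbr == 2)
    majority[tie] = bits[tie]          # assumed: ties keep the current value
    u = rng.random(bits.shape)
    out = bits.copy()
    toward = u < 0.99 * eps
    away = (u >= 0.99 * eps) & (u < eps)
    out[toward] = majority[toward]
    out[away] = 1 - majority[away]
    return out

rng = np.random.default_rng(0)
bits = np.ones((64, 64), dtype=int)    # encode a logical "1" as the all-ones state
for _ in range(1000):
    bits = step(bits, eps=0.1, rng=rng)
print("mean bit value after 1000 steps:", bits.mean())
```

At small eps the mean stays pinned near 1 for times exponential in the lattice size: the state is never pure, yet the errors remain sparse enough to be correctable, which is the sense of protection described above.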
• Gil Kalai
February 25, 2012 7:47 pm
Thanks Aram, the question is whether the mixed state will be of the form:
(*) A pure codeword (or something extremely close to it as the size of the system grows) + independent excitations
or as Conjecture 1 predicts:
(**) A substantial mixture of codewords (no matter how large the system is) + independent excitations.
And also, I was asking about the physics calculations that lead to (*).
• February 25, 2012 8:04 pm
I think the references for the thermal stability of the 4-d toric code are
http://arxiv.org/abs/quant-ph/0110143
http://arxiv.org/abs/0811.0033
• February 26, 2012 10:16 am
Aram, thank you for these references! A recent preprint that extends these themes is Bravyi and Haah “Analytic and numerical demonstration of quantum self-correction in the 3D Cubic Code” (arXiv:1112.3252, 2011), which includes numerical calculations that largely affirm the theoretical predictions of the articles you suggested.
Supposing that we seek to reconcile the (wonderful!) results of these articles with the (similarly wonderful!) Kalai Conjectures, how can we proceed concretely?
One avenue is to show that the idealized noise models of the FTQC literature depart substantially and essentially from real-world quantum noise. And here it seems to me that the FTQC literature does not reflect all that much progress during the decade 2001–2011 (that is, arXiv:quant-ph/0110143–arXiv:1112.3252). Bravyi and Haah’s article succinctly states the consensus precedent as of 2011: “[Ohmic noise] was adopted as a model of the thermal dynamics in most of the previous works with a rigorous analysis of quantum self-correction,” and with no further comment Bravyi and Haah adopt the same noise model. Implicit in this precedent is the folk-assumption that a rigorous microscopic theory of quantum noise would yield an effectively Markovian model.
To strengthen the justification for this precedent, the QM/QC/QIT community would benefit from (no doubt difficult, lengthy, and even tedious) analyses that grapple concretely with the tough mathematical issues attendant to FTQC folk-assumptions, with special regard to the divergences of field theory (particularly in regard to near-Ohmic noise in condensed matter physics) and also in regard to the divergences of cavity QED (particularly near-unit-efficiency sensors and photon sources).
To the extent that there already exist such analyses, then hopefully GLL readers will post links to them!
A Kalaist point of view is that in-depth noise-and-detector analyses may yield surprises having fundamental significance for QM/QC/QIT … and of course this possibility is a sufficient justification for attempting them. More broadly, from a systems engineering point of view, the more that we understand about quantum noise and sensing, and the better-integrated our knowledge of these phenomena, and the more widely disseminated that knowledge, the faster and more confidently we can launch the great quantum enterprises of the 21st century.
23. February 27, 2012 1:14 am
As the geniuses proceed in this conversation, a dog under the table would appreciate any diversion in the form of a general description of ohmic noise and the toric code. I know that sounds ignorant, but dissipation describes itself to me in my gut, while this other stuff is pretty heady… i.e., beyond me.
Who is Amir Caldeira? Why do path integrals enter the discussion here? How is the Hilbert Space reduced?
yours truly, etc. etc.
24. February 28, 2012 12:20 pm
Three remarks: 1) Aram wrote: “There are many reasons why quantum computers may never be built,” so let me emphasize that Conjectures 1-4 are offered to represent the nature of decoherence for quantum computers/emulators/evolutions that do not enact quantum fault tolerance, whatever the cause for that may be.
2) A brief explanation of why error synchronization goes hand in hand with the scaling up of the error rate in terms of qubit errors. The relevant way to measure the error rate of quantum computers for the purpose of quantum error correction is in terms of qubit errors per computer cycle. On the other hand, the error rate in terms of trace distance per tiny unit of time is more natural, because this measure remains fixed under unitary evolutions. The trace distance (if I am not mistaken) does not distinguish between changing a single qubit to a maximally mixed state with probability epsilon while leaving the other qubits unchanged, and changing all the qubits to a maximally mixed state with probability epsilon. In terms of qubit errors, the latter case represents an n times higher error rate. When you do not correct errors and let them propagate under complicated unitary evolutions, a single-qubit error will quickly transform into an n-qubit error, so qubit errors will scale up linearly. If your computer leaves a few qubit errors “under the fault tolerance radar,” these errors, propagated, will be devastating.
3) We talked about classical fault tolerance as demonstrated by digital computers, but we did not touch much on the analogy with noise and denoising in classical systems. Aram mentioned the stock market behavior, so let me give one remark based on this example. We can consider two types of errors or noise when describing values of stocks:
1) The errors in transmitting over noisy channels today’s stock market values,
2) The errors in predicting tomorrow’s stock market values.
It is reasonable to conjecture that errors of the second kind will satisfy (even in principle) properties similar to Conjectures 3 and 4. Namely, if the values of two stocks are correlated, then the errors in predicting their values will be positively correlated, and there is a substantial probability of making errors in the prediction of a large number of stocks simultaneously. Of course, this will not be the case for errors of the first kind. Basing the distinction between errors of these two kinds on formal grounds will not be easy.
25. February 28, 2012 6:10 pm
Dear all, I have a secretarial request: please, please continue the discussion about thermodynamics here (and not where it started). Also, please start a new thread so that the comments will not be painfully narrow (in the graphic sense).
26. February 28, 2012 7:05 pm
Dear Gil. I am one of the old-post offenders, and will continue my thermodynamical remarks here. I am drafting a GLL post (hopefully for next week) that will specify a concrete class of Hamiltonian/Lindbladian dynamical models that:
(1) are thermodynamically valid on Hilbert space, and
(2) remain valid upon pullback onto non-Hilbert spaces, and
(3) are mathematically fun and useful in practical engineering, and
(4) come with no warranty that Nature uses them, and even
(5) no warranty that we can feasibly test whether Nature uses them.
To extend Dave Bacon’s perspicacious remarks in the above (older) GLL thread, the engineering analysis of nonequilibrium dynamics is commonly associated to the relaxation of “small-to-moderate” disequilibria, whereas FTQC roadmaps are in some sense ambitious to generate (by gate logic or adiabatic evolution) and sustain indefinitely (by various error-correction strategies) states of “non-classically large” quantum disequilibrium.
How does one naturally distinguish “small-to-moderate” disequilibria from “non-classically large” disequilibria? Don’t ask me! Fortunately, Charles Bennett has a very interesting post on Quantum Pontiff, titled “What increases when a self-organizing system organizes itself? Logical depth to the rescue”, that might suggest informatically natural measures of the quantum disequilibrium that is associated to FTQC.
27. Luca
February 29, 2012 7:14 am
Hi Gil! I have no formal background in quantum computers, quantum mechanics and so on, but I would like to ask: if your conjectures are correct, or if it will be difficult to scale up QC (up to the point that it will not be possible at all) for other reasons, does this render false what some researchers are saying about our universe, namely that the universe itself could be a QC?!
Thx!
• Gil Kalai
March 1, 2012 8:51 am
Hi Luca, I think that something similar will come up in Aram’s next post.
• March 8, 2012 12:35 am
Luca, I am not familiar with those works claiming that the universe itself is a QC, or with where the researchers take this claim. Generally speaking, I don’t think these works refer to computational complexity issues, or to quantum fault tolerance, so I would guess that my conjectures are not relevant to these claims. Specific works where the discussion is based on states that require quantum fault tolerance/quantum error-correction are indeed in tension with my conjectures.
28. March 1, 2012 6:59 am
Aram, the recognition that “correlated [quantum] noise would cast doubt on … the idea of reductionism” surely is a Great Truth. And for its opposing Great Truth we have Tony Zee’s remark in his Quantum Field Theory in a Nutshell:
In all previous revolutions in physics, a formerly cherished concept has to be jettisoned. If we are poised before another conceptual shift, something else might have to go.
Although people put forth various no-go arguments to the effect that the geometry of quantum state-spaces necessarily is Hilbert, these arguments are reminiscent of Kant’s cherished yet utterly mistaken faith that the geometry of Newtonian state-space necessarily is Euclidean. In this regard we have two more celebrated aphorisms to guide our thinking:
The virtue of a logical proof is not that it compels belief but that it suggests doubts. The proof tells us where to concentrate our doubts.
Henry George Forder
Foundations of Euclidean Geometry
———————
My conviction that we cannot thoroughly demonstrate geometry a priori is, if possible, more strongly confirmed than ever. It will take a long time for me to bring myself to the point of working out and making public my very extensive investigations on this subject, and possibly this will not be done during my life, inasmuch as I stand in dread of the clamors of the Boeotians, which would be certain to arise if I should ever give full expression to my views.
Carl Friedrich Gauss
Letter to Friedrich Bessel
With specific regard to thermodynamical issues in quantum computing, we are all familiar with von Neumann’s expression for the entropy of a quantum system. And when we localize and geometrize the von Neumann entropy, with a view toward its natural pullback onto post-Hilbert state-spaces, we encounter logarithmic divergences that strikingly resemble the logarithmic divergences that are generically associated with renormalization in quantum field theory.
Field theory teaches us to rest easy: logarithmic divergences mean that we should regard field theory not as a fundamental theory, but rather as an effective field theory. Therefore, by similar reasoning applied to the thermodynamical aspects of quantum computing, perhaps we should regard Hilbert state-spaces not as fundamental state-spaces, but rather as effective state-spaces.
When we travel this geometrically naturalized post-Hilbert path, it is natural to ask ourselves Tony Zee’s happy question: “What formerly cherished concept has to be jettisoned?” Gil Kalai’s conjectures implicitly suggest to us that spatial localization and thermal isolation are the concepts that perhaps we may learn to fruitfully regard not as absolute, global, and static (in the algebraic sense of Euclid-Kant-Cauchy), but as approximate, local, and dynamic (in the geometric sense of Gauss-Riemann-Einstein).
A great virtue of quantum computing research is that it inspires us to a thorough investigation (both mathematical and experimental) of the Kantian ideals of localization, isolation, and thermalization. And we are not surprised to find that — as with prior Kantian ideals in physics and mathematics and indeed pretty much all else — Nature’s physical realizations are more subtle than was originally conceived.
To state this point starkly, perhaps Nature does not support the ideals of localization, isolation, and thermalization, any more than she supports the ideals of Euclidean geometry. This (for me) is one main implication of the Kalai Conjectures.
Obviously it is neither feasible, nor necessary, nor even desirable, that everyone think alike in these regards. For every algebraist who stands entranced by the crystalline perfection of the Euclidean plane, there is a geometer who, upon “jettisoning that cherished perfection” (in Tony Zee’s phrase) finds that
It was as if I had fled the harsh arid steppes to find myself suddenly transported to a kind of ‘promised land’ of superabundant richness, multiplying out to infinity wherever I placed my hand on it, either to search or to gather …
Alexander Grothendieck
Recoltes et Semailles
It is a hopeful sign that at the beginning of our shared 21st century quantum journey, we are finding interesting new mathematics. Now we stand urgently in need of new classes of quantum experiments, whose meaning is illuminated by this new mathematics, and new classes of applications, that will help create new enterprises, that will address the urgent needs of our 21st century.
All of which is good. And for these inspirations, please accept this appreciation and thanks, Aram and Gil and GLL!
29. Gil Kalai
March 1, 2012 12:17 pm
Some answers to Boaz Barak:
Dear Boaz,
1) Boaz: “I know very little about quantum noise models and error correction, so am using my (probably flawed) intuition from classical computation that in principle one should be able to reduce the noise to an arbitrarily low level which is some function of the budget you have to spend. I guess this is in sharp contrast to your Conjecture 1, and I’m very curious to understand this.”
Several intuitions regarding classical computers should be carefully examined when they are “transferred” to the quantum case. Indeed, Conjecture 1 is in sharp contrast with the intuition that a large “budget” allows one to reduce the error (for a single encoded qubit) to an arbitrarily low level. (See below for four such intuitions.)
2) Boaz: “What’s somewhat still confuses me is the following. You could have simply conjectured that for some fundamental reason, the noise rate will always stay above, say, 45% or whatever the number is for the threshold theorem to fail. But you seem to assume that it would be physically possible to reduce the expected number of errors to an arbitrarily low amount, but somehow correlation will stay high.”
This is a very good question. The purpose of the two-qubit conjecture is to identify the simplest possible distinction (that I could think about) between quantum computers that allow (or enact) fault tolerance and quantum computers that don’t allow (or enact) fault tolerance. The conjecture is not about the entire evolution, or about the behavior in one computer cycle, but about a single “snapshot:” you compare for two qubits the intended pure joint state and the actual noisy state.
I do not assume at all that it will be possible to reduce the expected number of errors to an arbitrarily low amount. Already the standard noise models assume, to the contrary, that the error rate will be above some constant, so two qubits seem necessary to present a “clean” distinction between standard noise models and mine.
As I said already, e.g. in this comment, the conjecture that for very complicated quantum states (highly entangled) the errors will be substantially correlated is closely related to the rate of error scaling up for complicated quantum states.
3) Boaz: “So I guess my question is whether in quantum computing also, if I didn’t want a machine that runs forever, but only one that can handle T computational cycles, shouldn’t I be able to build one by spending f(T) dollars, for some function f()? This is of course a rephrasing of the question I asked Gil before, to which he promised an answer,”
This is also an excellent question, and again I don’t think the intuition from classical computers is correct. This is related to some planned later discussions with Aram but a short reply is that I speculate that for general quantum circuits f(T) will be infinite already for some bounded (even rather small) value of T.
4) Boaz: “I can’t say I fully understand this ‘noisy qubit assumption’ ”
I think I referred to the question that came up in our conversation of whether Conjecture 1 alone suffices to reduce quantum computers to BPP. The question is still not formally described, and it is a rather wild speculation to expect a yes answer.
There were other questions that you asked, Boaz, some related to computational complexity that we may return to.
Finally, let me “try on you” some proposed distinctions between digital computers and quantum computers. The first three we already mentioned.
1) Can an essentially noiseless encoded qubit be constructed?
My conjecture: no, FTQC: yes.
2) Is there a systematic relation between the state and the noise?
My conjecture: yes, FTQC: no.
3) Are there general-purpose quantum emulators, or perhaps every type of quantum state/evolution requires a special machine?
My conjecture: the latter, FTQC: the former.
A fourth question that I plan to discuss later is:
4) Can you hear the shape of a quantum computer?
For digital computers we know (and this looks completely obvious) that we can implement any computer program on part of the computer memory of arbitrary geometry. If universal quantum computers can be built this will hold also for quantum computers. I conjecture that in the quantum case this is no longer true.
• Boaz Barak
March 2, 2012 10:58 am
Thank you Gil for your thoughtful responses, I think I understand things better now. One minor comment regarding your second answer: is it really the case that standard noise models *assume* that the error can’t be brought below a certain absolute constant? Or are they just happy to have the threshold theorem, so that if the threshold were 1/(log n) or maybe even 1/(sqrt(n)) instead of an absolute constant, they wouldn’t say it was physically impossible to realize?
• March 2, 2012 1:39 pm
There are some examples in nature where some forms of noise scale like 1/n. For example, the Mossbauer effect:
http://en.wikipedia.org/wiki/M%C3%B6ssbauer_effect
The idea is that a photon is emitted from one atom in a lattice of n atoms, and the recoil is divided among the entire lattice. This reduces one type of noise (the shift of frequency of the photon caused by the recoil) so it scales like 1/n (or maybe 1/sqrt(n)), and usually n is like 10^20, so this is generally considered negligible.
• March 2, 2012 7:05 pm
Gil writes:
2) Is there a systematic relation between the state and the noise?
My conjecture: yes, FTQC: no.
Actually, in the theory of FTQC with independent noise, one expects there to be a relation between the state and the noise. This statement is a little vague, so let me put it differently: at intermediate times, the difference between the actual computation and the ideal computation will consist of an error pattern that is *not* i.i.d., but has a complicated relationship with the computation.
However, the proofs still work. There are deviations from i.i.d. but everything is rigorously controlled.
One other thing. John Sidles suggests that our inability to tame noise in the lab may be a sign of ugly correlated decoherence. But in many cases the difficulty comes from other things, like addressability (e.g. in optical lattices, or NMR in many cases). This makes it hard to imagine a unified theory of “quantum computers won’t work.”
• March 2, 2012 7:08 pm
To clarify: what I said refers to the theory, when we assume a noise model that is proven correctable.
In general, my belief is that the noise models of nature are generally correctable (once the single-qubit rates are low enough), even in cases where no one has proved it. But this is my own guess, and what I said above about the theory is a more objective fact.
• March 2, 2012 8:00 pm
Aram, it seems (to me) that it is similarly difficult to imagine “a unified theory of ‘how quantum computers won’t work’” as it is to imagine a “a feasible design of ‘how quantum computers will work’”.
In the latter regard we have this week’s preprint by Fujii, Yamamoto, Koashi, and Imoto, titled “A distributed architecture for scalable quantum computation with realistically noisy devices.” (arXiv:1202.6588v1).
In using the word “scalable” in their title, these authors aren’t joking: their reference FTQC factors 1024-bit integers via a processor of $2\times10^{21}$ gates. Thus far in the debate, there hasn’t been much discussion of concrete FTQC design issues, or of how many state-space dimensions an FTQC will really require, so perhaps this article will inspire some.
More broadly, it seems (to me) that a useful lesson of Bill Murray’s Groundhog Day is that it’s a mistake to stay “stuck” on a binary quest to find either:
• “a unified theory of ‘how quantum computers won’t work’”, versus
• “a feasible design of ‘how quantum computers will work’”.
In essence, the QM/QIT/QC community’s best hope for a revitalizing escape from Groundhog Day may be to conceive a third path forward.
• Gil Kalai
March 4, 2012 10:30 am
Aram: “Actually, in the theory of FTQC with independent noise, one expects there to be a relation between the state and the noise.”
Dear Aram, this is correct, but the relation between the state and the noise under FTQC is rather mild and essentially expresses just the last several rounds of computation. So it is possible under FTQC to create, on some subset of all qubits, every feasible state (a state a noiseless computer can reach) up to essentially i.i.d. errors.
30. Gil Kalai
March 3, 2012 2:01 pm
Aram: “..in many cases the difficulty comes from other things, like addressability (e.g. in optical lattices, or NMR in many cases).”
Aram, can you elaborate a little on the addressability problem?
31. Gil Kalai
March 3, 2012 7:00 pm
Dave Bacon made a very interesting comment on thermodynamics, so let me repeat some of what Dave said, and make a few comments of my own.
After a short description of equilibrium thermodynamics Dave’s first assertion was:
“This means that one needs to study thermodynamics of out of equilibrium systems.”
I completely agree with this statement and I don’t think it is in dispute. (I should note, though, that there was an interesting approach by Robert Alicki to apply insights/theorems from the study of equilibrium thermodynamics to the study of meta-stable states.)
“In that case there are metastable as well as perfectly stable classical memories (2D Ising an example of the first and Gacs CA as an example of the second.) For storage of quantum information, it certainly seems like the 4D toric code is an example of the first, and spatially local QECC is an example of the second.”
(Small questions: Dave, what does “perfectly stable” mean in this context? And what do “Gacs CA” and “local QECC” refer to?)
“The thermodynamics and out of equilibrium dynamics of these systems seems well-studied, for specific noise models.”
Ok, this is a place where my conjectures give a different picture for these well-studied systems. And indeed they are based on rejecting the specific noise models that were used in these studies. Actually, I am quite interested in many more cases of well-studied models where my conjectures propose a different answer. I would also like to gather (if not to understand) the different methods that are used, and I am especially curious about the perturbative methods that were mentioned earlier. This is also related to Joe Fitzsimons’s comment and to point 16 in the long list of issues. Joe said
“..The second is the implicit assumption [by Gil] that we don’t know what the noise should look like, and hence are free to conjecture whatever we please about it so long as it allows for classical computation. This, however, is not the case. The fact of the matter is that we know in excruciating detail the physics of how low energy particles interact with one another, and can use this to say a lot about the type of noise that a given quantum computer is subject to.”
It goes without saying that I will be very interested to see places where my conjectures are in confrontation with what can be said about the type of noise that a given quantum computer is subject to based on our detailed knowledge of how low energy particles interact with one another.
Dave wrote
“If you’re going to take a crack at breaking quantum fault tolerance, I think pinning your hope on a more refined thermodynamics is a bit of a longshot and really it would have to be tied to the specific physics of the system. So it seems to me that your more traditional argument, Gil, is more likely to succeed than some magical development in thermodynamics”
Well, my “traditional argument” is what I try to examine and advance, but it looks to me that my specific conjectures might be related to thermodynamics. One possible connection that I was puzzled about, mentioned in remarks (like this one) by John Sidles, is to Onsager’s regression hypothesis. Conjecture 1 says something in the direction that “the noise mimics the signal (and forgets the computational basis)”. Onsager’s hypothesis (which is about out-of-equilibrium dynamics) also says something in the direction that the laws for the noise are the same as the laws for the signal. (Unfortunately, I don’t really understand what Onsager’s hypothesis says.)
32. March 4, 2012 7:52 pm
I would like to suggest an in-depth and essentially optimistic reading to Dave Bacon’s assertion:
“We know in excruciating detail the physics of how low energy particles interact with one another, and can use this to say a lot about the type of noise that a given quantum computer is subject to.”
First to inject a note of humility, please let me recommend notes 18.1 and 18.2 of Howard Carmichael’s 2007 text Statistical Methods in Quantum Optics 2: Nonclassical Fields (page 411ff, see google books), followed by a reading of Hayrynen, Oksanen, and Tulkki’s 2008 preprint Cavity photon counting: ab-initio derivation of the quantum jump superoperators and comparison of the existing models (arXiv:0808.1660v1), followed by a reading of Braungardt, Rodriguez, Glauber, and Lewenstein’s 2011 preprint Particle counting statistics of time and space dependent fields (arXiv:1110.4250v1).
Yes, this is the same Howard Carmichael who conceived the term “quantum unraveling”, and the same Roy Glauber who perceived the relevance of quantum coherent states (and won a Nobel for it).
Dave Bacon has articulated a Great Truth when it comes to the quasi-continua of quantum states that are associated to photon/phonon sources and photon/phonon sinks, and Carmichael and Glauber and their colleagues have articulated its complementary Great Truth:
“The time development of electromagnetic fields in closed cavities under continuous detection of photons continues to be a subject of confusing controversy.”
The GLL debate associated to QM/QIT/QC/FTQC can hardly be resolved until these fundamental problems are better understood. Conversely, we may hope the GLL debate will bring welcome new mathematical tools, theoretical insights, and experimental tests to these long-standing problems in quantum optics.
In this regard, an excellent proving-ground — that has not yet been mentioned in the Harrow/Kalai debate — is the celebrated (justly!) class of optical scattering experiments that has been proposed by Scott Aaronson and Arkhipov. There is a reasonably strong consensus that the following are individually feasible:
• near-unit-efficiency photon sources, and
• near-perfect emission synchrony, and
• near-unitary optical scatterers, and
• near-unit-efficiency photon detectors, and
• nearly independent detector noise.
It is natural to inquire: What are the quantum mechanical obstructions — both practical and fundamental — to achieving these objectives not merely individually, but simultaneously?
More broadly, it hardly seems likely that FTQC can ever be achieved, until we can confidently answer the many hard questions in this class, via a Bacon-esque “excruciatingly detailed” quantum theory (hopefully just one of them) that has been thoroughly validated by careful experiments.
The notorious resistance of physics questions in this class to easy answers, and the immense difficulty of the associated mathematics, and the high caliber of the researchers tackling these questions, and the many precedents of Nature preparing astounding surprises for us, all suggest that (1) these are exceptionally worthy problems to work on, yet (2) we had best not be too confident that solving them will be easy or quick, and (3) the answers we find may not be the ones we expected, and may even be utterly astounding to us.
Summary: A thorough understanding of the quasi-continuum quantum physics of low-energy open-system dynamics is essential to FTQC, and yet there is a broad class of questions associated to it that are both fundamental and unanswered.
As for Onsager-type relations in FTQC, that will have to be a separate post!
• March 5, 2012 8:32 am
As an historical aside, the above quantum optics textbook and articles are … let’s face it … not so easy to read. It is fair to ask, where’s the mathematical naturality? Where’s the simplicity? Where’s the universality? Where’s the beauty?
History offers us the following guidance. In the early 19th century, non-Euclidean geometry was a subject that working-class sailors studied (via Bowditch’s The American practical navigator: an epitome of navigation, for example). It required considerable creative genius on the part of Gauss and Riemann to abstract from Bowditch’s practical calculations the natural, simple, universal, beautiful elements of geometry. And determination was required too; as Gauss wrote to Bessel: “I stand in dread of the clamors of the Boeotians.”
In the twentieth century this pattern repeated itself. As Wolfgang Pauli wrote to Rudolf Peierls (in 1931): “Der Restwiderstand ist ein Dreckeffekt, und im Dreck soll man nicht wühlen” (“The residual resistance is a dirt effect, and one should not wallow in dirt”), and “On semiconductors one should not work; that’s a pigsty (Schweinerei). Who knows if semiconductors exist at all?” Yet needless to say, practical 20th century investigations into the multitude of Dreckeffekts of solid-state physics have uncovered a paradise of natural, simple, universal, beautiful phenomena that were waiting for mathematicians and physicists to discern.
Now in the 21st century, perhaps the great enterprise of building quantum computers can help us discern naturality, simplicity, universality, and beauty, even in the seemingly mundane Dreckeffekts that are associated to the practical physics and engineering of these quantum devices. Almost by definition, the new mathematics and physics of the 21st century will include “burning arrows” that we do not presently anticipate. And so we can reasonably foresee that, as Tony Zee puts it, “formerly cherished concepts will have to be jettisoned” … just as the 19th and 20th centuries successfully jettisoned theirs.
33. March 15, 2012 3:27 pm
No malice intended
If you think about it, a noise which satisfies Conjecture 1 is not malicious. On the contrary, it has properties that are good for running the world smoothly and even for our ability to explore nature. For example, when we study physical phenomena at one scale we can often go a long way even without understanding the smaller scales, and this is related to the fact that the noise has the same large-grain structure as the signal. We have to thank noise accumulation and Conjecture 1 for that.
When we simulate a multiscale phenomenon on a universal quantum computer, the noise will still be based on the computational basis. In nature (fortunately!) things are better, and the noise inherits the same multiscale structure as the “signal”.
http://mathoverflow.net/questions/78050?sort=oldest
determinant of the table of characters
I am certain that the answer to this question exists somewhere. It might be a classical exercise.
Let $G$ be a finite group. Its table of characters is a square matrix, whose rows are indexed by the conjugacy classes and whose columns are indexed by the irreducible characters. It is well defined, up to the order of rows and columns. In particular, its determinant is well-defined up to sign. Let us define $\Delta$ to be the square of this determinant (this is well-defined). Because the characters form a basis of the space of class functions, we know that $\Delta\ne0$. When $G={\mathbb Z}/n{\mathbb Z}$, $\Delta=\pm n^n$.
Is there a closed formula for $\Delta$ for a general group? Is it always an integer?
-
If $G={\bf Z}/n{\bf Z}$ then $\Delta$ is the square of a Vandermonde determinant, and can be computed in closed form, but the answer is not $n^2$ but $\pm n^n$, with the sign depending on $n \bmod 4$ if I did this right. – Noam D. Elkies Oct 13 2011 at 20:04
@Noam. Right. I edit. – Denis Serre Oct 13 2011 at 20:06
@Denis thanks, but you still need a $\pm$ sign (also for the general formula, since the nice argument you give computes only the absolute value $|\Delta|$, not $\Delta$ itself). – Noam D. Elkies Oct 13 2011 at 20:14
I think you should post the answer as an answer. – S. Carnahan♦ Oct 13 2011 at 21:53
One can also ask about determinants of submatrices of the character table, a much more subtle question. See front.math.ucdavis.edu/1110.0818. – Richard Stanley Oct 14 2011 at 0:22
4 Answers
Corrected version
I believe it is always an integer. We can assume that all our representations are over the algebraic closure $\overline{\mathbb Q}$ of $\mathbb Q$. If $\Gamma$ is the absolute Galois group, then clearly $\Gamma$ acts on the characters of $G$ (if you have a representation, then twist it by the action of $\Gamma$). It thus follows that the determinant squared is fixed by $\Gamma$ (since $\Gamma$ permutes the rows) and so is a rational number. But it is also an algebraic integer so it is an integer.
-
According to www.combinatorics.org/Volume_10/PDF/v10i1n3.pdf the answer for the symmetric group is nice. – Benjamin Steinberg Oct 13 2011 at 19:42
Oops, the link was bad. The answer for $S_n$ is the product of all parts of partitions of $n$. I doubt there is a closed form in general. – Benjamin Steinberg Oct 13 2011 at 19:45
I made a correction because I forgot the action of $\Gamma$ permutes the rows and so could change the sign of the determinant (e.g. complex conjugation switches two rows). Thus I changed the answer to saying it is an integer. I don't know if it is a square. – Benjamin Steinberg Oct 13 2011 at 19:58
Actually $\Gamma$ permutes the columns since Denis uses columns as characters. – Benjamin Steinberg Oct 13 2011 at 19:58
If `$A$` is the character table and `$A^\ast$` is its conjugate transpose, then the orthogonality relations tell us that `$A A^\ast = \text{diag}\{|C_G(g)|\} $`, where the entries run over a fixed choice of elements of `$G$`, one from each conjugacy class. Thus `$|\Delta| = \det A A^\ast = \prod |C_G(g)|$` is an integer. On the other hand, `$\Delta$` must be rational. This follows from the fact that the action of `$\text{Gal}(\overline{\mathbb Q}/\mathbb Q)$` permutes the columns of `$A$`, hence fixes `$\Delta = (\det A)^2$`. Thus `$\Delta=\pm |\Delta|$` is an integer.
-
I am not convinced with your identity $A^\ast A = \text{diag}\{|C_G(g)|\}$, unless you consider that the columns are indexed by $G$ instead of the set of conjugacy classes; but then $A$ is no longer a square matrix. See my answer. – Denis Serre Oct 15 2011 at 12:11
Sorry, it should be $AA^\ast$, since your columns are indexed by characters. I'll edit my answer. Anyway, the proof goes as follows. Choose representatives $g_1,\ldots,g_r$ for the conjugacy classes of $G$ and let $\chi_1,\ldots,\chi_r$ be the irreducible characters. Then $A_{ij}=\chi_j(g_i)$ so $A^\ast_{ij}=\overline{\chi_i(g_j)}$ and therefore $(AA^\ast)_{ij}=\sum_k \chi_k(g_i) \overline{\chi_k(g_j)}$. By the orthogonality relations, this sum is equal to $0$ if $i\neq j$ and to $|C_G(g_i)|$ otherwise. – Faisal Oct 15 2011 at 18:39
Hasn't this been discussed at length in the recent Annals paper
An Elementary Exposition of Frobenius's Theory of Group-Characters and Group-Determinants Leonard Eugene Dickson (1902)?
The Determinant the OP discusses is NOT the group determinant per se, but comes up shortly after the definition of the group determinant.
-
Where are the OP's questions discussed in that paper? – Faisal Oct 13 2011 at 23:12
It's 1902, pp. 25--49. – KConrad Oct 14 2011 at 13:33
I found the following answer after posting the question: $$\Delta=\epsilon\prod_c\frac{|G|}{|c|},\qquad\epsilon=(-1)^m,$$ where the product is taken over the conjugacy classes, and $m$ is the number of pairs of complex conjugate irreducible characters.
Proof. On the one hand, the complex conjugate of the table is itself, up to $m$ transpositions of rows. This is because the conjugate of an IC is an IC. Therefore $$\overline{\det(TC)}=\epsilon\det(TC)$$ ($TC$ stands for “table of characters”.) Hence $\det(TC)$ is real if $m$ is even, pure imaginary if $m$ is odd. Hence $\Delta$ is real and its sign is $\epsilon$.
Now the characters form a unitary basis. Because a unitary matrix has a determinant of modulus one, we may compute $|\Delta|$ by taking any unitary basis. Take $\phi_c(g)$ to be $0$ if $g\not\in c$ and $|G|^{1/2}/|c|^{1/2}$ if $g\in c$. In particular $|\Delta|$ is an integer because $$\frac{|G|}{|c|}=|{\mathcal Z}(a)|,\qquad a\in c.$$
Another Proof: Let $D$ be the diagonal matrix whose diagonal entries are the cardinalities of the conjugacy classes. We may assume that the first rows of $TC$ are the real characters and the $2m$ last ones are the pairs of complex conjugate characters. Then the $(i,j)$-entry of $M:=(TC)D(TC)^T$ is $|G|\langle\overline{\chi_i},\chi_j\rangle$. From the orthogonality relations, we see that $M={\rm diag}(1,\ldots,1,J,\ldots,J)$ where $$J=\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$ The number of blocks $J$ is precisely $m$. Now take the determinant; we obtain $\Delta\det D=(-1)^m|G|^r$ where $r\times r$ is the size of $TC$. Hence the formula.
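As a quick numerical sanity check of the formula (my own addition, not part of the original answer): for $G=S_3$ the conjugacy classes have sizes 1, 2, 3, all irreducible characters are real (so $m=0$), and the formula predicts $\Delta = 6\cdot 3\cdot 2 = 36$.

```python
import numpy as np

# Character table of S_3: rows = conjugacy classes (e, 3-cycles, transpositions),
# columns = irreducible characters (trivial, sign, 2-dimensional standard).
TC = np.array([[1,  1,  2],
               [1,  1, -1],
               [1, -1,  0]])
class_sizes = [1, 2, 3]                 # e; two 3-cycles; three transpositions
G = sum(class_sizes)                    # |S_3| = 6
m = 0                                   # number of conjugate character pairs

Delta = round(np.linalg.det(TC)) ** 2   # det(TC) = -6, so Delta = 36
predicted = (-1) ** m * np.prod([G // c for c in class_sizes])
assert Delta == predicted == 36
```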
-
What is $TC$? What is that curly Z with a line through the middle? – Gerry Myerson Oct 14 2011 at 20:46
Gerry, I believe $TC$ is the matrix product of the character table $T$ and its conjugate transpose $C$. And $\mathcal{Z}(a)$ is the centralizer of $a$ in $G$. Also, just in case anyone else is wondering (I was), "IC" = "irreducible character". – Faisal Oct 14 2011 at 21:18
http://mathoverflow.net/questions/49699/coloring-edges-on-a-graph-s-t-the-set-of-edges-for-any-two-vertices-have-no-more/49735
## Coloring edges on a graph s.t. the set of edges for any two vertices have no more than ‘k’ colors in common
Please imagine the case where one has a planar graph, $G$, with a set of $|V|$ vertices, $(v_1, ..., v_{|V|}) \in V$, and $|E|$ edges, $(e_1, ..., e_{|E|}) \in E$. Now, provided a total of $N$ colors, where $N < |E|$ (the number of edges), we seek to assign these colors to the edges of the graph such that:
(1) - The set of edges connected to any vertex contains edges with all unique colors, i.e. no two edges share the same color when attached to the same vertex. This condition establishes my question as a special case of the generally NP-hard edge-coloring problem.
(2) - The intersection, or overlap, between the colors of the edges of any two vertices is, at most, of size $k = 1$ or $k = 2$. This condition must hold true regardless of whether the vertices are adjacent or not. (thanks domotorp!)
What would be the most efficient algorithm for coloring the edges of $G$ provided these constraints? Does the problem become considerably simpler if one tightens the bounds on the size of vertex edge sets?
My approach to the problem thus far has been to assign unique colors to all $|E|$ edges of a graph, i.e. to have $N = |E|$, and then proceed to reduce $N$ using a naive stochastic procedure. It would be great to have an efficient deterministic or semi-deterministic algorithm.
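In case it helps, here is a minimal Python sketch of the feasibility check together with the naive stochastic color-merging pass described above (the data representation, function names, and merge heuristic are my own illustration, not an established algorithm):

```python
import random

def valid(G, coloring, k):
    """G: dict vertex -> list of incident edges; coloring: dict edge -> color.
    Checks (1) edges at each vertex have pairwise distinct colors, and
    (2) any two vertices (adjacent or not) share at most k edge colors."""
    palettes = {v: {coloring[e] for e in edges} for v, edges in G.items()}
    if any(len(palettes[v]) != len(G[v]) for v in G):
        return False                     # condition (1) violated
    vs = list(G)
    return all(len(palettes[u] & palettes[w]) <= k
               for i, u in enumerate(vs) for w in vs[i + 1:])

def reduce_colors(G, edges, k, tries=10_000, seed=0):
    """Start from |E| distinct colors and randomly merge color classes."""
    rng = random.Random(seed)
    coloring = {e: i for i, e in enumerate(edges)}   # valid whenever k >= 1
    for _ in range(tries):
        colors = sorted(set(coloring.values()))
        if len(colors) < 2:
            break
        a, b = rng.sample(colors, 2)
        trial = {e: (a if c == b else c) for e, c in coloring.items()}
        if valid(G, trial, k):
            coloring = trial             # accept the merge, reducing N by one
    return coloring
```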
I appreciate everyone's time!
Clarifications:
• I am allowing the case of $k = 2$ as well as $k = 1$.
• I changed criterion (2) from requiring that the intersection is of size 'k' to explicitly setting $k = 1$ (or $k = 2$), which is the case I am primarily interested in and hopefully better focuses this question.
-
The second condition is a condition on the edges, not a condition on the colorings. Is the graph's edges fixed or not? – Qiaochu Yuan Dec 17 2010 at 3:28
Qiaochu, yes, there is a fixed number of edges |E| (not the greatest notation). |E| > N, where N is the number of available colors. – AfternoonCoffee Dec 17 2010 at 3:38
@AfternoonCoffee: but is the location of the edges fixed? – Qiaochu Yuan Dec 17 2010 at 3:49
Yes, one is provided a fixed graph 'G'. Edges may be colored in any manner (so long as they obey the aforementioned constraints), but they cannot be rearranged. – AfternoonCoffee Dec 17 2010 at 3:59
@AfternoonCoffee: then why is the second condition a condition on the edges, not a condition on their colorings? – Qiaochu Yuan Dec 17 2010 at 4:08
## 2 Answers
If the input number $k$ is very large (say as large as the next-to-maximum degree) then condition (2) has no effect, and this becomes the same as $N$-edge-colouring of a graph, which is NP-complete. (Have you read about this problem? It's NP-complete even for 3-regular graphs and $N=3$.) So if I understand correctly it does not seem that there is any hope of exactly solving this problem with an efficient algorithm. There are lots of results about finding an edge-colouring with approximately the minimum number of colours, classical ones include Vizing's theorem.
My brain can't parse the phrase "...bounds on the size of vertex edge sets..." but maybe in the last question you mean, is the problem easy if $k$ is small enough? In the case of 3-regular graphs with $N=3$, it makes it easier but for a trivial reason: no colouring meeting (1) and (2) is possible for any such graph if $k<3$, since any colouring meeting (1) would have to have all 3 colours appearing adjacent to every vertex.
-
Dear Dave, I'm setting k = 1 to better focus the question on the case I'm particularly interested in where k << N. I'm hoping I can differentiate this problem from the known NP-hard problem of N-edge coloring... – AfternoonCoffee Dec 17 2010 at 16:23
It's probably worth stating that condition (1) defines this as a special case of the edge coloring problem. – AfternoonCoffee Dec 17 2010 at 16:25
"...bounds on the size of vertex edge sets..." - sorry, by this I mean providing tight lower and upper-bounds for connectivity of each vertex in the graph. – AfternoonCoffee Dec 17 2010 at 16:26
Ok, thanks! BTW the term is typically called "the degree of a vertex," connectivity already has a different standard meaning. It doesn't seem obvious to me whether or not this problem is NP-hard when $k=1$, so it is a good and interesting clarification. – Dave Pritchard Dec 17 2010 at 16:43
I edited my answer after the clarification of the question.
Consider the union of two different color classes. This subgraph consists of disjoint edges and (possibly) a path of length 2. This gives the following bound if the paths can have length 2:
$\sum_i {deg(v_i) \choose 2}\le {N\choose 2}$
Of course this is just a necessary and not a sufficient condition, but it might be a good start for an NP-completeness proof.
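In code, this necessary condition yields a quick lower bound on $N$ from the degree sequence (a trivial sketch, with a function name of my own choosing):

```python
from math import comb

def min_colors_lower_bound(degrees):
    """Smallest N with sum_i C(deg(v_i), 2) <= C(N, 2)."""
    s = sum(comb(d, 2) for d in degrees)
    N = 1
    while comb(N, 2) < s:
        N += 1
    return N

print(min_colors_lower_bound([3, 3, 3, 3]))  # K_4: sum = 12, so N >= 6
```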
-
Dear domotorp, condition (2) must hold true regardless of whether the vertices are adjacent or not. Therefore the answer to: "...if there are two adjacent vertices, then can they have another common color apart from the color of their common edge or no?" is no. I hope that simplifies matters, and an NP-completeness result for this problem would be fantastic. – AfternoonCoffee Dec 18 2010 at 9:38
http://wikipedia.sfstate.us/Partition_function_(number_theory)
# Partition (number theory)
Young diagrams associated to the partitions of the positive integers 1 through 8. They are arranged so that images under the reflection about the main diagonal of the square are conjugate partitions.
Partitions of n with biggest addend k
In number theory and combinatorics, a partition of a positive integer n, also called an integer partition, is a way of writing n as a sum of positive integers. Two sums that differ only in the order of their summands are considered to be the same partition; if order matters then the sum becomes a composition. For example, 4 can be partitioned in five distinct ways:
4, 3 + 1, 2 + 2, 2 + 1 + 1, 1 + 1 + 1 + 1.
The order-dependent composition 1 + 3 is the same partition as 3 + 1, while 1 + 2 + 1 and 1 + 1 + 2 are the same partition as 2 + 1 + 1.
A summand in a partition is also called a part. The number of partitions of n is given by the partition function p(n). So p(4) = 5. The notation λ ⊢ n means that λ is a partition of n.
Partitions can be graphically visualized with Young diagrams or Ferrers diagrams. They occur in a number of branches of mathematics and physics, including the study of symmetric polynomials, the symmetric group and in group representation theory in general.
## Examples
The partitions of 4 are:
1. 4
2. 3 + 1
3. 2 + 2
4. 2 + 1 + 1
5. 1 + 1 + 1 + 1.
In some sources partitions are treated as the sequence of summands, rather than as an expression with plus signs. For example, the partition 2 + 1 + 1 might instead be written as the tuple (2, 1, 1) or in the even more compact form (2, 1²), where the superscript indicates the number of repetitions of a term.
## Restricted partitions
A restricted partition is a partition in which the parts are constrained in some way.
For example, we could count partitions which contain only odd numbers. Among the 22 partitions of the number 8, there are 6 that contain only odd parts:
• 7 + 1
• 5 + 3
• 5 + 1 + 1 + 1
• 3 + 3 + 1 + 1
• 3 + 1 + 1 + 1 + 1 + 1
• 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1
Alternatively, we could count partitions in which no number occurs more than once. If we count the partitions of 8 with distinct parts, we also obtain 6:
• 8
• 7 + 1
• 6 + 2
• 5 + 3
• 5 + 2 + 1
• 4 + 3 + 1
For all positive numbers the number of partitions with odd parts equals the number of partitions with distinct parts. This result was proved by Leonhard Euler in 1748 and is a special case of Glaisher's theorem.
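A brute-force check of Euler's theorem is straightforward (a minimal sketch; the enumeration below is the standard recursion on the largest allowed part):

```python
def partitions(n, max_part=None):
    """Yield all partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

for n in range(1, 16):
    odd = sum(1 for p in partitions(n) if all(part % 2 == 1 for part in p))
    distinct = sum(1 for p in partitions(n) if len(set(p)) == len(p))
    assert odd == distinct   # Euler (1748): the two counts agree
```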
Some similar results about restricted partitions can be obtained by the aid of a visual tool, a Ferrers graph (also called Ferrers diagram, since it is not a graph in the graph-theoretical sense, or sometimes Young diagram, alluding to the Young tableau).
Some results concerning restricted partitions are:
• The number of partitions of n in which the greatest part is m is equal to the number of partitions of n into m parts.
• The number of partitions of n in which each part is less than or equal to m is equal to the number of partitions of n into m or fewer parts.
• The number of partitions of n in which all parts are equal is the number of divisors of n.
• The number of partitions of n in which all parts are 1 or 2 (or, equivalently, the number of partitions of n into 1 or 2 parts) is
$\left \lfloor \frac {n}{2}+1 \right \rfloor \, .$
• The number of partitions of n in which all parts are 1, 2 or 3 (or, equivalently, the number of partitions of n into 1, 2 or 3 parts) is the nearest integer to $(n + 3)^2/12$.
## Partition function
In number theory, the partition function p(n) represents the number of possible partitions of a natural number n, which is to say the number of distinct ways of representing n as a sum of natural numbers (with order irrelevant). By convention p(0) = 1, p(n) = 0 for n negative.
The first few values of the partition function are (starting with p(0)=1):
1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42, … (sequence A000041 in OEIS).
The value of p(n) has been computed for large values of n; for example, p(100) = 190,569,292 and p(1000) is approximately $2.4\times10^{31}$.
As of June 2012, the largest known prime number that counts a number of partitions is p(82352631), with 10101 decimal digits.
For every type of restricted partition there is a corresponding function for the number of partitions satisfying the given restriction. An important example is q(n), the number of partitions of n into distinct parts. As noted above, q(n) is also the number of partitions of n into odd parts. The first few values of q(n) are (starting with q(0)=1):
1, 1, 1, 2, 2, 3, 4, 5, 6, 8, 10, … (sequence A000009 in OEIS).
### Intermediate function
One way of getting a handle on the partition function involves an intermediate function p(k, n), which represents the number of partitions of n using only natural numbers ≥ k. The original p(n) is just p(1, n).
For any given value of k, partitions counted by p(k, n) fit into exactly one of the following categories:
1. smallest addend is k
2. smallest addend is strictly greater than k.
The number of partitions meeting the first condition is p(k, n − k). To see this, imagine a list of all the partitions of the number n − k into numbers of size at least k, then imagine appending "+ k" to each partition in the list. The result is exactly the list of partitions counted by p(k, n) whose smallest addend is k.
The number of partitions meeting the second condition is p(k + 1, n) since a partition into parts of at least k that contains no parts of exactly k must have all parts at least k + 1.
Since the two conditions are mutually exclusive, the number of partitions meeting either condition is p(k + 1, n) + p(k, n − k). We thus obtain the recursion:
• p(k, n) = 0 if k > n
• p(k, n) = 1 if k = n
• p(k, n) = p(k+1, n) + p(k, n − k) otherwise.
The above reasoning leads to a formula for the partition function in terms of the intermediate function, namely as:
$p(n) = 1+\sum_{k=1}^{\lfloor \frac{1}{2}n \rfloor} p(k,n-k),$
where $\lfloor x \rfloor$ denotes the floor function.
This deceptively simple function in fact exhibits quite complex behavior.
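A direct transcription of the recursion, memoized so that it runs in time polynomial in n (a small illustrative sketch):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p_at_least(k, n):
    """p(k, n): number of partitions of n into parts >= k."""
    if k > n:
        return 0
    if k == n:
        return 1
    return p_at_least(k + 1, n) + p_at_least(k, n - k)

def p(n):
    """p(n) via the formula above."""
    return 1 + sum(p_at_least(k, n - k) for k in range(1, n // 2 + 1))

print([p(n) for n in range(1, 11)])  # [1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
print(p_at_least(2, 8))              # 7, matching the table below
```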
Sample values: p(1, 4) = 5, p(2, 8) = 7, p(3, 12) = 9, p(4, 16) = 11, p(5, 20) = 13, p(6, 24) = 16.
The values of this function:

| n \ k | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 2 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 3 | 3 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 4 | 5 | 2 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
| 5 | 7 | 2 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
| 6 | 11 | 4 | 2 | 1 | 1 | 1 | 0 | 0 | 0 | 0 |
| 7 | 15 | 4 | 2 | 1 | 1 | 1 | 1 | 0 | 0 | 0 |
| 8 | 22 | 7 | 3 | 2 | 1 | 1 | 1 | 1 | 0 | 0 |
| 9 | 30 | 8 | 4 | 2 | 1 | 1 | 1 | 1 | 1 | 0 |
| 10 | 42 | 12 | 5 | 3 | 2 | 1 | 1 | 1 | 1 | 1 |
### Generating function
The generating function for p(n) is given by:[6]
$\sum_{n=0}^\infty p(n)x^n = \prod_{k=1}^\infty \left(\frac {1}{1-x^k} \right).$
Expanding each term on the right-hand side as a geometric series, we can rewrite it as
$(1 + x + x^2 + x^3 + \cdots)(1 + x^2 + x^4 + x^6 + \cdots)(1 + x^3 + x^6 + x^9 + \cdots) \cdots.$
The $x^n$ term in this product counts the number of ways to write
$n = a_1 + 2a_2 + 3a_3 + \cdots = (1 + 1 + \cdots + 1) + (2 + 2 + \cdots + 2) + (3 + 3 + \cdots + 3) + \cdots,$
where each number i appears $a_i$ times. This is precisely the definition of a partition of n, so our product is the desired generating function. More generally, the generating function for the partitions of n into numbers from a set A can be found by taking only those terms in the product where k is an element of A. This result is due to Euler.
The formulation of Euler's generating function is a special case of a q-Pochhammer symbol and is similar to the product formulation of many modular forms, and specifically the Dedekind eta function.
The denominator of the product is Euler's function and can be written, by the pentagonal number theorem, as
$(1-x)(1-x^2)(1-x^3) \dots = 1 - x - x^2 + x^5 + x^7 - x^{12} - x^{15} + x^{22} + x^{26} + \dots.$
where the exponents of x on the right hand side are the generalized pentagonal numbers; i.e., numbers of the form $\frac{1}{2}m(3m - 1)$, where m is an integer. The signs in the summation alternate as $(-1)^m$. This theorem can be used to derive a recurrence for the partition function:
p(k) = p(k − 1) + p(k − 2) − p(k − 5) − p(k − 7) + p(k − 12) + p(k − 15) − p(k − 22) − ...
where p(0) is taken to equal 1, and p(k) is taken to be zero for negative k.
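The pentagonal-number recurrence gives an efficient bottom-up computation; a sketch (the helper name `partitions_upto` is ours):

```python
def partitions_upto(N):
    """Return the list [p(0), p(1), ..., p(N)] via Euler's recurrence."""
    p = [0] * (N + 1)
    p[0] = 1
    for n in range(1, N + 1):
        total, k = 0, 1
        while True:
            g1 = k * (3 * k - 1) // 2   # generalized pentagonal number for  k
            g2 = k * (3 * k + 1) // 2   # generalized pentagonal number for -k
            if g1 > n:
                break
            sign = 1 if k % 2 else -1   # signs alternate + + - - + + ...
            total += sign * p[n - g1]
            if g2 <= n:
                total += sign * p[n - g2]
            k += 1
        p[n] = total
    return p

p = partitions_upto(1000)
assert p[100] == 190569292              # matches the value quoted earlier
```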
Another way of stating this is that the value of p(n) can be found from the formula[7]
$\begin{matrix} {\rm GPN's} \\ 0 \\1\\2\\~\\~\\5\\~\\ 7\\ ~ \\ ~\\ \vdots \\ ~ \\ ~ \end{matrix} ~~~ p(n) = \begin{vmatrix} ~~1 & -1~ & ~& ~ & ~ &~&~&~ \\ ~~1 & ~1 & -1~ & ~ \\ ~~0 & ~1 & ~1 & -1~ & ~ \\ ~~0 & ~0 & ~1 & ~1 &-1~ & ~ \\ -1 &~0 & ~0 & ~1 & ~1 &-1~ & ~ \\ ~~0 & -1~ & ~0 & ~0 & ~1 & ~1 & -1~ & ~ \\ -1 & ~0& -1~ & ~0 & ~0 & ~1 & ~1 & -1~ &~ \\ ~~0 & -1~ &~0& -1~ & ~0 & ~0 & ~1 & ~1 & -1~ &~ \\ ~~0 & ~0 & -1~ &~0& -1~ & ~0 & ~0 & ~1 & ~1 & ~ \\ ~~ \vdots & ~ & ~ & ~ & ~ & ~ &~ & ~ & ~ & \ddots \\ \end{vmatrix} _{ n \times n} .$
I.e., p(n) is the determinant of the n×n truncation of the infinite-dimensional Toeplitz matrix shown above. The only non-zero diagonals of this matrix are those which start on a row labeled by a generalized pentagonal number $q_m$. (The superdiagonal is taken to start on row "0".) On these diagonals, the matrix element is $(-1)^{m+1}$. This follows from a general formula for the quotients for power series.[8]
The generating function for q(n) (partitions into distinct parts) is given by:[9]
$\sum_{n=0}^\infty q(n)x^n = \prod_{k=1}^\infty (1+x^k) = \prod_{k=1}^\infty \left(\frac {1}{1-x^{2k-1}} \right).$
The second product can be written $\phi(x^2) / \phi(x)$ where $\phi$ is Euler's function; the pentagonal number theorem can be applied to this as well, giving a recurrence for q:[10]
$q(k) = a_k + q(k-1) + q(k-2) - q(k-5) - q(k-7) + q(k-12) + q(k-15) - q(k-22) - \cdots$
where $a_k$ is $(-1)^m$ if $k = 3m^2 - m$ for some integer m, and is 0 otherwise.
The determinant formula for the quotient of power series can be applied to the expression ϕ(x2) / ϕ(x) to produce the expression
$q(n) = \begin{vmatrix} ~1& ~ & ~&~&~&~&~&~&~1~\\ -1& ~1& ~ & ~&~&~&~&~&~0~\\ -1& -1& ~1& ~ & ~&~&~&~&-1~\\ ~0& -1& -1& ~1 & ~ & ~&~&~&~0~\\ ~0 & ~0 & -1& -1&~1 & ~&~&~&-1~\\ ~1& ~0 & ~0& -1& -1&~1&~&~& ~0~\\ ~0 & ~1& ~0 & ~0& -1& -1&~1 & ~ &~0~\\ ~1 & ~0& ~1 & ~0&~0& -1& -1 &~&~0~\\ ~ \vdots & ~&~&~&~&~& ~& \ddots & ~\vdots~ \end{vmatrix}_{(n+1) \times (n+1)} ,$
where the diagonals in the first n columns are constants equal to the coefficients in the power series for ϕ(x) and the last column has values ak given above.
#### Gaussian binomial coefficient
Main article: Gaussian binomial coefficient
The Gaussian binomial coefficient is related to integer partitions. It is defined as:
${k+l \choose l}_q = {k+l \choose k}_q = \frac{\prod^{k+l}_{j=1}(1-q^j)}{\prod^{k}_{j=1}(1-q^j)\prod^{l}_{j=1}(1-q^j)}.$
The number of integer partitions that would fit into a k by l rectangle (when expressed as a Ferrers or Young diagram) is denoted by p(n, k, l). The Gaussian binomial coefficient is related to the generating function of p(n, k, l) by the following equality:
$\sum^{\infty}_{n=0}p(n,k,l)x^n = {k+l \choose l}_x.$
#### Restricted partition generating functions
The generating function can be adapted to describe restricted partitions. For example, the generating function for integer partitions into distinct parts is:[11]
$\prod^{\infty}_{n=1}(1+x^n)$
and the generating function for partitions consisting of particular summands (specified by a set T of natural numbers) is:
$\prod_{t \in T}(1-x^t)^{-1}.$
This can be used to solve change-making problems (where the set T specifies the available coins). Generating functions can be used to prove various identities involving integer partitions quite easily, for example the one mentioned in the Restricted partitions section. The generating function for partitions into odd summands is:[11]
$\prod^{\infty}_{\begin{smallmatrix} n = 1 \\ n \mbox{ odd} \end{smallmatrix}}(1-x^n)^{-1} = \frac{1}{(1-x)(1-x^3)(1-x^5)...} = \frac{(1-x^2)(1-x^4)...}{(1-x)(1-x^2)(1-x^3)(1-x^4)(1-x^5)...}$
$= \frac{(1-x)(1+x)(1-x^2)(1+x^2)...}{(1-x)(1-x^2)(1-x^3)(1-x^4)(1-x^5)...} = (1+x)(1+x^2)(1+x^3)...$
which is the generating function for partitions into distinct summands.
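These manipulations are easy to check numerically by multiplying the series out as coefficient arrays; a sketch (loops and names are ours) that verifies Euler's odd-parts/distinct-parts identity up to n = 20:

```python
def count_partitions_from(parts, N):
    """Coefficients up to x^N of the product over t in parts of 1/(1 - x^t):
    the number of partitions of each n <= N into parts drawn from `parts`."""
    coeff = [0] * (N + 1)
    coeff[0] = 1
    for t in parts:
        for n in range(t, N + 1):        # unbounded repetition of part t
            coeff[n] += coeff[n - t]
    return coeff

N = 20
odd = count_partitions_from(range(1, N + 1, 2), N)

# Distinct parts: multiply by (1 + x^t) instead, so each part is used at most once.
distinct = [0] * (N + 1)
distinct[0] = 1
for t in range(1, N + 1):
    for n in range(N, t - 1, -1):
        distinct[n] += distinct[n - t]

assert odd == distinct                   # partitions into odd parts = distinct parts
```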
### Congruences
Main article: Ramanujan's congruences
Srinivasa Ramanujan is credited with discovering that the partition function satisfies nontrivial "congruences"; for example, the number of partitions is divisible by 5 whenever the decimal representation of the argument ends in 4 or 9.[12]
$p(5k+4)\equiv 0 \pmod 5\,$
For instance, the number of partitions for the integer 4 is 5. For the integer 9, the number of partitions is 30; for 14 there are 135 partitions. This is implied by an identity, also due to Ramanujan,[13]
$\sum_{k=0}^{\infty} p(5k+4)x^k = 5~ \frac{ (x^5)^5_{\infty} } {(x)^6_{\infty}}$
where the series $(x)_{\infty}$ is defined as
$(x)_{\infty} = \prod_{m=1}^{\infty}(1-x^m).$
He also discovered congruences related to 7 and 11:[14]
$\begin{align} p(7k + 5) &\equiv 0 \pmod 7\\ p(11k + 6) &\equiv 0 \pmod {11}. \end{align}$
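All three congruences are easy to spot-check with the `partitions_upto` sketch from the pentagonal-recurrence example above (a numerical sanity check, not a proof):

```python
p = partitions_upto(2000)
assert all(p[5 * k + 4] % 5 == 0 for k in range(100))
assert all(p[7 * k + 5] % 7 == 0 for k in range(100))
assert all(p[11 * k + 6] % 11 == 0 for k in range(100))
```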
Since 5, 7, and 11 are consecutive primes, one might think that there would be such a congruence for the next prime 13, $\scriptstyle p(13k \,+\, a) \;\equiv\; 0 \pmod{13}$ for some a. This is, however, false. It can also be shown that there is no congruence of the form $\scriptstyle p(bk \,+\, a) \;\equiv\; 0 \pmod{b}$ for any prime b other than 5, 7, or 11.
In the 1960s, A. O. L. Atkin of the University of Illinois at Chicago discovered additional congruences for small prime moduli. For example:
$p(11^3 \cdot 13 \cdot k + 237)\equiv 0 \pmod {13}.$
In 2000, Ken Ono of the University of Wisconsin–Madison proved that there are such congruences for every prime modulus. A few years later Ono, together with Scott Ahlgren of the University of Illinois, proved that there are partition congruences modulo every integer coprime to 6.[15]
### Partition function formulas
#### Exact formula
Main article: pentagonal number theorem
Leonhard Euler's pentagonal number theorem implies the identity
$p(n)=p(n-1)+p(n-2)-p(n-5)-p(n-7)+\cdots$
where the numbers 1, 2, 5, 7, ... that appear on the right side of the equation are the generalized pentagonal numbers $g_k = \frac{k(3k -1)}{2}$ for nonzero integers k. More formally,
$p(n)=\sum_k (-1)^{k-1}p\left(n- k(3k -1)/2\right)$
where the summation is over all nonzero integers k (positive and negative) and p(m) is taken to be 0 if m < 0.
#### Approximation formulas
Approximation formulas exist that are faster to calculate than the exact formula given above.
An asymptotic expression for p(n) is given by
$p(n) \sim \frac {1} {4n\sqrt3} \exp\left({\pi \sqrt {\frac{2n}{3}}}\right) \mbox { as } n\rightarrow \infty.$
This asymptotic formula was first obtained by G. H. Hardy and Ramanujan in 1918 and independently by J. V. Uspensky in 1920. Considering p(1000), the asymptotic formula gives about $2.4402 \times 10^{31}$, reasonably close to the exact answer given above (1.415% larger than the true value).
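The leading term is a one-liner to evaluate; a sketch using only the standard library:

```python
import math

def p_asymptotic(n):
    """Leading Hardy-Ramanujan term: exp(pi * sqrt(2n/3)) / (4 n sqrt(3))."""
    return math.exp(math.pi * math.sqrt(2 * n / 3)) / (4 * n * math.sqrt(3))

print(f"{p_asymptotic(1000):.4e}")   # about 2.4402e+31, roughly 1.4% above p(1000)
```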
Hardy and Ramanujan obtained an asymptotic expansion with this approximation as the first term:
$p(n)=\frac{1}{2 \sqrt{2}} \sum_{k=1}^v \sqrt{k}\, A_k(n)\, \frac{d}{dn} \exp \left({ \pi\sqrt{\frac23} \frac{\sqrt{n-\frac{1}{24}}}{k} } \right)$
where
$A_k(n) = \sum_{0 \,\le\, m \,<\, k; \; (m,\, k) \,=\, 1} e^{ \pi i \left[ s(m,\, k) \;-\; \frac{1}{k} 2 nm \right] }.$
Here, the notation (m, k) = 1 implies that the sum should occur only over the values of m that are relatively prime to k. The function s(m, k) is a Dedekind sum.
The error after v terms is of the order of the next term, and v may be taken to be of the order of $\sqrt n$. As an example, Hardy and Ramanujan showed that p(200) is the nearest integer to the sum of the first v=5 terms of the series.
In 1937, Hans Rademacher was able to improve on Hardy and Ramanujan's results by providing a convergent series expression for p(n). It is
$p(n)=\frac{1}{\pi \sqrt{2}} \sum_{k=1}^\infty \sqrt{k}\, A_k(n)\, \frac{d}{dn} \left({ \frac {1} {\sqrt{n-\frac{1}{24}}} \sinh \left[ {\frac{\pi}{k} \sqrt{\frac{2}{3}\left(n-\frac{1}{24}\right)}}\right] }\right) .$
The proof of Rademacher's formula involves Ford circles, Farey sequences, modular symmetry and the Dedekind eta function in a central way.
It may be shown that the k-th term of Rademacher's series is of the order
$\exp\left(\pi\sqrt\frac23 \frac{\sqrt n}{k} \right) ,$
so that the first term gives the Hardy–Ramanujan asymptotic approximation.
Paul Erdős published an elementary proof of the asymptotic formula for p(n) in 1942.[16][17]
#### Asymptotics of restricted partitions
The asymptotic expression for p(n) implies that
$\log p(n) \sim C \sqrt n \mbox { as } n\rightarrow \infty$
where $C = \pi\sqrt\frac23.$
If A is a set of natural numbers, we let pA(n) denote the number of partitions of n into elements of A. If A possesses positive natural density α then
$\log p_A(n) \sim C \sqrt{\alpha n}$
and conversely if this asymptotic property holds for pA(n) then A has natural density α.[18] This result was stated, with a sketch of proof, by Erdős in 1942.[16][19]
If A is a finite set, this analysis does not apply (the density of a finite set is zero). If A has k elements then[20]
$p_A(n) = \left(\prod_{a \in A} a^{-1}\right) \cdot \frac{n^{k-1}}{(k-1)!} + O(n^{k-2}) .$
## Ferrers diagram
The partition 6 + 4 + 3 + 1 of the positive number 14 can be represented by the following diagram; these diagrams are named in honor of Norman Macleod Ferrers:
(Ferrers diagram of 6 + 4 + 3 + 1 omitted)
The 14 circles are lined up in 4 columns, each having the size of a part of the partition. The diagrams for the 5 partitions of the number 4 are listed below:
4 = 3 + 1 = 2 + 2 = 2 + 1 + 1 = 1 + 1 + 1 + 1 (diagrams omitted)
If we now flip the diagram of the partition 6 + 4 + 3 + 1 along its main diagonal, we obtain another partition of 14:
6 + 4 + 3 + 1 ↔ 4 + 3 + 3 + 2 + 1 + 1 (diagrams omitted)
By turning the rows into columns, we obtain the partition 4 + 3 + 3 + 2 + 1 + 1 of the number 14. Such partitions are said to be conjugate of one another.[21] In the case of the number 4, partitions 4 and 1 + 1 + 1 + 1 are conjugate pairs, and partitions 3 + 1 and 2 + 1 + 1 are conjugate of each other. Of particular interest is the partition 2 + 2, which has itself as conjugate. Such a partition is said to be self-conjugate.[22]
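Conjugation is simple to express in code; a sketch (the helper name `conjugate` is ours), taking a partition as a weakly decreasing list of parts:

```python
def conjugate(partition):
    """Transpose the Ferrers diagram: column i of the original diagram
    has one dot for every part strictly greater than i."""
    return [sum(1 for part in partition if part > i)
            for i in range(partition[0])]

assert conjugate([6, 4, 3, 1]) == [4, 3, 3, 2, 1, 1]   # the example above
assert conjugate([2, 2]) == [2, 2]                      # self-conjugate
```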
Claim: The number of self-conjugate partitions is the same as the number of partitions with distinct odd parts.
Proof (outline): The crucial observation is that every odd part can be "folded" in the middle to form a self-conjugate diagram (diagram omitted).
One can then obtain a bijection between the set of partitions with distinct odd parts and the set of self-conjugate partitions, as illustrated by the following example:
9 + 7 + 3 (distinct odd parts) ↔ 5 + 5 + 4 + 3 + 2 (self-conjugate) (diagrams omitted)
Similar techniques can be employed to establish, for example, the following equalities:
• The number of partitions of n into no more than k parts is the same as the number of partitions of n into parts no larger than k.
• The number of partitions of n into no more than k parts is the same as the number of partitions of n + k into exactly k parts.
## Young diagrams
Main article: Young diagram
An alternative visual representation of an integer partition is its Young diagram, named after the British mathematician Alfred Young. Rather than representing a partition with dots, as in the Ferrers diagram, the Young diagram uses boxes. Thus, the Young diagram for the partition 5 + 4 + 1 consists of rows of 5, 4 and 1 boxes, while the Ferrers diagram for the same partition consists of rows of 5, 4 and 1 dots (diagrams omitted).
While this seemingly trivial variation doesn't appear worthy of separate mention, Young diagrams turn out to be extremely useful in the study of symmetric functions and group representation theory: in particular, filling the boxes of Young diagrams with numbers (or sometimes more complicated objects) obeying various rules leads to a family of objects called Young tableaux, and these tableaux have combinatorial and representation-theoretic significance.
## See also
• Rank of a partition
• Crank of a partition
• Young's lattice
• Dominance order
• Partition of a set
• Stars and bars (combinatorics)
• Plane partition
• Polite number, defined by partitions into consecutive integers
• Multiplicative partition
• Twelvefold way
• Ewens's sampling formula
• Faà di Bruno's formula
• Multipartition
• Multiset
• Newton's identities
• Leibniz's distribution table for integer partitions
• Durfee square
• Smallest-parts function
• A Goldbach partition is the partition of an even number into primes (see Goldbach's conjecture)
• Kostant's partition function
## Notes
1. Andrews, George E. Number Theory. W. B. Saunders Company, Philadelphia, 1971. Dover edition, page 149–150.
2. Hardy, G.H. Some Famous Problems of the Theory of Numbers. Clarendon Press, 1920.
3. "Sloane's A070177 ", . OEIS Foundation.
4. The formula, due to Henri Faure, can be found in: Muir, Thomas (1920). The Theory of Determinants in the Historical Order of Development II. Macmillan and Co. p. 212.
5. ^ a b Hardy and Wright (2008) p.365
6. Hardy and Wright (2008) Theorem 359, p.380
7. Hardy and Wright (2008) Theorems 360,361, p.380
8. Ono, Ken; Ahlgren, Scott (2001). "Congruence properties for the partition function". Proceedings of the National Academy of Sciences 98 (23): 12,882–12,884. doi:10.1073/pnas.191488598.
9. ^ a b Erdős, Pál (1942). "On an elementary proof of some asymptotic formulas in the theory of partitions". Ann. Math. (2) 43: 437–450. Zbl 0061.07905.
10. Nathanson (2000) p.456
11. Nathanson (2000) pp.475–485
12. Nathanson (2000) p.495
13. Nathanson (2000) p.458–464
14. Hardy and Wright (2008) p.362
15. Hardy and Wright (2008) p.368
## References
• George E. Andrews, The Theory of Partitions (1976), Cambridge University Press. ISBN 0-521-63766-X .
• Apostol, Tom M. (1990) [1976]. Modular functions and Dirichlet series in number theory. Graduate Texts in Mathematics 41 (2nd ed.). New York etc.: Springer-Verlag. ISBN 0-387-97127-0. Zbl 0697.10023. (See chapter 5 for a modern pedagogical intro to Rademacher's formula).
• Hardy, G.H.; Wright, E.M. (2008) [1938]. An Introduction to the Theory of Numbers. Revised by D.R. Heath-Brown and J.H. Silverman. Foreword by Andrew Wiles. (6th ed.). Oxford: Oxford University Press. ISBN 978-0-19-921986-5. Zbl 1159.11001.
• Lehmer, D. H. (1939). "On the remainder and convergence of the series for the partition function". Trans. Amer. Math. Soc. 46: 362–373. doi:10.1090/S0002-9947-1939-0000410-9. MR 0000410. Zbl 0022.20401. Provides the main formula (no derivatives), remainder, and older form for Ak(n).)
• Gupta, Gwyther, Miller, Roy. Soc. Math. Tables, vol 4, Tables of partitions, (1962) (Has text, nearly complete bibliography, but they (and Abramowitz) missed the Selberg formula for Ak(n), which is in Whiteman.)
• Macdonald, Ian G. (1979). Symmetric functions and Hall polynomials. Oxford Mathematical Monographs. Oxford University Press. ISBN 0-19-853530-9. Zbl 0487.20007. (See section I.1)
• Nathanson, M.B. (2000). Elementary Methods in Number Theory. Graduate Texts in Mathematics 195. Springer-Verlag. ISBN 0-387-98912-9. Zbl 0953.11002.
• Ken Ono, Distribution of the partition function modulo m, Annals of Mathematics 151 (2000) pp 293–307. (This paper proves congruences modulo every prime greater than 3)
• Sautoy, Marcus Du. The Music of the Primes. New York: Perennial-HarperCollins, 2003.
• Richard P. Stanley, Enumerative Combinatorics, Volumes 1 and 2. Cambridge University Press, 1999 ISBN 0-521-56069-1
• Whiteman, A. L. (1956). "A sum connected with the series for the partition function". Pacific Journal of Math. 6 (1). pp. 159–176. Zbl 0071.04004. (Provides the Selberg formula. The older form is the finite Fourier expansion of Selberg.)
• Hans Rademacher, Collected Papers of Hans Rademacher, (1974) MIT Press; v II, p 100–107, 108–122, 460–475.
• Miklós Bóna (2002). A Walk Through Combinatorics: An Introduction to Enumeration and Graph Theory. World Scientific Publishing. ISBN 981-02-4900-4. (an elementary introduction to the topic of integer partition, including a discussion of Ferrers graphs)
• George E. Andrews, Kimmo Eriksson (2004). Integer Partitions. Cambridge University Press. ISBN 0-521-60090-1.
• 'A Disappearing Number', devised piece by Complicite, mention Ramanujan's work on the Partition Function, 2007
http://en.wikipedia.org/wiki/Nuclear_physics
# Nuclear physics
Nuclear physics is the field of physics that studies the constituents and interactions of atomic nuclei. The most commonly known applications of nuclear physics are nuclear power generation and nuclear weapons technology, but the research has provided application in many fields, including those in nuclear medicine and magnetic resonance imaging, ion implantation in materials engineering, and radiocarbon dating in geology and archaeology.
The field of particle physics evolved out of nuclear physics and is typically taught in close association with nuclear physics.
## History
The history of nuclear physics as a discipline distinct from atomic physics starts with the discovery of radioactivity by Henri Becquerel in 1896,[1] made while investigating phosphorescence in uranium salts.[2] The discovery of the electron by J. J. Thomson a year later was an indication that the atom had internal structure. At the turn of the 20th century the accepted model of the atom was J. J. Thomson's plum pudding model, in which the atom was a large positively charged ball with small negatively charged electrons embedded inside it. By the turn of the century physicists had also discovered three types of radiation emanating from atoms, which they named alpha, beta, and gamma radiation. Experiments in 1911 by Otto Hahn and in 1914 by James Chadwick showed that the beta decay spectrum was continuous rather than discrete. That is, electrons were ejected from the atom with a range of energies, rather than the discrete amounts of energy observed in gamma and alpha decays. This was a problem for nuclear physics at the time, because it indicated that energy was not conserved in these decays.
In 1905, Albert Einstein formulated the idea of mass–energy equivalence. While the work on radioactivity by Becquerel and Marie Curie predates this, an explanation of the source of the energy of radioactivity would have to wait for the discovery that the nucleus itself was composed of smaller constituents, the nucleons.
### Rutherford's team discovers the nucleus
Ernest Rutherford is often considered to be the "Father of Nuclear Physics"
In 1907 Ernest Rutherford published "Radiation of the α Particle from Radium in passing through Matter."[3] Hans Geiger expanded on this work in a communication to the Royal Society[4] with experiments he and Rutherford had done, passing α particles through air, aluminum foil and gold leaf. More work was published in 1909 by Geiger and Marsden,[5] and further greatly expanded work was published in 1910 by Geiger.[6] In 1911–12 Rutherford went before the Royal Society to explain the experiments and propound the new theory of the atomic nucleus as we now understand it.
The key experiment behind this announcement happened in 1910 at the University of Manchester, where Ernest Rutherford's team performed a remarkable experiment in which Hans Geiger and Ernest Marsden, under his supervision, fired alpha particles (helium nuclei) at a thin film of gold foil. The plum pudding model predicted that the alpha particles should come out of the foil with their trajectories at most slightly bent. Rutherford had instructed his team to look for something that he was shocked to actually observe: a few particles were scattered through large angles, in some cases even completely backwards. He likened it to firing a bullet at tissue paper and having it bounce off. The discovery, beginning with Rutherford's analysis of the data in 1911, eventually led to the Rutherford model of the atom, in which the atom has a very small, very dense nucleus containing most of its mass and consisting of heavy positively charged particles with embedded electrons in order to balance out the charge (since the neutron was unknown). As an example, in this model (which is not the modern one) nitrogen-14 consisted of a nucleus with 14 protons and 7 electrons (21 total particles), and the nucleus was surrounded by 7 more orbiting electrons.
The Rutherford model worked quite well until studies of nuclear spin were carried out by Franco Rasetti at the California Institute of Technology in 1929. By 1925 it was known that protons and electrons had a spin of 1/2, and in the Rutherford model of nitrogen-14, 20 of the total 21 nuclear particles should have paired up to cancel each other's spin, and the final odd particle should have left the nucleus with a net spin of 1/2. Rasetti discovered, however, that nitrogen-14 had a spin of 1.
### James Chadwick discovers the neutron
In 1932 Chadwick realized that radiation that had been observed by Walther Bothe, Herbert L. Becker, Irène and Frédéric Joliot-Curie was actually due to a neutral particle of about the same mass as the proton, that he called the neutron (following a suggestion about the need for such a particle, by Rutherford). In the same year Dmitri Ivanenko suggested that neutrons were in fact spin 1/2 particles and that the nucleus contained neutrons to explain the mass not due to protons, and that there were no electrons in the nucleus—only protons and neutrons. The neutron spin immediately solved the problem of the spin of nitrogen-14, as the one unpaired proton and one unpaired neutron in this model, each contribute a spin of 1/2 in the same direction, for a final total spin of 1.
With the discovery of the neutron, scientists at last could calculate what fraction of binding energy each nucleus had, by comparing the nuclear mass with that of the protons and neutrons which composed it. Differences between nuclear masses were calculated in this way and—when nuclear reactions were measured—were found to agree with Einstein's calculation of the equivalence of mass and energy to high accuracy (within 1 percent as of 1934).
### Proca's equations of the massive vector boson field
Alexandru Proca was the first to develop and report the massive vector boson field equations and a theory of the mesonic field of nuclear forces. Proca's equations were known to Wolfgang Pauli[7] who mentioned the equations in his Nobel address, and they were also known to Yukawa, Wentzel, Taketani, Sakata, Kemmer, Heitler, and Fröhlich who appreciated the content of Proca's equations for developing a theory of the atomic nuclei in Nuclear Physics.[8][9][10][11][12]
### Yukawa's meson postulated to bind nuclei
In 1935 Hideki Yukawa proposed the first significant theory of the strong force to explain how the nucleus holds together. In the Yukawa interaction a virtual particle, later called a meson, mediated a force between all nucleons, including protons and neutrons. This force explained why nuclei did not disintegrate under the influence of proton repulsion, and it also gave an explanation of why the attractive strong force had a more limited range than the electromagnetic repulsion between protons. Later, the discovery of the pi meson showed it to have the properties of Yukawa's particle.
With Yukawa's papers, the modern model of the atom was complete. The center of the atom contains a tight ball of neutrons and protons, which is held together by the strong nuclear force, unless it is too large. Unstable nuclei may undergo alpha decay, in which they emit an energetic helium nucleus, or beta decay, in which they eject an electron (or positron). After one of these decays the resultant nucleus may be left in an excited state, and in this case it decays to its ground state by emitting high energy photons (gamma decay).
The study of the strong and weak nuclear forces (the latter explained by Enrico Fermi via Fermi's interaction in 1934) led physicists to collide nuclei and electrons at ever higher energies. This research became the science of particle physics, the crown jewel of which is the standard model of particle physics which describes the strong, weak, and electromagnetic forces.
## Modern nuclear physics
Main articles: Liquid-drop model and Nuclear shell model
A heavy nucleus can contain hundreds of nucleons which means that with some approximation it can be treated as a classical system, rather than a quantum-mechanical one. In the resulting liquid-drop model, the nucleus has an energy which arises partly from surface tension and partly from electrical repulsion of the protons. The liquid-drop model is able to reproduce many features of nuclei, including the general trend of binding energy with respect to mass number, as well as the phenomenon of nuclear fission.
Superimposed on this classical picture, however, are quantum-mechanical effects, which can be described using the nuclear shell model, developed in large part by Maria Goeppert-Mayer. Nuclei with certain numbers of neutrons and protons (the magic numbers 2, 8, 20, 28, 50, 82, 126, ...) are particularly stable, because their shells are filled.
Other more complicated models for the nucleus have also been proposed, such as the interacting boson model, in which pairs of neutrons and protons interact as bosons, analogously to Cooper pairs of electrons.
Much of current research in nuclear physics relates to the study of nuclei under extreme conditions such as high spin and excitation energy. Nuclei may also have extreme shapes (similar to that of Rugby balls) or extreme neutron-to-proton ratios. Experimenters can create such nuclei using artificially induced fusion or nucleon transfer reactions, employing ion beams from an accelerator. Beams with even higher energies can be used to create nuclei at very high temperatures, and there are signs that these experiments have produced a phase transition from normal nuclear matter to a new state, the quark-gluon plasma, in which the quarks mingle with one another, rather than being segregated in triplets as they are in neutrons and protons.
### Nuclear decay
Main article: Radioactivity
Eighty elements have at least one stable isotope never observed to decay, amounting to a total of about 254 stable isotopes. However, thousands of isotopes have been characterized that are unstable. These radioisotopes decay over time scales ranging from fractions of a second to weeks, years, billions of years, or even trillions of years.
The stability of a nucleus is highest when it falls into a certain range or balance of composition of neutrons and protons; too few or too many neutrons may cause it to decay. For example, in beta decay a nitrogen-16 atom (7 protons, 9 neutrons) is converted to an oxygen-16 atom (8 protons, 8 neutrons) within a few seconds of being created. In this decay a neutron in the nitrogen nucleus is converted into a proton, an electron and an antineutrino by the weak nuclear force. The element is transmuted to another element by acquiring the created proton.
In alpha decay the radioactive element decays by emitting a helium nucleus (2 protons and 2 neutrons), giving another element, plus helium-4. In many cases this process continues through several steps of this kind, including other types of decays, until a stable element is formed.
In gamma decay, a nucleus decays from an excited state into a lower energy state, by emitting a gamma ray. The element is not changed to another element in the process (no nuclear transmutation is involved).
Other more exotic decays are possible (see the main article). For example, in internal conversion decay, the energy from an excited nucleus may be used to eject one of the inner orbital electrons from the atom, in a process which produces high speed electrons, but is not beta decay, and (unlike beta decay) does not transmute one element to another.
### Nuclear fusion
In nuclear fusion, two low mass nuclei come into very close contact with each other, so that the strong force fuses them. It requires a large amount of energy to overcome the repulsion between the nuclei for the strong or nuclear forces to produce this effect, therefore nuclear fusion can only take place at very high temperatures or high pressures. Once the process succeeds, a very large amount of energy is released and the combined nucleus assumes a lower energy level. The binding energy per nucleon increases with mass number up until nickel-62. Stars like the Sun are powered by the fusion of four protons into a helium nucleus, two positrons, and two neutrinos. The uncontrolled fusion of hydrogen into helium is known as thermonuclear runaway. A frontier in current research at various institutions, for example the Joint European Torus (JET) and ITER, is the development of an economically viable method of using energy from a controlled fusion reaction. Natural nuclear fusion is the origin of the light and energy produced by the core of all stars including our own sun.
### Nuclear fission
Nuclear fission is the reverse process of fusion. For nuclei heavier than nickel-62 the binding energy per nucleon decreases with the mass number. It is therefore possible for energy to be released if a heavy nucleus breaks apart into two lighter ones.
The process of alpha decay is in essence a special type of spontaneous nuclear fission. This process produces a highly asymmetrical fission because the four particles which make up the alpha particle are especially tightly bound to each other, making production of this nucleus in fission particularly likely.
For certain of the heaviest nuclei which produce neutrons on fission, and which also easily absorb neutrons to initiate fission, a self-igniting type of neutron-initiated fission can be obtained, in a so-called chain reaction. Chain reactions were known in chemistry before physics, and in fact many familiar processes like fires and chemical explosions are chemical chain reactions. The fission or "nuclear" chain-reaction, using fission-produced neutrons, is the source of energy for nuclear power plants and fission type nuclear bombs, such as those detonated by the United States in Hiroshima and Nagasaki, Japan, at the end of World War II. Heavy nuclei such as uranium and thorium may also undergo spontaneous fission, but they are much more likely to undergo decay by alpha decay.
For a neutron-initiated chain-reaction to occur, there must be a critical mass of the element present in a certain space under certain conditions. The conditions for the smallest critical mass require the conservation of the emitted neutrons and also their slowing or moderation so that there is a greater cross-section or probability of them initiating another fission. In two regions of Oklo, Gabon, Africa, natural nuclear fission reactors were active over 1.5 billion years ago. Measurements of natural neutrino emission have demonstrated that around half of the heat emanating from the Earth's core results from radioactive decay. However, it is not known if any of this results from fission chain-reactions.
### Production of "heavy" elements (atomic number greater than five)
Main article: nucleosynthesis
According to the theory, as the Universe cooled after the big bang it eventually became possible for common subatomic particles as we know them (neutrons, protons and electrons) to exist. The most common particles created in the big bang which are still easily observable to us today were protons and electrons (in equal numbers). The protons would eventually form hydrogen atoms. Almost all the neutrons created in the Big Bang were absorbed into helium-4 in the first three minutes after the Big Bang, and this helium accounts for most of the helium in the universe today (see Big Bang nucleosynthesis).
Some fraction of elements beyond helium were created in the Big Bang, as the protons and neutrons collided with each other (lithium, beryllium, and perhaps some boron), but all of the "heavier elements" (carbon, element number 6, and elements of greater atomic number) that we see today, were created inside of stars during a series of fusion stages, such as the proton-proton chain, the CNO cycle and the triple-alpha process. Progressively heavier elements are created during the evolution of a star.
Since the binding energy per nucleon peaks around iron, energy is only released in fusion processes occurring below this point. Since the creation of heavier nuclei by fusion costs energy, nature resorts to the process of neutron capture. Neutrons (due to their lack of charge) are readily absorbed by a nucleus. The heavy elements are created by either a slow neutron capture process (the so-called s process) or by the rapid, or r process. The s process occurs in thermally pulsing stars (called AGB, or asymptotic giant branch stars) and takes hundreds to thousands of years to reach the heaviest elements of lead and bismuth. The r process is thought to occur in supernova explosions because the conditions of high temperature, high neutron flux and ejected matter are present. These stellar conditions make the successive neutron captures very fast, involving very neutron-rich species which then beta-decay to heavier elements, especially at the so-called waiting points that correspond to more stable nuclides with closed neutron shells (magic numbers).
## References
1. B. R. Martin (2006). Nuclear and Particle Physics. John Wiley & Sons, Ltd. ISBN 0-470-01999-9.
2. Henri Becquerel (1896). "Sur les radiations émises par phosphorescence". Comptes Rendus 122: 420–421.
3. Philosophical Magazine (12, p 134-46)
4. Proc. Roy. Soc. July 17, 1908
5. Proc. Roy. Soc. A82 p 495-500
6. Proc. Roy. Soc. Feb. 1, 1910
7. W. Pauli, Nobel lecture, December 13, 1946.
8.
9. G. A. Proca, Alexandre Proca.Oeuvre Scientifique Publiée, S.I.A.G., Rome, 1988.
10. C. Vuille, J. Ipser, J. Gallagher, “Einstein-Proca model, micro black holes, and naked singularities”, General Relativity and Gravitation, 34 (2002), 689.
11. R. Scipioni, “Isomorphism between non-Riemannian gravity and Einstein-Proca-Weyl theories extended to a class of scalar gravity theories”, Class. Quantum Gravity., 16 (1999), 2471.
12. R. W. Tucker and C. Wang, C., “An Einstein-Proca-fluid model for dark matter gravitational interactions”, Nucl. Phys. B - Proc. suppl., 57 (1997) 259.
http://en.wikipedia.org/wiki/Graph_coloring
# Graph coloring
A proper vertex coloring of the Petersen graph with 3 colors, the minimum number possible.
In graph theory, graph coloring is a special case of graph labeling; it is an assignment of labels traditionally called "colors" to elements of a graph subject to certain constraints. In its simplest form, it is a way of coloring the vertices of a graph such that no two adjacent vertices share the same color; this is called a vertex coloring. Similarly, an edge coloring assigns a color to each edge so that no two adjacent edges share the same color, and a face coloring of a planar graph assigns a color to each face or region so that no two faces that share a boundary have the same color.
Vertex coloring is the starting point of the subject, and other coloring problems can be transformed into a vertex version. For example, an edge coloring of a graph is just a vertex coloring of its line graph, and a face coloring of a plane graph is just a vertex coloring of its dual. However, non-vertex coloring problems are often stated and studied as is. That is partly for perspective, and partly because some problems are best studied in non-vertex form, as for instance is edge coloring.
The convention of using colors originates from coloring the countries of a map, where each face is literally colored. This was generalized to coloring the faces of a graph embedded in the plane. By planar duality it became coloring the vertices, and in this form it generalizes to all graphs. In mathematical and computer representations, it is typical to use the first few positive or nonnegative integers as the "colors". In general, one can use any finite set as the "color set". The nature of the coloring problem depends on the number of colors but not on what they are.
Graph coloring enjoys many practical applications as well as theoretical challenges. Beside the classical types of problems, different limitations can also be set on the graph, or on the way a color is assigned, or even on the color itself. It has even reached popularity with the general public in the form of the popular number puzzle Sudoku. Graph coloring is still a very active field of research.
Note: Many terms used in this article are defined in Glossary of graph theory.
## History
The first results about graph coloring deal almost exclusively with planar graphs in the form of the coloring of maps. While trying to color a map of the counties of England, Francis Guthrie postulated the four color conjecture, noting that four colors were sufficient to color the map so that no regions sharing a common border received the same color. Guthrie’s brother passed on the question to his mathematics teacher Augustus de Morgan at University College, who mentioned it in a letter to William Hamilton in 1852. Arthur Cayley raised the problem at a meeting of the London Mathematical Society in 1879. The same year, Alfred Kempe published a paper that claimed to establish the result, and for a decade the four color problem was considered solved. For his accomplishment Kempe was elected a Fellow of the Royal Society and later President of the London Mathematical Society.[1]
In 1890, Heawood pointed out that Kempe’s argument was wrong. However, in that paper he proved the five color theorem, saying that every planar map can be colored with no more than five colors, using ideas of Kempe. In the following century, a vast amount of work and theories were developed to reduce the number of colors to four, until the four color theorem was finally proved in 1976 by Kenneth Appel and Wolfgang Haken. Perhaps surprisingly, the proof went back to the ideas of Heawood and Kempe and largely disregarded the intervening developments.[2] The proof of the four color theorem is also noteworthy for being the first major computer-aided proof.
In 1912, George David Birkhoff introduced the chromatic polynomial to study the coloring problems, which was generalised to the Tutte polynomial by Tutte, important structures in algebraic graph theory. Kempe had already drawn attention to the general, non-planar case in 1879,[3] and many results on generalisations of planar graph coloring to surfaces of higher order followed in the early 20th century.
In 1960, Claude Berge formulated another conjecture about graph coloring, the strong perfect graph conjecture, originally motivated by an information-theoretic concept called the zero-error capacity of a graph introduced by Shannon. The conjecture remained unresolved for 40 years, until it was established as the celebrated strong perfect graph theorem by Chudnovsky, Robertson, Seymour, and Thomas in 2002.
Graph coloring has been studied as an algorithmic problem since the early 1970s: the chromatic number problem is one of Karp’s 21 NP-complete problems from 1972, and at approximately the same time various exponential-time algorithms were developed based on backtracking and on the deletion-contraction recurrence of Zykov (1949). One of the major applications of graph coloring, register allocation in compilers, was introduced in 1981.
## Definition and terminology
This graph can be 3-colored in 12 different ways.
### Vertex coloring
When used without any qualification, a coloring of a graph is almost always a proper vertex coloring, namely a labelling of the graph’s vertices with colors such that no two vertices sharing the same edge have the same color. Since a vertex with a loop could never be properly colored, it is understood that graphs in this context are loopless.
The terminology of using colors for vertex labels goes back to map coloring. Labels like red and blue are only used when the number of colors is small, and normally it is understood that the labels are drawn from the integers {1,2,3,...}.
A coloring using at most k colors is called a (proper) k-coloring. The smallest number of colors needed to color a graph G is called its chromatic number, and is often denoted χ(G). Sometimes γ(G) is used, since χ(G) is also used to denote the Euler characteristic of a graph. A graph that can be assigned a (proper) k-coloring is k-colorable, and it is k-chromatic if its chromatic number is exactly k. A subset of vertices assigned to the same color is called a color class; every such class forms an independent set. Thus, a k-coloring is the same as a partition of the vertex set into k independent sets, and the terms k-partite and k-colorable have the same meaning.
### Chromatic polynomial
All nonisomorphic graphs on 3 vertices and their chromatic polynomials. The empty graph E3 (red) admits a 1-coloring, the others admit no such colorings. The green graph admits 12 colorings with 3 colors.
Main article: Chromatic polynomial
The chromatic polynomial counts the number of ways a graph can be colored using no more than a given number of colors. For example, using three colors, the graph in the image to the right can be colored in 12 ways. With only two colors, it cannot be colored at all. With four colors, it can be colored in 24 + 4⋅12 = 72 ways: using all four colors, there are 4! = 24 valid colorings (every assignment of four colors to any 4-vertex graph is a proper coloring); and for every choice of three of the four colors, there are 12 valid 3-colorings. So, for the graph in the example, a table of the number of valid colorings would start like this:
| Available colors | 1 | 2 | 3 | 4 | … |
|------------------|---|---|----|----|---|
| Number of colorings | 0 | 0 | 12 | 72 | … |
The chromatic polynomial is a function P(G, t) that counts the number of t-colorings of G. As the name indicates, for a given G the function is indeed a polynomial in t. For the example graph, P(G, t) = t(t − 1)2(t − 2), and indeed P(G, 4) = 72.
The chromatic polynomial includes at least as much information about the colorability of G as does the chromatic number. Indeed, χ is the smallest positive integer that is not a root of the chromatic polynomial
$\chi (G)=\min\{ k\,\colon\,P(G,k) > 0 \}.$
The chromatic polynomials of certain graphs:

| Graph | Chromatic polynomial |
|-------|----------------------|
| Triangle $K_3$ | $t(t-1)(t-2)$ |
| Complete graph $K_n$ | $t(t-1)(t-2) \cdots (t-(n-1))$ |
| Tree with $n$ vertices | $t(t-1)^{n-1}$ |
| Cycle $C_n$ | $(t-1)^n+(-1)^n(t-1)$ |
| Petersen graph | $t(t-1)(t-2)(t^7-12t^6+67t^5-230t^4+529t^3-814t^2+775t-352)$ |
### Edge coloring
Main article: Edge coloring
An edge coloring of a graph is a proper coloring of the edges, meaning an assignment of colors to edges so that no vertex is incident to two edges of the same color. An edge coloring with k colors is called a k-edge-coloring and is equivalent to the problem of partitioning the edge set into k matchings. The smallest number of colors needed for an edge coloring of a graph G is the chromatic index, or edge chromatic number, χ′(G). A Tait coloring is a 3-edge coloring of a cubic graph. The four color theorem is equivalent to the assertion that every planar cubic bridgeless graph admits a Tait coloring.
### Total coloring
Main article: Total coloring
Total coloring is a type of coloring on the vertices and edges of a graph. When used without any qualification, a total coloring is always assumed to be proper in the sense that no adjacent vertices, no adjacent edges, and no edge and its endvertices are assigned the same color. The total chromatic number χ″(G) of a graph G is the least number of colors needed in any total coloring of G.
## Properties
### Bounds on the chromatic number
Assigning distinct colors to distinct vertices always yields a proper coloring, so
$1 \le \chi(G) \le n.\,$
The only graphs that can be 1-colored are edgeless graphs. A complete graph $K_n$ of n vertices requires $\chi(K_n)=n$ colors. In an optimal coloring there must be at least one of the graph's m edges between every pair of color classes, so
$\chi(G)(\chi(G)-1) \le 2m.\,$
If G contains a clique of size k, then at least k colors are needed to color that clique; in other words, the chromatic number is at least the clique number:
$\chi(G) \ge \omega(G).\,$
For interval graphs this bound is tight.
The 2-colorable graphs are exactly the bipartite graphs, including trees and forests. By the four color theorem, every planar graph can be 4-colored.
A greedy coloring shows that every graph can be colored with one more color than the maximum vertex degree,
$\chi(G) \le \Delta(G) + 1. \,$
Complete graphs have $\chi(G)=n$ and $\Delta(G)=n-1$, and odd cycles have $\chi(G)=3$ and $\Delta(G)=2$, so for these graphs this bound is best possible. In all other cases, the bound can be slightly improved; Brooks’ theorem[4] states that
Brooks’ theorem: $\chi (G) \le \Delta (G)$ for a connected, simple graph G, unless G is a complete graph or an odd cycle.
### Graphs with high chromatic number
Graphs with large cliques have high chromatic number, but the opposite is not true. The Grötzsch graph is an example of a 4-chromatic graph without a triangle, and the example can be generalised to the Mycielskians.
Mycielski’s Theorem (Alexander Zykov 1949, Jan Mycielski 1955): There exist triangle-free graphs with arbitrarily high chromatic number.
From Brooks’s theorem, graphs with high chromatic number must have high maximum degree. Another local property that leads to high chromatic number is the presence of a large clique. But colorability is not an entirely local phenomenon: A graph with high girth looks locally like a tree, because all cycles are long, but its chromatic number need not be 2:
Theorem (Erdős): There exist graphs of arbitrarily high girth and chromatic number.
### Bounds on the chromatic index
An edge coloring of G is a vertex coloring of its line graph $L(G)$, and vice versa. Thus,
$\chi'(G)=\chi(L(G)). \,$
There is a strong relationship between edge colorability and the graph’s maximum degree $\Delta(G)$. Since all edges incident to the same vertex need their own color, we have
$\chi'(G) \ge \Delta(G).\,$
Moreover,
König’s theorem: $\chi'(G) = \Delta(G)$ if G is bipartite.
In general, the relationship is even stronger than what Brooks’s theorem gives for vertex coloring:
Vizing’s Theorem: A graph of maximal degree $\Delta$ has edge-chromatic number $\Delta$ or $\Delta+1$.
### Other properties
A graph has a k-coloring if and only if it has an acyclic orientation for which the longest path has length at most k; this is the Gallai–Hasse–Roy–Vitaver theorem (Nešetřil & Ossona de Mendez 2012).
For planar graphs, vertex colorings are essentially dual to nowhere-zero flows.
About infinite graphs, much less is known. The following are two of the few results about infinite graph coloring:
• If all finite subgraphs of an infinite graph G are k-colorable, then so is G, under the assumption of the axiom of choice. This is the de Bruijn–Erdős theorem of de Bruijn & Erdős (1951).
• If a graph admits a full n-coloring for every n ≥ n0, it admits an infinite full coloring (Fawcett 1978).
### Open problems
The chromatic number of the plane, where two points are adjacent if they have unit distance, is unknown, although it is one of 4, 5, 6, or 7. Other open problems concerning the chromatic number of graphs include the Hadwiger conjecture stating that every graph with chromatic number k has a complete graph on k vertices as a minor, the Erdős–Faber–Lovász conjecture bounding the chromatic number of unions of complete graphs that have at most one vertex in common to each pair, and the Albertson conjecture that among k-chromatic graphs the complete graphs are the ones with smallest crossing number.
When Birkhoff and Lewis introduced the chromatic polynomial in their attack on the four-color theorem, they conjectured that for planar graphs G, the polynomial $P(G,t)$ has no zeros in the region $[4,\infty)$. Although it is known that such a chromatic polynomial has no zeros in the region $[5,\infty)$ and that $P(G,4) \neq 0$, their conjecture is still unresolved. It also remains an unsolved problem to characterize graphs which have the same chromatic polynomial and to determine which polynomials are chromatic.
## Algorithms
**Decision**

• Name: Graph coloring, vertex coloring, k-coloring
• Input: Graph G with n vertices; integer k
• Output: Does G admit a proper vertex coloring with k colors?
• Running time: $O(2^n n)$[5]
• Complexity: NP-complete
• Reduction from: 3-Satisfiability
• Garey–Johnson: GT4

**Optimisation**

• Name: Chromatic number
• Input: Graph G with n vertices
• Output: χ(G)
• Complexity: NP-hard
• Approximability: $O(n\,(\log n)^{-3}(\log \log n)^{2})$
• Inapproximability: $O(n^{1-\varepsilon})$ unless P = NP

**Counting problem**

• Name: Chromatic polynomial
• Input: Graph G with n vertices; integer k
• Output: The number P(G, k) of proper k-colorings of G
• Running time: $O(2^n n)$
• Complexity: #P-complete
• Approximability: FPRAS for restricted cases
• Inapproximability: No PTAS unless P = NP
### Polynomial time
Determining if a graph can be colored with 2 colors is equivalent to determining whether or not the graph is bipartite, and thus computable in linear time using breadth-first search. More generally, the chromatic number and a corresponding coloring of perfect graphs can be computed in polynomial time using semidefinite programming. Closed formulas for the chromatic polynomial are known for many classes of graphs, such as forests, chordal graphs, cycles, wheels, and ladders, so these can be evaluated in polynomial time.
If the graph is planar and has low branchwidth (or is nonplanar but with a known branch decomposition), then it can be solved in polynomial time using dynamic programming. In general, the time required is polynomial in the graph size, but exponential in the branchwidth.
### Exact algorithms
Brute-force search for a k-coloring considers each of the $k^n$ assignments of k colors to n vertices and checks for each whether it is legal. To compute the chromatic number and the chromatic polynomial, this procedure is used for every $k=1,\ldots,n-1$, which is impractical for all but the smallest input graphs.
Using dynamic programming and a bound on the number of maximal independent sets, k-colorability can be decided in time and space $O(2.445^n)$.[6] Using the principle of inclusion–exclusion and Yates's algorithm for the fast zeta transform, k-colorability can be decided in time $O(2^n n)$[5] for any k. Faster algorithms are known for 3- and 4-colorability, which can be decided in time $O(1.3289^n)$[7] and $O(1.7504^n)$,[8] respectively.
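For concreteness, a brute-force k-colorability test of the kind just described, usable only for very small graphs (the representation and names are our choices):

```python
from itertools import product

def k_colorable(n, edges, k):
    """Try all k**n assignments of k colors to vertices 0..n-1."""
    return any(all(c[u] != c[v] for u, v in edges)
               for c in product(range(k), repeat=n))

triangle = [(0, 1), (1, 2), (0, 2)]
assert not k_colorable(3, triangle, 2)   # chi(K3) = 3
assert k_colorable(3, triangle, 3)
```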
### Contraction
The contraction $G/uv$ of graph G is the graph obtained by identifying the vertices u and v, removing any edges between them, and replacing them with a single vertex w where any edges that were incident on u or v are redirected to w. This operation plays a major role in the analysis of graph coloring.
The chromatic number satisfies the recurrence relation:
$\chi(G) = \text{min} \{ \chi(G+uv), \chi(G/uv)\}$
due to Zykov (1949), where u and v are nonadjacent vertices and $G+uv$ is the graph with the edge $uv$ added. Several algorithms are based on evaluating this recurrence; the resulting computation tree is sometimes called a Zykov tree. The running time depends on the heuristic used to choose the vertices u and v.
The chromatic polynomial satisfies following recurrence relation
$P(G-uv, k)= P(G/uv, k)+ P(G, k)$
where u and v are adjacent vertices and $G-uv$ is the graph with the edge $uv$ removed. $P(G - uv, k)$ counts the proper colorings of the graph in which u and v may receive the same or different colors; it therefore splits into a sum of two terms. If u and v receive different colors, the coloring is also a proper coloring of G itself, where the edge $uv$ is present. If u and v receive the same color, the coloring corresponds to a proper coloring of the contraction $G/uv$. Tutte's curiosity about which other graph properties satisfied this recurrence led him to discover a bivariate generalization of the chromatic polynomial, the Tutte polynomial.
The expressions give rise to a recursive procedure, called the deletion–contraction algorithm, which forms the basis of many algorithms for graph coloring. The running time satisfies the same recurrence relation as the Fibonacci numbers, so in the worst case, the algorithm runs in time within a polynomial factor of $((1+\sqrt{5})/2)^{n+m}=O(1.6180^{n+m})$ for n vertices and m edges.[9] The analysis can be improved to within a polynomial factor of the number $t(G)$ of spanning trees of the input graph.[10] In practice, branch and bound strategies and graph isomorphism rejection are employed to avoid some recursive calls; the running time depends on the heuristic used to pick the vertex pair.
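A direct transcription of the deletion–contraction recurrence, evaluating P(G, k) at a concrete integer k rather than symbolically; a sketch with the exponential running time the analysis above predicts:

```python
def chromatic_value(vertices, edges, k):
    """P(G, k) by deletion-contraction; edges is a set of 2-element frozensets."""
    if not edges:
        return k ** len(vertices)        # edgeless graph: k choices per vertex
    e = next(iter(edges))
    u, v = tuple(e)
    deleted = edges - {e}
    # Contract v into u; discard loops and merge parallel edges.
    contracted = {frozenset(u if w == v else w for w in f) for f in deleted}
    contracted = {f for f in contracted if len(f) == 2}
    return (chromatic_value(vertices, deleted, k)
            - chromatic_value(vertices - {v}, contracted, k))

# A triangle with one pendant vertex has P(G, t) = t (t-1)^2 (t-2):
g = {frozenset(p) for p in [(0, 1), (1, 2), (0, 2), (2, 3)]}
assert chromatic_value({0, 1, 2, 3}, g, 3) == 12
assert chromatic_value({0, 1, 2, 3}, g, 4) == 72
```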
### Greedy coloring
Main article: Greedy coloring
Two greedy colorings of the same graph using different vertex orders. The right example generalises to 2-colorable graphs with n vertices, where the greedy algorithm uses $n/2$ colors.
The greedy algorithm considers the vertices in a specific order $v_1$,…,$v_n$ and assigns to $v_i$ the smallest available color not used by $v_i$’s neighbours among $v_1$,…,$v_{i-1}$, adding a fresh color if needed. The quality of the resulting coloring depends on the chosen ordering. There exists an ordering that leads to a greedy coloring with the optimal number of $\chi(G)$ colors. On the other hand, greedy colorings can be arbitrarily bad; for example, the crown graph on n vertices can be 2-colored, but has an ordering that leads to a greedy coloring with $n/2$ colors.
If the vertices are ordered according to their degrees, the resulting greedy coloring uses at most $\max_i \min\{d(x_i) + 1, i\}$ colors, at most one more than the graph’s maximum degree. This heuristic is sometimes called the Welsh–Powell algorithm.[11] Another heuristic due to Brélaz establishes the ordering dynamically while the algorithm proceeds, choosing next the vertex adjacent to the largest number of different colors.[12] Many other graph coloring heuristics are similarly based on greedy coloring for a specific static or dynamic strategy of ordering the vertices; these algorithms are sometimes called sequential coloring algorithms.
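A compact sketch of the greedy scheme with an optional vertex order, defaulting to a largest-degree-first (Welsh–Powell-style) static order; the graph is again assumed to be an adjacency dict:

```python
def greedy_coloring(adj, order=None):
    """Assign each vertex the smallest color unused by its already-colored
    neighbours; `adj` maps each vertex to its set of neighbours."""
    if order is None:   # Welsh–Powell-style static order: largest degree first
        order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:   # smallest available color
            c += 1
        color[v] = c
    return color
```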
### Parallel and distributed algorithms
In the field of distributed algorithms, graph coloring is closely related to the problem of symmetry breaking. The current state-of-the-art randomized algorithms are faster for sufficiently large maximum degree Δ than deterministic algorithms. The fastest randomized algorithms employ the multi-trials technique by Schneider et al.[13]
In a symmetric graph, a deterministic distributed algorithm cannot find a proper vertex coloring. Some auxiliary information is needed in order to break symmetry. A standard assumption is that initially each node has a unique identifier, for example, from the set {1, 2, ..., n}. Put otherwise, we assume that we are given an n-coloring. The challenge is to reduce the number of colors from n to, e.g., Δ + 1. The more colors are employed, e.g. O(Δ) instead of Δ + 1, the fewer communication rounds are required.[13]
A straightforward distributed version of the greedy algorithm for (Δ + 1)-coloring requires Θ(n) communication rounds in the worst case, as information may need to be propagated from one side of the network to the other.
The simplest interesting case is an n-cycle. Richard Cole and Uzi Vishkin[14] show that there is a distributed algorithm that reduces the number of colors from n to O(log n) in one synchronous communication step. By iterating the same procedure, it is possible to obtain a 3-coloring of an n-cycle in O(log* n) communication steps (assuming that we have unique node identifiers).
The function log*, the iterated logarithm, is an extremely slowly growing function, "almost constant". Hence the result by Cole and Vishkin raised the question of whether there is a constant-time distributed algorithm for 3-coloring an n-cycle. Linial (1992) showed that this is not possible: any deterministic distributed algorithm requires Ω(log* n) communication steps to reduce an n-coloring to a 3-coloring in an n-cycle.
The technique by Cole and Vishkin can be applied in arbitrary bounded-degree graphs as well; the running time is poly(Δ) + O(log* n).[15] The technique was extended to unit disk graphs by Schneider et al.[16] The fastest deterministic algorithms for (Δ + 1)-coloring for small Δ are due to Leonid Barenboim, Michael Elkin and Fabian Kuhn.[17] The algorithm by Barenboim et al. runs in time O(Δ) + log*(n)/2, which is optimal in terms of n since the constant factor 1/2 cannot be improved due to Linial's lower bound. Panconesi et al.[18] use network decompositions to compute a Δ+1 coloring in time $2 ^{O(\sqrt{\log n})}$.
The problem of edge coloring has also been studied in the distributed model. Panconesi & Rizzi (2001) achieve a (2Δ − 1)-coloring in O(Δ + log* n) time in this model. The lower bound for distributed vertex coloring due to Linial (1992) applies to the distributed edge coloring problem as well.
### Decentralized algorithms
Decentralized algorithms are ones where no message passing is allowed (in contrast to distributed algorithms where local message passing takes place). Somewhat surprisingly, efficient decentralized algorithms exist that will color a graph if a proper coloring exists. These assume that a vertex is able to sense whether any of its neighbors are using the same color as the vertex, i.e., whether a local conflict exists. This is a mild assumption in many applications, e.g. in wireless channel allocation it is usually reasonable to assume that a station will be able to detect whether other interfering transmitters are using the same channel (e.g. by measuring the SINR). This sensing information is sufficient to allow algorithms based on learning automata to find a proper graph coloring with probability one, e.g. see Leith (2006) and Duffy (2008).
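To convey the flavour, here is a toy sketch of our own (explicitly not the learning-automata scheme of Leith (2006) or Duffy (2008)): each node can only sense whether it currently conflicts with a neighbour and, if so, resamples its color at random.

```python
import random

def decentralized_coloring(adj, k, max_rounds=100_000, seed=0):
    """Conflict-sensing recoloring: no messages are exchanged; a vertex only
    observes whether some neighbour shares its color and, if so, re-picks
    uniformly at random. Returns a proper k-coloring or None on timeout."""
    rng = random.Random(seed)
    color = {v: rng.randrange(k) for v in adj}
    for _ in range(max_rounds):
        conflicted = [v for v in adj
                      if any(color[u] == color[v] for u in adj[v])]
        if not conflicted:
            return color
        for v in conflicted:          # each conflicted node acts locally
            color[v] = rng.randrange(k)
    return None
```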
### Computational complexity
Graph coloring is computationally hard. It is NP-complete to decide if a given graph admits a k-coloring for a given k except for the cases k = 1 and k = 2. In particular, it is NP-hard to compute the chromatic number. The 3-coloring problem remains NP-complete even on planar graphs of degree 4.[19]
The best known approximation algorithm computes a coloring of size at most within a factor $O(n(\log n)^{-3}(\log \log n)^2)$ of the chromatic number.[20] For all ε > 0, approximating the chromatic number within $n^{1-\varepsilon}$ is NP-hard.[21]
It is also NP-hard to color a 3-colorable graph with 4 colors[22] and a k-colorable graph with $k^{(\log k)/25}$ colors for sufficiently large constant k.[23]
Computing the coefficients of the chromatic polynomial is #P-hard. In fact, even computing the value of $P(G,k)$ is #P-hard at any rational point k except for k = 1 and k = 2.[24] There is no FPRAS for evaluating the chromatic polynomial at any rational point k ≥ 1.5 except for k = 2 unless NP = RP.[25]
For edge coloring, the proof of Vizing’s result gives an algorithm that uses at most Δ+1 colors. However, deciding between the two candidate values for the edge chromatic number is NP-complete.[26] In terms of approximation algorithms, Vizing’s algorithm shows that the edge chromatic number can be approximated to within 4/3, and the hardness result shows that no (4/3 − ε )-algorithm exists for any ε > 0 unless P = NP. These are among the oldest results in the literature of approximation algorithms, even though neither paper makes explicit use of that notion.[27]
## Applications
### Scheduling
Vertex coloring models a number of scheduling problems.[28] In the cleanest form, a given set of jobs needs to be assigned to time slots, with each job requiring one such slot. Jobs can be scheduled in any order, but pairs of jobs may be in conflict in the sense that they may not be assigned to the same time slot, for example because they both rely on a shared resource. The corresponding graph contains a vertex for every job and an edge for every conflicting pair of jobs. The chromatic number of the graph is exactly the minimum makespan, the optimal time to finish all jobs without conflicts.
Details of the scheduling problem define the structure of the graph. For example, when assigning aircraft to flights, the resulting conflict graph is an interval graph, so the coloring problem can be solved efficiently. In bandwidth allocation to radio stations, the resulting conflict graph is a unit disk graph, so the coloring problem is 3-approximable.
### Register allocation
Main article: Register allocation
A compiler is a computer program that translates one computer language into another. To improve the execution time of the resulting code, one of the techniques of compiler optimization is register allocation, where the most frequently used values of the compiled program are kept in the fast processor registers. Ideally, values are assigned to registers so that they can all reside in the registers when they are used.
The textbook approach to this problem is to model it as a graph coloring problem.[29] The compiler constructs an interference graph, where vertices are symbolic registers and an edge connects two nodes if they are needed at the same time. If the graph can be colored with k colors then the variables can be stored in k registers.
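For instance, assuming the greedy_coloring sketch from the greedy-coloring section above is in scope, a small hypothetical interference graph (variable names purely illustrative) could be handled like this:

```python
# Symbolic registers a–e; an edge means the two values are live simultaneously.
interference = {
    'a': {'b', 'c'},
    'b': {'a', 'c', 'd'},
    'c': {'a', 'b'},
    'd': {'b', 'e'},
    'e': {'d'},
}
assignment = greedy_coloring(interference)
print(1 + max(assignment.values()), "registers suffice")   # e.g. 3
```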
### Other applications
The problem of coloring a graph has found a number of applications, including pattern matching.
The recreational puzzle Sudoku can be seen as completing a 9-coloring on a specific graph with 81 vertices.
## Other colorings
### Ramsey theory
Main article: Ramsey theory
An important class of improper coloring problems is studied in Ramsey theory, where the graph’s edges are assigned to colors, and there is no restriction on the colors of incident edges. A simple example is the theorem on friends and strangers, which says that in any coloring of the edges of $K_6$, the complete graph on six vertices, there will be a monochromatic triangle; this is often illustrated by saying that any group of six people either has three mutual strangers or three mutual acquaintances. Ramsey theory is concerned with generalisations of this idea to seek regularity amid disorder, finding general conditions for the existence of monochromatic subgraphs with given structure.
### Other colorings
| Coloring | Description |
|---|---|
| List coloring | Each vertex chooses from a list of colors |
| List edge-coloring | Each edge chooses from a list of colors |
| Total coloring | Vertices and edges are colored |
| Harmonious coloring | Every pair of colors appears on at most one edge |
| Complete coloring | Every pair of colors appears on at least one edge |
| Exact coloring | Every pair of colors appears on exactly one edge |
| Acyclic coloring | Every 2-chromatic subgraph is acyclic |
| Star coloring | Every 2-chromatic subgraph is a disjoint collection of stars |
| Strong coloring | Every color appears in every partition of equal size exactly once |
| Strong edge coloring | Edges are colored such that each color class induces a matching (equivalent to coloring the square of the line graph) |
| Equitable coloring | The sizes of color classes differ by at most one |
| T-coloring | The distance between the colors of adjacent vertices must not belong to a fixed set T |
| Rank coloring | If two vertices have the same color i, then every path between them contains a vertex with color greater than i |
| Interval edge-coloring | The colors of edges meeting in a common vertex must be contiguous |
| Circular coloring | Motivated by task systems in which production proceeds in a cyclic way |
| Path coloring | Models a routing problem in graphs |
| Fractional coloring | Vertices may have multiple colors, and on each edge the sum of the color parts of each vertex is not greater than one |
| Oriented coloring | Takes into account the orientation of edges of the graph |
| Cocoloring | An improper vertex coloring where every color class induces an independent set or a clique |
| Subcoloring | An improper vertex coloring where every color class induces a union of cliques |
| Defective coloring | An improper vertex coloring where every color class induces a bounded-degree subgraph |
| Weak coloring | An improper vertex coloring where every non-isolated node has at least one neighbor with a different color |
| Sum-coloring | The criterion to minimize is the sum of colors |
| Centered coloring | Every connected induced subgraph has a color that is used exactly once |
Coloring can also be considered for signed graphs and gain graphs.
## Notes
1. M. Kubale, History of graph coloring, in Kubale (2004)
4. Cole & Vishkin (1986), see also Cormen, Leiserson & Rivest (1990, Section 30.5)
## References
• Barenboim, L.; Elkin, M. (2009), "Distributed (Δ + 1)-coloring in linear (in Δ) time", , pp. 111–120, doi:10.1145/1536414.1536432, ISBN 978-1-60558-506-2
• Panconesi, A.; Srinivasan, A. (1996), "On the complexity of distributed network decomposition", Journal of Algorithms 20
• Schneider, J. (2010), "A new technique for distributed symmetry breaking",
• Schneider, J. (2008), "A log-star distributed maximal independent set algorithm for growth-bounded graphs",
• Beigel, R.; Eppstein, D. (2005), "3-coloring in time O(1.3289^n)", 54 (2): 168–204, doi:10.1016/j.jalgor.2004.06.008
• Björklund, A.; Husfeldt, T.; Koivisto, M. (2009), "Set partitioning via inclusion–exclusion", 39 (2): 546–563, doi:10.1137/070683933
• Brélaz, D. (1979), "New methods to color the vertices of a graph", 22 (4): 251–256, doi:10.1145/359094.359101
• Brooks, R. L.; Tutte, W. T. (1941), "On colouring the nodes of a network", 37 (2): 194–197, doi:10.1017/S030500410002168X
• de Bruijn, N. G.; Erdős, P. (1951), "A colour problem for infinite graphs and a problem in the theory of relations", Nederl. Akad. Wetensch. Proc. Ser. A 54: 371–373 (= Indag. Math. 13)
• Byskov, J.M. (2004), "Enumerating maximal independent sets with applications to graph colouring", 32 (6): 547–556, doi:10.1016/j.orl.2004.03.002
• Chaitin, G. J. (1982), "Register allocation & spilling via graph colouring", , pp. 98–105, doi:10.1145/800230.806984, ISBN 0-89791-074-5
• Cole, R.; Vishkin, U. (1986), "Deterministic coin tossing with applications to optimal parallel list ranking", 70 (1): 32–53, doi:10.1016/S0019-9958(86)80023-7
• Cormen, T. H.; Leiserson, C. E.; Rivest, R. L. (1990), (1st ed.), The MIT Press
• Dailey, D. P. (1980), "Uniqueness of colorability and colorability of planar 4-regular graphs are NP-complete", 30 (3): 289–293, doi:10.1016/0012-365X(80)90236-8
• Duffy, K.; O'Connell, N.; Sapozhnikov, A. (2008), "Complexity analysis of a decentralised graph colouring algorithm", Information Processing Letters 107 (2): 60–63, doi:10.1016/j.ipl.2008.01.002
• Fawcett, B. W. (1978), "On infinite full colourings of graphs", XXX: 455–457
• Garey, M. R.; Johnson, D. S. (1979), , W.H. Freeman, ISBN 0-7167-1045-5
• Garey, M. R.; Johnson, D. S.; Stockmeyer, L. (1974), "Some simplified NP-complete problems", Proceedings of the Sixth Annual ACM Symposium on Theory of Computing, pp. 47–63, doi:10.1145/800119.803884
• Goldberg, L. A.; Jerrum, M. (July 2008), "Inapproximability of the Tutte polynomial", 206 (7): 908–929, doi:10.1016/j.ic.2008.04.003
• Goldberg, A. V.; Plotkin, S. A.; Shannon, G. E. (1988), "Parallel symmetry-breaking in sparse graphs", 1 (4): 434–446, doi:10.1137/0401044
• Guruswami, V.; Khanna, S. (2000), "On the hardness of 4-coloring a 3-colorable graph", Proceedings of the 15th Annual IEEE Conference on Computational Complexity, pp. 188–197, doi:10.1109/CCC.2000.856749, ISBN 0-7695-0674-7
• Halldórsson, M. M. (1993), "A still better performance guarantee for approximate graph coloring", Information Processing Letters 45: 19–23, doi:10.1016/0020-0190(93)90246-6
• Holyer, I. (1981), "The NP-completeness of edge-coloring", 10 (4): 718–720, doi:10.1137/0210055
• Crescenzi, P.; Kann, V. (December 1998), "How to find the best approximation results — a follow-up to Garey and Johnson", 29 (4): 90, doi:10.1145/306198.306210
• Jaeger, F.; Vertigan, D. L.; Welsh, D. J. A. (1990), "On the computational complexity of the Jones and Tutte polynomials", 108: 35–53, doi:10.1017/S0305004100068936
• Jensen, T. R.; Toft, B. (1995), Graph Coloring Problems, Wiley-Interscience, New York, ISBN 0-471-02865-7
• Khot, S. (2001), "Improved inapproximability results for MaxClique, chromatic number and approximate graph coloring", , pp. 600–609, doi:10.1109/SFCS.2001.959936, ISBN 0-7695-1116-3
• Kubale, M. (2004), Graph Colorings, American Mathematical Society, ISBN 0-8218-3458-4
• Kuhn, F. (2009), "Weak graph colorings: distributed algorithms and applications", , pp. 138–144, doi:10.1145/1583991.1584032, ISBN 978-1-60558-606-9
• Lawler, E.L. (1976), "A note on the complexity of the chromatic number problem", 5 (3): 66–67, doi:10.1016/0020-0190(76)90065-X
• Leith, D.J.; Clifford, P. (2006), "A Self-Managed Distributed Channel Selection Algorithm for WLAN", Proc. RAWNET 2006, Boston, MA
• Linial, N. (1992), "Locality in distributed graph algorithms", 21 (1): 193–201, doi:10.1137/0221015
• van Lint, J. H.; Wilson, R. M. (2001), A Course in Combinatorics (2nd ed.), Cambridge University Press, ISBN 0-521-80340-3.
• Marx, Dániel (2004), "Graph colouring problems and their applications in scheduling", Periodica Polytechnica, Electrical Engineering 48 (1–2), pp. 11–16, CiteSeerX:
• Mycielski, J. (1955), "Sur le coloriage des graphes", Colloq. Math. 3: 161–162 .
• Nešetřil, Jaroslav; Ossona de Mendez, Patrice (2012), "Theorem 3.13", Sparsity: Graphs, Structures, and Algorithms, Algorithms and Combinatorics 28, Heidelberg: Springer, p. 42, doi:10.1007/978-3-642-27875-4, ISBN 978-3-642-27874-7, MR 2920058 .
• Panconesi, Alessandro; Rizzi, Romeo (2001), "Some simple distributed algorithms for sparse networks", Distributed Computing (Berlin, New York: Springer-Verlag) 14 (2): 97–100, doi:10.1007/PL00008932, ISSN 0178-2770
• Sekine, K.; Imai, H.; Tani, S. (1995), "Computing the Tutte polynomial of a graph of moderate size", Proc. 6th International Symposium on Algorithms and Computation (ISAAC 1995), Lecture Notes in Computer Science 1004, Springer, pp. 224–233, doi:10.1007/BFb0015427, ISBN 3-540-60573-8
• Welsh, D. J. A.; Powell, M. B. (1967), "An upper bound for the chromatic number of a graph and its application to timetabling problems", The Computer Journal 10 (1): 85–86, doi:10.1093/comjnl/10.1.85
• West, D. B. (1996), Introduction to Graph Theory, Prentice-Hall, ISBN 0-13-227828-6
• Wilf, H. S. (1986), Algorithms and Complexity, Prentice–Hall
• Zuckerman, D. (2007), "Linear degree extractors and the inapproximability of Max Clique and Chromatic Number", 3: 103–128, doi:10.4086/toc.2007.v003a006
• Zykov, A. A. (1949), "О некоторых свойствах линейных комплексов (On some properties of linear complexes)", Math. Sbornik. (in Russian), 24(66) (2): 163–188
http://math.stackexchange.com/questions/275194/how-does-a-non-mathematician-go-about-publishing-a-proof-in-a-way-that-ensures-i/276019
# How does a non-mathematician go about publishing a proof in a way that ensures it to be up to the mathematical community's standards?
I'm a computer science student who is a maths hobbyist. I'm convinced that I've proven a major conjecture. The problem lies in that I've never published anything before and am not a mathematician by profession. Knowing full well that my proof may be fallacious, erroneous, or simply lacking mathematical formality, what advice would you give me?
-
Really, you proved? – Inquisitive Jan 10 at 15:22
I don't know if this is a good idea. Why don't you discuss your work with a mathematician whom you trust. – Amr Jan 10 at 15:27
May I know what the conjecture is ? – Amr Jan 10 at 15:27
@Amr: I don't see that it makes much difference what the conjecture is. – Carl Mummert Jan 10 at 15:57
He or she. – Did Jan 10 at 17:13
## 7 Answers
I'm convinced that I've proved a major conjecture.
You are almost certainly mistaken. I say this on purely probabilistic grounds, so don't get upset $-$ even professional mathematicians are sometimes mistaken about their own 'proofs', and amateurs almost always.
I suggest you tell us what this major conjecture is, and post a link to your proof (or just post it here, if it's short enough). This is enough to establish your priority, if you are worried about somebody stealing your proof. Then the sharks of MSE can devour it.
PS Your proof will probably be more favourably received if it is nicely formatted, using LaTeX.
-
I am not sure, but maybe arXiv works better as a source to establish the priority? – Ilya Jan 10 at 15:30
@Ilya, that would certainly be true, if arXiv was open to all. Perhaps the OP has posting rights, perhaps not. – TonyK Jan 10 at 15:32
I don't think it's necessary to tell us what the conjecture is, or post a link. I would suggest that the OP find a math professor at his or her home institution and ask them in confidence. It's always better to have someone else look at anything "major" before posting it on the internet. – Carl Mummert Jan 10 at 15:59
If it is a valid proof, though, a single mathematician might well steal it, while a public posting makes it much more difficult for someone else to falsely claim authorship. @CarlMummert – Thomas Andrews Jan 10 at 16:07
If your concern is establishing priority you should make it public as quickly as possible; if your concern is avoiding embarrassment you should have it reviewed in private. But if you trust the reviewer there shouldn't be much of an issue either way. – Charles Jan 10 at 18:32
Many graduate students and people who try to switch fields often face this problem. Although they might have a cool idea, they just don't have the knowledge about the right way to present it. Also, because they don't know the field very well, from the perspective of the field's community, their work is a weird mix of well-known results, irrelevant details and unexpected points of view. In the middle somewhere, there might be a brilliant idea. Quite often, reviewers will not have the patience to look for that cool idea. The less well you know the field, the more painful the review process will be for you, and the review process is already painful enough for most.
Even researchers who are well-established can have this problem of not knowing how to express their idea for an unfamiliar field.
I would advise doing a lot of reading in the field that you think your proof belongs to until you can speak their language reasonably well. It's not unusual for this to take months. Most graduate students have to do this. A second approach would be to collaborate with somebody who is already established in your target field. In my experience, this is a common strategy for established researchers.
Don't be surprised if you spend longer figuring out how to write up and present your idea, than it took to do the actual research. That's pretty common!
I wouldn't go public as there is a lot of potential for embarrassment there and well, reputation does matter ... for instance, in cases where you are trying to get a collaborator.
-
If the conjecture is important it has a name and keywords associated with it.
Step 1: Primary search. First you should do a literature search (using the 'name of the conjecture' and 'keywords') on your conjecture. Visit pages from reputable mathematical websites that discuss open conjectures, and see if your conjecture is still open. E.g.:
http://mathworld.wolfram.com/UnsolvedProblems.html
http://www.openproblems.net/
http://www.openquestions.com/oq-math.htm
Step 2: Fundamental search. Go to respectable databases with subject classifications:
http://www.jstor.org/action/showAdvancedSearch
http://www.ams.org/mathscinet/msc/msc2010.html
http://arxiv.org/multi?group=grp_math&%2Ffind=Search
Then use the 'name of the conjecture', the 'keywords' and an appropriate classification for searches in databases. Check in the articles you find whether the conjecture is still unresolved and what contributions have been made. See if there is a program to solve it (as was the case with the BMV conjecture, now resolved (?)). If your proof is in the direction of a program you did not know about, then your proof may be right.
Step 3: Submit. If after doing all of this you still believe that your proof is correct, write an article and look for a journal http://www.scimagojr.com/journalrank.php?area=2600&category=0&country=all&year=2011&order=sjr&min=0&min_type=cd that is compatible with the field of mathematics to which the conjecture belongs. Go to the journal's page and follow the procedures for submitting articles.
-
At step 3, I presume you mean a "journal" not a "newspaper". I know the New York Times would be very confused if they received a mathematical proof :P – drxzcl Jan 10 at 19:03
I would suggest consulting with one of your mathematics professors at your university.
-
At the very least, you would need to avoid the pitfalls found in Scott Aaronson's Ten Signs a Claimed Mathematical Breakthrough is Wrong.
I tend to agree that it's incredibly unlikely that you have in fact solved a major conjecture. At various times as an undergraduate, I was convinced of solving major and minor conjectures. It's really very easy to fool yourself.
I'd say email a math professor at your university in the relevant specialty, and ask to meet with him/her for an hour or so to go over what you've been working on. They'll probably be able to quickly spot a major flaw in your work, and if not, they'll be extremely interested in working with you. If you approach them with the right humility about the correctness of your proof, most profs would be thrilled to interact with a student who is actually interested in research. You might talk about doing an REU or something similar in the general area of the conjecture if things go well.
-
There is no explicit answer to your request, but first of all you should determine the field of mathematics to which your research belongs.
The second step is to read some texts related to your work, especially published papers. (I guess you solved the conjecture by reading related mathematical papers or books, didn't you?)
And as a final step, you may model your write-up on a simple, related proof from a paper which you understand.
-
While, as mentioned by many others, the best answer is definitely to consult with a trusted mentor, here's one thing you should have already done; you need to go over your proof with a fine tooth comb, and be extremely critical as you do so. You need to make sure that you are able to provide an air-tight proof for every single assertion on every page, and anticipate possible objections. This is tedious, difficult work, possibly harder than coming up with the idea of your proof in the first place, but also is absolutely necessary, moreso as you are claiming to prove a major conjecture.
Of course, your final published version will probably not contain this level of detail, and this process doesn't guarantee that mistakes won't get through (even for the world's best mathematicians) but you should be much more confident that your proof is not "erroneous or fallacious", as you put it in the OP, before claiming to have settled a major conjecture.
-
http://stats.stackexchange.com/questions/2715/rules-of-thumb-for-modern-statistics/2831
# Rules of thumb for “modern” statistics
I like G. van Belle's book on Statistical Rules of Thumb, and to a lesser extent Common Errors in Statistics (and How to Avoid Them) by Phillip I. Good. They address common pitfalls when interpreting results from experimental and observational studies and provide practical recommendations for statistical inference, or exploratory data analysis. But I feel that "modern" guidelines are somewhat lacking, especially with the ever growing use of computational and robust statistics in various fields, or the introduction of techniques from the machine learning community in, e.g. clinical biostatistics or genetic epidemiology.
Apart from computational tricks or common pitfalls in data visualization which could be addressed elsewhere, I would like to ask: What are the top rules of thumb you would recommend for efficient data analysis? (one rule per answer, please)
I am thinking of guidelines that you might provide to a colleague, a researcher without strong background in statistical modeling, or a student in intermediary to advanced course. This might pertain to various stages of data analysis, e.g. sampling strategies, feature selection or model building, model comparison, post-estimation, etc.
-
If someone can help me to better tag this question... – chl♦ Sep 16 '10 at 10:23
@chl I recently made [rule-of-thumb] tag, should work here. Also, if you don't find good tags, use [for-retag]. – mbq♦ Sep 16 '10 at 11:51
@mbq Thanks for the retag info. – chl♦ Sep 16 '10 at 11:52
@chi Perhaps, we should have one rule per answer? – user28 Sep 16 '10 at 12:20
@Srikant Well, if you feel it would be more readable by others, why not. Otherwise, I can manage to summarize the poll at the end of my question. In fact, I was expecting that people would like to enumerate 3 or more rules they always keep in mind, for every kind of data analysis project. – chl♦ Sep 16 '10 at 12:26
## 14 Answers
Don't forget to do some basic data checking before you start the analysis. In particular, look at a scatter plot of every variable you intend to analyse against ID number, date / time of data collection or similar. The eye can often pick up patterns that reveal problems when summary statistics don't show anything unusual. And if you're going to use a log or other transformation for analysis, also use it for the plot.
-
I learnt this one the hard way. Twice. – onestop Nov 5 '10 at 16:38
Yes! Look before you leap. Please, look at the data. – vqv Dec 19 '10 at 22:09
## There is no free lunch
Great part of statistical failures is created by clicking a big shiny button called "Calculate significance" without taking into account its burden of hidden assumptions.
## Repeat
Even if a single call to random generator is involved, one may have luck or bad luck and so get to the wrong conclusions.
-
Keep your analysis reproducible. A reviewer or your boss or someone else will eventually ask you how exactly you arrived at your result - probably six months or more after you did the analysis. You will not remember how you cleaned the data, what analysis you did, why you chose the specific model you used... And reconstructing all this is a pain.
Corollary: use R, put comments in your analysis scripts and keep them.
-
If you're going to use R, I'd recommend embedding your R code in an Sweave document that produces your report. That way the R code stays with the report. – John D. Cook Nov 2 '10 at 14:00
There can be a long list but to mention a few: (in no specific order)
1. A P-value is NOT a probability. Specifically, it is not the probability of committing a Type I error. Similarly, CIs have no probabilistic interpretation for the given data. They are applicable to repeated experiments.
2. Problems related to variance dominate bias most of the time in practice, so a biased estimate with small variance is better than an unbiased estimate with large variance (most of the time).
3. Model fitting is an iterative process. Before analyzing the data, understand the source of the data and the possible models that do or do not fit its description. Also, try to model any design issues in your model.
4. Use visualization tools: look at the data (for possible abnormalities, obvious trends, etc.) to understand it before analyzing it. Use visualization methods (if possible) to see how the model fits that data.
5. Last but not least, use statistical software for what it is made for (to make your task of computation easier); it is not a substitute for human thinking.
-
Your item 1 is incorrect: the P value is the probability of obtaining data as extreme, or more extreme, given the null hypothesis. As far as I know that means that P is a probability--conditional but a probability nonetheless. Your statement is correct in the circumstances that one is working within the Neyman-Pearson paradigm of errors, but not is one is working within the Fisherian paradigm where P values are idices of evidence against the null hypothesis. It is true that the paradigms are regularly mixed into an incoherent mish-mash, but both are 'correct' when used alone and intact. – Michael Lew Apr 10 '11 at 9:37
@Michael you are correct, but lets see: How many times is the Null correct? Or better: Can anyone prove if the null is correct? We can also have deep philosophical debates about this but that is not the point. In quality control repetitions make sense, but in science any good decision rule must condition data. – suncoolsu Apr 10 '11 at 16:51
Fisher knew this (conditioning on the observed data and the remark about quality control is based on that). He produced many counter examples based on this. Bayesian have been fighting about this, lets say, for more than half-a-century. – suncoolsu Apr 10 '11 at 16:58
@Michael Sorry if I wasn't clear enough. All I wanted to say: P-value is a probability ONLY when the null is true, but most of the times null is NOT true (as in: we never expect $\mu=0$ to be true; we assume it to be true, but our assumption is practically incorrect.) In case you are interested, I can point out some literature discussing this idea in greater detail. – suncoolsu Apr 11 '11 at 15:36
One thing I tell my students is to produce an appropriate graph for every p-value. e.g., a scatterplot if they test correlation, side-by-side boxplots if they do a one-way ANOVA, etc.
-
One rule per answer ;-)
Talk to the statistician before conducting the study. If possible, before applying for the grant. Help him/her understand the problem you are studying, get his/her input on how to analyze the data you are about to collect and think about what that means for your study design and data requirements. Perhaps the stats guy/gal suggests doing a hierarchical model to account for who diagnosed the patients - then you need to track who diagnosed whom. Sounds trivial, but it's far better to think about this before you collect data (and fail to collect something crucial) than afterwards.
On a related note: do a power analysis before starting. Nothing is as frustrating as not having budgeted for a sufficiently large sample size. In thinking about what effect size you are expecting, remember publication bias - the effect size you are going to find will probably be smaller than what you expected given the (biased) literature.
-
If you're deciding between two ways of analysing your data, try it both ways and see if it makes a difference.
This is useful in many contexts:
• To transform or not transform
• Non-parametric or parameteric test
• Spearman's or Pearson's correlation
• PCA or factor analysis
• Whether to use the arithmetic mean or a robust estimate of the mean
• Whether to include a covariate or not
• Whether to use list-wise deletion, pair-wise deletion, imputation, or some other method of missing values replacement
This shouldn't absolve one from thinking through the issue, but it at least gives a sense of the degree to which substantive findings are robust to the choice.
-
Is it a quotation? I'm just wondering how trying alternative testing procedures (not analysis strategies!) may not somewhat break control of Type I error or initial Power calculation. I know SAS systematically returns results from parametric and non-parametric tests (at least in two-sample comparison of means and ANOVA), but I always find this intriguing: Shouldn't we decide before seeing the results what test ought to be applied? – chl♦ Sep 18 '10 at 9:57
@chl good point. I agree that the above rule of thumb can be used for the wrong reasons. I.e., trying things multiple ways and only reporting the result that gives the more pleasing answer. I see the rule of thumb as useful as a data analyst training tool in order to learn the effect of analysis decisions on substantive conclusions. I've seen many students get lost with decisions particularly where there is competing advice in the literature (e.g., to transform or not to transform) that often have minimal influence on the substantive conclusions. – Jeromy Anglim Sep 19 '10 at 5:45
@chl no it's not a quotation. But I thought it was good to demarcate the rule of thumb from its rationale and caveats. I changed it to bold to make it clear. – Jeromy Anglim Sep 19 '10 at 10:11
Ok, it makes sense to me to try different transformations and look if it provides a better way to account for the studied relationships; what I don't understand is to try different analysis strategies, although it is current practice (but not reported in published articles :-), esp. when they rely on different assumptions (in EFA vs. PCA, you assume an extra error term; in non-parametric vs. parametric testing, you throw away part of the assumptions, etc.). But, I agree the demarcation between exploratory and confirmatory analysis is not so clear... – chl♦ Sep 19 '10 at 10:29
Question your data. In the modern era of cheap RAM, we often work on large amounts of data. One 'fat-finger' error or 'lost decimal place' can easily dominate an analysis. Without some basic sanity checking, (or plotting the data, as suggested by others here) one can waste a lot of time. This also suggests using some basic techniques for 'robustness' to outliers.
-
Corollary: look whether someone coded a missing value as "9999" instead of "NA". If your software uses this value at face value, it will mess up your analysis. – Stephan Kolassa Nov 29 '12 at 14:12
Use software that shows the chain of programming logic from the raw data through to the final analyses/results. Avoid software like Excel where one user can make an undetectable error in one cell, that only manual checking will pick up.
-
Always ask yourself "what do these results mean and how will they be used?"
Usually the purpose of using statistics is to assist in making decisions under uncertainty. So it is important to have at the front of your mind "What decisions will be made as a result of this analysis and how will this analysis influence these decisions?" (e.g. publish an article, recommend a new method be used, provide $X in funding to Y, get more data, report an estimated quantity as E, etc.)
If you don't feel that there is any decision to be made, then one wonders why you are doing the analysis in the first place (as it is quite expensive to do analysis). I think of statistics as a "nuisance" in that it is a means to an end, rather than an end itself. In my view we only quantify uncertainty so that we can use this to make decisions which account for this uncertainty in a precise way.
I think this is one reason why keeping things simple is a good policy in general, because it is usually much easier to relate a simple solution to the real world (and hence to the environment in which the decision is being made) than the complex solution. It is also usually easier to understand the limitations of the simple answer. You then move to the more complex solutions when you understand the limitations of the simple solution, and how the complex one addresses them.
-
For data organization/management, ensure that when you generate new variables in the dataset (for example, calculating body mass index from height and weight), the original variables are never deleted. A non-destructive approach is best from a reproducibility perspective. You never know when you might mis-enter a command and subsequently need to redo your variable generation. Without the original variables, you will lose a lot of time!
-
For histograms, a good rule of thumb for number of bins in a histogram:
square root of the number of data points
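For instance, a minimal sketch in Python/numpy (the helper name is just for illustration):

```python
import numpy as np

def sqrt_rule_hist(data):
    """Histogram with round(sqrt(n)) bins, per the square-root rule of thumb."""
    bins = max(1, int(round(np.sqrt(len(data)))))
    return np.histogram(data, bins=bins)

counts, edges = sqrt_rule_hist(np.random.randn(100))   # 10 bins for n = 100
```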
-
Think hard about the underlying data generating process. If the model you want to use doesn't reflect the DGP, you need to find a new model.
-
In a forecasting problem (i.e. when you need to forecast $Y_{t+h}$ given $(Y_t,X_t)$, $t>T$, with the use of a learning set $(Y_1,X_1),\dots, (Y_T,X_T)$), the rules of thumb (to be applied before any complex modelling) are
1. Climatology ($Y_{t+h}$ forecast by the mean observed value over the learning set, possibly by removing obvious periodic patterns)
2. Persistence ($Y_{t+h}$ forecast by the last observed value $Y_t$).
What I often do now as a last simple benchmark/rule of thumb is using randomForest($Y_{t+h}$~$Y_t+X_t$, data=learningSet) in the R software. It gives you (with 2 lines of code in R) a first idea of what can be achieved without any modelling.
-
http://physics.stackexchange.com/questions/tagged/topological-field-theory+gauge-theory
# Tagged Questions
1 answer, 74 views
### Do instantons support quantum bound states?
When one quantizes a scalar in the 1+1 dimensions in the kink background of a double well potential, one finds a spectrum that includes: (1) a zero mode corresponding to the classical particle ...
2 answers, 348 views
### Gauge invariance and diffeomorphism invariance in Chern-Simons theory
I have studied Chern-Simons (CS) theory somewhat and I am puzzled by the question of how diff. and gauge invariance in CS theory are related, e.g. in $SU(2)$ CS theory. In particular, I would like to ...
2 answers, 196 views
### Chern-Simons degrees of freedom
I'm currently reading the paper http://arxiv.org/abs/hep-th/9405171 by Banados. I am just getting acquainted with the details of Chern-Simons theory, and I'm hoping that someone can explain/elaborate ...
1 answer, 177 views
### Using the covariant derivative to find force between 't Hooft-Polyakov magnetic monopoles
I am reading this research paper authored by NS Manton on the Force between 't Hooft-Polyakov monopoles. I have a doubt in equation 3.6 and 3.7. We assume the gauge field for a slowly accelerating ...
0 answers, 46 views
### Limit of the scalar field, and potential for a soliton ( finite energy, non dissipative) solution
I want to prove that the the scalar field of the yang-mills lagrangian tends to some constant value which is a function of theta at infinity and that this value is a zero of the potential, when we ...
1 answer, 169 views
### What is the winding number of a magnetic monopole, and why is it conserved
I had asked a similar question about a calculation involving the winding number here. But i haven't got a satisfactory response. So, I am rephrasing this question in a slightly different manner. What ...
1 answer, 128 views
### Normalization of the Chern-Simons level in $SO(N)$ gauge theory
In a 3d SU(N) gauge theory with action $\frac{k}{4\pi} \int \mathrm{Tr} (A \wedge dA + \frac{2}{3} A \wedge A \wedge A)$, where the generators are normalized to \$\mathrm{Tr}(T^a ...
1 answer, 141 views
### Chern-Simons theory
In Witten's paper on QFT and the Jones polynomial, he quantizes the Chern-Simons Lagrangian on $\Sigma\times \mathbb{R}^1$ for two case: (1) $\Sigma$ has no marked points (i.e., no Wilson loops) and ...
2 answers, 142 views
### Topological twists of SUSY gauge theory
Consider $N=4$ super-symmetric gauge theory in 4 dimensions with gauge group $G$. As is explained in the beginning of the paper of Kapustin and Witten on geometric Langlands, this theory has 3 ...
1 answer, 131 views
### Models of higher Chern-Simons type
It has long been clear that (the action functional of) Chern-Simons theory has various higher analogs and variations of interest. This includes of course traditional higher dimensional Chern-Simons ...
http://en.wikipedia.org/wiki/Sethi-Ullman_algorithm
# Sethi–Ullman algorithm
In computer science, the Sethi–Ullman algorithm is an algorithm named after Ravi Sethi and Jeffrey D. Ullman, its inventors, for translating abstract syntax trees into machine code that uses as few instructions as possible.
## Overview
When generating code for arithmetic expressions, the compiler has to decide which is the best way to translate the expression in terms of the number of instructions used as well as the number of registers needed to evaluate a certain subtree. Especially in the case that free registers are scarce, the order of evaluation can be important to the length of the generated code, because different orderings may lead to larger or smaller numbers of intermediate values being spilled to memory and then restored. The Sethi–Ullman algorithm (also known as Sethi–Ullman numbering) produces code which needs the fewest instructions possible as well as the fewest storage references (under the assumption that at most commutativity and associativity apply to the operators used, but distributive laws, i.e. $a * b + a * c = a * (b + c)$, do not hold). Note that the algorithm succeeds as well if neither commutativity nor associativity hold for the expressions used, and therefore arithmetic transformations cannot be applied.
## Simple Sethi–Ullman algorithm
The simple Sethi–Ullman algorithm works as follows (for a load-store architecture):
1. Traverse the abstract syntax tree in pre- or postorder
1. For every non-constant leaf node, assign a 1 (i.e. 1 register is needed to hold the variable/field/etc.). For every constant leaf node (RHS of an operation – literals, values), assign a 0.
2. For every non-leaf node n, assign the number of registers needed to evaluate the respective subtrees of n. If the number of registers needed in the left subtree (l) are not equal to the number of registers needed in the right subtree (r), the number of registers needed for the current node n is max(l, r). If l == r, then the number of registers needed for the current node is l + 1.
2. Code emission
1. If the number of registers needed to compute the left subtree of node n is bigger than the number of registers for the right subtree, then the left subtree is evaluated first (since it may be possible that the one additional register needed by the right subtree to save its result would make the left subtree spill). If the right subtree needs more registers than the left subtree, the right subtree is evaluated first accordingly. If both subtrees need equally many registers, then the order of evaluation is irrelevant; a sketch of the labeling pass appears below.
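A minimal sketch of the labeling pass in Python (our own illustration, following the convention from the example below that a right-hand leaf operand can be addressed directly and therefore needs 0 registers):

```python
class Node:
    """Binary expression-tree node; leaves have left = right = None."""
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right
        self.need = None  # Sethi–Ullman register count, filled in by label()

def label(node, is_left=True):
    if node.left is None and node.right is None:
        node.need = 1 if is_left else 0   # right leaves are addressed directly
    else:
        l = label(node.left, True)
        r = label(node.right, False)
        node.need = max(l, r) if l != r else l + 1
    return node.need

# (b + c) needs max(1, 0) = 1 register; ((b + c) + (f * g)) needs 1 + 1 = 2
expr = Node('+', Node('+', Node('b'), Node('c')),
                 Node('*', Node('f'), Node('g')))
assert label(expr) == 2
```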
### Example
For an arithmetic expression $a = (b + c + f * g) * (d + 3)$, the abstract syntax tree looks like this:
```
          =
         / \
        a   *
           / \
          /   \
         +     +
        / \   / \
       /   \ d   3
      +     *
     / \   / \
    b   c f   g
```
To continue with the algorithm, we need only to examine the arithmetic expression $(b + c + f * g) * (d + 3)$, i.e. we only have to look at the right subtree of the assignment '=':
```
            *
           / \
          /   \
         +     +
        / \   / \
       /   \ d   3
      +     *
     / \   / \
    b   c f   g
```
Now we start traversing the tree (in preorder for now), assigning the number of registers needed to evaluate each subtree (note that the last summand in the expression $(b + c + f * g) * (d + 3)$ is a constant):
```
            *[2]
           /    \
          /      \
        +[2]     +[1]
        /  \     /  \
       /    \  d[1] 3[0]
     +[1]  *[1]
     /  \   /  \
  b[1] c[0] f[1] g[0]
```
From this tree it can be seen that we need 2 registers to compute the left subtree of the '*', but only 1 register to compute the right subtree. Nodes 'c' and 'g' do not need registers for the following reason: if T is a tree leaf, then the number of registers needed to evaluate T is either 1 or 0, depending on whether T is a left or a right subtree (since an operation such as add R1, A can handle the right component A directly without storing it in a register). Therefore we shall start to emit code for the left subtree first, because we might run into the situation that we only have 2 registers left to compute the whole expression. If we computed the right subtree first (which needs only 1 register), we would then need a register to hold its result while computing the left subtree (which would still need 2 registers), therefore needing 3 registers concurrently. Computing the left subtree first needs 2 registers, but the result can be stored in 1, and since the right subtree needs only 1 register to compute, the evaluation of the expression can be done with only 2 registers.
## Advanced Sethi–Ullman algorithm
In an advanced version of the Sethi–Ullman algorithm, the arithmetic expressions are first transformed, exploiting the algebraic properties of the operators used.
## See also
• Strahler number, the minimum number of registers needed to evaluate an expression without any external storage
## References
• Sethi, Ravi; Ullman, Jeffrey D. (1970), "The Generation of Optimal Code for Arithmetic Expressions", 17 (4): 715–728, doi:10.1145/321607.321620 .
http://mathhelpforum.com/geometry/200292-tangent-point-curve.html
# Thread:
1. ## Tangent from a point to a curve?
I have an important exam on Monday and I stumbled upon this problem.
Problem: Calculate the angle (using the tan(x) function) between the tangent lines drawn from the point (-4,1) to the graph y^2 = 2*x.
I know that the derivative of a function gives the slope of its tangent line; however, I cannot seem to plug the given point into the equation since it's outside the domain. Is this the correct derivative:
Since the given equation does not represent a one-to-one function, divide the graph into two separate graphs (y = sqrt(2*x) and y = -sqrt(2*x)); then dy/dx = 1/sqrt(2*x) and dy/dx = -1/sqrt(2*x). I know I did something wrong but I'm not sure what.
I hope somebody can help me. Thanks in advance
2. ## Re: Tangent from a point to a curve?
"dy=1/sqrt(2*x)" is wrong. I think you mean dx/dy=1/sqrt(2*x)
You have to find the points where the tangents touch the function yourself. Name these points, find equations which relate the slope of the tangent (at this point) to the condition that the tangent has to go through your point.
It should look similar to this:
3. ## Re: Tangent from a point to a curve?
$y^2 = 2x$. Differentiating both sides with respect to x, $2y \frac{dy}{dx} = 2 \Rightarrow \frac{dy}{dx} = \frac{1}{y}$.
This means that for a given $(x,y)$ on the function, the slope of the tangent line at that point is $\frac{1}{y}$. Since the tangent line also goes through (-4,1) we can write out an equation for the slope:
$\frac{1}{y} = \frac{y-1}{x-(-4)} = \frac{y-1}{x+4} \Rightarrow x+4 = y(y-1)$
We know that $x = \frac{y^2}{2}$, so
$\frac{y^2}{2} + 4 = y(y-1)$. Solve the quadratic.
4. ## Re: Tangent from a point to a curve?
First take the derivative of your function; it will be y' = 1/sqrt(2*x).
Now write the tangent through two points: T1(-4,1) and T2(x2,y2), the point where the tangent line touches our curve:
y2 = sqrt(2*x2), and the tangent line is y - 1 = (sqrt(2*x2) - 1)/(x2 + 4) * (x + 4).
Now, we know that (sqrt(2*x2) - 1)/(x2 + 4) has to equal 1/sqrt(2*x2), and from that we can find x2 = 8,
y2 = sqrt(2*x2) = 4.
Now you have two points to define your tangent.
Find the other tangent the same way, and then the angle between them.
Hope this is correct.
5. ## Re: Tangent from a point to a curve?
Your suggestions were correct. After 10+ minutes I finally got to the correct answer. It took me that long because I was going the long way. While on that topic, I know two possible solutions to calculate the required tan(x).
The first one is to use tan(theta) = slope of the corresponding line and then use the identity tan(x+y) = (tan(x) + tan(y))/(1 - tan(x)*tan(y)).
The other one is to find the coordinates of the points that form a right triangle (the points being (-4,1); the point (2,-2) where the curve and one tangent line meet; and (32/7, 22/7), where the perpendicular at that point meets the other tangent line).
My question is - is there a shorter way?
After all the calculations, I got the correct answer which is arctan(6/7) if anyone is wondering.
Thanks Imo and richard1234
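For anyone who wants to sanity-check this, a small sympy sketch following richard1234's equations above:

```python
import sympy as sp

y = sp.symbols('y', real=True)
# Tangency condition derived above: y**2/2 + 4 = y*(y - 1)  =>  y = -2 or y = 4
ys = sp.solve(sp.Eq(y**2 / 2 + 4, y * (y - 1)), y)
m1, m2 = [sp.Integer(1) / s for s in ys]               # tangent slopes are 1/y
print(sp.simplify(sp.Abs((m1 - m2) / (1 + m1 * m2))))  # prints 6/7
```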
http://physics.stackexchange.com/questions/19948/expressing-a-particles-matter-wave-in-terms-of-its-momentum
# Expressing a particle's matter wave in terms of its momentum
I'm following Zettili's QM book and on p. 39 the following manipulation is done,
Given a localized wave function (called a wave packet), it can be expressed as $$\psi(x,t) = \frac{1}{\sqrt{ 2 \pi}} \int_{-\infty}^{\infty} \phi(k) e^{i(kx-\omega t)} dk$$ Now use the de broglie relations: $p = \hbar k$ and $E = \hbar \omega$ and define $\tilde{\phi}(p) = \phi(\frac{k}{\hbar})$.
This should yield $$\psi(x,t) = \frac{1}{\sqrt{ 2 \pi \hbar}}\int_{-\infty}^{\infty} \tilde{\phi}(p) e^{i(px-E t)/ \hbar} dp$$ but I get $$\psi(x,t) = \frac{1}{\sqrt{ 2 \pi} \hbar}\int_{-\infty}^{\infty} \phi\biggl(\frac{p}{h}\biggr) e^{i(px-E t)/ \hbar} dp$$ when I make the change-of-variable $k=\frac{p}{\hbar}$. What am I missing?
-
Are you just talking about the factor of $\sqrt{\hbar}$? – Colin K Jan 24 '12 at 22:01
It's correct as written. So no, $\tilde{\phi}$ does not align either. – user19192 Jan 24 '12 at 23:54
Sorry, this is a really weak question. It's a straightforward and trivial algebra to get the right power of $\hbar$. Moreover, mature physicists - probably including the author of the textbook you mention - use units with $\hbar=1$, so missing powers of $\hbar$ are "not really mistakes". On the other hand, you have real mistakes in your text, e.g. the claim $\tilde \phi(p)=\phi(k/\hbar)$ at the top. You meant $p/\hbar$ on the right hand side, right? When you make this fix, you do see that the "obtained" and "desired" expressions only differ by $\sqrt{\hbar}$, don't you? – Luboš Motl Jan 25 '12 at 6:30
I'm sorry the author of an introductory textbook doesn't use your convention for mature physicists. And no, the text states $k/\hbar$, not $p/\hbar$. This is not in the errata either. – user19192 Jan 25 '12 at 15:58
– Maksim Zholudev Jan 25 '12 at 16:56
## 2 Answers
In bra-ket notation your formulas should look as follows $$\left<x\right.\left|\psi\right> = \int_{-\infty}^\infty \left<x\right.\left|k\right> \left<k\right.\left|\psi\right> dk = \int_{-\infty}^\infty \left<x\right.\left|p\right> \left<p\right.\left|\psi\right> dp$$ where
$\left<k\right.\left|\psi\right> = \phi(k)\exp(-i\omega t)$ and $\left<p\right.\left|\psi\right> = \tilde\phi(p)\exp(-i\omega t)$
are the wavefunctions in terms of $k$ and $p$;
$\left<x\right.\left|k\right>$ and $\left<x\right.\left|p\right>$ are the eigenfunctions of the operators $\hat{k}$ and $\hat{p}$ respectively.
These eigenfunctions should be normalized and the usual normalization for continuous quantum numbers ($k$ and $p$) is the one with delta function: $$\left<k'\right.\left|k\right> = \int_{-\infty}^\infty \left<k'\right.\left|x\right> \left<x\right.\left|k\right> dx = \delta(k'-k)$$ The eigenfunctions normalized like this are $$\left<x\right.\left|k\right> = \frac{1}{\sqrt{2\pi}} e^{ikx}$$ and $$\left<x\right.\left|p\right> = \frac{1}{\sqrt{2\pi\hbar}} e^{ipx/\hbar}$$ So here is the lost $\sqrt{\hbar}$, in the normalization coefficient.
## Edit: version without bra-kets
$$\psi(x, t) = \int_{-\infty}^\infty \xi_k(x)\varphi(k,t) dk = \int_{-\infty}^\infty \tilde\xi_p(x)\tilde\varphi(p,t) dp$$ where
$$\varphi(k,t) = \phi(k)\exp(-i\omega t) = \int_{-\infty}^\infty \xi_k^*(x)\psi(x, t) dx$$ and $$\tilde\varphi(p,t) = \tilde\phi(p)\exp(-i\omega t) = \int_{-\infty}^\infty \tilde\xi_p^*(x)\psi(x, t) dx$$
are the wavefunctions in terms of $k$ and $p$;
$\xi_k(x)$ and $\tilde\xi_p(x)$ are the eigenfunctions of the operators $\hat{k}$ and $\hat{p}$ respectively.
These eigenfunctions should be normalized and the usual normalization for continuous quantum numbers ($k$ and $p$) is the one with delta function: $$\int_{-\infty}^\infty \xi_{k'}^*(x)\xi_k(x) dx = \delta(k'-k)$$ The eigenfunctions normalized like this are $$\xi_k(x) = \frac{1}{\sqrt{2\pi}} e^{ikx}$$ and $$\tilde\xi_p(x) = \frac{1}{\sqrt{2\pi\hbar}} e^{ipx/\hbar}$$ So here is the lost $\sqrt{\hbar}$, in the normalization coefficient.
-
I haven't studied bra-kets yet so I will have to save this to read later. But you are basically saying to just re-normalize the wave function? That makes notation tricky because then the $\psi(x,t)$ are no longer equal to themselves. – user19192 Jan 25 '12 at 16:06
@user19192, nothing happens to $\psi(x,t)$ here. You have wrong assumption that functions $\tilde\phi(p)$ and $\phi(p/\hbar)$ are equal. They differ by $\sqrt{\hbar}$ because the basis functions are also different. See the 2nd and 3rd formulas in the updated part of my answer. – Maksim Zholudev Jan 25 '12 at 16:45
The culprit seems to be a typo
$$\widetilde{\phi}(p) ~=~ \phi(\frac{k}{\hbar}) \qquad\qquad ({\rm Wrong!})$$
right above eq. (1.98) in the 1st edition of N. Zettili, Quantum Mechanics: Concepts and Applications, which is corrected to
$$\widetilde{\phi}(p) ~=~ \frac{\phi(k)}{\sqrt{\hbar}}$$
in the 2nd edition, see also a comment from Maksim Zholudev.
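Explicitly, substituting $k=p/\hbar$, $dk=dp/\hbar$ into the first integral gives
$$\psi(x,t) ~=~ \frac{1}{\sqrt{2\pi}}\,\frac{1}{\hbar}\int_{-\infty}^{\infty} \phi\Bigl(\frac{p}{\hbar}\Bigr)\, e^{i(px-Et)/\hbar}\, dp ~=~ \frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{\infty} \frac{\phi(p/\hbar)}{\sqrt{\hbar}}\, e^{i(px-Et)/\hbar}\, dp,$$
since $\frac{1}{\sqrt{2\pi}\,\hbar} = \frac{1}{\sqrt{2\pi\hbar}}\cdot\frac{1}{\sqrt{\hbar}}$; this is exactly the corrected relation.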
-
http://www.physicsforums.com/showthread.php?t=158088
## Proof by induction of a problem
I'm working out a problem that requires me to prove a result by induction. I have worked out what I think is a correct proof, but I would like for somebody to look over it and give me feedback.
Part 1: Give a reasonable definition of the symbol $$\sum_{k = m}^{m + n}{a}_{k}$$
I first define:
$$\sum_{k=m}^{m} a_k = a_m$$
Then assuming I have defined $$\sum_{k = m}^{n}{a}_{k}$$ for a fixed n >= m, I further defined:
$$\sum_{k=m}^{n+1} a_k = \left(\sum_{k=m}^{n} a_k\right) + a_{n+1}$$
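In programming terms, this recursive definition can be rendered directly; a hypothetical Python sketch:

```python
def partial_sum(a, m, n):
    """Compute sum_{k=m}^{n} a[k] via the recursive definition above."""
    if n == m:
        return a[m]                         # base case: a single term a_m
    return partial_sum(a, m, n - 1) + a[n]  # sum_{m}^{n} = sum_{m}^{n-1} + a_n
```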
Part 2 requires me to prove by induction that for all n >= 1, we have the assertion (call it A(n)):
$$\sum_{k=n+1 }^{ 2n} \frac{1 }{k } = \sum_{ m=1}^{ 2n} \frac{ {(-1) }^{m+1 } }{m }$$
I approach this problem as I would any proof by induction. The base case A(1) is true so I won't write it out here. Now, assuming the assertion is true for some k:
$$\frac{ 1}{ k+1} + \frac{1 }{k+2 } +...+ \frac{ 1}{ 2k} = 1 - \frac{ 1}{2 } + \frac{1 }{3 } - \frac{ 1}{ 4} +...+ \frac{ {(-1) }^{2k+1 } }{ 2k}$$
where the last term on the RHS simplifies to $$- \frac{ 1}{2k }$$.
I have to show that A(k+1) is true:
(*)$$\frac{ 1}{ k+2} + \frac{1 }{k+3 } +...+ \frac{ 1}{ 2(k+1)} = 1 - \frac{ 1}{2 } + \frac{1 }{3 } - \frac{ 1}{ 4} +...+ \frac{ {(-1) }^{2(k+1)+1 } }{ 2(k+1)}$$
where the last term on the RHS simplifies to $$- \frac{1 }{ 2(k+1)}$$
Starting with A(k), I add $$\frac{1 }{ 2k+1} + \frac{ 1}{ 2k+2} - \frac{ 1}{ k+1}$$ to each side and obtain:
$$\frac{1}{k+1} + \frac{1}{k+2} + \cdots + \frac{1}{2k} - \frac{1}{k+1} + \frac{1}{2k+1} + \frac{1}{2k+2} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots - \frac{1}{2k} - \frac{1}{k+1} + \frac{1}{2k+1} + \frac{1}{2k+2}$$
After subtracting out the two like terms on each side, the left-hand side becomes the sum
$$\sum_{j=(k+1)+1}^{2(k+1)} \frac{1}{j}$$
and the terms $\frac{1}{2k+2} - \frac{1}{k+1}$ on the right simplify to $\frac{-1}{2(k+1)}$, so the right side matches the right side in (*). Therefore, the RHS becomes the sum $$\sum_{m=1}^{2(k+1)} \frac{(-1)^{m+1}}{m}$$
I wasn't sure if I was supposed to use the definition from Part 1 in Part 2, and I didn't, but I'm wondering if that would have made things easier. Thanks.
Quote by marcin w: … The base case A(1) is true so I won't write it out here.
Your teacher might not appreciate that! Saying something is true is not proving that it is true.
In going from n to n+1, two things happen. First, since the sum starts at n+1 in the first case and (n+1)+1 = n+2 in the second, you lose the first term. Second, since the sum ends at 2n in the first case and 2(n+1) = 2n+2 in the second, you gain two new terms, 1/(2n+1) and 1/(2n+2). That is,
$$\sum_{k= (n+1)+1}^{2(n+1)} \frac{1}{k}= \sum_{k= n+1}^{2n} \frac{1}{k}+ \frac{1}{2n+1}+ \frac{1}{2n+2}- \frac{1}{n+1}$$
Now, replace that sum by
$$\sum_{m=1}^{2n} \frac{(-1)^{m+1}}{m}$$
and do the algebra.
You might want to use the fact that
$$\frac{1}{2n+2}- \frac{1}{n+1}= \frac{1- 2}{2n+2}= -\frac{1}{2n+2}$$
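A quick numerical check of the assertion A(n) for small n (my own sketch, not part of the thread), using exact rational arithmetic:

```python
from fractions import Fraction

for n in range(1, 8):
    lhs = sum(Fraction(1, k) for k in range(n + 1, 2*n + 1))
    rhs = sum(Fraction((-1)**(m + 1), m) for m in range(1, 2*n + 1))
    assert lhs == rhs, (n, lhs, rhs)
print("A(n) holds for n = 1..7")
```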
http://mathoverflow.net/questions/78247/consequences-of-the-langlands-program
## Consequences of the Langlands program
In the one-dimensional case the Langlands program is equivalent to class field theory, and the two-dimensional case implies the Taniyama-Shimura conjecture.
I would like to know: are there any other important consequences of the Langlands program?
-
The Artin conjecture on L-functions. See the Wikipedia page on Artin L-functions. – KConrad Oct 16 2011 at 5:14
Benedict Gross has been giving a lecture series on more or less this topic. Videos of the lectures are available online at math.columbia.edu/~staff/EilenbergVideos/… – Alison Miller Oct 16 2011 at 5:36
## 2 Answers
There are many, many consequences of the general Langlands program (which I'll interpret to mean both functoriality for automorphic forms and reciprocity between Galois representations and automorphic forms). Some of these are:
• The Selberg $1/4$ conjecture.
• The Ramanujan conjecture for cuspforms on $GL_n$ over arbitrary number fields.
• Modularity of elliptic curves over arbitrary number fields. (Indeed, Langlands reciprocity is essentially the statement that all Galois representations coming from geometry are attached to automorphic forms.)
• Analogues of Sato--Tate for Frobenius eigenvalues on the $\ell$-adic cohomology of arbitrary varieties over number fields.
-
Langlands functoriality (base change for $GL(2)$) implies the virtual Haken conjecture for closed arithmetic hyperbolic 3-manifolds.
-
http://mathhelpforum.com/advanced-statistics/162401-normal-distribution-interpretation-algorithm.html
# Thread:
1. ## Normal distribution Interpretation Algorithm
Hi all, in a program I have to generate a random number using the normal distribution. I found the algorithm online, but I'd like help to completely understand the theoretical foundation behind it. Let's see:
The random number is generated from a normal distribution, which receives the parameters $\mu$ and $\sigma$; in my case $\mu = 80$ and $\sigma = 15$. The code is as follows:
Function xNORMAL(mu, sigma)
Dim NORMAL01
Const Pi As Double = 3.14159265358979
Randomize
NORMAL01 = Sqr((-2 * LN(Rnd))) * Sin(2 * Pi * Rnd)
xNORMAL = mu + sigma * NORMAL01
End Function
Function LN(x)
LN = Log(x) / Log(Exp(1))
End Function
In this part, `xNORMAL = mu + sigma * NORMAL01`, I understand that what they do is solve for $X$ in the standardization
$Z = \displaystyle\frac{X-\mu}{\sigma}$
But I have no clear idea of the rationale behind the rest of the calculation.
I guess it comes from integrating the density function that appears in this link:
Normal distribution - Wikipedia, the free encyclopedia
Would the limits of integration in this case be given by LN(Rnd)? Is that the right way to think about it?
And why is the function expressed in terms of Sin(x)? Perhaps using a Fourier transform?
Thank you if you help me solve these questions.
Regards,
Dogod.
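For what it's worth, the expression `Sqr(-2 * LN(Rnd)) * Sin(2 * Pi * Rnd)` is one of the two outputs of the standard Box–Muller transform: the two uniform draws play the role of polar coordinates of a two-dimensional standard normal, which is where the sine comes from (no Fourier transform is involved). Here is a minimal Python sketch of the same generator, with a guard against `log(0)` that the VBA version lacks:

```python
import math
import random

def xnormal(mu, sigma):
    # Box-Muller: if u1, u2 ~ Uniform(0,1) are independent, then
    # sqrt(-2 ln u1) * sin(2 pi u2) is a standard normal deviate.
    u1 = 1.0 - random.random()   # avoid log(0); random() can return 0.0
    u2 = random.random()
    normal01 = math.sqrt(-2.0 * math.log(u1)) * math.sin(2.0 * math.pi * u2)
    return mu + sigma * normal01

samples = [xnormal(80, 15) for _ in range(100000)]
print(sum(samples) / len(samples))   # should be close to mu = 80
```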
http://sbseminar.wordpress.com/2007/09/17/that-trick-where-you-embed-the-free-group-into-a-lie-group/
## That trick where you embed the free group into a Lie group September 17, 2007
Posted by David Speyer in Number theory, representation theory.
The Banach-Tarski theorem states that a three dimensional ball can be chopped into finitely many pieces, which can then be rotated and translated to form two balls of the same volume as the first one. In the course of proving this theorem, one needs the following lemma:
The free group on two generators has an embedding into $SO_3(\mathbb{R})$.
In other words, there are two rotation matrices, $A$ and $B$, so that the only way for a product of $A$'s, $B$'s, $A^{-1}$'s and $B^{-1}$'s to be the identity is if that product is formally forced to be the identity by cancelling elements with their inverses. To see how you prove the Banach-Tarski theorem from here, read any number of excellent expositions, such as this one (PDF). In this post, I'm going to explain how you might find such an $A$ and a $B$. The proof I give isn't the shortest, but I think it is the best motivated and it will lead into a follow up post where I talk about the Grand Picard Theorem, groups acting on trees, Schottky groups and Mumford curves.
The first thing one might think of when trying to prove this theorem is to remember places we’ve seen the free group on two generators before. We’ll write $G$ for the free group on two generators. The canny reader might recall that $G$ is isomorphic to $\Gamma(2)$. Here $\Gamma(2)$ is the subgroup of $PSL_2(\mathbb{Z})$ consisting of matrices $\left( \begin{smallmatrix} a & b \\ c & d \end{smallmatrix} \right)$ where $a$ and $d$ are odd while $b$ and $c$ are even. So $G$ is a subgroup of $PSL_2(\mathbb{R})$. The group $\Gamma(2)$ even acts on the sphere by Mobius transformations: $\left( \begin{smallmatrix} a & b \\ c & d \end{smallmatrix} \right)$ takes $z$ to $(az+b)/(cz+d)$. Using this, we can prove a Banach-Tarski theorem for the sphere — if we allow ourselves to change pieces by a Mobius transformation. Unfortunately, Mobius trasformations are not area preserving, so no one will be impressed. Hmmm. Back to the drawing board.
How different is $PSL_2(\mathbb{R})$ from $SO_3(\mathbb{R})$ anyway? Not very! The Lie group $PSL_2(\mathbb{R})$ is isomorphic to $SO(2,1)(\mathbb{R})$; the group of determinant one transformations of three-space preserving the quadratic form $x_1^2+x_2^2-x_3^2$. We are off from $SO_3(\mathbb{R})$ by just a sign. If only $i$ were a real number, we’d be done.
Well, $i$ isn’t in $\mathbb{R}$. But there are plenty of fields which do contain $i$. For example, the field $\mathbb{Q}_5$ of 5-adic numbers which Scott told us about. (Why does $\mathbb{Q}_5$ contain a square root of -1? Well, in the 5-adic absolute value, $| 2^2- (-1)|=1/5$. Now, start with $x_1=2$ and repeatedly apply the recurrence $x_{n+1} \mapsto (x_n-1/x_n)/2$. Basic estimates show that $(x_n)$ is a Cauchy sequence, and its limit must be a square root of -1. This is a simple case of Hensel’s lemma.) So, here is the idea. We’ll try to embed $G$ into $PSL_2(\mathbb{Q}_5)$, which is isomorphic to the subgroup of $SL_3(\mathbb{Q}_5)$ preserving $x_1^2+x_2^2-x_3^2$, which in turn is isomorphic to $SO_3(\mathbb{Q}_5)$. If we get lucky, our matrices $A$ and $B$ in $SO_3(\mathbb{Q}_5)$ will have rational entries, so we can also consider them as elements of $SO_3(\mathbb{R})$.
At this point, it will help a great deal to remember how we prove that $G$ embeds into $PSL_2(\mathbb{R})$. I started out by referring to the fact that $G \cong \Gamma(2)$, but it will actually be easier to show that $G$ is isomorphic to the subgroup $H$ of $PGL_2(\mathbb{R})$ generated by
$A=\begin{pmatrix} \sqrt{3} & 0 \\ 0 & 1/\sqrt{3} \end{pmatrix}$ and $B=\begin{pmatrix} 2 & -3 \\ -1 & 2 \end{pmatrix}$.
We’ll denote the upper half plane by $\Delta$. A fundamental domain for the action of $H$ on $\Delta$ is the region $U$ bounded by the four semicircles with diameters (-3,3), (-3,-1), (-1,1) and (1,3). Let $V_1$, $V_2$, $V_3$ and $V_4$ be the regions of $\Delta$ which are, respectively, outside the semicircle with diameter (-3,3), inside the semicircle with diameter (-3,-1), inside the semicircle with diameter (-1,1) and inside the semicircle with diameter (1,3). So (up to issues on the boundary, which won’t matter), $\Delta$ is the disjoint union of $U$, $V_1$, $V_2$, $V_3$ and $V_4$.
The key computation is to check the following equations:
$\begin{matrix} A(\Delta \setminus V_3)=V_1& A^{-1}(\Delta \setminus V_1)=V_3 \\ B(\Delta \setminus V_4)=V_2 & B^{-1}(\Delta \setminus V_2)=V_4 \end{matrix} \quad (*)$ .
Now, we show that $H$ is the free group. Here is the point. Suppose that $h_1 h_2 \ldots h_r$ is a reduced word in $H$. This means that each $h_i$ is one of $A$, $B$, $A^{-1}$, $B^{-1}$ and we do not have a generator and its inverse next to each other. Pick a point $u$ in the interior of $U$. We will show that $h_1 h_2 \ldots h_r(u) \neq u$. More specifically, we will show that, if $h_1$ is (respectively) $A$, $B$, $A^{-1}$, $B^{-1}$ then $h_1 h_2 \ldots h_r(u)$ is in $V_1$, $V_2$, $V_3$ or $V_4$ respectively. The proof is just induction on $r$, using the equations above. Since $u$ isn’t in any of $V_1$, $V_2$, $V_3$ or $V_4$, this shows that $h_1 h_2 \ldots h_r(u) \neq u$. So $h_1 h_2 \ldots h_r$ is not the identity and we are done.
Now, let’s try to fit the free group on two generators into $SO_3(\mathbb{Q}_5)$ and copy the above proof. Before, the action of $PSL_2(\mathbb{R})$ on the Riemman sphere $\mathbb{C} \mathbb{P}^1$ was extremely important. How do we see $\mathbb{P}^1$ in terms of the group $SO(2,1)$? Take the hypersurface $x_1^2+x_2^2-x_3^2$ in $\mathbb{C}^3 \setminus (0,0,0)$ and take the quotient under scaling by $\mathbb{C}^*$. The resulting surface $Q$ is isomorphic to $\mathbb{C} \mathbb{P}^1$, the action of $SO(2,1)(\mathbb{C})$ on $Q$ corresponds to the action of $PSL_2(\mathbb{C})$ on $\mathbb{C} \mathbb{P}^1$ and the action of $SO(2,1)(\mathbb{R})$ corresponds to the action of $PSL_2(R)$. (There is also a copy of $SO(3)(\mathbb{R})$ contained in $SO(2,1)(\mathbb{C})$, which gives us the ordinary rotational symmetry group of the sphere, but this is transverse to $PSL_2(\mathbb{R})$ and hence not helpful.)
Let’s try to mimic 5-adically the steps that worked in the complex world. Consider the hypersurface $x_1^2+x_2^2+x_3^2$ in $\mathbb{Q}_5^3 \setminus (0,0,0)$, and mod out by the action of $\mathbb{Q}_5^*$; once again, I’ll call the result $Q$. While it is not strictly necessary for our argument, I find it extremely helpful to visualize $Q$ as a toplogical space with the quotient topology. In this case, $Q$ is isomorphic to $\mathbb{Q}_5 \mathbb{P}^1$, the space we get by attaching two copies of $\mathbb{Q}_5$ to each other, gluing $u$ to $u^{-1}$. I fix the convention that $i$ denotes the square root of -1 in $\mathbb{Q}_5$ such that $|i-2|<1$. We’ll take
$A=\begin{pmatrix} 3/5 & 4/5 & 0 \\ -4/5 & 3/5 & 0 \\ 0 & 0 & 1 \end{pmatrix}$ and $B= \begin{pmatrix} 1 & 0 & 0 \\ 0 & 3/5 & 4/5 \\ 0 & -4/5 & 3/5 \end{pmatrix}$.
Why did I choose these matrices? I wanted matrices whose action on $Q$ had an attracting fixed point and a repelling fixed point, just as my previous choice in $PSL_2(\mathbb{R})$ did. Both $A$ and $B$ are diagonalizable with eigenvalues $( (2+i)/(2-i), 1, (2-i)/(2+i) )$. This means that the corresponding matrices in $PSL_2(\mathbb{Q}_5)$ have eigenvalues $((2+i)/\sqrt{5},(2-i)/\sqrt{5})$. Since, 5-adically, $|(2+i)/\sqrt{5}| >1$ and $|(2-i)/\sqrt{5}|<1$, this corresponds to an action with one repelling and one attracting fixed point. People who are used to Mobius transformations will call this a hyperbolic action on $Q$.
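(A quick numerical aside, not in the original post: one can confirm with NumPy that $A$ and $B$ are indeed rotations with the claimed eigenvalues, since $(2+i)/(2-i) = (3+4i)/5$.)

```python
import numpy as np

A = np.array([[3/5, 4/5, 0], [-4/5, 3/5, 0], [0, 0, 1]])
B = np.array([[1, 0, 0], [0, 3/5, 4/5], [0, -4/5, 3/5]])
for M in (A, B):
    assert np.allclose(M @ M.T, np.eye(3))    # orthogonal
    assert np.isclose(np.linalg.det(M), 1.0)  # determinant 1, so a rotation
    print(np.round(np.linalg.eigvals(M), 6))  # 1 and 0.6 +/- 0.8j = (3 +/- 4i)/5
```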
The above explains why we might think that $A$ and $B$ would work. Let's actually show that they do. Let $v_1$ and $v_3$ be the attracting and repelling fixed points of $A$ on $Q$. Explicitly, $v_1=(1,i,0)$ and $v_3=(1,-i,0)$. Similarly, define $v_2$ and $v_4$ to be the attracting and repelling fixed points of $B$. We'll now take $V_1$, $V_2$, $V_3$ and $V_4$ to be discs around $v_1$, $v_2$, $v_3$ and $v_4$. More specifically, let $q=(x:y:z)$ be a point of $Q$. By scaling the homogeneous coordinates $(x:y:z)$, we can guarantee that $x$, $y$ and $z$ are all in $\mathbb{Z}_5$, and not all in $5 \mathbb{Z}_{5}$. Let $\overline{q}$ denote the reduction of $q$ modulo 5; this is an element of $\mathbb{F}_5^3$, modulo scaling. Then we define $q$ to be in $V_i$ if $\overline{q}=\overline{v_i}$. (Explicitly, $\overline{v_1}=(1:2:0)$ and we can give similar expressions for the other $\overline{v_i}$.) Now, check that the relations $(*)$ still hold and mimic the above proof to show that the group generated inside $SO(3)(\mathbb{Q}_5)$ by $A$ and $B$ is free. But, since the entries in $A$ and $B$ were rational, this also shows that the free group embeds into $SO(3)(\mathbb{Q})$ and into $SO(3)(\mathbb{R})$.
There are ways to write this proof purely in terms of the rational numbers and divisibility by 5, but I find them unnatural. Seeing an action on the 5-adic projective line makes it clear to me what is happening.
In a comment on Tim Gowers’ blog, Terry Tao suggests that embedding the free group into $SO(3)(\mathbb{R})$ has a number of applications. I’m afraid that I don’t know any of them other than Banach-Tarski, although I imagine you could construct useful examples in dynamical systems by using this action. However, I know lots of reasons why you might want to embed the free group into $PSL_2(\mathbb{C})$ or $PSL_2(\mathbb{Q}_p)$. This will be the subject of my follow up post.
In closing, here is an exercise for you. Many papers on the Banach-Tarski theorem state that the group generated by
$\begin{pmatrix} 1/2 & \sqrt{3}/2 & 0 \\ - \sqrt{3}/2 & 1/2 & 0 \\ 0 & 0 & 1 \end{pmatrix}$ and $\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1/2 & \sqrt{3}/2 \\ 0 & -\sqrt{3}/2 & 1/2 \end{pmatrix}$
is free. Explain how to adapt the above proof to show this.
## Comments»
1. gowers - September 17, 2007
This can’t be an original idea, but my first thought would be very different from yours above: it would be to choose two random matrices A and B. For each expression in A and B that doesn’t cancel in the free group, one would have to show that it wasn’t identically zero in SO_3, which I presume can’t be that hard. And then a dimension argument ought to show that the probability that any one of these expressions equals zero is zero, in which case a random choice works with probability 1. Of course, it’s interesting to have an explicit example, and what you write is an interesting route to finding such an example.
2. Terence Tao - September 17, 2007
Dear David,
This isn’t quite the same thing, but there is a variant of “A and B generate a free group inside a compact Lie group G” which has a number of applications, namely that “A and B enjoy a spectral gap inside G”. Indeed, if one views A and B as translation operators on $L^2(G)$, then the operator $\frac{1}{4}( A + A^{-1} + B + B^{-1} )$ is clearly a self-adjoint operator on $L^2(G)$ with an operator norm of 1. If one restricts to functions of mean zero, we say that there is a spectral gap if the operator norm now drops to $1 - \epsilon$ for some $\epsilon > 0$. Roughly speaking, this asserts that the only functions on G which are anywhere close to being invariant under both A and B are the constant functions (this implies, but is stronger than, the action of < A,B&\gt; on G being ergodic). Drinfeld showed that there exist pairs A, B in SU(2) (or SO(3), if you wish) which not only generate a free group, but enjoy a spectral gap; among other things, this implies that the only finitely additive rotation-invariant probability measure on the sphere is normalised Lebesgue measure (this is false for the unit circle!). There is a lot of recent work on this kind of thing (most recently by Bourgain and Gamburd); it has connections with Kazhdan’s property T and to some analytic number theory (such as the Ramanujan conjectures).
One amusing way to show the existence of A, B that generate the free group is to show that for any non-trivial word w(A,B) of A and B, the variety { (A,B): w(A,B) = id} is a proper subvariety of SO(3) x SO(3), and thus has measure zero. Since the countable union of measure zero sets is measure zero, we see that if one selects A, B uniformly at random, one is going to get a free group with probability 1. (Alternatively, one could argue using the Baire category theorem.) To show that all non-trivial words define proper subvarieties, there are many ways; for instance, one can differentiate w(A,B) with respect to A near the identity and analyse what comes out.
3. Scott Carnahan - September 17, 2007
That’s pretty cool. I had been taught the "sufficiently generic rotations" argument and never bothered to think about explicit realizations.
Minor correction: $PSL_2(\mathbb{R})$ is isomorphic to the orthochronous subgroup $SO(2,1)^+(\mathbb{R})$, which is index two in $SO(2,1)(\mathbb{R})$.
4. Scott Carnahan - September 17, 2007
I just rescued the first two comments from the spam bucket. Maybe our filter detects Fields medalists.
Incidentally, Tao’s last argument works over any uncountable field, if you replace measure zero with Zariski closed, positive codimension. Are there countably infinite fields (necessarily of positive characteristic, by the post) with no embeddings of $F_2$ in $SO_3$?
Also, is there a word like proper that doesn’t also mean finite type and universally closed? I got very confused for a second, reading about proper subvarieties of an affine algebraic group, but I couldn’t think of another concise way to say the same thing.
5. Terence Tao - September 18, 2007
Ugh, sorry about that. I guess “proper” is right up there with “normal” and “regular” as a contender for the most overworked adjective in mathematics. Perhaps “strict subvariety” would have been better?
The question about the countably infinite fields is interesting, but way outside of my own expertise; I had a random naive thought that the Lowenheim-Skolem theorem might be relevant, but that might be totally off-base.
6. Scott Carnahan - September 18, 2007
To answer my own question, $SO_3(\overline{\mathbb{F}_q})$ is a union of finite groups, so it can’t even have an embedding of $\mathbb{Z}$. It shouldn’t be hard to construct an embedding of $F_2$ for fields with transcendence degree at least two, but I wonder if it can be done for function fields of curves.
7. Scott Carnahan - September 18, 2007
Sorry to flood the thread. David's proof above works almost verbatim over $\mathbb{F}_q(t)$ (and any extension thereof) when $q$ is odd, by substituting $\{t, t^2-1, 2t, t^2+1\}$ for every appearance of $\{2,3,4,5\}$, although in the proof you will have to extend scalars on the symmetric space $Q$ when $q \equiv 3 \, (4)$. There are some subtleties when working with orthogonal groups in characteristic two that I don't know how to tackle.
For some reason, people don’t seem to find $t$-adic Banach-Tarski particularly compelling…
8. carlbrannen - September 18, 2007
I’m afraid this is one of those places where physicists and mathematicians have to diverge. Far more interesting to me are finite groups embedded in Lie groups.
The one I’m playing with at the moment is the “snuark” subset of SU(2) (say in the Pauli matrices) that is generated by the projection operators for spin in the +x, -x, +y, -y, +z, and -z directions.
There are 6×6 = 36 products of these six elements, six of which are zero. The 30 non zero products provide a basis set for the group generated by the six. That is, any product of the six can be written as a complex multiple of the 30 nonzero products.
The 30 non zero products are all themselves primitive projection operators, or complex multiples of projection operators. But only the six on the diagonal are Hermitian.
More generally, any primitive projection operator in the Pauli algebra (Hermitian or not) can be written uniquely as the product of two primitive Hermitian projection operators.
9. Ben Webster - September 18, 2007
Carl-
It’s not like we have to chose one or the other. Mathematicians are also very interested in finite subgroups of Lie groups. It just tends to go by the name of “the representation theory of finite groups.” That has a pretty long and storied history over the past century and a half or so.
10. David Speyer - September 19, 2007
First of all, thank you for all the replies. I’m still digesting the notion of a spectral gap.
Regarding the solution by choosing “random” rotations, suggested by both of the Field’s medalists above: I thought of this, but it was not obvious to me how to show that it worked. You can’t just differentiate near the identity: the equation
$e^A e^B e^{-A} e^{-B}$
is singular near (0,0). You could try expanding by Baker-Campbell-Hausdorff and showing that the lowest degree terms don’t vanish, but the combinatorics seems nasty. (Also, the free Lie algebra doesn’t embed in so_3, so terms that formally look like they are nonzero might not be.) I’m curious to see the details of the differentiation solution if anyone knows a reference.
11. Terence Tao - September 19, 2007
Dear David,
Hmm, actually I retract my claim; I didn’t handle the non-vanishing issue correctly.
Baker-Campbell-Hausdorff allows one to finish the job if one can embed the free Lie algebra over Q into so(3;R), though I am not sure if this implication is reversible. Since R has infinite transcendence degree, it suffices to show that, if X, Y, Z are the generators of so(3), that the generic elements aX+bY+cZ and dX+eY+fZ in so(3;Q(a,b,c,d,e,f)) generates a free Lie algebra over Q. But even with these reductions there is still some serious algebraic combinatorics to be done, unless there is some sort of lifting trick or something, or if one has a really good description of the free Lie algebra.
Well, we can at least turn this around; you asked for a non-trivial application of the fact that the free group embeds into SO(3), and we found one, namely that no non-trivial word on SO(3) is identically equal to the identity matrix. :-)
12. David Speyer - September 19, 2007
Sadly, one cannot embed the free Lie algebra over Q into so(3,R) either. Any two vectors in so(3) have
$[[a,b],\,[a,[a,[a,b]]]] = 0$
which is not true in the free Lie algebra.
Actually, this brings up a few interesting questions, which I have only thought about very briefly:
Any word in the free group produces a map from $SO(3) \times SO(3) \to SO(3)$. Is it always surjective?
Any word $w$ in the free group produces a subscheme of $SO(3) \times SO(3)$ cut out by $w=1$. Is this always generically reduced?
13. Terence Tao - September 19, 2007
That’s a nice identity! It has a nice interpretation if you view the Lie bracket on so(3) as being isomorphic to the cross product on R^3.
In retrospect, it's clear that no finite-dimensional Lie algebra can contain the free Lie algebra, by doing a dimension count on the number of possible words of a sufficiently long length n; the dimension is polynomial in n in the former and exponential in the latter, and so Malthus forces us to have a non-trivial relation at some point. Thanks for clearing up my intuition there; for some reason I had the vague (but wrong) impression that semisimple Lie algebras behaved somewhat like free ones.
Still, the fact that the words on SO(3) are never identically 1 does imply some very weird algebraic combinatorial statement about formal expansions in the Lie algebra of SO(3) generated by the Baker-Campbell-Hausdorff formula, namely that they do not lie in the ideal of the free Lie algebra generated by the relations that so(3) satisfies (is there a presentation of that ideal, I wonder?). It is still slightly puzzling to me that there is a purely algebraic statement which does not seem to be easily provable in a purely algebraic fashion, but now that this statement looks so contorted, I guess it doesn’t “offend” me as much.
I can’t help you with the generically reduced problem, but for the surjectivity problem, one can observe at least that the image is a connected subvariety which is invariant under conjugation and contains the identity, which seems to cut down the possibilities a bit.
This might possibly be enough to handle the SO(3) problem, but more general Lie groups a more powerful approach would be needed.
14. Terence Tao - September 20, 2007
Dear David,
I asked around here at UCLA for applications of “embedding free groups into other groups”. I got two responses that you might find interesting:
1. My colleague, Sorin Popa, points out to me a result of his student, Adrian Ioana,
http://front.math.ucdavis.edu/0701.5027
which asserts that if a countable discrete group G happens to contain a copy of the free group, then there are an uncountable number of "different" possible actions of that group on a standard probability space, where two actions are considered the "same" (or more precisely, "orbit equivalent") if there is a measure isomorphism that converts orbits of one action into orbits of another. At the other extreme, if instead the group G is abelian or at least amenable, then all actions are orbit equivalent, thanks to work of Dye, Ornstein, and Weiss.
2. More generally, there is some sort of philosophy in the theory of discrete group actions that “non-amenability” is like containing the free group as some sort of “virtual subgroup”, whatever that means.
My colleague Dima Shylakhtenko mentions a number of deep conjectures in this direction. One of the shorter ones to state is that any non-amenable von Neumann algebra with a trace must contain a copy of the von Neumann algebra of the free group; I think the converse (that any tracial vNa which contains the vNa of the free group is non-amenable) is pretty easy.
Incidentally, there are some deep connections between von Neumann algebras of groups and of orbit equivalence problems, though I’m really not an expert on these matters. Kazhdan’s property T also seems to play a big role; free groups have this property, amenable groups don’t, and it seems to make a decisive difference as to what actions of these groups look like.
15. Terence Tao - October 7, 2007
Dear David,
I found out from Alexander Gamburd that the answer to your question “is the word map surjective”? seems to have been almost answered by Borel (he showed that the image was Zariski open). See
http://front.math.ucdavis.edu/math.GR/0211302
16. Emmanuel Kowalski - October 8, 2007
A small correction to the end of Comment 14: free groups (of rank >1) do not have Property T; indeed, the latter (for a discrete group) implies finite abelianization (because Property T goes to quotients by closed subgroups, and abelian groups have T if and only if they are compact; hence G/[G,G] is compact and discrete for a discrete G with Property T). In particular SL(2,Z) does not have T (it has a finite index subgroup which is free of rank 2); however, Kazhdan proved (among other things) that SL(n,Z), for n>2, has Property T.
The recent work of Bourgain, Gamburd and Sarnak on points in orbits of discrete groups with almost prime coordinates uses embeddings of free groups for some results. There one useful property is that the Cayley graphs of free groups (with respect to the generators) is a tree, hence homogeneous and the harmonic analysis on it is pretty transparent. On the other hand, one can manage for the free subgroup to be Zariski dense, and a deep theorem implies that its reductions modulo large enough primes are the same as that of the ambient group, which is what is needed for sieving…
Incidentally, for some purposes at least in their work, one can replace the use of combinatorial balls of growing radius in the free subgroup by considerations of random walks on the ambient group. I wonder if the same could be the case for other uses of free groups?
17. 245B, notes 2: Amenability, the ping-pong lemma, and the Banach-Tarski paradox (optional) « What’s new - January 8, 2009
[...] Exercise 14 applies. There are many such constructions. One is given (and motivated) in this blog post of David Speyer, based on passing from the reals to the 5-adics, where -1 is a square root and so SO(3) becomes [...]
18. Henry W - January 8, 2009
Emmanuel,
There one useful property is that the Cayley graphs of free groups (with respect to the generators) is a tree, hence homogeneous and the harmonic analysis on it is pretty transparent.
All Cayley graphs are homogeneous. In fact, that’s pretty much the definition of a Cayley graph! (OK, it should also be connected.) They must be using some other properties of trees.
19. Emmanuel Kowalski - January 9, 2009
Yes, I used “homogeneous” wrongly; the actual property which is important is that one can write down explicitly the eigenfunctions of the discrete laplacian and use harmonic analysis efficiently (this is also apparent in the discussion of Ramanujan graphs by Lubotzky, Phillips and Sarnak).
20. A computational perspective on set theory « What’s new - March 19, 2010
[...] for instance this post from the Secret Blogging Seminar for more discussion of this example.) Each rotation in has two fixed antipodal points in ; we let [...]
21. 245C, Notes 4: Sobolev spaces « What’s new - November 19, 2010
[...] for instance this post from the Secret Blogging Seminar for more discussion of this example.) Each rotation in has two fixed antipodal points in ; we let [...]
http://mathhelpforum.com/differential-geometry/146092-universal-cover-n-times-punctured-plane.html
# Thread:
1. ## Universal cover of n times punctured plane
I'm looking for some insight into the relationship between the complex plane punctured $n$ times and its universal cover. I understand that the universal cover of the once-punctured plane is the plane itself, with the corresponding uniformizing function being the exponential (whose automorphism group $\cong \mathbb{Z}$ is isomorphic to the fundamental group of the base and to the group of deck transformations of the cover). I understand also that that the universal cover of the twice punctured plane is the unit disc (or upper half-plane), with the elliptic modular function $\lambda=k^2$ being the corresponding uniformizing function (whose automorphism group $\cong \mbox{free group on two generators} \cong \Gamma(2) \triangleleft \mbox{PSL}(2, \mathbb{Z})$) is once again isomorphic to the fundamental group of the base, and to the group of deck transformations of the cover).
In general, what is the universal cover of the $n$-times punctured plane, and what is the corresponding uniformizing function? I suppose that the universal cover is the upper-half plane for $n\geq 2$, with a modular function as the uniformizing function. However, this would imply that $\mbox{PSL}(2, \mathbb{Z})$ contains a copy of the free group on $n$ generators as a subgroup, which I doubt very much! It's impressive enough that it contains a copy of the free group on two generators...
Any pointers are greatly appreciated!
2. I don't know enough algebraic topology to answer this question, but I do know that $\mathbb{F}_2$, the free group on two generators, contains a copy of $\mathbb{F}_n$ as a subgroup, and therefore so does $\text{PSL}(2,\mathbb{Z})$. Here, n can be any positive integer or even infinity. If a, b are generators of $\mathbb{F}_2$ then (if I remember correctly) you can take $a^kb^ka^k\ (1\leqslant k\leqslant n)$ as generators for a copy of $\mathbb{F}_n$.
3. Originally Posted by Opalg
… you can take $a^kb^ka^k\ (1\leqslant k\leqslant n)$ as generators for a copy of $\mathbb{F}_n$.
That's awesome! So I guess the possibility of the uniformizing function being a modular function is not ruled out.
4. ## Re: Universal cover of n times punctured plane
Hi Bruno,
Did you find a good resolution to this question? It's something that I am quite interested in as well. (I've heard the terms Schottky space and Schottky group come up in this context.)
Ralph
http://quant.stackexchange.com/questions/700/does-the-gamma-function-have-any-application-in-quantitative-finance/702
# Does the gamma function have any application in quantitative finance?
I was looking into the factorial function in an R package called gregmisc and came across the implementation of the gamma function, instead of a recursive or iterative process as I was expecting. The gamma function is defined as:
$$\Gamma(z)=\int_{0}^{\infty}e^{-t}t^{z-1}dt$$
A brief history of the function points to Euler's solution to the factorial problem for non-integers (although the equation above is not his). It has some application in physics and I was curious if it is useful to any quant models, apart from being a fancy factorial calculator.
-
## 3 Answers
It shows up in Bayesian analysis where a binomial distribution is involved (integer values apply):
$$\Gamma(k + 1) = k!$$
That allows the following integral to be evaluated in closed form:
$$\int_{0}^{1}p^{j-1}(1-p)^{k-1}dp = \frac{\Gamma(k)\Gamma(j)}{\Gamma(j+k)}$$
That integral can easily show up in the numerator and/or denominator of Bayes' equation.
-
Thanks for the response. I thought it may come up in calculating half-life of a mean reverting spread, but couldn't find it there. – Milktrader Mar 13 '11 at 3:54
Your second equation looks like the beta function, which now opens up some more things to ponder. – Milktrader Mar 13 '11 at 4:28
– bill_080 Mar 13 '11 at 17:00
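A quick numerical illustration of both identities above (my own sketch, assuming SciPy; the values $k=5$, $j=3$ are arbitrary):

```python
from math import factorial
from scipy.integrate import quad
from scipy.special import gamma

k, j = 5, 3
print(gamma(k + 1), factorial(k))   # Gamma(k+1) = k! : both print 120
integral, _ = quad(lambda p: p**(j - 1) * (1 - p)**(k - 1), 0, 1)
print(integral, gamma(k) * gamma(j) / gamma(j + k))  # both ~ 1/105
```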
In certain cases some stochastic differential equations (SDEs) have closed-form (deterministic) solutions in terms of well-known ordinary differential equations (ODEs), partial differential equations (PDEs), and special functions like the gamma function.
Here's an example from a paper where an SDE has a closed form solution in terms of the gamma function: http://www.siam.org/books/dc13/DC13samplechpt.pdf
Solving SDE's (preferably quickly), like with a closed form solution (when one is available), is a core activity in quantitative finance.
-
Appreciate the link. Thank you. – Milktrader Mar 13 '11 at 3:54
Gamma distributions are used to model the default rate of credit portfolios in the CreditRisk+ model.
-
http://mathematica.stackexchange.com/questions/2847/most-efficient-way-to-obtain-samples-from-high-dimensional-multivariate-distribu/2849
# Most efficient way to obtain samples from high-dimensional multivariate distributions?
Is `MultinormalDistribution[]` efficient and easy to use for high dimensions?
I have a variable $n$ representing the dimension of a Monte Carlo integration I do on a multivariate Gaussian copula, where typical values of $n$ are near 100. I am using a simple correlation matrix made from `ConstantArray[ρ, {n, n}]` (except on the diagonal, which is 1).
For now I simply wrote a function that calls `RandomVariate[NormalDistribution[0,1], {n,m}]`, generates the correlation matrix, calculates its Cholesky decomposition, and then multiplies to obtain the correlated multivariate normal samples. It works fine and was easy to code and understand.
However, generating correlated multivariate $t$ samples is more involved, so it would be convenient to just drop in a call to `MultivariateTDistribution[]`. That leaves me with two issues:
1. Setting up `MultinormalDistribution[]` with an arbitrary variable count is hard because it seems to want variable names, which I am having trouble generating programmatically.
2. I am not sure that the internals of `RandomVariate` on `MultinormalDistribution[]` and `MultivariateTDistribution[]` are set up to efficiently obtain high-dimensional sample sets.
If the efficiency is not expected to be that great I will stick with my current approach. Otherwise I would appreciate advice on using `MultinormalDistribution` in this high-dimensional context, since it would be worth investing the time in this more elegant and Mathematica-like approach.
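For reference, the construction described above looks roughly like this in NumPy terms (my own hedged sketch of the Cholesky approach plus the usual normal-over-chi-square construction for multivariate $t$; an illustration, not Mathematica's actual internals):

```python
import numpy as np

n, m, rho, df = 100, 10000, 0.2, 10
sigma = np.full((n, n), rho) + (1 - rho) * np.eye(n)  # equicorrelation matrix
L = np.linalg.cholesky(sigma)

z = np.random.standard_normal((m, n)) @ L.T           # correlated normals
# Multivariate t: scale each normal vector by an independent sqrt(df / chi2_df).
w = np.sqrt(np.random.chisquare(df, size=(m, 1)) / df)
t = z / w
print(np.corrcoef(t, rowvar=False)[0, 1])             # roughly rho
```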
-
I don't understand the remark about `MultinormalDistribution` "seeming to want variable names." Try `RandomVariate[MultinormalDistribution[ConstantArray[1, 100], IdentityMatrix[100]], 1]` for instance. – whuber Mar 12 '12 at 19:08
I was confused by the documentation examples all having variable names. I now understand better! – Brian B Mar 12 '12 at 20:32
It looks like `Parallelize` may be a smart thing to use here as well. – Brian B Mar 12 '12 at 21:01
@BrianB You can also try using the MKL's rng (`SeedRandom[Method->"MKL"]`) for better performance if you're on Win/Linux. `Parallelize` can be tricky---it'll generally only parallelize certain functional constructs (`Map`, `Table`, etc.), and the evaluations will run in separate processes (not threads), so communication overhead can be significant. It is very nice and easy to use though when the evaluations that run in parallel each take a long time and are independent of each other. – Szabolcs Mar 13 '12 at 11:03
The Monte Carlo simulation is of course embarrassingly parallel. I ended up finding that `Parallelize` worked well if and only if I specified `Method -> "CoarsestGrained"`. But in that case it was brilliant. – Brian B Mar 13 '12 at 15:43
## 2 Answers
Multivariate distributions do not require any named variables. My hunch is that your confusion in this regard is due to the excessive use of `{{1, ρ}, {ρ, 1}}` in the documentation for `MultivariateTDistribution`. You might've missed the fact that in most cases, the value for `ρ` is being substituted via a `Table` or a `ParallelTable`.
You can directly input your covariance matrix (and any additional parameters depending on the distribution) and call `RandomVariate` to draw from that. For your example:
````Σ = With[{ρ = 0.2}, SparseArray[{i_, i_} -> 1, {100, 100}, ρ]];
x = RandomVariate[MultivariateTDistribution[Σ, 10], {100}];
````
This will generate 100 samples of multivariate T distributed random vectors, each of length 100.
-
That is very slick! Thank you for introducing the `SparseArray` as well. – Brian B Mar 12 '12 at 19:36
Yes.
In principle, one call to `RandomVariate` will incur some overhead to diagonalize the matrix and then cost a fixed amount of time per output vector. Let's see by first creating some data to describe such a distribution:
````μ = RandomReal[{0, 1}, 100];
Σ = DiagonalMatrix[Exp[RandomReal[{0, 1}, 100]]];
````
Now request a bunch of random draws (each one of them a 100-vector):
````Timing[RandomVariate[MultinormalDistribution[μ, Σ], 100000];]
````
The response is 0.687 seconds: that is, we can get about 150,000 100-vectors per second. Not shabby and unlikely to be bettered by other approaches. (`MultinormalDistribution` appears to be parallelized automatically, so the timing is the sum of (a) pre-processing overhead, including RAM allocation for the output, all performed by one core, and (b) generation of random values, which appears to be performed in parallel by all the available (licensed) cores. My timings reflect a four-core license.)
Incidentally, because there must be some matrix multiplication going on for each output vector, we ought to expect slightly sublinear scaling with dimension. E.g., generating 10,000 1000-vectors produces the same quantity of output numbers (i.e., $10^7$ of them in both cases) but requires 1.30 seconds--about twice as long.
-
I don't think the Parallel Computing Tools are used by default in any built-in Mathematica functions. However, some of them do have low level implementation which take advantage of multiple cores using a single kernel only (e.g. `LinearSolve`). (This is different from the high-level parallelization of the PCT) Do licenses matter in this case as well? I thought licensing restricted the number of kernels that can be run in parallel, but not how a single kernel can use multiple cores. – Szabolcs Mar 13 '12 at 11:08
I don't know the answer to Mathematica licensing questions. What I do know is that (a) I have a license for 4 cores; (b) I have 8 available cores; (c) in repeatedly running this code, I observed a period in which only one core was used followed by another in which four cores were used, but never more than four. – whuber Mar 13 '12 at 14:37
Do you really have 8 cores or do you have a CPU with hyperthreading? In either case, you can try `SetSystemOptions["ParallelOptions" -> "ParallelThreadNumber" -> 8]` Just make sure you set it back to 4 in case it would break `Parallelize` because of the licensing restrictions. (Anyway, it should reset to 4 after a kernel restart, and it should not influence the number of parallel kernels launched) – Szabolcs Mar 13 '12 at 15:02
Thanks for the suggestions. You're correct; it's four physical cores with hyperthreading (Xeon 3580). Somehow I have six kernels launched at the moment. Only one was utilized in the `RandomVariate` call. Ultimately it was able to use only 50% of total resources even after increasing `ParallelThreadNumber`. Surprisingly, explicitly parallelizing this call (via `ParallelTable`) degrades performance: although four kernels are invoked for the initial part of the calculation, four times as much RAM is allocated and the total operation takes six times as long. – whuber Mar 13 '12 at 16:57
Note that your code only uses a single kernel process, even though it can utilize more than one core. When you use `ParallelTable`, that launches multiple kernel processes, and uses a completely different sort of parallelization, which happens at the level of Mathematica code, and is not as efficient as whatever `MultinormalDistribution` uses internally. Actually the parallel computing tools (which provide ParallelTable) are fully implemented in Mathematica, and just use MathLink for communication between kernel processes. (The full source code is available) – Szabolcs Mar 13 '12 at 17:04
|
http://mathoverflow.net/questions/26127/connections-between-ultrafilters-in-topology-and-logic/26198
|
## Connections between ultrafilters in topology and logic
I have a somewhat vague question. It seems to me that there are two main ways in which ultrafilters (on a set) can be used. One is in topology. The notion of an ultrafilter converging to a point is very useful since, in particular, knowing the limit points of every ultrafilter on a space is equivalent to knowing its topology. The other use is in logic (a subject about which I admittedly know very little). For instance, ultraproducts (and more generally ultralimits) can be used to construct non-standard models, etc. I'm just curious about any connections that exist between these two uses of ultrafilters. For example, is there any logical interpretation of ultrafilter convergence on a topological space? Is there a connection to the internal logic of its topos of sheaves, for example? I'm really a beginner with this stuff, so any connections, even trivial ones, would be most interesting to me. If this question is too open-ended, feel free to change this to community wiki.
Ultrafilters are also used in forcing in set theory, with the same effect as in the construction of ultraproducts, namely a quotient by an ultrafilter collapses a complete Boolean algebra to the Boolean algebra $\lbrace 0,1 \rbrace$. – Andrej Bauer May 27 2010 at 10:38
Here are two very good pieces of exposition on ultrafilters: topologicalmusings.wordpress.com/2008/07/18/… and terrytao.wordpress.com/2007/06/25/… – David Corfield May 27 2010 at 12:40
This is very closely related to this question - mathoverflow.net/questions/11261/… – François G. Dorais♦ May 27 2010 at 17:56
Thanks very much for all the answers! I'm avoiding accepting an answer as the question can clearly have a multitude of "correct" answers. – David Carchedi Jun 1 2010 at 13:11
## 7 Answers
It's a multifaceted question, and answers will be multifaceted too.
At a simpler level, you no doubt know that an ultrafilter on a set $X$ can be identified with a Boolean algebra homomorphism $PX \to P1$. More generally, an ultrafilter in a Boolean algebra $B$ is a Boolean algebra homomorphism $B \to P1$, and if we think of $B$ (or a presentation of $B$) as a propositional theory, then such Boolean algebra homomorphisms or truth-value assignments to propositions $b \in B$ can be thought of as models for the theory. The set of all Boolean algebra homomorphisms can be topologized a la Zariski, and the result is a Stone space (cf. supercooldave's reply) which is compact, Hausdorff, and totally disconnected. The compactness in particular directly implies the compactness theorem for propositional theories: thinking of propositions $b \in B$ as giving closed sets in the Zariski spectrum, if every finite conjunction of propositions from a set $\Sigma$ has a model (or a point in the Stone space), then $\Sigma$ itself has a model (i.e., the intersection of all closed sets coming from $b \in \Sigma$ is nonempty). This principle can be beefed up to encompass the compactness theorem for predicate logic.
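To spell out the first step (a standard gloss, not part of the original answer): an ultrafilter $U$ on a set $X$ corresponds to the Boolean algebra homomorphism $\varphi_U : PX \to P1 \cong \{0,1\}$ given by $\varphi_U(S) = 1$ if and only if $S \in U$; the filter axioms say precisely that $\varphi_U$ preserves finite meets, and maximality says that it preserves complements.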
Here's another cross-connection (since you bring up nonstandard models): the ultrapowers or ultraproducts, used to construct for example Robinson's nonstandard reals, are just examples of taking stalks. That is: if you have a bunch of models $M_x$ indexed by a set $X$, and if you have an ultrafilter on $X$ (realized as a point $U$ in the Stone-Cech compactification $\beta X$ of the discrete space $X$), then the ultraproduct
$$(\prod_{x \in X} M_x)/U$$
is the value of the structure $(M_x)_{x \in X}$ (as an object in the topos $Set/X$) under the composite functor
$$Set/X \simeq Sheaves(X_{discrete}) \stackrel{i_*}{\to} Sheaves(\beta X) \stackrel{stalk_U}{\to} Set$$
where $i_*$ is a geometric morphism between sheaves induced by the canonical continuous inclusion of $X$ into $\beta X$. Lawvere has remarked that both Robinson's construction and Cohen forcing are examples of a general phenomenon, where one starts with a model in a universe $Set$ of presumably constant sets, then passes to a universe of more variable sets (such as $Set/X$, $Sh(\beta X)$, or $Sh(P)$ where $P$ can be a poset of "forcing conditions"), and then passes back down again to more constant sets by "freezing at a point" (taking stalks at a point, or passing to a filterquotient construction) -- even if mention of the passage through toposes of variable sets is usually elided over in silence.
Wow, that's really cool! Thanks! Is there anywhere I can read more about this? – David Carchedi May 27 2010 at 13:10
Mac Lane and Moerdijk's book "Sheaves in Geometry and Logic" spells out the connections very nicely. – Steven Gubkin May 27 2010 at 13:23
I'm going on memory here, but I may have picked up on the ideas in the first paragraph by reading the first few chapters of Handbook of Mathematical Logic. Johnstone's book is also really good, and goes into a lot of detail. We're adding bits and pieces to the nLab (e.g., stuff surrounding "ultrafilter theorem"). As for the stuff in the last paragraph, you probably have Mac Lane-Moerdijk as a reference; the stuff from Lawvere comes from his Chicago lecture notes Variable Sets, Etendu, and Variable Structures in Topoi. Sorry I don't have more! – Todd Trimble May 27 2010 at 13:30
Todd, you might want to check out this old question of Joel David Hamkins - mathoverflow.net/questions/11261/… - My answer there overlaps with yours, but you probably could add your two cents. – François G. Dorais♦ May 27 2010 at 17:55
Thanks for bringing this to my attention, Francois. I may have something to add a little later. – Todd Trimble May 27 2010 at 18:18
Another interesting use of ultrafilters takes place in metric geometry, where they are used for constructing the so-called asymptotic cones of metric spaces.
Roughly speaking, an asymptotic cone of a metric space $X$ is what you see when looking at $X$ from infinitely far away. More precisely, you rescale the metric on $X$ dividing by $n$, you let $n$ tend to $+\infty$, and you take the Gromov-Hausdorff limit point of the obtained sequence of metric spaces. Of course, usually such a sequence does not converge, and you use a non-principal ultrafilter for individuating a limit (depending on the ultrafilter). The idea is due to Gromov and has been first described in detail by van den Dries and Wilkie in:
Gromov's theorem on groups of polynomial growth and elementary logic. J. Algebra 89 (1984), no. 2, 349--374.
Another way of constructing asymptotic cones is as follows: you take a non-standard extension $^\ast X$ of $X$ with non-standard distance $^\ast d$ induced by the distance $d$ on $X$, you choose an infinite non-standard real $\lambda$ and you identify points $p,q$ of $^\ast X$ if $^\ast d (p,q)/\lambda$ is infinitesimal. You put on the quotient the metric $d'=^\ast d/\lambda$, thus obtaining an asymptotic cone of $X$ (I am cheating a bit: you should also choose a basepoint in $^\ast X$ and consider only points in the quotient of $^\ast X$ which are at finite $d'$-distance from the basepoint).
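A concrete instance (a standard example added for illustration, not taken from the answer): for $\mathbb{Z}^n$ with the word metric of the standard generating set, every asymptotic cone is isometric to $\mathbb{R}^n$ with the $\ell^1$ metric, independently of the ultrafilter and the scaling sequence; the dependence on the ultrafilter only becomes visible for less commutative groups, as a later answer illustrates.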
Asymptotic cones are nowadays a useful tool in geometric group theory, where they are used for studying large-scale properties of groups and spaces.
A beautiful reply, thank you for this! – Grant Olney Passmore May 28 2010 at 0:03
Some of the connections between topology and logic via ultrafilters have been around for quite a while.
Łoś's theorem from 1955 is the first place where ultraproducts appear in logic, as far as I know, although the ultraproduct construction is older (probably due to Hewitt). A very elegant proof of the compactness theorem of first-order logic using ultraproducts is due to Morel, Scott, and Tarski. It shows that compactness in logic is really compactness of an appropriate topological space. This pretty and inspiring connection can be extended much further.
A very nice starting point to learn about these connections is the series of papers by Xavier Caicedo (no relation):
• Compactness and normality in abstract logics. Ann. Pure Appl. Logic 59 (1993), no. 1, 33--43.
• The abstract compactness theorem revisited. Logic and foundations of mathematics (Florence, 1995), 131--141, Synthese Lib., 280, Kluwer Acad. Publ., Dordrecht, 1999.
• Logic of sheaves of structures. (in Spanish) Rev. Acad. Colombiana Cienc. Exact. Fís. Natur. 19 (1995), no. 74, 569--586.
The last paper shows how many "limit" constructions in intuitionistic logic (Kripke models), set theory (forcing) and elsewhere are examples of the same phenomenon.
This topological approach to logic is mainly guided by the ultraproduct construction. Daniele Mundici has also written about this.
Paolo Lipparini has studied variants of compactness that also turn out to have connections to logic via properties of ultrafilters, and lead to very interesting problems that seem to require Shelah's pcf theory; this line of work seems to have originated in set-theoretic topology, and R. Stephenson wrote a good survey (25 years old now) of the then state of the art on the topological side of things; see his article in the Handbook of Set-Theoretic Topology.
Finally, set theory is nowadays where both ultrafilters in general and the ultraproduct construction in particular are mostly used, in connection with large cardinals and elementary embeddings. Many natural problems in set-theoretic topology have been shown to have deep connections with these cardinals via the ultrafilters they generate. (Though here the connection to logic proper is weaker.)
Thanks very much! – David Carchedi May 29 2010 at 14:58
Speaking of asymptotic cones ... here's another connection between ultrafilters, topology and logic. Suppose that $\Gamma$ is a uniform lattice in $SL_{3}(\mathbb{R})$. Gromov suggested that the asymptotic cones of $\Gamma$ are (essentially) independent of the choice of the ultrafilter $\mathcal{D}$. In fact, the following is true:
(a) If $CH$ holds, then $\Gamma$ has a unique asymptotic cone up to homeomorphism.
(b) If $CH$ fails, then $\Gamma$ has $2^{2^{\omega}}$ asymptotic cones up to homeomorphism.
Amongst other things, the proof involves some ideas from nonstandard analysis. The relevant reference is:
L. Kramer, S. Shelah, K. Tent and S. Thomas Asymptotic cones of finitely presented groups, Advances in Mathematics 193 (2005), 142-173.
CH is continuum hypothesis? Wow, that's quite a remarkable result. – David Carchedi May 27 2010 at 21:27
Yes, $CH$ is the continuum hypothesis. The corresponding result is probably also true of the non-uniform lattice $SL_{3}(\mathbb{Z})$ ... but this question has been open for some years now. – Simon Thomas May 27 2010 at 21:29
My feeling, which may be ignorant, is that these intuitions go all the way back to Leibniz. There "point" was in some way rid of a silly definition like "position but no magnitude", and was replaced by a "sequence of more and more accurate propositions". What "proposition" means to logicians has moved on since then (Frege). But Leibniz's "principle of indiscernibles" states that if A and B are different, then something is true of A and not of B. An early separation axiom. His point-like things became objects of metaphysics, but no one's perfect.
If model theory worked more explicitly with a "space of models", which no doubt for good reasons it doesn't, the analogy would be clearer to everyone. For the logical reading of sheaf theory and topos theory, the way of equating open sets with propositions is rather fundamental, though tacit.
You've stated the indiscernibility of identicals (contrapositive). $(F)x=y\supset (Fx\equiv Fy)$ I believe Leibniz's law is the identity of indiscernibles. $(F)(Fx\equiv Fy)\supset x=y$ Very interesting point about separation. – Jeremy Shipley Jun 6 2010 at 16:09
I see that I messed up that first formula. The quantifier should be in the consequent, of course. – Jeremy Shipley Jun 6 2010 at 18:48
The book Stone Spaces by Peter T. Johnstone could be one place to begin your search. It investigates deeply one connection between topology and logic. Topology via Logic by Steven Vickers covers similar territory. (I could say more, but both books are in my home library.)
Thanks, I'll check it out! – David Carchedi May 27 2010 at 13:16
There is a rising use of ultrafilters in number theory, particularly additive number theory, as topological tools are being used to either simplify previous results or to develop new ones. Melvyn Nathanson recently gave a pair of as-yet-unpublished lectures describing Glazer's use of ultrafilters in giving a vastly simplified proof of Hindman's theorem. I plan to pursue this line of research this summer and I hope to make it the basis of my first published results by fall. Stay tuned...
Andrew, since I think you are in New York City, drop me an email if you'd like to meet up and discuss ultraproducts. – Joel David Hamkins May 28 2010 at 2:30
I certainly am, Joel, and if you're at the City University of New York Graduate Center, I'd love to meet you for coffee to talk about it later this summer! – Andrew L May 28 2010 at 3:49
|
http://unapologetic.wordpress.com/2009/04/23/
|
# The Unapologetic Mathematician
## The Polarization Identities
If we have an inner product on a real or complex vector space, we get a notion of length called a “norm”. It turns out that the norm completely determines the inner product.
Let’s take the sum of two vectors $v$ and $w$. We can calculate its norm-squared as usual:
$\displaystyle\begin{aligned}\lVert v+w\rVert^2&=\langle v+w,v+w\rangle\\&=\langle v,v\rangle+\langle v,w\rangle+\langle w,v\rangle+\langle w,w\rangle\\&=\lVert v\rVert^2+\lVert w\rVert^2+\langle v,w\rangle+\overline{\langle v,w\rangle}\\&=\lVert v\rVert^2+\lVert w\rVert^2+2\Re\left(\langle v,w\rangle\right)\end{aligned}$
where $\Re(z)$ denotes the real part of the complex number $z$. If $z$ is already a real number, it does nothing.
So we can rewrite this equation as
$\displaystyle\Re\left(\langle v,w\rangle\right)=\frac{1}{2}\left(\lVert v+w\rVert^2-\lVert v\rVert^2-\lVert w\rVert^2\right)$
If we’re working over a real vector space, this is the inner product itself. Over a complex vector space, this only gives us the real part of the inner product. But all is not lost! We can also work out
$\displaystyle\begin{aligned}\lVert v+iw\rVert^2&=\langle v+iw,v+iw\rangle\\&=\langle v,v\rangle+\langle v,iw\rangle+\langle iw,v\rangle+\langle iw,iw\rangle\\&=\lVert v\rVert^2+\lVert iw\rVert^2+\langle v,iw\rangle+\overline{\langle v,iw\rangle}\\&=\lVert v\rVert^2+\lVert w\rVert^2+2\Re\left(i\langle v,w\rangle\right)\\&=\lVert v\rVert^2+\lVert w\rVert^2-2\Im\left(\langle v,w\rangle\right)\end{aligned}$
where $\Im(z)$ denotes the imaginary part of the complex number $z$. The last equality holds because
$\displaystyle\Re\left(i(a+bi)\right)=\Re(ai-b)=-b=-\Im(a+bi)$
so we can write
$\displaystyle\Im\left(\langle v,w\rangle\right)=\frac{1}{2}\left(\lVert v\rVert^2+\lVert w\rVert^2-\lVert v+iw\rVert^2\right)$
We can also write these identities out in a couple other ways. If we started with $v-w$, we could find the identities
$\displaystyle\Re\left(\langle v,w\rangle\right)=\frac{1}{2}\left(\lVert v\rVert^2+\lVert w\rVert^2-\lVert v-w\rVert^2\right)$
$\displaystyle\Im\left(\langle v,w\rangle\right)=\frac{1}{2}\left(\lVert v-iw\rVert^2-\lVert v\rVert^2-\lVert w\rVert^2\right)$
Or we could combine both forms above to write
$\displaystyle\Re\left(\langle v,w\rangle\right)=\frac{1}{4}\left(\lVert v+w\rVert^2-\lVert v-w\rVert^2\right)$
$\displaystyle\Im\left(\langle v,w\rangle\right)=\frac{1}{4}\left(\lVert v-iw\rVert^2-\lVert v+iw\rVert^2\right)$
In all these ways we see that not only does an inner product on a real or complex vector space give us a norm, but the resulting norm completely determines the inner product. Different inner products necessarily give rise to different norms.
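Here is a quick numerical check of the last two identities (an added sketch in Python/NumPy, not part of the original post; note that `np.vdot` conjugates its first argument, matching the convention above that the inner product is conjugate-linear in the first slot):

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=4) + 1j * rng.normal(size=4)
w = rng.normal(size=4) + 1j * rng.normal(size=4)

ip = np.vdot(v, w)                      # <v, w>, conjugate-linear in v
nsq = lambda x: np.linalg.norm(x) ** 2  # squared norm

re = (nsq(v + w) - nsq(v - w)) / 4            # real part, quarter polarization
im = (nsq(v - 1j * w) - nsq(v + 1j * w)) / 4  # imaginary part
assert np.isclose(ip, re + 1j * im)
```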
Posted by John Armstrong | Algebra, Linear Algebra | 5 Comments
|
http://stats.stackexchange.com/questions/8724/exploring-data-attributes
|
# Exploring data attributes
I have a database with many attributes. I would like to know which attributes have the minimum variation in the data. Is there some standard technique? It should be like clustering, but without splitting the records into clusters. I would like to know what the records in a particular cluster have in common.
I was going to compute the mean ($\bar{x}$) and st.d. ($s$) for each continuous attribute $x$. After computing the coefficient of variation $CV=\frac{s}{\bar{x}}$ I would say that attributes with $CV\leq0.1$ are the similar ones. For categorical ones I would choose attributes with more than $90\%$ relative frequency for the mode.
Is there some standard technique?
## 1 Answer
It reminds me of what is implemented in the caret package for data pre-processing. It is fully described in one of the accompanying vignette, namely Data Sets and Miscellaneous Functions in the caret Package. What is actually done is to identify predictors that have low variance in the full dataset, as you described, whether it be a continuous or a categorical feature. They compute:
• the frequency of the most prevalent value over the second most frequent value (termed "frequency ratio"),
• the proportion of unique values (subject-wise),
considering that
If the frequency ratio is less than a pre–specified threshold and the unique value percentage is less than a threshold, we might consider a predictor to be near zero–variance. (p. 5, emphasis is mine)
The rationale is that near-zero variance predictors may have exact zero variance when using cross-validation, or induce model instability. They also address the problem of collinearity, but then this really is a matter of statistical modeling (some models, like classical regression models, don't accommodate well correlated predictors because it will inflates standard error of regression coefficients; others don't care about that).
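In practice this screening is a single function call (an added R sketch, not from the original answer; `df` stands for a hypothetical data frame, and the cut-offs shown are, to the best of my knowledge, caret's defaults):

```r
library(caret)

nzv <- nearZeroVar(df, freqCut = 95/5, uniqueCut = 10, saveMetrics = TRUE)
nzv[nzv$nzv, ]         # variables flagged as near zero-variance
df2 <- df[, !nzv$nzv]  # drop them before modelling
```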
Besides screening those low informative predictors, you can also use a hierarchical clustering method (by variables, not by individuals) to see how it goes. This is often used for studying missing data patterns (i.e., where we are interested in examining which variables are consistently showing an increased number of missing responses across all samples, or a particular subgroup).
Nice answer! In the last hint: could it be necessary to transform categorical attributes into numerical ones? I think in missing-data-pattern analysis the db is a 0/1 db (0 = non-missing, 1 = missing). – Simone Mar 24 '11 at 13:10
@Simone Yes, we can work with a binary indicator for missingness or low variance (with a constant cut-off), but I guess you could also use a numerical summary of "variable sparsity" as defined above (which would work for any kind of variables) – chl♦ Mar 24 '11 at 13:15
Sorry, but maybe I didn't get it. I should transpose my db and then add 1 for each missing value and 0 otherwise. It works for a missing values analysis. In my case, I should compute the mean for each attribute and then for each value put 0 or 1 due to a numerical summary (eg if it is the most frequent value I choose 1 else 0?). Thanks. – Simone Mar 24 '11 at 14:18
@Simone I originally meant recode as 0/1 depending on whether your variables are below or above your fixed threshold for deciding of low-variance, or just use a numerical value (e.g., the frequency ratio cited above, or the CV you proposed) that reflects the amount of variability present in here (a numeric variable with 2 or 3 unique observed values or a categorical variable with only one modality present would be what I call a poorly discriminative variable, it is not necessarily uninformative) -- this is for examining structured patterns of sparsity, if any. – chl♦ Mar 24 '11 at 14:49
let me stop bothering you: each value of an attribute gets 0 if it's below a threshold, otherwise 1? I was wondering if I should replace each value for each attribute in a different way. If we moved to giving the same value for an attribute, it would be nice to substitute them with a frequency ratio to obtain a sort of lattice of clusters, i.e. attributes with similar frequency ratio. Am I right? Thanks. – Simone Mar 24 '11 at 15:10
|
http://mathoverflow.net/questions/25092?sort=oldest
|
## Can all induced maps be described categorically? (or at least as generally as possible)
Hi: I am new here. I went over the FAQs; still, sorry if I break protocol.

I am pretty confused about induced maps in different areas of algebraic topology; I do know how these induced maps are defined in many cases, but I definitely do not understand well enough the rules governing when a map between two topological spaces X, Y induces a map in homology or homotopy. AFAIK, if we have a map $f:X\to Y$, and this map takes cycles to cycles and boundaries to boundaries, then this map "passes to homology" (not clear what that means).

The problem (at least to me) is that this word "induced" seems to be overused (in the sense that its meaning does not always seem clear): induced quotients, induced homomorphisms, induced bundles, etc. So: does anyone know if induced maps can be described categorically, or at least, could someone please explain when a given map between topological spaces induces a map on homology or cohomology?

I think there is some underlying algebraic result dealing with normal subgroups (which extends to any subgroup in homology, since chain groups are abelian), but I am not too sure of this.

Thanks for any help.
## 2 Answers
The key word in this context is functor. The point is that homology, homotopy etc. are functors. For example consider homology $H_n$. This is a functor from the category of topological spaces to the category of Abelian groups. In categories, although the usual notation obscures it, morphisms are more important than objects. It is crucial to define the action of the functor on morphisms. Returning to our example, to define the functor $H_n$ we need to define an Abelian group $H_n(X)$ for each topological space $X$, and for each continuous map $f:X\to Y$ a group homomorphism $H_n(f):H_n(X)\to H_n(Y)$. These maps $H_n(f)$ must preserve composition and identities. In practice, we don't use the notation $H_n(f)$ for these maps but typically use alternatives like $f_*$ which is quicker to write, but less informative. Similarly homotopy groups and cohomology groups form functors on suitable categories.
There is a whole algebra of categories, functors and more which is dealt with in texts on category theory. For instance, the composition of two functors is a functor. In our example, some texts see the homology group functor as a composite of a functor from topological spaces to chain complexes and the functor taking a chain complex to its homology groups. Also, texts on topology vary in the detail in which they explain the construction of the maps I've denoted as $H_n(f)$; some go into lots of detail while others wave their hands more. In general, once they have done some examples in detail, they tend not to go into so much detail in subsequent examples, as they assume that the reader can now fill in more details.
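In symbols (a recap added for reference, not part of the original answer): functoriality of $H_n$ means that $H_n(g\circ f) = H_n(g)\circ H_n(f)$ for all continuous maps $f:X\to Y$ and $g:Y\to Z$, and that $H_n(\mathrm{id}_X) = \mathrm{id}_{H_n(X)}$; in the shorthand notation these are the familiar rules $(g\circ f)_* = g_*\circ f_*$ and $(\mathrm{id}_X)_* = \mathrm{id}$.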
Thanks: I have a chicken-and-egg confusion here when going beyond (co)homology, though: I know that we define (Eilenberg-Steenrod) (co)homology to be a functor; there are other cases, though, in which we may not know in advance whether we have a functor: given any linear map between vector spaces V, W, we get a map W*->V* (I think V->V* is a functor). Given a map between manifolds M, N, we get an induced map between the respective tangent spaces T_pM, T_pN (where is the functor?). Do these (and all other) induced maps also follow from functoriality? Also, once we have functoriality, how do we define f*? Thanks. – confused May 23 2010 at 4:55
You are raising a lot of questions here. The Eilenberg-Steenrod axioms do not define the (co)homology functors; they axiomatize them. Dualisation of vector spaces is a contravariant functor. You did not mention manifolds originally, but a map $f$ between two smooth manifolds induces linear maps from $T_p(M)$ to $T_{f(p)}(N)$, which can be regarded as a functor. The domain category is the category of manifolds with a base point, with base-point preserving maps, and the codomain category is that of vector spaces. – Robin Chapman May 23 2010 at 10:32
Let me add that the answer to your question "when does a map between spaces induce a map in homology/cohomology/homotopy?" is "always" (as long as you stick to continuous maps...).
In fact (say for homology), if you have a continuous $f:X\to Y$, the induced homomorphism of chain complexes $C_\bullet(f):C_\bullet(X)\to C_\bullet(Y)$ automatically sends cycles to cycles and boundaries to boundaries (simply because it is compatible with the differentials of the two complexes, in the sense that $f\circ d_X=d_Y\circ f$).
The fact that it "passes" to homology is now an algebraic fact, namely the fact that if you have four abelian groups $A,B,C,D$ and three homomorphisms $f:A\to B, g:A\to C, h:B\to D$ with $g$ and $h$ surjective (so $C$ is a quotient of $A$ and $D$ is a quotient of $B$), then you can find $f':C\to D$ such that $f'\circ g=h\circ f$ if and only if $f(\ker(g))\subseteq \ker(h)$. (you might want to draw a diagram here :D)
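Here is the diagram in question (drawn out for convenience):

$$\begin{array}{ccc} A & \stackrel{f}{\longrightarrow} & B \\ {\scriptstyle g}\downarrow & & \downarrow{\scriptstyle h} \\ C & \stackrel{f'}{\longrightarrow} & D \end{array}$$

The condition $f(\ker(g))\subseteq\ker(h)$ is exactly what makes the formula $f'(g(a)) = h(f(a))$ well defined.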
Take $A=Z_n(X), B=Z_n(Y), C=H_n(X)=Z_n(X)/B_n(X), D=H_n(Y)=Z_n(Y)/B_n(Y)$, where $Z_\bullet=$cycles and $B_\bullet=$ boundaries, as usual, and you get your induced map in homology.
Sorry, but my algebra is a bit weak (I am a beginner, please be patient): I know that a map passes to the quotient if it is constant on equivalence classes; I know the condition needed to make triangular diagrams commute, but I do not understand the precise meaning of "passing to homology": you mention that cycles are sent to cycles, but I don't see a map sending B_n(X) to B_n(Y). Would you please define the meaning of "passing to homology" or give a reference? I know it must have to do with: 1) f(B_n(X)) < B_n(Y), 2) f(Z_n(X)) < Z_n(Y), and 3) your condition on kernels. Then what? Thanks. – confused May 23 2010 at 4:59
"passing to homology" is just a way to say that the various maps C_n(f):C_n(X)->C_n(Y) induce maps in homology H_n(f):H_n(X)->H_n(Y), as in my answer above. it is defined simply like this: given an element a of H_n(X) you take a cycle c in Z_n(X) such that a=[c] in H_n(X)=Z_n(X)/B_n(X), then you apply C_n(f) to c and take its class in H_n(Y)=Z_n(Y)/B_n(Y). the facts that you mentioned above assure that this is well-defined. – Mattia Talpo May 23 2010 at 20:47
|
http://www.reference.com/browse/Fourier+transform+on+finite+groups
|
# Fourier transform on finite groups
In mathematics, the Fourier transform on finite groups is a generalization of the discrete Fourier transform from cyclic to arbitrary finite groups.
## Definitions
The Fourier transform of a function $f : G \rightarrow \mathbb{C}$ at a representation $\rho$ of $G$ is

$$\widehat{f}(\rho) = \sum_{a \in G} f(a)\, \rho(a).$$

So for each representation $\rho$ of $G$, $\widehat{f}(\rho)$ is a $d_\rho \times d_\rho$ matrix, where $d_\rho$ is the degree of $\rho$.
Let $\rho_i$ be the irreducible representations of $G$. Then the inverse Fourier transform at an element $a$ of $G$ is given by

$$f(a) = \frac{1}{|G|} \sum_i d_i \operatorname{Tr}\left(\rho_i(a^{-1})\, \widehat{f}(\rho_i)\right),$$

where $d_i$ is the degree of the representation $\rho_i$.
## Properties
### Transform of a convolution
The convolution of two functions $f, g : G \rightarrow \mathbb{C}$ is defined as

$$(f \ast g)(a) = \sum_{b \in G} f(ab^{-1})\, g(b).$$
The Fourier transform of a convolution at any representation $\rho$ of $G$ is given by

$$\widehat{f \ast g}(\rho) = \widehat{f}(\rho)\, \widehat{g}(\rho).$$
### Plancherel formula
For functions $f, g : G \rightarrow \mathbb{C}$, the Plancherel formula states

$$\sum_{a \in G} f(a^{-1})\, g(a) = \frac{1}{|G|} \sum_i d_i \operatorname{Tr}\left(\widehat{f}(\rho_i)\, \widehat{g}(\rho_i)\right),$$

where $\rho_i$ are the irreducible representations of $G$.
## Fourier transform on finite abelian groups
Since the irreducible representations of finite abelian groups are all of degree 1 and hence equal to the irreducible characters of the group, Fourier analysis on finite abelian groups is significantly simplified. For instance, the Fourier transform yields a scalar- and not matrix-valued function.
Furthermore, the irreducible characters of a group may be put in one-to-one correspondence with the elements of the group.
Therefore, we may define the Fourier transform for finite abelian groups as

$$\widehat{f}(s) = \sum_{a \in G} f(a)\, \overline{\chi_s}(a).$$
Note that the right-hand side is simply $\langle f, \chi_s \rangle$ for the inner product on the vector space of functions from $G$ to $\mathbb{C}$ defined by

$$\langle f, g \rangle = \sum_{a \in G} f(a)\, \overline{g}(a).$$
The inverse Fourier transform is then given by

$$f(a) = \frac{1}{|G|} \sum_{s \in G} \widehat{f}(s)\, \chi_s(a).$$
A property that is often useful in probability is that the Fourier transform of the uniform distribution is simply $\delta_{a,0}$, where $0$ is the group identity and $\delta_{i,j}$ is the Kronecker delta.
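For the cyclic group $\mathbb{Z}/n\mathbb{Z}$, whose characters are $\chi_s(a) = e^{2\pi i sa/n}$, all of this reduces to the usual discrete Fourier transform. A small numerical sketch (added for illustration, not part of the original article) checking the convolution and inversion formulas:

```python
import numpy as np

n = 8
s = np.arange(n)
chi = np.exp(2j * np.pi * np.outer(s, s) / n)  # chi[s, a] = chi_s(a)

rng = np.random.default_rng(0)
f, g = rng.normal(size=n), rng.normal(size=n)

fhat = chi.conj() @ f  # hat f(s) = sum_a f(a) * conj(chi_s(a))
ghat = chi.conj() @ g

# group convolution: (f * g)(a) = sum_b f(a - b) g(b), indices mod n
conv = np.array([sum(f[(a - b) % n] * g[b] for b in range(n)) for a in range(n)])

assert np.allclose(chi.conj() @ conv, fhat * ghat)  # transform of a convolution
assert np.allclose(chi @ fhat / n, f)               # inversion formula
```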
## Applications
This generalization of the discrete Fourier transform is used in numerical analysis. A circulant matrix is a matrix where every column is a cyclic shift of the previous one. Circulant matrices can be diagonalized quickly using the fast Fourier transform, and this yields a fast method for solving systems of linear equations with circulant matrices. Similarly, the Fourier transform on arbitrary groups can be used to give fast algorithms for matrices with other symmetries. These algorithms can be used for the construction of numerical methods for solving partial differential equations that preserve the symmetries of the equations.
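For example (an added sketch, not from the article; `scipy.linalg.circulant` builds a circulant matrix from its first column):

```python
import numpy as np
from scipy.linalg import circulant

c = np.array([4.0, 1.0, 0.0, 1.0])  # first column of the circulant matrix
b = np.array([1.0, 2.0, 3.0, 4.0])

# eigenvalues of a circulant matrix are the DFT of its first column,
# so C x = b diagonalizes to an elementwise division: an O(n log n) solve
x = np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)).real
assert np.allclose(circulant(c) @ x, b)
```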
## References
• Diaconis, P. (1988). Group Representations in Probability and Statistics. Lecture Notes — Monograph Series, Vol. 11. Hayward, California: Institute of Mathematical Statistics.
• Diaconis, P. (1991). "Finite Fourier Methods: Access to Tools." In Probabilistic Combinatorics and its Applications, Proceedings of Symposia in Applied Mathematics, Vol. 44. Bollobás, B., and Chung, F. R. K. (ed.).
|
http://mathhelpforum.com/advanced-algebra/71654-linearity-matrix-functions-print.html
|
# Linearity / Matrix functions?
• February 3rd 2009, 06:06 PM
Unenlightened
Linearity / Matrix functions?
I shouldn't have left it so late to ask this, but the answers to any of these would be most helpful...
Is the function mapping a matrix A to the determinant of A linear?
I'm thinking it's non-linear, but simply because I can't think of a transformation matrix that could map the 2x2 matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ to $ad - bc$...
Is the operation mapping f to f''+3f' linear?
(Where f is over the collection of all infinitely differentiable functions)
I know differentiation is linear, but does it hold for two separate derivatives added together?
Fine, A(cx + dy) = c(Ax)+d(Ay) if and only if the function is linear, but how does one actually go about picking these x and y s, or creating a matrix A to test one's theory?
Is a function mapping f to its second derivative linear?
Aye, I'm presuming this is linear, since differentiation is linear (although I'm not sure how to explain that...)
Is the function mapping a matrix A to its trace linear?
This one is linear, right? Because you're just adding two entries together. Again, I'm not sure how to construct the function as a matrix...
Is the function mapping x to 3x + 2 linear?
This is a straight out 'no', right? 3x is fine, but you can't just add a constant like that, right?
Let C = [[1,2],[3,4]]. Is the function mapping A to AC-CA linear?
Again, I'm guessing it's not, for similar reasons to the previous...
Any help on any of these much appreciated.
Thanks in advance...
• February 3rd 2009, 06:32 PM
Isomorphism
Quote:
Originally Posted by Unenlightened
I shouldn't have left it so late to ask this, but the answers to any of these would be most helpful...
Is the function mapping a matrix A to the determinant of A linear?
Idea: What is
$\det( 3A )$?
Quote:
Is the operation mapping f to f''+3f' linear?
(Where f is over the collection of all infinitely differentiable functions)
I know differentiation is linear, but does it hold for two separate derivatives added together?
Fine, A(cx + dy) = c(Ax)+d(Ay) if and only if the function is linear, but how does one actually go about picking these x and y s, or creating a matrix A to test one's theory?
Quote:
Is a function mapping f to its second derivative linear?
Aye, I'm presuming this is linear, since differentiation is linear (although I'm not sure how to explain that...)
For both these questions, you don't need matrices... Check: if $f$ and $g$ are mapped as above, is $f+g$ mapped as above?
Quote:
Is the function mapping a matrix A to its trace linear?
This one is linear, right? Because you're just adding two entries together. Again, I'm not sure how to construct the function as a matrix...
Yes it is. Rigorously you say: $\text{Tr }(\alpha A + \beta B) = \alpha \text{Tr } (A) + \beta \text{Tr }(B)$
Quote:
Is the function mapping x to 3x + 2 linear?
This is a straight out 'no', right? 3x is fine, but you can't just add a constant like that, right?
Yes. Show that 0 does not map to 0, thus it is not linear
Quote:
Let C = [[1,2],[3,4]]. Is the function mapping A to AC-CA linear?
Again, I'm guessing it's not, for similar reasons to the previous...
This question is not clear :(
• February 3rd 2009, 07:10 PM
Unenlightened
Thankee koindly :)
Sorry about the last one - it's supposed to be the matrix
(1 2)
(3 4)
And the function is to map A onto A*C - C*A...
Ooh also
How about the function mapping A to A transpose?
Non-linear also?
• February 4th 2009, 10:14 PM
Isomorphism
Quote:
Originally Posted by Unenlightened
Thankee koindly :)
Sorry about the last one - it's supposed to be the matrix
(1 2)
(3 4)
And the function is to map A onto A*C - C*A...
Ooh also
How about the function mapping A to A transpose?
Non-linear also?
They are all linear. Just apply the definition of linearity to get the answer :)
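If you want to convince yourself numerically, here is a small check (an added Python/NumPy sketch, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
C = np.array([[1.0, 2.0], [3.0, 4.0]])
A, B = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
a, b = 2.0, -3.0

T = lambda M: M @ C - C @ M  # the map A -> AC - CA
assert np.allclose(T(a * A + b * B), a * T(A) + b * T(B))
assert np.allclose((a * A + b * B).T, a * A.T + b * B.T)  # transpose is linear too
```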
|
http://en.m.wikibooks.org/wiki/User:Daviddaved/On_inhomogeneous_string_of_Krein
|
# User:Daviddaved/On inhomogeneous string of Krein
The following physical model of a vibrating inhomogeneous string (or a string with beads), due to Krein, provides a mechanical interpretation for the study of Stieltjes continued fractions. The model is one-dimensional, but it arises as a restriction of n-dimensional inverse problems with rotational symmetry.
The string is represented by a non-decreasing positive mass function m(x) on a possibly infinite interval [0, l]. The right end of the string is fixed. The ratio of the forced oscillation to an applied periodic force at the left end of the string is a function of frequency, called the coefficient of dynamic compliance of the string.
The small vertical vibration of the string is described by the following differential equation:
$\frac{1}{\rho(x)}\frac{\partial^2 f(x,\lambda)}{\partial x^2}=\lambda f(x, \lambda),$
where
$\rho(x) = \frac{dm}{dx}$
is the density of the string, possibly including atomic masses. One can express the coefficient in terms of the fundamental solution of the ODE:
$H(\lambda) = \frac{f'(0,\lambda)}{f(0,\lambda)},$
where $f(l,\lambda) = 0$.
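To make the definitions concrete, here is a numerical sketch (an added illustration in Python/SciPy; the density and the values of $l$ and $\lambda$ are arbitrary sample choices, not from the text). Integrate the equation as written above from $x=l$ backwards with $f(l)=0$ and $f'(l)=1$ (the normalization cancels in the ratio), then form $H(\lambda)$:

```python
from scipy.integrate import solve_ivp

rho = lambda x: 1.0 + x  # example density
l, lam = 1.0, -2.0       # sample endpoint and spectral parameter

# f'' = lam * rho(x) * f, integrated from x = l down to x = 0
sol = solve_ivp(lambda x, y: [y[1], lam * rho(x) * y[0]],
                (l, 0.0), [0.0, 1.0], rtol=1e-10, atol=1e-12)
f0, fp0 = sol.y[0, -1], sol.y[1, -1]
print(fp0 / f0)          # H(lam) = f'(0, lam) / f(0, lam)
```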
A fundamental theorem of Krein and Kac (see [10] and also [19]) essentially states that an analytic function $H(\lambda)$ is the coefficient of dynamic compliance of a string if and only if the function
$\beta(\lambda) = \lambda H(-\lambda^2)$
is an analytic automorphism of the right half-plane C+, that is real on the real line.
Exercise (**). Use the theorem above, the Fourier transform, and a change of variables to characterize the set of Dirichlet-to-Neumann maps for a unit disc with conductivity depending only on the radius.
|
http://advogato.org/person/vicious/diary.html?start=328
|
# Older blog entries for vicious (starting at number 328)
Yet another new section in DE book
In trying to avoid bad moods and keep stress levels down, people turn to hobbies. One of my hobbies is working on my textbooks, so I have written a new section on the Dirichlet problem for the Laplace equation in the circle for the differential equations book. See the draft section. The previous section 4.9 is OK, but the solution is far more natural in the circle in polar coordinates than in a square; that is, we obtain
$\displaystyle u(r,\theta) = \frac{a_0}{2} + \sum_{n=1}^\infty \left( a_n r^n \cos(n\theta) + b_n r^n \sin(n\theta) \right)$
And then we can derive the Poisson formula which is just cool. Also it’s a good example showing more complicated change of variables since we do it in polar, and also it shows a somewhat more complicated and different separation variables.
Part of the motivation was that I did this topic in my PDE class so I had lecture notes and it really felt right for that point in the book, even to leave it as reading to interested students. The other part is that I have been improving the graphing ability of genius so I can do polar coordinates for example:
That’s the graph of the solution $u(r,\theta) = r^{10} \cos(10\, \theta)$, showing that high frequency on the boundary means fast decay as you go closer to the center of your domain.
Though there is no UI for polar coordinates, there is just a function that allows you to plot arbitrary surface data now. Notice how it’s not graphed on a square grid, but above the disc. Also notice that internal rings have fewer points on them; that’s because I just compute fewer values at smaller radii (remember, I am passing in arbitrary data, a list of triples (x,y,z)). This will be in version 1.0.16, which should come out at the end of next week sometime (have to let translators have a go at it). Actually the reason for doing this work on genius was not polar coordinates but showing numerical solutions in my PDE class. It’s just that one of my test cases was polar coordinates and so it just clicked and I thought: I have to write up this section, it’s just too cool to pass up and I can make the graphs now.
This brings the number of pages in the DE book to 315, and the number of exercises to 533. Yaikes! It’s become a beast. I’ll make the new version in a week or two so that it’s usable for next semester (so if you have comments on the new section, do let me know quickly).
I think now a two-semester course could possibly be run out of the book. What’s going to be added in the new version is essentially about 5-6 lectures. At my speed the whole thing is now approximately 65 lectures, so a bit less than two semesters’ worth, but if you go just a tad slower (as many people do), do more examples, and if you factor in exams, reviews, quizzes, etc… it’s just right I think. You’ve got lots of room to spare if you want a two-quarter course.
Syndicated 2012-12-09 07:31:39 from The Spectre of Math
Bad memory
So I just remembered: it wasn’t that we thought the computation (see a previous post) would take half a year, it would take 450 days on a 3 GHz CPU. I guess my memory was being optimistic. I remembered “half a year” when it was really “a year and a half”. OK, so the computation has now been running for a bit over 2 weeks on 4 cores. I guess I’m at least 10% there (I hope). It looks a bit worse from the output. It doesn’t seem computers have gotten all that much faster (not at all, it seems, at least on the load I am trying to run) in the past few years. The only thing better is more cores.
Syndicated 2012-12-07 06:47:54 from The Spectre of Math
Frobenius method and Bessel functions
I had occasion to talk about Bessel functions and mention the Frobenius method in my PDE class and I realized that I do not have any mention of this in the book. This was the section I did not quite get to when teaching at UCSD, so it never got written. Well, worry no more. I’ve written up a draft version of the section. This will appear in the next version of the book whenever I make it, though if you do have comments, do let me know. It’s good to catch typos or make changes now.
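For reference (standard notation, not quoted from the new section): the Frobenius method handles equations with a regular singular point, the model case being Bessel's equation of order $p$,

$\displaystyle x^2 y'' + x y' + (x^2 - p^2) y = 0 ,$

where one seeks a solution of the form $\displaystyle y = x^r \sum_{k=0}^\infty a_k x^k$ and the indicial equation forces $r = \pm p$.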
This brings the number of pages to 307 with the new delta function section, and the number of exercises to 521. Yay!
This also made me realize that Genius did not have Bessel functions implemented. They were actually easy to implement as MPFR has them done. At least for integer orders and real values anyway. Then as my current working directory of genius was such a mess with trying to include LAPACK, I decided to remove LAPACK for now from the genius git. I think what I will do is link to the fortran version at some later point. It seems like the fortran LAPACK is available almost everywhere, so it should not be a bad new dependency. Much easier than trying to make the beast compile cleanly inside genius. Anyway, so Bessel functions will be in Genius, which I think I ought to make a release of soon as there are a bunch of other small changes to set upon the world.
Syndicated 2012-11-27 18:57:39 from The Spectre of Math
“Maxima is calculating”
So Friday afternoon I wanted to test for the existence of a certain mapping that takes one surface to another surface. Everything is algebraic, so one might assume that if a mapping exists it might actually be polynomial, and since everything is of low degree, the mapping might be as well. So I just set up brute force equations and tried an arbitrary degree 2 mapping. After a second or two, maxima returned no solutions to the resulting system. OK, so how about plugging in degree 3. It turns out I don’t need to test the linear terms, and there are 3 variables, so 16 variables per component, so I get an algebraic system in 48 variables. Sounds bad, but a lot of the equations become something of the form “x=0″. So I looked at a subset of the system. Already the generating of the equations took a few seconds. So I thought, this will take a few minutes. So I started “algsys” on the equations. Well, that was Wednesday afternoon. It is Sunday and the thing is still running. Unfortunately it just says “Maxima is calculating” in the wxMaxima window, so one has no clue if it will take another day or so, another year or so, or if the sun will implode first. I sort of have the feeling it is doing something stupid. Once I get more time for math on Monday, I’ll probably try to simplify the equations by hand first. I could also try for the solution (or lack thereof) numerically. In the meantime I’ll let it run. This is on my laptop, which is surely not meant as a computation machine. It’s only running on one core so it’s not heating up too badly. When I was running some computations for days in the summer on all four cores you could almost cook eggs on the keyboard.
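For anyone who has not used it: `algsys` takes a list of polynomial equations (or expressions implicitly set to zero) and a list of unknowns. A toy call (an added illustration, nothing like the 48-variable system above):

```
algsys([x^2 + y^2 - 1, x - y], [x, y]);
```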
On a related front, I decided that my work computer is sitting too idly, so I started the degree 19 calculation that we never did with Danny on our paper [1]. In 2008 we thought it would take at least half a year. Presumably the computers have gotten a tad quicker in the meantime (and I’m running it on 4 cores), so perhaps the result will come in sooner. Still, the progress seems slow from the output so far. It is a bit difficult to judge; I’ll try to estimate the time left more precisely later on, but just as a first guess from looking at the output I don’t think this will be done before Christmas.
There is something magical about pressing ENTER to start the computation you know will take months to complete. It is one of the few places where you really use the fact that you have a fast computer. Most computer power is totally wasted. So for example, in a somewhat similar time frame Firefox managed to get 70 minutes of CPU time (Maxima is up to 5208 now). Now that’s with only very occasional short browsing over the last few days. It seems mostly it’s the open tabs that eat up time, run the CPU, and heat our house. Come to think of it, my office will be quite warm I bet once I get there on Monday; I don’t think the heating runs on the room thermostat, as the switch on that thing is in the “off” position and it still heats the room. So with the added heating from 4 cores running at top speed and it being a small room, it should get toasty.
[1] Jiří Lebl and Daniel Lichtblau, Uniqueness of certain polynomials constant on a line, Linear Algebra and its Applications, 433 (2010), no. 4, 824-837, arXiv:0808.0284.
Syndicated 2012-11-25 15:57:59 from The Spectre of Math
News of Microsoft demise a bit premature and study habits of college students
There are apparently a number of people all excited about Microsoft now really destroying itself. Well, it still remains to be seen. But I think a good indication of where things are headed is college students. Since I have Google Analytics now on the textbook pages I can do some experiments. So for example essentially all the traffic from “Irvine” is from UCI students that look at the differential equations book. So I looked at the operating system usage from Irvine. Here are the results (note that this is a small sample, very unscientific): 72.3% Windows, 10.6% Mac, 10.4% iOS (iPad + iPhone), 3.7% Linux, 3.1% Android. Now, given that watching what people use on campus suggests mostly Macs, I think that gut feeling might be a bit skewed. On the other hand there could be computer labs that students use with no choice over the OS. So what conclusions could one draw? Windows is still dominant, by far. Mac is doing better, but actually quite a bit worse than one would expect among college students. Linux is doing a bit better than I would expect; it’s where Mac was just a few years ago. The interesting thing is also the iOS vs Android comparison. It seems from the news that Android phones have beaten iPhones in terms of market share, but here it doesn’t look like it. So that would indicate tablets are being used and the iPad still beats the Android tablets. Interestingly, 7.3% of visitors used 320×480 resolution, and that I guess means a phone. I can’t figure out how to break that down in Google Analytics. By the way, this means 7.3% are reading their textbook on their phone. This number may spike during exams. Let’s test this theory.
I don’t know how to draw this graph for Irvine only, so it could be other places as well. But look at this graph for the number of visits from phone-like resolution:
But let’s stop that cynical thinking about cheating: There were two exams at UBC (University of British Columbia) for two classes using the differential equations book, but they were on the 14th, and the spike is on the 13th, so the students were studying hard, not cheating. Well maybe studying from a pub so they needed to look at the textbook from a phone, but still.
Syndicated 2012-11-18 15:36:25 from The Spectre of Math
New section in differential equations book
I have finally finished a new section on the Dirac delta function for the differential equations textbook. Take a look at the draft version. Note that this is a draft, so it could have typos and could still change. If you have any comments, let me know, especially if you want to teach with it and would like to mention some detail I don’t mention right now. I will make a new version of the book including this section sometime in December, after the semester ends.
In other news, the differential equations textbook is now apparently the standard book for Math 3D at University of California at Irvine. It’s nice if people pick the book to teach out of for their class, but it’s even nicer if a department decides to standardize on the book. The real analysis book is for example the standard book at University of Pittsburgh, and they even made their own changes (adding some extra material), which is a really nice example of what can be done with free textbooks.
I also added Google Analytics to the pages so I can see where the traffic is coming from. If someone uses the books by printing out a copy for students or putting a PDF on their site, I can't quite see it, but if they simply link to my site it's fun to watch the traffic. As the differential equations book has an HTML version, a lot of students seem to use that rather than the PDF. I assume the PDF is just downloaded and I don't see traffic afterwards, but when they are using the HTML version, then of course they keep hitting my site. Currently there are several classes at Irvine and two classes at University of British Columbia that simply link to my site, and I get lots of traffic on the HTML version of the book. These students using the HTML version make up a large proportion of hits to my site. If you look at the map of which cities hits are coming from, there are two big circles, one over Irvine and one over Vancouver, and then lots of smaller circles distributed all over, mostly over English-speaking countries.
I am thinking I should make an HTML version of the real analysis textbook, but it's quite a bit of work to set things up for tex4ht, and always quite a bit of work when making a new version, so I have not yet gotten around to doing it. Also I am more worried about formulas coming out correctly. It would be nice to get something like MathJax working with tex4ht. Or some other solution, but I don't want to maintain two versions, so it would have to take the LaTeX source and produce the HTML, perhaps with a different style file. Anyway, for now it is images for equations, which do look bad when printed, but look OK on screen.
Syndicated 2012-11-15 19:44:27 from The Spectre of Math
Numerical range
I was fiddling with numerical range of two by two matrices so I modified my root testing python program to do this. The numerical range of $A$ is the set of all values
$\frac{v^* A v}{v^* v}$
for all nonzero vectors $v$. This set is compact (it can be seen as the image of the map $v \mapsto v^* A v$ restricted to the unit sphere $v^* v = 1$, which is a compact set). It is also convex, which is harder to show. For two by two matrices it is an elliptic disc (possibly a degenerate one).
See the result here, it plugs in random vectors and shows the result. Here’s an example plot for the matrix $A=\begin{bmatrix} 1 & i \\ 1 & -1 \end{bmatrix}$.
The code is really inefficient and eats up all your cpu. There’s no effort to optimize this.
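For anyone who wants to play with this without the GTK dependencies, here is a minimal sketch of the same sampling idea using numpy and matplotlib. This is not the program above, just an illustration of the technique; the matrix is the example from this post.

```python
# Sample the numerical range of a 2x2 matrix with random complex vectors
# and scatter-plot the resulting Rayleigh quotients.
import numpy as np
import matplotlib.pyplot as plt

A = np.array([[1, 1j], [1, -1]])  # the example matrix from the post

N = 20000
v = np.random.randn(2, N) + 1j * np.random.randn(2, N)
# w[k] = (v_k* A v_k) / (v_k* v_k) for each random vector v_k
w = np.einsum('ik,ij,jk->k', v.conj(), A, v) / np.einsum('ik,ik->k', v.conj(), v)

plt.scatter(w.real, w.imag, s=1, alpha=0.1)
plt.gca().set_aspect('equal')
plt.show()
```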
Syndicated 2012-11-09 00:06:40 from The Spectre of Math
Economy and elections
I have a theory as to why the economy improved over this summer and into the fall, which led to Obama winning the election. I bet a part of this was the money spent on the campaigns. That was 2 billion dollars that went to very targeted places like Ohio. No wonder the economy in Ohio is doing quite well. If it weren't for the election, Sheldon Adelson would not have spent 100 million on random stuff over that period; he would have sat on the money. This way he spent it to elect Romney, improving the economy in battleground states, which led to Obama winning.
Yes it is a bit of a stretch, but it should not be totally dismissed. Apparently the campaigns spent approximately 190 million just in Ohio [1]. That means that GDP of Ohio went up by 0.04 percent just because of the election (the GDP is 477 billion [2]). That’s not much but it’s not negligible. Also note that it wasn’t spread out over the whole year it was rather concentrated. Further note that state spending is 26 billion [2], so this is 0.73 percent of what the state spends in a given year. If the state gets say 5 percent of that money in various taxes (just pulling a number out of a hat; a reasonable estimate in my layman opinion based on state budget versus GDP) that would mean approximately 10 million extra tax revenue for the state. Not at all bad.
So, Sheldon Adelson was really rooting for Obama! Sneaky way to do it too.
[1] http://www.nationaljournal.com/hotline/ad-spending-in-presidential-battleground-states-20120620
[2] http://en.wikipedia.org/wiki/Economy_of_Ohio
Syndicated 2012-11-08 18:55:26 from The Spectre of Math
Linus has way too much time on his hands
So the latest news is that Linus has switched to KDE. This apparently after first switching to XFCE, then I guess back to GNOME. Hmmm.
I’m still on XFCE. Can’t be bothered to try anything else. Yes XFCE is somewhat sucky, but once you fix its stupidities (such as the filemanager taking a minute to start up due to some vfs snafu that’s been apparently around forever), it’s there. I’ve entertained the thought of trying something else, but it’s not an exciting enough proposition.
Now I am wondering what to do once Fedora 16 stops being supported. Should I spend the afternoon upgrading to 18? The issue is that I can't do the normal upgrade thing, since that would boot into its own environment and would not load a necessary module that I load on startup to turn off the bad nvidia card with a screwed-up heatsink. It's impossible to do this in BIOS (stupid, stupid Lenovo; never buying another Lenovo again). Anyway, that means having to do it right after boot, but before the GUI comes up, since that would (even if using the intel card) turn the laptop into a portable oven, and nowadays it will just turn off and die. I am thinking that maybe if the upgrade happens during the wintertime, I could just stick the laptop on snow (and wait till it's at least 20 below freezing) and then it could stay sane for the duration of the upgrade, perhaps. I will probably try to do the upgrade by yum only, but that seems like it could be bug prone and would require some manual tinkering, and I just don't care enough to do that.
Next time picking a distro I’m going with something LTS I think. And … Get off my lawn!!!
Syndicated 2012-11-03 20:44:28 from The Spectre of Math
Visualizing complex singularities
I needed a way to visualize which values of $t$ get hit for a polynomial such as $t^2+zt+z=0$ when $z$ ranges over a simple set such as a square or a circle. That is, this is really a generically two-valued function above the $z$ plane. Of course we can't just graph it, since we don't have 4 real dimensions ($t$ and $z$ are of course complex). For each complex $z$, there are generically two complex $t$ above it.
So instead of looking for existing solutions (boring, surely there is a much more refined tool out there) I decided it was the perfect time to learn a bit of Python and check out how it does math. Surprisingly well, it turns out. Look at the code yourself. You will need numpy, cairo, and pygobject. I think everything except numpy was installed on Fedora. To change the polynomial or drawing parameters you need to change the code. It's not really documented, but it should not be too hard to find where to change things. It's less than 150 lines long, and you should take into consideration that I had never before written a line of Python code, so there might be some things which are ugly. I did have the advantage of knowing GTK, though I never used Cairo before and only vaguely knew how it works. It's probably an hour or two's worth of coding; the rest of yesterday afternoon was spent playing around with different polynomials.
What it does is randomly pick $z$ points in a rectangle, by default with real and imaginary parts going from -1 to 1. Each $z$ point has a certain color assigned. On the left hand side of the plot you can see the points picked along with their colors. Then it solves the polynomial and plots the two solutions (or more, if the polynomial is of higher degree) on the right with those colors. It uses the alpha channel on the right so that you get an idea of how often a certain point is picked. Anyway, here is the resulting plot for the polynomial given above:
I am glad to report (or not glad, depending on your point of view) that using the code I did find a counterexample to a lemma I was trying to prove. In fact the counterexample is essentially the polynomial above. That is, I was thinking you'd probably have to hit every $t$ inside the “outline” of the image if all the roots were 0 at zero. It turns out this is not true. In fact there exist polynomials where $t$ points arbitrarily close to zero are not hit even if the outline is pretty big (actually the hypotheses in the lemma were more complicated, but there is no point in stating them since the statement is not true). For example, $t^2+zt+\frac{z}{n}=0$ doesn't hit a whole neighbourhood of the point $t=-\frac{1}{n}$. Below is the plot for $n=5$. Note that as $n$ goes to infinity the singularity gets close to $t(t+z) = 0$, which is the union of two complex lines.
By the way, be prepared the program eats up quite a bit of ram, it’s very inefficient in what it does, so don’t run it on a very old machine. It will stop plotting points after a while so that it doesn’t bring your machine to its knees if you happen to forget to hit “Stop”. Also it does points in large “bursts” instead of one by one.
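If you just want the gist without GTK and Cairo, here is a rough numpy/matplotlib sketch of the same experiment (my own illustration, not the program above; no per-point colors, just the cloud of roots):

```python
# Pick random z in the square [-1,1] x [-1,1], solve t^2 + z t + z = 0,
# and scatter-plot all the roots t in the complex plane.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng()
z = rng.uniform(-1, 1, 5000) + 1j * rng.uniform(-1, 1, 5000)

# np.roots accepts complex coefficients; [1, zz, zz] encodes t^2 + zz t + zz
roots = np.concatenate([np.roots([1, zz, zz]) for zz in z])

plt.scatter(roots.real, roots.imag, s=1, alpha=0.1)
plt.gca().set_aspect('equal')
plt.show()
```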
Update: I realized that when I wrote above that I had never written a line of Python code before, that wasn't quite true. In my evince/vim/synctex setup I did fiddle with some Python code that I stole from gedit, but I didn't really write any new code there, rather just whacking some old code I did not totally understand with a hammer till it fit in the hole that I needed (a round peg will go into a square hole if hit hard enough).
Syndicated 2012-05-30 17:16:11 from The Spectre of Math
http://math.stackexchange.com/questions/18384/spread-evenly-x-black-balls-among-a-total-of-2n-balls
# Spread evenly $x$ black balls among a total of $2^n$ balls
Suppose you want to line up $2^n$ balls of which $x$ are black the rest are white. Find a general method to do this so that the black balls are as dispersed as possible, assuming that the pattern will repeat itself ad infinitum. The solution can be in closed form, iterative, or algorithmic.
For example, if $n=3$, where $0$ is a white ball and $1$ is a black ball, a solution is:
````
x=0: 00000000...
x=1: 10000000...
x=2: 10001000...
x=3: 10010010...
x=4: 10101010...
x=5: 01101101...
x=6: 01110111...
x=7: 01111111...
x=8: 11111111...
````
What is the precise meaning of "as dispersed as possible"? – mjqxxxx Jan 21 '11 at 3:48
Perhaps average distance between adjacent $1$s? – Yuval Filmus Jan 21 '11 at 3:50
For now let me define dispersion as follows: if $0$ means move to the left a distance of $d_0$ and $1$ means move to the right a distance of $d_1$, and $d_0$ and $d_1$ are chosen so that after moving $2^n - x$ times to the left and $x$ times to the right you end up in the same position, then minimal dispersion is the same as minimizing the maximum excursion from the origin over time, if the pattern is repeated endlessly. – apalopohapa Jan 21 '11 at 4:26
I.e. if the sequence is $s_i$ then we want to minimize $\max_{m \geq 0} \left| \sum_{k=0}^m (2^n s_k - x) \right|$. – Yuval Filmus Jan 21 '11 at 4:47
## 1 Answer
Let $\theta = x/2^n$. For each $m$, put a (black) ball at $$\min \{ k \in \mathbb{N} : k\theta \geq m \}.$$ In other words, look at the sequence $\lfloor k \theta \rfloor$, and put a ball in each position where the sequence increases.
For example, if $n=3$ and $x = 3$ then $\theta = 3/8$ and the sequence of floors is $$0, 0, 0, 1, 1, 1, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, \ldots,$$ and so the sequence of balls is $10010010\ldots$ .
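A quick sketch of this rule in Python (my own illustration, not part of the answer; note the output can be a rotation of the example patterns in the question, which only asks for some maximally dispersed arrangement):

````
from math import floor

def pattern(n, x):
    # Position k gets a black ball exactly where floor(k*theta) increases.
    theta = x / 2**n
    return ''.join(
        '1' if floor(k * theta) > floor((k - 1) * theta) else '0'
        for k in range(2**n)
    )

for x in range(9):
    print(f"x={x}: {pattern(3, x)}...")
````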
http://mathoverflow.net/questions/56107?sort=oldest
## Fermat’s Last Theorem in the cyclotomic integers.
Kummer proved that there are no non-trivial solutions to the Fermat equation FLT(n): $x^n + y^n = z^n$ with $n > 2$ natural and $x,y,z$ elements of a regular cyclotomic ring of integers $K$.
I am looking for non-trivial solutions to the Fermat equation FLT(p) in the cyclotomic integer ring $\mathbb{Z}[\zeta_{p}]$ for irregular primes p or any information about how the solutions must be (as a step toward constructing them).
George Lowther pointed out in an earlier discussion that by Kolyvagin's criterion any solution in $\mathbb{Z}[\zeta_{37}]$ must be in the second case.
Kummer's proof apparently had a gap: he "reduced" to the case when a hypothetical solution (x,y,z) in a regular cyclotomic ring of integers was pairwise relatively prime, but you can't reduce to that case if the ring has class number greater than 1. The result was proved by Hilbert. See Chapter 11 of Grosswald's "Topics from the Theory of Numbers" or section V.3 of Ribenboim's "13 Lectures on Fermat's Last Theorem". – KConrad Feb 20 2011 at 21:41
Actually, there is a solution to $x^5+y^5=z^5$ in $\mathbb{Z}[\zeta_3]$. Consider $\zeta_3^5+(\zeta_3^2)^5=(-1)^5$. I think "a ring of cyclotomic integers" should be replaced by "$\mathbb{Z}[\zeta_n]$" in the question. – George Lowther Feb 20 2011 at 23:25
Thank you George, that is what I meant to ask! – Quanta Feb 21 2011 at 0:08
@Quanta: I made some minor edits. I was also thinking of making more significant edits to the first sentence, but don't quite understand your intention. What $n$ did Kummer prove this for? Shouldn't it say "with n > 2 a regular prime" (maybe replace n by p)? And does "a regular ring of integers K" mean "the cyclotomic integers $\mathbb{Z}[\zeta_p]$)? – George Lowther Feb 21 2011 at 0:39
The solution with cube roots of unity noted by G.Lowther works for any exponent that is not a multiple of 3. Also noteworthy, albeit not directly relevant to the specific question at hand, is the solution $(1 + \sqrt{-7}, 1 - \sqrt{-7}, 2)$ of $x^4+y^4=z^4$. While ${\bf Q}(\sqrt{-7})$ is contained in a cyclotomic extension of ${\bf Q}$ (this is true of all quadratic number fields), the exponent $4$ is not prime. – Noam D. Elkies Jul 1 2011 at 15:36
## 1 Answer
This answer is a bit late; sorry for that.
Kummer's proof of the nonsolvability of $x^p + y^p = z^p$ for regular primes $p$ used “ideal numbers” (in present-day language: ideals) and was basically sound. Hilbert in his Zahlbericht gave a modified proof. Both proofs cover not only rational integers but also numbers in $\mathbb{Z}[\zeta_p]$. On the other hand, Kummer’s second result concerning irregular primes that satisfy certain additional conditions covers just the rational integers (although Hilbert, in the very last section of Zahlbericht, erroneously says that Kummer had proven this result for $\mathbb{Z}[\zeta_p]$ as well). Thus one cannot exclude the possibility that there is indeed a solution $(x,y,z)$ for $p=37$. And because of "Kolyvagin's criterion" about $(2^{37}-2)/37$, this solution must belong to the second case, that is, at least one of the three numbers $x,y,z$ in $\mathbb{Z}[\zeta_{37}]$ must have a common factor with $37$ (as mentioned by George Lowther).
By the way, this criterion was also proven by Taro Morishima in 1935 (Japan. J. Math. 11, 241-252, Satz 1; but warning: Satz 2 or at least its proof is incorrect since it is based on some incorrect result of Vandiver).
I don’t know how to find such a solution $(x,y,z)$.
Welcome to MathOverflow, Professor Metsankyla! – Gerry Myerson Feb 23 2011 at 23:31
Thank you, Gerry. I have still to learn how to operate here. An addition to my answer: in the possible solution $(x,y,z)$ all the numbers $x,y,z$ cannot be real. This was proved by K. Inkeri (my teacher, by the way) in 1949. – Tauno Metsänkylä Feb 24 2011 at 8:21
http://mathhelpforum.com/advanced-algebra/81768-vector-equation-intersection-line-plane.html
# Thread:
1. ## vector equation of intersection of line and plane
I have a plane that's given by the cartesian equation: $2x - 3y + 3z = 11$
I had to write down the equation of a line passing through point $P (1,2,-1)$ and perpendicular to the plane.
so I worked it to be: $(1,2,-1) + t(2,-3,3)$
Now I have to find the point at which the line intersects the plane. I know this is going to be very simple but I simply cannot see how to do it..
After I have that I have to find the shortest distance from the point to the plane, which I should be able to do by just finding the distance between the two points considering the shortest distance is perpendicular to the plane.
Any help is greatly appreciated.
Thanks in advance.
2. Originally Posted by U-God
Substitute the parametric equations of the line into the equation of the plane and solve for t. Then substitute that value of t back into the parametric equations of the line to get the required coordinates of the intersection point.
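If I haven't slipped anywhere, the working goes like this: substituting $x = 1+2t$, $y = 2-3t$, $z = -1+3t$ into the plane equation gives $2(1+2t) - 3(2-3t) + 3(-1+3t) = 22t - 7 = 11$, so $t = \frac{18}{22} = \frac{9}{11}$. The intersection point is then $\left(\frac{29}{11}, -\frac{5}{11}, \frac{16}{11}\right)$, and the shortest distance from $P$ to the plane is $|t| \cdot \|(2,-3,3)\| = \frac{9}{11}\sqrt{22} = \frac{18}{\sqrt{22}}$, which agrees with the point-to-plane distance formula $\frac{|2(1) - 3(2) + 3(-1) - 11|}{\sqrt{2^2+3^2+3^2}} = \frac{18}{\sqrt{22}}$.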
http://www.sjbrown.co.uk/2012/04/08/bidirectional-instant-radiosity/
# Simon's Graphics Blog
Work log for ideas and hobby projects.
## Bidirectional Instant Radiosity
Bidirectional Instant Radiosity is the title of a paper by B. Segovia et al which presented a new sampling strategy to find virtual point lights (VPLs) that are relevant to the camera. The algorithm given for generating VPLs is:
• Generate $$N/2$$ “standard” VPLs by sampling light paths (i.e. vanilla instant radiosity)
• Generate $$N/2$$ “reverse” VPLs by sampling eye paths (compute their radiance using the $$N/2$$ standard VPLs)
These $$N$$ VPLs are then resampled into $$N^\prime$$ VPLs by considering their estimated contribution to the camera. Finally the $$N^\prime$$ resampled VPLs are used to render an image of the scene.
In this post I’ll describe how I think this approach can be generalised to generate VPLs using all the path construction techniques of a bidirectional path tracer. As usual I’m going to assume the reader is familiar with bidirectional path tracing in the Veach framework.
I should state that this is an unfinished investigation into VPL sampling. I’m going to describe the core idea and formally define the VPL “virtual sensor”, but proper analysis of the results will be part of a future post (and may well indicate that this approach is not advisable).
## Core Idea
Bidirectional path tracing is about generating eye and light subpaths, considering all possible ways to connect them, and combining the results using multiple importance sampling.
In order to apply bidirectional path tracing to VPL generation, it seems natural to make the start of our eye subpath a VPL. To achieve this we define a “virtual sensor” that covers every surface in the scene that is capable of being a VPL. This virtual sensor generates VPL locations as follows:
• If the light subpath hits the virtual sensor (i.e. hits anywhere in the scene) this is a VPL location. (In fact these are exactly the VPL locations you would get from “standard” instant radiosity.)
• When the virtual sensor is sampled (i.e. for the first eye subpath vertex), this is a VPL location.
To compute the value of each VPL, we evaluate each subpath combination using standard bidirectional path tracing with multiple importance sampling, accumulating the weighted contribution into the appropriate VPL. In this way, all possible subpath combinations are used as sampling techniques during VPL construction.
## Background: Lightmapping
For the motivation of this idea, consider solving a lightmap using bidirectional path tracing. A lightmap is a sensor that sits on some surface in the scene, acting as the dual of an area light. The end result is an image that captures all of the direct and indirect lighting on the surface.
In the diagram below, we consider (in the usual Veach notation) a bidirectional path consisting of the eye subpath $$\mathbf{z}_0\mathbf{z}_1\mathbf{z}_2$$ that starts from the lightmap sensor (coloured blue) and the light subpath $$\mathbf{y}_0\mathbf{y}_1\mathbf{y}_2$$ that starts from a light source.
In camera renders with a physical aperture (for example with a thin lens camera model), it is unlikely that light subpaths hit the sensor. For lightmaps, the sensor is much larger, so light subpaths that hit the sensor are much more likely. Also, unlike camera renders, it is usually only the position on the surface that affects which lightmap pixel receives the sample (not the incoming angle).
As such, for a lightmap sensor, the location of a weighted sample to be accumulated into the final image is always defined by the last (in the sense of paths going from lights to sensors) vertex of the bidirectional path. Only certain vertices can be the last vertex, namely:
• $$\mathbf{z}_0$$: this is the first vertex of the eye subpath, which we obtained by sampling the lightmap sensor
• $$\mathbf{y}_i$$, for some $$i > 0$$: this occurs when the light subpath hits the sensor during light/BSDF sampling (no eye subpath vertices are included in the path)
We’re going to define a similar sensor, with two differences:
• Instead of mapping a single surface to an image, we are going to use every surface in the scene and store the samples directly as VPLs.
• We will try to sample this sensor in a way that produces VPLs that affect a specific camera (rather than uniformly over the sensor).
## Virtual Sensor
Let’s define this virtual sensor that covers every surface in the scene. We sample this sensor by tracing a path from our original camera until we hit a possible VPL location. If we label the VPL location as $$\mathbf{z}$$ and the camera path used to generate it as $$\mathbf{v}_i$$, a diffuse scene with a camera produces the path $$\overline{v} = \mathbf{v}_0 \mathbf{v}_1 \mathbf{z}$$ as below:
Let’s take this vertex $$\mathbf{z}$$ and use it as vertex $$\mathbf{z}_0$$ to start an eye subpath. In order to use this virtual sensor with bidirectional path tracing, we need to define the following (see Veach eqn 10.7):
• The area importance of the sensor $$W_e(\mathbf{z}_0)$$
• The probability density wrt area $$P_A(\mathbf{z}_0)$$
• The angular importance of the sensor $$W_e(\mathbf{z}_0 \to \mathbf{z}_1)$$
• The probability density wrt projected solid angle $$P_{\sigma^\perp}(\mathbf{z}_0 \to \mathbf{z}_1)$$.
The angular terms are defined by the BSDF at the surface. For our diffuse scene, each intersection point has a Lambertian BRDF with some reflectance $$\rho$$, so our angular terms would be:
$W_e(\mathbf{z}_0 \to \mathbf{z}_1) = \rho \\ P_{\sigma^\perp}(\mathbf{z}_0 \to \mathbf{z}_1) = \frac{1}{\pi}$
The sensor has uniform area importance everywhere (i.e. $$W_e(\mathbf{z}) = 1$$), so the only part remaining is to compute the probability density wrt area $$P_A(\mathbf{z})$$. As noted by Segovia et al, we compute this as a marginal probability density of all paths from the camera that can generate point $$\mathbf{z}$$. For our diffuse scene with path $$\overline{v} = \mathbf{v}_0 \mathbf{v}_1 \mathbf{z}$$, this is:
$P_A(\mathbf{z}) = \iint\limits_A \! P(\overline{v}) \, \mathrm{dA}(\mathbf{v}_1) \mathrm{dA}(\mathbf{v}_0)$
Substituting the probability density of the path $$\overline{v}$$ and using the identity $$\mathrm{d\sigma^\perp_{x^\prime}}(\widehat{\mathbf{x} - \mathbf{x^\prime}}) = G(\mathbf{x} \leftrightarrow \mathbf{x^\prime}) \mathrm{dA}(\mathbf{x})$$ we get:
\[\begin{align*}
P_A(\mathbf{z}) &= \iint\limits_A \! P(\overline{v}) \, \mathrm{dA}(\mathbf{v}_1) \mathrm{dA}(\mathbf{v}_0) \\
&= \iint\limits_A \! P_A(\mathbf{v}_0) P_A(\mathbf{v}_1|\mathbf{v}_0) P_A(\mathbf{z}|\mathbf{v}_1) \, \mathrm{dA}(\mathbf{v}_1) \mathrm{dA}(\mathbf{v}_0) \\
&= \iint\limits_A \! P_A(\mathbf{v}_0) P_{\sigma^\perp}(\mathbf{v}_0 \to \mathbf{v}_1) G(\mathbf{v}_0 \leftrightarrow \mathbf{v}_1) P_{\sigma^\perp}(\mathbf{v}_1 \to \mathbf{z}) G(\mathbf{v}_1 \leftrightarrow \mathbf{z}) \, \mathrm{dA}(\mathbf{v}_1) \mathrm{dA}(\mathbf{v}_0) \\
&= \int\limits_A \! \int\limits_\Omega \! P_A(\mathbf{v}_0) P_{\sigma^\perp}(\omega_0) P_{\sigma^\perp}(\mathbf{v}_1 \to \mathbf{z}) G(\mathbf{v}_1 \leftrightarrow \mathbf{z}) \, \mathrm{d\sigma^\perp}(\omega_0) \mathrm{dA}(\mathbf{v}_0)
\end{align*}\]
Since we can’t compute this integral analytically, we estimate it with importance sampling. Since $$P_A(\mathbf{v}_0)$$ and $$P_{\sigma^\perp}(\omega_0)$$ are probability densities themselves, sampling $$\mathbf{v}_0$$ and $$\omega_0$$ causes the function and pdf to cancel out, simplifying the estimate to:
$P_A(\mathbf{z}) \approx \frac{1}{N} \sum_{i=1}^N P_{\sigma^\perp}(\mathbf{v}_{i,1} \to \mathbf{z}) G(\mathbf{v}_{i,1} \leftrightarrow \mathbf{z}) \text{ for paths } \overline{v_i} = \mathbf{v}_{i,0} \mathbf{v}_{i,1} \mathbf{z}$
Note this matches Segovia et al’s equation 7, except we have derived this within the Veach framework so the notation is a bit different.
So in order to compute the probability density wrt area for any VPL location $$\mathbf{z}$$, we construct N camera paths $$\mathbf{v}_{i,0} \mathbf{v}_{i,1}$$, compute $$P_{\sigma^\perp}(\mathbf{v}_{i,1} \to \mathbf{z})$$ and $$G(\mathbf{v}_{i,1} \leftrightarrow \mathbf{z})$$ and use the equation above. Note that a path may estimate a zero pdf if either $$P_{\sigma^\perp}$$ or $$G$$ is zero. Considering our diffuse scene as before, here is an example for $$N=4$$:
In this example, only paths 0 and 1 would estimate a non-zero pdf. Path 2 returns 0 since $$V(\mathbf{v}_{2,1} \leftrightarrow \mathbf{z}) = 0$$ so $$G(\mathbf{v}_{2,1} \leftrightarrow \mathbf{z}) = 0$$. Path 3 returns 0 since for a Lambertian BSDF $$P_{\sigma^\perp}(\mathbf{v}_{3,i} \to \mathbf{z})$$ is 0 for incoming and outgoing directions that are in opposite hemispheres.
If all paths produce a zero pdf estimate, we must conclude that the location is not possible to sample using camera paths. I don’t think this is a problem, but we must take this into account during multiple importance sampling since it means the location is only reachable using light subpaths.
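To make the bookkeeping concrete, here is a minimal Python sketch of this estimator for the diffuse case. This is my own illustration, not code from the post; the `visible` callback is a hypothetical stand-in for the renderer's visibility test.

```python
import numpy as np

def estimate_pdf_area(z, z_normal, camera_verts, visible):
    """Estimate P_A(z) from N camera paths.

    camera_verts: list of (v1, n1) pairs, the second camera path vertex
    and its surface normal, as numpy arrays. `visible` is a stand-in for
    the scene's visibility test.
    """
    total = 0.0
    for v1, n1 in camera_verts:
        d = z - v1
        r2 = float(np.dot(d, d))
        w = d / np.sqrt(r2)
        cos_v1 = float(np.dot(n1, w))
        cos_z = float(np.dot(z_normal, -w))
        # Zero-pdf paths: opposite hemispheres (path 3) or occluded (path 2).
        if cos_v1 <= 0.0 or cos_z <= 0.0 or not visible(v1, z):
            continue
        # Lambertian P_sigma_perp = 1/pi; G(v1 <-> z) = cos_v1 * cos_z / r^2.
        total += (1.0 / np.pi) * cos_v1 * cos_z / r2
    return total / len(camera_verts)
```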
(Note: the sampling scheme for this virtual sensor is completely arbitrary, you could replace it or mix in any other sampling scheme and still use the same bidirectional construction of VPLs. For example, 10% of the time you could choose to sample from a surface in the scene using a precomputed CDF, and factor this into the pdf estimate. This would also ensure that VPL locations have a non-zero pdf estimate everywhere.)
Estimating $$P_A(\mathbf{z})$$ requires many intersection tests. To ensure good efficiency we should:
• Reuse $$\mathbf{v}_{i,0}$$ and $$\mathbf{v}_{i,1}$$ for several VPLs (e.g. all VPLs generated in one pass)
• Reuse all intersection tests used to compute $$P_A(\mathbf{z})$$ to also compute power brought to the camera for VPL resampling
## Bidirectional Path Tracing
Now we have fully defined our virtual sensor we can use it with bidirectional path tracing. As usual we sample some light subpath $$\mathbf{y}_0 \ldots \mathbf{y}_{S-1}$$ and some eye subpath $$\mathbf{z}_0 \ldots \mathbf{z}_{T-1}$$ and consider all subpath combinations.
For each weighted contribution $$C_{s,t}$$ from a path formed from $$s$$ light subpath vertices and $$t$$ eye subpath vertices, we check $$t$$ to decide which VPL to accumulate the result into. If $$t = 0$$, then this path is a light subpath that hit our virtual sensor, so use the VPL associated with vertex $$\mathbf{y}_s$$. Otherwise, the path ends at the first eye subpath vertex so use the VPL associated with vertex $$\mathbf{z}_0$$.
We also keep $$\mathbf{y}_0$$ as a VPL for direct lighting, making the final total $$S + 1$$ VPLs. If we also handle the case where $$\mathbf{z}_0$$ happens to land on a light source (accumulating the emission into the VPL we already have at $$\mathbf{z}_0$$), we can multiple importance sample between these two cases (the ratio between pdfs is $$P_A(\mathbf{y}_0)/P_A(\mathbf{z}_0)$$).
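As a tiny sketch of this routing rule (my own pseudo-Python, with a hypothetical `vpls` map from VPL vertex to accumulated contribution):

```python
def route_contribution(C_st, t, y_s, z_0, vpls):
    # t == 0: the light subpath itself hit the virtual sensor, so the
    # contribution belongs to the VPL at y_s; otherwise the path ends at
    # the first eye subpath vertex, i.e. the VPL at z_0.
    key = y_s if t == 0 else z_0
    vpls[key] = vpls.get(key, 0.0) + C_st
```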
Here’s an example for $$S = T = 3$$, directly generating VPLs $$\mathbf{y}_0$$, $$\mathbf{y}_1$$, $$\mathbf{y}_2$$ and $$\mathbf{z}_0$$ by its subpath combinations. Note that the camera is only there to estimate $$P_A(\mathbf{z})$$ for some sampled or intersected point on our virtual sensor:
## Results?
As I mentioned in the introduction I’m going to resist posting any results until I can find some proper time for analysis, which will likely not be for a few weeks. This post exists so that someone can poke some holes in the theory, so I’d welcome any comments!
Written by Simon Brown
April 8th, 2012 at 11:57 pm
Posted in Global Illumination,Rendering
### 3 Responses to 'Bidirectional Instant Radiosity'
1. Interesting post indeed. I find 2 things confusing, though:
1) The notation. In some formulas you use z0 and z1, which also appear in Veach’s formulas. But these should somehow correspond to v0 and v1 in your example.
2) I don’t see how the Virtual Sensor section is related to the Bidirectional Path Tracing section and how one would use BDPT for creating the VPLs. From the text I’d assume that one would sample the VPL positions with BDPT, creating a VPL position at each bidir path vertex, and then estimate the VPL pdf by connecting to the v1 vertices? This would be clearly wrong. Only VPLs created from the camera would need that, and those created from eye paths longer than 2 would need appropriate marginal density computation, which would be possible to do with some path reuse, but may not be worth it.
And one other thing worth mentioning, since Segovia omits that, as far as I remember the paper. The method is biased, because of the marginal density approximation.
iliyan
9 Apr 12 at 12:14 pm
2. Hi Iliyan, thanks very much for the feedback.
1). In the definition of the virtual sensor, the z in the diagram is $$\mathbf{z}_0$$. $$\mathbf{z}_1$$ is formed as the second vertex of the eye subpath from the virtual sensor. This should indeed be clarified, I’ll try to update the text to improve this later. Note the $$\mathbf{v}_i$$ are only used to estimate $$P_A$$, they do not take part in BDPT.
2). The VPLs are indeed created directly from BDPT. I think perhaps I should move the “Bidirectional Path Tracing” section first with more details, then cover the virtual sensor later. Bidirectional paths start at a light source and end at a VPL, this is why in the S=T=3 example there are VPLs only where the sensor is sampled (at $$\mathbf{z}_0$$) or hit (at $$\mathbf{y}_1$$ and $$\mathbf{y}_2$$, which do indeed need marginal density calculated). Edit: this is mentioned by Segovia et al, but I think a lot of the work done for the density can be reused for resampling later.
3) Ah yes this is mentioned by Segovia et al, estimating $$P_A$$ does introduce bias. I’m quite interested if approximations to this can be used (e.g. some CDF over all surfaces in the scene) instead of an estimate. If the approximation has an analytical pdf, this would eliminate the bias (as well as a ton of visibility tests).
Thanks again for the feedback; I'll post another comment when I get a chance to address the changes.
Simon Brown
9 Apr 12 at 1:56 pm
3. You’re quite welcome, and thank you back for initiating the discussion. Not many people are dealing with hard core MC global illumination, and quality discussions are a rarity.
I’ve been playing recently with path reuse in a bit more general sense than BDIR, and unfortunately I’ve established that it’s rather difficult to reuse paths in an unbiased way. The main reason is that you have to compute marginal densities, for which an unbiased estimator is not sufficient to obtain unbiased final pixel estimates; the expected values just don’t work out. This is quite unfortunate, since there’s a lot of potential for path reuse. There have been some successful attempts (e.g. Bekaert’s path reuse paper), which are very restricted though.
The lesson is, it’s straightforward to estimate light transport along the full paths we trace with random walks, but it’s difficult or even impossible in the general case to obtain unbiased estimates along paths constructed by reusing vertices/segments from other paths.
iliyan
9 Apr 12 at 7:51 pm
http://en.wikipedia.org/wiki/Oversampling
# Oversampling
In signal processing, oversampling is the process of sampling a signal with a sampling frequency significantly higher than twice the bandwidth or highest frequency of the signal being sampled. Oversampling helps avoid aliasing, improves resolution and reduces noise.
## Oversampling factor
An oversampled signal is said to be oversampled by a factor of β, defined as
$\beta \ \stackrel{\mathrm{def}}{=}\ \frac{f_s}{2 B}$
or
$f_s = 2 \beta B$.
where:
• fs is the sampling frequency
• B is the bandwidth or highest frequency of the signal; the Nyquist rate is 2B.
## Motivation
There are three main reasons for performing oversampling:
### Anti-aliasing
Oversampling can make it easier to realize analog anti-aliasing filters. Without oversampling, it is very difficult to implement filters with the sharp cutoff necessary to maximize use of the available bandwidth without exceeding the Nyquist limit. By increasing the bandwidth of the sampled signal, design constraints for the anti-aliasing filter may be relaxed.[1] Once sampled, the signal can be digitally filtered and downsampled to the desired sampling frequency. In modern integrated circuit technology, digital filters are easier to implement than comparable analog filters.
### Resolution
In practice, oversampling is implemented in order to achieve cheaper higher-resolution A/D and D/A conversion. For instance, to implement a 24-bit converter, it is sufficient to use a 20-bit converter that can run at 256 times the target sampling rate. Combining 256 consecutive 20-bit samples can increase the signal-to-noise ratio by a factor of 16 (the square root of the number of samples averaged), adding 4 bits to the resolution, producing a single sample with 24-bit resolution.
The number of samples required to get $n$ bits of additional data precision is:
$\text{NumSamples} = (2^n)^2 = 2^{2n}$
The sum of $2^{2n}$ samples is divided by $2^n$ to get the mean sample scaled up to an integer with $n$ additional bits:
$\text{result} = \frac{\text{sum(Data)}}{2^n}$
Note that this averaging is possible only if the signal contains equally distributed noise large enough to be measured by the A/D converter. If not, all $2^{2n}$ samples will have the same value, the average will be identical to that value, and the oversampling will have no effect: the conversion result will be as inaccurate as if it had been measured by the low-resolution core A/D. This is an interesting counter-intuitive example where adding some dithering noise can improve the results instead of degrading them.
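A small numpy sketch of this effect (an illustration under the stated assumptions: a coarse converter with 1 LSB steps and uniform dither of about one LSB):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                         # extra bits of resolution wanted
num = 2 ** (2 * n)            # 2^(2n) = 256 samples, as above
true_value = 137.37           # quantity to measure, in units of one LSB

# Dither straddles the samples across adjacent codes; without it every
# sample would quantize to the same code and averaging would gain nothing.
codes = np.round(true_value + rng.uniform(-1.0, 1.0, num))

single = codes[0]             # one raw conversion: 136, 137 or 138
result = codes.sum() / 2**n   # sum divided by 2^n: a code with n extra bits
print(single, result / 2**n)  # the second number is close to 137.37
```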
### Noise
If multiple samples are taken of the same quantity with uncorrelated noise added to each sample, then averaging N samples reduces the noise power by a factor of 1/N.[2] If, for example, we oversample by a factor of 4, the signal-to-noise ratio in terms of power improves by factor of 4 which corresponds to a factor of 2 improvement in terms of voltage.
Certain kinds of A/D converters known as delta-sigma converters produce disproportionately more quantization noise in the upper portion of their output spectrum. By running these converters at some multiple of the target sampling rate, and low-pass filtering the oversampled down to half the target sampling rate, it is possible to obtain a result with less noise than the average over the entire band of the converter. Delta-sigma converters use a technique called noise shaping to move the quantization noise to the higher frequencies.
## Example
For example, consider a signal with a bandwidth or highest frequency of B = 100 Hz. The sampling theorem states that sampling frequency would have to be greater than 200 Hz. Sampling at 200 Hz would result in β = 1. Sampling at four times that rate (β = 4) requires a sampling frequency of 800 Hz. This gives the anti-aliasing filter a transition band of 300 Hz ((fs/2) − B = (800 Hz/2) − 100 Hz = 300 Hz) instead of 0 Hz if the sampling frequency was 200 Hz.
Achieving an anti-aliasing filter with 0 Hz transition band is unrealistic whereas an anti-aliasing filter with a transition band of 300 Hz is not difficult to create.
After being sampled at 800 Hz, the signal (ostensibly with a bandwidth of 400 Hz) could be digitally filtered to have a bandwidth of 100 Hz and then downsampled to a 200 Hz sample frequency without aliasing.
## References
1. Nauman Uppal (2004-08-30). Upsampling vs. Oversampling for Digital Audio. Retrieved 2012-10-06. "Without increasing the sample rate, we would need to design a very sharp filter that would have to cutoff at just past 20kHz and be 80-100dB down at 22kHz. Such a filter is not only very difficult and expensive to implement, but may sacrifice some of the audible spectrum in its rolloff."
• John Watkinson, The Art of Digital Audio, ISBN 0-240-51320-7
http://gauravtiwari.org/author/wpgaurav/page/2/
# MY DIGITAL NOTEBOOK
A Personal Blog On Mathematical Sciences and Technology
## Proofs of Irrationality
Tuesday, February 14th, 2012 19:18 / 3 Comments
“Irrational numbers are those real numbers which are not rational numbers!”
Def.1: Rational Number
A rational number is a real number which can be expressed in the form $\frac{a}{b}$, where $a$ and $b$ are integers relatively prime to each other and $b$ is non-zero.
The following two statements are equivalent to Definition 1.
1. $x=\frac{a}{b}$ is rational if and only if $a$ and $b$ are integers relatively prime to each other and $b$ does not equal to zero.
2. $x=\frac{a}{b} \in \mathbb{Q} \iff \mathrm{g.c.d.} (a,b) =1, \ a \in \mathbb{Z}, \ b \in \mathbb{Z} \setminus \{0\}$.
(more…)
## Gamma Function
If we consider the integral $I =\displaystyle{\int_0^{\infty}} e^{-t} t^{a-1} \mathrm dt$, it is at once seen to be both an infinite and an improper integral: infinite because the upper limit of integration is infinite, and improper because $t=0$ is a point of infinite discontinuity of the integrand if $a<1$, where $a$ is either a real number or the real part of a complex number. This integral is known as Euler's Integral. It is of great importance in mathematical analysis and calculus. The result, i.e., the integral, is defined as a new function of the real number $a$, as $\Gamma (a) =\displaystyle{\int_0^{\infty}} e^{-t} t^{a-1} \mathrm dt$.
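As a quick numerical sanity check of the definition (my own sketch, using scipy), the integral reproduces, for instance, $\Gamma(5) = 4! = 24$:

```python
from math import exp, gamma, inf
from scipy.integrate import quad

a = 5.0
# Numerically integrate e^(-t) t^(a-1) over [0, infinity)
value, err = quad(lambda t: exp(-t) * t ** (a - 1), 0, inf)
print(value, gamma(a))   # both close to 24.0
```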
(more…)
## The Area of a Disk
Friday, January 27th, 2012 16:22 / 1 Comment
[This post is under review.]
If you are aware of elementary facts of geometry, then you might know that the area of a disk with radius $R$ is $\pi R^2$.
The radius is the measure (length) of a line joining the center of the disk and any point on the circumference of the disk or any other circular lamina. The radius of a disk is always the same, irrespective of which point on the circumference you join to the center. The area of the disk is defined as the ‘measure of surface‘ surrounded by the round edge (circumference) of the disk.
## Triangle Inequality
Friday, January 20th, 2012 10:57 / 5 Comments
The triangle inequality takes its name from the geometrical fact that the length of one side of a triangle can never be greater than the sum of the lengths of the other two sides of the triangle. If $a$, $b$ and $c$ are the three sides of a triangle, then $a$ cannot be greater than $b+c$, nor $b$ greater than $c+a$, nor $c$ greater than $a+b$.
Triangle
Consider the triangle in the image: side $a$ can equal the sum of the other two sides $b$ and $c$ only if the triangle degenerates into a straight line. Thinking practically, one can say that one side is formed by joining the end points of the two other sides.
In modulus form, $|x+y|$ represents the side $a$ if $|x|$ represents side $b$ and $|y|$ represents side $c$. A modulus is nothing but the distance of a point on the number line from zero.
Visual representation of Triangle inequality
For example, the distance of $5$ and $-5$ from $0$ on the number line is $5$. So we may write $|5|=|-5|=5$.
The triangle inequality is valid not only for real numbers but also for complex numbers, for vectors, and in Euclidean spaces. In this article, I shall discuss them separately; a quick numerical spot-check follows below. (more…)
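As a tiny illustration (my own sketch, not from the original post), one can spot-check $|x+y| \le |x|+|y|$ for random complex numbers in Python:

```python
import random

for _ in range(5):
    x = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    y = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    # The left value never exceeds the right one.
    print(abs(x + y), '<=', abs(x) + abs(y))
```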
## My Five Favs in Math Webcomics
Monday, January 9th, 2012 20:15 / 5 Comments
Cartoons and Comics are very useful in the process of explaining complicated topics, in a very light and humorous way. Like:
(more…)
## Progress Report of MY DIGITAL NOTEBOOK in 2011
The WordPress.com stats helper monkeys prepared a 2011 annual report for this blog.
Here’s an excerpt:
The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 53,000 times in 2011. If it were a concert at the Sydney Opera House, it would take about 20 sold-out performances for that many people to see it.
## On Ramanujan’s Nested Radicals
Friday, December 30th, 2011 08:50 / 8 Comments
http://mathoverflow.net/revisions/90879/list
## Return to Answer
1. Peaucellier–Lipkin inversor: http://en.wikipedia.org/wiki/Peaucellier-Lipkin_linkage By mid-19th century it was widely believed that one cannot transform circular motion to linear motion. For instance, Chebyshev tried quite hard but gave up and invented his polynomials instead, to deal with the issue approximately. The construction of inversor is simple and ingenious.
2. Mnev's Universality Theorem dealing with configuration spaces of linear arrangements and convex polytopes. The idea is that one can encode elementary algebraic operations into elementary geometric objects (actually, this goes back to Von Staudt in 19th century).
3. Connelly's flexible polyhedron is an example of a polyhedral sphere embedded in ${\mathbb R}^3$ which admits nontrivial deformations (so that each boundary face stays rigid). Cauchy proved (with some gaps fixed over 100 years later) that there are no flexible convex polyhedra, but general rigidity problem was open for over 150 years. People tended to believe that such polyhedra do not exist (for instance, "generic" polyhedral spheres are rigid). Connelly started by trying to prove non-existence and ended up constructing a counter-example, again, simple and ingenious.
http://physics.stackexchange.com/questions/tagged/symmetry?sort=active&pagesize=15
# Tagged Questions
The symmetry tag has no wiki summary.
1answer
52 views
### Topological vs. non-topological noetherian charges
What (if any) is the relationship between the conserved (non-topological) Noetherian charges and topological charges? Namely, is there any "generalization" of Noether's first theorem that includes ...
1answer
35 views
### Diagonal matrix in k-space
I'm having some trouble with an integration I hope you guys can help me with. I have that: $\mathbf{v}_{i}\left( \mathbf{k} \right)=\frac{\hbar \mathbf{k}_{i}}{m}$ and ...
7answers
2k views
### Is there something similar to Noether's theorem for discrete symmetries?
Noether's theorem states that, for every continuous symmetry of a system, there exists a conserved quantity, e.g. energy conservation for time invariance, charge conservation for $U(1)$. Is there any ...
3answers
410 views
### Does high entropy means low symmetry?
According to the Bogolubov postulate (various texts name it differently) in non-equilibrium thermodynamics, the number of parameters needed to describe our system decreases with time, and finally at ...
1answer
55 views
### Gauging discrete symmetries
I read somewhere that performing an orbifolding (i.e. imposing a discrete symmetry on what would otherwise be a compactification torus) is equivalent to "gauging the discrete symmetry". Can anybody ...
1answer
46 views
### Possible states for two electrons in the helium atom
Consider the helium atom with two electrons, but ignore coupling of angular momenta, relativistic effects, etc. The spin state of the system is a combination of the triplet states and the singlet ...
2answers
404 views
### What is the ontological status of Faddeev Popov ghosts?
We all know Faddeev-Popov ghosts are needed in manifestly Lorentz covariant nonabelian quantum gauge theories. We also all know they decouple from the rest of matter asymptotically, although they ...
1answer
45 views
### Baryon wave function symmetry
Suppose a baryon wavefunction is $\Psi = \psi_{spatial} \psi_{colour} \psi_{flavour} \psi_{spin}$, and we consider only the ground state ($L=0$). We know that the whole thing has to be antisymmetric under ...
0answers
59 views
### A general wavefunction in a square lattice
Suppose we have a square lattice with periodic condition in both $x$ and $y$ direction with four atoms per unit cell, the configuration of the four atoms has $C_4$ symmetry. What will be a general ...
1answer
77 views
### Gravitational field v.s. Physical variable?
I went to a talk on Newtonian mechanics some time earlier and the speaker said, and I quote, Newton's equations of motion admit a larger symmetry group than the Galilean group alone. Therefore, ...
0answers
19 views
### Coordinate transform to exploit symmetry
I have a stochastic process that can be described by the following master equation: $$\partial_{t}P(x,y)=-\left(W_{12}(x,y)+W_{13}(x,y)+W_{21}(x,y)+W_{23}(x,y)+W_{31}(x,y)+W_{32}(x,y)\right)P(x,y)$$ ...
0answers
96 views
### Question about Noether theorem
For the Noether theorem in pseudo-Euclidean 4-spacetime the $a$-current $J_{a}^{\mu}$ is equal to $J_{a}^{\mu} = \frac{\partial L}{\partial (\partial_{\mu}\Psi_{k})}Y_{k, a} - \cdots$ ...
1answer
42 views
### “WLOG” re Schwarzschild geodesics
Why, when studying geodesics in the Schwarzschild metric, can one WLOG set $$\theta=\frac{\pi}{2}$$, i.e. restrict to the equatorial plane? I assume it is so because when digging around the internet, most references seem ...
0answers
88 views
### How to define the mirror symmetry operator for Kane-Mele model?
Let us take the famous Kane-Mele(KM) model(http://prl.aps.org/abstract/PRL/v95/i22/e226801 and http://prl.aps.org/abstract/PRL/v95/i14/e146802) as our starting point. Due to the time-reversal(TR), ...
1answer
424 views
### Emergent symmetries
As we know, spontaneous symmetry breaking (SSB) is a very important concept in physics. Loosely speaking, zero-temperature SSB says that the Hamiltonian of a quantum system has some symmetry, but the ...
1answer
197 views
### Coulomb gauge fixing and “normalizability”
The Setup Let Greek indices be summed over $0,1,\dots, d$ and Latin indices over $1,2,\dots, d$. Consider a vector potential $A_\mu$ on $\mathbb R^{d,1}$ defined to gauge transform as A_\mu\to ...
1answer
249 views
### A simple model that exhibits emergent symmetry?
In a previous question Emergent symmetries I asked, Prof.Luboš Motl said that emergent symmetries are never exact. But I wonder whether the following example is an counterexample that has exact ...
2answers
157 views
### A question on the existence of Dirac points in graphene?
As we know, there are two distinct Dirac points for the free electrons in graphene, which means that the energy spectrum of the $2\times 2$ Hermitian matrix $H(k_x,k_y)$ has two degenerate points $K$ ...
0answers
54 views
### Curie's principle in electromagnetic field theory
I am looking for some explanation and if possible also some references about the applications of Curie's principle in electromagnetic field theory, precisely in the computation of magnetic (resp. ...
0answers
35 views
### Spherical charge in two different dielectric materials
I am trying to freshen up my memory about electrical fields and I came across this exercise from school. A sphere with a constantly distributed charge is located in between two different dielectrics ...
0answers
187 views
### Extended Born relativity, Nambu 3-form and ternary (n-ary) symmetry
Background: Classical Mechanics is based on the Poincare-Cartan two-form $$\omega_2=dx\wedge dp$$ where $p=\dot{x}$. Quantum mechanics is secretly a subtle modification of this. By the other hand, ...
1answer
250 views
### Why does a transformation to a rotating reference frame NOT break temporal scale invariance?
Naively, I thought that transforming a scale invariant equation (such as the Navier-Stokes equations for example) to a rotating reference frame (for example the rotating earth) would break the ...
4answers
354 views
### When can a global symmetry be gauged?
Take a classical field theory described by a local Lagrangian depending on a set of fields and their derivatives. Suppose that the action possesses some global symmetry. What conditions have to be ...
1answer
582 views
### Classical and quantum anomalies
I have read about anomalies in different contexts and ways. I would like to read an explanation that unified all these statements or point-views: Anomalies are due to the fact that quantum field ...
1answer
61 views
### Invariance, covariance and symmetry
Though often heard, often read, often felt being overused, I wonder what are the precise definitions of invariance and covariance. Could you please give me an example from quantum field theory? ...
1answer
1k views
### Schrödinger Equation
I am reading up on the Schrödinger equation and I quote Because the potential is symmetric under $x\to-x$, we expect that there will be solutions of definite parity. Could someone kindly explain ...
0answers
61 views
### What is kappa symmetry?
On page 180 David McMohan explains that to obtain a (spacetime) supersymmetric action for a GS superstring one has to add to the bosonic part S_B = -\frac{1}{2\pi}\int d^2 \sigma ...
2answers
349 views
### Lorentz invariance of the 3 + 1 decomposition of spacetime
Why is allowed decompose the spacetime metric into a spatial part + temporal part like this for example $$ds^2 ~=~ (-N^2 + N_aN^a)dt^2 + 2N_adtdx^a + q_{ab}dx^adx^b$$ ($N$ is called lapse, $N_a$ is ...
2answers
98 views
### Eigenfunctions in periodic potential
For Hamiltonian $\operatorname H$ and lattice translation operator $\operatorname T$, if $$\operatorname H\psi=E\psi, \qquad \operatorname T\psi=e^{ik\cdot R}\psi,$$ and \operatorname ...
2answers
92 views
### What is a symmetry of a physical system?
If I understand correctly, in many context in physics (quantum mechanics?), a physical system is specified by giving its Hamiltonian. I also hear that symmetries are rather essential. As far as the ...
0answers
38 views
### CP-symmetry and Ward identities and finite temperature
I have a few questions about Ward-identities which I summarize here. For each I am very greateful for answers and references to literature. Wikipedia states about Ward-identities: The ...
1answer
47 views
### A simple example of symmetry setting the properties of a Physical System
Does anybody know of an example were one could derive some important properties of a physical system from a symmetry of said system. I´m specially looking for simple classical examples, which could ...
1answer
124 views
### What are the conserved charges related to the Virasoro generators?
I have just learned from reconsidering my demystified book, that when conformally maping the worldsheet of a closed string to the complex plain by using the transformation $z = e^{\tau + i\sigma}$ ...
1answer
121 views
### Why do we classify states under covering groups instead of the group itself?
Why do we always classify states under covering group representations instead of the group itself? For example see the following picture I lifted from 'Symmetry in physics' by Gross So in the first ...
1answer
96 views
### Lorentz invariance of the wave equation
I want to show that the 2-d wave equation is invariant under a boost, so, the starting point is the wave equation \frac{\partial^2\phi}{\partial x^2}=\frac{1}{c^2}\frac{\partial^2\phi}{\partial ...
1answer
104 views
### What kinds of inconsistencies would one get if one starts with Lorentz noninvariant Lagrangian of QFT?
What kinds of inconsistencies would one get if one starts with Lorentz noninvariant Lagrangian of QFT? The question is motivated by this preprint arXiv:1203.0609 by Murayama and Watanabe. Also, what ...
4answers
2k views
### What is the usefulness of the Wigner-Eckart theorem?
I am doing some self-study in between undergrad and grad school and I came across the beastly Wigner-Eckart theorem in Sakurai's Modern Quantum Mechanics. I was wondering if someone could tell me why ...
1answer
67 views
### Energy-momentum conservation without translation symmetry?
As I checked, the energy-momentum tensor defined as ${T^\mu}_\nu=\frac{\partial {\cal L}}{\partial(\partial_\mu \phi)}\partial_\nu \phi-{\cal L}{\delta^\mu}_\nu$ at the solution $\phi$ of equation of ...
3answers
215 views
### Noether's current expression in Peskin and Schroeder
In the second chapter of Peskin and Schroeder, An Introduction to Quantum Field Theory, it is said that the action is invariant if the Lagrangian density changes by a four-divergence. But if we ...
2answers
268 views
### Galilean invariance of the Schrodinger equation
I am only asking this question so that I can write an answer myself with the content found here: http://en.wikipedia.org/wiki/User:Likebox/Schrodinger#Galilean_invariance and here: ...
3answers
477 views
### Maxwell equations invariant under Lorentz transformation but not Galilean transformations
Why Maxwell equations are not invariant under Galilean transformations, but invariant under Lorentz transformations? What is the deep physical meaning behind it?
5answers
260 views
### Form of the Classical EM Lagrangian
So I know that for an electromagnetic field in a vacuum the Lagrangian is $\mathcal L=-\frac 1 4 F^{\mu\nu} F_{\mu\nu}$, the standard model tells me this. What I want to know is if there is an ...
3answers
373 views
### Must all symmetries have consequences?
Must all symmetries have consequences? We know that transnational invariance, for example, leads to momentum conservation, etc, cf. Noether's Theorem. Is it possible for a theory or a model to have ...
2answers
112 views
### Crystal Angular Momentum
In a crystal, we don't have full translational symmetry, but we still have discrete translations. This allows us to define "crystal momentum" that is conserved modulo a reciprocal lattice vector. In ...
2answers
234 views
### Why and how does symmetry work in circuits?
Why symmetry work in circuits? In my book there is no mention explanation as such for symmetry arguments and circuits. But there are circuits that are very difficult to solve without symmetry. Also I ...
2answers
135 views
### Does a constant factor matter in the definition of the Noether current?
This is a very basic Lagrangian Field Theory question, it is about a definition convention. It takes much more time to typeset it than answering, but here it is: Consider a field Lagrangian with only ...
2answers
71 views
### What symmetries does a lattice calculation need to preserve?
I've heard that it is impossible to have a properly Lorentz-invariant lattice QFT simulation, as the Lorentz invariance is spoiled by the nonzero lattice distance $a$. I've also heard that there are ...
1answer
210 views
### Spontaneous breaking of Lorentz invariance in gauge theories
I was browsing through the hep-th arXiv and came across this article: Spontaneous Lorentz Violation in Gauge Theories. A. P. Balachandran, S. Vaidya. arXiv:1302.3406 [hep-th]. (Submitted on 14 ...
1answer
146 views
### How do we make symmetry assumptions rigorous?
I have, for instance, a problem with a spherically symmetric charge distribution. I deduce here, in order to solve the problem easily, that the corresponding electric field must be symmetric. How is ...
0answers
58 views
### Dimensional transmutation in Gross-Neveu vs others
Firstly I don't know how generic is dimensional transmutation and if it has any general model independent definition. Is dimensional transmutation in Gross-Neveau somehow fundamentally different ...
http://en.wikipedia.org/wiki/Newton's_second_law_of_motion
# Newton's laws of motion
Newton's First and Second laws, in Latin, from the original 1687 Principia Mathematica.
Newton's laws of motion are three physical laws that together laid the foundation for classical mechanics. They describe the relationship between a body and the forces acting upon it, and its motion in response to said forces. They have been expressed in several different ways over nearly three centuries,[1] and can be summarized as follows:
1. First law: An object at rest remains at rest unless acted upon by a force. An object in motion remains in motion, and at a constant velocity, unless acted upon by a force.[2][3]
2. Second law: The acceleration of a body is directly proportional to, and in the same direction as, the net force acting on the body, and inversely proportional to its mass. Thus, F = ma, where F is the net force acting on the object, m is the mass of the object and a is the acceleration of the object.
3. Third law: When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction to that of the first body.
The three laws of motion were first compiled by Isaac Newton in his Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), first published in 1687.[4] Newton used them to explain and investigate the motion of many physical objects and systems.[5] For example, in the third volume of the text, Newton showed that these laws of motion, combined with his law of universal gravitation, explained Kepler's laws of planetary motion.
## Overview
Isaac Newton (1643-1727), the physicist who formulated the laws
Newton's laws are applied to objects which are idealized as single point masses,[6] in the sense that the size and shape of the object's body are neglected in order to focus on its motion more easily. This can be done when the object is small compared to the distances involved in its analysis, or the deformation and rotation of the body are of no importance. In this way, even a planet can be idealized as a particle for analysis of its orbital motion around a star.
In their original form, Newton's laws of motion are not adequate to characterize the motion of rigid bodies and deformable bodies. Leonhard Euler in 1750 introduced a generalization of Newton's laws of motion for rigid bodies called Euler's laws of motion, later applied as well to deformable bodies assumed as a continuum. If a body is represented as an assemblage of discrete particles, each governed by Newton's laws of motion, then Euler's laws can be derived from Newton's laws. Euler's laws can, however, be taken as axioms describing the laws of motion for extended bodies, independently of any particle structure.[7]
Newton's laws hold only with respect to a certain set of frames of reference called Newtonian or inertial reference frames. Some authors interpret the first law as defining what an inertial reference frame is; from this point of view, the second law only holds when the observation is made from an inertial reference frame, and therefore the first law cannot be proved as a special case of the second. Other authors do treat the first law as a corollary of the second.[8][9] The explicit concept of an inertial frame of reference was not developed until long after Newton's death.
In the given interpretation mass, acceleration, momentum, and (most importantly) force are assumed to be externally defined quantities. This is the most common, but not the only interpretation of the way one can consider the laws to be a definition of these quantities.
Newtonian mechanics has been superseded by special relativity, but it is still useful as an approximation when the speeds involved are much slower than the speed of light.[10]
## Newton's first law
Walter Lewin explains Newton's first law and reference frames. (MIT Course 8.01)[11]
The first law states that if the net force (the vector sum of all forces acting on an object) is zero, then the velocity of the object is constant. Velocity is a vector quantity which expresses both the object's speed and the direction of its motion; therefore, the statement that the object's velocity is constant is a statement that both its speed and the direction of its motion are constant.
The first law can be stated mathematically as
$\sum \mathbf{F} = 0\; \Rightarrow\; \frac{\mathrm{d} \mathbf{v} }{\mathrm{d}t} = 0.$
Consequently,
• An object that is at rest will stay at rest unless an external force acts upon it.
• An object that is in motion will not change its velocity unless an external force acts upon it.
This is known as uniform motion. An object continues to do whatever it happens to be doing unless a force is exerted upon it. If it is at rest, it continues in a state of rest (demonstrated when a tablecloth is skillfully whipped from under dishes on a tabletop and the dishes remain in their initial state of rest). If an object is moving, it continues to move without turning or changing its speed. This is evident in space probes that continually move in outer space. Changes in motion must be imposed against the tendency of an object to retain its state of motion. In the absence of net forces, a moving object tends to move along a straight line path indefinitely.
Newton stated the first law of motion in order to establish frames of reference for which the other laws are applicable. The first law of motion postulates the existence of at least one frame of reference called a Newtonian or inertial reference frame, relative to which the motion of a particle not subject to forces is a straight line at a constant speed.[8][12] Newton's first law is often referred to as the law of inertia. Thus, a condition necessary for the uniform motion of a particle relative to an inertial reference frame is that the total net force acting on it is zero. In this sense, the first law can be restated as:
In every material universe, the motion of a particle in a preferential reference frame Φ is determined by the action of forces whose total vanished for all times when and only when the velocity of the particle is constant in Φ. That is, a particle initially at rest or in uniform motion in the preferential frame Φ continues in that state unless compelled by forces to change it.[13]
Newton's laws are valid only in an inertial reference frame. Any reference frame that is in uniform motion with respect to an inertial frame is also an inertial frame, i.e. Galilean invariance or the principle of Newtonian relativity.[14]
## Newton's second law
Walter Lewin explains Newton's second law, using gravity as an example.[15]
The second law states that the net force on an object is equal to the rate of change (that is, the derivative) of its linear momentum p in an inertial reference frame:
$\mathbf{F} = \frac{\mathrm{d}\mathbf{p}}{\mathrm{d}t} = \frac{\mathrm{d}(m\mathbf v)}{\mathrm{d}t}.$
The second law can also be stated in terms of an object's acceleration. Since the law is valid only for constant-mass systems,[16][17][18] the mass can be taken outside the differentiation operator by the constant factor rule in differentiation. Thus,
$\mathbf{F} = m\,\frac{\mathrm{d}\mathbf{v}}{\mathrm{d}t} = m\mathbf{a},$
where F is the net force applied, m is the mass of the body, and a is the body's acceleration. Thus, the net force applied to a body produces a proportional acceleration. In other words, if a body is accelerating, then there is a force on it.
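For instance, with illustrative values (SI units, my choice), the law can be checked by direct numerical integration:

```python
m, F = 2.0, 10.0            # illustrative mass (kg) and constant net force (N)
a = F / m                   # second law: a = F/m = 5.0 m/s^2
v, x, dt = 0.0, 0.0, 0.01   # initial velocity, position, and time step
for _ in range(100):        # crude Euler time-stepping over 1 s
    v += a * dt
    x += v * dt
print(v, x)                 # v = 5.0 m/s; x ≈ 2.5 m (2.525 with this simple scheme)
```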
Consistent with the first law, the time derivative of the momentum is non-zero when the momentum changes direction, even if there is no change in its magnitude; such is the case with uniform circular motion. The relationship also implies the conservation of momentum: when the net force on the body is zero, the momentum of the body is constant. Any net force is equal to the rate of change of the momentum.
Any mass that is gained or lost by the system will cause a change in momentum that is not the result of an external force. A different equation is necessary for variable-mass systems (see below).
Newton's second law requires modification if the effects of special relativity are to be taken into account, because at high speeds the approximation that momentum is the product of rest mass and velocity is not accurate.
### Impulse
An impulse J occurs when a force F acts over an interval of time Δt, and it is given by[19][20]
$\mathbf{J} = \int_{\Delta t} \mathbf F \,\mathrm{d}t .$
Since force is the time derivative of momentum, it follows that
$\mathbf{J} = \Delta\mathbf{p} = m\Delta\mathbf{v}.$
This relation between impulse and momentum is closer to Newton's wording of the second law.[21]
Impulse is a concept frequently used in the analysis of collisions and impacts.[22]
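As a rough worked example (the numbers below are only ballpark figures for a bat–ball collision, chosen for illustration):

```python
m = 0.145                  # baseball mass (kg)
F, dt = 8000.0, 0.0007     # roughly constant contact force (N) and contact time (s)
J = F * dt                 # impulse J = F * dt = 5.6 N*s
dv = J / m                 # change in speed, from J = m * dv
print(J, dv)               # 5.6 N*s and about 38.6 m/s
```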
### Variable-mass systems
Main article: Variable-mass system
Variable-mass systems, like a rocket burning fuel and ejecting spent gases, are not closed and cannot be directly treated by making mass a function of time in the second law;[17] that is, the following formula is wrong:[18]
$\mathbf{F}_\mathrm{net} = \frac{\mathrm{d}}{\mathrm{d}t}\big[m(t)\mathbf{v}(t)\big] = m(t) \frac{\mathrm{d}\mathbf{v}}{\mathrm{d}t} + \mathbf{v}(t) \frac{\mathrm{d}m}{\mathrm{d}t}. \qquad \mathrm{(wrong)}$
The falsehood of this formula can be seen by noting that it does not respect Galilean invariance: a variable-mass object with F = 0 in one frame will be seen to have F ≠ 0 in another frame.[16]
The correct equation of motion for a body whose mass m varies with time by either ejecting or accreting mass is obtained by applying the second law to the entire, constant-mass system consisting of the body and its ejected/accreted mass; the result is[16]
$\mathbf F + \mathbf{u} \frac{\mathrm{d} m}{\mathrm{d}t} = m {\mathrm{d} \mathbf v \over \mathrm{d}t}$
where u is the relative velocity of the escaping or incoming mass as seen by the body. From this equation one can derive the Tsiolkovsky rocket equation.
Under some conventions, the quantity u dm/dt on the left-hand side, known as the thrust, is defined as a force (the force exerted on the body by the changing mass, such as rocket exhaust) and is included in the quantity F. Then, by substituting the definition of acceleration, the equation becomes F = ma.
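A short numerical sketch (with illustrative parameters) shows the variable-mass equation reproducing the Tsiolkovsky result for a rocket with no external force:

```python
import math

m0, mf = 1000.0, 400.0         # initial and burnout mass (kg)
u, mdot = 2500.0, 5.0          # exhaust speed (m/s) and mass flow rate (kg/s)
m, v, dt = m0, 0.0, 0.01
while m > mf:                  # with F = 0:  m dv/dt = -u dm/dt
    v += u * (mdot * dt) / m
    m -= mdot * dt
print(v)                       # numerically integrated delta-v
print(u * math.log(m0 / mf))   # Tsiolkovsky: u ln(m0/mf) ≈ 2291 m/s
```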
## Newton's third law
An illustration of Newton's third law in which two skaters push against each other. The first skater on the left exerts a normal force N12 on the second skater directed towards the right, and the second skater exerts a normal force N21 on the first skater directed towards the left.
The magnitudes of the two forces are equal, but they have opposite directions, as dictated by Newton's third law.
A description of Newton's third law and contact forces[23]
The third law states that all forces exist in pairs: if one object A exerts a force FA on a second object B, then B simultaneously exerts a force FB on A, and the two forces are equal and opposite: FA = −FB.[24] The third law means that all forces are interactions between different bodies,[25][26] and thus that there is no such thing as a unidirectional force or a force that acts on only one body. This law is sometimes referred to as the action-reaction law, with FA called the "action" and FB the "reaction". The action and the reaction are simultaneous, and it does not matter which is called the action and which is called reaction; both forces are part of a single interaction, and neither force exists without the other.[24]
The two forces in Newton's third law are of the same type (e.g., if the road exerts a forward frictional force on an accelerating car's tires, then it is also a frictional force that Newton's third law predicts for the tires pushing backward on the road).
From a conceptual standpoint, Newton's third law is seen when a person walks: they push against the floor, and the floor pushes against the person. Similarly, the tires of a car push against the road while the road pushes back on the tires—the tires and road simultaneously push against each other. In swimming, a person interacts with the water, pushing the water backward, while the water simultaneously pushes the person forward—both the person and the water push against each other. The reaction forces account for the motion in these examples. These forces depend on friction; a person or car on ice, for example, may be unable to exert the action force to produce the needed reaction force.[27]
## History
### Newton's 1st Law
From the original Latin of Newton's Principia:
“ Lex I: Corpus omne perseverare in statu suo quiescendi vel movendi uniformiter in directum, nisi quatenus a viribus impressis cogitur statum illum mutare. ”
Translated to English, this reads:
“ Law I: Every body persists in its state of being at rest or of moving uniformly straight forward, except insofar as it is compelled to change its state by force impressed.[28] ”
The ancient Greek philosopher Aristotle had the view that all objects have a natural place in the universe: that heavy objects (such as rocks) wanted to be at rest on the Earth and that light objects like smoke wanted to be at rest in the sky and the stars wanted to remain in the heavens. He thought that a body was in its natural state when it was at rest, and for the body to move in a straight line at a constant speed an external agent was needed to continually propel it, otherwise it would stop moving. Galileo Galilei, however, realized that a force is necessary to change the velocity of a body, i.e., acceleration, but no force is needed to maintain its velocity. In other words, Galileo stated that, in the absence of a force, a moving object will continue moving. The tendency of objects to resist changes in motion was what Galileo called inertia. This insight was refined by Newton, who made it into his first law, also known as the "law of inertia"—no force means no acceleration, and hence the body will maintain its velocity. As Newton's first law is a restatement of the law of inertia which Galileo had already described, Newton appropriately gave credit to Galileo.
The law of inertia apparently occurred to several different natural philosophers and scientists independently, including Thomas Hobbes in his Leviathan.[29] The 17th century philosopher and mathematician René Descartes also formulated the law, although he did not perform any experiments to confirm it.[citation needed]
### Newton's 2nd Law
Newton's original Latin reads:
“ Lex II: Mutationem motus proportionalem esse vi motrici impressae, et fieri secundum lineam rectam qua vis illa imprimitur. ”
This was translated quite closely in Motte's 1729 translation as:
“ Law II: The alteration of motion is ever proportional to the motive force impress'd; and is made in the direction of the right line in which that force is impress'd. ”
According to modern ideas of how Newton was using his terminology,[30] this is understood, in modern terms, as an equivalent of:
The change of momentum of a body is proportional to the impulse impressed on the body, and happens along the straight line on which that impulse is impressed.
Motte's 1729 translation of Newton's Latin continued with Newton's commentary on the second law of motion, reading:
If a force generates a motion, a double force will generate double the motion, a triple force triple the motion, whether that force be impressed altogether and at once, or gradually and successively. And this motion (being always directed the same way with the generating force), if the body moved before, is added to or subtracted from the former motion, according as they directly conspire with or are directly contrary to each other; or obliquely joined, when they are oblique, so as to produce a new motion compounded from the determination of both.
The sense or senses in which Newton used his terminology, and how he understood the second law and intended it to be understood, have been extensively discussed by historians of science, along with the relations between Newton's formulation and modern formulations.[31]
### Newton's 3rd Law
“ Lex III: Actioni contrariam semper et æqualem esse reactionem: sive corporum duorum actiones in se mutuo semper esse æquales et in partes contrarias dirigi. ”
“ Law III: To every action there is always an equal and opposite reaction: or the forces of two bodies on each other are always equal and are directed in opposite directions. ”
A more direct translation than the one just given above is:
LAW III: To every action there is always opposed an equal reaction: or the mutual actions of two bodies upon each other are always equal, and directed to contrary parts. — Whatever draws or presses another is as much drawn or pressed by that other. If you press a stone with your finger, the finger is also pressed by the stone. If a horse draws a stone tied to a rope, the horse (if I may so say) will be equally drawn back towards the stone: for the distended rope, by the same endeavour to relax or unbend itself, will draw the horse as much towards the stone, as it does the stone towards the horse, and will obstruct the progress of the one as much as it advances that of the other. If a body impinges upon another, and by its force changes the motion of the other, that body also (because of the equality of the mutual pressure) will undergo an equal change, in its own motion, toward the contrary part. The changes made by these actions are equal, not in the velocities but in the motions of the bodies; that is to say, if the bodies are not hindered by any other impediments. For, as the motions are equally changed, the changes of the velocities made toward contrary parts are reciprocally proportional to the bodies. This law takes place also in attractions, as will be proved in the next scholium.[32]
In the above, as usual, motion is Newton's name for momentum, hence his careful distinction between motion and velocity.
Newton used the third law to derive the law of conservation of momentum;[33] however from a deeper perspective, conservation of momentum is the more fundamental idea (derived via Noether's theorem from Galilean invariance), and holds in cases where Newton's third law appears to fail, for instance when force fields as well as particles carry momentum, and in quantum mechanics.
## Importance and range of validity
Newton's laws were verified by experiment and observation for over 200 years, and they are excellent approximations at the scales and speeds of everyday life. Newton's laws of motion, together with his law of universal gravitation and the mathematical techniques of calculus, provided for the first time a unified quantitative explanation for a wide range of physical phenomena.
These three laws hold to a good approximation for macroscopic objects under everyday conditions. However, Newton's laws (combined with universal gravitation and classical electrodynamics) are inappropriate for use in certain circumstances, most notably at very small scales, very high speeds (in special relativity, the Lorentz factor must be included in the expression for momentum along with rest mass and velocity) or very strong gravitational fields. Therefore, the laws cannot be used to explain phenomena such as conduction of electricity in a semiconductor, optical properties of substances, errors in non-relativistically corrected GPS systems and superconductivity. Explanation of these phenomena requires more sophisticated physical theories, including general relativity and quantum field theory.
In quantum mechanics concepts such as force, momentum, and position are defined by linear operators that operate on the quantum state; at speeds that are much lower than the speed of light, Newton's laws are just as exact for these operators as they are for classical objects. At speeds comparable to the speed of light, the second law holds in the original form F = dp/dt, where F and p are four-vectors.
## Relationship to the conservation laws
In modern physics, the laws of conservation of momentum, energy, and angular momentum are of more general validity than Newton's laws, since they apply to both light and matter, and to both classical and non-classical physics.
This can be stated simply, "Momentum, energy and angular momentum cannot be created or destroyed."
Because force is the time derivative of momentum, the concept of force is redundant and subordinate to the conservation of momentum, and is not used in fundamental theories (e.g., quantum mechanics, quantum electrodynamics, general relativity, etc.). The standard model explains in detail how the three fundamental forces known as gauge forces originate from the exchange of virtual particles. Other forces, such as gravity and fermionic degeneracy pressure, also arise from momentum conservation. Indeed, the conservation of 4-momentum in inertial motion via curved space-time results in what we call gravitational force in general relativity theory. Applying the space derivative (which is a momentum operator in quantum mechanics) to the overlapping wave functions of a pair of fermions (particles with half-integer spin) shifts the maxima of the compound wavefunction away from each other, which is observable as "repulsion" of the fermions.
Newton stated the third law within a world-view that assumed instantaneous action at a distance between material particles. However, he was prepared for philosophical criticism of this action at a distance, and it was in this context that he stated the famous phrase "I feign no hypotheses". In modern physics, action at a distance has been completely eliminated, except for subtle effects involving quantum entanglement.[citation needed] In modern engineering, however, the concept of action at a distance is still used extensively in all practical applications involving the motion of vehicles and satellites.
The discovery of the Second Law of Thermodynamics by Carnot in the 19th century showed that not every physical quantity is conserved over time, thus disproving the validity of inducing the opposite metaphysical view from Newton's laws. Hence, a "steady-state" worldview based solely on Newton's laws and the conservation laws does not take entropy into account.
## References and notes
1. For explanations of Newton's laws of motion by Newton in the early 18th century, by the physicist William Thomson (Lord Kelvin) in the mid-19th century, and by a modern text of the early 21st century, see:-
• Newton's "Axioms or Laws of Motion" starting on page 19 of volume 1 of the 1729 translation of the "Principia";
• Section 242, Newton's laws of motion in Thomson, W (Lord Kelvin), and Tait, P G, (1867), Treatise on natural philosophy, volume 1; and
• Crowell, Benjamin (2011), Light and Matter (see the sections on Newton's laws listed under Further reading below).
2. Browne, Michael E. (1999-07). Schaum's outline of theory and problems of physics for engineering and science (Series: Schaum's Outline Series). McGraw-Hill Companies. p. 58. ISBN 978-0-07-008498-8.
3. Holzner, Steven (2005-12). Physics for Dummies. Wiley, John & Sons, Incorporated. p. 64. ISBN 978-0-7645-5433-9.
4. See the Principia on line at Andrew Motte Translation
5. [...]while Newton had used the word 'body' vaguely and in at least three different meanings, Euler realized that the statements of Newton are generally correct only when applied to masses concentrated at isolated points;Truesdell, Clifford A.; Becchi, Antonio; Benvenuto, Edoardo (2003). Essays on the history of mechanics: in memory of Clifford Ambrose Truesdell and Edoardo Benvenuto. New York: Birkhäuser. p. 207. ISBN 3-7643-1476-1.
6. Lubliner, Jacob (2008). Plasticity Theory (Revised Edition). Dover Publications. ISBN 0-486-46290-0.
7. ^ a b Galili, I.; Tseitlin, M. (2003). "Newton's First Law: Text, Translations, Interpretations and Physics Education". Science & Education 12 (1): 45–73. Bibcode:2003Sc&Ed..12...45G. doi:10.1023/A:1022632600805.
8. Benjamin Crowell. "4. Force and Motion". Newtonian Physics. ISBN 0-9704670-1-X.
9. In making a modern adjustment of the second law for (some of) the effects of relativity, m would be treated as the relativistic mass, producing the relativistic expression for momentum, and the third law might be modified if possible to allow for the finite signal propagation speed between distant interacting particles.
10. Walter Lewin (September 20, 1999) (in English) (ogg). Newton’s First, Second, and Third Laws. MIT Course 8.01: Classical Mechanics, Lecture 6. (videotape). Cambridge, MA USA: MIT OCW. Event occurs at 0:00–6:53. Retrieved December 23, 2010.
11. NMJ Woodhouse (2003). Special relativity. London/Berlin: Springer. p. 6. ISBN 1-85233-426-6.
12. Beatty, Millard F. (2006). Principles of engineering mechanics Volume 2 of Principles of Engineering Mechanics: Dynamics-The Analysis of Motion,. Springer. p. 24. ISBN 0-387-23704-6.
13. Thornton, Marion (2004). Classical dynamics of particles and systems (5th ed.). Brooks/Cole. p. 53. ISBN 0-534-40896-6.
14. Lewin, Newton’s First, Second, and Third Laws, Lecture 6. (6:53–11:06)
15. ^ a b c Plastino, Angel R.; Muzzio, Juan C. (1992). "On the use and abuse of Newton's second law for variable mass problems". Celestial Mechanics and Dynamical Astronomy (Netherlands: Kluwer Academic Publishers) 53 (3): 227–232. Bibcode:1992CeMDA..53..227P. doi:10.1007/BF00052611. ISSN 0923-2958. "We may conclude emphasizing that Newton's second law is valid for constant mass only. When the mass varies due to accretion or ablation, [an alternate equation explicitly accounting for the changing mass] should be used."
16. ^ a b Halliday; Resnick. Physics 1. p. 199. ISBN 0-471-03710-9. "It is important to note that we cannot derive a general expression for Newton's second law for variable mass systems by treating the mass in F = dP/dt = d(Mv) as a variable. [...] We can use F = dP/dt to analyze variable mass systems only if we apply it to an entire system of constant mass having parts among which there is an interchange of mass." [Emphasis as in the original]
17. ^ a b Kleppner, Daniel; Robert Kolenkow (1973). An Introduction to Mechanics. McGraw-Hill. pp. 133–134. ISBN 0-07-035048-5. "Recall that F = dP/dt was established for a system composed of a certain set of particles[. ... I]t is essential to deal with the same set of particles throughout the time interval[. ...] Consequently, the mass of the system can not change during the time of interest."
18. Hannah, J, Hillier, M J, Applied Mechanics, p221, Pitman Paperbacks, 1971
19. Raymond A. Serway, Jerry S. Faughn (2006). College Physics. Pacific Grove CA: Thompson-Brooks/Cole. p. 161. ISBN 0-534-99724-4.
20. I Bernard Cohen (Peter M. Harman & Alan E. Shapiro, Eds) (2002). The investigation of difficult things: essays on Newton and the history of the exact sciences in honour of D.T. Whiteside. Cambridge UK: Cambridge University Press. p. 353. ISBN 0-521-89266-X.
21. WJ Stronge (2004). Impact mechanics. Cambridge UK: Cambridge University Press. p. 12 ff. ISBN 0-521-60289-0.
22. Lewin, Newton’s First, Second, and Third Laws, Lecture 6. (14:11–16:00)
23. ^ a b Resnick; Halliday; Krane (1992). Physics, Volume 1 (4th ed.). p. 83.
24. C Hellingman (1992). "Newton’s third law revisited". Phys. Educ. 27 (2): 112–115. Bibcode:1992PhyEd..27..112H. doi:10.1088/0031-9120/27/2/011. "Quoting Newton in the Principia: It is not one action by which the Sun attracts Jupiter, and another by which Jupiter attracts the Sun; but it is one action by which the Sun and Jupiter mutually endeavour to come nearer together."
25. Resnick and Halliday (1977). "Physics" (Third ed.). John Wiley & Sons. pp. 78–79. "Any single force is only one aspect of a mutual interaction between two bodies."
26. Hewitt (2006), p. 75
27. Isaac Newton, The Principia, A new translation by I.B. Cohen and A. Whitman, University of California press, Berkeley 1999.
28. Thomas Hobbes wrote in Leviathan:
That when a thing lies still, unless somewhat else stir it, it will lie still forever, is a truth that no man doubts. But [the proposition] that when a thing is in motion it will eternally be in motion unless somewhat else stay it, though the reason be the same (namely that nothing can change itself), is not so easily assented to. For men measure not only other men but all other things by themselves. And because they find themselves subject after motion to pain and lassitude, [they] think every thing else grows weary of motion and seeks repose of its own accord, little considering whether it be not some other motion wherein that desire of rest they find in themselves, consists.
29. According to Maxwell in Matter and Motion, Newton meant by motion "the quantity of matter moved as well as the rate at which it travels" and by impressed force he meant "the time during which the force acts as well as the intensity of the force". See Harman and Shapiro, cited below.
30. This translation of the third law and the commentary following it can be found in the "Principia" on page 20 of volume 1 of the 1729 translation.
31. Newton, Principia, Corollary III to the laws of motion
## Further reading and works referred to
• Crowell, Benjamin, (2011), Light and Matter, especially at Section 4.2, Newton's First Law, Section 4.3, Newton's Second Law, and Section 5.1, Newton's Third Law.
• Feynman, R. P.; Leighton, R. B.; Sands, M. (2005). The Feynman Lectures on Physics. Vol. 1 (2nd ed.). Pearson/Addison-Wesley. ISBN 0-8053-9049-9.
• Fowles, G. R.; Cassiday, G. L. (1999). Analytical Mechanics (6th ed.). Saunders College Publishing. ISBN 0-03-022317-2.
• Likins, Peter W. (1973). Elements of Engineering Mechanics. McGraw-Hill Book Company. ISBN 0-07-037852-5.
• Marion, Jerry; Thornton, Stephen (1995). Classical Dynamics of Particles and Systems. Harcourt College Publishers. ISBN 0-03-097302-3.
• Newton, Isaac, "Mathematical Principles of Natural Philosophy", 1729 English translation based on 3rd Latin edition (1726), volume 1, containing Book 1, especially at the section Axioms or Laws of Motion starting page 19.
• Newton, Isaac, "Mathematical Principles of Natural Philosophy", 1729 English translation based on 3rd Latin edition (1726), volume 2, containing Books 2 & 3.
• Thomson, W (Lord Kelvin), and Tait, P G, (1867), Treatise on natural philosophy, volume 1, especially at Section 242, Newton's laws of motion.
• NMJ Woodhouse (2003). Special relativity. London/Berlin: Springer. p. 6. ISBN 1-85233-426-6.
http://mathoverflow.net/questions/1102/smooth-classifying-spaces/1113
## Smooth classifying spaces?
Take G to be a group. I care about discrete groups, but the answer in general would be welcome too. There are various ways to construct the classifying space of G: the bar construction, a cellular construction if G is finitely presented, etc.
What I'm wondering about, is there a notion of a smooth classifying space? That is, when can a classifying space for a group be given a smooth structure?
## 7 Answers
The answer to this does depend highly on the category in which you are prepared to work. If by "smooth structure" you mean "when is BG a finite dimensional manifold" then the answer is, as Andy says, "not many".
However if you are prepared to admit that there are more things that deserve the name "smooth" than just finite dimensional manifolds, then the answer ranges from "a few" to somewhere near "all".
To illustrate this with examples, the classifying space of ℤ is, of course, S¹, whilst the classifying space of ℤ/2 is ℝℙ∞. Both are manifolds, but only the first is finite dimensional.
Here are some more details for the "somewhere near all". Take any topological model for BG. Then consider all continuous maps ℝ → BG. These correspond to G-bundles over ℝ. Amongst those will be certain bundles which deserve the name "smooth" bundles. By taking the corresponding curves, one determines a family of curves ℝ → BG which should be called "smooth". Using this one can define a Frölicher space structure on BG. (It is possible that you will get more smooth bundles than you bargained for this way. If that's a problem, you could work in the category of diffeological spaces but then you'd need to use all the ℝⁿs).
In the middle, one can consider infinite dimensional manifolds. Then as your group is discrete it would be enough to ensure that you have a properly discontinuous action on an infinite sphere (there's a question somewhere around here about that being contractible). Some would say that your sphere "ought" to be the sphere in some Hilbert space. Failing that, if you have a faithful action on a Hilbert space (or more generally Banach space) with one or two topological conditions then you can quotient the general linear group by your group. Indeed, if your group is discrete then take the obvious action on ℓ²(G) (square summable sequences indexed by G).
A good example, but which is about as far from your situation for discrete groups as possible, is that of diffeomorphisms on a manifold. The classifying space of this group is the space of embeddings of that manifold in some suitable infinite dimensional space.
For more on the categories behind all this, see the nlab entries starting with generalised smooth spaces and the references therein. Also, anything by Kriegl, Michor, or Frolicher in the literature is worth a look.
I'm not sure exactly what you mean here. One possible interpretation to your questions is "Which discrete groups have classifying spaces that are smooth manifolds"? For this, here are a few isolated facts.
1) One cheap necessary condition is that your group be torsion-free, as otherwise the group's cohomological dimension would be infinite.
2) If all you care about is that there is a classifying space that is a smooth manifold (not necessarily compact), then it is enough for the group to have a compact K(G,1) -- you could embed your K(G,1) in a high-dimensional R^n and then take a regular neighborhood.
3) A more useful thing is for your group to have a classifying space that is a compact manifold without boundary. You would then need your manifold to satisfy Poincare duality in an appropriate sense.
4) However, Poincare duality is not enough. Mike Davis has constructed Poincare duality groups that are not finitely presentable (and thus cannot have classifying spaces that are compact manifolds). If you require your group to be Poincare duality and finitely presentable, then I believe that it is open whether or not the group has a closed manifold classifying space.
A good survey on Poincare duality groups is Mike Davis's paper "Poincare Duality Groups", which is available on his webpage at
http://www.math.osu.edu/~mdavis/
This paper by Mostow is a great example of how a classifying space can be given a smooth structure, and how this smooth structure can be used to represent characteristic classes using differential forms on BG:
Mostow, Mark A.
The differentiable space structures of Milnor classifying spaces, simplicial complexes, and geometric realizations.
J. Differential Geom. 14 (1979), no. 2, 255--293.
MR0587553
Since classifying spaces for compact Lie groups are constructed via direct limits of Grassmannians, you can really put smooth structures on them, where by a smooth structure I mean the structure of a manifold, modelled on a locally convex space (not necessarily finite-dimensional, not necessarily complete). This is due to the fact that "nice" direct limits of finite-dimensional manifolds admit such infinite-dimensional manifold structures (see for instance http://www.ams.org/mathscinet-getitem?mr=2188449). We also found no place where this is written up (although certainly well-known), so we spent some lines on this in http://www.ams.org/mathscinet-getitem?mr=2574141 (Lemma I.12).
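To make the direct-limit picture concrete (standard notation, my choice of model), for G = U(n) one has

```latex
BU(n) \;\simeq\; \varinjlim_{k}\, \mathrm{Gr}_n(\mathbb{C}^{n+k}),
\qquad
\mathrm{Gr}_n(\mathbb{C}^{n+k}) \;\cong\; U(n+k)\big/\bigl(U(n)\times U(k)\bigr),
```

and the direct-limit results cited above are what endow this colimit of finite-dimensional Grassmannians with a locally convex manifold structure.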
Nice to meet you on mathoverflow! – Ulrich Pennig May 7 2010 at 11:52
Yay, I drop a line just to join you two! – Peter Arndt May 7 2010 at 15:41
Well, perhaps not in the category you're interested in, but in algebraic geometry, the classifying space of any algebraic group (group scheme over a base) is "smooth" as an Artin Stack. The idea being that the notion of smoothness here is that it has a "nice" map from a smooth object. For a finite group, this just means that there's a smooth covering space, and then the classifying space is always just the stack obtained by quotienting out a single point by the group, and so it has a smooth cover, because a single point is smooth.
For $G$ a Lie group, thinking of $BG$ as the smooth stack $*/G$ makes sense in the category of smooth manifolds as well as the algebraic category. – Jeffrey Giansiracusa Oct 4 2010 at 21:06
One general answer to this is to say: a generalized smooth thing is an oo-stack on some site of smooth test spaces like Diff or so. These are "smooth oo-groupoids" in a useful sense. There is some discussion aimed towards smooth classifying spaces from this perspective at motivation for sheaves, cohomology and higher stacks.
I was directed here not for an ordinary BG or K(π,1) but for K(A,n). More details on that case?
Where did you get sent here from? – Josh Roberts Oct 4 2010 at 22:26
Hi Jim. I think you may be able to do it iteratively, taking K(A,n) = B^nA, and noting that (and this is conjecture on my part) for an abelian Frolicher group G there is a Frolicher abelian group structure on BG. @Josh, it was from the category theory mailing list, by Andrew Stacey. – David Roberts Oct 4 2010 at 23:11
http://math.stackexchange.com/questions/226102/coins-and-probability
# Coins and probability
Bob has $n$ coins, each of which falls heads with probability $p$. In the first round Bob tosses all coins; in the second round Bob tosses only those coins which fell heads in the first round. Let $R_i$ be the number of coins which fell heads in round $i$.
1. What is the distribution law for $R_2$?
2. Find $Corr(R_1,R_2)$
3. How does the correlation behave when $p\to 0$ and $p\to 1$? Why?
Looks like a nice homework problem, and if so, please add the `homework` tag. Also, what progress have you made on the problem? Where are you stuck? – Dilip Sarwate Oct 31 '12 at 15:01
Ready to post an answer, as soon as @Dilip's suggestions are followed. – Did Oct 31 '12 at 15:37
I have problems with understanding of how to build distribution law – Xxx Oct 31 '12 at 21:32
Are you kidding? You added NOTHING to your post to comply with @Dilip's suggestion. – Did Oct 31 '12 at 21:59
@did and now? it's a part of my answer – Xxx Nov 5 '12 at 16:25
## 2 Answers
A coin shows heads after the second round if and only if it fell heads in the first round (hence was tossed again) and fell heads in the second round as well. Thus each coin shows heads after the second round with probability $p^2$, and the number $R_2$ of such coins has the binomial $(n,p^2)$ distribution.
Since $R_1$ is binomial $(n,p)$, $\mathbb E(R_1)=np$, $\mathrm{var}(R_1)=np(1-p)$ and $\mathbb E(R_1^2)=\mathrm{var}(R_1)+\mathbb E(R_1)^2=np(1-p)+n^2p^2$. Since $R_2$ is binomial $(n,p^2)$, $\mathbb E(R_2)=np^2$ and $\mathrm{var}(R_2)=np^2(1-p^2)$. For every $k$, conditionally on $R_1=k$, $R_2$ is binomial $(k,p)$, hence $\mathbb E(R_2\mid R_1)=R_1p$, and $$\mathbb E(R_1R_2)=\mathbb E(R_1\mathbb E(R_2\mid R_1))=p\mathbb E(R_1^2).$$ Thus, $$\text{Cov}(R_1,R_2)=\mathbb E(R_1R_2)-\mathbb E(R_1)\mathbb E(R_2)=np^2(1-p),$$ and $$\text{Corr}(R_1,R_2)=\frac{\text{Cov}(R_1,R_2)}{\sqrt{\text{var}(R_1)\text{var}(R_2)}}=\sqrt{\frac{p}{1+p}}.$$ In particular, $\text{Corr}(R_1,R_2)\to 0$ as $p\to 0$ and $\text{Corr}(R_1,R_2)\to 1/\sqrt{2}$ as $p\to 1$: for small $p$, the second round keeps almost none of the first-round heads, so the two counts decorrelate.
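A quick sanity check by simulation (a sketch assuming NumPy; the parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, trials = 20, 0.3, 200_000
r1 = rng.binomial(n, p, size=trials)   # heads in round 1
r2 = rng.binomial(r1, p)               # re-toss only the round-1 heads
print(np.corrcoef(r1, r2)[0, 1])       # empirical correlation
print(np.sqrt(p / (1 + p)))            # theoretical value, about 0.480
```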
you have sum with k, but k is the number of tails in 1st round. why is it so? – Xxx Nov 5 '12 at 16:58
Sorry, I had mixed up heads and tails in the first and second rounds. The revised version should be OK. – Did Nov 5 '12 at 18:31
can you please say how we get that formula for expectation of the product? – Xxx Nov 5 '12 at 18:58
This exchange will become more and more difficult if you continue to say NOTHING about what it is exactly you do not understand, and why, and what you do understand and know about the subject. At present, I do not know the kind of answer you seek. – Did Nov 5 '12 at 19:07
$E(R_1R_2)=E(R_1E(R_2\mid R_1))=pE(R_1^2)$: this is the part I don't understand – Xxx Nov 5 '12 at 19:34
$P(R_2=i)=\sum_{j=i}^{n}\binom{n}{j}\binom{j}{i}p^{j+i}(1-p)^{n-i}$
Is that right? I can't see how to find the expectation of $R_2$ from it.
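One way to check: the sum collapses to the binomial $(n,p^2)$ pmf from the other answer (use $C(n,j)C(j,i)=C(n,i)C(n-i,j-i)$ and the binomial theorem), so the formula is right, and $\mathbb E(R_2)=np^2$ then follows at once. A few lines of Python (standard library only; small illustrative values) confirm the identity numerically:

```python
from math import comb

n, p = 6, 0.35                         # small illustrative values
for i in range(n + 1):
    s = sum(comb(n, j) * comb(j, i) * p**(j + i) * (1 - p)**(n - i)
            for j in range(i, n + 1))
    direct = comb(n, i) * (p**2)**i * (1 - p**2)**(n - i)
    assert abs(s - direct) < 1e-12     # the two expressions agree
```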
http://nrich.maths.org/48
# Pebbles
##### Stage: 2 and 3
Imagine that you're walking along the beach, a rather nice sandy beach with just a few small pebbles in little groups here and there. You start off by collecting just four pebbles and you place them on the sand in the form of a square. The area inside is of course just $1$ square something, maybe $1$ square metre, $1$ square foot, $1$ square finger ... whatever.
By adding another $2$ pebbles in line you double the area to $2$: the six pebbles now mark out a $1 \times 2$ rectangle.
The rule that's developing is that you keep the pebbles that are down already (not moving them to any new positions) and add as FEW pebbles as necessary to DOUBLE the PREVIOUS area, using RECTANGLES ONLY!
So, to continue, we add another three pebbles to get an area of $4$: nine pebbles marking out a $2 \times 2$ square.
You could instead have doubled the area with a long $1 \times 4$ rectangle, but that needs four new pebbles rather than three, so it would not obey the rule that you must add as FEW pebbles as possible each time. So that one is not allowed.
Number $6$, after three more doublings, would be a $4 \times 8$ rectangle marked out by $45$ pebbles, with an area of $32$.
So remember:-
#### The rule is that you keep the pebbles that are down already (not moving them to any new positions) and add as FEW pebbles as necessary to DOUBLE the PREVIOUS area.
Well, now it's time for you to have a go.
"It's easy,'' I hear you say. Well, that's good. But what questions can we ask about the arrangements that we are getting?
We could make a start by saying "Stand back and look at the shapes you are getting. What do you see?'' I guess you may see quite a lot of different things.
It would be good for you to do some more of this pattern. See how far you can go. You may run out of pebbles, paper or whatever you may be using. (Multilink, pegboard, elastic bands with a nail board, etc.)
Well now, what about some questions to explore?
Here are some I've thought of that look interesting:
1. How many extra pebbles are added each time? This starts off $2$, $3$, $6$ ...
2. How many are there around the edges? This starts off $4$, $6$, $8$ ...
3. How big is the area? This starts off $1$, $2$, $4$ ...
4. How many are there inside? This starts off $0$, $0$, $1$, $3$, $9$ ...
Try to answer these, and any other questions you come up with, and perhaps put them in a kind of table/graph/spreadsheet etc.
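If you like using a computer, here is a short Python sketch you could try. It models each arrangement as an $a \times b$ rectangle of unit squares, marked out by $(a+1) \times (b+1)$ pebbles:

```python
a, b = 1, 1                        # start with a 1-by-1 square of four pebbles
prev = (a + 1) * (b + 1)
for step in range(1, 9):
    total = (a + 1) * (b + 1)
    print(step, f"{a}x{b}", "area", a * b, "added", total - prev,
          "edge", 2 * (a + b), "inside", (a - 1) * (b - 1))
    prev = total
    # Double the area by stretching the side that needs fewer new pebbles:
    # doubling a adds a*(b+1) pebbles, doubling b adds b*(a+1).
    if a * (b + 1) <= b * (a + 1):
        a *= 2
    else:
        b *= 2
```

Does it reproduce your sequences: $2, 3, 6, \ldots$ extra pebbles, $4, 6, 8, \ldots$ around the edge, and $0, 0, 1, 3, 9, \ldots$ inside?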
Do let me see what you get - I'll be most interested.
Don't forget the all-important question to ask - "I wonder what would happen if I ...?''