http://castingoutnines.wordpress.com/tag/calculus/
# Tag Archives: Calculus
29 March 2011 · 4:25 pm
## Five questions I haven’t been able to answer yet about the inverted classroom
Between the Salman Khan TED talk I posted yesterday and several talks I saw at the ICTCM a couple of weeks ago, it seems like the inverted classroom idea is picking up some steam. I’m eager myself to do more with it. But I have to admit there are at least five questions that I have about this method, the answers to which I haven’t figured out yet.
1. How do you get students on board with this idea who are convinced that if the teacher isn’t lecturing, the teacher isn’t teaching? For that matter, how do you get ANYBODY on board who is similarly convinced?
Because not all students are convinced the inverted classroom approach is a good idea or that it even makes sense. Like I said before, the single biggest point of resistance to the inverted classroom in my experience is that vocal group of students who think that no lecture = no teaching. You have to convince that group that what’s important is what (and whether) they are learning, as opposed to my choices for instructional modes, but how?
2. Which is better: To make your own videos for the course, or to use another person’s videos even if they are of a better technical or pedagogical quality? (Or can the two be effectively mixed?)
There’s actually a bigger question behind this, and it’s the one people always ask when I talk about the inverted classroom: How much time is this going to take me? On the one hand, I can use Khan Academy or iTunesU stuff just off the rack and save myself a ton of time. On the other hand, I run the risk of appearing lazy to my students (maybe that really would be being lazy) or not connecting with them, or using pre-made materials that don’t suit my audience. I spend 6-12 hours a week just on the MATLAB class’ screencasts and would love (LOVE) to have a suitable off-the-shelf resource to use instead. But how would students respond, both emotionally and pedagogically?
3. Can the inverted classroom be employed in a class on a targeted basis — that is, for one or a handful of topics — or does it really only work on an all-or-nothing basis where the entire course is inverted?
I’ve tried the former approach, to teach least-squares solution methods in linear algebra and to do precalculus review in calculus. In the linear algebra class it was successful; in calculus it was a massive flop. On some level I’m beginning to think that you have to go all in with the inverted classroom or students will not feel the accountability for getting the out-of-class work done. At the very least, it seems that the inverted portions of the class have to be very distinct from the others — with their own grading structure and so on. But I don’t know.
4. Does the inverted classroom model fit in situations where you have multiple sections of the same course running simultaneously?
For example, if a university has 10 sections of calculus running in the Fall, is it feasible — or smart — for one instructor to run her class inverted while the other nine don’t? Would it need to be, again, an all-or-nothing situation where either everybody inverts or nobody does, in order to really work? I could definitely see me teaching one or two sections of calculus in the inverted mode, with a colleague teaching two other sections in traditional mode, and students who fall under the heading described in question #1 would wonder how they managed to sign up for such a cockamamie way of “teaching” the subject, and demand a transfer or something. When there’s only one section, or one prof teaching all sections of a class, this doesn’t come up. But that’s a relatively small portion of the full-time equivalent student population in a math department.
5. At what point does an inverted classroom course become a hybrid course?
This matters for some instructors who teach in institutions where hybrid, fully online, and traditional courses have different fee structures, office hours expectations, and so on. This question raises ugly institutional assumptions about student learning in general. For example, I had a Twitter exchange recently with a community college prof whose institution mandates that a certain percentage of the content must be “delivered” in the classroom before it becomes a “hybrid” course. So, the purpose of the classroom is to deliver content? What happens if the students don’t “get” the content in class? Has the content been “delivered”? That’s a very 1950s-era understanding of what education is supposedly about. But it’s also the reality of the workplaces of a lot of people interested in this idea, so you have to think about it.
Got any ideas on these questions?
20 Comments
Filed under Education, Inverted classroom, Life in academia, Teaching
Tagged as Calculus, Classroom, Education, Inverted classroom, khan academy, Linear algebra, matlab, Salman Khan, student
16 December 2010 · 2:30 pm
## A problem with “problems”
I have a bone to pick with problems like the following, which is taken from a major university-level calculus textbook. Read it, and see if you can figure out what I mean.
[The exercise itself appears as an image in the original post: a graph of the rate of growth of a honeybee population, in bees per week, together with instructions to use the Midpoint Rule with six subintervals to estimate the increase in the population over the 24 weeks shown.]
This is located in the latter one-fourth of a review set for the chapter on integration. Its position in the set suggests it is less routine, less rote than one of the early problems. But what’s wrong with this problem is that it’s not a problem at all. It’s an exercise. The difference between the two is enormous. To risk oversimplifying, in an exercise, the person doing the exercise knows exactly what to do at the very beginning to obtain the information being requested. In a problem, the person doesn’t. What makes an exercise an exercise is its familiarity and congruity with prior exercises. What makes a problem a problem is the lack of these things.
The above is not a problem, it is an exercise. “Use the Midpoint Rule with six subintervals from 0 to 24.” That’s the only part of the statement that you even have to read! The rest of it has absolutely nothing to do with bees, the rate of their population growth, or the net amount of population growth. The only reason to think about bees or time at all is if you are turning this in to an instructor who takes off points for incorrect or missing units. Otherwise, this exercise is pure pseudocontext.
Worst of all, this exercise might correctly assess students’ abilities to execute a numerical integration algorithm, but it doesn’t come close to measuring whether a student understands what an integral is in the first place and why we are even bringing them up. Even if the student realizes an integral should be used, there’s no discussion of how to choose which method and which parameters within the method, or why. Instead, the exercise flatly tells students not only to use an integral, but what method to use and even how many subdivisions. A student can get a 100% correct answer and have no earthly idea what integration has to do with the question.
A simple fix to the problem statement will change this into a problem. Keep the graph the same and change the text to:
The graph below shows the rate at which a population of honeybees was growing, in bees per week. By about how many bees did the population grow after 24 weeks?
This still may not be a full-blown problem yet — and it’s still pretty pseudocontextual, and the student can guess there should be an integral happening because it’s in the review section for the chapter on integration — but at least now we have to think a lot harder about what to do, and the questions we have to answer are better. How do I get a total change when I’m given a rate? Why can’t I just find the height of the graph at 24? And once we realize that we have to use an integral — and being able to make that realization is one of the main learning objectives of this chapter, or at least it should be — there are more questions. Can I do this with an antiderivative? Can I use geometry in some way? Should I use the Midpoint Rule or some other method? Can I get by with, say, six rectangles? or four? or even two? Why not use 24, or 2400? Is it OK just to guesstimate the area by counting boxes?
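Just to underscore how mechanical the original exercise is, here is the entire "solution" it asks for, sketched in MATLAB; the rate values are invented placeholders, since I can't reprint the book's graph:

```matlab
% Midpoint Rule with six subintervals on [0, 24] weeks.
% The rates r_mid (in bees per week) are invented placeholders, read off
% at the subinterval midpoints t = 2, 6, 10, 14, 18, 22.
dt = 4;                               % width of each subinterval
r_mid = [60 95 170 300 230 120];      % hypothetical rates at the midpoints
growth = sum(r_mid) * dt;             % approximate value of the integral
fprintf('Approximate population growth: %g bees\n', growth);
```

That's the whole thing; there are no decisions left for the student to make.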
I think we who teach calculus and those who write calculus books must do a better job of giving problems to students and not just increasingly complicated exercises. It’s very easy to do so; we just have to give less information and fewer artificial cues to students, and force students to think hard and critically about their tools and how to select the right combination of tools for the job. No doubt, this makes grading harder, but students aren’t going to learn calculus in any real or lasting sense if they don’t grapple with these kinds of problems.
4 Comments
Filed under Calculus, Critical thinking, Math, Problem Solving, Teaching
Tagged as Calculus, Critical thinking, High School Math, integral, Math, Numerical integration, Population, student
29 November 2010 · 9:00 am
## What correlates with problem solving skill?
About a year ago, I started partitioning up my Calculus tests into three sections: Concepts, Mechanics, and Problem Solving. The point values for each are 25, 25, and 50 respectively. The Concepts items are intended to be ones where no calculations are to be performed; instead students answer questions, interpret meanings of results, and draw conclusions based only on graphs, tables, or verbal descriptions. The Mechanics items are just straight-up calculations with no context, like “take the derivative of $y = \sqrt{x^2 + 1}$”. The Problem-Solving items are a mix of conceptual and mechanical tasks and can be either instances of things the students have seen before (e.g. optimization or related rates problems) or some novel situation that is related to, but not identical to, the things they’ve done on homework and so on.
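(For example, that Mechanics item needs nothing beyond a clean application of the chain rule: $\frac{d}{dx}\sqrt{x^2+1} = \frac{2x}{2\sqrt{x^2+1}} = \frac{x}{\sqrt{x^2+1}}$. No context, no interpretation; that's precisely what this section of the test is meant to isolate.)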
I did this to stress to students that the main goal of taking a calculus class is to learn how to solve problems effectively, and that conceptual mastery and mechanical mastery, while different from and to some extent independent of each other, both flow into mastery of problem-solving like tributaries to a river. It also helps me identify specific areas of improvement; if the class’ Mechanics average is high but the Concepts average is low, it tells me we need to work more on Concepts.
I just gave my third (of four) tests to my two sections of Calculus, and for the first time I started paying attention to the relationships between the scores on each section, and it felt like there were some interesting relationships happening between the sections of the test. So I decided to do not only my usual boxplot analysis of the individual parts but to make three scatter plots, pairing off Mechanics vs. Concepts, Problem Solving vs. Concepts, and Mechanics vs. Problem Solving, and look for trends.
Here’s the plot for Mechanics vs. Concepts:
That r-value of 0.6155 is statistically significant at the 0.01 level. Likewise, here’s Problem Solving vs. Concepts:
The r-value here of 0.5570 is obviously less than the first one, but it’s still statistically significant at the 0.01 level.
But check out the Problem Solving vs. Mechanics plot:
There’s a slight upward trend, but it looks disarrayed; and in fact the r = 0.3911 is significant only at the 0.05 level.
What all this suggests is that there is a stronger relationship between conceptual knowledge and mechanics, and between conceptual knowledge and problem solving skill, than there is between mechanical mastery and problem solving skill. In other words, while there appears to be some positive relationship between the ability simply to calculate and the ability to solve problems that involve calculation (are we clear on the difference between those two things?), the relationship between the ability to answer calculus questions involving no calculation and the ability to solve problems that do involve calculation is stronger — and so is the relationship between no-calculation problems and the ability to calculate, which seems really counterintuitive.
If this relationship holds in general — and I think that it does, and I’m not the only one — then clearly the environment most likely to teach calculus students how to be effective problem solvers is not the classroom primarily focused on computation. A healthy, interacting mixture of conceptual and mechanical work — with a primary emphasis on conceptual understanding — would seem to be what we need instead. The fact that this kind of environment stands in stark contrast to the typical calculus experience (both in the way we run our classes and the pedagogy implied in the books we choose) is something well worth considering.
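For anyone who wants to run the same check on their own gradebook, the whole analysis is only a few lines in MATLAB. The score vectors below are placeholders, not my students' actual data:

```matlab
% Correlations between test sections, with p-values from corrcoef.
% These score vectors are placeholders; substitute your own gradebook data.
concepts  = [18 22 14 25 20 16 23 19 21 15]';   % out of 25
mechanics = [20 24 15 23 19 14 22 21 20 13]';   % out of 25
probsolv  = [35 44 22 48 38 25 41 36 39 24]';   % out of 50

[R, P] = corrcoef(mechanics, concepts);
fprintf('Mechanics vs. Concepts:        r = %.4f, p = %.4f\n', R(1,2), P(1,2));
[R, P] = corrcoef(probsolv, concepts);
fprintf('Problem Solving vs. Concepts:  r = %.4f, p = %.4f\n', R(1,2), P(1,2));
[R, P] = corrcoef(probsolv, mechanics);
fprintf('Problem Solving vs. Mechanics: r = %.4f, p = %.4f\n', R(1,2), P(1,2));

scatter(concepts, probsolv);                    % one of the three scatter plots
xlabel('Concepts'); ylabel('Problem Solving');
```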
11 Comments
Filed under Calculus, Critical thinking, Education, Higher ed, Math, Peer instruction, Problem Solving, Teaching
Tagged as Calculus, Education, Math, mathematics, Peer instruction, Problem Solving
15 November 2010 · 9:17 pm
## Technology FAIL day
This morning as I was driving in to work, I got to thinking: Could I teach my courses without all the technology I use? As in, just me, my students, and a chalk/whiteboard with chalk/markers? As I pulled in to the college, I thought: Sure I could. It just wouldn’t be as good or fun without the tech.
Little did I know, today would be centered around living that theory out:
• I planned a Keynote presentation with clicker questions to teach the section on antiderivatives in Calculus. As soon as I tried to get the clickers going, I realized the little USB receiver wasn’t working. Turns out, updating Mac OS X to v10.6.5 breaks the software that runs the receiver. Clicker questions for this morning: Out the window. Hopefully I’ll find a useable laptop for tomorrow, when I’m using even more clicker questions.
• Also in calculus, the laptop inexplicably went into presenter mode when I tried to give the presentation without clicker questions. Most of the time when I try to get it into presenter mode, I can’t do it. This time I couldn’t make it stop.
• The Twitter client on my laptop got stuck in some kind of strange mode such that clicking on anything made it go to Expose.
• I lost the network connection to our department printer halfway through the day.
• GMail went down.
Fortunately everything I had planned could be done without any technology aside from the whiteboard. But when the technology doesn’t work, I have to improvise, and sometimes that works well and sometimes not. In calculus, I just had to revert back to what is often called the “interactive lecture”, which means just a regular lecture where you hope the students ask questions, and it was about as engaging as that sounds.
I do believe I can teach without all this technology, but the kind of teaching I do with the technology is, I think, more inherently engaging and meaningful for students. I ask better questions, interact more freely with students, and highlight the coherence and the big ideas of the material more adeptly with the technology in place. So when the tech fails on me, things seem odd and out of place and contrived. Students pick up on that. Maybe I’m simply addicted to the tech, but I don’t like teaching without it, and my classes aren’t nearly at the same level without it.
3 Comments
Filed under Educational technology, Life in academia, Math, Profhacks, Teaching, Technology
Tagged as Calculus, Classroom response systems, Clickers, ed tech, Educational technology, Math, Technology, twitter
5 September 2010 · 1:37 pm
## This week (and last) in screencasting: Functions!
So we started back to classes this past week, and getting ready has demanded much of my time and blogging capabilities. But I did get some new screencasts done. I finished the series of screencasts I was making for our calculus students to prepare for Mastery Exams, a series of short untimed quizzes over precalculus material that students have to pass with a 100% score. But then I turned around and did some more for my two sections of calculus on functions. There were three of them. The first one covers what a function is, and how we can work with them as formulas:
The second one continues with functions as graphs, tables, and verbal descriptions:
And this third one is all on domain and range:
The reason I made these was that we were doing the first section of the Stewart calculus book in one day of class. If you know this book, you realize this is impossible because there is an enormous amount of stuff crammed into this one section. Two items covered in that section are how to calculate and reduce the difference quotient $\frac{f(a+h) - f(a)}{h}$ and how to do word problems. Each of these topics alone can cover multiple class meetings, since many students are historically rusty or just plain bad at manipulating formulas correctly and suffer instantaneous brain-lock when put into the presence of a word problem. So, my thought was to go all Eric Mazur on them and farm out the material that is most likely to be easy review for them as an outside “reading” assignment, and spend the time in class on the stuff on which they were most likely to need serious help.
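(To be concrete about what "calculate and reduce" means here: for something like $f(x) = x^2$, the whole exercise is the algebra $\frac{f(a+h)-f(a)}{h} = \frac{(a+h)^2 - a^2}{h} = \frac{2ah + h^2}{h} = 2a + h$ for $h \neq 0$; and it's exactly that kind of symbol-pushing that sets off the brain-lock in rusty students.)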
Our first class was last Tuesday and the second class wasn’t until Thursday, so I assigned the three videos and three related exercises from the Stewart book for Thursday, along with instructions to email questions on any of this, or post to our Moodle discussion board. I made up some clicker questions that we used to assess their grasp of the material in these videos, and guess what? Many students didn’t have any problems at all with this material, and those who did got their issues straightened out through discussions with other students as part of the clicker activity.
They’ll be assessed in 2 or 3 other ways on this stuff this week to make sure they really have the material down and are not just being shy about not having it. But it looks like using screencasts to motivate student contact with the material outside of class worked fine, at least as effectively as me lecturing over it. And we had more time for the hard stuff that I wouldn’t expect students to be able to handle, not all of them anyway.
Comments Off
Filed under Calculus, Education, Educational technology, Math, Peer instruction, Screencasts, Teaching
Tagged as Calculus, Education, Function (mathematics), High School Math, Math, Screencast, stewart, youtube
21 August 2010 · 6:50 pm
## This week in screencasting: Contour plots in MATLAB
By my count, this past week I produced and posted 22 different screencasts to YouTube! Almost all of those are short instructional videos for our calculus students taking Mastery Exams on precalculus material. But I did make two more MATLAB-oriented screencasts, like last week. These focus on creating contour plots in MATLAB.
Here’s Part 1:
And Part 2:
I found this topic really interesting and fun to screencast about. Contour plots are so useful and simple to understand — anybody who’s ever hiked or camped has probably used one, in the form of a topographical map — and it was fun to explore the eight (!) different commands that MATLAB has for producing them, each command producing a map that fits a different kind of need. There may be even more commands for contour maps that I’m missing. For readers who just want the flavor without watching the videos, there is a minimal sketch below; the function in it is a made-up example, not the one from the screencasts.
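```matlab
% A few of MATLAB's contour-plotting commands applied to one example function.
% The function below is only an illustration, not the one from the screencasts.
[X, Y] = meshgrid(-3:0.1:3, -3:0.1:3);
Z = X .* exp(-X.^2 - Y.^2);

figure; contour(X, Y, Z, 20);          % basic contour map with 20 levels
xlabel('x'); ylabel('y');

figure; contourf(X, Y, Z, 20);         % filled contours
colorbar;

figure; contour3(X, Y, Z, 20);         % contours drawn at their actual heights

% The "ez" variants skip the meshgrid step entirely:
figure; ezcontour(@(x,y) x.*exp(-x.^2 - y.^2), [-3 3 -3 3]);
```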
I probably won’t match this week’s output next week, as I’ll be on the road in Madison, WI on Monday and Tuesday and there are several faculty meetings in the run-up to the start of the semester. But at the very least, I need to go back and do another two-variable function plot screencast because I inexplicably left off surface plots and the EZMESH and EZSURF commands on last week’s screencasts.
Comments Off
Filed under Calculus, Educational technology, Math, MATLAB, Screencasts, Technology
Tagged as Calculus, Calculus III, Contour map, Math, matlab, Multivariable calculus, Screencast, video, visualization, youtube
17 August 2010 · 4:01 pm
## Why change how we teach?
Sometimes when I read or hear discussions of innovation or change in teaching mathematics or other STEM disciplines, whether it’s me or somebody else doing the discussing, inevitably there’s the following response:
What do we need all that change for? After all, calculus [or whatever] hasn’t changed that much in 400 years, has it?
I’m not a historian of mathematics, so I can’t say how much calculus has or hasn’t changed since the times of Newton and Leibniz or even Euler. But I can say that the context in which calculus is situated has changed – utterly. And it’s those changes that surround calculus that are forcing the teaching of calculus (and many other STEM subjects) to change – radically.
What are those changes?
First, the practical problems that need to be solved and the methods used to solve them have changed. Not too long ago, practical problems could be neatly compartmentalized and solved using a very small palette of methods. I know some things about those problems from my Dad, who was an electrical engineer for 40 years and was with NASA during the Gemini and Apollo projects. The kind of problem he’d get was: Design a circuit board for use in the navigational system of the space capsule. While this was a difficult problem that needed trained specialists, it was unambiguous and could be solved with more or less a subset of the average undergraduate electrical engineering curriculum content, plus human ingenuity. And for the most part, the math was done by hand and on slide rules (with a smattering of newfangled mechanical calculators) and the design was done with stuff from a lab — in other words, standard methods and tools for engineers.
Now, however, problems are completely different and cannot be so easily encapsulated. I can again pull an example from my Dad’s work history. During the last decade of his career, the Houston Oilers NFL franchise moved to Tennessee. Dad was employed by the Nashville Electric Service and the problem he was handed was: Design the power grid for the new Oilers stadium. This problem has some similarities with designing the navigational circuitry for a space capsule, but there are major differences as well because this was a civic project as well as a technical one. How do we make the power supply lines work with the existing road and building configurations? What about surrounding businesses and the impact that the design will have on them? How do we make Bud Adams happy with what we’ve done? The problem quickly overruns any simple categorization, and it required that Dad not only use skills other than those he learned in his (very rigorous!) EE curriculum at Texas Tech University, but also learn new skills on the fly and work with non-engineers who had more in the way of those skills than he did. Also, the methods used to solve the problem were radically different. You can’t design a power grid that large using hand tools; you have to use computers, and computers need alternative representations of the models underlying the design. And the methods themselves lead to new problems.
So it is with calculus or almost any STEM discipline these days. Students today will not go on to work with simple, cleanly-defined, well-posed problems that fit neatly into a box. Nor will they be always doing things by hand; they will be using technology to solve problems, and this requires both a different way of representing the models (for calculus, think “functions”) they use and the flexibility to anticipate the problems that the methods themselves create. This is not what Newton or Leibniz had in mind, but it is the way things are. Our teaching must therefore change to give students a fighting chance at solving these problems, by emphasizing multiple representations of functions, multiple methods for solution of problems, and attention to the problems created by the methods. And of course, we also must focus on teaching problem-solving itself and on the ability to acquire new skills and information independently, because if so much has changed between 1965 and 1995, we can expect about the same amount of change in progressively shorter time spans in the future.
Also, the people who solve these problems, and what we know about how those people learn, have changed. It seems undeniable that college students are different than they were even 20 years ago, much less 200 years ago. Although they may not be natively fluent in the use of technology, they are certainly steeped in technology, and technology is a primary means for how they interact with the rest of the world. Students in college today bring a different set of values, a different cultural context, and a different outlook to their lives and how they learn. This executive summary of research done by the Pew Research Foundation goes into detail on the characteristics of the Millennial generation, and the full report (PDF, 1.3 Mb) — in addition to our own experiences — highlights the differences in this generation versus previous ones. These folks are not the same people we taught in 1995; we therefore cannot expect to teach them in the same way and expect equal or better results.
We also know a lot more now about how people in general, and Millennials in particular, learn things than we did just a few years ago. We are gradually, but also rapidly, realizing through rigorous education research that there are other methods of teaching out there besides lecture and that these methods work better than lecture does in many situations. Instructors are honing the research findings into usable tools through innovative classroom practices that yield statistically verifiable improvements over more traditional ways of teaching. Even traditional modes of teaching are finding willing and helpful partners in various technological tools that lend themselves well to classroom use and student learning. And that technology is improving in cost, accessibility, and performance at an exponential pace, to the point where it just doesn’t make sense not to use it or think about ways teaching can be improved through its use.
Finally, and perhaps at the root of the first two, the culture in which these problems, methods, people, and even the mathematics itself are situated has changed. Technology drives much of this culture. Millennials are highly connected to each other and the world around them and have little patience — for better or worse — for the usual linear, abstracted, and (let’s face it) slow ways in which calculus and other STEM subjects are usually presented. The countercultural force that tends to discourage kids from getting into STEM disciplines early on is probably stronger today than it has ever been, and it seems foolish to try to fight that force with the way STEM disciplines have been presented to students in the past.
Millennials are interested to a (perhaps) surprising degree in making the world a better place, which means they are a lot more interested in solving problems and helping people than they are in epsilon-delta definitions and deriving integrals from summation rules. The globalized economy and highly-connected world in which we all live have made almost every problem worth solving multidisciplinary. There is a much higher premium now placed on getting a list of viable solutions to a problem within a brief time span, as opposed to a single, perfectly right answer within an unlimited time span (or in the time span of a timed exam).
Even mathematics itself has a different sort of culture now than it did even just ten years ago. We are seeing the emergence of massively collaborative mathematical research via social media, the rise of computational proofs from controversy to standard practice, and computational science taking a central role among the important scientific questions of our time. Calculus may not have changed much but its role in the larger mathematical enterprise has evolved, just in the last 10-15 years.
In short, everything that lends itself to the creation of meaning in the world today — that is, today’s culture — has changed from what it used to be. Even the things that remain essentially unchanged from their previous states, like calculus, must fit into a context that has changed.
All this change presents challenges and opportunities for STEM educators. It’s challenging to go back to calculus, and other STEM disciplines, and think about things like: What are the essential elements of this subject that really need to be taught, as opposed to just the topics we really like? What new facets or topics need to be factored in? What’s the best way to factor those in, so that students are really prepared to function in the world past college? And, maybe most importantly, How do we know our students are really prepared? There’s a temptation to burrow back in to what worked for us, when faced with such daunting challenges, but that really doesn’t help students much — nor does it tap into the possibilities of making our subjects, and our students, richer.
6 Comments
Filed under Calculus, Education, Educational technology, Engineering, Engineering education, Higher ed, Math, Problem Solving, Teaching, Technology
Tagged as Bud Adams, Calculus, Education, Engineering, Engineering education, Innovation, Math, mathematics, Newton, Technology
13 August 2010 · 1:21 pm
## This week in screencasting: Making 3D plots in MATLAB
I’ve just started on a binge of screencast-making that will probably continue throughout the fall. Some of these screencasts will support one of my colleagues who is teaching Calculus III this semester; this is our first attempt at making the course MATLAB-centric, and most of the students are alums of the MATLAB course from the spring. So those screencasts will be on topics where MATLAB can be used in multivariable calculus. Other screencasts will be for my two sections of calculus and will focus both on technology training and on additional calculus examples that we don’t have time for in class. Still others will be just random topics that I would like to contribute for the greater good.
Here are the first two. It’s a two-part series on plotting two-variable functions in MATLAB. Each is about 10 minutes long.
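If you can't watch the videos, the basic workflow they build on looks roughly like this; the function here is only an example, not the one used in the screencasts:

```matlab
% Plotting a two-variable function over a rectangular grid in MATLAB.
f = @(x, y) sin(x) .* cos(y);                  % example function only
[X, Y] = meshgrid(linspace(-2*pi, 2*pi, 60));  % square grid of sample points
Z = f(X, Y);                                   % evaluate f on the grid

figure; mesh(X, Y, Z);                         % wireframe plot
xlabel('x'); ylabel('y'); zlabel('z');

figure; surf(X, Y, Z);                         % shaded surface plot
shading interp; colorbar;
```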
Part of the reason I’m doing all this, too, is to force myself to master Camtasia:Mac, which is a program I enjoy but don’t fully understand. Hopefully the production value will improve with use. You’ll probably notice that I discovered the Dynamics Processor effect between the first and second screencasts, as the sound quality of Part 2 is way better than that of Part 1. I’d appreciate any constructive feedback from podcasting/screencasting or Camtasia experts out there.
I’m going to be housing all these screencasts at my newly-created YouTube channel if you’d like to subscribe. And if I manage to do more than one or two a week, I’ll put the “greatest hits” up here on the blog.
Comments Off
Filed under Calculus, Camtasia, Screencasts, Teaching, Technology, Textbook-free
Tagged as Calculus, camtasia, Math, matlab, Multivariable calculus, Screencast, screencasting, Technology
8 August 2010 · 12:47 pm
## Calculus and conceptual frameworks
I was having a conversation recently with a colleague who might be teaching a section of our intro programming course this fall. In sharing my experiences about teaching programming from the MATLAB course, I mentioned that the thing that is really hard about teaching programming is that students often lack a conceptual framework for what they’re learning. That is, they lack a mental structure into which they can place the topics and concepts they’re learning and then see those ideas in their proper place and relationship to each other. Expert learners — like some students who are taking an intro programming course but have been coding since they were 6 years old — have this framework, and the course is a breeze. Others, possibly a large majority of students in a class, have never done any kind of programming, and they will be incapable of really learning programming until they build a conceptual framework to handle it. And it’s the prof’s job to help them build it.
Afterwards, I thought, this is why teaching intro programming is harder than teaching calculus. Because students who make it all the way into a college calculus course surely have a well-developed conceptual framework for mathematics and can understand where the topics and methods in calculus should fit. Right? Hello?
It then hit me just how wrong I was. Students coming into calculus, even if they’ve had the course before in high school, are not guaranteed to have anything like an appropriate conceptual framework for calculus. Some students may have no conceptual framework at all for calculus — they’ll be like intro programming students who have never coded — and so when they see calculus concepts, they’ll revert back to their conceptual frameworks built in prior math courses, which might be robust and might not be. But even then, students may have multiple, mutually contradictory frameworks for mathematics generally owing to different pedagogies, curricula, or experiences with math in the past.
Take, for example, the typical first contact that calculus students get with actual calculus in the Stewart textbook: The tangent problem. The very first example of section 2.1 is a prototype of this problem, and it reads: Find an equation of the tangent line to the parabola $y = x^2$ at the point $P(1,1)$. What follows is the usual initial solution: (1) pick a point $Q$ near $(1,1)$, (2) calculate the slope of the secant line, (3) move $Q$ closer to $P$ and recalculate, and then (4) repeat until the differences between successive approximations dip below some tolerance level.
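(Concretely, the secant slope through $P(1,1)$ and $Q(x, x^2)$ is $m_{PQ} = \frac{x^2 - 1}{x - 1}$, which comes out to $2.5$ at $x = 1.5$, $2.1$ at $x = 1.1$, $2.01$ at $x = 1.01$, and so on; the approximations settle down toward $2$.)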
What is a student going to do with this example? The ideal case — what we think of as a proper conceptual handling of the ideas in the example — would be that the student focuses on the nature of the problem (I am trying to find the slope of a tangent line to a graph at a point), the data involved in the problem (I am given the formula for the function and the point where the tangent line goes), and most importantly the motivation for the problem and why we need something new (I’ve never had to calculate the slope of a line given only one point on it). As the student reads the problem, framed properly in this way, s/he learns: I can find the slope of a tangent line using successive approximations of secant lines, if the difference in approximations dips below a certain tolerance level. The student is then ready for example 2 of this section, which is an application to finding the rate at which a charge on a capacitor is discharged. Importantly, there is no formula for the function in example 2, just a graph.
But the problem is that most students adopt a conceptual framework that worked for them in their earlier courses, which can be summarized as: Math is about getting right answers to the odd-numbered exercises in the book. Students using this framework will approach the tangent problem by first homing in on the first available mathematical notation in the example to get cues for what equation to set up. That notation in this case is:
$m_{PQ} = \frac{x^2 - 1}{x-1}$
Then, in the line below, a specific value of x (1.5) is plugged in. Great! they might think, I’ve got a formula and I just plug a number into it, and I get the right answer: 2.5. But then, reading down a bit further, there are insinuations that the right answer is not 2.5. Stewart says, “…the closer $x$ is to 1…it appears from the tables, the closer $m_{PQ}$ is to 2. This suggests that the slope of the tangent line $t$ should be $m = 2$.” The student with this framework must then be pretty dismayed. What’s this about “it appears” the answer is 2? Is it 2, or isn’t it? What happened to my 2.5? What’s going on? And then they get to example 2, which has no formula in it at all, and at that point any sane person with this framework would give up.
It’s also worth noting that the Stewart book — like many other standard calculus books — does not introduce this tangent line idea until after a lengthy precalculus review chapter, and that chapter typically looks just like what students saw in their Precalculus courses. These treatments do not attempt to be a ramp-up into calculus, and presages of the concepts of calculus are not present. If prior courses didn’t train students on good conceptual frameworks, then this review material actually makes matters worse when it comes time to really learn calculus. They will know how to plug numbers and expressions into a function, but when the disruptively different math of calculus appears, there’s nowhere to put it, except in the plug-and-chug bin that all prior math has gone into.
So it’s extremely important that students going into calculus get a proper conceptual framework for what to do with the material once they see it. Whose responsibility is that? Everybody’s, starting with…
• the instructor. The instructor of a calculus class has to be very deliberate and forthright in bending all elements of the course towards the construction of a framework that will support the massive amount of material that will come in a calculus class. This includes telling students that they need a conceptual framework that works, and informing them that perhaps their previous frameworks were not designed to manage the load that’s coming. The instructor also must be relentless in helping students put new material in its proper place and relationship to prior material.
• But here the textbooks can help, too, by suggesting the framework to be used; it’s certainly better than not specifying the framework at all but just serving up topic after topic as non sequiturs.
• Finally, students have to work at constructing a framework as well; and they should be held accountable not only for their mastery of micro-level calculus topics like the Chain Rule but also their ability to put two or more concepts in relation to each other and to use prior knowledge on novel tasks.
What are your experiences with helping students (in calculus or otherwise) build useable conceptual frameworks for what they are learning? Any tools (like mindmapping software), assessment methods, or other teaching techniques you’d care to share?
9 Comments
Filed under Calculus, Critical thinking, Education, Educational technology, Math, Problem Solving, Teaching, Technology, Textbooks
Tagged as Calculus, Conceptual framework, Education, learning, math education, mathematics, precalculus
15 May 2010 · 11:41 am
## The semester in review
I’ve made it to the end of another semester. Classes ended on Friday, and we have final exams this coming week. It’s been a long and full semester, as you can see by the relative lack of posting going on here since around October. How did things go?
Well, first of all I had a record course load this time around — four different courses, one of which was the MATLAB course that was brand new and outside my main discipline; plus an independent study that was more like an undergraduate research project, and so it required almost as much prep time from me as a regular course.
The Functions and Models class (formerly known as Pre-calculus) has been one of my favorites to teach here, and this class was no exception. We do precalculus a bit differently here, focusing on using functions as data modeling tools, so the main meat of the course is simply looking at data and asking, Are the data linear? If not, are they best fit by a logarithmic, exponential, or power function? Or a polynomial? And what should be the degree of that polynomial? And so on. I enjoy this class because it’s primed for the kind of studio teaching that I’ve come to enjoy. I just bring in some data I’ve found, or which the students have collected, and we play with the data. And these are mainly students who, by virtue of having placed below calculus on our placement exam, have been used to a dry, lecture-oriented math environment, and it’s very cool to see them light up and have fun with math for a change. It was a small class (seven students) and we had fun and learned a lot.
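A typical day's worth of that playing with data, sketched here in MATLAB with invented numbers (the course itself isn't tied to any particular tool), looks something like this:

```matlab
% Deciding whether some (x, y) data are well fit by a power function y = C*x^p,
% by linearizing with logarithms and fitting a line to (log x, log y).
% The data below are invented for illustration.
x = [1 2 3 4 5 6 7 8];
y = [2.1 5.8 10.9 16.8 23.9 31.5 40.2 49.1];

c = polyfit(log(x), log(y), 1);   % fit log(y) = p*log(x) + log(C)
p = c(1);                         % estimated exponent
C = exp(c(2));                    % estimated leading coefficient

plot(x, y, 'o', x, C * x.^p, '-');
legend('data', 'power fit', 'Location', 'northwest');
xlabel('x'); ylabel('y');
fprintf('Fitted model: y = %.2f * x^%.2f\n', C, p);
```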
The Calculus class was challenging, as you can tell from my boxplots posts (first post, second post). The grades in the class were nowhere near where I wanted them to be, nor (I hope) where the students wanted them to be. I think every instructor is going to have a class every now and then where this happens, and the challenge is to find the lessons to be learned and then learn them. If you read those two boxplots posts, you can see some of the lessons and information that I’ve gleaned, and in the fall when I teach two sections of this course there could be some significant changes with respect to getting more active work into the class and more passive work outside the class.
Linear Algebra was a delight. This year we increased the credit load of this class from three hours to four, and the extra hour a week has really transformed what we can do with the course. I had a big class of 15 students (that’s big for us), many of whom are as sharp as you’ll find among undergraduates, and all of whom possess a keen sense of humor and a strong work ethic that makes learning a difficult subject quite doable. I’ll be posting later about their application projects and poster session, which were both terrific.
Computer Tools for Problem Solving (aka the MATLAB course) was a tale of two halves of the semester. The first half of the semester was quite a struggle — against a relatively low comfort level around technology with the students and against the students’ expectations for my teaching. But I tried to listen to the students, giving them weekly questionnaires about how the class is going, and engaging in an ongoing dialogue about what we could be doing better. We made some changes to the course on the fly that didn’t dumb the course down but which made the learning objectives and expectations a lot clearer, and they responded extremely well. By the end of the course, I daresay they were having fun with MATLAB. And more importantly, I was receiving reports from my colleagues that those students were using MATLAB spontaneously to do tasks in those courses. That was the main goal of the course for me — get students to the point where they are comfortable and fluent enough with MATLAB that they’ll pull it up and use it effectively without being told to do so. There are some changes I need to make to next year’s offering of the course, but I’m glad to see that the students were able to come out of the course doing what I wanted them to do.
The independent study on finite fields and applications was quite a trip. Andrew Newman, the young man doing the study with me, is one of the brightest young mathematicians with whom I’ve worked in my whole career, and he took on the project with both hands from the very beginning. The idea was to read through parts of Mullen and Mummert to get basic background in finite field theory; then narrow down his reading to a particular application; then dive in deep to that application. Washington’s book on elliptic curves ended up being the primary text, though, and Andrew ended up studying elliptic curve cryptography and the Diffie-Hellman decision problem. Every independent study has a creative project requirement attached, and his was to implement the decision problem in Sage. He’s currently writing up a paper on his research and we hope to get it published in Mathematics Exchange. (Disclaimer: I’m on the editorial board of Math Exchange.) In the middle of the semester, Andrew found out that he’d been accepted into the summer REU on mathematical cryptology at Northern Kentucky University/University of Cincinnati, and he’ll be heading out there in a few weeks to study (probably) multivariate public-key systems for the summer. I’m extremely proud of Andrew and what he’s been able to do this semester — he certainly knows a lot more about finite fields and elliptic curve crypto than I do now.
In between all the teaching, here are some other things I was able to do:
• Went to the ICTCM in Chicago and presented a couple of papers. Here’s the Prezi for the MATLAB course presentation. Both of those papers are currently being written up for publication in the conference proceedings.
• Helped with hosting the Indiana MAA spring meetings at our place, and I finished up my three-year term as Student Activities Coordinator by putting together this year’s Indiana College Mathematics Competition.
• Did a little consulting work, which I can’t really talk about thanks to the NDA I signed.
• I got a new Macbook Pro thanks to my college’s generous technology grant system. Of course Apple refreshed the Macbook Pro lineup mere weeks later, but them’s the breaks.
• I’m sure there’s more, but I’ve got finals on the brain right now.
In another post I’ll talk about what’s coming up for me this summer and look ahead to the fall.
Comments Off
Filed under Abstract algebra, Calculus, Inverted classroom, Life in academia, Linear algebra, Math, MATLAB, Personal, Teaching, Vocation
Tagged as Add new tag, Calculus, Education, Linear algebra, Math, mathematics, matlab, precalculus, research
http://www.physicsforums.com/showthread.php?t=262982
## Finding height and range of a projectile
1. The problem statement, all variables and given/known data
A test rocket is launched by accelerating it along a 200.0-m incline at 1.25 m/s² starting from rest at point A. The incline rises at 35.0° above the horizontal, and at the instant the rocket leaves it, its engines turn off and it is subject only to gravity (air resistance is ignored). Find:
a) the maximum height above the ground that the rocket reaches
b) the greatest horizontal range of the rocket beyond point A.
2. Relevant equations
$v_x = v_0 \cos\alpha_0$
$v_y = v_0 \sin\alpha_0$
Basically, all the projectile motion formulas.
3. The attempt at a solution
Well, I don't even know how to start because I'm horrible at identifying which to find first. I assume I should find the initial velocity (v0) of the rocket, then time, then find its max. height for a). But for part b), I have no clue. And I'm confused at whether to plug in 1.25 m/s² or 9.8 m/s² as the acceleration. How do I get to the answer of 124 m for a) and 280 m for b)? Can someone walk me through this, please?
Let's take it step by step. What is the speed of the rocket when it reaches the end of the ramp? You know the acceleration and the distance, and that it started from rest (i.e. its initial velocity was zero).
Won't the speed of the rocket be zero, since it's at the highest point? Because the final y-velocity is zero, and the final x-velocity is $v_0 \cos\alpha_0$, so it's zero since it's the same as the initial x-velocity.
Its launched from the ramp. The question states it is accelerated along it until the instant the rocket leaves it. What is the speed when it launches from the ramp?
I can't think of a kind of formula that relates distance, acceleration and speed. .___.
But how do I know what's the initial velocity if I don't know the time? D:
Have a look at the following list. http://www.physicsforums.com/showpos...63&postcount=2
Wait, am I supposed to find the x or y-component of $$v^2 = v_0^2 + 2 a \Delta x$$, and do I plug in the 1.25 acceleration or gravity? =/
The acceleration is along the slope, so you need not worry about the components.
Well, before beginning, I’ll tell you that it was great to see that you’ve posted a projectile motion question, because I just love projectile motion! Anyways, let me clear things up first. The final velocity of the rocket for its journey on the inclined plane will act as the initial velocity for the rocket in its trajectory (and yes, then you will resolve that velocity into components).

The reason you are not being able to find the range correctly is because you are not considering the distance the rocket covered while it was on the inclined plane. It will be equal to 200m*cosѲ (this works in a similar manner to when you found how much height the rocket gained while on the inclined plane, when finding the maximum height). But that’s not where the complications end. You must also consider the distance covered while coming down to the ground from the height it gained at the beginning (due to the inclined plane).

I deduced a formula for finding out the total range from the point when it leaves the inclination to the point when it touches the ground again: ucosѲ {[usinѲ/g] + [sqrt (2h/g)]}. Note that here ‘h’ refers to the maximum height which you deduced in the beginning; the one which included the height gained by the inclined plane. Also, you’ll have to add the distance covered on the inclination to the value you get from this. I deduced this formula simply by adding the time taken by the rocket to cover the distance to the point of its maximum height (when its vertical velocity is zero) and the time taken to cover the distance from that point to the ground. After that I multiplied it by the initial velocity on the x-axis.

I used this formula, and, after a lot of approximation (I repeat: a lot of approximation) I came up with the answer 264m. I’m pretty sure that if you use the exact values and correct calculations, you’ll come up with 280m.

P.S. I know the language of my post gets a bit confusing at times, but if you just quick-read through it a few times, I’m sure you’ll get it!
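Pulling the hints in this thread together as a quick numerical check (using $g = 9.8\ \mathrm{m/s^2}$): the speed at the top of the incline is $v = \sqrt{2ad} = \sqrt{2(1.25)(200.0)} \approx 22.4\ \mathrm{m/s}$, and the rocket leaves the ramp at a height of $200.0\sin 35.0^\circ \approx 114.7\ \mathrm{m}$ with components $v_x = v\cos 35.0^\circ \approx 18.3\ \mathrm{m/s}$ and $v_y = v\sin 35.0^\circ \approx 12.8\ \mathrm{m/s}$. From there the ordinary projectile equations give an extra rise of $v_y^2/(2g) \approx 8.4\ \mathrm{m}$, so the maximum height is about $123\ \mathrm{m}$, essentially the quoted 124 m. For the range, add the $200.0\cos 35.0^\circ \approx 164\ \mathrm{m}$ covered along the incline to $v_x$ times the fall time from $0 = 114.7 + 12.8\,t - 4.9\,t^2$ (about $6.3\ \mathrm{s}$), which gives roughly $164 + 116 \approx 280\ \mathrm{m}$ beyond point A.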
http://www.thulasidas.com/2008-11/are-radio-sources-and-gamma-ray-bursts-luminal-booms.htm
# Are Radio Sources and Gamma Ray Bursts Luminal Booms?
Posted on November 7, 2008
This article was published in the International Journal of Modern Physics D (IJMP–D) in 2007. By January 2008, it had become the Top Accessed Article of the journal.
Although it might seem like a hard core physics article, it is in fact an application of the philosophical insight permeating this blog and my book.
This blog version contains the abstract, introduction and conclusions. The full version of the article is available as a PDF file.
Journal Reference: IJMP-D Vol. 16, No. 6 (2007) pp. 983–1000.
#### Abstract
The softening of the GRB afterglow bears remarkable similarities to the frequency evolution in a sonic boom. At the front end of the sonic boom cone, the frequency is infinite, much like a Gamma Ray Burst (GRB). Inside the cone, the frequency rapidly decreases to infrasonic ranges and the sound source appears at two places at the same time, mimicking the double-lobed radio sources. Although a “luminal” boom violates the Lorentz invariance and is therefore forbidden, it is tempting to work out the details and compare them with existing data. This temptation is further enhanced by the observed superluminality in the celestial objects associated with radio sources and some GRBs. In this article, we calculate the temporal and spatial variation of observed frequencies from a hypothetical luminal boom and show remarkable similarity between our calculations and current observations.
#### Introduction
A sonic boom is created when an object emitting sound passes through the medium faster than the speed of sound in that medium. As the object traverses the medium, the sound it emits creates a conical wavefront, as shown in Figure 1. The sound frequency at this wavefront is infinite because of the Doppler shift. The frequency behind the conical wavefront drops dramatically and soon reaches the infrasonic range. This frequency evolution is remarkably similar to the afterglow evolution of a gamma ray burst (GRB).
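(For reference, this is just the standard textbook Doppler relation, not a result derived in the article: a source emitting at frequency $f_0$ and moving at speed $u$ is observed at $f = f_0 / (1 - (u/c_s)\cos\theta)$, where $c_s$ is the sound speed and $\theta$ is the angle between the velocity and the line of sight to the observer. When $u > c_s$, the denominator vanishes at $\cos\theta = c_s/u$, which is precisely the moment the conical wavefront sweeps past the observer; hence the infinite frequency at the front of the cone.)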
Figure 1: The frequency evolution of sound waves as a result of the Doppler effect in supersonic motion. The supersonic object S is moving along the arrow. The sound waves are “inverted” due to the motion, so that the waves emitted at two different points in the trajectory merge and reach the observer (at O) at the same time. When the wavefront hits the observer, the frequency is infinity. After that, the frequency rapidly decreases.
Gamma Ray Bursts are very brief but intense flashes of $\gamma$ rays in the sky, lasting from a few milliseconds to several minutes, and are currently believed to emanate from cataclysmic stellar collapses. The short flashes (the prompt emissions) are followed by an afterglow of progressively softer energies. Thus, the initial $\gamma$ rays are promptly replaced by X-rays, light and even radio frequency waves. This softening of the spectrum has been known for quite some time, and was first described using a hypernova (fireball) model. In this model, a relativistically expanding fireball produces the $\gamma$ emission, and the spectrum softens as the fireball cools down. The model calculates the energy released in the $\gamma$ region as $10^{53}$–$10^{54}$ ergs in a few seconds. This energy output is roughly 1000 times the total energy released by the sun over its entire lifetime.
More recently, an inverse decay of the peak energy with varying time constant has been used to empirically fit the observed time evolution of the peak energy using a collapsar model. According to this model, GRBs are produced when the energy of highly relativistic flows in stellar collapses is dissipated, with the resulting radiation jets angled properly with respect to our line of sight. The collapsar model estimates a lower energy output because the energy release is not isotropic, but concentrated along the jets. However, the rate of the collapsar events has to be corrected for the fraction of the solid angle within which the radiation jets can appear as GRBs. GRBs are observed roughly at the rate of once a day. Thus, the expected rate of the cataclysmic events powering the GRBs is of the order of $10^4$–$10^6$ per day. Because of this inverse relationship between the rate and the estimated energy output, the total energy released per observed GRB remains the same.
If we think of a GRB as an effect similar to the sonic boom in supersonic motion, the assumed cataclysmic energy requirement becomes superfluous. Another feature of our perception of a supersonic object is that we hear the sound source at two different locations at the same time, as illustrated in Figure 2. This curious effect takes place because the sound waves emitted at two different points in the trajectory of the supersonic object reach the observer at the same instant in time. The end result of this effect is the perception of a symmetrically receding pair of sound sources, which, in the luminal world, is a good description of symmetric radio sources (Double Radio source Associated with Galactic Nucleus or DRAGN).
Figure 2:. The object is flying from $A'$ to $A$ through $B'$ and $B$ at a constant supersonic speed. Imagine that the object emits sound during its travel. The sound emitted at the point $B'$ (which is near the point of closest approach $B$) reaches the observer at $O$ before the sound emitted earlier at $A'$. The instant when the sound at an earlier point $A'$ reaches the observer, the sound emitted at a much later point $A$ also reaches $O$. So, the sound emitted at $A$ and $A'$ reaches the observer at the same time, giving the impression that the object is at these two points at the same time. In other words, the observer hears two objects moving away from $B'$ rather than one real object.
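A small numerical sketch (my own, not from the article; the speed of sound, Mach number and observer distance below are made-up values) makes this concrete: the arrival time of the sound is not a monotonic function of the emission point, so after the boom nearly every arrival time corresponds to two distinct emission points, heard simultaneously.

```python
# Arrival time of sound emitted along a supersonic trajectory (illustrative values only).
import numpy as np

c_s = 340.0        # speed of sound (m/s) -- assumed
v = 2.0 * c_s      # source speed, Mach 2 -- assumed
d = 1000.0         # perpendicular distance from the observer to the trajectory (m) -- assumed

t_e = np.linspace(-10.0, 10.0, 200001)        # emission times, with x = 0 at closest approach
x = v * t_e                                   # emission points along the trajectory
t_arr = t_e + np.sqrt(d**2 + x**2) / c_s      # emission time + sound travel time to observer

i_min = np.argmin(t_arr)                      # the "boom": the earliest arrival time
print(f"boom heard at t = {t_arr[i_min]:.2f} s, emitted from x = {x[i_min]:.0f} m")

# Any arrival time after the boom corresponds to TWO emission points -- the two
# apparent sources receding from the point of closest approach.
t_obs = t_arr[i_min] + 1.0
crossings = np.where(np.diff(np.sign(t_arr - t_obs)))[0]
print("emission points heard simultaneously at t_obs:", np.round(x[crossings], 0))
```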
Radio sources are typically symmetric and seem to be associated with galactic cores, which are currently considered manifestations of space-time singularities or neutron stars. Different classes of such objects associated with Active Galactic Nuclei (AGN) were found in the last fifty years. Figure 3 shows the radio galaxy Cygnus A, an example of such a radio source and one of the brightest radio objects. Many of its features are common to most extragalactic radio sources: the symmetric double lobes, an indication of a core, an appearance of jets feeding the lobes, and the hotspots. Some researchers have reported more detailed kinematical features, such as the proper motion of the hotspots in the lobes.
Symmetric radio sources (galactic or extragalactic) and GRBs may appear to be completely distinct phenomena. However, their cores show a similar time evolution in the peak energy, but with vastly different time constants. The spectra of GRBs rapidly evolve from $\gamma$ region to an optical or even RF afterglow, similar to the spectral evolution of the hotspots of a radio source as they move from the core to the lobes. Other similarities have begun to attract attention in the recent years.
This article explores the similarities between a hypothetical “luminal” boom and these two astrophysical phenomena, although such a luminal boom is forbidden by the Lorentz invariance. Treating GRB as a manifestation of a hypothetical luminal boom results in a model that unifies these two phenomena and makes detailed predictions of their kinematics.
Figure 3: The radio jet and lobes in the hyperluminous radio galaxy Cygnus A. The hotspots in the two lobes, the core region and the jets are clearly visible. (Reproduced from an image courtesy of NRAO/AUI.)
#### Conclusions
In this article, we looked at the spatio-temporal evolution of a supersonic object (both in its position and the sound frequency we hear). We showed that it closely resembles GRBs and DRAGNs if we were to extend the calculations to light, although a luminal boom would necessitate superluminal motion and is therefore forbidden.
This difficulty notwithstanding, we presented a unified model for Gamma Ray Bursts and jet like radio sources based on bulk superluminal motion. We showed that a single superluminal object flying across our field of vision would appear to us as the symmetric separation of two objects from a fixed core. Using this fact as the model for symmetric jets and GRBs, we explained their kinematic features quantitatively. In particular, we showed that the angle of separation of the hotspots was parabolic in time, and the redshifts of the two hotspots were almost identical to each other. Even the fact that the spectra of the hotspots are in the radio frequency region is explained by assuming hyperluminal motion and the consequent redshift of the black body radiation of a typical star. The time evolution of the black body radiation of a superluminal object is completely consistent with the softening of the spectra observed in GRBs and radio sources. In addition, our model explains why there is significant blue shift at the core regions of radio sources, why radio sources seem to be associated with optical galaxies and why GRBs appear at random points with no advance indication of their impending appearance.
Although it does not address the energetics issues (the origin of superluminality), our model presents an intriguing option based on how we would perceive hypothetical superluminal motion. We presented a set of predictions and compared them to existing data from DRAGNs and GRBs. The features such as the blueness of the core, symmetry of the lobes, the transient $\gamma$ and X-Ray bursts, the measured evolution of the spectra along the jet all find natural and simple explanations in this model as perceptual effects. Encouraged by this initial success, we may accept our model based on luminal boom as a working model for these astrophysical phenomena.
It has to be emphasized that perceptual effects can masquerade as apparent violations of traditional physics. An example of such an effect is the apparent superluminal motion, which was explained and anticipated within the context of the special theory of relativity even before it was actually observed. Although the observation of superluminal motion was the starting point behind the work presented in this article, it is by no means an indication of the validity of our model. The similarity between a sonic boom and a hypothetical luminal boom in spatio-temporal and spectral evolution is presented here as a curious, albeit probably unsound, foundation for our model.
One can, however, argue that the special theory of relativity (SR) does not deal with superluminality and, therefore, superluminal motion and luminal booms are not inconsistent with SR. As evidenced by the opening statements of Einstein’s original paper, the primary motivation for SR is a covariant formulation of Maxwell’s equations, which requires a coordinate transformation derived partly from light travel time (LTT) effects, and partly from the assumption that light travels at the same speed with respect to all inertial frames. Despite this dependence on LTT, the LTT effects are currently assumed to apply on a space-time that obeys SR. SR is a redefinition of space and time (or, more generally, reality) in order to accommodate its two basic postulates. It may be that there is a deeper structure to space-time, of which SR is only our perception, filtered through the LTT effects. By treating them as an optical illusion to be applied on a space-time that obeys SR, we may be double counting them. We may avoid the double counting by disentangling the covariance of Maxwell’s equations from the coordinate-transformation part of SR. Treating the LTT effects separately (without attributing their consequences to the basic nature of space and time), we can accommodate superluminality and obtain elegant explanations of the astrophysical phenomena described in this article. Our unified explanation for GRBs and symmetric radio sources, therefore, has implications as far-reaching as our basic understanding of the nature of space and time.
## Comments
1. Roger Brewis says:
Thulasides,
I like it. I see a number of good reasons to hold on to this idea as potentially true.
Often, but not always, the simple answer turns out to be correct, even when initially rejected as clashing with established orthodoxy. It is also wise to avoid spurious simplicity when it fails Popper’s test of falsifiability, such as the many worlds ‘theory’.
What I like about this is that it is firmly based in the world of pre-established physics. This is in my view essential given the many fundamental problems of post 1900 physics.
Let us say for a minute that this explanation might be correct. We are talking then about a shock wave in an underlying background substance that has been comprehensively rejected. Putting that aside, which you are careful not to do, we might ask what would be some of the properties of a universe consistent with your model and hence with an aether.
We would find, for example, that the slowing of clocks was Lorentzian but not relativistic in principle. This is indeed what we see in the clocks in planes experiment, with the eastbound clock ‘running fast’ compared to the other two, as if travelling more slowly in relation to some preferred background. The experiment even suggests a remarkably simple way of determining that preferred frame.
We would find that the motion of the Earth was discernable from observations of any general background radiation, such as the CMBR. Again, as observed.
We would be able to view light as a wave, as Schrodinger did, and to use his work to explain ‘particle like’ effects.
We would need to re-read Maxwell, who assumed a background medium, and rethink the apparent contradictions in his work, in particular his conclusion that light is a transverse wave. In fact, his assumption that electromagnetism involved vortices would fit with the medium you now invoke, and the manner of his calculations on light suggest that it is (transverse) ripples on these vortices that he misinterpreted as light. Such a medium as Schrodinger, Maxwell, Hafele and Keating, Wm & JJ Thomson, and now yourself invoke, would be expected to sustain longitudinal (pressure) waves, removing the need for the hypothesis of the photon, and fitting very nicely with your shock wave theory.
Lots more. Are you sure you want to go down this road? Modern theory has fragmented and its basis is showing signs of unravelling, so perhaps you should!
1. Manoj says:
Hi Roger,
Thanks for posting your comment. You are right, there is more to this line of thinking than just GRBs and AGN jets. The sequel to this paper (http://www.thulasidas.com/2008-11/light-travel-time-effects-and-cosmological-features.htm) discusses some of the implications. But this sequel was too speculative to get published in any decent journal.
Right now, I am too busy with my day job to worry about these things, but I do hope to get back to physics (and may be even philosophy) in a couple of years.
– cheers,
– Manoj
http://mathhelpforum.com/trigonometry/155116-ooh-tricky-question.html
# Thread:
1. ## ooh a tricky question
i don't know whether it's grouped correctly (sorry)
for $0 < \theta < \frac{\pi}{2}$, if
$x = \displaystyle\Sigma_{n=0}^{\infty}(\cos ^{2n} \theta)$
$y = \displaystyle\Sigma_{n=0}^{\infty}(\sin^{2n} \theta)$
$z = \displaystyle\Sigma_{n=0}^{\infty}(\sin^{2n} \theta \cos^{2n} \theta)$
then which of the following are true(multiple choice)
(A) $\frac{1}{x} + \frac{1}{y} = 1$
(B) $x + y +xy =0$
(C) $xyz = xy + z$
(D) $xyz = x + y + z$
2. You may try using mathematical induction.
3. The actual time given for the problem is 3 minutes; using mathematical induction takes at least 10 minutes.
i think we should expand it, take the G.P. sum, and do something with that; it might work
this is a tricky sum for me
4. Originally Posted by grgrsanjay
i don't know whether it's grouped correctly (sorry)
for $0 < \theta < \frac{\pi}{2}$, if
$x = \displaystyle\Sigma_{n=0}^{\infty}(\cos ^2n \theta)$
$y = \displaystyle\Sigma_{n=0}^{\infty}(\sin^2n \theta)$
$z = \displaystyle\Sigma_{n=0}^{\infty}(\sin^2n \theta \cos^2n \theta)$
then which of the following are true(multiple choice)
(A) $\frac{1}{x} + \frac{1}{y} = 1$
(B) $x + y +xy =0$
(C) $xyz = xy + z$
(D) $xyz = x + y + z$
There is something wrong with this question, because $x+y = \displaystyle\Sigma_{n=0}^{\infty}(\cos ^2n \theta + \sin^2n \theta) = \Sigma_{n=0}^{\infty}1$, which is infinite. So at least one of the series for x and y fails to converge. If for example $\theta = \pi/4$ then the three series for x, y, z all diverge, so none of the conditions (A), (B), (C), (D) is true.
5. oh sorry, it was just a typing error. i corrected the question: it's actually sin to the power of 2n
6. Originally Posted by grgrsanjay
oh sorry it was just a typing error i corrected the question actually it sin to the power of 2n
In that case, these are geometric series, and the sums are given by $x = \dfrac1{1-\cos^2\theta} = \dfrac1{\sin^2\theta}$ and $y = \dfrac1{1-\sin^2\theta} = \dfrac1{\cos^2\theta}$, from which it should be clear that (A) is correct.
7. yea i got (A) correct. are any other options correct??? when i substitute a value, (C) and (D) also seem correct to me
8. Originally Posted by grgrsanjay
yea i got (A) correct so any other options are correct??? but when substitute a value (C) and (D) also seem correct to me
Is this a multiple choice question or isn't it?
9. Originally Posted by grgrsanjay
which of the following are true (multiple choice)
says it is multiple
(A) $\frac{1}{x} + \frac{1}{y} = 1$
(B) $x + y +xy =0$
(C) $xyz = xy + z$
(D) $xyz = x + y + z$
me too, getting C and D as answers along with A (when substituting)
10. Originally Posted by ggn
says it is multiple
Yes thank you I can read, my point is that in a valid multiple choice question it's not possible to have more than one correct option, therefore if A is correct then B,C,D are automatically incorrect. If the OP wanted to call into question the validity of the problem, the OP could have specified that.
11. Originally Posted by grgrsanjay
yea i got (A) correct so any other options are correct??? but when substitute a value (C) and (D) also seem correct to me
That's right! I only looked at (A), because when I saw that was correct I assumed that it would be the only correct choice. But (C) and (D) are also correct.
12. yea but how to prove (C) and (D) are correct?? tell me please
13. Originally Posted by grgrsanjay
yea but how to prove (C) and (D) are correct?? tell me please
The formulas for x, y and z are $x = \dfrac1{\sin^2\theta},\quad y = \dfrac1{\cos^2\theta},\quad z = \dfrac1{1-\sin^2\theta\cos^2\theta}$. Substitute those into the equations (C) and (D), and check that both sides agree.
14. what have you used to get that formula??
15. Originally Posted by grgrsanjay
what have you used to get that formula??
Sum of an infinite geometric series, $\displaystyle\sum_{n=0}^\infty x^n = \frac1{1-x}$.
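To double-check the algebra, here is a short SymPy sketch (mine, not part of the thread) that substitutes the closed forms $x = 1/\sin^2\theta$, $y = 1/\cos^2\theta$, $z = 1/(1-\sin^2\theta\cos^2\theta)$ into the four options; (A), (C) and (D) should simplify to zero, while (B) does not.

```python
# SymPy check of the four options using the closed forms derived above.
from sympy import symbols, sin, cos, simplify

t = symbols('t')
x = 1/sin(t)**2
y = 1/cos(t)**2
z = 1/(1 - sin(t)**2*cos(t)**2)

print(simplify(1/x + 1/y - 1))        # (A): 0
print(simplify(x + y + x*y))          # (B): not identically 0, so (B) fails
print(simplify(x*y*z - (x*y + z)))    # (C): 0
print(simplify(x*y*z - (x + y + z)))  # (D): 0
```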
http://unapologetic.wordpress.com/2012/02/01/maxwells-equations/
The Unapologetic Mathematician
Maxwell’s Equations
Okay, let’s see where we are. There is such a thing as charge, and there is such a thing as current, which often — but not always — arises from charges moving around.
We will write our charge distribution as a function $\rho$ and our current distribution as a vector-valued function $J$, though these are not always “functions” in the usual sense. Often they will be “distributions” like the Dirac delta; we haven’t really gotten into their formal properties, but this shouldn’t cause us too much trouble since most of the time we’ll use them — like we’ve used the delta — to restrict integrals to smaller spaces.
Anyway, charge and current are “conserved”, in that they obey the conservation law:
$\displaystyle\nabla\cdot J=-\frac{\partial\rho}{\partial t}$
which states that the amount of current “flowing out of a point” is the rate at which the charge at that point is decreasing. This is justified by experiment.
Coulomb’s law says that electric charges give rise to an electric field. Given the charge distribution $\rho$ we have the differential contribution to the electric field at the point $r$:
$\displaystyle dE(r)=\frac{1}{4\pi\epsilon_0}\rho\frac{r}{\lvert r\rvert^3}dV$
and we get the whole electric field by integrating this over the charge distribution. This, again, is justified by experiment.
The Biot-Savart law says that electric currents give rise to a magnetic field. Given the current distribution $J$ we have the differential contribution to the magnetic field at the point $r$:
$\displaystyle dB(r)=\frac{\mu_0}{4\pi}J\times\frac{r}{\lvert r\rvert^3}dV$
which again we integrate over the current distribution to calculate the full magnetic field at $r$. This, again, is justified by experiment.
The electric and magnetic fields give rise to a force by the Lorentz force law. If a test particle of charge $q$ is moving at velocity $v$ through electric and magnetic fields $E$ and $B$, it feels a force of
$\displaystyle F=q(E+v\times B)$
But we don’t work explicitly with force as much as we do with the fields. We do have an analogue for work, though — electromotive force:
$\displaystyle\mathcal{E}=-\int\limits_CE\cdot dr$
One unexpected source of electromotive force comes from our fourth and final experimentally-justified axiom: Faraday’s law of induction
$\displaystyle\mathcal{E}=\frac{\partial}{\partial t}\int\limits_\Sigma B\cdot dS$
This says that the electromotive force around a circuit is equal to the rate of change of magnetic flux through any surface bounded by the circuit.
Using these four experimental results and definitions, we can derive Maxwell’s equations:
$\displaystyle\begin{aligned}\nabla\cdot E&=\frac{1}{\epsilon_0}\rho\\\nabla\cdot B&=0\\\nabla\times E&=-\frac{\partial B}{\partial t}\\\nabla\times B&=\mu_0J+\epsilon_0\mu_0\frac{\partial E}{\partial t}\end{aligned}$
The first is Gauss’ law and the second is Gauss’ law for magnetism. The third is directly equivalent to Faraday’s law of induction, while the last is Ampère’s law, with Maxwell’s correction.
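As a quick consistency check (my own aside, not part of the original post): taking the divergence of the last equation and using Gauss’ law gives back the conservation law $\nabla\cdot J=-\frac{\partial\rho}{\partial t}$, because the divergence of a curl vanishes. A short SymPy sketch of that vector identity for an arbitrary smooth field:

```python
# Symbolic check (assumes SymPy's vector module) that div(curl B) = 0 for any
# smooth vector field B -- the identity that makes the Ampere-Maxwell law
# consistent with charge conservation once Gauss' law is used.
from sympy import Function, simplify
from sympy.vector import CoordSys3D, curl, divergence

N = CoordSys3D('N')
Bx = Function('Bx')(N.x, N.y, N.z)
By = Function('By')(N.x, N.y, N.z)
Bz = Function('Bz')(N.x, N.y, N.z)
B = Bx*N.i + By*N.j + Bz*N.k

print(simplify(divergence(curl(B))))  # prints 0
```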
http://mathbabe.org/2012/07/19/hcssim-workshop-day-15-3/
# mathbabe
Exploring and venting about quantitative issues
## HCSSiM Workshop day 15
July 19, 2012
This is a continuation of this, where I take notes on my workshop at HCSSiM.
Aaron was visiting my class yesterday and talked about Sandpiles. Here are his notes:
Sandpiles: what they are
For fixed $m,n \ge 2$, an $m \times n$ beach is a grid with some amount of sand in each spot. If there is too much sand in one place, it topples, sending one grain of sand to each of its neighbors (thereby losing 4 grains of sand). If this happens on an edge of the beach, one of the grains of sand falls off the edge and is gone forever. If it happens at a corner, 2 grains are lost. If there’s no toppling to do, the beach is stable. Here’s a 3-by-3 example I stole from here:
Do stable $m \times n$ beaches form a group?
Answer: well, you can add them together (pointwise) and then let that stabilize until you’ve got back to a stable beach (not trivial to prove this always settles! But it does). But is the sum well-defined?
In other words, if there is a cascade of toppling, does it matter what order things topple? Will you always reach the same stable beach regardless of how you topple?
Turns out the answer is yes, if you think about these grids as huge vectors and toppling as adding other 2-dimensional vectors with a '-4' in one spot, a '1' in each of the four spots neighboring that, and '0' elsewhere. It inherits commutativity from addition in the integers.
Wait! Is there an identity? Yep, the beach with no sand; it doesn’t change anything when you add it.
Wait!! Are there inverses? Hmmmmm….
Lemma: There is no way to get back to all 0's from any beach that has sand.
Proof. Imagine you could. Then the last topple would have to end up with no sand. But every topple adds sand to at least 2 sites (4 if the toppling happens in the center, 3 if on an edge, 2 if on a corner). Equivalently, nothing will topple unless there are at least 4 grains of sand total, and toppling never loses more than 2 grains, so you can never get down to 0.
Conclusion: there are no inverses; you cannot get back to the ’0′ grid from anywhere. So it’s not a group.
Try again
Question: Are there beaches that you can get back to by adding sand?
There are: on a 2-by-2 beach, the '2' grid (which means a 2 in every spot) plus itself is the '4' grid, and that topples back to the '2' grid if you topple every spot once. Also, the '2' grid adds to the $(2, 0, 0, 2)$ grid and gets it back.
Wow, it seems like the '2' grid is some kind of additive identity, at least for these two elements. But note that the '1' grid plus the '2' grid is the '3' grid, which doesn't topple back to the '1' grid. So the '2' grid doesn't work as an identity for everything.
We need another definition.
Recurrent sandpiles
A stable beach C is recurrent if (i) it is stable, and (ii) given any beach A, there is a beach B such that C is the stabilization of A+B. We just write this C = A+B but we know that’s kind of cheating.
Alternative definition: a stable beach C is recurrent if (i) it is stable, and (ii) you can get to C by starting at the maximum '3' grid, adding sand (call that part D), and toppling until you get something stable. C = '3' + D.
It's not hard to show these definitions are equivalent: if you have the first, let A = '3'. If you have the second, and if A is stable, write A + A' = '3', and we have B = A' + D. Then convince yourself A doesn't need to be stable.
Letting A=C we get a beach E so C = C+E, and E looks like an identity.
It turns out that if you have two recurrent beaches, then if you can get back to one using a beach E you can get back to the other using that same beach E (if you look for the identity for C + D, note that (C+D)+E = (C+E) + D = C+D; all recurrent beaches are of the form C+D so we're done). Then that E is an identity element under beach addition for recurrent beaches.
Is the identity recurrent? Yes it is (why? this is hard and we won’t prove it). So you can also get from A to the identity, meaning there are inverses.
The recurrent beaches form a group!
What is the identity element? On a 2-by-2 beach it is the '2' grid. The fact that it didn't act as an identity on the '1' grid was caused by the fact that the '1' grid isn't itself recurrent so isn't considered to be inside this group.
Try to guess what it is on a 2-by-3 beach. Were you right? What is the order of the '2' grid as a 2-by-3 beach?
Try to guess what the identity looks like on a 198-by-198 beach. Were you right? Here’s a picture of that:
We looked at some identities on other grids, and we watched an app generate one. You can play with this yourself. (Insert link).
The group of recurrent beaches is called the m-by-n sandpile group. I wanted to show it to the kids because I think it is a super cool example of a finite commutative group where it is hard to know what the identity element looks like.
You can do all sorts of weird things with sandpiles, like adding grains of sand randomly and seeing what happens. You can even model avalanches with this. There’s a sandpile applet you can go to and play with.
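If you want to experiment without the applet, here is a minimal Python sketch (mine, not from the workshop) of beach addition and toppling; it reproduces the 2-by-2 examples above.

```python
# A minimal sketch of beach addition and toppling.
# stabilize() topples any site with 4 or more grains until the beach is stable;
# grains pushed off an edge or corner are lost, exactly as described above.
import numpy as np

def stabilize(beach):
    b = np.array(beach, dtype=int)
    m, n = b.shape
    while True:
        over = np.argwhere(b >= 4)
        if len(over) == 0:
            return b
        for i, j in over:
            b[i, j] -= 4
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= i + di < m and 0 <= j + dj < n:
                    b[i + di, j + dj] += 1

def add(a, c):
    """Beach addition: pointwise sum, then stabilize."""
    return stabilize(np.array(a) + np.array(c))

# On a 2-by-2 beach the '2' grid acts as the identity on recurrent beaches:
two = [[2, 2], [2, 2]]
print(add(two, two))               # back to [[2, 2], [2, 2]]
print(add(two, [[2, 0], [0, 2]]))  # back to [[2, 0], [0, 2]]
```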
http://en.wikipedia.org/wiki/Rigid_body
# Rigid body
The position of a rigid body is determined by the position of its center of mass and by its attitude (at least six parameters in total).[1]
In physics, a rigid body is an idealization of a solid body in which deformation is neglected. In other words, the distance between any two given points of a rigid body remains constant in time regardless of external forces exerted on it. Even though such an object cannot physically exist due to relativity, objects can normally be assumed to be perfectly rigid if they are not moving near the speed of light.
In classical mechanics a rigid body is usually considered as a continuous mass distribution, while in quantum mechanics a rigid body is usually thought of as a collection of point masses. For instance, in quantum mechanics molecules (consisting of the point masses: electrons and nuclei) are often seen as rigid bodies (see classification of molecules as rigid rotors).
## Kinematics
### Linear and angular position
The position of a rigid body is the position of all the particles of which it is composed. To simplify the description of this position, we exploit the property that the body is rigid, namely that all its particles maintain the same distance relative to each other. If the body is rigid, it is sufficient to describe the position of at least three non-collinear particles. This makes it possible to reconstruct the position of all the other particles, provided that their time-invariant position relative to the three selected particles is known. However, typically a different, mathematically more convenient, but equivalent approach is used. The position of the whole body is represented by:
1. the linear position or position of the body, namely the position of one of the particles of the body, specifically chosen as a reference point (typically coinciding with the center of mass or centroid of the body), together with
2. the angular position (also known as orientation, or attitude) of the body.
Thus, the position of a rigid body has two components: linear and angular, respectively.[2] The same is true for other kinematic and kinetic quantities describing the motion of a rigid body, such as linear and angular velocity, acceleration, momentum, impulse, and kinetic energy.[3]
The linear position can be represented by a vector with its tail at an arbitrary reference point in space (the origin of a chosen coordinate system) and its tip at an arbitrary point of interest on the rigid body, typically coinciding with its center of mass or centroid. This reference point may define the origin of a coordinate system fixed to the body.
There are several ways to numerically describe the orientation of a rigid body, including a set of three Euler angles, a quaternion, or a direction cosine matrix (also referred to as a rotation matrix). All these methods actually define the orientation of a basis set (or coordinate system) which has a fixed orientation relative to the body (i.e. rotates together with the body), relative to another basis set (or coordinate system), from which the motion of the rigid body is observed. For instance, a basis set with fixed orientation relative to an airplane can be defined as a set of three orthogonal unit vectors b1, b2, b3, such that b1 is parallel to the chord line of the wing and directed forward, b2 is normal to the plane of symmetry and directed rightward, and b3 is given by the cross product $b_3 = b_1 \times b_2$.
In general, when a rigid body moves, both its position and orientation vary with time. In the kinematic sense, these changes are referred to as translation and rotation, respectively. Indeed, the position of a rigid body can be viewed as a hypothetic translation and rotation (roto-translation) of the body starting from a hypothetic reference position (not necessarily coinciding with a position actually taken by the body during its motion).
### Linear and angular velocity
Velocity (also called linear velocity) and angular velocity are measured with respect to a frame of reference.
The linear velocity of a rigid body is a vector quantity, equal to the time rate of change of its linear position. Thus, it is the velocity of a reference point fixed to the body. During purely translational motion (motion with no rotation), all points on a rigid body move with the same velocity. However, when motion involves rotation, the instantaneous velocity of any two points on the body will generally not be the same. Two points of a rotating body will have the same instantaneous velocity only if they happen to lie on an axis parallel to the instantaneous axis of rotation.
Angular velocity is a vector quantity that describes the angular speed at which the orientation of the rigid body is changing and the instantaneous axis about which it is rotating (the existence of this instantaneous axis is guaranteed by the Euler's rotation theorem). All points on a rigid body experience the same angular velocity at all times. During purely rotational motion, all points on the body change position except for those lying on the instantaneous axis of rotation. The relationship between orientation and angular velocity is not directly analogous to the relationship between position and velocity. Angular velocity is not the time rate of change of orientation, because there is no such concept as an orientation vector that can be differentiated to obtain the angular velocity.
## Kinematical equations
Main article: Rigid body kinematics
### Addition theorem for angular velocity
The angular velocity of a rigid body B in a reference frame N is equal to the sum of the angular velocity of a rigid body D in N and the angular velocity of B with respect to D:[4]
${}^\mathrm{N}\!\boldsymbol{\omega}^\mathrm{B} = {}^\mathrm{N}\!\boldsymbol{\omega}^\mathrm{D} + {}^\mathrm{D}\!\boldsymbol{\omega}^\mathrm{B}$.
In this case, rigid bodies and reference frames are indistinguishable and completely interchangeable.
### Addition theorem for position
For any set of three points P, Q, and R, the position vector from P to R is the sum of the position vector from P to Q and the position vector from Q to R:
$\mathbf{r}^\mathrm{PR} = \mathbf{r}^\mathrm{PQ} + \mathbf{r}^\mathrm{QR}$.
### Mathematical definition of velocity
The velocity of point P in reference frame N is defined using the time derivative in N of the position vector from O to P:[5]
${}^\mathrm{N}\mathbf{v}^\mathrm{P} = \frac{{}^\mathrm{N}\mathrm{d}}{\mathrm{d}t}(\mathbf{r}^\mathrm{OP})$
where O is any arbitrary point fixed in reference frame N, and the N to the left of the d/dt operator indicates that the derivative is taken in reference frame N. The result is independent of the selection of O so long as O is fixed in N.
### Mathematical definition of acceleration
The acceleration of point P in reference frame N is defined using the time derivative in N of its velocity:[5]
${}^\mathrm{N}\mathbf{a}^\mathrm{P} = \frac{^\mathrm{N}\mathrm{d}}{\mathrm{d}t} ({}^\mathrm{N}\mathbf{v}^\mathrm{P})$.
### Velocity of two points fixed on a rigid body
For two points P and Q that are fixed on a rigid body B, where B has an angular velocity $\scriptstyle{^\mathrm{N}\boldsymbol{\omega}^\mathrm{B}}$ in the reference frame N, the velocity of Q in N can be expressed as a function of the velocity of P in N:[6]
${}^\mathrm{N}\mathbf{v}^\mathrm{Q} = {}^\mathrm{N}\!\mathbf{v}^\mathrm{P} + {}^\mathrm{N}\boldsymbol{\omega}^\mathrm{B} \times \mathbf{r}^\mathrm{PQ}$.
### Acceleration of two points fixed on a rigid body
By differentiating the equation for the Velocity of two points fixed on a rigid body in N with respect to time, the acceleration in reference frame N of a point Q fixed on a rigid body B can be expressed as
${}^\mathrm{N}\mathbf{a}^\mathrm{Q} = {}^\mathrm{N}\mathbf{a}^\mathrm{P} + {}^\mathrm{N}\boldsymbol{\omega}^\mathrm{B} \times \left( {}^\mathrm{N}\boldsymbol{\omega}^\mathrm{B} \times \mathbf{r}^\mathrm{PQ} \right) + {}^\mathrm{N}\boldsymbol{\alpha}^\mathrm{B} \times \mathbf{r}^\mathrm{PQ}$
where $\scriptstyle{{}^\mathrm{N}\!\boldsymbol{\alpha}^\mathrm{B}}$ is the angular acceleration of B in the reference frame N.[6]
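As an illustration of the two relations above, here is a small NumPy sketch (not from the article; all numerical values are arbitrary) that computes the velocity and acceleration of a second body-fixed point Q from the motion of P and the body's angular velocity and angular acceleration.

```python
# Numerical sketch of v_Q = v_P + w x r_PQ and
# a_Q = a_P + w x (w x r_PQ) + alpha x r_PQ  (all values assumed).
import numpy as np

w     = np.array([0.0, 0.0, 2.0])   # angular velocity of B in N, rad/s
alpha = np.array([0.0, 0.0, 0.5])   # angular acceleration of B in N, rad/s^2
v_P   = np.array([1.0, 0.0, 0.0])   # velocity of P in N, m/s
a_P   = np.array([0.0, 0.0, 0.0])   # acceleration of P in N, m/s^2
r_PQ  = np.array([0.0, 1.0, 0.0])   # position vector from P to Q, fixed in B, m

v_Q = v_P + np.cross(w, r_PQ)
a_Q = a_P + np.cross(w, np.cross(w, r_PQ)) + np.cross(alpha, r_PQ)

print("v_Q =", v_Q)   # [-1.  0.  0.]
print("a_Q =", a_Q)   # [-0.5 -4.   0. ]
```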
### Velocity of one point moving on a rigid body
If the point R is moving in rigid body B while B moves in reference frame N, then the velocity of R in N is
${}^\mathrm{N}\mathbf{v}^\mathrm{R} = {}^\mathrm{N}\mathbf{v}^\mathrm{Q} + {}^\mathrm{B}\mathbf{v}^\mathrm{R}$.
where Q is the point fixed in B that is instantaneously coincident with R at the instant of interest.[7] This relation is often combined with the relation for the Velocity of two points fixed on a rigid body.
### Acceleration of one point moving on a rigid body
The acceleration in reference frame N of the point R moving in body B while B is moving in frame N is given by
${}^\mathrm{N}\mathbf{a}^\mathrm{R} = {}^\mathrm{N}\mathbf{a}^\mathrm{Q} + {}^\mathrm{B}\mathbf{a}^\mathrm{R} + 2 {}^\mathrm{N}\boldsymbol{\omega}^\mathrm{B} \times {}^\mathrm{B}\mathbf{v}^\mathrm{R}$
where Q is the point fixed in B that is instantaneously coincident with R at the instant of interest.[7] This equation is often combined with Acceleration of two points fixed on a rigid body.
### Other quantities
If C is the origin of a local coordinate system L, attached to the body,
• the spatial or twist acceleration of a rigid body is defined as the spatial acceleration of C (as opposed to material acceleration above);
$\boldsymbol\psi(t,\mathbf{r}_0) = \mathbf{a}(t,\mathbf{r}_0) - \boldsymbol\omega(t) \times \mathbf{v}(t,\mathbf{r}_0) = \boldsymbol\psi_c(t) + \boldsymbol\alpha(t) \times A(t) \mathbf{r}_0$
where
• $\mathbf{r}_0$ represents the position of the point/particle with respect to the reference point of the body in terms of the local coordinate system L (the rigidity of the body means that this does not depend on time)
• $A(t)\,$ is the orientation matrix, an orthogonal matrix with determinant 1, representing the orientation (angular position) of the local coordinate system L, with respect to the arbitrary reference orientation of another coordinate system G. Think of this matrix as three orthogonal unit vectors, one in each column, which define the orientation of the axes of L with respect to G.
• $\boldsymbol\omega(t)$ represents the angular velocity of the rigid body
• $\mathbf{v}(t,\mathbf{r}_0)$ represents the total velocity of the point/particle
• $\mathbf{a}(t,\mathbf{r}_0)$ represents the total acceleration of the point/particle
• $\boldsymbol\alpha(t)$ represents the angular acceleration of the rigid body
• $\boldsymbol\psi(t,\mathbf{r}_0)$ represents the spatial acceleration of the point/particle
• $\boldsymbol\psi_c(t)$ represents the spatial acceleration of the rigid body (i.e. the spatial acceleration of the origin of L)
In 2D the angular velocity is a scalar, and matrix A(t) simply represents a rotation in the xy-plane by an angle which is the integral of the angular velocity over time.
Vehicles, walking people, etc. usually rotate according to changes in the direction of the velocity: they move forward with respect to their own orientation. Then, if the body follows a closed orbit in a plane, the angular velocity integrated over a time interval in which the orbit is completed once, is an integer times 360°. This integer is the winding number with respect to the origin of the velocity. Compare the amount of rotation associated with the vertices of a polygon.
## Kinetics
Main article: Rigid body dynamics
Any point that is rigidly connected to the body can be used as reference point (origin of coordinate system L) to describe the linear motion of the body (the linear position, velocity and acceleration vectors depend on the choice).
However, depending on the application, a convenient choice may be:
• the center of mass of the whole system, which generally has the simplest motion for a body moving freely in space;
• a point such that the translational motion is zero or simplified, e.g. on an axle or hinge, at the center of a ball and socket joint, etc.
When the center of mass is used as reference point:
• The (linear) momentum is independent of the rotational motion. At any time it is equal to the total mass of the rigid body times the translational velocity.
• The angular momentum with respect to the center of mass is the same as without translation: at any time it is equal to the inertia tensor times the angular velocity. When the angular velocity is expressed with respect to a coordinate system coinciding with the principal axes of the body, each component of the angular momentum is a product of a moment of inertia (a principal value of the inertia tensor) times the corresponding component of the angular velocity; the torque is the inertia tensor times the angular acceleration.
• Possible motions in the absence of external forces are translation with constant velocity, steady rotation about a fixed principal axis, and also torque-free precession.
• The net external force on the rigid body is always equal to the total mass times the translational acceleration (i.e., Newton's second law holds for the translational motion, even when the net external torque is nonzero, and/or the body rotates).
• The total kinetic energy is simply the sum of translational and rotational energy.
## Geometry
Two rigid bodies are said to be different (not copies) if there is no proper rotation from one to the other. A rigid body is called chiral if its mirror image is different in that sense, i.e., if it has either no symmetry or its symmetry group contains only proper rotations. In the opposite case an object is called achiral: the mirror image is a copy, not a different object. Such an object may have a symmetry plane, but not necessarily: there may also be a plane of reflection with respect to which the image of the object is a rotated version. The latter applies for S2n, of which the case n = 1 is inversion symmetry.
For a (rigid) rectangular transparent sheet, inversion symmetry corresponds to having on one side an image without rotational symmetry and on the other side an image such that what shines through is the image at the top side, upside down. We can distinguish two cases:
• the sheet surface with the image is not symmetric - in this case the two sides are different, but the mirror image of the object is the same, after a rotation by 180° about the axis perpendicular to the mirror plane.
• the sheet surface with the image has a symmetry axis - in this case the two sides are the same, and the mirror image of the object is also the same, again after a rotation by 180° about the axis perpendicular to the mirror plane.
A sheet with a through and through image is achiral. We can distinguish again two cases:
• the sheet surface with the image has no symmetry axis - the two sides are different
• the sheet surface with the image has a symmetry axis - the two sides are the same
## Configuration space
The configuration space of a rigid body with one point fixed (i.e., a body with zero translational motion) is given by the underlying manifold of the rotation group SO(3). The configuration space of a nonfixed (with non-zero translational motion) rigid body is E+(3), the subgroup of direct isometries of the Euclidean group in three dimensions (combinations of translations and rotations).
## Notes
1. Lorenzo Sciavicco, Bruno Siciliano (2000). "§2.4.2 Roll-pitch-yaw angles". Modelling and control of robot manipulators (2nd ed.). Springer. p. 32. ISBN 1-85233-221-2.
2. In general, the position of a point or particle is also known, in physics, as linear position, as opposed to the angular position of a line, or line segment (e.g., in circular motion, the "radius" joining the rotating point with the center of rotation), or basis set, or coordinate system.
3. In kinematics, linear means "along a straight or curved line" (the path of the particle in space). In mathematics, however, linear has a different meaning. In both contexts, the word "linear" is related to the word "line". In mathematics, a line is often defined as a straight curve. For those who adopt this definition, a curve can be straight, and curved lines are not supposed to exist. In kinematics, the term line is used as a synonym of the term trajectory, or path (namely, it has the same non-restricted meaning as that given, in mathematics, to the word curve). In short, both straight and curved lines are supposed to exist. In kinematics and dynamics, the following words refer to the same non-restricted meaning of the term "line":
• "linear" (= along a straight or curved line),
• "rectilinear" (= along a straight line, from Latin rectus = straight, and linere = spread),
• "curvilinear" (=along a curved line, from Latin curvus = curved, and linere = spread).
In topology and meteorology, the term "line" has the same meaning; namely, a contour line is a curve.
4. Kane, Thomas; Levinson, David (1996). "2-4 Auxiliary Reference Frames". Dynamics Online. Sunnyvale, California: OnLine Dynamics, Inc.
5. ^ a b Kane, Thomas; Levinson, David (1996). "2-6 Velocity and Acceleration". Dynamics Online. Sunnyvale, California: OnLine Dynamics, Inc.
6. ^ a b Kane, Thomas; Levinson, David (1996). "2-7 Two Points Fixed on a Rigid Body". Dynamics Online. Sunnyvale, California: OnLine Dynamics, Inc.
7. ^ a b Kane, Thomas; Levinson, David (1996). "2-8 One Point Moving on a Rigid Body". Dynamics Online. Sunnyvale, California: OnLine Dynamics, Inc.
## References
• Roy Featherstone (1987). Robot Dynamics Algorithms. Springer. ISBN 0-89838-230-0. This reference effectively combines screw theory with rigid body dynamics for robotic applications. The author also chooses to use spatial accelerations extensively in place of material accelerations as they simplify the equations and allow for compact notation.
• JPL DARTS page has a section on spatial operator algebra (link: [1]) as well as an extensive list of references (link: [2]).
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aop/1176995765
### Bonferroni Inequalities
Janos Galambos
Source: Ann. Probab. Volume 5, Number 4 (1977), 577-581.
#### Abstract
Let $A_1, A_2, \cdots, A_n$ be events on a probability space. Let $S_{k,n}$ be the $k$th binomial moment of the number $m_n$ of those $A$'s which occur. An estimate on the distribution $y_t = P(m_n \geqq t)$ by a linear combination of $S_{1,n}, S_{2,n}, \cdots, S_{n,n}$ is called a Bonferroni inequality. We present for proving Bonferroni inequalities a method which makes use of the following two facts: the sequence $y_t$ is decreasing and $S_{k,n}$ is a linear combination of the $y_t$. By this method, we significantly simplify a recent proof for the sharpest possible lower bound on $y_1$ in terms of $S_{1,n}$ and $S_{2,n}$. In addition, we obtain an extension of known bounds on $y_t$ in the spirit of a recent extension of the method of inclusion and exclusion.
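As a concrete illustration of the objects involved (my own toy example, not from the paper): $S_{k,n}$ is the expectation of $\binom{m_n}{k}$, and the most familiar Bonferroni inequalities, $S_{1,n} - S_{2,n} \leqq P(m_n \geqq 1) \leqq S_{1,n}$, can be checked by simulation for a small family of dependent events.

```python
# Monte Carlo illustration of binomial moments S_{k,n} = E[ C(m_n, k) ] and the
# most familiar Bonferroni bounds  S_{1,n} - S_{2,n} <= P(m_n >= 1) <= S_{1,n}.
import numpy as np
from math import comb

rng = np.random.default_rng(0)
n, trials = 5, 200_000
U = rng.random((trials, 1))
thresholds = np.linspace(0.15, 0.35, n)     # A_i = {U < t_i}: strongly dependent events
indicators = (U < thresholds).astype(int)   # shape (trials, n)
m = indicators.sum(axis=1)                  # m_n = number of A_i that occur

S1 = np.mean([comb(int(k), 1) for k in m])
S2 = np.mean([comb(int(k), 2) for k in m])
p_one = np.mean(m >= 1)                     # P(m_n >= 1)

print(f"S1 - S2 = {S1 - S2:.3f} <= P(m_n >= 1) = {p_one:.3f} <= S1 = {S1:.3f}")
```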
Primary Subjects: 60C05
Secondary Subjects: 60E05
Full-text: Open access
Permanent link to this document: http://projecteuclid.org/euclid.aop/1176995765
http://mathhelpforum.com/statistics/161310-z-score.html
# Thread:
1. ## z-score
the average mark on a set of university entrance exams was 70% with a standard dev. of 9.6. In order to be accepted into university, a student had to achieve a mark of 60% or better. If 800 students wrote the exam, approximately how many were accepted?
480
560
680
760
z-score = 800 - 60??/9.6
What am I doing wrong?
2. You are using your z formula wrongly.
$z = \dfrac{x - \mu}{\sigma}$
What is the value of x, the percentage that you are looking for?
What is the value of mu, the mean/average percentage that was given?
From that, you get the z score and the probability that any student gets above 60%.
That's for 1 student. What is the expectation with 800 students? (Hint: Expectation = np)
3. OK. z= 60-70/9.6 = -1.04 the probability is 0.1492
800 x 0.1492 = 119.36
"In mathematics, you don't understand things. You just get used to them." -- Johann von Neumann
4. Originally Posted by terminator
OK. z= 60-70/9.6 = -1.04 the probability is 0.1492 Mr F says: This probability is clearly wrong. If the mean is 70, then the probability of scoring more than 60 is clearly going to be greater than 0.5 .... You have made a simple mistake. Go back and check.
800 x 0.1492 = 119.36
"In mathematics, you don't understand things. You just get used to them." -- Johann von Neumann
..
5. The probability that you got is the probability of a student getting less than 60%, that is, the probability that they were not accepted. To help you, always make a sketch of the normal distribution and shade the area that you are required to find. Your table gives values to the left of the z score, while you are looking for the probability to the right of the z score.
Try again.
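Putting the hints together, a short Python sketch (mine, not from the thread; it assumes SciPy is available):

```python
from scipy.stats import norm

mu, sigma = 70, 9.6
z = (60 - mu) / sigma              # about -1.04
p_accept = 1 - norm.cdf(z)         # area to the RIGHT of z, i.e. P(mark >= 60)
print(round(z, 2), round(p_accept, 4))   # -1.04  0.8512
print(round(800 * p_accept))             # about 681, i.e. the "680" answer choice
```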
http://www.physicsforums.com/showthread.php?p=4248073
Physics Forums
## Conditional Probability
Okay so I have a complex setup that I hope I can convey.
I have 9 sites to which X can bind. 6 out of the 9 sites are active and 3 out of the 9 sites are inactive. I need 3 of the active sites to be bound to get the response I am looking for - which we will call EMAX.
So when I add a single X - the chance of it binding to an active site is 6/9 the chance of it binding to an inactive site is 3/9.
My probability knowledge is shaky - bear with me.
Assume that the binding is irreversible. So how many X do I need to add to be sure I have activated 3 active sites. Or more precisely, how many X do I need to add to get a >95% chance that 3 active sites are bound.
Then I want to go more complicated. Say I add 3 Y - which inactivates the sites. The chances are that 2 active sites will be inactivated and 1 inactive site will still be inactive with Y bound.
Now under these new conditions - how much X do I need to add to be sure 3 remaining active sites are occupied?
So I know it will be a probability - so I guess lets say that how much X do I need to add to have a greater than 95% chance that 3 active sites are now bound with X to get EMAX
PS - this ain't homework! On an intrinsic level it's clear to me that in the new condition more X has to be added to ensure that 3 active sites are bound (I hope my intrinsic thoughts are correct!) - but I want to be able to put a number on it. FYI - this is a real-world problem I am trying to figure out. The problem comes from trying to explain why the potency of a drug is affected initially (but not the efficacy) when you add a small concentration of antagonist to a population of receptors that have spare receptors included. Then as you increase antagonist concentration you finally see a fall in EMAX - as there are not enough active sites left for agonist to work at. My probability maths does not extend beyond coin flipping and dice rolling - hence I hope any explanations are at a level I can appreciate!!
For probability 1, it's fairly easy to see that you need at least 6 X. After all, the worst case scenario is that you bind all the inactive sites first, and then the next 3 X that you place are active. For the rest, this is precisely the model that I (and probably a lot of other people too) use when thinking about hypergeometric probabilities. Suppose that you place n X on arbitrary sites. You can do this in $\binom{9}{n}$ ways (the number of ways you can choose n sites to bind to from the 9 available). The probability that k are bound to any of the 3 inactive sites and n - k to active sites is then $$\frac{\binom{3}{k} \binom{6}{n - k}}{\binom{9}{n}}$$
Okay - there are stupid questions even though we tell everyone there ain't - so just so I am totally clear and making no assumptions - define k.
k is the number (out of the n sites that you are binding to an X) of X that are bound to an inactive site. For example, the probability that, if you place 4 X's, exactly 1 is on an inactive site and 3 are on active sites can be calculated by plugging in n = 4, k = 1. Of course, you are not interested in a single value of k but in all possible values (question back: what are the allowed values?)
well in situation 1 k = 3 i guess, but then in situation 2 when we add the antagonist - it is more difficult - since antagonist can either bind to an inactive receptor and do nothing (for the first antagonist particle there is a 3/9 chance of that), but then there is a possibility that the antagonist binds to an active receptor and inactivates it - and thus the pool of inactive receptors (k) will increase. Then we also have to think about the fact that I have 3 sequential antagonists binding and so that then changes the k pool too? Am i making sense?
I didn't really get that far yet, I thought we'd solve the simpler problem first. If the binding of X's and Y's is independent you can find the probability that not 3 but 3 + d sites are deactivated and for every value of d you can find the probability that the X's will activate 3 sites (where the case d = 0 is the simpler one that I was looking at).
I should add, though, that your worst-case-scenario take on the problem will let me explain it beautifully. As you say, in instance 1 you need 6 agonist - 3 to use up the inactive sites, and the next 3 are guaranteed to fill active sites. Extending that into the second scenario: when I first add 3 antagonist particles, the likeliest result is 2 active receptors bound plus 1 inactive receptor bound (the probability of all 3 inactive being bound is 1/84, of all 3 active being bound is 20/84; I'm not sure how to discriminate 2 active/1 inactive from 2 inactive/1 active, but I think it is 45/84 that we have 2 active and 1 inactive, and 18/84 that we have 2 inactive and 1 active) - thus the total pool of inactive receptors is now 5. Therefore 5+3 agonist is required, worst case, to get the same response. That doesn't actually calculate the probabilities as I initially thought I would have to do - but it explains far more simply why we need more agonist in the presence of antagonist to get the same response as when no antagonist is present.
And yes - the binding of X and Y is independent, and each event does not change the binding of the other. So a single receptor can have both an X and a Y bound - but it would not be active.
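As a sanity check (mine, using the hypergeometric formula from earlier in the thread, with n = 3 particles placed on the 3 + 6 sites and $\binom{9}{3} = 84$):
$$P(\text{3 inactive}) = \frac{\binom{3}{3}}{84} = \frac{1}{84}, \quad P(\text{2 inactive, 1 active}) = \frac{\binom{3}{2}\binom{6}{1}}{84} = \frac{18}{84}, \quad P(\text{1 inactive, 2 active}) = \frac{\binom{3}{1}\binom{6}{2}}{84} = \frac{45}{84}, \quad P(\text{3 active}) = \frac{\binom{6}{3}}{84} = \frac{20}{84},$$
and these four probabilities sum to 84/84 = 1, so the figures quoted above check out.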
http://mathhelpforum.com/pre-calculus/104732-formula-arithmetic-sequence.html
1. ## Formula for Arithmetic sequence
Find the formula for a_n for the arithmetic sequence
a_3 = 94, a_6 = 85.
OK, I know how to solve a similar problem when I am given a_1 and d, but I don't have either a_1 or d, so how can I find them to plug into a_n = dn + c with c = a_1 - d? Please help me understand this. Thank you.
2. Originally Posted by flexus
Find the formula for a_n for the arithmetic sequence
a_3 = 94, a_6 = 85.
OK, I know how to solve a similar problem when I am given a_1 and d, but I don't have either a_1 or d, so how can I find them to plug into a_n = dn + c with c = a_1 - d? Please help me understand this. Thank you.
Hi,
a + 2d = 94 --- (1)
a + 5d = 85 --- (2)
Solve for a and d, then plug into the general equation of the AP.
3. Originally Posted by mathaddict
HI
a+2d=94 --- 1
a+5d=85 --- 2
Solve for a and d then plug into the general equation of Ap
I don't understand where you got that from. Could you explain to me what this all means? I'm sorry, I am just a little behind on all this stuff. I'm looking for a_1 and d.
4. Originally Posted by flexus
I don't understand where you got that from. Could you explain to me what this all means? I'm sorry, I am just a little behind on all this stuff. I'm looking for a_1 and d.
Well, look at the general form for an AP:
$T_n=a+(n-1)d$
So $T_3=a+(3-1)d = a+2d$
and $T_6=a+(6-1)d = a+5d$.
5. Originally Posted by mathaddict
Hi,
a + 2d = 94 --- (1)
a + 5d = 85 --- (2)
Solve for a and d, then plug into the general equation of the AP.
I don't know what you did, but it made no sense to me. I actually figured it out. What I did was:
a_6 and a_3 are 3 terms apart
a_6 = a_3 + 3d --> 85 = 94 + 3d --> d = -3
a_n = a_1 + (n-1)d
94 = a_1 + (3-1)(-3) --> a_1 = 100
c = a_1 - d --> c = 100 - (-3) --> c = 103
a_n = -3n + 103 = answer!!!!!!!
Thank you very much for your time and help, mathaddict! It was greatly appreciated.
6. Originally Posted by flexus
I don't know what you did, but it made no sense to me. I actually figured it out. What I did was:
a_6 and a_3 are 3 terms apart
a_6 = a_3 + 3d --> 85 = 94 + 3d --> d = -3
a_n = a_1 + (n-1)d
94 = a_1 + (3-1)(-3) --> a_1 = 100
c = a_1 - d --> c = 100 - (-3) --> c = 103
a_n = -3n + 103 = answer!!!!!!!
OK, your method works fine too - then stick to your method to make things easier.
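For completeness, solving the two equations from post 2 directly (a quick check of my own, not part of the original thread) gives the same result:
$(a + 5d) - (a + 2d) = 85 - 94 \Rightarrow 3d = -9 \Rightarrow d = -3, \qquad a = 94 - 2d = 100,$
$a_n = a + (n-1)d = 100 - 3(n-1) = 103 - 3n,$
which matches a_n = -3n + 103 above (and indeed a_3 = 103 - 9 = 94 and a_6 = 103 - 18 = 85).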
http://www.physicsforums.com/showthread.php?s=a4b077c13297be9df85c5d8f098a1069&p=4209618
## Equation of state in gravity vs microphysics
Hi,
I have a very simple question. Consider a free scalar field in the realm of GR. Then its stress-energy tensor, in a Robertson-Walker universe, is that of a perfect fluid with pressure = density, hence an equation of state: w = p/rho = 1.
However, this scalar model is an archetype of massless (spin 0) particles. Now if you consider massless particles in a box, and do your thermo/statistical physics homework on it, you find that, as for any relativistic (massless) particles, the pressure must be 1/3 of the density, just as is the case for photons. Hence here we find w = p/rho = 1/3 for such a scalar field in a box.
Where's the discrepancy coming from?
Thanks for comments!
Possibly useful:
http://www.physicsforums.com/showthread.php?t=134682
http://faculty.washington.edu/mrdepi...rk_Energy2.pdf
Quote by Jip I have a very simple question. Consider a free scalar field in the realm of GR. Then its stress-energy tensor, in a Robertson-Walker universe, is that of a perfect fluid with pressure = density, hence an equation of state: w = p/rho = 1
I'm not sure this is right, in general. The cosmological constant can be considered as a free scalar field, and it has w=-1, not 1. (A constant is a solution of the m=0 Klein–Gordon equation.) In general (see the pdf link above), a scalar field that's not spatially varying has
$\rho = (1/2)\dot{\phi}^2+V(\phi)$
$p = (1/2)\dot{\phi}^2-V(\phi)$
In the case of a cosmological constant, the time derivatives are zero. In general, you could get any $-1 \le w \le 1$.
As a side issue, why do you say, "its stress-energy tensor, in a Robertson-Walker universe" -- I don't think it matters what the cosmology is, does it? Or is this because you're invoking the assumption that the field doesn't vary spatially?
Quote by Jip However, this scalar model is an archetype of massless (spin 0) particles. Now if you consider massless particles in a box, and do your thermo/statistical physics homework on it, you find that, as for any relativistic (massless) particles, the pressure must be 1/3 of the density, just as is the case for photons. Hence here we find w = p/rho = 1/3 for such a scalar field in a box.
So this may or may not be headed down the right road to solve your problem, but I think we can get an issue with photons that's the same as the one I referred to above for a scalar field. You can have electromagnetic fields for which $w\ne 1/3$. For example, a uniform, static electric field E in the x direction has $T_{00}=(1/2)E^2$, $T_{11}=-(1/2)E^2$, which isn't consistent with w=1/3. [Edit: fixed a mistake here]
So for example it seems to me that a cosmological constant is able to evade our expectation of w=1/3 for massless particles in a box because it's not a state in thermal equilibrium. Maybe the thermal equilibrium state of a massless scalar field does have w=1/3.
Well, some comments on your post.
1. I assume a cosmological background in order to set the spatial derivatives to zero, indeed.
2. Then you get the two equations you gave for rho and p of the scalar field. By the way, you just proved my claim (it is well known): take V(phi)=0, and see how your formulas give indeed p=rho, hence w=1.
3. Yes, probably the answer lies in the thermal equilibrium assumption, but I don't know precisely how!
4. Can you elaborate on the electric field example you give? How do you compute the pressure here, and find p=-rho?
5. Indeed my question is more general: what is the link (for any kind of field) between the w computed in GR and the w computed from "microphysics", e.g. partition function Z => p, U => p, rho = U/V => w?
I'll look at the links you provided, thanks.
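To make the point in item 2 concrete (a quick check using the formulas quoted above for a spatially homogeneous field, not part of the original thread):
$V(\phi) = 0: \quad \rho = p = \tfrac{1}{2}\dot{\phi}^2 \;\Rightarrow\; w = 1, \qquad \dot{\phi} = 0: \quad \rho = V(\phi), \; p = -V(\phi) \;\Rightarrow\; w = -1,$
so the kinetic-dominated and potential-dominated limits sit at the two ends of the range $-1 \le w \le 1$ mentioned earlier.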
OK, I just looked quickly at the references you gave me. I want to stress that my question is not about the cosmological constant or Dark Energy modeling, etc., but about the thermodynamical (or statistical physics) interpretation of the pressure and density as defined in GR through the perfect-fluid stress-energy tensor derived from the action.
I think that we all agree (?) that this way of defining pressure and density is a priori very different from the definition coming from the partition function and so on. And I gave one particular example where it could lead to different results for the equation of state.
Now maybe I'm just completely wrong here. If someone can help :D
Quote by Jip 2. Then you get the two equations you gave for rho and p of the scalar field. By the way, you just proved my claim (it is well known): take V(phi)=0, and see how your formulas give indeed p=rho, hence w=1.
It doesn't prove your claim, it disproves it. It gives a counterexample to your claim that w=1. Giving any number of examples doesn't prove a claim. Giving one counterexample disproves it.
Quote by Jip 4. Can you elaborate on the electric field example you give? How do you compute the pressure here, and find p=-rho?
Sorry, I messed that up. In that example we have $T=diag(E^2/2,-E^2/2,+E^2/2,+E^2/2)$, where the field is in the x direction. See http://en.wikipedia.org/wiki/Electro...3energy_tensor . But the point is that it doesn't have the form $(\rho,p,p,p)$ with w=1/3.
Quote by Jip 3. Yes, probably the answer lies in the thermal equilibrium assumption, but I don't know precisely how!
The factor of 1/3 is thermodynamic in origin. The logic is that we assume thermal equilibrium, this leads to equipartition, and therefore the energy is partitioned equally in the 3 spatial degrees of freedom.
A uniform electric field doesn't represent a thermal-equilibrium state of the electromagnetic field, so the above logic fails, and we don't get a stress-energy tensor of the form $(\rho,p,p,p)$ with w=1/3. Similarly, a uniform scalar field isn't a state of thermal equilibrium.
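As a quick check of my own (assuming the $(-,+,+,+)$ signature, which is not spelled out in the thread): the uniform-field tensor quoted above is traceless but anisotropic, and it is tracelessness together with isotropy that forces w = 1/3:
$\eta_{\mu\nu}T^{\mu\nu} = -\tfrac{E^2}{2} - \tfrac{E^2}{2} + \tfrac{E^2}{2} + \tfrac{E^2}{2} = 0, \qquad T^{\mu\nu} = \mathrm{diag}(\rho,p,p,p): \; -\rho + 3p = 0 \;\Rightarrow\; w = \tfrac{1}{3}.$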
Quote by Jip 5. Indeed my question is more general: what is the link (for any kind of field) between the w computed in GR and the w computed from "microphysics", e.g. partition function Z => p, U => p, rho = U/V => w?
They're the same. GR is locally the same as SR.
Quote by Jip OK, I just looked quickly at the references you gave me. I want to stress that my question is not about the cosmological constant or Dark Energy modeling, etc., but about the thermodynamical (or statistical physics) interpretation of the pressure and density as defined in GR through the perfect-fluid stress-energy tensor derived from the action.
The point is that the cosmological constant is a counterexample to your claim that w has certain values for a scalar field.
Quote by Jip I think that we all agree (?) that this way of defining pressure and density is a priori very different from the definition coming from the partition function and so on.
No, I don't agree. There aren't two definitions. There is a definition, and then there is a particular thermodynamic approximation for calculating the thing that's been defined, under the assumption of thermal equilibrium.
Carroll seems to get the 1/3 in Eq 8.27 of http://ned.ipac.caltech.edu/level5/M.../Carroll8.html . Perhaps a matter of definitions?
Quote by atyy Carroll seems to get the 1/3 in Eq 8.27 of http://ned.ipac.caltech.edu/level5/M.../Carroll8.html . Perhaps a matter of definitions?
He gets the 1/3 for electromagnetic fields under the assumption of a perfect fluid, which means a fluid that's isotropic in its rest frame. There is no frame in which a uniform electric field is isotropic. More fundamentally, I don't think it's valid to talk about a fluid and an equation of state unless you have some kind of system that is in thermodynamic equilibrium.
A real scalar field has two degrees of freedom, $\phi$ and $\dot{\phi}$. If these are in equilibrium, then I think the expectation value of the two corresponding energies should be the same, $\langle\dot{\phi}^2/2\rangle=\langle V(\phi)\rangle$. Presumably this leads to a traceless stress-energy tensor if the field is massless? If so, then zero trace along with isotropy implies w=1/3. But I think what's going on here is that the interesting examples, such as inflation or a cosmological constant, are not in equilibrium.
Quote by bcrowell He gets the 1/3 for electromagnetic fields under the assumption of a perfect fluid, which means a fluid that's isotropic in its rest frame. There is no frame in which a uniform electric field is isotropic. More fundamentally, I don't think it's valid to talk about a fluid and an equation of state unless you have some kind of system that is in thermodynamic equilibrium.
Actually, he seems to define a perfect fluid with arbitrary w (Eq 8.21). He sets w=1/3 in Eq 8.27, which he says holds for radiation by comparing Eq 8.25, 8.26 and 8.15, 8.19. For other perfect fluids, he gets other values of w (Eq 8.31,8.48-8.51). He uses w=1/3 for radiation-filled open, flat and closed universes in Eq 8.52-8.54. I think the idea is that although pressure is hard to define in a general non-equilibrium case, as long as the expansion is "slow" then the quantities make approximate physical sense. After all, we can apply thermodynamics in everyday life where things are in "equilibrium" over our finite time of observation, even though we know nothing is in true equilibrium over an infinite time of observation (since the universe is expanding).
Quote by atyy Actually, he seems to define a perfect fluid with arbitrary w (Eq 8.21). He sets w=1/3 in Eq 8.27, which he says holds for radiation by comparing Eq 8.25, 8.26 and 8.15, 8.19. For other perfect fluids, he gets other values of w (Eq 8.31,8.48-8.51).
You say "actually," but I don't see any point of disagreement ... ?
Quote by bcrowell You say "actually," but I don't see any point of disagreement ... ?
Probably half talking to my self. So anyway, it's w=1/3 for radiation, even in an FRW solution. And it is meaningful to talk about pressure, even though we're strictly in a non-equilibrium situation, because the expansion is slow. And it's w=-1 for the cosmological constant treated as a perfect fluid.
?
Quote by atyy Probably half talking to my self. So anyway, it's w=1/3 for radiation, even in an FRW solution. And it is meaningful to talk about pressure, even though we're strictly in a non-equilibrium situation, because the expansion is slow. And it's w=-1 for the cosmological constant treated as a perfect fluid. ?
I don't think the issue of equilibrium is affected in any important way by the fact that the universe is expanding. If you want to define an equation of state, all that matters is that locally, you can grab a sample of the matter and measure its properties. Similarly, I can define the temperature of the air in Los Angeles without worrying about the fact that it's not equilibrated with the air in Chicago. GR and cosmology aren't relevant here at all. The issue here is the properties of matter that are then going to be *inputs* into a cosmological model.
So I think it's basically as bcrowell says that the massless scalar field case w=1 is not in thermal equilibrium. One can get w=1/3 FRW solutions, which would be consistent with radiation in thermal equilibrium.
http://math.stackexchange.com/questions/tagged/cardinals+ring-theory
# Tagged Questions
### Is there a simple example of a ring that satisfies the DCC on two-sided ideals, but doesn't satisfy the ACC on two-sided ideals?
It follows from the Hopkins–Levitzki theorem that if a ring satisfies the DCC on left ideals, then it also satisfies the ACC on left ideals. I've been trying to find a counterexample to the following ...
### Is there an upper bound to the number of rings that can be obtained from a semigroup with zero by defining an additive operation?
Let $\mathscr S$ be the class of all semigroups with zero. For $(S,\times,0)\in\mathscr S,$ I want to count additive operations $+$ on $S$ such that $(S,+,\times,0)$ is a ring (possibly without ...
### Ideals in the ring of endomorphisms of a vector space of uncountably infinite dimension.
I know that if $V$ is a vector space over a field $k,$ then $\operatorname{End}(V)$ has no non-trivial ideals if $\dim V<\infty;$ $\operatorname{End}(V)$ has exactly one non-trivial ideal if ...
http://mathoverflow.net/revisions/60737/list
Revision 2: corrected errors
Let $M$ be a manifold and $g$ a metric on $M$. Let $TM$ denote the tangent bundle of $M$, and denote points in $TM$ by $(x,v)$ where $v \in T_xM$.
The Levi-Civita connection of $(M,g)$ induces a splitting of the double tangent bundle $TTM = V \oplus H$, where $V$ is the vertical distribution, defined by $V_{(x,v)} = T_{(x,v)}T_xM$ (i.e. the tangent space to the fibre), and $H_{(x,v)}$ is the horizontal distribution, which is determined by the connection.
Suppose $A:TM \rightarrow TM$ is a map such that $A(x,v)\in T_xM$ (so the map $A(x,\cdot)$ is a map from $T_xM$ to itself for all $x \in M$).
How does one use the splitting described above to define "partial derivatives" $\nabla_xA$ and $\nabla_vA$, which again should be maps with
$(\nabla_xA)(x,v):T_xM \rightarrow T_xM$,
and similarly for
$(\nabla_vA)(x,v):T_xM \rightarrow T_xM$.
These should have the property that if $\gamma(t)$ is a curve on $M$ and $u(t)$ is a vector field along $\gamma$ (so $u(t) \in T_{\gamma(t)}M$ for all $t$), and $\nabla_t$ denotes the covariant derivative along $\gamma$, then
$\nabla_t(A(\gamma,u)) = (\nabla_xA)(\gamma,u) \cdot \dot{\gamma} + (\nabla_vA)(\gamma,u) \cdot \nabla_t u$
(here on the LHS, $A(\gamma,u)$ is itself a vector field along $\gamma$, so the notation $\nabla_t(A(\gamma,u))$ is meaningful).
The expression above "makes sense" intuitively, but I can't get the formalism to work properly.
Revision 1: original
Splitting of the double tangent bundle into vertical and horizontal parts, and defining partial derivatives
Let $M$ be a manifold and $g$ a metric on $M$. Let $TM$ denote the tangent bundle of $M$, and denote points in $TM$ by $(x,v)$ where $v \in T_xM$.
The Levi-Civita connection of $(M,g)$ induces a splitting of the double tangent bundle $TTM = V \oplus H$, where $V$ is the vertical distribution, defined by $V_{(x,v)} = T_{(x,v)}T_xM$ (i.e. the tangent space to the fibre), and $H_{(x,v)}$ is the horizontal distribution, which is determined by the connection.
Suppose $A:TM \rightarrow TM$ is a map such that $A(x,v)\in T_xM$ (so the map $A(x,\cdot)$ is a map from $T_xM$ to itself for all $x \in M$).
How does one use the splitting described above to define "partial derivatives" $\nabla_xA$ and $\nabla_vA$, which again should be maps with
$(\nabla_xA)(x,v)\in T_xM$
for all $x \in M$, and similarly for $\nabla_vA$?
These should have the property that if $\gamma(t)$ is a curve on $M$ and $u(t)$ is a vector field along $\gamma$ (so $u(t) \in T_{\gamma(t)}M$ for all $t$), and $\nabla_t$ denotes the covariant derivative along $\gamma$, then
$\nabla_t(A(\gamma,u)) = (\nabla_xA)(\gamma,u)\dot{\gamma} + (\nabla_vA)(\gamma,u)\dot{u}$
(here on the LHS, $A(\gamma,u)$ is itself a vector field along $\gamma$, so the notation $\nabla_t(A(\gamma,u))$ is meaningful).
The expression above "makes sense" intuitively, but I can't get the formalism to work properly.
http://mathhelpforum.com/discrete-math/137016-combinatory-optimization-problem.html
1. ## Combinatorial optimization problem
Dear experts,
for a practical use in a warehouse I'm searching for a way to define this problem in an algorithm:
We need to pick an article with a given amount $x$ where $x \in \mathbb N^+$.
The article is distributed over several boxes in the warehouse. Each box contains the article in a different amount. So we have a limited number of boxes, with box $n$ containing a quantity $y_n$ where $y_n \in \mathbb N^+$. We want to find the combination of boxes where the sum of the quantities of all selected boxes gets as close as possible to $x$ (but doesn't exceed $x$).
$\begin{array}{|c||c|c|c|c|c|c|c|} \hline n & 1 & 2 & 3 & 4 & 5 & 6 & 7\\ \hline y_n & 15 & 17 & 8 & 20 & 19 & 20 & 18\\ \hline \end{array}$
For example, if $x=53$ then the ideal combination would be $y_1+y_4+y_7$ (15+20+18=53).
If $x=30$ the best combination would be $y_3+y_4$ (8+20=28).
It looks a little bit like a 0-1 knapsack problem to me, but without the maximizing of a separate value. Performance would certainly have to be considered. Can you help me find an algorithm (I guess this problem is NP-hard)?
Regards,
Gunter
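One possible approach (an editorial sketch, not from the thread): treat this as a subset-sum problem, i.e. a 0-1 knapsack in which each box's "value" equals its quantity, and run the standard dynamic program, which is pseudo-polynomial in $x$. The function below is my own illustration; its name and structure are assumptions, not an established library API.

```python
from typing import List, Optional, Tuple

def best_boxes(quantities: List[int], x: int) -> Tuple[int, List[int]]:
    """Subset-sum DP: return the largest total <= x reachable by picking
    each box at most once, plus the (0-based) indices of the boxes used.
    Time O(len(quantities) * x), space O(x)."""
    # parent[s] = (previous sum, box index) used when sum s was first reached
    parent: List[Optional[Tuple[int, int]]] = [None] * (x + 1)
    reachable = [False] * (x + 1)
    reachable[0] = True
    for i, y in enumerate(quantities):
        # go through the sums downward so each box is used at most once (0-1 choice)
        for s in range(x, y - 1, -1):
            if reachable[s - y] and not reachable[s]:
                reachable[s] = True
                parent[s] = (s - y, i)
    best = max(s for s in range(x + 1) if reachable[s])
    # walk the parent pointers back to recover which boxes were picked
    chosen, s = [], best
    while s > 0 and parent[s] is not None:
        prev, i = parent[s]
        chosen.append(i)
        s = prev
    return best, sorted(chosen)

if __name__ == "__main__":
    boxes = [15, 17, 8, 20, 19, 20, 18]
    print(best_boxes(boxes, 53))  # total 53, e.g. 15 + 20 + 18
    print(best_boxes(boxes, 30))  # best reachable total is 28 (8 + 20)
```

With a few dozen boxes and target quantities in the thousands this runs essentially instantly; the general problem is indeed NP-hard, but only in terms of the number of bits of $x$, not its magnitude.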
http://mathoverflow.net/questions/78939?sort=votes
## Two questions on rational homotopy theory
I'm trying to read Quillen's paper "Rational homotopy theory" and am a little confused about the construction. As I understand, he associates a dg-Lie algebra over $\mathbb{Q}$ to every 1-reduced simplicial set via a somewhat long series of Quillen equivalences. But the construction that I had heard before makes spaces (rationally) Quillen equivalent to commutative dgas over $\mathbb{Q}$ via the polynomial de Rham functor. Is there a simple reason why dg-Lie algebras and commutative DGAs should be Quillen equivalent? I believe this should be Koszul duality, but I don't really understand that right now. If someone has a (preferably lowbrow) explanation for this phenomenon (even in this specific case), I'd be interested.
In addition, I would be very interested in a "high concept" explanation of why Quillen's construction works. It seems that the crux of the proof is the Quillen equivalence between simplicial groups (localized at $\mathbb{Q}$, I guess) and complete simplicial Hopf algebras. I've been struggling with why this proof should work, since I was not familiar with the work of Curtis on lower central series filtrations referred to there.
At the far end of Quillen's long series of equivalences, beyond the DG Lie algebras, you will find DG commutative coalgebras, yes? That's the Koszul duality step. (And from these DGCs to DGAs it's basically "vector space dual of a coalgebra is an algebra"; but without finiteness conditions not every algebra is the dual of a coalgebra, so DGAs are better than DGCs for general simply connected spaces.) – Tom Goodwillie Oct 24 2011 at 0:59
Ah, I see; thanks. I guess I hadn't paid sufficient attention to the DGC part of the equivalences. – Akhil Mathew Oct 24 2011 at 1:35
"but without finiteness conditions not every algebra is the dual of a coalgebra, so DGAs are better than DGCs for general simply connected spaces." To the contrary, I think that DGCs do a much better job of capturing "spaces" as defined by, say, simplicial sets, than do DGAs. Here is one reason: DGCs comprise a (infty-)presentable category, whereas the opposite category of DGAs is not presentable. More generally, my understanding (based entirely on J. Francis's class last spring --- I haven't read the papers) is that Spaces=DGAs is only true with some "smallness" conditions. – Theo Johnson-Freyd Oct 24 2011 at 3:14
I took a course where the lecturer said that the Quillen equivalence was between the rational homotopy category of simplicial sets and the opposite category of DGAs, but I didn't understand as much as I should have, and I don't know a good source for the model category stuff here. Your point about presentability is intriguing, though. – Akhil Mathew Oct 24 2011 at 3:36
One thing that might be helpful is the difference between Sullivan models and Quillen models for rational homotopy types. The Sullivan model is a DGA that is pretty easy to construct given the rational cohomology of the space. The Quillen model is a bit more difficult to construct, but I think it gets closer to the homotopy of the space. The construction you heard before about the polynomial de Rham functor is that of Sullivan. – Sean Tilson Oct 24 2011 at 4:25
## 1 Answer
I'm not sure if this will still be helpful, but here is my understanding of the Quillen model. I'm a little more comfortable with the Sullivan approach, which replaces a space $X$ with a commutative DGA over $\mathbb{Q}$. So my understanding of the Quillen model might be a bit off (if so, someone please correct me!). Also, everything correct that I write below, I learned from John Francis. (Probably in the same lecture that Theo mentioned in his comment above.) Oh, but any mistakes are probably not his fault---more likely an error in my understanding.
Before we begin: Quillen v Sullivan.
As others have mentioned, Quillen gets you a DG Lie algebra, where as the Sullivan model will get you a commutative DG algebra. As you write, the passage from one to the other is (almost) Koszul duality. Really, a Lie algebra will get you a co-commutative coalgebra by Koszul duality, and a commutative algebra will get you a coLie algebra. You can bridge the world of coalgebras and algebras when you have some finiteness conditions--for instance, if the rational homotopy groups are finite-dimensional in each degree. Then I think you can simply take linear duals to get from coalgebras to algebras.
A way to find Lie algebras.
So where do (DG) Lie algebras come from? First let me point out that there is a natural place that one finds Lie algebras, before knowing about the Quillen model: Lie algebras arise as the tangent space (at the identity) of a Lie group $G$.
Now, if you're an algebraist, you might claim another origin of Lie algebras: If you have any kind of Hopf algebra, you can look at the primitives of the Hopf algebra. These always form a Lie algebra.
(Recall that a Hopf algebra has a coproduct $\Delta: H \to H \otimes H$, and a primitive of $H$ is defined to be an element $x$ such that $\Delta(x) = 1 \otimes x + x \otimes 1.$)
One link between the algebraist's fountain of Lie algebras, and the geometer's, is that many Hopf algebras arise as functions on finite groups. If you are well-versed in algebra, one natural place to find Lie algebras, then, would be to take a finite group, take functions on that group, then take primitives.
A cooler link arises when a geometer looks at distributions near the identity of $G$ (which are dual to 'functions on $G$') rather than functions themselves. This isn't so obviously the right thing to look at in the finite groups example, but if you believe that functions on a Lie group $G$ are like de Rham forms on $G$, then you'd believe that something like 'the duals to functions on $G$' (which are closer to vector fields) would somehow safeguard the Lie algebra structure. The point being, you should expect to find Lie structures to arise from things that look like 'duals to functions on a group'. So one should take 'distributions' to be the Hopf algebra in question, and look at its primitives to find the Lie algebra of 'vector fields.'
A (fantastical) summary of the Quillen model.
Let us assume for a moment that your space $X$ happens to equal $BG$ for some Lie group, and you want to make a Lie algebra out of it. Then, by the above, what you could do is take $\Omega X = \Omega B G = G$, then look at the primitives of the Hopf algebra known as 'distributions on $\Omega X$'.
Now, instead of considering just Lie groups, let's believe in a fantasy world (later made reality) in which all the heuristics I outlined for a Lie group $G$ will also work for a based loop space $\Omega Y$. A loop space is 'like a group' because it has a space of multiplications, all invertible (up to homotopy). Moreover, any space $X$ is the $B$ (classifying space) of a loop space--namely, $X \cong B \Omega X$. So this will give us a way to associate a Lie algebra to any space, if you believe in the fantasy.
Blindly following the analogy, 'functions on $\Omega X$' is like cochains on $\Omega X$, and the dual to this (i.e., distributions) is now chains on $\Omega X$. That is, $C_\bullet \Omega X$ should have the structure of what looks like a Hopf algebra. And its primitives should be the Lie algebra you're looking for.
What Quillen Does.
So if that's the story, what else is there? Of course, there is the fantasy, which I have to explain. Loop spaces are most definitely not Lie groups. Their products have $A_\infty$ structure, and correspondingly, we should be talking about things like homotopy Hopf algebras, not Hopf algebras on the nose. What Quillen does is not to take care of all the coherence issues, but to change the models of the objects he's working with.
For instance, one can get an actual simplicial group out of a space $X$ by Kan's construction $G$. This is a model for the loop space $\Omega X$, and I think this is what Quillen looks at instead of looking only at $\Omega X$, which is too flimsy. From this, taking group algebras over $\mathbb{Q}$ and completing (these are the simplicial chains, i.e., distributions), he obtains completed simplicial Hopf algebras. Again, instead of trying to make my fantasy precise in a world where one has to deal with higher algebraic structures (homotopy up to homotopy, et cetera) he uses this nice simplicial model. To complete the story, he takes level-wise primitives, obtaining DG Lie algebras.
Edit: This is from Tom's comment below. To recover a $k$-connected group or a $k$-connected Lie algebra from the associated $k$-connected complete Hopf algebra, you need $k \geq 0$. And $k$-connected groups correspond to $k+1$-connected spaces. This is why you need simply connected spaces in the equivalence.
I'm not sure I gave any 'high concept' as to 'why Quillen's construction works', but this is at least a road map I can remember.
This is helpful, and maybe explains why DG-Lie algebras or coalgebras are more natural than commutative dgas. Thanks! – Akhil Mathew Oct 27 2011 at 19:29
$(k+1)$-connected spaces or simplicial sets correspond to $k$-connected groups even if $k$ is just $-1$. But to recover a $k$-connected group or a $k$-connected Lie algebra from the associated $k$-connected complete Hopf algebra you need $k\ge 0$. – Tom Goodwillie Oct 27 2011 at 21:54
Thanks for the correction! I've updated the answer accordingly. – Hiro Lee Tanaka Oct 27 2011 at 22:11
http://rip94550.wordpress.com/2010/01/25/calculus-organizing-techniques-of-integration/
# Rip’s Applied Mathematics Blog
## Calculus: Organizing techniques of integration
January 25, 2010 — rip
## introduction & overview
The purpose of this notebook is to organize the useful techniques of integration which are taught in freshman calculus and then presumed known (ha!) at the beginning of a course in ordinary differential equations.
Heads up: I’m going to mention hyperbolic trig functions, but until you meet them, they are not relevant and you should ignore them. I’m just trying to be thorough, but I fear that I might be confusing. So I’m going to mention them in the details, but omit them from the summary.
First off, there are three categories of integrals:
1. known
2. special techniques
3. general techniques
Here it is in a nutshell: If an integral is on the “known” list, you’re wasting time trying to use special or general techniques. If an integral can be done using special techniques, you’re wasting time using general techniques.
So the primary general guideline is: try the three categories in order.
This is usually the same order in which the methods are taught. What happens, however, is that because the general methods are the freshest in memory when students take the last test, all but the brightest inevitably — and unsuccessfully — try the general methods first, instead of the older, specific, methods. And all but the brightest fail to recognize known integrals, and even they miss them under stress.
There’s no reason for anyone who has mastered the individual techniques to lose out on organizing them into a coherent whole; and no reason to just hope you will somehow recognize known integrals before you’ve had much experience.
The general methods are not universal can-openers. They are the last resort, not the first.
It is no fun to get back a test and discover that you spent a whole lot of time trying to do integration by parts when it was a known integral! Or that you tried to do trig substitution when it was a known integral. Or that you tried general substitution instead of partial fractions, or whatever.
Furthermore, most of the special techniques are used to turn the given integral into a known integral. You have to know when you’ve solved the problem in principle. If you don’t recognize an integral, at least check it against the few that are “known”.
I constructed this algorithm when I was a first-year graduate student. I was TAing the sophomore math course, and I had to explain to my students what they needed to know from their freshman year in order to solve first-order differential equations. I started out, of course, with a list of techniques culled from their first-year calculus book, and then discovered that they needed a list of known integrals, because they simply didn’t recognize them.
Some faculty and grad students argued that anyone good enough to understand my organization was too good to need it. I want to believe they are wrong, and that the market for this may be small, but it’s not zero.
This organization (or algorithm or checklist) does assume that you have learned the specific integrals and the special and general techniques. If you’re struggling to get these details right, you’re not ready for the algorithm yet.
This presentation is sketchy about all the details. One, I merely list the techniques. You may have already written out that much for yourself. But I leave it to you to write out the details of each of the special techniques – and there are details!
Two, I explicitly advocate that you try general techniques last, known integrals first, and special techniques in between.
Three, and perhaps most importantly, this provides a crutch for people who do not yet recognize the known integrals by inspection. If you do enough calculus, you will come to recognize the known integrals automatically — no checklist will be necessary. But that takes time — and this checklist will work until practice pays off and provides you with automatic recognition.
Now I’m going to present the final guideline: check your answer by taking the derivative! Almost all of us can differentiate better than we integrate.
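(If you have a computer algebra system handy, this last guideline is easy to automate. Here is a small check of the example worked below, using SymPy; it is my illustration, not something from the original post.)

```python
import sympy as sp

x = sp.symbols('x')

# Claimed antiderivative and the original integrand from the example below.
antiderivative = (1 + x**2)**18 / 36
integrand = x * (1 + x**2)**17

# "Check your answer by taking the derivative": the difference should be 0.
print(sp.simplify(sp.diff(antiderivative, x) - integrand))  # prints 0
```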
## 1. known integrals
Just what do I mean by known integrals? They’re the ones that all math faculty, and most grad students, recognize on sight — but beginners don’t. The challenge is to get these integrals before you have learned to recognize them. But if you don’t get that “ah ha!” of recognition, what can you do?
We introduce a beginning step for beginners. For motivation, let us consider
$\int (1+x^2)^{17}\ x \, dx\$.
which Mathematica could write out for us as…
$1/36+x^2/2+(17 x^4)/4+(68 x^6)/3+85 x^8+238 x^{10}$
$+(1547 x^{12})/3+884 x^{14}+(2431 x^{16})/2+(12155 x^{18})/9$
$+(2431 x^{20})/2+884 x^{22}+(1547 x^{24})/3+238 x^{26}+85 x^{28}$
$+(68 x^{30})/3+(17 x^{32})/4+x^{34}/2+x^{36}/36\$.
In Version 7, I had to force it to write it all out — in Version 5, that mess was the default answer. It’s a lousy way to write the answer! It should be simply…
$\frac{1}{36} \left(x^2+1\right)^{18}+C$
and we can confirm that by differentiating, which gets us:
$x \left(x^2+1\right)^{17}\$.
There are two issues. One, a human being could have done the integral the way Mathematica did in Version 5, but it would take time to expand that polynomial and care to get it right — and it’s an ugly answer. Two, if we recognize that this integral is of the form
$\int u^n \, du\$,
then we know the answer is simply $\frac{u^{n+1}}{n+1}+ C\$. Version 7 of Mathematica was smart enough to know this.
But what if we didn’t recognize it? What then? That, after all, is the problem a beginner faces.
Instead of just hoping to recognize it, we can check any given integral against a very short list:
known integrals (column 1, plus 2 special cases and 6 alternatives)
| column 1 (remember these) | special case / alternative | hyperbolic analogues |
|---|---|---|
| $\int u^n \, du$ | $\int u^{-1} \, du$ | |
| $\int e^u \, du$ | $\int a^u \, du$ | |
| $\int \sin (u) \, du$ | $\int \cos (u) \, du$ | $\int \sinh (u) \, du$, $\int \cosh (u) \, du$ |
| $\int \sec ^2 (u) \, du$ | $\int \csc ^2 (u) \, du$ | $\int \mathrm{sech} ^2 (u) \, du$, $\int \mathrm{csch} ^2 (u) \, du$ |
That’s too big a list. What I actually remember is two things; column 1:
$\int u^n \, du$
$\int e^u \, du$
$\int \sin (u) \, du$
$\int \sec ^2 (u) \, du$
and “there are alternatives and special cases”. I count on the first column to jog my memory if I see one of the additional cases.
Before we start, let me point out that the fourth line is there because it’s worth remembering in general, and it is, frankly, essential when you’re embroiled in integration. And the second column is there to elaborate on the first.
If you have not seen the hyperbolic trig functions (cosh x, sinh x, etc. — trig functions with an “h” appended), then they shouldn’t be on the techniques-of-integration tests. In fact, I don’t generally include them in a face-to-face presentation of this list.
Oh, what is that last integral in column 1?
$\int \sec ^2 (u) \, du = \tan (u) + C\$.
i.e.
$\frac{d}{du} \tan (u) = \sec ^2(u)\$.
In other words, instead of remembering the derivative of the tangent, we remember the integral of the secant squared.
(While I’m sure that the integral of csc squared is ±cotangent, I’d have to work out the sign. In other words, because I know the integral of $\sec ^2 (u)\$, I know I can work out the integral of $\csc ^2 (u)\$ by differentiating the cotangent; it’s just a matter of finding the correct sign. Similarly for the hyperbolic tangent and hyperbolic cotangent: I know their derivatives are ±sech^2 and ±csch^2, and I would quickly work out whatever I needed.)
The additional columns are there because they are sort of redundant. In a fundamental sense, this entire list has 4 entries, not 8 or 12. How can we say we know how to integrate $u^n\$ if we don’t know the special case n = -1? How can we know the integral of the sine without knowing the integral of the cosine? On the other hand, getting the integral of $a^u\$ from the integral of $e^u\$ may not be obvious, but it’s just a special case and we need to know it, or how to work it out. Find the trick in your calculus book. Or look at the post about e.
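For the record (standard results, spelled out here rather than left to the calculus book), the two "work it out yourself" cases mentioned above come down to:
$\int a^u \, du = \int e^{u \ln a} \, du = \frac{a^u}{\ln a} + C \quad (a > 0,\ a \neq 1)$
and
$\frac{d}{du}(-\cot u) = \csc ^2 u, \quad \text{so} \quad \int \csc ^2 (u) \, du = -\cot (u) + C.$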
To say all that another way, the single entry for $\int \sin (u) \, du\$, for example, prompts me to ask: do I even see a sine, or cosine, or hyperbolic sine, or hyperbolic cosine? If not, move on.
Back to our given problem:
$\int (1+x^2)^{17}\ x \, dx\$.
If — not that we know it is, but if it is — any one of the 4 known integrals, then it can only be the first,
$\int u^n \, du\$,
and then the only possibility is n = 17 and u = 1 + x^2. That’s a big “if”, but try it. That’s the point: to try this one possibility before we move on to consider special techniques. We compute that
du = 2 x dx,
and we discover that, because of the given x dx, our integral is, in fact,
$\frac{1}{2} \int u^{17} \, du\$
i.e.
$\frac{1}{2} \frac{u^{18}}{18}+C$
$= \frac{(1+x^2)^{18}}{36} + C\$.
(corrected: I had written 2x instead of x^2)
Thus, the guideline for known integrals is: instead of hoping for a flash of insight, we do a little pattern matching against a very small list, and see if we have a match.
And, I emphasize again, we do that before we try special techniques or general techniques.
## 2. special techniques
This gets us into some gory detail, but just as we can use pattern matching to check a given problem against one possible known integral, the special techniques are go / no-go. (And with pretty much only one exception, if one technique is a go, the other three are no-go.)
Let’s summarize them.
special techniques:
• partial fractions
• powers of trig functions
• trig substitution
• complete the square
I’m not going to discuss these in detail: see your calculus book. We can integrate any rational function of (any ratio of) polynomials — we just have to write the ratio in partial fractions, which requires that we factor the denominator. But partial fractions works only for the ratio of two polynomials. Period. Until we are looking at a rational function of polynomials, partial fractions is a no-go. Cross it off.
Powers of trig functions works for trig functions (or hyperbolic trig functions). Period. There are a few different cases, and one should be able to handle all of them. Until we are looking at powers of trig functions, the techniques for powers of trig functions are no-go. Cross it off.
Completing the square works for a general quadratic. Period. Until we are looking at a general quadratic, completing the square is a no-go. Cross it off.
Finally, trig substitution (or hyperbolic trig substitution) works for things of the form
$\left(\pm a^2 \pm x^2 \right)\$.
Period. Until we are looking at the sum or difference of squares, trig substitution is a no-go. Cross it off.
(The one exception I refer to is things like
$\int \frac{1}{1-x^2} \, dx\$.
We can do it by trig substitution (or hyperbolic trig substitution) and get an answer involving the inverse hyperbolic tangent, or we can do it by partial fractions, since it is the ratio of two polynomials, and get an answer involving the natural logarithm. Either answer is correct, even though the two forms can look wildly different.)
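To see that those two answers really do match up (a check of my own, not in the original post): the hyperbolic substitution $x = \tanh t$ gives $1 - x^2 = \mathrm{sech}^2 t$ and $dx = \mathrm{sech}^2 t \, dt$, so the integral collapses to $\int dt = \tanh ^{-1}(x) + C$, while partial fractions gives
$\int \frac{1}{1-x^2} \, dx = \frac{1}{2}\int \left( \frac{1}{1-x} + \frac{1}{1+x} \right) dx = \frac{1}{2} \ln \left| \frac{1+x}{1-x} \right| + C,$
and since $\tanh ^{-1}(x) = \frac{1}{2} \ln \frac{1+x}{1-x}$ for $|x| < 1$, the two forms are the same function.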
Now, completing the square usually leads to either partial fractions or trig substitution; and trig substitution may lead to powers of trig functions. It is important to understand that any substitution may lead back to a known integral or to another special technique.
So the first guideline for special techniques is: having decided that it is not a known integral, we ask if any of the special techniques can be applied. If so, use it. In the rare case that more than 1 special technique applies, take your pick.
Thus the second guideline for special techniques is: a special technique may lead back to a known integral or to another special technique.
## 3. general techniques
There are two general techniques:
1. integration by parts
2. substitution
I cannot emphasize too strongly that these are tools of last resort, to be used after we decide that the given problem is not a known integral in disguise, and that no special techniques can be applied.
(OK, by the time we recognize the known integrals automatically, we probably recognize when integration by parts is the way to go, and it is no longer a tool of last resort but the automatic choice in preference to a special technique. But this good judgment will come with experience.)
I distinguish substitution as a general technique from “known integrals” because for the so-called known integrals, we are checking against a short list; for substitution in general, we’re just hoping that something simplifies. “General substitution” is basically an act of desperation. A good or clever choice often works, and we are hoping to make an inspired guess. The purpose of the “known integrals” list is to eliminate the hope for inspiration when it isn’t necessary.
The first guideline for general techniques is: try integration by parts first,
$\int u \, dv = u\ v - \int v \, du$
remembering that dv = dx is a possible choice; for example:
$\int \log (x) \, dx = x\ \log (x) - \int 1 \, dx = x\ \log (x) - x + C\$.
(You should say something like, “Oh, wow! I can integrate a logarithm using integration by parts!”)
The second guideline for general techniques is: when guessing a substitution, leave some play; think of the integral as
$\int \text{(a function of garbage)} \, d{\text{(garbage)}}$
and make choices for both function and garbage.
I’ll remark that integration by parts is pretty much the only possibility for integrating the logarithm. It is not a known integral, and no special technique applies; and there is no plausible choice for “garbage” other than x, no plausible choice for “function” other than the logarithm. That leaves integration by parts. It happens to work. If it didn’t, we might have created and taught another “elementary function”, defined as the integral of the natural logarithm.
## an additional special technique
I suspect that almost every freshman calculus text includes (what I consider) an obscure transformation for integrating any rational combination of (i.e. any ratio of) trig functions. It would be used only if and after the special technique “powers of trig functions” failed to work out.
Let me go look it up — I don’t have it committed to memory.
z = tan x/2.
I know it exists. I know where to look it up. I think it’s not worth remembering in detail, only that it exists and what it solves. I would call it a special technique, but I need it so rarely that I don’t bother to remember it. That is, I don’t remember what it is, but I do remember that it is.
And if I were taking a test that included “techniques of integration”, I would make sure I knew this one if it had been covered. There are some gory details.
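For reference (these are the standard identities that go with the substitution, quoted from the usual textbook treatment rather than from this post): with $z = \tan (x/2)$,
$\sin (x) = \frac{2z}{1+z^2}, \quad \cos (x) = \frac{1-z^2}{1+z^2}, \quad dx = \frac{2 \, dz}{1+z^2},$
which turns any rational function of $\sin (x)$ and $\cos (x)$ into a rational function of $z$, and hence into a partial fractions problem.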
## Summary
The executive summary is simply, as I said at the beginning in red: If an integral is on the known list, you’re wasting time trying to use special or general techniques. If an integral can be done using special techniques, you’re wasting time using general techniques.
In more detail…. Given an integral to evaluate, assuming you do not recognize it immediately…
• There are three categories of techniques
1. is it a known integral?
2. if not, can a special technique be applied to it?
3. if neither, will a general technique work?
• you should try the categories in order.
• known: instead of hoping for a flash of insight, we do a little pattern matching against a very small list, and see if we have a match.
• having decided that it is not a known integral, we ask if any of the special techniques can be applied.
• a special technique or a general technique may lead back to a known integral or to another special technique.
• I would try integration by parts before trying a general substitution.
• when guessing a general substitution, leave some play when choosing what’s what.
• There is a rarely-needed very general method for integrating a rational function of sines and cosines.
• Finally, check your answer by taking the derivative!
I should remark that I could easily write down integrals which cannot be done using what we know: they end up being used to define new (“not elementary”) functions. $\int x\ \tan (x) \, dx\$ is an example — that is, it can be transformed to such a non-elementary function. (Go ahead, try it!)
In more detail (but omitting hyperbolic trig functions)
known
$\int u^n \, du$
$\int e^u \, du$
$\int \sin (u) \, du$
$\int \sec ^2 (u) \, du$
and “there are alternatives and special cases”.
special
• partial fractions
• powers of trig functions
• trig substitution
• complete the square
general
• by parts
• general substitution
Finally, check your answer by taking the derivative! Almost all of us can differentiate better than we integrate.
## appendix: my favorite “proof” that 0 = 1
I am sure that most of you have seen some algebraic “proofs” that 0 = 1, usually involving a division by zero. Here’s one “proof” using calculus. We consider
$\int \frac{1}{x} \, dx$
and integrate by parts — of course it’s a known integral, but integrate by parts anyway!! Let u=1/x, dv = dx, and then from
$\int u \, dv = u\ v - \int v \, du$
we get
$\int \frac{1}{x} \, dx = \frac{x}{x} - \int x\ \left( \frac{-1}{x^2} \right) \, dx$
i.e.
$\int \frac{1}{x} \, dx = 1 + \int \frac{1}{x} \, dx\$.
Now subtract $\int \frac{1}{x} \, dx\$ from both sides, getting:
0 = 1.
Not good. Definitely not good. Really bad, in fact.
So what did I do wrong, huh? You know I must have done something wrong.
### 18 Responses to “Calculus: Organizing techniques of integration”
2. Says:
February 12, 2010 at 2:39 pm
Your first example is wrong. You ask the integral of (1+x^2)^17, when (I think) you meant to ask the integral of x*(1+x^2)^17.
3. rip Says:
February 12, 2010 at 8:03 pm
Thanks for the comment. I do make mistakes — and someone has already found one in here. But I got “x dx” right in the two places that jump out at me: I put the “x” with the dx….
here: “For motivation, let us consider
$\int (1+x^2)^{17}\ x \, dx\$.”
and here:
“Back to our given problem:
$\int (1+x^2)^{17}\ x \, dx\$.”
Is there one I’m not seeing?
Rip
4. Says:
February 12, 2010 at 8:51 pm
I am so embarrassed! I checked it 2 or 4 times, but I am so used to the x coming first, I just didn’t see it. Feel free to delete my comment, if you want.
I like your integral of garbage d(garbage). I call it junk when I’m teaching this stuff.
5. rip Says:
February 12, 2010 at 9:30 pm
You could look at it this way: if you didn’t see it, maybe some other people didn’t see it, either.
Not too long ago I asked a question about finite fields on the sci.math newsgroup, and I got politely spanked for overlooking a requirement. It happens.
That said, if you would like me to delete your comment — and this entire sequence — I will do it. I’d rather not, but I think it should be your call.
In any case, welcome to my blog.
Rip
Rip
6. Says:
February 13, 2010 at 7:07 am
Thank you for your generous response. (It’s fine to leave my comment.)
7. mekuria getaechew Says:
June 3, 2010 at 12:16 am
You explained it as a student needs. Thank you.
8. rip Says:
June 12, 2010 at 10:28 am
You’re welcome. I hope it helps.
9. Asghar Says:
February 22, 2011 at 4:55 am
plz stewart calculus include
10. rip Says:
February 22, 2011 at 5:44 pm
What do you mean? How would I include a book?
11. Sper Says:
February 26, 2011 at 2:59 pm
I greatly enjoyed reading this post – it’s clear, concise and so well written. I wish I’d read it when I was younger – it would have made my life so much easier:)
As for the mistake in the example, I think it may have something to do with the continuity of 1/x in zero, doesn’t it?
12. rip Says:
February 28, 2011 at 6:46 pm
Hi Sper,
Thanks. Nice to hear from you again.
As for the false proof, let me put it this way:
SPOILER SPACE
.
.
.
.
.
.
.
.
.
.
.
.
any two antiderivatives differ by a constant; they are not the same thing. If I insist on subtracting an antiderivative (an indefinite integral) from both sides, I need to introduce a constant of integration.
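To spell that out in symbols (an added gloss, not part of the original comment): write $F(x)$ for the antiderivative produced on the left and $G(x)$ for the one hiding inside the right-hand side. Integration by parts only shows

$F(x) = 1 + G(x)\$,

and since any two antiderivatives of $1/x$ differ by a constant, this is perfectly consistent; it just says $F - G = 1$. Cancelling "$\int \frac{1}{x} \, dx$" from both sides silently assumes $F = G$, and that is the step which manufactures $0 = 1$.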
13. khursheed Says:
April 23, 2011 at 10:01 pm
14. rip Says:
April 24, 2011 at 4:28 pm
That depends on how mathematical you get. The Black-Scholes equation, as I recall, is a partial differential equation. Still, that's the study of financial derivatives, rather esoteric.
In practice, even engineers don’t use integration all that much. In my experience, third-year engineering students have forgotten the techniques of integration.
I could argue that the calculus is really just the language of science – and if you want to understand a theory, you need to know what an integral and a derivative are… and the best way to understand them, for most of us at least, is to have spent some time computing them.
Along those lines, a quantitative business degree will presuppose that you can understand equations involving integral signs.
15. amare setie Says:
November 21, 2011 at 7:56 am
I could argue that the calculus is really just the language of science .
16. mouse Says:
August 23, 2012 at 3:37 am
I’m studying apm for the first time and dont understand integration or what e has to do with it. For ex. (6=e^2k) how would you integrate this and why? the step is what I want to understand.
17. rip Says:
August 23, 2012 at 4:23 pm
Hi Mouse,
Sorry, but it’s not practical for me to teach you calculus via comments on a blog. If you have a teacher, talk to him. If you don’t have one, get one.
BTW, we don’t usually integrate equations – and we don’t usually use k as a variable of integration – so your example is not a good one. That is, I would _not_ integrate that.
As for e, well, e^x is a function, so it can be integrated or differentiated. It happens to be a very interesting function, because it is its own integral and its own derivative, but that’s another issue.
Good luck,
rip
18. Abhishek Says:
April 16, 2013 at 3:17 am
Thanks for the tips and tricks! Very helpful!
|
http://math.stackexchange.com/questions/59294/surjection-that-increases-dimensions?answertab=oldest
|
# Surjection that increases dimensions
This question is somewhat inspired by a question on MathOverflow, but it is not necessary to read that question to understand what I am about to ask.
It is well known that one can establish a surjection between sets of different Hausdorff dimensions: in the regime of just set theory the cardinality of the unit interval and the unit square are the same, and in fact we get a bijection. If you add a bit of topology, one can in addition request that this surjection be given by a continuous map, but the map cannot be a bijection, else it'd be a homeomorphism.
What if, instead of continuity, we require a different condition?
Question Fix $N$ a positive integer. Let $B$ be the open unit ball in $\mathbb{R}^N$. Can we find an embedded smooth (or $C^1$) hypersurface $A\subset \mathbb{R}^N$ and a surjection $\phi:A\to B$ such that the vector $a - \phi(a)$ is orthogonal to $A$? Can it be made continuous? Can it be made a bijection?
-
I can see the real analysis, and I can see the differential geometry (I think!), but I have no idea where the elementary set theory comes into the question :-) – Asaf Karagila Aug 23 '11 at 20:21
@Asaf: I was wondering if there is a way of getting an answer based on cardinality arguments (something like: if $\gamma$ is a curve that intersects a hypersurface $A$ transversely, then $\gamma\cap A$ has countably many points etc.) – Willie Wong♦ Aug 23 '11 at 20:26
Correct me, but isn't there always a bijection between a hypersurface and the open unit ball, just by cardinality games? – Asaf Karagila Aug 23 '11 at 21:47
@Asaf: yes, which is why there is that funny condition with normality. – Willie Wong♦ Aug 23 '11 at 23:03
## 1 Answer
If $A$ is a hypersurface (co-dimension one and smooth) what you're describing is the graph of a function on $A$ -- well, locally that's what it is. But the problem boils-down to a local problem. You're asking for functions $f : D^{n-1} \to \mathbb R$ whose graph is an open subset of $\mathbb R^n$. This isn't possible, even if $f$ is discontinuous.
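A short justification of the last claim (an added gloss, not part of the original answer): if the graph of such an $f$ contained an open ball around a point $(x_0, f(x_0))$, that ball would also contain points $(x_0, y)$ with $y \neq f(x_0)$, yet the graph meets the vertical line over $x_0$ only at $(x_0, f(x_0))$. So the graph of any function $D^{n-1} \to \mathbb{R}$ has empty interior in $\mathbb{R}^n$, and in particular is never open.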
-
Sorry, I must be having a moment here, but why must the question be able to be localised? Even if locally the graphs are not open, what's to prevent the union over all neighborhoods of the graphs to be an open subset? I feel like there is a really elementary fact that I am overlooking. – Willie Wong♦ Aug 23 '11 at 20:33
oh... wait, I guess Baire category? – Willie Wong♦ Aug 23 '11 at 20:35
I think a little more careful local argument will tell you that not only is the interior of this set empty, but these sets have the form that countable unions of them also have empty interiors. – Ryan Budney Aug 23 '11 at 21:27
I don't understand what fails in the following argument: Take a plane-filling curve $\gamma$, and write a differential equation for a curve in the plane, $\dot\psi=n\times(\psi-\gamma)$, where $n$ is a vector orthogonal to the plane, whose magnitude can vary along the curve. The solution should be continuous, and the freedom in $n$ should allow us to prevent it from diverging or self-intersecting. Wouldn't $\phi=\gamma\circ\psi^{-1}$ then have the desired properties? – joriki Aug 24 '11 at 8:12
|
http://mathhelpforum.com/number-theory/50889-primes-quadratic-field.html
|
Thread:
1. Primes in quadratic field
Find the primes in $\mathbb{Q}(\sqrt{-1})$ which have a norm less than 6.
How do you approach this problem? Also how do you prove that you have indeed found ALL primes in $\mathbb{Q}(\sqrt{-1})$ which have a norm less than 6?
2. Hello,
Originally Posted by Pn0yS0ld13r
Find the primes in $\mathbb{Q}(\sqrt{-1})$ which have a norm less than 6.
How do you approach this problem? Also how do you prove that you have indeed found ALL primes in $\mathbb{Q}(\sqrt{-1})$ which have a norm less than 6?
An element of the quadratic field $\mathbb{Q}(\sqrt{-1})$ is in the form $a+b \sqrt{-1}$, where a and b are integers.
(so it's complex numbers)
The elements of this field are called Gaussian integers.
Conditions for a Gaussian integer to be prime are listed here : Gaussian Prime -- from Wolfram MathWorld
(note that you're asked for the norm to be less than 6, that is to say $a^2+b^2 < 36$)
3. Originally Posted by Pn0yS0ld13r
Find the primes in $\mathbb{Q}(\sqrt{-1})$ which have a norm less than 6.
A positive prime in $\mathbb{Z}$ will be called a Hacker prime and a Gaussian prime shall refer to any prime in $\mathbb{Z}[i]$. Thus, for example $2$ is a Hacker prime but it is not a Gaussian prime because $2=-i(1+i)^2$. Remember the units in $\mathbb{Z}[i]$ are the numbers $1,-1,i,-i$, so these guys cannot be primes (by definition). Two numbers $a,b\in \mathbb{Z}[i]$ are associate iff $a=bu$ where $u$ is a unit, therefore the associates of $a+bi$ are $-a-bi$, $-b+ai$, $b-ai$. Now remember that if a Gaussian integer is a Gaussian prime, then all its associates are Gaussian primes too; this means we do not need to check all the pairs Moo listed, since we can ignore the associated ones.
If $\pi$ is a Gaussian prime then $N(\pi) = \pi \bar \pi = p_1 ... p_r$ where $p_1,...,p_r$ are Hacker primes. This means $\pi | p$ for some Hacker prime $p$. And so (by definition) $p = \pi \alpha$ where $\alpha \in \mathbb{Z}[i]$. Thus, $p^2 = N(p) = N(\pi \alpha) = N(\pi) N(\alpha)$. Since $N(\pi)>1$ this forces $N(\pi) = p \text{ or }p^2$. In the latter case this forces $N(\alpha)=1$ i.e. $\alpha$ is a unit and so $\pi$ is associate to a Hacker prime. In the former case $\pi$ is not associate to a Hacker prime. This gives us a necessary condition. Given a Gaussian integer we takes its norm, then for it to be a Gaussian prime it is necessary for the norm to be a Hacker prime or a square of a Hacker prime. Is this also sufficient? The answer is no! Just consider the example with $2$ above. However, if the norm is a Hacker prime then it is also sufficient, and this is simple to prove. Say that $N(\pi) = p$ where $p$ is a Hacker prime. If $\pi$ was not prime then $\pi = \alpha\beta$ where $\alpha,\beta$ are non-units, therefore, $p = N(\pi) = N(\alpha)N(\beta)$ - but this is impossible because $p$ cannot be factored non-trivially (since it is a Hacker prime). Therefore the only thing we really ought to check are Gaussian integers which have norm a Hacker prime squared. But as said above those Gaussian primes must be associate to Hacker primes, and so the problem reduces to finding all Hacker primes which remain Gaussian primes. Here is the following result which I will not prove (unless you want it): let $p$ be an odd Hacker prime (if $p=2$ then look at example above) if $p\equiv 1 ~ (\bmod 4)$ then $p$ is not a Gaussian prime and if $p\equiv 3 ~ (\bmod 4)$ then $p$ is a Gaussian prime.
Now we can solve the problem.
The (up to associates) Hacker primes less than $6$ are: $2,3,5$. By the above, the ones that remain Gaussian primes are just $3$. Since its associates are also Gaussian primes, this means $3,-3,3i,-3i$ are all Gaussian primes. Now we need to find those $a+bi$ so that $a^2+b^2$ is a Hacker prime. Since $(-a)^2 = a^2,(-b)^2=b^2$ we can restrict the problem to $0<a,b<6$. This gives the primes: $1+i,1+2i,1+4i,2+3i,2+5i$. To complete the list just interchange $b$ for $a$ in $a+bi$ and change the signs to get all possible combinations.
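For readers who like to double-check such lists by machine, here is a small Python sketch (an added illustration, not from the thread; it follows the thread's reading of "norm less than 6" as $a^2+b^2<36$, and uses the standard primality criterion quoted from MathWorld above):

```python
def is_ordinary_prime(n):
    """Trial-division primality test for small ordinary integers."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_gaussian_prime(a, b):
    """a+bi is a Gaussian prime iff one of a, b is zero and the other has
    absolute value an ordinary prime congruent to 3 mod 4, or both are
    nonzero and a^2 + b^2 is an ordinary prime."""
    if a == 0:
        return is_ordinary_prime(abs(b)) and abs(b) % 4 == 3
    if b == 0:
        return is_ordinary_prime(abs(a)) and abs(a) % 4 == 3
    return is_ordinary_prime(a * a + b * b)

# all Gaussian primes a+bi with a^2 + b^2 < 36, associates and conjugates included
found = [(a, b) for a in range(-5, 6) for b in range(-5, 6)
         if a * a + b * b < 36 and is_gaussian_prime(a, b)]
print(found)
```

Up to associates and conjugates, the output should reproduce the list $1+i,1+2i,1+4i,2+3i,2+5i$ together with $\pm 3, \pm 3i$.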
4. Thank you Moo and ThePerfectHacker.
Wonderful explanations!
|
http://eventuallyalmosteverywhere.wordpress.com/2012/12/22/how-to-prove-fermats-little-theorem/
|
# How to Prove Fermat’s Little Theorem
Posted on December 22, 2012 by
The following article was prompted by a question from one of my mentees on the Senior Mentoring Scheme. A pdf version is also available.
Background Ramble
When students first meet problems in number theory, it often seems rather different from other topics encountered at a similar time. For example, in Euclidean geometry, we immediately meet the criteria for triangle similarity or congruence, and various circle theorems. Similarly, in any introduction to inequalities, you will see AM-GM, Cauchy-Schwarz, and after a quick online search it becomes apparent that these are merely the tip of the iceberg for the bewildering array of useful results that a student could add to their toolkit.
Initially, number theory lacks such milestones. In this respect, it is rather like combinatorics. However, bar one or two hugely general ideas, a student gets better at olympiad combinatorics questions by trying lots of olympiad combinatorics questions.
I don’t think this is quite the case for the fledgling number theorist. For them, a key transition is to become comfortable with some ideas and notation, particularly modular arithmetic, which make it possible to express natural properties rather neatly. The fact that multiplication is well-defined modulo n is important, but not terribly surprising. The Chinese Remainder Theorem is a `theorem’ only in that it is useful and requires proof. When you ask a capable 15-year-old why an arithmetic progression with common difference 7 must contain multiples of 3, they will often say exactly the right thing. Many will even give an explanation for the regularity in occurrence of these which is precisely the content of the theorem. The key to improving number theory problem solving skills is to take these ideas, which are probably obvious, but sitting passively at the back of your mind, and actively think to deploy them in questions.
Fermat’s Little Theorem
It can therefore come as a bit of a shock to meet your first non-obvious (by which I mean, the first result which seems surprising, even after you’ve thought about it for a while) theorem, which will typically be Fermat’s Little Theorem. This states that:
$\text{For a prime }p,\text{ and }a\text{ any integer:}\quad a^p\equiv a\mod p.$ (1)
Remarks
• Students are typically prompted to check this result for the small cases p=3, 5 and 7. Trying p=9 confirms that we do really need the condition that p be prime. This appears on the 2012 November Senior Mentoring problem sheet and is a very worthwhile exercise in recently acquired ideas, so I will say no more about it here.
• Note that the statement of FLT is completely obvious when a is a multiple of p. The rest of the time, a is coprime to p, so we can divide by a to get the equivalent statement:
$\text{For a prime }p,\text{ and }a\text{ any integer coprime to }p:\quad a^{p-1}\equiv 1\mod p.$ (2)
• Sometimes it will be easier to prove (2) than (1). More importantly, (2) is sometimes easier to use in problems. For example, to show $a^{p^2}\equiv a \mod p$, it suffices to write as:
$a^{p^2}\equiv a^{(p-1)(p+1)+1}\equiv (a^{p-1})^{p+1}\times a\equiv 1^{p+1}\times a \equiv a.$
• A word of warning. FLT is one of those theorems which it is tempting to use on every problem you meet, once you know the statement. Try to resist this temptation! Also check the statement with small numbers (e.g. p=3, a=2) the first few times you use it, as with any new theorem (a quick computational check of this kind follows these remarks). You might be surprised how often solutions contain assertions along the lines of
$a^p\equiv p \mod (a-1).$
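Here is a minimal script for that kind of sanity check (an added illustration, not from the original post): it verifies $a^p\equiv a \mod p$ for $p=3,5,7$ and shows that the non-prime modulus $9$ fails.

```python
def flt_holds(p):
    """Check a^p = a (mod p) for every residue a modulo p."""
    return all(pow(a, p, p) == a % p for a in range(p))

for p in (3, 5, 7, 9):
    print(p, flt_holds(p))
# 9 fails: for example 2^9 = 512 = 8 (mod 9), not 2.
```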
Proofs
I must have used FLT dozens of times (or at least tried to use it – see the previous remark), before I really got to grips with a proof. I think I was daunted by the fact that the best method for, say, p=7, a careful systematic check, would clearly not work in the general case. FLT has several nice proofs, and is well worth thinking about for a while before reading what follows. However, I hope these hints provide a useful prompt towards discovering some of the more interesting arguments.
Induction on a to prove (1)
• Suppose $a^p\equiv a\mod p$. Now consider $(a+1)^p$ modulo p.
• What happens to each of the (p+1) terms in the expansions?
• If necessary, look at the expansion in the special case p=5 or 7, formulate a conjecture, then prove it for general p.
Congruence classes modulo p to prove (2)
• Consider the set $\{a,2a,3a,\ldots,(p-1)a\}$ modulo p.
• What is this set? If the answer seems obvious, think about what you would have to check for a formal proof.
• What could you do now to learn something about $a^{p-1}$?
Combinatorics to prove (1)
• Suppose I want a necklace with p beads, and I have a colours for these beads. We count how many arrangements are possible.
• Initially, I have the string in a line, so there are p labelled places for beads. How many arrangements?
• Join the two ends. It is now a circle, so we don’t mind where the labelling starts: Red-Green-Blue is the same as Green-Blue-Red.
• So, we’ve counted some arrangements more than once. How many have we counted exactly once?
• How many have we counted exactly p times? Have we counted any arrangements some other number of times?
Group Theory to prove (2)
This is mainly for the interest of students who have seen some of the material for FP3, or some other introduction to groups.
• Can we view multiplication modulo p as a group? Which elements might we have to ignore to ensure that we have inverses?
• What is $\{1,a,a^2,a^3,\ldots\}$ in this context? Which axiom is hardest to check?
• How is the size of the set of powers of a modulo p related to the size of the whole group of congruences?
• Which of the previous three proofs is this argument most similar to?
• Can you extend this to show the Fermat-Euler Theorem:
$\text{For any integer }n,\text{ and }a\text{ coprime to }n:\quad a^{\phi(n)}\equiv 1 \mod n,$
where $\phi(n)$ counts how many integers between 1 and n are coprime to n.
|
http://math.stackexchange.com/questions/123222/convergence-of-lp-norms?answertab=oldest
|
# Convergence of $L^p$ norms
Given a measure space $X$ with its measure $\mu$, it can be shown (I'll provide a proof if asked for) that $\displaystyle \forall f \in L^\infty(X,\mu),~\textrm{if } \exists p_0:\forall q \geq p_0, f\in L^q\cap L^\infty, \textrm{ then } \lim_{p \to \infty}\|f\|_p = \|f\|_\infty$ (which, by the way, justifies this notation)
This convergence implies the following: $\forall f, \forall \epsilon > 0, \exists q:= q(f,\epsilon),\textrm{ such that } \forall p\geq q, |\|f\|_p-\|f\|_\infty| < \epsilon$
This means that, given an approximation error of the infinity norm bounded by $\epsilon$, I should be able to compute an index (let's call it that) so that I don't need to go any further, but have a priori knowledge of the potential error.
The idea is, I am working on some pattern recognition problems and I am using the infinity norm somewhere there. However, as it is quite unreliable against outliers, using a p-norm approximation allows to "average out" the local outliers and get a more robust result. The higher the power the more importance the local outliers (or singularities) have, and the less I like it :)
If you have any idea of a proof or a result, it would be very helpful.
-
In general it sounds not too good to "average" outliers (since they are outliers and hence, distort the mean quite heavy). How about calculating the infinity norm, but leaving out a few of the largest contributions? – Dirk Mar 22 '12 at 10:08
– Jean-Luc Bouchot Mar 22 '12 at 10:38
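As a purely numerical illustration of the limit discussed above (an added sketch, not from the thread; it uses numpy and counting measure on a finite sample, so $\|f\|_p$ decreases to $\max_i |f_i|$ as $p$ grows):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(size=1000)
f[::100] += 5.0                 # plant a few "outliers"

sup = np.max(np.abs(f))         # the infinity norm of this finite sample
for p in (2, 4, 8, 16, 32, 64, 128):
    lp = np.sum(np.abs(f) ** p) ** (1.0 / p)
    print(p, lp, lp - sup)      # the gap should shrink as p increases
```

The printed gap $\|f\|_p - \|f\|_\infty$ should shrink as $p$ increases, which is the qualitative behaviour the question is asking to quantify.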
|
http://mathoverflow.net/revisions/41063/list
|
## Return to Question

# Quantifying that near a point on a smooth hypersurface, it looks like a tangent hyperplane
Suppose $C$ is a smoothly bounded convex body (*) in $\mathbb{R}^d$, and $p$ is a point on the boundary of $C$. Let $r>0$ and let $B(r)$ denote the ball of radius $r$ centered at $p$.
Is it true that $\mbox{vol }(B(r) \cap C) / \mbox{vol }(B(r)) \to 1/2$ as $r \to 0$?
This seems obvious, but I can't seem to state a good reason why it's true. Does it follow from some well-known theorem?
I would guess that we don't need convexity, and that something similar holds for smoothly embedded hypersurfaces in Euclidean space, and maybe one can relax "smooth" to class $C^2$?
(*) My understanding is that "smoothly bounded convex body" means a compact, convex set, with nonempty interior, with a unique supporting hyperplane at each point. I am not sure how close this is to a convex image of a smooth embedding of a $d$-dimensional ball, but again, I expect that the statement probably holds in either case.
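A quick Monte Carlo sanity check of the claimed limit in the simplest case (an added illustration, not part of the question; it takes $C$ to be the closed unit disk in $\mathbb{R}^2$ and $p=(1,0)$, and uses numpy):

```python
import numpy as np

rng = np.random.default_rng(1)
p = np.array([1.0, 0.0])                    # boundary point of the unit disk C
for r in (0.5, 0.1, 0.02):
    pts = p + rng.uniform(-r, r, size=(200000, 2))   # uniform in a square around p
    in_ball = np.linalg.norm(pts - p, axis=1) <= r
    in_C = np.linalg.norm(pts, axis=1) <= 1.0
    ratio = np.sum(in_ball & in_C) / np.sum(in_ball)
    print(r, ratio)                          # should creep toward 1/2 as r shrinks
```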
|
http://physics.stackexchange.com/questions/14254/rainbow-around-sun?answertab=oldest
|
# Rainbow around Sun
From the perspective of a person, a rainbow is formed when the Sun is behind the person, and there is a critical angle made by the rainbow.
However, on several occasions, usually at noon when the Sun is higher, I saw a ring around the Sun made of the colors of the rainbow. Is that a rainbow? Is it within the definition of a rainbow? And how is it possible?
-
## 4 Answers
What you're asking about sounds like an optical halo. It's produced by sunlight being refracted by ice crystals in the upper troposphere. The process is similar to that involved in a rainbow, except that the light is only refracted, not reflected, in this case.
-
Well, the definition of a rainbow is "an arch of colors formed in the sky under certain circumstances" according to my Apple dictionary. More literally, it would need to be caused by rain, so you're correct that you'd see it 180° away from the sun.
But, if there are thin cirrus clouds made with tiny ice crystals, you can get what are called "sun dogs." Because of the angles of the faces of an ice crystal, sun dogs will form 22° away from the sun, not 180°. Wikipedia has some good pictures.
-
all right. Thank you. – jormansandoval Sep 2 '11 at 14:09
– Joe Sep 7 '11 at 14:54
As David mentioned above, it is not a rainbow, but an optical halo. Actually, halos are visible much more often than rainbows. Here you can find amazing pictures of halos (as well as rainbows and other optical phenomena) and some explanations of their appearance in the sky.
-
Thanks Mr. Physicsworks – jormansandoval Sep 2 '11 at 15:47
The other answers describe a rainbow-like phenomenon involving ice crystals, which may very well be what you saw. However, there is another possibility.
A normal rainbow occurs when light enters a spherical drop, refracts at the curved surface (dispersing the colors), reflects off the back of the drop, and then leaves the drop, refracting again. The angle between incident and outward-going light is about $42^\circ$, and so you see rainbows $42^\circ$ from the point directly opposite the Sun.
No one said, though, that the light had to undergo exactly one total internal reflection before leaving the drop. It can reflect multiple times, coming out at different angles. A second-order rainbow is at a slightly different angle from the normal first-order one. More interestingly, third- and fourth-order rainbows can be found circling the Sun (not circling the point opposite the Sun), simply due to the geometry. Wikipedia has some information, though unfortunately I cannot find any good diagrams for this effect online. Third-order rainbows are very hard to see, but they have been documented and are in a sense more "true" to the definition than phenomena involving ice.
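The $42^\circ$ figure in the last answer can be recovered numerically from Snell's law (an added sketch; it assumes a refractive index of about $1.33$ for water): for the primary bow, a ray entering at incidence angle $i$ and refracting to angle $r$ with $\sin i = n\sin r$ leaves after one internal reflection with total deviation $D(i) = 180^\circ + 2i - 4r$, and the bow sits where $D$ is stationary.

```python
import numpy as np

n = 1.33                                            # refractive index of water (assumed)
i = np.radians(np.linspace(0.01, 89.99, 100000))    # angle of incidence
r = np.arcsin(np.sin(i) / n)                        # Snell's law
D = np.pi + 2 * i - 4 * r                           # deviation after one internal reflection
D_min = np.degrees(D.min())
print("minimum deviation:", D_min)                  # about 138 degrees
print("rainbow angle:", 180 - D_min)                # about 42 degrees from the antisolar point
```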
-
|
http://mathoverflow.net/questions/82639/using-distribution-of-primes-to-generate-random-bits/82756
|
## using distribution of primes to generate random bits?
In his popular science book The Music of the Primes, Marcus du Sautoy tries to link the truth of the Riemann Hypothesis to the "randomness" of the primes. To do this, he invokes the idea of a "fair coin". This, he claims that probability theory tells us, must satisfy an asymptotic relation whereby the cumulative difference between the number of heads and the number of tails should be $o(x^{1/2})$. He then proposes the difficult-to-visualise idea of a "prime number die" with $\ln n$ sides, so that the probability of each $n$ being prime is $1/\ln n$. And he states that this die will be "fair" if and only if the RH is true. His attempted explanation is necessarily vague and impressionistic, relying on the fact the RH is equivalent to $li(x) - \pi(x)$ being $o(x^{1/2+\epsilon})$ for any positive epsilon (similarly for $\psi(x) - x$, or the Mertens function), and then trying to explain $o(x^{1/2})$ in terms people are familiar with (an unbiased coin toss).
I was wondering if something like this could be made more precise. Suppose we define the increasing sequence $x_k$ for $k =0,1,2,3,...$ where $x_0 = 2$ and $Li(x_{k}) - Li(x_{k-1}) = 0.5$ (that is, $\int_{x_{k-1}}^{x_k} dx/ln x = 0.5$). The idea is that each interval $(x_{k-1},x_k]$ has a 0.5 probability of containing a prime number (taking the density of primes to be $1/ln x$ as usual).
So we then have a "random bit generator": $b_n = 0$ if there's no prime in $(x_{n-1},x_n]$ and $b_n = 1$ if there's at least one prime in the interval.
So would this sequence of bits pass the test for "unbiasedness" which du Sautoy refers to?
We could consider $2H(\pi(x_n)-\pi(x_{n-1})) - 1$ where $H(x)$ is the variation of the Heaviside function which is 1 for positive $x$ and 0 for nonpositive $x$. This will produce the value +1 if there are primes in the interval, and -1 if there are none. So we sum these values and ask: is this $o(x^{1/2})$?
My guess would be that the RH will be equivalent to this function being $o(x^{1/2+\epsilon})$ for any positive $\epsilon$. Any thoughts on this?
-
Please ask a focused question. – quid Dec 4 2011 at 19:29
He has asked two focussed questions. They may be hard to answer because of the definitions involved, but they are very specific. Gerhard "Ask Me About System Design" Paseman, 2011.12.04 – Gerhard Paseman Dec 4 2011 at 19:37
@G.P. And they are? – quid Dec 4 2011 at 19:44
My reading. I define one walk on a numberline using the b_i. Does this walk stay within cn of the start after n^2 steps for all n? The second question is like the first, but uses different b_i . Gerhard "Ask Me About System Design" Paseman, 20111.12.04 – Gerhard Paseman Dec 4 2011 at 20:07
His final question is less focussed on its own, but seems natural to ask after considering the first two. Gerhard "Ask Me About System Design" Paseman, 2011.12.04 – Gerhard Paseman Dec 4 2011 at 20:10
## 2 Answers
Here is a relevant reference:
http://arxiv.org/pdf/math/0603450
-
Gerhard Paseman in the end convinced me there is something to answer here, so blame him or thank him (depending on your opinion of the answer).
The question seems to be whether the $b_i$ 'behave like a random sequence', in the sense that the difference between the cumulative frequencies of the values $0$ and $1$ for $i \le x$ is 'small', roughly meaning something square-root-ish (the details in the question are not quite consistent). The second question seems to be a rescaled version (to $\pm 1$ instead of $0,1$), to be able to express this via summing.
The answer to this is (in all likelihood) a clear 'no'; indeed I tend to think the question is based on a false premise. The 'in all likelihood' is due to the fact that I only give a rough argument to make the point, and do not really work this out for the primes (this is not a claim that I could do so, but if the claim in the question were ever true, the distribution of the primes would be very different from what is expected, and not at all random-like, which is the punch-line of the question).
The problem is that the $x_k$ are defined in some way with the motivation that each interval should have the property of 'ha[ving] a 0.5 probability of containing a prime number'; and thus it is assumed that the $b_i$ take the values $0$ and $1$ about equally often.
Yet this is not at all what would happen if things were 'random.' The expected number of primes in such an interval should be $1/2$, but that is something else.
To illustrate this with a simple example: say one rolls a four-sided die and considers the outcome 'prime' if one gets a $1$, so in a quarter of all cases; the analog of the definition in the question would then choose intervals of length $2$. However, if one partitions this sequence into intervals of length $2$, then one expects that $9/16$ of the intervals contain no such 'prime' (corresponding to $b_i=0$).
Also, this is not (only) an effect of the small interval size. For an $n$-sided die one would get an interval length of $n/2$, and the interval would contain no prime with probability $(1-1/n)^{n/2}$, converging to $1/ \sqrt{e}$, which is not $1/2$. The logarithm should be sufficiently 'constant' that I do not expect much difference in the outcome; in any case there is no reason the analog will be $1/2$.
So the $b_i$ should not have the same probability of taking the values $0$ and $1$, and the difference of the frequency counts should be about linear in $x$.
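To make this concrete, here is a small experiment one could run (an added sketch, not part of the original answer): build the intervals by accumulating the density $1/\ln n$ until it reaches $0.5$, then count how often an interval contains no prime. If the reasoning above is right, the empty fraction should sit closer to $1/\sqrt{e}\approx 0.61$ than to $1/2$.

```python
import math

N = 2_000_000
sieve = bytearray([1]) * (N + 1)          # simple sieve of Eratosthenes
sieve[0] = sieve[1] = 0
for d in range(2, int(N ** 0.5) + 1):
    if sieve[d]:
        sieve[d * d::d] = bytearray(len(range(d * d, N + 1, d)))

empty = total = 0
acc, has_prime = 0.0, False
for n in range(10, N + 1):                # start at 10 to skip the very small-n regime
    acc += 1.0 / math.log(n)
    has_prime = has_prime or bool(sieve[n])
    if acc >= 0.5:                        # close an interval of Li-length roughly 0.5
        total += 1
        empty += not has_prime
        acc, has_prime = 0.0, False

print(total, "intervals,", empty / total, "fraction with no prime")
```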
-
|
http://nrich.maths.org/public/leg.php?code=-38&cl=3&cldcmpid=4338
|
# Search by Topic
#### Resources tagged with Permutations similar to Colour Building:
##### Other tags that relate to Colour Building
Visualising. Cuisenaire rods. Properties of numbers. Patterned numbers. Addition & subtraction. Generalising. Mathematical reasoning & proof. Working systematically. Creating expressions/formulae. Combinations.
### There are 21 results
Broad Topics > Decision Mathematics and Combinatorics > Permutations
### And So on and So On
##### Stage: 3 Challenge Level:
If you wrote all the possible four digit numbers made by using each of the digits 2, 4, 5, 7 once, what would they add up to?
### Painting Cubes
##### Stage: 3 Challenge Level:
Imagine you have six different colours of paint. You paint a cube using a different colour for each of the six faces. How many different cubes can be painted using the same set of six colours?
### Flagging
##### Stage: 3 Challenge Level:
How many tricolour flags are possible with 5 available colours such that two adjacent stripes must NOT be the same colour. What about 256 colours?
### Six Times Five
##### Stage: 3 Challenge Level:
How many six digit numbers are there which DO NOT contain a 5?
### Shuffle Shriek
##### Stage: 3 Challenge Level:
Can you find all the 4-ball shuffles?
### Power Crazy
##### Stage: 3 Challenge Level:
What can you say about the values of n that make $7^n + 3^n$ a multiple of 10? Are there other pairs of integers between 1 and 10 which have similar properties?
### Master Minding
##### Stage: 3 Challenge Level:
Your partner chooses two beads and places them side by side behind a screen. What is the minimum number of guesses you would need to be sure of guessing the two beads and their positions?
### Factoring a Million
##### Stage: 4 Challenge Level:
In how many ways can the number 1 000 000 be expressed as the product of three positive integers?
### Bell Ringing
##### Stage: 3 Challenge Level:
Suppose you are a bellringer. Can you find the changes so that, starting and ending with a round, all the 24 possible permutations are rung once each and only once?
### Ding Dong Bell
##### Stage: 3, 4 and 5
The reader is invited to investigate changes (or permutations) in the ringing of church bells, illustrated by braid diagrams showing the order in which the bells are rung.
### Euromaths
##### Stage: 3 Challenge Level:
How many ways can you write the word EUROMATHS by starting at the top left hand corner and taking the next letter by stepping one step down or one step to the right in a 5x5 array?
### Even Up
##### Stage: 3 Challenge Level:
Consider all of the five digit numbers which we can form using only the digits 2, 4, 6 and 8. If these numbers are arranged in ascending order, what is the 512th number?
### Thank Your Lucky Stars
##### Stage: 4 Challenge Level:
A counter is placed in the bottom right hand corner of a grid. You toss a coin and move the star according to the following rules: ... What is the probability that you end up in the top left-hand. . . .
### Permute It
##### Stage: 3 Challenge Level:
Take the numbers 1, 2, 3, 4 and 5 and imagine them written down in every possible order to give 5 digit numbers. Find the sum of the resulting numbers.
### Sheffuls
##### Stage: 4 Challenge Level:
Discover a handy way to describe reorderings and solve our anagram in the process.
### Chances Are
##### Stage: 4 Challenge Level:
Which of these games would you play to give yourself the best possible chance of winning a prize?
### Card Shuffle
##### Stage: 3 and 4
This article for students and teachers tries to think about how long would it take someone to create every possible shuffle of a pack of cards, with surprising results.
### Voting Paradox
##### Stage: 4 and 5 Challenge Level:
Some relationships are transitive, such as `if A>B and B>C then it follows that A>C', but some are not. In a voting system, if A beats B and B beats C should we expect A to beat C?
### 396
##### Stage: 4 Challenge Level:
The four digits 5, 6, 7 and 8 are put at random in the spaces of the number : 3 _ 1 _ 4 _ 0 _ 9 2 Calculate the probability that the answer will be a multiple of 396.
### Card Game (a Simple Version of Clock Patience)
##### Stage: 4 Challenge Level:
Four cards are shuffled and placed into two piles of two. Starting with the first pile of cards - turn a card over... You win if all your cards end up in the trays before you run out of cards in. . . .
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
|
http://math.stackexchange.com/questions/15503/defining-subfunctors?answertab=oldest
|
# Defining “subfunctors”
I have a functor $F\colon \mathbf{Rng}\to\mathbf{Grp}$, and a correspondence on objects which assigns to every group $F(R)$ a suitable subgroup $G_o(R)\subseteq F(R)$. Is there a way to turn $G$ into a functor, defining $G_o(R)\to G_o(S)$ via the maps I have between $F(R)$ and $F(S)$? $$\begin{array}{ccc} F(R) &\to^{F(f)}& F(S) \\ \uparrow_{\iota_R} && \uparrow_{\iota_S}\\ G_o(R) & &G_o(S) \end{array}$$ In this diagram the vertical arrows are simply the existing injections. I thought to define $G_o(R)\to G_o(S)$ by taking the obvious left inverse going from the copy of $G_o(-)$ inside $F(-)$ back to $G_o(-)$ (call this map $\pi_S$; then $G(R)\to G(S)$ is defined by $\pi_S\circ F(f)\circ \iota_R$), but I'm not sure this is going to work...
If you think you'll find it useful, $F$ is the functor which assigns to every ring its group of units.
-
1
Those injections are not morphisms in the category of groups, or rings, just sets. I think that is where you are going to bump into a problem. More importantly, what do you want out of this functor? Do you want this process to be functorial for a reason? – BBischof Dec 25 '10 at 16:13
Can we ask what exactly $G_o$ is? – Sean Tilson Dec 25 '10 at 16:17
I think usually a subfunctor is defined as already being a functor (so the maps you want to be there are part of the definition), such that its values are subobjects of the values of the big functor. – Dylan Wilson Dec 25 '10 at 18:35
## 1 Answer
Using only the things you have written, your question does not have an answer. In general there may not be any way to complete the commutative diagram you drew into a square (and in general there may not be any projection $\pi_S$, but this is a smaller problem). The basic problem is that you have only specified what $G_0$ does on objects, so as it stands $G_0$ is not even a functor (and in particular not a subfunctor).
Consider the example where $F$ is the group of units functor, as above. Let $G_0(R)$ be the whole group of units whenever $R$ is infinite, and the trivial group when $R$ is finite. Then you will see that there is in general no way to fill in the square above for e.g. the map $\mathbb{Z} \to \mathbb{Z}/n\mathbb{Z}$.
Fortunately the condition that $G_0$ should be a subfunctor of $F$ means that you have very little choice in how you define $G_0(f)$, for a morphism $f$. The fact that the diagram above should be commutative forces you to choose $G_0(f)$ to be equal to the restriction of $F(f)$ to $G_0(R)$. What you then need to check, in order for this to be well defined, is that this restriction always lands inside $G_0(S)$. (This fails in the example of the previous paragraph.) Whenever this condition holds, $G_0$ automatically becomes a functor using only that $F$ is a functor (check this).
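For completeness, here is the "check this" spelled out (an added gloss, not part of the original answer): if $G_0(f) := F(f)|_{G_0(R)}$ always lands in $G_0(S)$, then for composable morphisms $f\colon R\to S$ and $g\colon S\to T$ we get

$G_0(g)\circ G_0(f) = F(g)|_{G_0(S)}\circ F(f)|_{G_0(R)} = \left(F(g)\circ F(f)\right)|_{G_0(R)} = F(g\circ f)|_{G_0(R)} = G_0(g\circ f)\$,

and similarly $G_0(\mathrm{id}_R) = F(\mathrm{id}_R)|_{G_0(R)} = \mathrm{id}_{G_0(R)}$, so $G_0$ is a functor and the inclusions $G_0(R)\hookrightarrow F(R)$ form a natural transformation.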
-
This was my original idea: I want to fix an ideal $J$ in a ring $R$, and then define the set of all subgroups $G\le R^\times$ such that $J+G=R^\times$. Call $S_J(R)$ the intersection of all such groups. $G_o$ is the correspondence associating $R$ to $S_J(R)$. I'm wondering if, given a ring morphism $f:R\to S$, i can obtain a morphism between $S_J(R)$ and $S_{f(J)}(S)$, where $f(J)$ is the ideal generated by $f(J)$. – tetrapharmakon Dec 26 '10 at 16:33
|
http://mathoverflow.net/questions/14849/what-would-a-moral-proof-of-the-weil-conjectures-require/14933
|
## What would a “moral” proof of the Weil Conjectures require?
At the very end of this 2006 interview (rm), Kontsevich says
"...many great theorems are originally proven but I think the proofs are not, kind of, "morally right." There should be better proofs...I think the Index Theorem by Atiyah and Singer...its original proof, I think it's ugly in a sense and up to now, we don't have "the right proof." Or Deligne's proof of the Weil conjectures, it's a morally wrong proof. There are three proofs now, but still not the right one."
I'm trying to understand what Kontsevich means by a proof not being "morally right." I've read this article by Eugenia Cheng on morality in the context of mathematics, but I'm not completely clear on what it means with respect to an explicit example. The general idea seems to be that a "moral proof" would be one that is well-motivated by the theory and in which each step is justified by a guiding principle, as opposed to an "immoral" one that is mathematically correct but relatively ad hoc.
To narrow the scope of this question and (hopefully) make it easier to understand for myself, I would like to focus on the second part of the comment. Why would Kontsevich says that Deligne's proof is not "morally right"? More importantly, what would a "moral proof" of the Weil Conjectures entail?
Would a moral proof have to use motivic ideas, like Grothendieck hoped for in his attempts at proving the Weil Conjectures? Have there been any attempts at "moralizing" Deligne's proof? How do the other proofs of the Weil Conjectures measure up with respect to mathematical morality?
-
## 4 Answers
I would guess that Grothendieck's envisaged proof, via the standard conjectures, would be "morally right" in Kontsevich's sense. (Although there is the question of how the standard conjectures would be proved; since they remain conjectures, this question is open for now!)
The objection to Deligne's proof is that it relies on various techniques (passing to symmetric powers and Rankin--Selberg inspired ideas, analytic arguments related to the positivity of the coefficients of the zeta-function, and other such things) that don't seem to be naturally related to the question at hand. I believe that Grothendieck had a similar objection to Deligne's argument.
As a number-theorist, I think Deligne's proof is fantastic. One of the appeals (at least to me) of number theory is that none of the proofs are "morally right" in Kontsevich's sense. Obviously, this is a very personal feeling.
(Of course, a proof of the standard conjectures --- any proof, to my mind --- would also be fantastic!)
[Edit, for clarification; this is purely an aside, though:] Some arguments in number theory, for example the primitive root theorem discussed in the comments, are pure algebra when viewed appropriately, and here there are very natural and direct arguments. (For example, in the case of primitive roots, there is basic field theory combined with Hensel's lemma/Newton approximation; this style of argument extends, in some form, to the very general setting of complete local rings.) When I wrote that none of the proof in number theory are "morally right", I had in mind largely the proofs in modern algebraic number theory, such as the modularity of elliptic curves, Serre's conjecture, Sato--Tate, and so on. The proofs use (almost) everything under the sun, and follow no dogma. Tate wrote of abelian class field theory that "it is true because it could not be otherwise" (if I remember the quote correctly), which I took to mean (given the context) that the proofs in the end are unenlightening as to the real reason it is true; they are simply logically correct proofs. This seems to be even more the case with the proofs of results in non-abelian class field theory such as those mentioned above. Despite this, I personally find the arguments wonderful; it is one of the appeals of the subject for me.
-
1
In that theorem, "the only cyclic groups" should be "the only units groups mod m" and p should be an odd prime. There are two parts to this theorem: (1) units mod m are not cyclic for other m and (2) units mod m are cyclic for those m. It's easier to show (1). If m is any other number then the units mod m have more than one element of order 2: if m is divisible by 4 and is not 4 itself, then -1 and 1+m/2 mod m both have order 2 and are different. If m is twice an odd number, units mod m and mod m/2 are isomorphic, so we can focus on odd m. – KConrad Feb 10 2010 at 3:57
1
What problem do you have with the standard proofs? – Qiaochu Yuan Feb 10 2010 at 3:57
1
If m is not an odd prime power then we can write m = ab where a > 2 and b > 2, so (Z/m)* = (Z/a)* x (Z/b)*, so visibly there are a few elements of order 2: (-1,1), (1,-1), and (-1,-1). That completes a morally right proof of item (1). – KConrad Feb 10 2010 at 3:58
2
As for item (2), that when m is 2, 4, p^k, or 2p^k for odd prime p, the units mod m are cyclic, we can reduce easily to the case of p^k. In this case I'd use p-adics: (Z/p^k)* = (Z_p/p^kZ_p)* = Z_p*/(1 + p^kZ_p) = (mu_{p-1} x (1+pZ_p))/(1+p^kZ_p) = mu_{p-1} x (1+pZ_p)/(1+p^kZ_p). Since mu_{p-1} is cyclic of order p - 1 and 1+p is a generator of the second factor, a group of p-power order, we've written (Z/p^k)* as a product of cyclic groups of relatively prime order, so the group itself is cyclic. QED – KConrad Feb 10 2010 at 4:01
3
The only if part is dealt with by Keith Conrad's comment. For the if part: the existence of a primitive root mod p is algebra (any finite multiplicative subgroup of a field is cyclic). To get from mod $p$ to mod $p^n$, one uses Hensel's lemma (or, if you prefer, Newton approximation); of course, you can also be very explicit and just observe directly via the binomial theorem that 1 + p, or 1 + 4 when p = 2, generate the units mod p^n that are 1 mod p (or 1 mod 4 when p = 2). Hensel/Newton is the basic tool for deforming over nilpotent ideals, and I think is "morally right" in that context. – Emerton Feb 10 2010 at 4:04
I am by no means an expert on algebraic geometry, but maybe I can say a little bit. Kontsevich seems to have written a book, "Beyond Number". There is one paragraph:
“Very often a mathematician considers his colleague from a different domain with disdain -- what kind of a perverse joy can this guy find in his unmotivated and plainly boring subject? I have tried to learn the hidden beauty in various things, but still for many areas the source of interest is for me a complete mystery.
My theory is that too often people project their human weakness/properties onto their mathematical activity.
There are obvious examples on the surface: for instance, the idea of a classification of some objects is an incarnation of collector instincts, the search for maximal values is another form of greed, computability/decidability comes from the desire of a total control.
Fascination with iterations is similar to the hypnotism of rhythmic music. Of course, the classification of some kinds of objects could be very useful in the analysis of more complicated structures, or it could just be memorized in simple cases.
The knowledge of the exact maximum or an upper bound of some quantity depending on parameters gives an idea about the range of its possible values. A theoretical computability can be in fact practical for computer experiments. Still, for me the motivation is mostly the desire to understand the hidden machinery in a striking concrete example, around which one can build formalisms.
..... In a deep sense we are all geometers."
I think what Kontsevich means is that not only should the result be correct, but the method used to get the result should also be elegant and natural. Just as he mentioned, the most interesting thing to him is the hidden machinery behind striking examples. Take, for example, the Atiyah-Singer index theorem. Rosenberg once mentioned in class that this theorem should have its machinery living in abelian categories or even exact categories, instead of triangulated categories (where Grothendieck-Riemann-Roch now lives). I guess what they are thinking about is that one should use some universal constructions, a "universal theory" (in some sense). They always put emphasis on one sentence, "Mathematics should be simple", which might mean that the proof of a big theorem should be simple. That is to say, one should not need much "brain thinking", because "the brains of humans are weak".
However, I agree with Emerton that this is a very personal feeling.
-
Can you give a reference for the quote? – Jonah Sinick Feb 11 2010 at 5:10
1
Found it ("The Unravelers: Mathematical Snapshots" edited by Jean-Francois Dars and Annick Lesne) – Jonah Sinick Feb 11 2010 at 19:42
Presumably, Kontsevich is referencing the fact that Deligne used a "trick" to prove the Weil conjectures. Kontsevich is presumably talking about the Grothendieck standard conjectures on algebraic cycles, which would allow us to "realize the dream of motives".
I see no way to "moralize" Deligne's proof, because as I said, it relies on a "trick" which circumvents the hard parts of the standard conjectures.
-
1
=P Can't vote me down anymore. Oh well. It's quite funny that my answer was essentially the same as Emerton's, and it was posted first. I included fewer details, but that doesn't mean my answer deserves a vote down. – Harry Gindi Feb 10 2010 at 5:39
25
Dear Harry: This koan, from the Jargon File, is one of my favorites. "A novice was trying to fix a broken Lisp machine by turning the power off and on. Knight, seeing what the student was doing, spoke sternly: "You cannot fix a machine by just power-cycling it with no understanding of what is going wrong." Knight turned the machine off and on. The machine worked." – Tom Church Feb 10 2010 at 9:11
This is both an answer and a question:
As part of a response to a previous question of mine, David Speyer wrote that:
... it is known how to adapt Weil's proof of the Riemann hypothesis to higher dimensional S, if one had an analogue of the Hodge index theorem for $S \times S$ in characteristic p. I've been told that a good reference for this is Kleiman's Algebraic Cycles and the Weil Conjectures...
So perhaps a "moral" proof would require a Hodge index theorem in characteristic p.
However, David later writes that Grothendieck's standard conjectures assert that the Hodge theorem holds. So is this possible proof the same as "Grothendieck's envisaged" one?
-
1
Yes, I think so. Grothendieck's standard conjectures incorporate certain "positivity" results about cohomology and cup product which suffice to imply RH. You could look at Serre's letter to Weil, in which he proves a version of RH for endomorphisms of a complex projective variety via Hodge theory, to get a sense of how such things might work. And then Kleiman's article gives the details, I believe. – Emerton Feb 10 2010 at 20:28
|
http://math.stackexchange.com/questions/98106/inequality-for-modulus
|
# Inequality for modulus
Let $a$ and $b$ be complex numbers with modulus $< 1$. How can I prove that $\left | \frac{a-b}{1-\bar{a}b} \right |<1$ ? Thank you
-
Is it homework? What did you try? Where are you stuck? – Davide Giraudo Jan 11 '12 at 8:56
Not exactly homework. It is an oral question given at the entry test to the French school Ecole Polytechnique. – user20010 Jan 11 '12 at 10:04
## 1 Answer
Here are some hints: Calculate $|a-b|^2$ and $|1-\overline{a}b|^2$ using the formula $|z|^2=z\overline{z}$. To show that $\displaystyle\left | \frac{a-b}{1-\bar{a}b} \right |<1$, it's equivalent to show that $$\tag{1}|1-\overline{a}b|^2-|a-b|^2>0.$$ To show $(1)$, you need to use the fact that $|a|<1$ and $|b|<1$.
If you need more help, I can give you more details.
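For completeness, here is how the algebra in the hint works out (my own filling-in, not part of the original answer): using $|z|^2=z\overline{z}$,
$$|1-\overline{a}b|^2-|a-b|^2=(1-\overline{a}b)(1-a\overline{b})-(a-b)(\overline{a}-\overline{b})=1-|a|^2-|b|^2+|a|^2|b|^2=(1-|a|^2)(1-|b|^2),$$
which is positive because $|a|<1$ and $|b|<1$.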
-
http://mathoverflow.net/questions/67960/cycle-of-length-4-in-an-undirected-graph/68020
## Cycle of length 4 in an undirected graph
Can anyone give me a hint for an algorithm to find a simple cycle of length 4 (that is, 4 edges and 4 vertices) in an undirected graph, given as an adjacency list? It needs to use $O(v^3)$ operations ($v$ is the number of vertices), and I'm pretty sure that it can be done with some kind of BFS or DFS.
The algorithm only has to show that there is such a cycle, not where it is.
-
## 3 Answers
Oh, and there is another way, with the BFS you mentioned. Iteratively, do a BFS from each node. By slightly modifying the BFS algorithm, you can, instead of computing the distances from your source vertex to every other vertex, record the number of shortest paths from the source to each vertex.
If there is a vertex at distance two which has at least 2 shortest paths to the source vertex, you have found your $C_4$. That's $O(n^3)$.
-
I guess that's what I was looking for. Thank you! – unknown (google) Jun 16 2011 at 16:38
Let's assume your vertices are labeled from 1 to $n$ and your adjacency list has the form $(u_1,v_1), (u_2,v_2),..., (u_E,v_E)$, where $1 \le u_i < v_i \le n$ for $1 \le i \le E$. Note that $E$, the number of edges, is $O(n^2)$.
Start with a preprocessing step that converts the adjacency list to a list of neighbor sets $N_i$, one for each $i$ between 1 and $n$: For each $k$ from 1 to $E$, put $u_k$ in set $N_{v_k}$ and $v_k$ in set $N_{u_k}$. (Sorry, those sub-subscripts don't look right.) This takes $O(n^2)$ steps.
Now go through the list of pairs $i,j$ with $1 \le i < j \le n$. For each pair, find the intersection $N_i \cap N_j$, and count its size. If you find a pair $i,j$ for which $|N_i \cap N_j| > 1$, you've found your 4-cycle: vertices $i$ and $j$ are each joined to two other vertices. (Neither $i$ nor $j$ is in $N_i \cap N_j$, since $k \notin N_k$ for any $k$.) The computation for each pair can be done in $O(n)$ steps, and there are $O(n^2)$ pairs, so the total computation takes $O(n^3)$ steps.
(Let me elaborate on why the computation of $|N_i \cap N_j|$ is $O(n)$. At worst, you can convert each neighborhood set into a 0--1 vector of dimension $n$ and then take the dot product of the two vectors.)
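For illustration only (my own addition, not part of the answer), here is a minimal Python sketch of this procedure; the input format is the adjacency list described above, with vertices labelled 1 to $n$:

```python
def has_c4(n, edge_list):
    """n: number of vertices (labelled 1..n); edge_list: pairs (u, v) with u < v."""
    # Preprocessing: build the neighbour sets N_i in O(n^2) steps.
    nbrs = [set() for _ in range(n + 1)]
    for u, v in edge_list:
        nbrs[u].add(v)
        nbrs[v].add(u)
    # For every pair i < j, a common neighbourhood of size > 1 closes a 4-cycle.
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            if len(nbrs[i] & nbrs[j]) > 1:
                return True
    return False
```

Each set intersection costs $O(n)$ and there are $O(n^2)$ pairs, matching the $O(n^3)$ bound above.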
It might be of interest to ask a follow-up: Given an adjacency list of $E$ edges for a graph on $n$ vertices, can you detect the presence of a 4-cycle in $O(nE)$ steps?
-
@Barry, welcome to MO! – Gerry Myerson Jun 17 2011 at 5:33
Build a graph $G'$ on $|V(G)|$ elements, and keep it warm.
Then, for any vertex $v$ of your graph $G$, add to $G'$ an edge for each of the $\binom {|N_G(v)|} {2}$ pairs of vertices at distance 1 from $v$. If at some point you try to create an edge that has been created before, you have found a $C_4$.
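A short Python sketch of this marking idea (my own illustration, not part of the answer; `adj` is assumed to map each vertex of $G$ to its list of neighbours):

```python
def has_c4_marking(adj):
    """adj: dict mapping each vertex of G to an iterable of its neighbours."""
    seen = set()                      # edges of the auxiliary graph G'
    for v in adj:
        nb = list(adj[v])
        # every pair of neighbours of v becomes an edge of G'
        for i in range(len(nb)):
            for j in range(i + 1, len(nb)):
                pair = frozenset((nb[i], nb[j]))
                if pair in seen:      # created twice: two vertices share two neighbours
                    return True
                seen.add(pair)
    return False
```

At most $\binom n 2$ pairs can be inserted before a repetition occurs, which is where the $O(n^2)$ bound mentioned in the comment below comes from.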
-
btw, it seems to run in $O(n^2)$, as you can create at most $\binom n 2$ edges in $G'$. – Nathann Cohen Jun 16 2011 at 16:20
+1 for "keep it warm" – Hans Stricker Jun 16 2011 at 17:25
http://physics.stackexchange.com/questions/44382/first-order-phase-transition-in-a-classical-system/45129
# First order phase transition in a classical system
I've never liked discontinuous quantities in classical physics, so I find the discontinuity in heat capacity weird.
My question is, do first order phase transitions ever really exist? Or are our discontinuous experimental $C_v$ vs. $T$ graphs just really steep curves? Discontinuous for all practical and theoretical purposes.
I understand that the theoretical model of a pure component system has a first order phase transition - and therefore a discontinuity in $C_p$. But it's theoretically impossible to have a pure component (the chemical potential for any impurity in $A$ is infinite for pure $A$). So effectively we only ever have multicomponent systems.
-
– Arnold Neumaier Nov 26 '12 at 16:44
## 3 Answers
In classical thermodynamics (the best verified physical theory we have), phase transitions are true discontinuities, obtained from a continuous thermodynamic potential by taking derivatives (or even second derivatives for heat capacities) at points where these do not exist.
Thus there is nothing weird about them. It is no more weird than that the derivative of the absolute value function has a jump at zero.
First order phase transitions also appear in impure materials, though at slightly different volume, pressure and temperature than in the pure case.
Note that the discontinuities are inherited from statistical mechanics (Lee-Yang theorem).
Note that thermodynamics applies to matter regarded as a continuum. Once one looks at atoms or molecules inside a system one has entered the realm of statistical mechanics, a game with different rules. In general, as one looks at any physical system in more detail, the previously useful description starts to become approximate if not invalid. In particular, questions about continuity or differentiability lose their meaning (or regain it in a very different way).
[Edit2] Traditional thermodynamics in its usual axiomatic form (e.g., Callen) is valid for finitely extended matter. From a microscopic point of view, thermodynamics is usually justified in terms of statistical mechanics, based on the thermodynamic limit of infinite volume. But this is necessary only if the derivation is based on the microcanonical or canonical ensemble. In the grand canonical ensemble, thermodynamics follows without a thermodynamic limit (see Chapter 9 of my book http://lanl.arxiv.org/abs/0810.1019). In the original derivation of the Lee-Yang theorem (for ferromagnetism), there is no phase transition without a thermodynamic limit, as the number of particles in an Ising system is bounded. However, real fluids treated at the most basic level, relativistic quantum field theory, have no upper bound on the number of particles, so that the original Lee-Yang statement no longer applies.
-
Very interesting Arnold. My question, I guess, is more about the seam between theory and reality. I can imagine a theoretical system having a discontinuity at its $n$th derivative, just like I can imagine a set of (classical) atoms standing perfectly still and achieving 0K. What happens when I add features to the model? Let's say I have a finite set of (classical) atoms at 0K in a rigid box larger than their crystal volume. I start to heat the atoms. Will the jump in Cv be perfectly discontinuous? Won't vacancies (holes) enter the solid and act like a "second component"? – user1512321 Nov 19 '12 at 16:59
@user1512321: Holes or other impurities act like an admixture of a second substance, if they can be treated collectively. But discontinuity is a matter of how closely you describe a system. The separating surface between two phases is a manifold in thermodynamics but gets increasingly rough as you magnify it, until the manifold description loses its meaning. See also the edit to my answer. – Arnold Neumaier Nov 19 '12 at 17:19
Thank you. Sorry for being nitpicky, but I've always liked the "seams" problems in science. – user1512321 Nov 19 '12 at 17:29
1
This is wrong. First-order phase transitions, in the sense of being discontinuous curves, only exist if you take the limit of a system of infinite size. Real systems that we measure are not of infinite size, they are merely very large indeed, and consequently any $C_V$ vs $T$ curve for a real system is not discontinuous but just very steep, as the OP suggests. – Nathaniel Nov 26 '12 at 11:40
1
@Nathaniel: Even continuity is an idealization. Without idealization, there is ''really'' only a physics we do not understand (including quantum gravity), and all our current physics is only for all practical purposes. The standard model holds only for practical purposes, and so does statistical mechanics. Questions about what is ''really'' the case become therefore unanswerable, unless one settles on a particular basis. The notion of a phase transition is well-defined only in thermodynamics, hence this is the correct basis. I had already mentioned in my answer what happens on lower levels. – Arnold Neumaier Nov 26 '12 at 16:31
In a sense you are right. The heat capacity only becomes discontinuous for a system of infinite extent. For all others it is continuous.
But that's a theoretical concern only. At the theoretical point of discontinuity the slope of the heat capacity is infinite. Pick any finite value for the slope, no matter how large and I can find a finite system where the slope is larger than that.
As for "impurities", there is no problem handling multicomponent systems. Chemists do it all the time. The results are only slightly more complex than for single component systems and my remarks about the phase transition above are still true.
-
I think you understood my question. I know that one can always get a system with a steeper slope, just like you can (theoretically) get arbitrarily close to 0K, but not 0K. – user1512321 Nov 19 '12 at 16:45
"My question is, do first order phase transitions ever really exist?"
Yes, they certainly do. Most phase transitions are first order. I would not estimate the percentage with confidence, but my feeling is that more than 90% of all phase transitions are of the first order. That is the answer from the experimental point of view.
From the theoretical point of view, I cannot see why the fact that the body has a finite size should lead you to conclude that its heat capacity must be continuous. The body in thermodynamics is indeed treated as finite, as soon as one of the thermodynamic variables is, say, the volume or the number of particles. Let me just remind you that the two phases, 1 and 2, may be characterized by their free energies $F_1$ and $F_2$. As soon as the free energy has the variables $V$, $T$ and $N$, it describes a body of finite volume. The free energies under discussion are two different functions; this is important to understand. In some cases they may differ only slightly, but there are certainly transitions where they differ essentially from one another; it depends upon the transition under study. At the transition point, however, the free energies of the phases are equal, $F_1=F_2$, but their derivatives are not. There is nothing strange in the second derivatives of these two different functions being unequal; on the contrary, it would not be natural to expect them to be equal.
There is a different source of perplexity in the case of first order transitions. Each experiment is always performed over a certain time, while the transition itself has its own characteristic time, or a few characteristic times. If the dynamics of the transition are slow, the duration of the experiment may not be enough for relaxation. This is often the case if the transition is diffusive, but it may occur in other cases as well. This kinetics may "wash out" the curve $C_v=C_v(T)$ so that it seems continuous. This, however, is only a kinetic effect.
-
http://mathhelpforum.com/calculus/156977-question-about-differentiation-tangents-quadratics-stuff-print.html
# Question about differentiation and tangents and quadratics and stuff... :)
• September 21st 2010, 02:54 PM
jgv115
Question about differentiation and tangents and quadratics and stuff... :)
I've got a basic understanding of differentiation and of working out the equation of the tangent at a point of a parabola, etc.
I don't know how to do this question though:
For the curve $y=ax^2+bx+c$, where a,b and c are constants, it is given that at the points (2,12) and (-1,0) the slope of the tangent is 7 and 1 respectively. Find a, b and c.
How do I go about starting this question??
• September 21st 2010, 02:58 PM
skeeter
Quote:
Originally Posted by jgv115
I've got a basic understanding of differentiation and of working out the equation of the tangent at a point of a parabola, etc.
I don't know how to do this question though:
For the curve $y=ax^2+bx+c$, where a,b and c are constants, it is given that at the points (2,12) and (-1,0) the slope of the tangent is 7 and 1 respectively. Find a, b and c.
How do I go about starting this question??
you are given that $y(2) = 12$, $y(-1) = 0$, $y'(2) = 7$, and $y'(-1) = 1$.
If $y(x) = ax^2 + bx + c$, then $y'(x) = 2ax + b$.
set up some equations in terms of a, b, and c and solve.
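For illustration (my own addition, not part of skeeter's reply), the four conditions form a small linear system that can be solved numerically:

```python
import numpy as np

# Rows encode y(2) = 12, y(-1) = 0, y'(2) = 7, y'(-1) = 1
# for y(x) = a x^2 + b x + c and y'(x) = 2 a x + b.
A = np.array([[4.0, 2.0, 1.0],     # 4a + 2b + c = 12
              [1.0, -1.0, 1.0],    #  a -  b + c = 0
              [4.0, 1.0, 0.0],     # 4a +  b     = 7
              [-2.0, 1.0, 0.0]])   # -2a +  b    = 1
rhs = np.array([12.0, 0.0, 7.0, 1.0])

# Four equations in three unknowns; the system is consistent,
# so least squares recovers the exact solution.
(a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
print(a, b, c)   # 1.0, 3.0, 2.0, i.e. y = x^2 + 3x + 2
```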
http://nrich.maths.org/6484
# Normal Intersection
##### Stage: 5 Challenge Level:
Imagine that I plot the pdfs for two normal distributions on the same axes. Could I choose the parameters so that the curves intersect $0$, $1$ or $2$ times?
Imagine that I plot the cdfs for two normal distributions on the same axes. Could I choose the parameters so that the curves intersect $0$, $1$ or $2$ times?
Imagine that I plotted the pdf and the cdf of normal distribution on the same axes. Would they always, sometimes or never intersect?
Give examples or convincing arguments to support your reasoning. You might want to experiment with specific choices of parameters before attempting to construct more general arguments.
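One way to experiment numerically before constructing general arguments (a rough sketch of my own, assuming NumPy and SciPy are available; counting sign changes on a grid is only a heuristic):

```python
import numpy as np
from scipy.stats import norm

def crossings(f_vals, g_vals):
    """Rough count of intersections: sign changes of the difference on a grid."""
    d = f_vals - g_vals
    return int(np.sum(np.sign(d[:-1]) != np.sign(d[1:])))

x = np.linspace(-10, 10, 2001)
A = norm(loc=0, scale=1)          # try varying these parameters
B = norm(loc=1, scale=2)

print("pdf crossings:", crossings(A.pdf(x), B.pdf(x)))
print("cdf crossings:", crossings(A.cdf(x), B.cdf(x)))
```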
http://mathoverflow.net/questions/105644/cancellation-theorem-for-lattices/105671
## Cancellation theorem for lattices
By a lattice, we mean a finitely generated, free $\mathbb{Z}$-module together with a symmetric bilinear form. Typical examples are the hyperbolic lattices $U$ and the root lattices $A_{n}, D_{n}, E_{n}$ associated to Dynkin matrices. In general we cannot say that for lattices $L,M$ and $N$ $$L\oplus M \cong L\oplus N \Longrightarrow M\cong N.$$ In other words, cancellation does not hold over $\mathbb{Z}$.
I wonder when this cancellation holds. Are there any criteria? I am particularly interested in the case $L=U$.
-
If $L \oplus M \cong L \oplus N$ then at least you can say that $M, N$ have the same theta function. – Qiaochu Yuan Aug 27 at 17:43
2
...assuming that $L,M,N$ are positive definite, which apparently was not the intention because the question indicates interest in the special case $L = U$. Indeed in that case all we can conclude is that $M,N$ are in the same genus, which in general does not imply $M \cong N$. For example: if $M = E_8^2$, and $N$ is the unimodular lattice of rank $16$ that contains $D_{16}$ with index $2$, then $U \oplus M \cong U \oplus N$ (both are the even unimodular lattice II$_{17,1}$). Likewise: $M = {\bf Z}^9$, $N = {\bf Z} \oplus E_8$, $U \oplus M \cong U \oplus N \cong {\rm I}_{10,1}$. – Noam D. Elkies Aug 27 at 18:32
## 2 Answers
Right. If $L = U$ is the lattice of the quadratic form $u(x,y) = 2 xy,$ and $M,N$ are positive definite, the conclusion is that $M,N$ are in the same genus. That is, they are rationally equivalent "without essential denominator." There is no complete proof printed in one place. I first saw this on page 378 of SPLAG by Conway and Sloane, first edition. The observation may be due to Conway. This is a small part of finding certain automorphism groups, and is first apparent in the articles on the automorphism group of the Leech Lattice. Anyway, click on my name and just go through my questions with promising titles. In a minute I will find the one with a sketch of a proof and put a link here.
Found it, http://mathoverflow.net/questions/70666/lorentzian-characterization-of-genus
I also checked with Wai Kiu Chan about the case of "odd" lattices such as the sum of squares; it turns out it does not matter, the outcome is the same.
Meanwhile, it is exactly this observation that allows one to conclude, given a positive "even" lattice with covering radius strictly below $\sqrt 2,$ such as $\mathbb E_8,$ that there is only one class in the genus, i.e. that your integral cancellation holds. See http://mathoverflow.net/questions/69444/a-priori-proof-that-covering-radius-strictly-less-than-sqrt-2-implies-class-nu
-
The genus of an even lattice is characterized by its signature and discriminant form. So if $M$ and $N$ are even, they belong to the same genus. Moreover, if an even lattice $K$ is non-degenerate and indefinite with $rank(K)>h(A_{K})+1$ (where $h(A_{K})$ is the number of minimal generators of the discriminant group $A_{K}$ of $K$), then the genus of $K$ consists of only one class. So if your $M$ (equivalently $N$) satisfies the condition above, the cancellation holds. The results above are proved in "Integral symmetric bilinear forms and some of their applications" by V.V. Nikulin. I don't think much is known about odd lattices.
-
Your second sentence needs "unimodular". – S. Carnahan♦ Aug 28 at 4:41
I deleted "unimodular" from the third sentence (otherwise the discriminant form is trivial). I don't think we need unimodularity anywhere. – Atsushi Kanazawa Aug 28 at 5:40
http://mathoverflow.net/questions/84458/do-spectra-have-diagonal-maps/84686
## do spectra have diagonal maps?
Topological spaces have diagonal maps $X \rightarrow X \times X$ and $X \rightarrow X \wedge X$, and suspension spectra also have diagonal maps $\Sigma^\infty X \rightarrow \Sigma^\infty(X \wedge X) \cong (\Sigma^\infty X) \wedge (\Sigma^\infty X)$. What about general spectra? (i.e. symmetric spectra, S-modules, or any other convenient definition.) I always assumed you could, but I haven't thought through it carefully. And if not, can we still get a cup product on $E^*(X)$ when $E$ and $X$ are spectra?
-
DGAs also don't have diagonal maps; there is "an obvious" (naive) choice, but this is not a map in the appropriate category. It doesn't preserve the grading, for example. – Sean Tilson Dec 28 2011 at 19:40
## 4 Answers
No. Let $[X,A]$ be the set (in fact abelian group) of homotopy classes of maps from one spectrum to another. If $A$, $B$, and $C$ are spectra, any natural map of sets $[X,A]\times [X,B]\to [X,C]$ is induced by a map $A\times B\to C$. Since $A\times B=A\coprod B$, this amounts to two maps $A\to C$ and $B\to C$ inducing two homomorphisms $[X,A]\to [X,C]$ and $[X,B]\to [X,C]$ which are then added to give one homomorphism $[X,A]\times [X,B]=[X,A]\oplus [X,B] \to [X,C]$. This map cannot be distributive over addition except by being identically zero.
EDIT Taking $A$, $B$, and $C$ to be Eilenberg-MacLane spectra, this rules out nontrivial natural bilinear products on ordinary cohomology of spectra. More generally it rules out such products on generalized cohomology of spectra. And it also rules out any nontrivial natural map $X\to X\wedge X$ because if $A\to A\wedge A$ were nontrivial then this would lead to a natural bilinear map $[X,A]\times [X,A]\to [X,A\wedge A]$ that (for example when $X=A$) is nontrivial.
-
1
So if I had a natural "diagonal" $X \rightarrow X \wedge X$ then it would give a bilinear pairing $[X,A] \times [X,B] \rightarrow [X,A \wedge B]$. By your argument, such a pairing must be zero. So the only candidate for $X \rightarrow X \wedge X$ is the zero map. Did I understand that correctly? – Cary Jan 1 2012 at 15:28
Yes. I have edited to clarify this. – Tom Goodwillie Jan 1 2012 at 15:42
Thank you, this was very helpful. – Cary Jan 7 2012 at 16:21
The smash product is not a categorical product, so you can't speak of a diagonal map, in the same way as you don't have a natural diagonal map $M\rightarrow M\otimes M$, for $M$ an abelian group or vector space. The analogy is very pertinent since the homotopy category of spectra with homotopy concentrated in dimension $0$ is equivalent to the category of abelian groups, and if we restrict to abelian groups which are $\mathbb{Q}$-vector spaces, the smash product corresponds to the tensor product.
BTW, suspension spectra of based spaces (as you seem to consider) do not have a diagonal map either. You have a diagonal map if you consider suspension spectra of unbased spaces, since you need to add an outer base point first, and this operation takes products to smash products.
-
3
In based spaces, $X\wedge X$ is a quotient of $X\times X$, so we do have a diagonal, namely the composite of the diagonal $X\to X\times X$ and the quotient map $X\times X\to X\wedge X$. Of course, it is not a categorical diagonal, but it is often useful. Since the suspension spectrum functor commutes with smash products, this does give suspension spectra a diagonal. It is used all the time in duality theory. A relevant conceptual point is that, in spectra, the canonical map $X\wedge X\to X\times X$ is an equivalence, just as in Abelian categories. – Peter May Dec 28 2011 at 18:26
2
By $X\wedge X\to X\times X$ Peter presumably meant $X\vee X\to X\times X$. – Tom Goodwillie Dec 28 2011 at 19:51
3
Thanks, Tom, of course that is another typo. It would be nice if comments could also be edited. – Peter May Dec 28 2011 at 20:54
4
So, suspension spectra are commutative coalgebras in spectra. Is there a converse? Is the homotopy theory of $E_\infty$ coalgebras in spectra the homotopy theory of $HZ$ local spaces? – Jeff Smith Dec 29 2011 at 1:09
1
Jeff, there certainly seem to be coalgebras that are not bounded below: start with a commutative differential graded coalgebra over $\mathbb Q$. – Tom Goodwillie Jan 2 2012 at 1:12
The existence of an $E_\infty$-diagonal is an obstruction for equipping a spectrum $E$ with the structure of a suspension spectrum. Conversely, in
Klein, J.R.: Moduli of suspension spectra. Trans. Amer. Math. Soc. 357 (2005), 489–507
I showed that the existence of a suitably defined notion of $A_\infty$-diagonal on $E$ is equivalent to equipping $E$ with the structure of a suspension spectrum provided we are in the metastable range. Here "metastable" means $E$ is $r$-connected (for $r \ge 1$) and is weakly equivalent to a cell spectrum of dimension $\le 3r+2$.
There are various elementary ways of defining the notion of $A_\infty$-diagonal, but in the end they amount to the existence of a map $\delta: E \to (E\wedge E)^{\Bbb Z_2}$ (for a suitably defined version of the smash product), which is a homotopy section to the map $(E\wedge E)^{\Bbb Z_2} \to E$ given by passing from categorical to geometric fixed points. The way I do this in the paper is to use the second stage of the Taylor tower of the functor $E \mapsto \Sigma^\infty \Omega^\infty E$; this second stage turns out to be a model for $(E\wedge E)^{\Bbb Z_2}$.
-
The answer is no in general. So if $E$ is a ring spectrum, $E^\ast(X)$ need not be a ring, unless $X$ is a suspension spectrum. It is only a module over $E^\ast(pt)$. The ring structure on $E$ only gives external multiplications:
$E^\ast(X)\otimes_{E^\ast} E^\ast(Y)\to E^\ast(X\wedge Y)$
$E_\ast(X)\otimes_{E_\ast} E_\ast(Y)\to E_\ast(X\wedge Y)$
In the case that $X$ is also a ring spectrum, then the homology $E_\ast(X)$ becomes a ring, while the cohomology $E^\ast(X)$ will be a coalgebra under the assumption that $E^\ast(X\wedge X)\cong E^\ast(X)\otimes_{E^\ast}E^\ast(X)$.
-
http://mathoverflow.net/questions/95937/copositive-matrix
Copositive matrix?
I want to check under what conditions a matrix of the form $\alpha J -Q$ is copositive, where $J$ is the all-ones matrix and $Q$ is doubly nonnegative (i.e. entrywise nonnegative and positive semidefinite). Talking about theory here, not an algorithm. I scanned the literature but didn't find anything to build on. Do you have suggestions?
-
3
I believe the answer is that this is true if and only if $\alpha$ is at least the largest diagonal entry of $Q$, and this holds just under the assumption that $Q$ is nonnegative definite. This follows from the following two facts: (i) a matrix $A$ is copositive if and only if $x^T A x \geq 0$ for all $x$ in the simplex $S_n$. (ii) if $Q$ is nonnegative definite, then $x^T Q x$ is convex and $\max_{x \in S_n} x^T Q x$ is achieved at the extreme points $e_i$ of the simplex. – alex o. May 4 2012 at 5:55
Alex, thanks for the answer! Can you give a reference for fact (ii) - it seems true to me but I can't nail it? Thanks again! – Felix Goldberg May 4 2012 at 9:14
I've now proved Alex's statement to myself, deriving it from the general Motzkin-Straus theorem. But would still love to get a textbook reference. Thanks once again, Alex. – Felix Goldberg May 4 2012 at 10:15
1
Felix, if $Q$ is nonnegative definite, then it has a symmetric square root $L$ and $x^T Q x = ||Lx||_2^2$, so $x^T Q x$ is the composition of a linear function ($x \rightarrow Lx$) and a convex function ($x \rightarrow ||x||_2^2$). Consequently, it is convex. Next, for every convex function $f$, we have that by definition of convexity, $f(\sum_i \alpha_i x_i) \leq \sum_i \alpha_i f(x_i) \leq \max_i f(x_i)$, where $\alpha_i$ are nonnegative and add up to $1$. Finally, every point in the simplex is a convex combination of the vectors $e_i$. – alex o. May 4 2012 at 18:56
Thanks, this is a nice argument. – Felix Goldberg May 6 2012 at 23:29
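To illustrate the criterion from the comments numerically (my own sketch, not part of the thread), one can compare the diagonal test with a brute-force check on sampled points of the simplex:

```python
import numpy as np

def criterion(alpha, Q):
    """alex o.'s condition: for positive semidefinite Q, alpha*J - Q is copositive
    iff alpha is at least the largest diagonal entry of Q."""
    return alpha >= np.max(np.diag(Q))

def sampled_copositivity(alpha, Q, trials=20000, seed=0):
    """Heuristic check of x^T (alpha*J - Q) x >= 0 at the simplex vertices
    and at random points of the simplex (a sanity check, not a proof)."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    M = alpha * np.ones((n, n)) - Q
    points = list(np.eye(n)) + [rng.dirichlet(np.ones(n)) for _ in range(trials)]
    return all(x @ M @ x >= -1e-10 for x in points)

# Example with a random doubly nonnegative Q.
rng = np.random.default_rng(1)
B = rng.random((4, 4))
Q = B @ B.T                        # entrywise nonnegative and positive semidefinite
for alpha in (0.5 * np.max(np.diag(Q)), 1.5 * np.max(np.diag(Q))):
    print(alpha, criterion(alpha, Q), sampled_copositivity(alpha, Q))
```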
http://mathhelpforum.com/advanced-applied-math/38420-projectile-energy.html
Thread:
1. projectile + energy
A shell of mass 50 kg is fired at 60 degrees to the horizontal with a speed of 200 m/s. Neglecting air resistance, what is the energy of the shell in joules at its highest point?
help me with this question!
2. Originally Posted by power0
A shell of mass 50 kg is fired at 60 degrees to the horizontal with a speed of 200 m/s. Neglecting air resistance, what is the energy of the shell in joules at its highest point?
help me with this question!
By conservation of energy, it should be the same as the initial energy of the projectile, so the answer is $\frac12 m v^2 = \frac12 \cdot 50 \cdot (200)^2 = 10^6\ \mathrm{J} = 1\ \mathrm{MJ}$.
I am happy 7 is the largest font size
3. Originally Posted by power0
A shell of mass 50 kg is fired at 60 degrees to the horizontal with a speed of 200 m/s. Neglecting air resistance, what is the energy of the shell in joules at its highest point?
help me with this question!
In the future, please don't yell when asking a question. We can read regular sized font quite nicely actually.
-Dan
http://www.math.uni-bielefeld.de/sfb701/projects/view/16
Faculty of Mathematics
Collaborative Research Centre 701
Spectral Structures and Topological Methods in Mathematics
# Project C2
## Linear algebraic groups over arbitrary fields
Principal Investigator(s) Other Investigators
## Summary:
The theory of semisimple linear algebraic groups is well known, up to the so-called anisotropic groups. Examples of anisotropic groups are given by the compact real Lie groups, which are relatively well known. But there are many such groups in more general situations whose properties are totally unknown. In fact it is true that all semisimple linear groups are derived by specialization from their anisotropic forms.
This project proposes to develop methods in order to classify these anisotropic groups and to get information about their internal structure. Tools are obtained from Galois cohomology, from generic splitting techniques and from the techniques used for the so-called underlying 'related structures' of linear groups, like quadratic and Hermitian forms, Azumaya, Lie, and Jordan algebras.
Conversely, knowledge about those structures can be obtained from knowledge of these groups. Linear algebraic groups and their underlying structures always have been and still are of importance in many areas of mathematics and other sciences.
## Recent Preprints:
| Preprint | Title | PDF | PS.GZ |
|----------|-------|-----|-------|
| 08127 | Linear Algebraic Groups and K-Theory | PDF | PS.GZ |
| 08126 | On bilinear forms of height 2 and degree 1 or 2 in characteristic 2 | PDF | PS.GZ |
| 08125 | Symbols and cyclicity of algebras after a scalar extension | PDF | PS.GZ |
| 08122 | On the basic correspondance of a splitting variety | PDF | PS.GZ |
| 08054 | Local-global principles for embedding of fields with involution into simple algebras with involution | PDF | PS.GZ |
| 08033 | Gersten resolutions with supports | PDF | PS.GZ |
| 06050 | Coincidence site modules in 3-space | PDF | PS.GZ |
| 06049 | Motivic splitting lemma | PDF | PS.GZ |
http://mathoverflow.net/questions/1890/describe-a-topic-in-one-sentence/43878
## Describe a topic in one sentence. [closed]
When you study a topic for the first time, it can be difficult to pick up the motivations and to understand where everything is going. Once you have some experience, however, you get that good high-level view (sometimes!) What I'm looking for are good one-sentence descriptions about a topic that deliver the (or one of the) main punchlines for that topic.
For example, when I look back at linear algebra, the punchline I take away is "Any nice function you can come up with is linear." After all, multilinear functions, symmetric functions, and alternating functions are essentially just linear functions on a different vector space. Another big punchline is "Avoid bases whenever possible."
What other punchlines can you deliver for various topics/fields?
-
11
This is a very good question, but to be useful and not just fun one should look critically at many of the answers below. – Gil Kalai Nov 8 2009 at 7:54
7
Gil, I am very skeptical about the value of this question. I don't think many of the answers given are that useful, because one won't get the punchlines unless one has acquired experience in the subject (and then, why would you need the punchline?). – Todd Trimble May 20 2011 at 13:27
1
@Todd: to get fodder for a cocktail party level conversation.... – S. Sra Aug 28 at 14:32
2
@Suvrit: I guess it would be more of a "Big-Bang-Theory"-kind of party ;-) – vonjd Oct 7 at 18:37
## 50 Answers
Etale cohomology - you can apply fixed-point theorems from algebraic topology to Galois actions on varieties.
-
Nonlinear optimization: Newton's method beats everything else (when it works); when it doesn't, do something that looks like Newton's method.
-
Four-Dimensional Smooth Manifolds: Whitney's trick gone wrong.
-
Linear algebra: everything can be explained by a linear system.
-
4
explained, or approximated? – Colin Tan May 20 2011 at 10:11
Navier-Stokes Equations: Energy estimates and more energy estimates.
*I suppose this goes for most non-linear PDEs
-
I think this belongs on this list too:
The theory of groups is a branch of mathematics in which one does something to something and then compares the results with the result of doing the same thing to something else, or something else to the same thing. – James Newman
-
Representation theory of compact groups: The representation theory is the same as for finite groups, only that there might be infinitely many isomorphism classes of irreducible representations.
(That's the Peter-Weyl Theorem!)
Perhaps it would be a much better question to interpret a well-known theorem in one sentence!
-
1
Aren't there always infinitely many isoclasses in the infinite case? – Mariano Suárez-Alvarez Aug 2 2011 at 17:11
1
Of course, $L^2(G)$ should be an infinite-dimensional Hilbert space if $G$ is not finite. Hence Peter-Weyl tells you that this is indeed so, but finite groups are compact, so I do not see a wrong statement in my answer. Btw, a much more interesting question: does this imply that every infinite compact group has infinitely many conjugacy classes? – Marc Palm Aug 2 2011 at 22:14
• Generating functions are the 19th Century analog of addressable memory.
-
Algebraic geometry is the study of the intrinsic properties of any mathematical object which can be locally described by polynomial equations.
Or
Algebraic geometry is not about solving systems of polynomial equations, rather it's about studying the intrinsic properties thereof.
-
Set theory without choice: You have no choice, but to wonder...
Forcing: If it doesn't not fit, force it.
Large cardinals: "If you want more you have to assume more." (Dana Scott)
-
Additive combinatorics: Any two attempts to define what it means for a finite set to be `additively structured' will be approximately equivalent.
-
Morse Theory: opus dynamicum maxime.
-
Harmonic analysis: The integral operator with the kernel (blank space to fill in) is bounded from (blank space to fill in) to (blank space to fill in).
(communicated by Mark Rudelson)
-
Another favorite of mine …
• Redundancy is the essence of information.
-
Dirichlet forms: a symmetric Markov process is a self-adjoint operator is a closed symmetric form is a Markovian semigroup.
(I've left out a lot of hypotheses, but the essence is that all these are in correspondence, and the properties of any one appear in the others.)
-
Number Theory : Arithmetic properties (such as number of rational solutions) of geometric objects (such as elliptic curves) are often reflected in analytical functions (such as L-functions) associated to those objects i.e. geometry reveals its arithmetic analytically.
-
Probability/Statistical mechanics:
Take a probabilistic model (possibly complicated, involving huge state space, describing a complex system) and rescale it suitably, such that in the limit a simpler "macroscopic" object emerges;
if the latter is still random it's a central limit theorem, if it's deterministic it's a law of large numbers, if you look at fluctuations from the latter it's large deviations; if it is largely independent of the details of the starting probabilistic model, you have a universality phenomenon (and are happy because when modelling your real system you were forced to add some assumptions just for mathematical comfort); if it changes qualitatively when playing with a parameter of the original model you have a phase transition and want to know the critical values of the parameter.
-
Geometric representation theory: keep translating the problem until you run into Hard Lefschetz, then you are done.
-
Linear Algebra is the correct generalization of dimension. (This came from Hubbard)
-
18
I thought $K$-theory was! – Mariano Suárez-Alvarez Dec 13 2009 at 4:15
QFT — every expression converges after a Wick rotation.
-
7
Wick rotation isn't what leads to convergence. A better sentence might be "Large size asymptotics of the moments of regularized path integrals are independent of the choice of regularization." – userN Oct 24 2009 at 15:07
http://math.stackexchange.com/questions/87505/equation-for-a-straight-line-in-cartesian-space/87625
# Equation for a straight line in Cartesian space
I am trying to create a straight line in Cartesian space made from two points that have (x,y,z) coordinates. This is for making a robot arm move in a straight line: I would input two points and the math would give me back a certain number of points that create a straight line from point A to point B. The robot arm moves on a circular base.
I think I should use the equation of a line: $ax + by + c = 0$.
But I'm not exactly sure how I would get the intermediate points from that and if it works for 3 dimensions. If this is too vague let me know and I can further clarify.
-
1
Just so you know, what you are describing is actually Cartesian/Euclidean space, not the Cartesian plane. It is a subtle difference, but an important one. Your equation $ax+by+c=0$ correctly describes a line in the Cartesian plane, but not so in Cartesian/Euclidean space. – Michael Boratko Dec 1 '11 at 21:51
## 3 Answers
The equation you gave, $ax+by+c=0$, is the equation for a line in two dimensions. In three dimensions, you can define a line by a point and a vector. This is not in contradiction with the idea that two points determine a line, as obviously there is a vector between two points and therefore by specifying two points you have also specified a point and a vector. This is probably all too pedantic to be worthwhile discussing further, so let's move on to the specific useful example.
Let one point be defined as $P=(x_p,y_p,z_p)$, and another point be defined as $Q=(x_q,y_q,z_q)$. Then we can define the line $L$ as follows:
$$L=\{P+t(Q-P)\}$$ where $t$ is any real number. To dissect this a little bit, this shows us that $P$ is in the set (for $t=0$), and $Q$ is in the set (for $t=1$). By allowing $t$ to be any real number, we are scaling the vector between $P$ and $Q$, and this is what gives us the whole line.
Now, we consider any arbitrary point on the line, which we will define as $(x,y,z)$. If this point is on the line, then it is in the set $L$ and so we must have $$\begin{align}x=&x_p+t(x_q-x_p)\\ y=&y_p+t(y_q-y_p)\\ z=&z_p+t(z_q-z_p) \end{align}$$ These equations are parametric, that is they depend on a parameter $t$, but such equations are necessary in order to describe a line in three dimensions. Obviously, if the line happened to be in one of the planes, say the $xy$ plane, then $z_p=z_q=0$, and so the equations could be solved for $t$. Substituting, you would get the familiar two-dimensional equation for a line, however in general the best you can do is solve for $t$ in the above equations and reduce the three equations to two.
It is also common to write the so-called "symmetric" equations for a line in three dimensions by solving for $t$ and setting all three equal to each other, like so: $$\frac{x-x_p}{x_q-x_p}=\frac{y-y_p}{y_q-y_p}=\frac{z-z_p}{z_q-z_p}$$
Unfortunately, I don't think these sorts of equations will help with your robot project, but perhaps the discussion will engender some good ideas. You can take a look at Paul's Online Notes for more discussion about how lines are represented in three dimensions. He doesn't follow exactly the same approach as I have outlined here, but it is very similar.
You mentioned that the robot arm is on a circular base, and as such it might be advantageous to look into using Spherical Coordinates (which may simplify many of the computational aspects of movement).
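Since the original question was about generating intermediate waypoints for the robot arm, here is a small Python sketch of sampling the parametric form above (my own addition; the function name and the NumPy dependency are assumptions, not something from the thread):

```python
import numpy as np

def line_points(p, q, n_points):
    """Return n_points evenly spaced points on the segment from p to q (inclusive)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    t = np.linspace(0.0, 1.0, n_points)      # parameter t in [0, 1]
    return p + t[:, None] * (q - p)           # each row is P + t*(Q - P)

# Example: 5 waypoints between two (x, y, z) positions of the arm's end effector.
print(line_points((0, 0, 0), (1, 2, 3), 5))
```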
-
Thank you, this was almost verbatim what my professor explained to me to do after meeting with him. – Nick Dec 2 '11 at 0:09
A parameterized formula for a line through 2 points $P_1=(x_1,y_1,z_1)$ and $P_2=(x_2,y_2,z_2)$:
$$\begin{cases} x=x_1+t\cdot(x_2-x_1) \\ y=y_1+t\cdot(y_2-y_1) \\ z=z_1+t\cdot(z_2-z_1) \end{cases}$$
Vary $t$ to get a sample of points, where $t\in[0,1]$ gives points on the segment $[P_1,P_2]$.
You can also use the weighted average of the 2 points:
$$\begin{cases} x=t\cdot x_1+(1-t)\cdot x_2 \\ y=t\cdot y_1+(1-t)\cdot y_2 \\ z=t\cdot z_1+(1-t)\cdot z_2 \end{cases}$$
Again, $t\in[0,1]$ gives points on the segment $[P_1,P_2]$.
You can also eliminate $t$ to get 2 equations (describing the line as the intersection of 2 planes).
-
You might also want to look up the Peaucellier–Lipkin linkage, which converts circular motion to rectilinear motion.
-
http://mathoverflow.net/questions/69031/tensor-algebra-question-closed
Tensor algebra question [closed]
1) Why does an embedding of (not necessarily finite-dimensional) vector spaces $V\rightarrow W$ produce an embedding of tensor algebras $T(V)\rightarrow T(W)$? I can prove it using a Hamel basis in $W$, but is there a nicer (more functorial) argument? 2) How does one prove the same statement for modules over an algebra instead of vector spaces?
-
A better place to ask this would have been math.stackexchange.com, by the way; see the FAQ for details on the reason. – Mariano Suárez-Alvarez Jun 28 2011 at 16:58
1
Even for modules, the ultimate reason is the universal property of T(V), as a limit object. From this the structure of functoriality and adjunction immediately follows, according to general categorical facts. A nice reading is, of course, Mac Lane's Categories for the Working Mathematician, in particular, the chapter about adjoint functors. – Pietro Majer Jun 28 2011 at 17:06
Do you really mean "embedding" or just a morphism? In the case of modules, only split monomorphisms are mapped to split monomorphisms (in fact, via any functor), but $T(-)$ does not preserve monomorphisms in general. – Martin Brandenburg Jun 28 2011 at 17:25
Martin, by embedding I mean injective map, not just a morphism. Thank you for the answer. – MathAndMe Jun 28 2011 at 17:40
2
Martin, by the way, can you give an easy example of an injective map of modules over a ring such that the corresponding map of tensor algebras is not injective? – MathAndMe Jun 28 2011 at 17:55
2 Answers
If $V$ is a subspace of $W$, consider the inclusion $f:V\to W$ and any map $g:W\to V$ such that $g\circ f=1_V$; to construct $g$, you need to use bases or something equivalent, for it does not exist over, say, a general ring...
Now $T(-)$ is a functor, so $T(g)\circ T(f)=T(1_V)=1_{T(V)}$. It follows that the map $T(f):T(V)\to T(W)$ is injective.
-
Yes, that's what I had in mind, I was asking to see if it is inavoidable. Thank you! – MathAndMe Jun 28 2011 at 17:42
Let me give another answer to 1).
In general, given a linear mapping $$\phi \colon E \to F$$ it extends uniquely to a homomorphism $$T(\phi) \colon T(E) \to T(F).$$ The proof can be made coordinate-free, in fact it follows from the universal property of $T(E)$ applied to the map $$\eta \colon E \to T(F),$$ where $\eta=i \circ \phi$ and $i \colon F \to T(F)$ is the natural embedding.
By construction it follows
$$T(\phi)(x_1 \otimes \ldots \otimes x_p)=\phi x_1 \otimes \ldots \otimes \phi x_p.$$
If $\psi \colon F \to G$ is another linear map one obtains $$T(\psi \circ \phi)=T(\psi) \circ T(\phi),$$ hence $T(\phi)$ is injective [resp. surjective] whenever $\phi$ is injective [resp. surjective].
For more details, see for instance [Greub, Multilinear Algebra, Chapter III].
-
1
"By construction it follows" - no it doesn't. – darij grinberg Jun 28 2011 at 20:03
Why not? Given an arbitrary associative algebra $A$ with unit element $e$, and a homomorphism $\eta \colon E \to A$, by the universal property of $T(E)$ there exists a unique homomorphism $h \colon T(E) \to A$ such that $h(1)=e$ and which extends $\eta$; this is given precisely by $h(x_1 \otimes \ldots \otimes x_p)=\eta x_1 \ldots \eta x_p$ (see Greub's book). Now apply this result with $A=T(F)$ and $\eta=i \circ \phi$. Am I missing something? – Francesco Polizzi Jun 28 2011 at 21:48
@darij: Certainly, if you believe that $\phi$ extends to a unique (ring) homomorphism, then it must be the one Francesco gives on pure tensors, as that certainly is a homomorphism extending $\phi$, isn't it? The part that I don't see how it follows is the injectivity/surjectivity. Over any ring, the unique homomorphism extending $\phi$ is the Francesco's, but Martin says that $T$ does not preserve monomorphisms in general. – Theo Johnson-Freyd Jun 29 2011 at 1:02
@Theo: Over a field, the injectivity/surjectivity follows from the same functorial argument as in Mariano's answer. I have edited the post to make this clearer – Francesco Polizzi Jun 29 2011 at 7:45
http://mathhelpforum.com/algebra/22968-distance-formula-roots.html
# Thread:
1. ## Distance Formula with Roots
Hey guys! I'm having trouble with some problems involving the distance formula and I am thoroughly stumped, so I kindly ask for help please. I'm fine finding distances without roots, but when roots come in I get lost.
*I don't know how to do the square root symbol on here, so I will just put a "*" before the square roots.
1) (0, -*3) and (*5, 0)
Square Roots: negative square root of 3 (the negative is on the outside), and square root of 5
2) (3*3, *5) and (-*3, 3*5)
Square Roots: (3 square root of 3, square root of 5) and (negative square root of 3, 3 square root of 5) Sorry about the lack of square root symbol! Thanks!
2. for the first one you're trying to find the distance between the points $\left( 0,-\sqrt{3} \right)$ and $\left( \sqrt{5},0 \right)$.
All you have to do is use Pythagoras' theorem: you form a right-angled triangle with sides of length $\sqrt{3}$ and $\sqrt{5}$ and find the length of the hypotenuse (and hence the distance between the points)
$d^2 = (\sqrt{3})^2 + (\sqrt{5})^2$ so $d^2 = 3 + 5$
$d^2 = 8$
$d= 2\sqrt{2}$
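The same arithmetic is easy to sanity-check numerically; here is a small Python sketch (my own addition, not from the thread) for both problems:

```python
import math

def dist(p, q):
    """Distance between two points in the plane."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# 1) (0, -sqrt(3)) and (sqrt(5), 0)
print(dist((0.0, -math.sqrt(3)), (math.sqrt(5), 0.0)), 2 * math.sqrt(2))  # both ~2.8284

# 2) (3*sqrt(3), sqrt(5)) and (-sqrt(3), 3*sqrt(5)):
#    dx = 4*sqrt(3), dy = -2*sqrt(5), so d = sqrt(48 + 20) = sqrt(68) = 2*sqrt(17)
print(dist((3 * math.sqrt(3), math.sqrt(5)), (-math.sqrt(3), 3 * math.sqrt(5))),
      math.sqrt(68))                                                       # both ~8.2462
```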
http://mathoverflow.net/revisions/18388/list
There is a series of papers by Philippe Gaucher on the arXiv that deal with model categories in the context of theoretical computer science. E.g.:
• Abstract homotopical methods for theoretical computer science (0707.1449)
The purpose of this paper is to collect the homotopical methods used in the development of the theory of flows initialized by the author's paper "A model category for the homotopy theory of concurrency".
• A model category for the homotopy theory of concurrency (math/0308054)
We construct a cofibrantly generated model structure on the category of flows such that any flow is fibrant and such that two cofibrant flows are homotopy equivalent for this model structure if and only if they are S-homotopy equivalent. This result provides an interpretation of the notion of S-homotopy equivalence in the framework of model categories.
I guess it is just because of my ignorance, but to me this was unexpected.
http://mathhelpforum.com/differential-geometry/199767-modulus-continuity-continuous.html
# Thread:
1. ## Modulus of continuity is continuous?
Hey,
I'm currently reading a book on convergence of probability measures, and there is a property that they assert without too many details that I can't manage to work out for myself.
To put you in context, we're in the space $C[0,1]$ of functions $f:[0,1]\to\mathbb{R}$ that are continuous with respect to the standard euclidean metric $d(x,y)=|x-y|$, and we define the metric on $C[0,1]$ to be $\rho(f,g)=\sup_{t\in[0,1]}|f(t)-g(t)|$.
For every $\delta>0$ and $f\in C[0,1]$, define the modulus of continuity $w(f,\delta)$ as
$w(f,\delta)=\sup_{|x-y|<\delta}|f(x)-f(y)|$
In the book, they say that for any fixed $\delta>0$, the function $w(\cdot,\delta):C[0,1]\to\mathbb{R}$ is continuous. Their only argument is that
for any $f,g\in C[0,1]$, we have $|w(f,\delta)-w(g,\delta)|\leq 2\rho(f,g)$. It's easy to see how this implies continuity, but I can't manage to show this inequality myself. Any help or hints would be greatly appreciated.
2. ## Re: Modulus of continuity is continuous?
By the triangle inequality we have $|f(x)-f(y)|\leq |f(x)-g(x)|+|g(x)-g(y)|+|g(y)-f(y)|\leq 2\rho(f,g)+|g(x)-g(y)|$; then take the supremum over $|x-y|<\delta$ to get $w(f,\delta)\leq 2\rho(f,g)+w(g,\delta)$. Switch the roles of $f$ and $g$ to get what you want.
P.S. Which book are you studying?
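The inequality is also easy to check numerically on sample functions; a minimal sketch (my own addition, assuming functions sampled on a uniform grid of $[0,1]$):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2001)
f = np.sin(5 * t)
g = np.sin(5 * t) + 0.1 * t**2

def modulus(h, delta):
    # sup of |h(x) - h(y)| over grid points with |x - y| < delta
    steps = int(delta / (t[1] - t[0]))
    return max(np.max(np.abs(h[j:] - h[:-j])) for j in range(1, steps + 1))

delta = 0.05
rho = np.max(np.abs(f - g))                                    # sup-norm distance rho(f, g)
print(abs(modulus(f, delta) - modulus(g, delta)) <= 2 * rho)   # True
```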
3. ## Re: Modulus of continuity is continuous?
Thanks for the answer, I'll try this out.
I'm studying Convergence of Probability Measures 2nd edition by Patrick Billingsley, I'm trying to gain a better understanding of Donsker's Theorem, which is basically a version of the Central Limit Theorem for random variables taking values in C[0,1].
http://mathhelpforum.com/differential-geometry/23010-differential-geometry-question.html
# Thread:
1. ## Differential geometry question
Hi,
I have lots of theory to work off but no examples, so I can't figure out
what I'm doing for this question.
Consider a unit-radius cylinder with centre running along the x-axis
a) write down a coordinate patch which covers this cylinder
b) compute the shape operater
c) find the principal curvatures and vectors
d) find the Gaussian and mean curvatures
I know one of the coordinate patches we have studied is the Monge patch. How do I know if that is a viable option here?
If it isn't, I have a lemma that states that for a coordinate patch $\Phi$ and a function $f:M\to\mathbb{R}$ we have $\Phi_u(f)=\partial f/\partial u$ and $\Phi_v(f)=\partial f/\partial v$. Does this mean I need to write a function for the cylinder and differentiate it to get a patch?
The shape operator is $S_p(v)=-\nabla_v\, n$, where $n(p)=\dfrac{\Phi_u(p)\times\Phi_v(p)}{\|\Phi_u(p)\times\Phi_v(p)\|}$, but as I cannot work out what the $\Phi$ is supposed to be, I am completely stuck at the moment and can work no further.
2. Let's see...
For a coordinate patch try $C: \Phi(u,v)=(v,\cos u,\sin u)$, $u\in[0,2\pi]$, $v\in\mathbb{R}$. Now we can calculate the unit normal at the point $p=\Phi(u,v)$: since $\Phi_u\times\Phi_v=(0,\cos u,\sin u)$ already has unit length, $\eta(u,v)=\left(0,\cos u,\sin u\right)=(\eta_1(u,v),\eta_2(u,v),\eta_3(u,v))$.
For $w=(w_1,w_2)\in T_p(C)$, we have $S_p(w)=-\nabla_w(\eta)=-\left((d\eta_1)\Big{|}_{(u,v)}(w),(d\eta_2)\Big{|} _{(u,v)}(w),(d\eta_3)\Big{|}_{(u,v)}(w)\right)$
if my memory is not failing me again . For the first one say, we have
$(d\eta_1)\Big{|}_{(u,v)}(w)=\frac{\partial \eta_1}{\partial u}w_1+\frac{\partial \eta_1}{\partial v}w_2$, and so on.
The principal curvatures are just the eigenvalues of $S$.
For the Gaussian and mean curvature, either use the known formulae that include the principal curvatures, or play smart and notice that the cylinder and the plane are isometric, so...
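If you want to check the computation by machine, here is a rough symbolic sketch (my own addition, assuming SymPy and the patch $\Phi(u,v)=(v,\cos u,\sin u)$; the signs of the curvatures depend on the choice of normal):

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
Phi = sp.Matrix([v, sp.cos(u), sp.sin(u)])    # the patch from above

Pu, Pv = Phi.diff(u), Phi.diff(v)
n = Pu.cross(Pv)
n = sp.simplify(n / sp.sqrt(n.dot(n)))        # unit normal: (0, cos u, sin u)

# First (E, F, G) and second (L, M, N) fundamental form coefficients
E, F, G = Pu.dot(Pu), Pu.dot(Pv), Pv.dot(Pv)
L = Phi.diff(u).diff(u).dot(n)
M = Phi.diff(u).diff(v).dot(n)
N = Phi.diff(v).diff(v).dot(n)

# Shape operator in the (u, v) basis, its eigenvalues, and the curvatures
S = sp.simplify(sp.Matrix([[E, F], [F, G]]).inv() * sp.Matrix([[L, M], [M, N]]))
print(S)                                       # [[-1, 0], [0, 0]] for this choice of normal
print(S.eigenvals())                           # principal curvatures: -1 and 0
print(S.det(), sp.Rational(1, 2) * S.trace())  # Gaussian K = 0, mean H = -1/2
```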
3. Ok so I can work out the actual question when given the patch but how do I pick a coordinate patch?
4. By force or by experience.
For the cylinder, it is quite easy, as two variables are always on a circle for fixed height. So you get polar coordinates to express these.
Why don't you try giving parametrizations for the sphere and the torus? Should be good practice.
http://en.m.wikibooks.org/wiki/Control_Systems/Gain
# Control Systems/Gain
## What is Gain?
Gain is a proportional value that shows the relationship between the magnitude of the input and the magnitude of the output signal at steady state. Many systems contain a method by which the gain can be altered, providing more or less "power" to the system. However, increasing gain or decreasing gain beyond a particular safety zone can cause the system to become unstable.
Consider the given second-order system:
$T(s) = \frac{1}{s^2 + 2s + 1}$
We can include an arbitrary gain term, K in this system that will represent an amplification, or a power increase:
$T(s) = K\frac{1}{s^2 + 2s + 1}$
In a state-space system, the gain term k can be inserted as follows:
$x'(t) = Ax(t) + kBu(t)$
$y(t) = Cx(t) + kDu(t)$
The gain term can also be inserted into other places in the system, and in those cases the equations will be slightly different.
### Example: Gain
Here are some good examples of arbitrary gain values being used in physical systems:
Volume Knob
On your stereo there is a volume knob that controls the gain of your amplifier circuit. Higher levels of volume (turning the volume "up") corresponds to higher amplification of the sound signal.
Gas Pedal
The gas pedal in your car is an example of gain. Pressing harder on the gas pedal causes the engine to receive more gas, and causes the engine to output higher RPMs.
Brightness Buttons
Most computer monitors come with brightness buttons that control how bright the screen image is. More brightness causes more power to be output to the screen.
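As a rough numerical illustration (my own addition, not from the book, assuming SciPy), the gain $K$ in $T(s) = K/(s^2+2s+1)$ simply scales the steady-state step response:

```python
from scipy import signal

# Sketch: step responses of T(s) = K / (s^2 + 2s + 1) for a few gains K.
for K in (1.0, 5.0, 20.0):
    sys = signal.TransferFunction([K], [1.0, 2.0, 1.0])
    t, y = signal.step(sys)
    print(f"K = {K:5.1f}  steady-state output ~ {y[-1]:.2f}")  # scales with K
```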
## Responses to Gain
As the gain to a system increases, generally the rise-time decreases, the percent overshoot increases, and the settling time increases. However, these relationships are not always the same. A critically damped system, for example, may decrease in rise time while not experiencing any effects of percent overshoot or settling time.
## Gain and Stability
If the gain increases to a high enough extent, some systems can become unstable. We will examine this effect in the chapter on Root Locus.
### Conditional Stability
Systems that are stable for some gain values, and unstable for other values are called conditionally stable systems. The stability is conditional upon the value of the gain, and oftentimes the threshold where the system becomes unstable is important to find.
http://quant.stackexchange.com/questions/7461/improving-garch-modeling-approach/7463
# Improving GARCH modeling approach
Modeling Exchange Rate Using GARCH
Let's consider the following exchange rate : USD/JPY
For each sequence, we consider changes in the daily difference between the highest price and the open price of the underlying exchange rates.
Thus, if:
• $O(t)$ is the open price of the underlying exchange rate at time $t$, and
• $H(t)$ is the highest price of the underlying at time $t$,
we transform the sequence as follows:
$$Y(t) = \log \frac{H(t)-O(t)}{H(t-1)-O(t-1)}$$
GARCH Model is frequently used to model changes in the variance of $Y(t)$, and I suggest to investigate in this way.
It is commonly known that GARCH models are appropriate for modeling time series that exhibit a heavily-tailed distribution and display some degree of serial correlation.
So as a preliminary we must verify that the sequence $Y(t)$ is in fact heavy-tailed and does indeed exhibit serial correlation.
Empirical Sequence
1. I computed: skewness = 0.11 and kurtosis = 3.9. Test OK.
2. I plotted the ACF & PACF: evidence of serial correlation & long-term dependence in the sequence.
GARCH model "OK"
GARCH Modelling
1. I fit a GARCH(1,1) / GARCH(1,2) / GARCH(1,2) to the sequence to obtain parameters.
2. Ljung-Box: only GARCH(1,1) & GARCH(1,2) succeed.
3. I simulated one realization of $Y$ and compared the simulated sequence to the original one.
4. The result does not seem to capture the salient features of the empirical sequence.
Do you see any way to improve the methodology and my results?
Thanks.
I've reworked your question to have formatting (Markdown and $\LaTeX$), plus I corrected some spelling and grammar. – chrisaycock♦ Mar 7 at 18:36
Thanks for your work chris – user1673806 Mar 7 at 18:43
## 1 Answer
I think there is some room for improvement here.
# 1. GARCH
GARCH models are appropriate for modeling time series that exhibit a heavily-tailed distribution and display some degree of serial correlation.
That's not the case. GARCH is used for modelling series where there is serial correlation in variance, not in actual observations. And heavy tails are just incidental, and could indicate any number of things that have nothing to do with GARCH.
As you may recall, the model for GARCH(N,M) is
$$Y(t) = \mu + \sigma(t) \varepsilon(t)$$ where $\varepsilon(t)$ are i.i.d. (usually $N(0,1)$), the residual is $a(t) = Y(t) - \mu = \sigma(t)\varepsilon(t)$, and $$\begin{aligned} \sigma^2(t) = \omega & + \alpha_1 a^2(t-1) + \ldots + \alpha_N a^2(t-N) \\ & + \beta_1\sigma^2(t-1) + \ldots + \beta_M\sigma^2(t-M) \end{aligned}$$
So to test for appropriatness of using GARCH, you should check for variability and serial correlation of squares of residuals $(Y(t) - \mu)^2$, not of the time series itself. This can be done by e.g. comparing variances of $(Y(t) - \mu)$ on different subsets of samples and testing the hypothesis that they are different. Alternatively, this can be done by plotting ACF and PACF of $(Y(t) - \mu)^2$, though as far as I remember (and I don't remember this well), there may be some quirks there. But before you do that, check the next section:
# 2. Serial Correlation
I plotted ACF & PACF : evidence of serial correlation & long term dependence among sequence
Your ACF and PACF showed you serial dependence in $Y(t)$. This suggests that the first thing you should do is make sure that the series is stationary by applying one of the stationarity tests and, if they show a lack of stationarity, apply a correction like differencing. Though given that you're working with daily differences between the high and the opening price, I would expect that the series is stationary.
If the series is stationary and still comes up with significant ACF/PACF results, you should try one of the ARMA(N,M) models which model serial dependence in the time series itself:
$$\begin{align} Y(t) = \mu + \epsilon(t) &+ \alpha_1 \epsilon(t-1) + \ldots + \alpha_N \epsilon(t-N)\\ &+ \beta_1 Y(t-1) + \ldots + \beta_M Y(t-M) \end{align}$$
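A minimal sketch of this workflow in Python (my own addition, assuming the `statsmodels` and `arch` packages; the series `y` below is placeholder data standing in for the actual returns):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox
from arch import arch_model

# y is assumed to be the (stationary) series Y(t) as a 1-d numpy array.
y = np.random.standard_t(df=5, size=1000)    # placeholder data for illustration only

# 1) Fit an ARMA(1,1) for the serial correlation in the level of the series.
arma_res = ARIMA(y, order=(1, 0, 1)).fit()

# 2) Test the squared residuals for remaining ARCH effects (variance clustering).
lb = acorr_ljungbox(arma_res.resid ** 2, lags=[10])
print(lb)                                    # small p-values suggest GARCH is warranted

# 3) If so, fit a combined AR(1)-GARCH(1,1) model.
garch_res = arch_model(y, mean="AR", lags=1, vol="Garch", p=1, q=1).fit(disp="off")
print(garch_res.summary())
```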
Thanks for your commentary. I want to make sure everything is clear for me. In part 2 "Serial Correlation", in the second paragraph about "ARMA", do you suggest to skip GARCH for ARMA or do you suggest to build a model with both? – user1673806 Mar 8 at 9:36
I'd say first give ARMA a try, and check if the residuals have variable variance. If they do, you can check whether a combined ARMA(N,M)-GARCH(K,L) model is a better fit. – ikh Mar 8 at 13:22
@ikh Nice comment. Definitely go for the AR(N) in the mean equation if there's first moment serial correlation. – Jase Mar 9 at 4:14
http://math.stackexchange.com/questions/160682/multivariable-limit-xy-lnxy/160684
# Multivariable limit $xy\ln(xy)$
Does anybody know how to prove that in $D=\{(x,y)\in\mathbb{R}^2:x>0\wedge y>0\}$ the following is true: $$\lim\limits_{(x,y)\to(0,0)}x\cdot y\cdot\ln{(x\cdot y)}=0$$ I have to find a $\delta$ so that if $\|(x,y)\|=\sqrt{x^2+y^2}<\delta$, that $|x\cdot y\cdot\ln{(x\cdot y)}|<\epsilon$ follows. But I don't know what to do, because the $\ln$ goes to minus infinity.
Can anybody solve this? Thank you!
Hint: $t \ln t \to 0$ as $t \to 0^+$. – Hans Lundmark Jun 20 '12 at 8:59
But how do I use the L'Hopital's rule in R^2? I thought it was only for normal limits, not for multivariable, because where to i have to differentiate to then? X or Y? – Carucel Jun 20 '12 at 9:01
@Carucel: Do you have to find $\delta$ which $\delta^{\delta}=e^{\epsilon}$? – Babak S. Jun 20 '12 at 9:03
## 1 Answer
The key is to treat $xy$ as one variable. Let $z = xy$. Hence, $$\lim_{(x,y) \to (0^+,0^+)} xy \ln(xy) = \underbrace{\lim_{z \to 0^+} z \ln(z) = -\lim_{t \to \infty} t e^{-t}}_{z = e^{-t}}$$ Note that $e^t \geq \dfrac{t^2}{2}$ for $t \geq 0$, since $\dfrac{t^2}{2}$ is a term in the Taylor series of $e^t$ and all the other terms are non-negative for $t > 0$.
Hence, $$t e^{-t} = \dfrac{t}{e^t} \leq \dfrac{t}{t^2/2} = \dfrac2t$$ Hence, $$0 \leq \lim_{t \to \infty} t e^{-t} \leq \lim_{t \to \infty} \dfrac2t = 0$$
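A quick numerical check (my own addition) confirms that $z\ln z \to 0$ as $z \to 0^+$:

```python
import numpy as np

# z*ln(z) for z = 0.1, 0.01, ..., 1e-7: the values shrink toward 0.
for z in 10.0 ** -np.arange(1, 8):
    print(f"z = {z:.0e}   z*ln(z) = {z * np.log(z): .3e}")
```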
It is the substitution xy=z which makes this limit a simple one. Ok, thank you very much! – Carucel Jun 20 '12 at 9:03
http://physics.stackexchange.com/questions/79/is-quantum-entanglement-mediated-by-an-interaction/34599
# Is quantum entanglement mediated by an interaction?
You can get two photons entangled, and send them off in different directions; this is what happens in EPR experiments. Is the entanglement then somehow affected if one puts a thick slab of EM shielding material between the entangled photons? Have such experiments been made?
According to EPR experiments measurements of the entangled states are at odds with SR, so based on that I'd assume the answer is "no"/"don't know", but any citations would be appreciated!
## 10 Answers
The standard test for whether two things are really entangled with one another in the spooky-action-at-a-distance sense of the EPR picture is to see whether measurements of the states of the two particles violate one of the Bell inequalities, meaning that the correlation between the states is stronger than can be explained by any local hidden variable theory. This has been done with lots of systems having significant separation between the particles-- as long ago as 1982, Alain Aspect's group did a test with time-varying detectors that were separated by 40 feet or so, and the results were something like nine standard deviations away from the LHV limit.
More recently, Chris Monroe's group at the University of Maryland has done experiments where they entangle the states of two ions in two different ion traps, and showed Bell violation by something like 3.5 standard deviations. I wrote this up on the blog a while back, and the post includes links to the relevant papers. I'm not sure whether there's a complete lack of a straight-line path between the ions, but they're in completely separate vacuum chambers (mostly stainless steel), so I think it's fairly likely to meet the requirements of the question.
Thanks, I'll check out the references. – mtrencseni Nov 16 '10 at 12:24
The original goal of the EPR paper was to show that quantum mechanics is incomplete, and hence that extra variables have to be added to complete it, contrary to what Cedric claims. The goal of EPR is to show that either nature is non-local (and thus in conflict with SR) or quantum mechanics is incomplete. Since Einstein was not ready to abandon locality and SR, he concluded that quantum mechanics is incomplete.
Later, however, John Bell would show that quantum mechanics is in fact non-local. To do that, he first devised an inequality that would have to be satisfied by any local physical theory. Then, he showed that this inequality is violated for certain entangled states, thereby proving that quantum mechanics is non-local. In the early 1980s Alain Aspect then performed experiments to check whether Bell's inequalities were violated in nature or not. There have been many similar experiments since then and they all point to nature being non-local and quantum mechanics being a good description of this non-locality.
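As a concrete illustration (my own addition, not part of the original answer), the CHSH form of Bell's inequality bounds any local hidden variable theory by 2, while the singlet-state correlation $E(a,b) = -\cos(a-b)$ reaches $2\sqrt{2}$ for suitably chosen angles:

```python
import numpy as np

def E(a, b):
    # Singlet-state correlation for analyzer angles a and b
    return -np.cos(a - b)

a1, a2 = 0.0, np.pi / 2                    # Alice's two settings
b1, b2 = np.pi / 4, 3 * np.pi / 4          # Bob's two settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S), 2 * np.sqrt(2))              # ~2.828, above the local bound of 2
```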
Now, there are possible loopholes in the experiments, which I won't discuss here.
One can also object that quantum mechanics described by Schrödinger's equation is not Lorentz-invariant, so we should not expect quantum mechanics to agree with SR.
What about equations which are Lorentz-invariant? Dirac equations, Klein-Gordon, etc... That's where it gets difficult. We know that a correct description of these equations requires quantum field theories. But we only manage to treat the field theories perturbatively. Other approaches are numerical or very limited. So I don't know of any detailed study within the context of quantum field theory of entanglement and Bell inequalities. But I hope someone can come in with more information about these. My knowledge is limited in these areas.
I think you answered your question.
"According to EPR experiments measurements of the entangled states are at odds with SR": if you mean that we cannot consider that the result of a measurement made on one entangled particle "propagates" to another one because this propagation would violate SR principles, you have to rules out a interaction in the sense of "strong, weak, ... interaction", ie an interaction not in violation with SR.
In addition, we do not need to have such an interaction as it is directly explained by the principles of quantum mechanics.
It is like imagining that an "interaction" teaches QM to the particles.
The main outcome of the treatments of the EPR paradox is to make hidden variables theories irrelevant, so basically "quantum mechanics" wins and we don't need other explanations.
What about his incorrect assumption "According to EPR experiments measurements of the entangled states are at odds with SR"? – Davorak Nov 2 '10 at 23:21
I misread it; I clarified my answer. – Cedric H. Nov 2 '10 at 23:23
The answer to the question depends a bit on what is meant by "mediated". A composite quantum system composed of two or more quantum subsystems can be in a quantum state in which the subsystems are entangled from the beginning, i.e., from the initial state. If the composite system evolves without any interaction among the subsystems, then the form and degree of entanglement between them will not change. If there is interaction, the entanglement will, usually, change in form and/or degree. In particular, if the initial state is not entangled, then subsequent interaction between the subsystems will entangle them. But this fact, per se, doesn't depend on the kind of interaction. Any interaction, whatsoever, between the subsystems will, at least for a while, entangle the initially non-entangled subsystems. One might well regard this as "mediation" of entanglement by interaction. But physicists, generally, do not think of entanglement, per se, as a dynamical feature of quantum mechanics. Rather, it is regarded as a kinematical/structural possibility of quantum states for composite systems which can be modified by the presence of interactions but is not, ultimately, due to interactions.
Let me add that entangled states are the most common states, by far. The unentangled states of composite systems (which are just the so-called product states) are much rarer by comparison.
Pretty sure that EPR does not state that entanglement is at odds with SR, and if it does, it is incorrect. The point of the EPR paper was that the consequences of entanglement were so strange they could not be real.
Experimental evidence however supports entanglement and has never shown any violation of SR.
Thanks for the answer. I'am aware that there are no violations, that's why I wrote "at odds with", but perhaps that wording is too strong too. – mtrencseni Nov 3 '10 at 8:38
Conclusion:
1. Definitely not a causal interaction (see Alain Aspect's delayed-choice experiments).
2. Rather, the amplitudes of causally separated particles merely remain correlated due to past common-origin events.
3. See Smerlak and Rovelli at http://arxiv.org/abs/quant-ph/0604064 .
See also Rovelli at http://arxiv.org/abs/quant-ph/9609002 for a coherent point of view, but it's somewhat subtle.
There is no effect of the one measurement event upon the other. It is not until the results from both measurements are brought together for comparison, and accumulated statistically, that it gets interesting.
The only interactions relevant to the entanglement are at the source, when the singlet spin system falls apart into two spin one particles (or whatever exactly it is you're doing) and again when the measurements are correlated in one place. The latter is not often mentioned in QM, but only in discussions of the philosophy of QM.
Too many writers assume something nonlocal is going on, or some superluminal effect occurs. The classic papers on the Bell experiments typically state that one (at least) of these has to go: locality, causality, realism. It's an open discussion even today in 2010, which one.
Personally, I like to toss out causality, and understand things according to Cramer's Transactional Interpretation -- http://mist.npl.washington.edu/npl/int_rep/tiqm/TI_toc.html -- but at the end of the day I really don't know any better than anyone esle.
No.
The more or less formal definition of interaction between two systems is that you have a system 1 with Hilbert space $\cal H_1$ and a system 2 with Hilbert space $\cal H_2$. If system 1 were in isolation from other systems, it would have the Hamiltonian $H_1$ to govern its time evolution. Likewise for system 2 and $H_2$. When the systems are considered as sub-parts of a combined system, the Hilbert space for the combined system is $\cal H_1 \otimes \cal H_2$. Suppose that this combined system is, for whatever reason, governed by the Hamiltonian $H_3$. The interaction term between the two systems is defined as being the difference between this actual Hamiltonian and the Hamiltonian which would have obtained if the sub-systems had no interaction, which would've been $H_1 \otimes I_2 + I_1\otimes H_2$ where $I_i$ is the identity operator on the Hilbert space $\cal H_i$. That is, the interaction term $H_{\mathrm{int}}$ is, by definition, equal to
$$H_3 -( H_1 \otimes I_2 + I_1\otimes H_2) .$$
The two subsystems can be entangled even if this interaction term is identically zero for all time from $- \infty$ to $\infty$. They could also be unentangled (i.e., in a separable state) even if this interaction term is quite violent or weird or whatever. Entanglement has nothing to do with interaction.
As a practical matter, if you want to prepare a state, you have to have some interaction between something and your system, and if you want to prepare an interesting entangled state, you have to use an interesting interaction. But this referes to the interaction between your preparation apparatus and the two sub-systems, not to any interaction between the sub-systems themselves.
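For two qubits this decomposition can be written out explicitly; here is a minimal numerical sketch (my own addition, with an arbitrarily chosen Ising-type coupling as the assumption):

```python
import numpy as np

sz = np.diag([1.0, -1.0])                  # Pauli sigma_z
I2 = np.eye(2)

H1 = 0.5 * sz                              # free Hamiltonian of subsystem 1
H2 = 0.3 * sz                              # free Hamiltonian of subsystem 2
g  = 0.1                                   # assumed coupling strength
H3 = np.kron(H1, I2) + np.kron(I2, H2) + g * np.kron(sz, sz)   # combined system

H_int = H3 - (np.kron(H1, I2) + np.kron(I2, H2))
print(np.allclose(H_int, g * np.kron(sz, sz)))   # True: what is left over is the coupling
```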
In a sense, entanglement acts as if spacetime doesn't exist - that's why we call it non-local. QM doesn't know about c. We could model this by calling for an extra dimension where the correlation resides, such that the two sides of the correlation are coincident in this extra dimension. But that's a purely speculative model I dreamed up.
I'm also not being strictly accurate when I say "spacetime", because time does enter in here in the sense that the first leg to be measured is the one which collapses the wavefunction (whatever that means) and destroys the entanglement. A recent paper by Wilczek & Shapere ( http://arxiv.org/abs/1208.3841 : Constraints on Chronologies, and an informal review here http://www.technologyreview.com/view/428962/special-relativity-and-the-curious-physics-of/) demonstrates the essential role of time ordering and simultaneity; it's a deep observation which (he says) will result in a further paper, because it's about how QM and SR/GR interact. David Albert's talk at the Aharonov 80th Honorary talks about the same sort of thing ( http://ibc.chapman.edu/Mediasite/Viewer/?peid=f9b9519414844b79b36ffda1240c65061d : David Albert: “Physics and narrative”).
Raskolnikov is the only one who seems to grasp the essence of Bell's theorem. Einstein's whole point in the EPR paper (which was actually not directly written by Einstein ... see his autobiographical notes where he summarizes the EPR argument in a terse way) was to add hidden variables to quantum mechanics to eradicate spooky action in the theory. His idea of a Bertlmann's socks explanation fails, as proven by Bell. Then, how can we explain how two entangled particles always coordinate their behavior? They must somehow communicate with each other.
http://en.wikipedia.org/wiki/Boy_or_Girl_paradox
# Boy or Girl paradox
The Boy or Girl paradox surrounds a well-known set of questions in probability theory which are also known as The Two Child Problem,[1] Mr. Smith's Children[2] and the Mrs. Smith Problem. The initial formulation of the question dates back to at least 1959, when Martin Gardner published one of the earliest variants of the paradox in Scientific American. Titled The Two Children Problem, he phrased the paradox as follows:
• Mr. Jones has two children. The older child is a girl. What is the probability that both children are girls?
• Mr. Smith has two children. At least one of them is a boy. What is the probability that both children are boys?
Gardner initially gave the answers 1/2 and 1/3, respectively; but later acknowledged that the second question was ambiguous.[1] Its answer could be 1/2, depending on how you found out that one child was a boy. The ambiguity, depending on the exact wording and possible assumptions, was confirmed by Bar-Hillel and Falk,[3] and Nickerson.[4]
Other variants of this question, with varying degrees of ambiguity, have been recently popularized by Ask Marilyn in Parade Magazine,[5] John Tierney of The New York Times,[6] and Leonard Mlodinow in Drunkard's Walk.[7] One scientific study[2] showed that when identical information was conveyed, but with different partially ambiguous wordings that emphasized different points, that the percentage of MBA students who answered 1/2 changed from 85% to 39%.
The paradox has frequently stimulated a great deal of controversy.[4] Many people argued strongly for both sides with a great deal of confidence, sometimes showing disdain for those who took the opposing view. The paradox stems from whether the problem setup is similar for the two questions.[2][7] The intuitive answer is 1/2.[2] This answer is intuitive if the question leads the reader to believe that there are two equally likely possibilities for the sex of the second child (i.e., boy and girl),[2][8] and that the probability of these outcomes is absolute, not conditional.[9]
## Common assumptions
The two possible answers share a number of assumptions. First, it is assumed that the space of all possible events can be easily enumerated, providing an extensional definition of outcomes: {BB, BG, GB, GG}.[10] This notation indicates that there are four possible combinations of children, labeling boys B and girls G, and using the first letter to represent the older child. Second, it is assumed that these outcomes are equally probable.[10] This implies the following model, a Bernoulli process with $p = 1/2$:
1. Each child is either male or female.
2. Each child has the same chance of being male as of being female.
3. The sex of each child is independent of the sex of the other.
In reality, this is a rather inaccurate model,[10] since it ignores (amongst other factors) the fact that the ratio of boys to girls is not exactly 50:50, the possibility of identical twins (who are always the same sex), and the possibility of an intersex child. However, this problem is about probability and not biology. The mathematical outcome would be the same if it were phrased in terms of a coin toss.
## First question
• Mr. Jones has two children. The older child is a girl. What is the probability that both children are girls?
Under the aforementioned assumptions, in this problem, a random family is selected. In this sample space, there are four equally probable events:
| Older child | Younger child |
|-------------|---------------|
| Girl | Girl |
| Girl | Boy |
| Boy | Girl |
| Boy | Boy |
Only two of these possible events meet the criteria specified in the question (e.g., GG, GB). Since both of the two possibilities in the new sample space {GG, GB} are equally likely, and only one of the two, GG, includes two girls, the probability that the younger child is also a girl is 1/2.
## Second question
• Mr. Smith has two children. At least one of them is a boy. What is the probability that both children are boys?
This question is identical to question one, except that instead of specifying that the older child is a boy, it is specified that at least one of them is a boy. In response to reader criticism of the question posed in 1959, Gardner agreed that a precise formulation of the question is critical to getting different answers for question 1 and 2. Specifically, Gardner argued that a "failure to specify the randomizing procedure" could lead readers to interpret the question in two distinct ways:
• From all families with two children, at least one of whom is a boy, a family is chosen at random. This would yield the answer of 1/3.
• From all families with two children, one child is selected at random, and the sex of that child is specified. This would yield an answer of 1/2.[3][4]
Grinstead and Snell argue that the question is ambiguous in much the same way Gardner did.[11]
For example, if you see the children in the garden, you may see a boy. The other child may be hidden behind a tree. In this case, the statement is equivalent to the second (the child that you can see is a boy). The first statement does not match as one case is one boy, one girl. Then the girl may be visible. (The first statement says that it can be either.)
While it is certainly true that every possible Mr. Smith has at least one boy - i.e., the condition is necessary - it is not clear that every Mr. Smith with at least one boy is intended. That is, the problem statement does not say that having a boy is a sufficient condition for Mr. Smith to be identified as having a boy this way.
Commenting on Gardner's version of the problem, Bar-Hillel and Falk [3] note that "Mr. Smith, unlike the reader, is presumably aware of the sex of both of his children when making this statement", i.e. that 'I have two children and at least one of them is a boy.' If it is further assumed that Mr Smith would report this fact if it were true then the correct answer is 1/3 as Gardner intended.
## Analysis of the ambiguity
If it is assumed that this information was obtained by looking at both children to see if there is at least one boy, the condition is both necessary and sufficient. Three of the four equally probable events for a two-child family in the sample space above meet the condition:
| Older child | Younger child |
|-------------|---------------|
| Girl | Girl |
| Girl | Boy |
| Boy | Girl |
| Boy | Boy |
Thus, if it is assumed that both children were considered while looking for a boy, the answer to question 2 is 1/3. However, if the family was first selected and then a random, true statement was made about the gender of one child (whether or not both were considered), the correct way to calculate the conditional probability is not to count the cases that match. Instead, one must add the probabilities that the condition will be satisfied in each case:[11]
| Older child | Younger child | P(this case) | P("at least one boy" given this case) | P(both this case, and "at least one boy") |
|---|---|---|---|---|
| Girl | Girl | 1/4 | 0 | 0 |
| Girl | Boy | 1/4 | 1/2 | 1/8 |
| Boy | Girl | 1/4 | 1/2 | 1/8 |
| Boy | Boy | 1/4 | 1 | 1/4 |
The answer is found by adding the numbers in the last column wherever you would have counted that case: (1/4)/(0+1/8+1/8+1/4)=1/2. Note that this is not necessarily the same as reporting the gender of a specific child, although doing so will produce the same result by a different calculation. For instance, if the younger child is picked, the calculation is (1/4)/(0+1/4+0+1/4)=1/2. In general, 1/2 is a better answer any time a Mr. Smith with a boy and a girl could have been identified as having at least one girl.
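The dependence on the selection procedure is easy to see in a quick simulation (my own addition, a sketch assuming the idealized 50:50 model above):

```python
import random

# Estimate P(both boys) under the two selection procedures.
random.seed(0)
N = 200_000
families = [(random.choice("BG"), random.choice("BG")) for _ in range(N)]

# Procedure A: keep every family that has at least one boy.
at_least_one_boy = [f for f in families if "B" in f]
print(sum(f == ("B", "B") for f in at_least_one_boy) / len(at_least_one_boy))  # ~1/3

# Procedure B: observe one child at random and keep the family if that child is a boy.
observed_boy = [f for f in families if f[random.randrange(2)] == "B"]
print(sum(f == ("B", "B") for f in observed_boy) / len(observed_boy))          # ~1/2
```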
## Bayesian analysis
Note: this section reverses the genders from the ones stated above.
Following classical probability arguments, we consider a large urn containing two children. We assume equal probability that either is a boy or a girl. The three discernible cases are thus:
1. both are girls (GG), with probability P(GG) = 0.25,
2. both are boys (BB), with probability P(BB) = 0.25, and
3. one of each (G.B), with probability P(G.B) = 0.50.
These are the prior probabilities.
Now we add the additional assumption that "at least one is a girl" = G. Using Bayes Theorem, we find
P(GG|G) = P(G|GG) * P(GG) / P(G) = 1 * 1/4 / 3/4 = 1/3.
where P(A|B) means "probability of A given B". P(G|GG) = probability of at least one girl given both are girls = 1. P(GG) = probability of both girls = 1/4 from the prior distribution. P(G) = probability of at least one being a girl, which includes cases GG and G.B = 1/4 + 1/2 = 3/4.
Note that, although the natural assumption seems to be a probability of 1/2 (so the derived value of 1/3 seems low), the actual prior value for P(GG) is 1/4, so the 1/3 is actually a bit higher.
The paradox arises because the second assumption is somewhat artificial, and when describing the problem in an actual setting things get a bit sticky. Just how do we know that "at least" one is a girl? One description of the problem states that we look into a window, see only one child and it is a girl. Sounds like the same assumption...but...this one is equivalent to "sampling" the distribution (i.e. removing one child from the urn, ascertaining that it is a girl, then replacing). Let's call the statement "the sample is a girl" proposition "g". Now we have:
P(GG|g) = P(g|GG) * P(GG) / P(g) = 1 * 1/4 / 1/2 = 1/2.
The difference here is the P(g), which is just the probability of drawing a girl from all possible cases (i.e. without the "at least"), which is clearly 0.5.
The Bayesian analysis generalizes easily to the case in which we relax the 50/50 population assumption. If we have no information about the populations then we assume a "flat prior", i.e. P(BB) = P(GG) = P(G.B) = 1/3. In this case the "at least" assumption produces the result P(GG|G) = 1/2, and the sampling assumption produces P(GG|g) = 2/3, a result also derivable from the Rule of Succession.
## Variants of the question
Following the popularization of the paradox by Gardner it has been presented and discussed in various forms. The first variant presented by Bar-Hillel & Falk [3] is worded as follows:
• Mr. Smith is the father of two. We meet him walking along the street with a young boy whom he proudly introduces as his son. What is the probability that Mr. Smith’s other child is also a boy?
Bar-Hillel & Falk use this variant to highlight the importance of considering the underlying assumptions. The intuitive answer is 1/2 and, when making the most natural assumptions, this is correct. However, someone may argue that “...before Mr. Smith identifies the boy as his son, we know only that he is either the father of two boys, BB, or of two girls, GG, or of one of each in either birth order, i.e., BG or GB. Assuming again independence and equiprobability, we begin with a probability of 1/4 that Smith is the father of two boys. Discovering that he has at least one boy rules out the event GG. Since the remaining three events were equiprobable, we obtain a probability of 1/3 for BB.”[3]
The natural assumption is that Mr. Smith selected the child companion at random. If so, as combination BB has twice the probability of either BG or GB of having resulted in the boy walking companion (and combination GG has zero probability, ruling it out), the union of events BG and GB becomes equiprobable with event BB, and so the chance that the other child is also a boy is 1/2. Bar-Hillel & Falk, however, suggest an alternative scenario. They imagine a culture in which boys are invariably chosen over girls as walking companions. In this case, the combinations of BB, BG and GB are assumed equally likely to have resulted in the boy walking companion, and thus the probability that the other child is also a boy is 1/3.
In 1991, Marilyn vos Savant responded to a reader who asked her to answer a variant of the Boy or Girl paradox that included beagles.[5] In 1996, she published the question again in a different form. The 1991 and 1996 questions, respectively were phrased:
• A shopkeeper says she has two new baby beagles to show you, but she doesn't know whether they're male, female, or a pair. You tell her that you want only a male, and she telephones the fellow who's giving them a bath. "Is at least one a male?" she asks him. "Yes!" she informs you with a smile. What is the probability that the other one is a male?
• Say that a woman and a man (who are unrelated) each has two children. We know that at least one of the woman's children is a boy and that the man's oldest child is a boy. Can you explain why the chances that the woman has two boys do not equal the chances that the man has two boys?
With regard to the second formulation Vos Savant gave the classic answer that the chances that the woman has two boys are about 1/3 whereas the chances that the man has two boys are about 1/2. In response to reader response that questioned her analysis vos Savant conducted a survey of readers with exactly two children, at least one of which is a boy. Of 17,946 responses, 35.9% reported two boys.[10]
Vos Savant's articles were discussed by Carlton and Stansfield[10] in a 2005 article in The American Statistician. The authors do not discuss the possible ambiguity in the question and conclude that her answer is correct from a mathematical perspective, given the assumptions that the likelihood of a child being a boy or girl is equal, and that the sex of the second child is independent of the first. With regard to her survey they say it "at least validates vos Savant’s correct assertion that the “chances” posed in the original question, though similar-sounding, are different, and that the first probability is certainly nearer to 1 in 3 than to 1 in 2."
Carlton and Stansfield go on to discuss the common assumptions in the Boy or Girl paradox. They demonstrate that in reality male children are actually more likely than female children, and that the sex of the second child is not independent of the sex of the first. The authors conclude that, although the assumptions of the question run counter to observations, the paradox still has pedagogical value, since it "illustrates one of the more intriguing applications of conditional probability."[10] Of course, the actual probability values do not matter; the purpose of the paradox is to demonstrate seemingly contradictory logic, not actual birth rates.
### Information about the child
Suppose we were told not only that Mr. Smith has two children, and one of them is a boy, but also that the boy was born on a Tuesday: does this change our previous analyses? Again, the answer depends on how this information comes to us - what kind of selection process brought us this knowledge.
Following the tradition of the problem, let us suppose that out there in the population of two-child families, the sex of the two children is independent of one another, equally likely boy or girl, and that each child is independently of the other children born on any of the seven days of the week, each with equal probability 1/7. In that case, the chance that a two child family consists of two boys, one (at least) born on a Tuesday, is equal to 1/4 (the probability of two boys) times one minus 6/7 squared = 1 - 36/49 = 13/49 (one minus the probability that neither child is born on a Tuesday). 1/4 times 13/49 equals 13/196.
The probability that a two child family consists of a boy and a girl, the boy born on a Tuesday, equals 2 (boy-girl or girl-boy) times 1/4 (the two specified sexes) times 1/7 (the boy born on Tuesday) = 1/14. Therefore, among all two child families with at least one boy born on a Tuesday, the fraction of families in which the other child is a girl is 1/14 divided by the sum of 1/14 plus 13/196 = 0.5185185.
It seems that we introduced quite irrelevant information, yet the probability of the sex of the other child has changed dramatically from what it was before (the chance the other child was a girl was 2/3, when we didn't know that the boy was born on Tuesday).
This is still a bit bigger than a half, but close! It is not difficult to check that as we specify more and more details about the boy child (for instance: born on January 1), the chance that the other child is a girl approaches one half.
However, is it really plausible that our two-child family with at least one boy born on a Tuesday was delivered to us by choosing just one of such families at random? It is much easier to imagine the following scenario. We know Mr. Smith has two children. We knock at his door and a boy comes and answers the door. We ask the boy on what day of the week he was born. Let's assume that which of the two children answers the door is determined by chance! Then the procedure was (1) pick a two-child family at random from all two-child families, (2) pick one of the two children at random, (3) see it's a boy and ask on what day he was born. The chance the other child is a girl is 1/2. This is a very different procedure from (1) picking a two-child family at random from all families with two children, at least one a boy, born on a Tuesday. The chance the family consists of a boy and a girl is 0.5185185...
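The 14/27 ≈ 0.5185 figure for the family-selection procedure can be checked by exact enumeration (my own addition, a sketch under the same idealized assumptions):

```python
from itertools import product
from fractions import Fraction

# Exact enumeration over sex x weekday for two children.
kids = list(product("BG", range(7)))                 # (sex, birth weekday 0..6)
families = list(product(kids, kids))                 # 196 equally likely ordered families

# Procedure (1): all families with at least one boy born on a Tuesday (day 1).
sel = [f for f in families if any(c == ("B", 1) for c in f)]
p_boy_girl = Fraction(sum(1 for f in sel if {f[0][0], f[1][0]} == {"B", "G"}), len(sel))
print(p_boy_girl, float(p_boy_girl))                 # 14/27 ~ 0.5185...
```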
This variant of the boy and girl problem is discussed on many recent internet blogs and is the subject of a paper by Ruma Falk, [1]. The moral of the story is that these probabilities don't just depend on the information we have in front of us, but on how we came by that information.
## Psychological investigation
From the position of statistical analysis the relevant question is often ambiguous and as such there is no "correct" answer. However, this does not exhaust the boy or girl paradox, for it is not necessarily the ambiguity that explains how the intuitive probability is derived. A survey such as vos Savant's suggests that the majority of people adopt an understanding of Gardner's problem that, if they were consistent, would lead them to the 1/3 probability answer, but overwhelmingly people intuitively arrive at the 1/2 probability answer. Ambiguity notwithstanding, this makes the problem of interest to psychological researchers who seek to understand how humans estimate probability.
Fox & Levav (2004) used the problem (called the Mr. Smith problem, credited to Gardner, but not worded exactly the same as Gardner's version) to test theories of how people estimate conditional probabilities.[2] In this study, the paradox was posed to participants in two ways:
• "Mr. Smith says: 'I have two children and at least one of them is a boy.' Given this information, what is the probability that the other child is a boy?"
• "Mr. Smith says: 'I have two children and it is not the case that they are both girls.' Given this information, what is the probability that both children are boys?"
The authors argue that the first formulation gives the reader the mistaken impression that there are two possible outcomes for the "other child",[2] whereas the second formulation gives the reader the impression that there are four possible outcomes, of which one has been rejected (resulting in 1/3 being the probability of both children being boys, as there are 3 remaining possible outcomes, only one of which is that both of the children are boys). The study found that 85% of participants answered 1/2 for the first formulation, while only 39% responded that way to the second formulation. The authors argued that the reason people respond differently to this question (along with other similar problems, such as the Monty Hall Problem and the Bertrand's box paradox) is because of the use of naive heuristics that fail to properly define the number of possible outcomes.[2]
## References
1. ^ a b Martin Gardner (1954). The Second Scientific American Book of Mathematical Puzzles and Diversions. Simon & Schuster. ISBN 978-0-226-28253-4.
2. Craig R. Fox & Jonathan Levav (2004). "Partition–Edit–Count: Naive Extensional Reasoning in Judgment of Conditional Probability". 133 (4): 626–642. doi:10.1037/0096-3445.133.4.626. PMID 15584810.
3. Maya Bar-Hillel and Ruma Falk (1982). "Some teasers concerning conditional probabilities". Cognition 11 (2): 109–122. doi:10.1016/0010-0277(82)90021-X. PMID 7198956.
4. ^ a b c Raymond S. Nickerson (May 2004). Cognition and Chance: The Psychology of Probabilistic Reasoning. Psychology Press. ISBN 0-8058-4899-1.
5. ^ a b Ask Marilyn. Parade Magazine. October 13, 1991; January 5, 1992; May 26, 1996; December 1, 1996; March 30, 1997; July 27, 1997; October 19, 1997.
6. Tierney, John (2008-04-10). "The psychology of getting suckered". The New York Times. Retrieved 24 February 2009.
7. ^ a b Leonard Mlodinow (2008). The Drunkard's Walk: How Randomness Rules our Lives. Pantheon. ISBN 0-375-42404-0.
8.
9. P.J. Laird et al. (1999). "Naive Probability: A Mental Model Theory of Extensional Reasoning". Psychological Review.
10. Matthew A. CARLTON and William D. STANSFIELD (2005). "Making Babies by the Flip of a Coin?". The American Statistician.
11. ^ a b
http://mathforum.org/mathimages/index.php?title=Divergence_Theorem&diff=33017&oldid=6605
# Divergence Theorem
### From Math Images
Image: Fountain Flux, created by Brendan John.
Field: Calculus
The water flowing out of a fountain demonstrates an important theorem for vector fields, the Divergence Theorem.
# Basic Description
Consider the top layer of the fountain pictured. The rate at which water flows out of the fountain's spout is directly related to the amount of water that flows off the top layer. Because water, unlike air, is not easily compressed, if more water is pumped out of the spout, then more water has to flow over the boundaries of the top layer. This is essentially what the Divergence Theorem states: the total fluid being introduced into a volume is equal to the total fluid flowing out of the boundary of the volume, provided the quantity of fluid in the volume is constant.
# A More Mathematical Explanation
Note: understanding of this explanation requires some multivariable calculus.
The Divergence Theorem in its pure form applies to Vector Fields. Flowing water can be considered a vector field because at each point the water has a velocity vector. Faster moving water is represented by a larger vector in our field. The divergence of a vector field at a point is represented by the net flow of water going outwards from that point. Analytically, the divergence of a field $F$ is expressed in partial derivatives:
• $\nabla\cdot\mathbf{F} =\partial{F_x}/\partial{x} + \partial{F_y}/\partial{y} + \partial{F_z}/\partial{z}$,
where $F _i$ is the component of $F$ in the $i$ direction. Intuitively, if F has a large positive rate of change in the x direction, the partial derivative with respect to x in this direction will be large, increasing total divergence.
The divergence theorem is formally stated as:
$\iiint\limits_V\left(\nabla\cdot\mathbf{F}\right)dV=\iint\limits_{\partial V}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\;\;\;\subset\!\supset \mathbf F\;\cdot\mathbf n\,{d}S .$
This theorem requires that we sum the divergence over an entire volume, as shown on the left side of the equation. If this sum is positive, the field must indicate some net movement out of the volume through its boundary, while if it is negative, the field must indicate some net movement into the volume through its boundary. We use the notion of flux, the flow through a surface, to quantify this movement through the boundary, which is itself a surface. The right side of the equation is the sum of the component of the field perpendicular to the volume's boundary, taken over the boundary, which is the total flux through the boundary.
Summing up the divergence over the entire volume means we sum the flow into or out of each infinitesimal subregion. A flow into one infinitesimal subregion means a flow out of an adjacent subregion, which affects the next adjacent subregion, and so on until the boundary of the entire volume is reached. The total sum of the divergence over the volume is thus equal to the flow at the boundary, as the theorem states.
A volume can be broken into infinitely small subregions, each of whose divergence affects the adjacent regions' divergence, up to the volume's boundary.
### Example of Divergence Theorem Verification
The following example verifies that given a volume and a vector field, the Divergence Theorem is valid.
Cutaway view of the cube used in the example. The purple lines are the vectors of the vector field F.
Consider the vector field $F = \begin{bmatrix} x^2 \\ 0\\ 0\\ \end{bmatrix}$.
For a volume, we will use a cube of edge length two, and vertices at (0,0,0), (2,0,0), (0,2,0), (0,0,2), (2,2,0), (2,0,2), (0,2,2), (2,2,2). This cube has a corner at the origin and all the points it contains are in positive regions.
• We begin by calculating the left side of the Divergence Theorem.
Step 1: Calculate the divergence of the field:
$\nabla\cdot F = 2x$
Step 2: Integrate the divergence of the field over the entire volume.
$\iiint\nabla\cdot F\,dV =\int_0^2\int_0^2\int_0^2 2x \, dxdydz$
$=\int_0^2\int_0^2 4\, dydz$
$=16$
• We now turn to the right side of the equation, the integral of flux.
Step 3: We first parametrize the parts of the surface which have non-zero flux.
Notice that the given vector field has vectors which extend only in the x-direction, since each vector has zero y and z components. Therefore, only two sides of our cube can have field vectors normal to them: the sides perpendicular to the x-axis. Furthermore, the side of the cube perpendicular to the x-axis whose points all satisfy x = 0 cannot have any flux, since all vectors on that surface are zero vectors.
We are thus only concerned with one side of the cube since only one side has non-zero flux. This side is parametrized using
$X=\begin{bmatrix} x \\ y\\ z\\ \end{bmatrix} = \begin{bmatrix} 2 \\ u\\ v\\ \end{bmatrix}\, , u \in (0,2)\, ,v \in (0,2)$
Step 4: With this parametrization, we find a general normal vector to our surface.
To find this normal vector, we find two vectors which are always tangent to (or contained in) the surface, and are not collinear. The cross product of two such vectors gives a vector normal to the surface.
The first vector is the partial derivative of our surface with respect to u: $\frac{\partial{X}}{\partial{u}} = \begin{bmatrix} 0\\ 1\\ 0\\ \end{bmatrix}$
The second vector is the partial derivative of our surface with respect to v: $\frac{\partial{X}}{\partial{v}} = \begin{bmatrix} 0\\ 0\\ 1\\ \end{bmatrix}$
The normal vector is finally the cross product of these two vectors, which is simply $N = \begin{bmatrix} 1\\ 0\\ 0\\ \end{bmatrix}.$
Step 5: Integrate the dot product of this normal vector with the given vector field.
The amount of the field normal to our surface is the flux through it, and is exactly what this integral gives us.
$\iint\limits_{\partial V}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\;\;\;\subset\!\supset \mathbf F\;\cdot\mathbf n\,{d}S .$
$= \int_0^2 \int_0^2 F \cdot N \,du\,dv$
$= \int_0^2 \int_0^2 \begin{bmatrix} x^2 \\ 0\\ 0\\ \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 0\\ 0\\ \end{bmatrix} \,du\,dv = \int_0^2 \int_0^2 x^2\,du\,dv = \int_0^2 \int_0^2 4 \,du\,dv$ (since $x=2$ on this face)
$=16$
Both sides of the equation give 16, so the Divergence Theorem is indeed valid here. ■
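For readers who want to check the arithmetic, here is a minimal SymPy sketch (symbol names are arbitrary) that cross-checks both integrals of this example:

```python
import sympy as sp

x, y, z, u, v = sp.symbols('x y z u v')

# Left side: integrate div F = 2x over the cube [0,2]^3.
div_F = sp.diff(x**2, x)                                   # divergence of F = (x^2, 0, 0)
lhs = sp.integrate(div_F, (x, 0, 2), (y, 0, 2), (z, 0, 2))

# Right side: only the face x = 2 has non-zero flux; there F . n = x^2 = 4.
F_dot_n = (x**2).subs(x, 2)
rhs = sp.integrate(F_dot_n, (u, 0, 2), (v, 0, 2))

print(lhs, rhs)   # 16 16
```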
# References
Another explanation: http://mathworld.wolfram.com/DivergenceTheorem.html
More examples: http://faculty.eicc.edu/bwood/ma220supplemental/supplemental34.htm
http://mathematica.stackexchange.com/questions/8096/is-there-a-way-to-output-histogram-x-and-y-data-without-using-the-histogram-char
# Is there a way to output histogram x and y data without using the Histogram chart function?
I have a large list of data (3.2 million real numbers), and I would like to plot a histogram of it. The built-in Histogram function is very nice, but on my computer, it is often extremely slow when trying to chart histograms of lists that are very long (~1 million real numbers).
So, I would like to pre-bin the data, put it into {x, y} form (i.e., a list of ordered pairs), and plot it with ListPlot -- with the hope that this will be a workaround to using Histogram[list, PerformanceGoal -> "Speed"] directly.
The BinCounts function is very nice: it takes a list, followed by a bin specification, and outputs the number of elements found within each bin. For example, consider one of the examples given in the documentation:
BinCounts[{1, 3, 2, 1, 4, 5, 6, 2}, {0, 10, 1}]
(* {0, 2, 2, 1, 1, 1, 1, 0, 0, 0} *)
where the bin specification $\{x_\min, x_\max, \text{dx}\}$ tells Mathematica to use bins which satisfy the relation $${x_\min + (i-1) \text{ dx} \leq x < x_\min + i \text{ dx}}$$ for bin $i$.
But, while BinCounts efficiently and effectively outputs the "y" values (the counts), it does not output the "x" values (the bin positions). This is probably the case because there is some ambiguity in the term "bin position," especially for lists containing a small number of elements. But, for a list of many elements, the term "bin position" becomes less important, I think.
Is there any way to automatically print both the "x" and the "y" values for a "histogram" to be plotted using ListPlot? Or should I write my own function? I can write my own function, but I just wanted to ask, because it seems somewhat odd that there is no way to have Histogram simply output the data (and suppress display of the fancy, time- and memory-consuming chart graphic).
As far as what to use as the working "bin position," I guess that I would like to use the midpoint of the bounds of each bin. I guess this would be $$\frac{(x_\min + (i-1) \text{ dx}) + (x_\min + i \text{ dx})}{2} = \frac{1}{2}(2x_\min + (2 i - 1) \text{ dx})$$.
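(For comparison outside Mathematica, the same pre-binning idea - counts paired with the bin midpoints defined above - can be sketched in a few lines of NumPy; this is only an illustration of the convention, using the small example data from earlier in the question.)

```python
import numpy as np

data = np.array([1, 3, 2, 1, 4, 5, 6, 2])
counts, edges = np.histogram(data, bins=np.arange(0, 11, 1))   # bin spec {0, 10, 1}
centers = (edges[:-1] + edges[1:]) / 2                         # midpoint of each bin's bounds
xy = np.column_stack((centers, counts))                        # {x, y} pairs, ready for plotting
```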
## 2 Answers
You need the HistogramList function, which gives you both the bins (x values) and the heights (y values). For example (using your data and bin specification):
{bins, counts} = HistogramList[{1, 3, 2, 1, 4, 5, 6, 2}, {0, 10, 1}]
(* {{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, {0, 2, 2, 1, 1, 1, 1, 0, 0, 0}} *)
Note that the lengths of bins and counts are not the same:
Length /@ {bins, counts}
(* {11, 10} *)
To use the above bins and counts with ListLinePlot or an equivalent function, use the various approaches in this question and its answers to pair up the two lists. Using the highest upvoted answer:
ListLinePlot[{bins, Append[counts, 0]} // Transpose, InterpolationOrder -> 0]
Ah, I see that HistogramList is new in version 8. I have both version 7 and 8 on my computer, but I typically use version 7 because some of my colleagues do not have 8 and I want to avoid using functions that they don't have. But, based on this, it looks like it's time to upgrade! Thanks for your time. – Andrew Jul 9 '12 at 18:50
@Andrew I've added an equivalent function for version 6 and above in an answer. – rm -rf♦ Jul 9 '12 at 19:06
As you rightly note in your comment, HistogramList is new in version 8. However, BinCounts has been around since version 6, and so here is an equivalent function that works from version 6 onwards.
histogramList[data_, binspec_List] := {Range[Sequence @@ binspec], BinCounts[data, binspec]}
You can verify that this indeed returns the same results as the version 8 equivalent above:
data = {1, 3, 2, 1, 4, 5, 6, 2};
HistogramList[data, {0, 10, 1}] == histogramList[data, {0, 10, 1}]
(* True *)
Methinks BinCounts is much faster (x10) at the expense of precalculating appropriate binspecs – belisarius Jul 9 '12 at 19:12
@belisarius This function is about twice as fast as the built-in HistogramList on my machine – rm -rf♦ Jul 9 '12 at 19:16
@belisarius Wait, so this means that R.M's histogramList is actually faster than Mathematica's HistogramList? – Andrew Jul 9 '12 at 19:36
@Andrew I would rather say that BinCounts[] is faster than HistogramList[] ... – belisarius Jul 9 '12 at 19:40
http://physics.stackexchange.com/questions/3168/why-correlation-functions
# Why correlation functions?
While this concept is widely used in physics, it is really puzzling (at least for beginners) that you just have to multiply two functions (or the function by itself) at different values of the parameter and then average over the domain of the function keeping the difference between those parameters:
$C(x)=\langle f(x'+x)g(x')\rangle$
Is there any relatively simple illustrative examples that gives one the intuition about correlation functions in physics?
## 3 Answers
A very intuitive example of correlation functions can be seen in laser speckle metrology.
If you shine light on a surface which is rough compared to the wavelength, the resulting reflected signal will be in some sense random. Put differently, you cannot tell from one point of the signal what a neighbouring one looks like - they are uncorrelated. Such a field is often referred to as a speckle pattern.
This fact can be used. Suppose you take an image $A(x,y)$ of such a randomly scattered field. A movement of the image, $$(x,y)\rightarrow (x+\delta_x, y+\delta_y) = (x',y')\,,$$ so that the shifted image satisfies $$B(x,y) \approx A(x',y'),$$
will be clearly visible, and since all the information is statistical, one finds that
$$C(\Delta_x,\Delta_y) = \int B(x,y) A(x + \Delta_x, y + \Delta_y) dx dy$$
will only have a "big" contribution at $(\Delta_x,\Delta_y) \equiv (\delta_x, \delta_y)$ of some peaked form. The width of the peak will be given by some physical properties of the illumination, roughness of the surface etc. - it directly corresponds to the local variation of the field.
If the field additionally had some periodic variation, we would see that $C$ has several peaks corresponding to the image's (or field's) self-similarity.
So, analyzing the correlation of a quantity will give you information on how fast it changes and if it is somehow self-similar.
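A minimal NumPy sketch of this peak-at-the-displacement behaviour (one-dimensional for brevity; the signal and the 37-sample shift are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal(2048)      # spatially uncorrelated, speckle-like signal
B = np.roll(A, 37)                 # the same signal displaced by 37 samples

# Circular cross-correlation via FFT; its argmax recovers the displacement.
C = np.fft.ifft(np.fft.fft(B) * np.conj(np.fft.fft(A))).real
print(np.argmax(C))                # 37
```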
I hope you don't mind that I have chosen an application coming from a more practical viewpoint.
Sincerely
Robert
PS.: More can be found in all the very rich works done by Goodman.
Excellent question, Kostya. Lubos already gave a detailed answer using general arguments in the language of QFT.
In astrophysics and cosmology, however, there is another, and very simple, reason why we use the correlation functions all the time. It turns out that the mean value of the function $f(\vec{x})$, denoted $\langle f(\vec{x})\rangle$, can often not be predicted by the theoretical model (e.g. hot Big Bang model with inflationary stage early on, cold dark matter at late times, etc... or whatever other model you wish to consider) - while its correlation $\langle f(\vec{x})f(\vec{y})\rangle$ can be predicted. Here $f$ can refer to any cosmological observable quantity, and $\vec{x}$ and $\vec{y}$ refer to spatial coordinates.
The most common example would be to consider the excess density of dark matter, $f(\vec{x})\equiv \delta\rho(\vec{x})/\rho$, where $\rho$ is the mean density (units of which are kilograms per meter cubed for example) and $\delta\rho(\vec{x})$ is the excess over- or under-density at location $\vec{x}$, and over some region which I will not specify for simplicity of the argument. By definition, the mean of $f$ is zero, so we explicitly indicate that we are not interested in the mean (alternatively, we cannot easily get the mean density of the universe from first principles). But the correlation function, $\langle \delta\rho(\vec{x})\delta\rho(\vec{y})/\rho^2\rangle$ can be related to fundamental parameters of the universe, in particular details of the inflationary epoch, dark matter density, etc. Details of this are involved, and are taught in a graduate course in cosmology. Suffice it to say that theory predicts not the mean of the function (1-point correlation function), but rather its (co)variances (2-point correlation function).
Intuitively, the two-point correlation function of $\delta\rho/\rho$ is related to the "probability that, given an overdense region of dark matter at location $\vec{x}$, there is an overdense region at location $\vec{y}$", and this probability is determined by the good old law of gravity - and can be predicted from first principles.
Theory also in principle predicts the 3-point (e.g. $\langle f(\vec{x})f(\vec{y})f(\vec{z})\rangle$) and higher-point correlation functions, but those are both harder to calculate theoretically and to measure observationally. Nevertheless, there is a thriving subfield in particle physics and cosmology of predicting theoretically, and measuring observationally, these so-called higher-order correlation functions.
One final ingredient in all this is the role of measuring the correlation function. The angle-bracket averaging sign, $\langle\cdot\rangle$, implies that we should be averaging over different realizations of the system - that is, the universe - in the same underlying cosmological model. This is clearly impossible, since we have only one universe to measure! Instead, we assume statistical homogeneity (which is the same as translational invariance from Lubos' post). Then, instead of averaging over different universes, cosmologists average $f(\vec{x})f(\vec{y})$ over different locations ($\vec{x}, \vec{y}$) in our universe which have a fixed distance $|\vec{x}-\vec{y}|$ between the two points. This way, using the statistical homogeneity assumption, we can get good measurements of the correlation function of any quantity we desire.
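To make the "average over locations instead of universes" point concrete, here is a toy NumPy sketch for a one-dimensional stand-in for $\delta\rho/\rho$ (the field and its correlation length are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# A statistically homogeneous field: white noise smoothed by a Gaussian kernel.
white = rng.standard_normal(4096)
kernel = np.exp(-0.5 * (np.arange(-20, 21) / 5.0) ** 2)
f = np.convolve(white, kernel / kernel.sum(), mode='same')
f -= f.mean()

def xi(field, r):
    """Estimate <f(x) f(x+r)> by averaging over all positions x (periodic)."""
    return np.mean(field * np.roll(field, -r))

for r in (0, 2, 5, 10, 20):
    print(r, xi(f, r))   # the correlation decays as the separation r grows
```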
Nice answer! Now we have QFT, astrophysics and an applied example :) – Robert Filter Jan 18 '11 at 7:21
The correlation function you wrote is a completely general correlation of two quantities, $$\langle f(X) g(Y)\rangle$$ You just use the symbol $x'$ for $Y$ and the symbol $x+x'$ for $X$.
If the environment - the vacuum or the material - is translationally invariant, it means that its properties don't depend on overall translations. So if you change $X$ and $Y$ by the same amount, e.g. by $z$, the correlation function will not change.
Consequently, you may shift by $z=-Y=-x'$ which means that the new $Y$ will be zero. So $$\langle f(X) g(Y)\rangle = \langle f(X-Y)g(0)\rangle = \langle f(x)g(0) \rangle$$ As you can see, for translationally symmetric systems, the correlation function only depends on the difference of the coordinates i.e. separation of the arguments of $f$ and $g$, which is equal to $x$ in your case.
So this should have explained the dependence on $x$ and $x'$.
Now, what is a correlator? Classically, it is some average over the probabilistic distribution $$\langle S \rangle = \int D\phi\,\rho(\phi) S(\phi)$$ This holds for $S$ being the product of several quantities, too. The integral goes over all possible configurations of the physical system and $\rho(\phi)$ is the probability density of the particular configuration $\phi$.
In quantum mechanics, the correlation function is the expectation value in the actual state of the system - usually the ground state and/or a thermal state. For a ground state which is pure, we have $$\langle \hat{S} \rangle = \langle 0 | \hat{S} | 0 \rangle$$ where the 0-ket-vector is the ground state, while for a thermal state expressed by a density matrix $\rho$, the correlation function is defined as $$\langle \hat{S} \rangle = \mbox{Tr}\, (\hat{S}\hat{\rho})$$ Well, correlation functions are functions that know about the correlation of the physical quantities $f$ and $g$ at two points. If the correlation is zero, it looks like the two quantities are independent of each other. If the correlation is positive, it looks like the two quantities are likely to have the same sign; the more positive it is, the more they're correlated. They're correlated with the opposite signs if the correlation function is negative.
In quantum field theory, the correlation function of two operators - just like you wrote - is known as the propagator, and it is the mathematical expression that replaces all internal lines of Feynman diagrams. It tells you the probability amplitude that the corresponding particle propagates from the point $x+x'$ to the point $x'$. It is usually nonzero inside the light cone only and depends only on the difference of the coordinates.
Correlation functions involving an arbitrary positive number of operators are known as the Green's functions or $n$-point functions if a product of $n$ quantities is in between the brackets. In some sense, the $n$-point functions know everything about the calculable dynamical quantities describing the physical system. The fact that everything can be expanded into correlation functions is a generalization of the Taylor expansions to the case of infinitely many variables.
In particular, the scattering amplitude for $n$ external particles (the total number, including incoming and outgoing ones) may be calculated from the $n$-point functions. The Feynman diagrams mentioned previously are a method to do this calculation systematically: a complicated correlator may be rewritten into a function of the 2-point functions, the propagators, contracted with the interaction vertices.
There are many words to physically describe a correlation function in various contexts - such as the response functions etc. The idea is that you insert an impurity or a signal into $x'$, that's your $g(x')$, and you study how much the field $f(x+x')$ at point $x+x'$ is affected by the impurity $g(x')$.
http://en.m.wikipedia.org/wiki/Vorticity
# Vorticity
In fluid dynamics, the vorticity is a pseudovector field that describes the local spinning motion of a fluid near some point, as would be seen by an observer located at that point and traveling along with the fluid.
Conceptually, the vorticity could be determined by marking the particles of the fluid in a small neighborhood of the point in question, and watching their relative displacements as they move along the flow. The vorticity vector would be twice the mean angular velocity vector of those particles relative to their center of mass, oriented according to the right-hand rule. This quantity must not be confused with the angular velocity of the particles relative to some other point.
More precisely, the vorticity of a flow is a pseudovector field $\vec{\omega}$, equal to the curl (rotational) of its velocity field $\vec{v}$. It can be expressed by the vector analysis formula:
$\vec{\omega} = \vec{\nabla} \times \vec{v}\,,$
where $\vec{\nabla}$ is the del operator. The vorticity of a two-dimensional flow is always perpendicular to the plane of the flow, and therefore can be considered a scalar field.
The vorticity is related to the flow's circulation (line integral of the velocity) along a closed path by the Stokes equation.[1] Namely, for any infinitesimal surface element $C$ with normal direction $\vec{n}$ and area $dA$, the circulation $d\Gamma$ along the perimeter of $C$ is the dot product $\vec{\omega} \cdot (dA\,\vec{n})$, where $\vec{\omega}$ is the vorticity at the center of $C$.[1]
Many phenomena, such as the blowing out of a candle by a puff of air, are more readily explained in terms of vorticity rather than the basic concepts of pressure and velocity. This applies, in particular, to the formation and motion of vortex rings.
## Examples
In a mass of fluid that is rotating like a rigid body, the vorticity is twice the angular velocity vector of that rotation. This is the case, for example, of water in a tank that has been spinning for a while around its vertical axis, at a constant rate.
The vorticity may be nonzero even when all particles are flowing along straight and parallel pathlines, if there is shear (that is, if the flow speed varies across streamlines). For example, in the laminar flow within a pipe with constant cross section, all particles travel parallel to the axis of the pipe; but faster near that axis, and practically stationary next to the walls. The vorticity will be zero on the axis, and maximum near the walls, where the shear is largest.
Conversely, a flow may have zero vorticity even though its particles travel along curved trajectories. An example is the ideal irrotational vortex, where most particles rotate about some straight axis, with speed inversely proportional to their distances to that axis. A small parcel of fluid that does not straddle the axis will be rotated in one sense but sheared in the opposite sense, in such a way that their mean angular velocity about their center of mass is zero.
Example flows (the original figures showing the absolute and the magnified relative velocities around a highlighted point are not reproduced here):

| | Rigid-body-like vortex | Parallel flow with shear | Irrotational vortex |
|---|------------------------|--------------------------|---------------------|
| Vorticity | ≠ 0 | ≠ 0 | = 0 |
Another way to visualize vorticity is to imagine that, instantaneously, a tiny part of the fluid becomes solid and the rest of the flow disappears. If that tiny new solid particle is rotating, rather than just moving with the flow, then there is vorticity in the flow.
## Mathematical definition
Mathematically, the vorticity of a three-dimensional flow is a pseudovector field, usually denoted by $\vec{\omega}$, defined as the curl or rotational of the velocity field $\vec{v}$ describing the fluid motion. In Cartesian coordinates:
$\begin{array}{rcl} \vec{\omega} &=& \nabla \times \vec{v} \;=\; \left(\frac{\partial}{\partial x},\frac{\partial}{\partial y},\frac{\partial}{\partial z}\right)\times(v_x,v_y,v_z)\\ &=& \left( \frac{\partial v_z}{\partial y} - \frac{\partial v_y}{\partial z},\; \frac{\partial v_x}{\partial z} - \frac{\partial v_z}{\partial x},\; \frac{\partial v_y}{\partial x} - \frac{\partial v_x}{\partial y} \right) \end{array}$
In words, the vorticity tells how the velocity vector changes when one moves by an infinitesimal distance in a direction perpendicular to it.
In a two-dimensional flow where the velocity is independent of the z coordinate and has no z component, the vorticity vector is always parallel to the z axis, and therefore can be viewed as a scalar field:
$\vec{\omega} \;=\; \nabla \times \vec{v} \;=\; \left(\frac{\partial}{\partial x},\frac{\partial}{\partial y}\right)\times(v_x,v_y) \;=\; \frac{\partial v_y}{\partial x} - \frac{\partial v_x}{\partial y}$
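As a cross-check of this formula against the three example flows described above, a short SymPy sketch (the symbols $\Omega$ and $k$ are arbitrary constants):

```python
import sympy as sp

x, y, Omega, k = sp.symbols('x y Omega k', real=True)

def vorticity(vx, vy):
    """2-D scalar vorticity: d(vy)/dx - d(vx)/dy."""
    return sp.simplify(sp.diff(vy, x) - sp.diff(vx, y))

# Rigid-body rotation with angular velocity Omega: v = Omega*(-y, x)
print(vorticity(-Omega*y, Omega*x))              # 2*Omega (twice the angular velocity)

# Parallel flow with shear: v = (k*y, 0)
print(vorticity(k*y, sp.Integer(0)))             # -k (non-zero wherever there is shear)

# Irrotational vortex: v = k/(x^2 + y^2) * (-y, x)
r2 = x**2 + y**2
print(vorticity(-k*y/r2, k*x/r2))                # 0 (away from the axis)
```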
## Evolution
The evolution of the vorticity field in time is described by the vorticity equation, which can be derived from the Navier-Stokes equations.
In many real flows where the viscosity can be neglected (more precisely, in flows with high Reynolds number), the vorticity field can be modeled well by a collection of discrete vortices, the vorticity being negligible everywhere except in small regions of space surrounding the axes of the vortices. This is clearly true in the case of 2-D potential flow (i.e. 2-D zero viscosity flow), in which case the flowfield can be modeled as a complex-valued field on the complex plane.
Vorticity is a useful tool to understand how the ideal potential flow solutions can be perturbed to model real flows. In general, the presence of viscosity causes a diffusion of vorticity away from the vortex cores into the general flow field. This flow is accounted for by the diffusion term in the vorticity transport equation. Thus, in cases of very viscous flows (e.g. Couette Flow), the vorticity will be diffused throughout the flow field and it is probably simpler to look at the velocity field than at the vorticity.
## Vortex lines and vortex tubes
A vortex line or vorticity line is a line which is everywhere tangent to the local vorticity vector. A vortex tube is the surface in the fluid formed by all vortex-lines passing through a given (reducible) closed curve in the fluid. The 'strength' of a vortex-tube (also called vortex flux)[2] is the integral of the vorticity across a cross-section of the tube, and is the same everywhere along the tube (because vorticity has zero divergence). It is a consequence of Helmholtz's theorems (or equivalently, of Kelvin's circulation theorem) that in an inviscid fluid the 'strength' of the vortex tube is also constant with time. Viscous effects introduce frictional losses and time dependence.
In a three dimensional flow, vorticity (as measured by the volume integral of its squared magnitude) can be intensified when a vortex-line is extended — a phenomenon known as vortex stretching.[3] This phenomenon occurs in the formation of a bath-tub vortex in out-flowing water, and the build-up of a tornado by rising air-currents.
A related quantity is helicity, which is vorticity in motion along a third axis in a corkscrew fashion.
## Specific sciences
### Aeronautics
In aerodynamics, the lift distribution over a finite wing may be approximated by assuming that each segment of the wing has a semi-infinite trailing vortex behind it. It is then possible to solve for the strength of the vortices using the criterion that there be no flow induced through the surface of the wing. This procedure is called the vortex panel method of computational fluid dynamics. The strengths of the vortices are then summed to find the total approximate circulation about the wing. According to the Kutta–Joukowski theorem, lift is the product of circulation, airspeed, and air density.
### Atmospheric sciences
In the atmospheric sciences,
The relative vorticity is the vorticity of the air velocity field relative to the Earth. This is often modeled as a two-dimensional flow parallel to the ground, so that the relative vorticity vector is generally perpendicular to the ground, and can then be viewed as a scalar quantity, positive when the vector points upward, negative when it points downwards. Therefore, vorticity is positive when the wind turns counter-clockwise (looking down onto the Earth's surface). In the Northern Hemisphere, positive vorticity is called cyclonic rotation, and negative vorticity is anticyclonic rotation; the nomenclature is reversed in the Southern Hemisphere.
The absolute vorticity is computed from the air velocity relative to an inertial frame, and therefore includes a term due to the Earth's rotation, the Coriolis parameter.
The potential vorticity is absolute vorticity divided by the vertical spacing between levels of constant entropy (or potential temperature). The absolute vorticity of an air mass will change if the air mass is stretched (or compressed) in the z direction, but the potential vorticity is conserved in an adiabatic flow, which predominates in the atmosphere. The potential vorticity is therefore useful as an approximate tracer of air masses over the timescale of a few days, particularly when viewed on levels of constant entropy.
The barotropic vorticity equation is the simplest way for forecasting the movement of Rossby waves (that is, the troughs and ridges of 500 hPa geopotential height) over a limited amount of time (a few days). In the 1950s, the first successful programs for numerical weather forecasting utilized that equation.
In modern numerical weather forecasting models and general circulation models (GCM's), vorticity may be one of the predicted variables, in which case the corresponding time-dependent equation is a prognostic equation.
Helicity of the air motion is important in forecasting supercells and the potential for tornadic activity.
## References
1. Clancy, L.J., Aerodynamics, Section 7.11.
2. Batchelor, Section 5.2.
• Clancy, L.J. (1975), Aerodynamics, Pitman Publishing Limited, London. ISBN 0-273-01120-0.
• "Weather Glossary". The Weather Channel Interactive, Inc., 2004.
• "Vorticity". Integrated Publishing.
## Further reading
• Batchelor, G. K. (2000) [1967], An Introduction to Fluid Dynamics, Cambridge University Press, ISBN 0-521-66396-2
• Ohkitani, K., "Elementary Account Of Vorticity And Related Equations". Cambridge University Press. January 30, 2005. ISBN 0-521-81984-9
• Chorin, Alexandre J., "Vorticity and Turbulence". Applied Mathematical Sciences, Vol 103, Springer-Verlag. March 1, 1994. ISBN 0-387-94197-5
• Majda, Andrew J., Andrea L. Bertozzi, "Vorticity and Incompressible Flow". Cambridge University Press; 2002. ISBN 0-521-63948-4
• Tritton, D. J., "Physical Fluid Dynamics". Van Nostrand Reinhold, New York. 1977. ISBN 0-19-854493-6
• Arfken, G., "Mathematical Methods for Physicists", 3rd ed. Academic Press, Orlando, FL. 1985. ISBN 0-12-059820-5
http://mathoverflow.net/questions/66722/word-problem-in-free-burnside-groups-and-other-torsion-groups
## word problem in free Burnside groups (and other torsion groups)
Question 1. Is it known that for some free Burnside groups the word problem is undecidable?
Provided that the answer is negative, what about the following easier question.
Question 2. Is there a known example of a finitely generated (and preferably finitely presented) group $G$ and an integer $k$ such that all elements of $G$ have order at most $k$ and the word problem in $G$ is undecidable?
I am far from being an expert, but are there any examples of infinite finitely presented torsion groups? – Yiftach Barnea Jun 2 2011 at 11:38
@Yiftach: I've just googled that this was an open question in 2002 (paragraph 3, page 3 in the article "non-amenable finitely presented torsion-by cyclic groups" by Ol'shanskii and Sapir), so perhaps such examples are still not known. – Łukasz Grabowski Jun 2 2011 at 11:50
@Yiftach: there's also an article of M. Sapir from 2007 "Some group theory problems", where he mentions existence of finitely presented infinite torsion groups as an open problem. – Łukasz Grabowski Jun 2 2011 at 13:22
I asked Martin Bridson in January 2010, at which point the question was still open. I have no reason to think this has changed. – Jonathan Kiehlmann Jun 20 2011 at 12:39
## 1 Answer
Concerning question 1: for free Burnside groups of odd exponent $n\geq 665$, decidability of the word problem was shown by S. I. Adian. Lysionok proved the same in the case of even exponent $n=16k\geq 8000$. The corresponding decision procedure is just Dehn's algorithm.
It would be interesting to know how to solve the word problem for the groups $B(m,n)$ when $n=2k$, where $k\geqslant 665$ is odd. These groups are infinite (this follows from the cited result of Adian), but it seems that the decidability of the word problem for these groups is an open question.
As to question 2:
The mere existence of a non-finitely presented group with undecidable word problem follows easily from the result of S. I. Adian, who proved that there are continuum many non-isomorphic periodic groups with a fixed number of generators $m\geqslant 2$ satisfying the periodic law $X^n=1$ for odd $n \geq 665$. One should mention that the number of "additional relations" (besides all the periodic ones) is not necessarily finite.
Later M. Sapir constructed an example of a periodic group with a finite number of additional relations. The paper is "On the word problem in periodic group varieties", IJAC, 1991, V.1, No.1.
Many thanks, this answers all the questions I had, even the one which I didn't ask explicitly, i.e. whether there exists a periodic group with recursive presentation and undecidable word problem. – Łukasz Grabowski Jun 4 2011 at 16:37
http://planetmath.org/Hypersurface
# hypersurface
###### Definition.
Let $M$ be a subset of ${\mathbb{R}}^{n}$ such that for every point $p\in M$ there exists a neighbourhood $U$ of $p$ in ${\mathbb{R}}^{n}$ and a continuously differentiable function $\rho\colon U\to{\mathbb{R}}$ with $\operatorname{grad}\rho\not=0$ on $U$, such that
$M\cap U=\{x\in U\mid\rho(x)=0\}.$
Then $M$ is called a hypersurface.
If $\rho$ is in fact smooth then $M$ is a smooth hypersurface, and similarly if $\rho$ is real analytic then $M$ is a real analytic hypersurface. If we identify ${\mathbb{R}}^{2n}$ with ${\mathbb{C}}^{n}$ and we have a hypersurface there, it is called a real hypersurface in ${\mathbb{C}}^{n}$. The function $\rho$ is usually called the local defining function. Hypersurface is really just a special name for a submanifold of codimension 1. In fact, if $M$ is just a topological manifold of codimension 1, then it is often also called a hypersurface.
A real or complex analytic subvariety of codimension 1 (the zero set of a real or complex analytic function) is called a singular hypersurface. That is, the definition is the same as above, but we do not require $\operatorname{grad}\rho\not=0$. Note that some authors leave out the word singular and then use non-singular hypersurface for a hypersurface which is also a manifold. Some authors use the word hypervariety to describe a singular hypersurface.
An example of a hypersurface is the hypersphere (of radius 1 for simplicity) which has the defining equation
$x_{1}^{2}+x_{2}^{2}+\ldots+x_{n}^{2}=1.$
Another example of a hypersurface would be the boundary of a domain in ${\mathbb{C}}^{n}$ with smooth boundary.
An example of a singular hypersurface in ${\mathbb{R}}^{2}$ is the zero set of $\rho(x_{1},x_{2})=x_{1}x_{2}$, which is really just the two coordinate axes. Note that this hypersurface fails to be a manifold at the origin.
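A small SymPy check of the two defining functions above (variable names are arbitrary): the gradient of the hypersphere's defining function is non-zero at every point of the sphere, while the gradient of $x_{1}x_{2}$ vanishes at the origin, which is exactly where the zero set fails to be a manifold.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)

# Hypersphere in R^3: rho = x1^2 + x2^2 + x3^2 - 1
rho_sphere = x1**2 + x2**2 + x3**2 - 1
print([sp.diff(rho_sphere, v) for v in (x1, x2, x3)])   # [2*x1, 2*x2, 2*x3]
# This gradient vanishes only at the origin, which does not lie on the sphere.

# Singular example in R^2: rho = x1*x2 (the two coordinate axes)
rho_axes = x1 * x2
print([sp.diff(rho_axes, v) for v in (x1, x2)])         # [x2, x1]
# This gradient vanishes at (0, 0), a point of the zero set.
```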
## Mathematics Subject Classification
32V40 Real submanifolds in complex manifolds
14J70 Hypersurfaces
## Info
Owner/author: jirka. Added 2004-08-11; current version v8, 2013-03-22.
http://www.physicsforums.com/showthread.php?p=3676950
# How can the Higgs field explain the proton's inertial resistance to acceleration?
If most of the mass/energy of a proton is due to the kinetic energy of its quarks and gluons, rather than interaction with the Higgs field, then how can we explain its inertial mass, i.e. its resistance to acceleration, as being due to a drag induced by the Higgs field?
Alternatively imagine a body made up of particles whose mass/energy is provided by the Higgs field. Now let us spin that body very fast. Its mass/energy will increase due to relativistic mass increase of the moving particles. We know that its inertial mass will increase - it will be harder to accelerate the whole body linearly with a given force. But the Higgs field is Lorentz invariant so that if the inertial resistance force is due to the Higgs field then it shouldn't be any harder to accelerate the body whether it is spinning or not.
Quote by johne1618 If most of the mass/energy of a proton is due to the kinetic energy of its quarks and gluons, rather than interaction with the Higgs field, then how can we explain its inertial mass, i.e. its resistance to acceleration, as being due to a drag induced by the Higgs field?
You can't. The proton mass has little to do with the Higgs. Therefore its resistance to acceleration has little to do with the Higgs. No one has claimed so.
Quote by johne1618 But the Higgs field is Lorentz invariant so that if the inertial resistance force is due to the Higgs field then it shouldn't be any harder to accelerate the body whether it is spinning or not.
A Lorentz transformation never turns a non-rotating body into a rotating body (though it can affect the linear velocity). So Lorentz invariance doesn't give you any connection between the two configurations.
Quote by petergreat You can't. The proton mass has little to do with the Higgs. Therefore its resistance to acceleration has little to do with the Higgs. No one has claimed so.
Ok - so that means that when I push a lead weight across an ice rink physics still can't explain the origin of 98% of the resistive force that the weight applies back on my hand.
Quote by johne1618 If most of the mass/energy of a proton is due to the kinetic energy of its quarks and gluons, rather than interaction with the Higgs field, then how can we explain its inertial mass, i.e. its resistance to acceleration, as being due to a drag induced by the Higgs field? Alternatively imagine a body made up of particles whose mass/energy is provided by the Higgs field. Now let us spin that body very fast. Its mass/energy will increase due to relativistic mass increase of the moving particles. We know that its inertial mass will increase - it will be harder to accelerate the whole body linearly with a given force. But the Higgs field is Lorentz invariant so that if the inertial resistance force is due to the Higgs field then it shouldn't be any harder to accelerate the body whether it is spinning or not.
If the Higgs field is Lorentz invariant it might (in a very ad hoc manner, yes) help explain how certain other particles acquire (part of) their mass, but it cannot explain how the Higgs particle itself gets its own mass; one would have to postulate a new field for that, and so on in infinite-regress style.
Quote by TrickyDicky If the Higgs field is Lorentz invariant it might (in a very ad hoc manner, yes) help explain how certain other particles acquire (part of) their mass, but it cannot explain how the Higgs particle itself gets its own mass; one would have to postulate a new field for that, and so on in infinite-regress style.
It is all about finding the best mathematical description of the observed natural phenomena. If that is a Higgs potential, then that is our "answer" and we will say that is the law of nature as far as we know. Who are we to say that there has to be "underlying structure"?
It is not scientifically justifiable to introduce more and more dynamic degrees of freedom in that way if there is no indication of or need for such a construction from an experimental point of view.
On the other hand, there was/is a scientifically justifiable reason for the introduction of the Higgs mechanism/potential: gauge theories seemed fine in most respects, apart from not, at first, being able to explain the origin of the masses of the fundamental particles.
Quote by johne1618 Ok - so that means that when I push a lead weight across an ice rink physics still can't explain the origin of 98% of the resistive force that the weight applies back on my hand.
I believe the interesting technical question here boils down to understanding, at the level of quarks and gluons, the effect of ambient electromagnetic fields on the motion of a proton. At some level it's going to be about virtual photons changing the momentum of the proton's constituent quarks. The quarks are tumbling along together, bound by gluon exchange, and then the electromagnetic field gives little "kicks" to the quarks, which certainly don't break the bonds between them, but which do affect the motion of the whole.
Quote by johne1618 Ok - so that means that when I push a lead weight across an ice rink physics still can't explain the origin of 98% of the resistive force that the weight applies back on my hand.
Apart from being quite tricky to calculate, it does "explain" most of it. It would in principle be possible for you to make a sum of all the Higgs-masses of all the fundamental particles, together with the kinetic/potential energy (very important!) of all the constituent particles in your lead weight, e.g. quarks and gluons. If you calculate all of this you should get the correct rest energy in principle.
I have no idea what kind of accuracy you get if you attempt a calculation like this. If you start from the experimentally observed masses of baryons, instead of quarks and gluons, you'll get very good results, though.
One reason this is a nice topic is that the mass of the proton would be much the same even if the up and down quarks were massless! So clearly the Higgs field doesn't "explain the proton's inertial resistance to acceleration". You could have QCD with two massless flavors of quark, coupled to electromagnetism and gravity, and the behavior of the proton would be the same. So a coherent "philosophy of inertia and acceleration" has to be able to deal with at least two cases - situations without a Higgs field at all, like the one I just described, and also situations where the Higgs field is at work - e.g. instead of a proton, we have an electron, which does get its mass from the Higgs mechanism. I am used to just saying that gravity couples universally to energy-momentum, but it should be enlightening to dig into the details of the two cases.
Quote by torquil It is all about finding the best mathematical description of the observed natural phenomena. If that is a Higgs potential, then that is our "answer" and we will say that is the law of nature as far as we know. Who are we to say that there has to be "underlying structure"?
Not exactly. It is about finding the best mathematical description, for sure, but for many physicists it is also about the "underlying" picture; otherwise we wouldn't need models expressible in words, nor would we care about the unification of QM and GR, or about the Higgs at all; the mathematical description of those two theories is complete for all the natural phenomena we do observe.
So according to your philosophy no money should be spent on the LHC, etc., no?
Quote by torquil It is not scientifically justifiable to introduce more and more dynamic degrees of freedom in that way if there is no indication of or need for such a construction from an experimental point of view.
Agreed, no justification; how do you explain the Higgs' own mass then?
Quote by TrickyDicky Not exactly. It is about finding the best mathematical description, for sure, but for many physicists it is also about the "underlying" picture; otherwise we wouldn't need models expressible in words, nor would we care about the unification of QM and GR, or about the Higgs at all; the mathematical description of those two theories is complete for all the natural phenomena we do observe. So according to your philosophy no money should be spent on the LHC, etc., no?
No, I want a lot of money to be spent at the LHC! :) And on the next generation accelerator. Basically, I want the experimental testing and exploration of the laws of nature to continue forever. I don't believe that we can ever be sure that our theories are perfect, so they need to be tested all the time in new experiments. If at some point the Higgs mechanism turns out to be just an approximation of something else, then I welcome all theoretical model-building and theories about underlying structures.
Agreed, no justification; how do you explain the Higgs' own mass then?
Similar questions can be stated for anything in the standard model. E.g., how do you explain the existence of the electron field? How do you explain the existence of gauge symmetry? Nobody knows at which point, if ever (it might go on forever), the current model will turn out to be all there is to it and no more. But if there is no experimental reason for searching for an underlying structure, it doesn't make much sense to hypothesize much about it. Searching for an underlying structure usually makes sense when the current model turns out to be inaccurate, which has not happened yet for the Higgs mechanism.
So I want to think about the origin of the Higgs mass when experiments determine that the current Higgs mechanism doesn't perfectly conform to experiments.
Of course, everybody is free to think about and research whatever they want. That's just as important as the scientific method itself... :)
Quote by torquil No, I want a lot of money to be spent at the LHC! :) And on the next generation accelerator. Basically, I want the experimental testing and exploration of the laws of nature to continue forever. I don't believe that we can ever be sure that our theories are perfect, so they need to be tested all the time in new experiments. If at some point the Higgs mechanism turns out to be just an approximation of something else, then I welcome all theoretical model-building and theories about underlying structures.
Aha, I supposed so, good thing then ;)
Quote by torquil Similar questions can be stated for anything in the standard model. E.g., how do you explain the existence of the electron field? How do you explain the existence of gauge symmetry? Nobody knows at which point, if ever (it might go on forever), the current model will turn out to be all there is to it and no more. But if there is no experimental reason for searching for an underlying structure, it doesn't make much sense to hypothesize much about it. Searching for an underlying structure usually makes sense when the current model turns out to be inaccurate, which has not happened yet for the Higgs mechanism. So I want to think about the origin of the Higgs mass when experiments determine that the current Higgs mechanism doesn't perfectly conform to experiments. Of course, everybody is free to think about and research whatever they want. That's just as important as the scientific method itself... :)
Now you seem to be thinking only about "underlying structures" here.
I was not referring to "existence" in that deep sense; I asked about the mechanism that allows the Higgs particle to acquire its own mass, just hinting at the problem the very Higgs mechanism can trigger while apparently resolving, say, the origin of the mass of the electron.
Quote by TrickyDicky Aha, I supposed so, good thing then ;) Now you seem to be thinking only about "underlying structures" here. I was not referring to "existence" in that deep sense; I asked about the mechanism that allows the Higgs particle to acquire its own mass, just hinting at the problem the very Higgs mechanism can trigger while apparently resolving, say, the origin of the mass of the electron.
Actually, I'm not really very up to date on the theoretical problems that come from spontaneous symmetry breaking, e.g. all this talk about "vacuum stability" and such. I guess there are many important avenues of research in that area, so much theoretical work is still to be done, and if someone comes along with a model that in some way explains or replaces the Higgs mechanism with fewer such problems, then that would only be a good thing.
I guess my original point was only the somewhat naive observation that IF the current theoretical Higgs mechanism turns out to be fine from a theoretical and experimental standpoint, then there is no need for another "underlying explanation", since the mass of the Higgs field would simply be explained by the fact that its potential has a certain quadratic term.
Quote by torquil since the mass of the Higgs field would simply be explained by the fact that its potential has a certain quadratic term.
I would appreciate if you elaborated on this.
Quote by TrickyDicky I would appreciate if you elaborated on this.
Well, in the Lagrangian formalism, in which the standard model is formulated, the mass of a field is determined by the prefactor of the term that is square in the field variable. For example, for a free scalar Klein-Gordon field of mass $m$, the Lagrangian function is:
[tex]
L = \frac1{2}(\partial \phi)^2 - \frac1{2}m^2\phi^2
[/tex]
The Higgs mechanism just uses a field that is a Lorentz scalar (+ some nontrivial gauge-group properties which I skip here) and which has a Lagrangian function similar to (a bit simplified to avoid unnecessary details):
[tex]
L = \frac1{2}(\partial \phi)^2 - V(\phi)
[/tex]
where $V(\phi)$ has a continuum of global minima, away from $\phi=0$. The standard Higgs-mechanism uses something like:
[tex]
V(\phi) = -\mu^2\phi^2 + \lambda\phi^4
[/tex]
Therefore, at low energy, the Higgs field $\phi$ settles into a nonzero global minimum $\phi_0$ which gives it a nonzero vacuum expectation value (VEV). Around this minimum, the potential has the form:
[tex]
V(h) = a + \frac1{2}m^2h^2 + ch^3 + dh^4
[/tex]
where $h := \phi - \phi_0$, and where the Higgs mass $m$ is determined from the two parameters $\mu,\lambda$ that define the Higgs potential.
So the introduction of the Higgs potential defines the mass of the small fluctuations around the potential minimum, which is by definition the Higgs mass.
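Filling in the arithmetic implied above (a worked aside, not part of the original post; the numerical factors depend on how the potential is normalized): minimizing $V(\phi) = -\mu^2\phi^2 + \lambda\phi^4$ gives
[tex]
V'(\phi_0) = -2\mu^2\phi_0 + 4\lambda\phi_0^3 = 0 \quad\Rightarrow\quad \phi_0^2 = \frac{\mu^2}{2\lambda},
[/tex]
and the curvature of the potential at that minimum is
[tex]
m^2 = V''(\phi_0) = -2\mu^2 + 12\lambda\phi_0^2 = 4\mu^2,
[/tex]
so with this normalization the mass of the fluctuation $h$ is $m = 2\mu$, fixed entirely by the parameters that define the potential.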
As a reference with more details:
http://en.wikipedia.org/wiki/Higgs_mechanism
Thanks. It must be said that the Higgs potential and that quadratic term have their own problems, like the hierarchy problem and vacuum-related instabilities depending on the Higgs mass, which are not solved at all as of now. But anyway this only shows that the Higgs mass is introduced "by hand" in that potential and doesn't explain what I was referring to, which is an "underlying" problem: how does a Lorentz invariant field give itself an invariant mass? How can something whose job is to break a symmetry itself respect that symmetry?
Quote by TrickyDicky Thanks. It must be said that the Higgs potential and that quadratic term have their own problems, like the hierarchy problem and vacuum-related instabilities depending on the Higgs mass, which are not solved at all as of now. But anyway this only shows that the Higgs mass is introduced "by hand" in that potential
Yes, the potential is put in "by hand". The SM Lagrangian is restricted by several principles that are believed to be fundamental, but those principles do not determine the Lagrangian uniquely. In particular, the dimensionless parameters are not determined by those principles, nor is the particle content. But at the moment I don't know of any experimental reason for believing that the potential is not "fundamental".
Yes there are some theoretical problems related to renormalisation, and that's not a surprise considering the weak mathematical grounding for the whole field of quantum gauge theories. As far as I know, the perturbation expansion that is used even in quite simple QFTs doesn't even converge.
and doesn't explain what I was referring to, which is an "underlying" problem: how does a Lorentz invariant field give itself an invariant mass? How can something whose job is to break a symmetry itself respect that symmetry?
The field theory itself respects the SU(2)xU(1) gauge symmetry. It is the solution of the field theory that breaks it down to a U(1) symmetry. The field theory is the "law of nature", which has some large symmetry group, and the "state of nature" is a solution to the field theory. Mathematically, there is no reason for the solution of a differential equation to have the full symmetry of the differential equation itself. Our world corresponds to a particular solution of the standard model field theory. It just happens to have a nonzero constant value of the Higgs scalar field, caused by the shape of the potential.
http://unapologetic.wordpress.com/2008/01/23/a-note-on-the-periodic-functions-problem/?like=1&source=post_flair&_wpnonce=134bd2d71b
The Unapologetic Mathematician
A note on the Periodic Functions Problem
Over at The Everything Seminar, Jim Belk mentions an interesting little problem.
Show that there exist two periodic functions $f,g:\mathbb{R}\rightarrow\mathbb{R}$ whose sum is the identity function:
$f(x)+g(x)=x$ for all $x\in\mathbb{R}$
He notes right off that, “Obviously the functions $f$ and $g$ can’t be continuous, since any continuous periodic function is bounded.” I’d like to explain why, in case you didn’t follow that.
If a function $f$ is periodic, that means it factors through a map to the circle, which we call $S^1$. Why? Because “periodic” with period $p$ means we can take the interval $\left[0,p\right)$ and glue one end to the other to make a circle. As we walk along the real line we walk around the circle. When we come to the end of a period in the line, that’s like getting back to where we started on the circle. Really what we’re doing is specifying a function on the circle and then using that function over and over again to give us a function on the real line. And if $f$ is going to be continuous, the function $\bar{f}:S^1\rightarrow\mathbb{R}$ had better be as well.
Now, I assert that the circle is compact. I could do a messy proof inside the circle itself (and I probably should in the long run) but for now we can just see the circle lying in the plane $\mathbb{R}^2$ as the collection of points distance $1$ from the origin. Then this subspace of the plane is clearly bounded, and it’s not hard to show that it’s closed. The Heine-Borel theorem tells us that it’s compact!
And now since the circle is compact we know that its image under the continuous map $\bar{f}$ must be compact as well! And since the image of $f$ is the same as the image of $\bar{f}$, it must also be a compact subspace of $\mathbb{R}$ — a closed, bounded interval. Neat.
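(A compact symbolic restatement of the argument above, added for reference.) If $f$ has period $p$, let $q:\mathbb{R}\rightarrow S^1$ be the quotient map $q(x)=e^{2\pi ix/p}$. Periodicity says exactly that $\bar{f}\left(q(x)\right):=f(x)$ is well-defined, and it is continuous because $f$ is; by construction $f=\bar{f}\circ q$. Since $S^1$ is compact and $\bar{f}$ is continuous, $f(\mathbb{R})=\bar{f}(S^1)$ is a compact, hence closed and bounded, subset of $\mathbb{R}$.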
Posted by John Armstrong | Analysis, Calculus
13 Comments »
1. It looks like you are using heavy machinery where some hand tools are enough. A continuous function over any closed interval is bounded, in particular, any continuous periodic function is.
Comment by | January 23, 2008 | Reply
2. But how do you know that a continuous function on a closed interval is bounded?
Comment by | January 23, 2008 | Reply
3. Suppose it’s unbounded, then there is a sequence $x_n$ such that $f(x_n) \rightarrow \infty$. We may assume that $x_n \rightarrow x$, otherwise take a convergent subsequence. By continuity we must have $f(x_n) \rightarrow f(x) \neq \infty$, a contradiction.
Comment by | January 24, 2008 | Reply
4. And you have a convergent subsequence by compactness. You’re actually reproving parts of the theorems I quoted but now in the language of sequential compactness (which is equivalent to compactness in metric spaces).
Yes, I’m using a different viewpoint than an advanced calculus textbook would, emphasizing the point-set topology underlying the analysis. Besides, I stumbled across something on another weblog that related back to my recent posts, so why not show the connection?
Comment by | January 24, 2008 | Reply
5. Here is another proof that does not use compactness. Let our function f be defined on [a,b], and let c be the least upper bound of those t for which f is bounded on [a,t]. Assume $c < b$; then f is bounded (by continuity at c) on some open interval (c-e,c+e), and also bounded on [a,c-e], therefore it's bounded on [a,c+e/2], so c is not an upper bound of that set, a contradiction. (A similar use of continuity at b handles the case c = b.)
Comment by | January 24, 2008 | Reply
6. Which is exactly the same method I used to prove that an interval is compact.
Comment by | January 24, 2008 | Reply
7. I guess your blog rubs me the wrong way because of too many complicated explanations of simple things and too few simple explanations of complicated things. It gives people a rather cartoonish view of mathematics. And by the way, do you plan to explain the topological meaning of gluing one end of a segment to the other, while you are still at it?
Comment by | January 24, 2008 | Reply
8. That’s a quotient space, as I described them earlier.
I’m not intending this to be a textbook so much as an overview, showing a certain natural flow emphasizing common themes throughout. It’s not a palette, it’s a painting.
Comment by | January 24, 2008 | Reply
9. Though there is a fair amount of heavy machinery involved in the posts, I do think that a more “general” approach to analysis is important in developing an understanding of the subject. And, emphasizing a point-set topological foundation of analysis goes a long way in “seeing” what’s actually going on.
Comment by | January 24, 2008 | Reply
11. John, this is one of the few places I know where it is possible to get an idea of how these many complex pieces fit together without spending a lifetime learning all the details about the pieces. Your blog provides motivation. It is balanced. It greatly facilitates a more detailed study, should that be desired. Clearly communicating ideas (even simple ones, let alone complex ones!) is one of the most difficult tasks we face; out of hundreds of people who “understand” something, we are lucky indeed to find even one who is willing and able to clearly explain it. Creating and maintaining this blog is a difficult task and a significant commitment. Know that it is greatly appreciated! Thank you!
Comment by Charlie C | January 24, 2008 | Reply
12. I’d echo Charlie C’s comment. The style of this blog is great for people who have yet to come across material being mentioned and is quite motivating in that regard.
I sometimes read posts here and don’t take the time to follow/understand them fully but just take the gist of it and try and then hopefully when I seriously study the aforementioned topic at least it will seem faintly familiar.
Comment by Jake | January 28, 2008 | Reply
13. [...] need to get down a few facts about metric spaces before we can continue on our course. Firstly, as I alluded in an earlier comment, compact metric spaces are sequentially compact — every sequence has a convergent [...]
Pingback by | January 31, 2008 | Reply
http://physics.aps.org/articles/print/v4/81
# Viewpoint: Few and Far Between
, The Institute of Mathematical Sciences C.I.T. Campus, Taramani, Chennai 600113, India
Published October 17, 2011 | Physics 4, 81 (2011) | DOI: 10.1103/Physics.4.81
How nodes connect to each other may explain why we don’t see certain classes of networks.
At the end of the 20th century, the study of networks—systems of nodes connected by links—took off as scientists realized how ubiquitous they were. Complex networks describe interactions among proteins in a cell, coordinate communication among the neurons in our brain, and govern how individuals in a society connect. The list goes on. Many fundamental questions about networks remain unanswered, however, including several on scale-free networks, which are characterized by a power-law distribution in the number of connections (degree) each node has.
Now, in a paper in Physical Review Letters, Charo Del Genio at the Max Planck Institute for the Physics of Complex Systems, Germany, and coauthors tell us that the only kind of scale-free network that is possible is one with an average degree that remains finite as its size becomes arbitrarily large [1]. The connection density $C$ of a network can be interpreted as the ratio of the average degree to the network size (the number of nodes, $N$), and so this result implies that all realizable scale-free networks are sparse ($C→0$ as $N→∞$). The paper should stimulate fresh activity in this area, as it appears to contradict earlier mathematical results [2] which suggested that growth by node and link duplication would give rise to dense scale-free networks.
Scale-free networks [3] have become prominent recently, but power-law degree distributions have been known for some time. In the 1960s, the polymath Derek de Solla Price proposed [4] a preferential attachment scheme—now colloquially referred to as the "rich get richer" effect, first studied by George Yule—that generates power-law distributions seen in many contexts, including several in economics, such as the Pareto law for income distribution [5]. For networks, preferential attachment works as follows [6]: Start with an existing network and at subsequent time steps add new nodes that connect to existing ones preferentially, according to their degree. If the probability that a new node will connect to a node of degree $k$ is proportional to $k$, then it can be shown that in the steady state the network exhibits a power-law degree distribution $P(k)∼k^{-γ}$, where the exponent $γ=3$. Since many networks that evolve by accumulating nodes (the internet is one example) show scale-free degree distributions, one expects the evolutionary algorithm above to describe their growth. However, the exponents of the scale-free networks that are actually prevalent, which can span a variety of values, appear to cluster mostly around $γ=2$, not $3$.
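As a concrete illustration of this growth rule (a minimal sketch, not code from the paper under discussion; the seed size, number of links per new node, and network size below are arbitrary choices):

```python
import random
from collections import Counter

def preferential_attachment(n, m=2, seed_size=3):
    """Grow a network of n nodes: each new node links to m existing nodes,
    chosen with probability proportional to their current degree."""
    # start from a small complete seed graph
    edges = [(i, j) for i in range(seed_size) for j in range(i + 1, seed_size)]
    # each node appears in this list once per incident edge, so sampling
    # uniformly from it is the same as sampling proportionally to degree
    endpoints = [v for e in edges for v in e]
    for new in range(seed_size, n):
        targets = set()
        while len(targets) < m:
            targets.add(random.choice(endpoints))
        for t in targets:
            edges.append((new, t))
            endpoints.extend((new, t))
    return edges

degree = Counter(v for e in preferential_attachment(20000) for v in e)
# the tail of this degree distribution should fall off roughly as k^-3
```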
Why does the value of the exponent matter? For one thing, it governs the statistics of the various moments of the distribution of the degree (including the average degree and its standard deviation). This has repercussions for the dynamical processes on the networks. An example is how a contagion spreads in a population where the contact network among individuals is scale-free. The epidemic can only spread if the rate of infection exceeds a threshold, which is the ratio of the first and second moments of the degree distribution. If the distribution is a power law with an exponent between $2$ and $3$, the second moment diverges so that this ratio is zero and the threshold disappears. In other words, even an infection with an arbitrarily small rate of spread can result in an epidemic that spans the entire population [7]. This may appear to have ominous consequences, but there is a silver lining. A sparse scale-free network is characterized by the existence of hubs—nodes with a very high degree compared to the average (Fig. 1)—that dominate the spreading process. Therefore, identifying and selectively immunizing the few “super-spreaders” would be a workable control strategy.
However, when the exponent is smaller than $2$, the first moment diverges with system size. This means that the number of hubs is no longer small but rather of the order of the size of the network. Selective control of such a large number of hubs is no longer an efficient strategy. Fortunately, with a few key exceptions, most social networks do not appear to be scale-free: diseases that spread through casual social interaction may have a finite threshold.
So, can there be scale-free networks that exhibit power-law behavior at arbitrarily high degrees and still have $γ<2$? The few empirical reports of such systems have not decisively settled the issue; they were mostly based on small data sets, with possibly unreliable statistics [8]. However, a mathematical result on scale-free networks that are grown by adding nodes that duplicate the links of a randomly selected existing node with a selection probability $p$ suggests that degree distribution exponents between $1$ and $2$ can be obtained for $p>0.5$ [2]. And there the matter seemed to rest. In their paper, however, Del Genio et al. claim that such networks can be ruled out purely on theoretical grounds. Although their proof does not depend on the exact procedure used to construct a network from its degree sequence, i.e., the complete list of the number of connections that each of its nodes has, it is instructive to consider one. For example, the Havel-Hakimi algorithm [9] begins with a graph with no links and then gradually constructs links consistent with a given degree sequence, arranged in non-increasing order. The first node, of degree $k_1$, is connected to the next $k_1$ vertices in this list and is then removed from the list. The list is rearranged and the process is repeated until either (a) all the degrees are properly assigned and the network is constructed successfully, or (b) the conditions cannot be fulfilled at some stage, so that the corresponding network cannot be constructed. Using analytical reasoning as well as numerical calculations, the authors show that no degree sequences corresponding to scale-free networks having exponents between $0$ and $2$ can be realized.
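The graphicality test at the heart of that construction is short enough to sketch (an illustrative sketch rather than the authors' actual procedure; the power-law sampler is a crude inverse-transform draw with an arbitrary cutoff):

```python
import random

def is_graphical(degrees):
    """Havel-Hakimi: can this degree sequence be realized by a simple graph?"""
    seq = sorted(degrees, reverse=True)
    while seq:
        seq.sort(reverse=True)
        d = seq.pop(0)              # node with the largest remaining degree
        if d == 0:
            return True             # everything left has degree zero
        if d > len(seq):
            return False            # not enough other nodes to attach to
        for i in range(d):          # connect it to the d next-largest nodes
            seq[i] -= 1
            if seq[i] < 0:
                return False
    return True

def powerlaw_degrees(n, gamma, kmax):
    """Sample n degrees with P(k) proportional to k^-gamma on 1..kmax."""
    ks = list(range(1, kmax + 1))
    weights = [k ** (-gamma) for k in ks]
    degs = random.choices(ks, weights=weights, k=n)
    if sum(degs) % 2:               # a graphical sequence needs an even degree sum
        degs[0] += 1
    return degs

# per the result discussed above, sampled sequences with 0 < gamma < 2
# become non-graphical ever more often as the number of nodes grows
for gamma in (2.5, 1.5):
    print(gamma, is_graphical(powerlaw_degrees(2000, gamma, 1999)))
```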
An important question the work raises is what it implies for earlier mathematical results on dense scale-free networks obtained by duplication processes. Could it be that realizable instances of such networks may exist in principle but are so rare that they will never be encountered in practice? Further research will hopefully clarify this issue. It is important to note that the new result does not imply that all networks have to be sparse. Indeed, dense networks where each node is connected to all others are easy to realize. Similarly, as the authors point out, degree distribution exponents smaller than $0$ correspond to dense networks. However, such networks are not the familiar scale-free networks. Thus, for real scale-free networks, the number of hubs will always be much smaller relative to the size of the network. This is not necessarily good news. While the sparse nature of the network will make it possible to formulate efficient immunization strategies based on identification of hubs, it also suggests that targeted attacks on a few vulnerable nodes in large complex technological systems, such as the internet, can bring them down. Recent results also show that sparse inhomogeneous networks are the most difficult systems to control by driving a small fraction of their nodes with input signals [10].
### References
1. C. I. Del Genio, T. Gross, and K. E. Bassler, Phys. Rev. Lett. 107, 178701 (2011).
2. F. Chung and L. Lu, Complex Graphs and Networks (American Mathematical Society, Providence, 2006)[Amazon][WorldCat].
3. G. Caldarelli, Scale-Free Networks (Oxford University Press, Oxford, 2007)[Amazon][WorldCat].
4. D. J. de Solla Price, Science 149, 510 (1965).
5. S. Sinha, A. Chatterjee, A. Chakraborti, and B. K. Chakrabarti, Econophysics: An Introduction (Wiley-VCH, Weinheim, 2011)[Amazon][WorldCat].
6. A.-L. Barabasi and R. Albert, Science 286, 509 (1999).
7. R. Pastor-Satorras and A. Vespignani, Phys. Rev. Lett. 86, 3200 (2001).
8. A. Clauset, C. R. Shalizi, and M. E. J. Newman, SIAM Review 51, 661 (2009).
9. S. L. Hakimi, SIAM J. Appl. Math. 10, 496 (1962).
10. Y.-Y. Liu, J.-J. Slotine, and A.-L. Barabasi, Nature 473, 167 (2011).
### Highlighted article
#### All Scale-Free Networks Are Sparse
Charo I. Del Genio, Thilo Gross, and Kevin E. Bassler
Published October 17, 2011 | PDF (free)
http://mathhelpforum.com/number-theory/92262-induction-problem.html
# Thread:
1. ## induction problem
i think this one is pretty easy but i'm not that great with induction. any help?
Show that $2^n > n$ for any integer n that is an element of $Z^+.$
my attempt:
base case - let n=1
$2^1 = 2 > 1$, so the base case is true
$2^{n+1} > n+1$
$2^n + 2 > n+1$
$2^n > n-1$
statement holds true by the base case: $2^1 > 1-1$
2. For n=1, $2^{1} = 2 > 1$.
Now assume $2^{k} > k$ for some $k \geq 1$.
Then $2^{k+1} = 2\cdot 2^{k} > 2k$ by the induction hypothesis,
and $2k = k+k \geq k+1$ since $k \geq 1$, so $2^{k+1} > k+1$, which is what we wanted to show.
http://mathhelpforum.com/algebra/133350-write-equation-quadratic-function-given-only-vertex.html
# Thread:
1. ## Write the equation of a quadratic function given only the vertex
The function has a minimum value of 6 at x = 4.
I know that part of the equation will be $y = a(x - 4) + 6$, though I have no idea how to get $a$. The answer is " $y = 3(x - 4)^2 +6$; answers may vary". The only thing I can think of would be to create a table of values to input an $x$ and $y$ value. Though I am unsure of how to do that. Any suggestions?
2. Originally Posted by shadow6
The function has a minimum value of 6 at x = 4.
I know that part of the equation will be $y = a(x - 4) + 6$, though I have no idea how to get $a$. The answer is " $y = 3(x - 4)^2 +6$; answers may vary". The only thing I can think of would be to create a table of values to input an $x$ and $y$ value. Though I am unsure of how to do that. Any suggestions?
What you'd have to do is expand that function, to get:
$y = (ax - 4a)(x - 4) + 6$
$y = ax^2 - 8ax + 16a + 6$
Then substitute $y = 6$ and $x = 4$:
$6 = 16a - 32a + 16a + 6$
Hence, $0 = 0$
Damn.
3. Originally Posted by shadow6
The function has a minimum value of 6 at x = 4.
I know that part of the equation will be $y = a(x - 4){\color{red}^2} + 6$, though I have no idea how to get $a$. The answer is " $y = 3(x - 4)^2 +6$; answers may vary". The only thing I can think of would be to create a table of values to input an $x$ and $y$ value. Though I am unsure of how to do that. Any suggestions?
Note the correction in red.
Without further information the value of a cannot be determined. The only thing that can be said is that a > 0. So either you have not posted all of the question or the question is incomplete.
4. Just read the answers to the succeeding question. Basically, a can be any real number; hence why answers may vary.
5. Originally Posted by shadow6
Just read the answers to the succeeding question. Basically, a can be any real number; hence why answers may vary.
No, the parabola has a minimum therefore a > 0, which is not the same as any real number.
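For concreteness (this extra condition is hypothetical and not given in the thread): if the parabola were also required to pass through, say, the point $(5, 9)$, then $a$ would be pinned down, since $9 = a(5-4)^2 + 6$ gives $a = 3$ and hence $y = 3(x-4)^2 + 6$, the book's answer. Any other choice of a point above the vertex would give a different positive $a$, which is presumably why the answer key says "answers may vary".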
6. ## Youtube
Originally Posted by shadow6
The function has a minimum value of 6 at x = 4.
I know that part of the equation will be $y = a(x - 4) + 6$, though I have no idea how to get $a$. The answer is " $y = 3(x - 4)^2 +6$; answers may vary". The only thing I can think of would be to create a table of values to input an $x$ and $y$ value. Though I am unsure of how to do that. Any suggestions?
Here's a video explanation:
http://math.stackexchange.com/questions/233541/finding-the-equivalence-class/233547
# Finding The Equivalence Class
Okay, so the question I am working on is, "Suppose that A is a nonempty set, and $f$ is a function that has A as its domain. Let R be the relation on A consisting of all ordered pairs $(x, y)$ such that $f(x)=f(y)$.
a)Show that R is an equivalence relation on A.
b)What are the equivalence classes of R?"
I was able to do part (a), but I am not certain how to answer part (b). I know that the domain is the set A, but what is the range? And when you say that $f(x)=f(y)$, does that mean that a function is equal to its inverse? Like, for instance, $\Large y=\frac{1}{x}$
-
We are not defining $y$ in terms of $x$ via the equation $f(x)=f(y)$. Nor are we defining the function in terms of this equation. Rather, the function is already a priori given to us, and we construct the set $R$ out of it, comprised precisely of all ordered pairs $(x,y)$ for which $f(x)=f(y)$ happens to holds true. – anon Nov 9 '12 at 23:48
So, when I plug in x into the functions f(x) and f(y), I should get the same output, namely, y? And that is the condition for an ordered-pair to be in the relation? – Mack Nov 9 '12 at 23:52
$f$ is a function that maps elements of $A$ to some other set, and $f(x)$ is the function $f$ evaluated at $x\in A$, i.e. the output of $f$ when the input is $x$. Just as $f(y)$ is the output of $f$ when the input is $y\in A$. You seem to be thinking in a strange mixture of different viewpoints, one of them involving $x$ as input and $y$ as output (often used in, say, graphing equations), which is not what is going on here. We are saying if we take two inputs $x,y$ from the domain $A$, they are in relation to each other ($x\sim y$) if and only if they have the same output under $f$. – anon Nov 10 '12 at 0:24
Could you read my comment on Peter Smith's response to my question? Because it seems that what I said was correct, even though he thought it was wrong. – Mack Nov 10 '12 at 0:29
No, f(x) and f(y) are not two different functions. We say either (a) $x\mapsto f(x)$ and $y\mapsto f(y)$ are the same function [sometimes the $\mapsto$ part is suppressed, so we say that $f(x)$ is a function, but clearly having more then one input is confusing you], or (b) neither $f(x)$ or $f(y)$ are functions, but are rather values taken in the codomain, and are in particular the outputs of f corresponding to the inputs x and y respectively. It is not correct that x=y is necessary for f(x)=f(y). (If f is a constant function, for instance, then f(x)=f(y) is true for any pair x,y.) – anon Nov 10 '12 at 0:45
## 3 Answers
The equivalence classes are all sets of the form $\{a\in A:f(a)=b\}$ for $b$ in the range of $f$. $f(x)=f(y)$ simply mean that $x$ and $y$ are mapped to the same element, not that the function is its inverse.
-
So, $f(x)=b=f(y)$ when $a$ is substituted into the function $f$? – Mack Nov 10 '12 at 15:31
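To make the preimage description concrete on a finite toy example (an illustrative sketch added here, not part of the original answer; the set and function below are arbitrary):

```python
from collections import defaultdict

def equivalence_classes(A, f):
    """Group the elements of A by their image under f:
    x ~ y iff f(x) == f(y), so each class is a preimage f^{-1}({b})."""
    classes = defaultdict(set)
    for a in A:
        classes[f(a)].add(a)
    return list(classes.values())

# toy example with f(x) = x^2 on a small symmetric set
A = [-3, -2, -1, 0, 1, 2, 3]
print(equivalence_classes(A, lambda x: x * x))
# classes: {-3, 3}, {-2, 2}, {-1, 1}, {0}
```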
Let's say that $f$ discriminates $x$ from $y$ if it maps $x$ and $y$ to different objects, i.e. if $f(x) \neq f(y)$. Then the corresponding $R$ relation is defined to hold between $x$ and $y$ when $f$ doesn't discriminate them.
"When you say that $f(x)=f(y)$, does that mean that a function is equal to its inverse?" Not at all. $f$ can be any function you like here: to repeat, we are just told that the corresponding $R$-relation is defined to hold between $x$ and $y$ just when the function $f$ (whatever it is) happens to map $x$ and $y$ to the same thing, so doesn't discriminate.
So, as you say, that evidently makes $R$ (i.e. being indiscriminable-by-$f$) an equivalence relation. What are its equivalence classes? Well they must be classes such that objects are in the same class when $f$ doesn't discriminate them. It is as simple as that!
-
So, let me see if I understand. As you said, $f(x)$ and $f(y)$ do not have to be inverses of each other--one is just some function of x, and the other is just some function of y. And so, an ordered pair $(x,y)$ can only be in the relation on the set A if, when I plug x into the $f(x)$, and y into $f(y)$, the two functions have the same output value? I am still having trouble seeing how to find part b. In the answer key, they talk about inverse images. – Mack Nov 9 '12 at 13:33
@EMACK: No, $f$ is the function — just $f$. $f(x)$ is the value given by applying that function to some variable named $x$, $f(y)$ is the value given by applying that same function to $y$, $f(q)$ is the value given by applying the function to $q$, $f(\aleph)$ is the value given by applying $f$ to $\aleph$, and so on. – Ilmari Karonen Nov 9 '12 at 13:57
By convention, functions are often defined by an expression that gives their value for some argument $x$, such as "$f(x) = x^2 - x + 1$". This just means that, when you apply the function to some value that is not named $x$, you'll get the value of the function by replacing all occurrences of $x$ in the expression with the actual argument of the function, e.g. $f(z/2) = (z/2)^2 - z/2 + 1$. – Ilmari Karonen Nov 9 '12 at 14:03
@EMACK As Ilmari Karonen is gently pointing out, you do seem to have very fundamentally misunderstood standard function notation. What to do? As always in such a case, read two or three explanations in different textbooks: with luck, reading more than one presentation will help you iron out any bad misunderstandings. – Peter Smith Nov 9 '12 at 15:09
I will give an example:
Let $f:\mathbb{R}\to \mathbb{R}$ with $f(x)=x^2$.
Then $x$ is equivalent to $-x$ because $f(x)=f(-x)$.
The equivalence classes are $\bar{x} = \{y \in \mathbb{R}:f(x)=f(y)\}=\{x,-x\}$ for $x \in \mathbb{R}$. Thus the equivalence classes are $\{\bar{x}:x\geq0\}$.
Observe that $\{x,-x\}=f^{-1}\left(\{x\}\right)$. In general the equivalence classes are $\{f^{-1}\left(\{y\}\right): \ y \text{ in range of } f\}$.
-
If $f^{-1}$ doesn't exist, you can't write $f^{-1}( y )$ but you can write $f^{-1}( \{ y \} )$ which is the set of the numbers whose image by $f$ is $y$. – xavierm02 Nov 9 '12 at 13:29
Correct, I will edit. – P.. Nov 9 '12 at 13:34
http://cms.math.ca/10.4153/CMB-2003-035-1
Canadian Mathematical Society
www.cms.math.ca
# Some Questions about Semisimple Lie Groups Originating in Matrix Theory
http://dx.doi.org/10.4153/CMB-2003-035-1
Canad. Math. Bull. 46(2003), 332-343
Published:2003-09-01
Printed: Sep 2003
• Dragomir Z. Đoković
• Tin-Yau Tam
## Abstract
We generalize the well-known result that a square traceless complex matrix is unitarily similar to a matrix with zero diagonal to arbitrary connected semisimple complex Lie groups $G$ and their Lie algebras $\mathfrak{g}$ under the action of a maximal compact subgroup $K$ of $G$. We also introduce a natural partial order on $\mathfrak{g}$: $x\le y$ if $f(K\cdot x) \subseteq f(K\cdot y)$ for all $f\in \mathfrak{g}^*$, the complex dual of $\mathfrak{g}$. This partial order is $K$-invariant and induces a partial order on the orbit space $\mathfrak{g}/K$. We prove that, under some restrictions on $\mathfrak{g}$, the set $f(K\cdot x)$ is star-shaped with respect to the origin.
MSC Classifications:
• 15A45 - Miscellaneous inequalities involving matrices
• 20G20 - Linear algebraic groups over the reals, the complexes, the quaternions
• 22E60 - Lie algebras of Lie groups {For the algebraic theory of Lie algebras, see 17Bxx}
http://physics.stackexchange.com/questions/47813/what-does-a-subatomic-charge-actually-mean/47816
# What does a subatomic charge actually mean?
I was recently reading a popular science book called The Canon - The Beautiful Basics of Science by Natalie Angier, and it talks about subatomic particles like protons, neutrons and electrons in chapter 3. I came across this section on subatomic charges that made me wonder about the nature of the positive and negative charges that we associate with protons and electrons respectively.
When you talk about a fully charged battery, you probably have in mind a battery loaded with a stored source of energy that you can slip into the compartment of your digital camera to take many exciting closeups of flowers. Saying that the proton and electron are charged particles while the neutron is not, however, doesn't mean that the proton and electron are little batteries of energy compared to the neutron. A particle's charge is not a measure of the particle's energy content. Instead, the definition is almost circular. A particle is deemed charged by its capacity to attract or repel other charged particles.
I found this definition/description a bit lacking, and I still don't grasp the nature of a "subatomic charge", or what do physicists mean when they say that a proton is positively charged and electron is negatively charged?
-
The same thing they mean when they talk about macroscopic objects being (electrically) charged. Indeed, macroscopic charges are explained in terms of microscopic charges, but at some level you must get down to "because these behaviors are observed and are correctly modeled with this math". – dmckee♦ Dec 28 '12 at 23:34
## 3 Answers
When physicists say that a particle has electric charge, they mean that it is either a source or sink for electric fields, and that such a particle experiences a force when an electric field is applied to them.
In a sense, a single pair of charged particles are a battery, if you arrange them correctly and can figure out how to get them to do useful work for you. It is the tendency for charged particles to move in an electric field that lets us extract work from them.
A typical electronic device uses moving electrons to generate magnetic fields (moving electrons cause currents, and currents generate magnetic fields) and these magnetic fields can move magnets, causing a motor to turn. What is happening at a fundamental level is that an electric field is being applied (via the potential across the battery) that is causing those electrons to move.
If I wanted a magnetic field to be generated, I could get one from a single pair of charges, say, two protons placed next to one another. The protons will repel (like charges repel) and fly away from each other. These moving protons create a current (moving charge) which creates a magnetic field.
Your author is right when he says that charges attract or repel other charges. To help connect it to more familiar concepts, consider this: The negative end of your battery terminal attracts electrons and the positive end repels them. (The signs of battery terminals are actually opposite the conventional usage of positive and negative when referring to elementary charges. As a physicist, I blame electrical engineers.) The repelled and attracted electrons start moving, and these moving electrons can be used to do work.
-
To expand on my comments, electrical phenomena include a force between electrically charged objects of $\vec{F}_1 = -k\frac{q_1 q_2}{r^2_{1,2}}\hat{r}_{1,2}$ where $q_1$ and $q_2$ are the charges on two objects, $r_{1,2}$ is the position vector from object 1 to object 2 and $k$ is a constant of proportionality that depends on the system of units you choose.
We can measure those forces very simply in a macroscopic setting and with a little more work in a microscopic setting.
And this interaction (and the rest of electrodynamics) is essentially how we define electrical charges at all scales.
The "little more work" I refer to above means things like scattering experiments where you throw one charged particle past another and look at how the angle of deflection varies with the impact parameter.
-
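To attach a number to that force law (a quick illustrative calculation, not part of the original answer; constants are rounded SI values):

```python
# Coulomb's law, F = k q1 q2 / r^2, in SI units
k = 8.988e9           # Coulomb constant, N m^2 / C^2
e = 1.602e-19         # elementary charge, C

def coulomb_force(q1, q2, r):
    """Magnitude of the electrostatic force between point charges q1 and q2 at separation r."""
    return k * q1 * q2 / r**2

# two protons one femtometre apart (roughly nuclear separations)
print(coulomb_force(e, e, 1e-15))   # about 2.3e2 newtons
```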
$1$. The nature of a "subatomic charge" "The reason for the electric charge is that the quarks couple to a U(1) gauge vector-potential also known as the photon, which endows the quark with an electric charge. .. The theory encompassing the quarks and their properties is called Yang-Mills theory with associated Lie algebras U(1), SU(2), and S(3) which are called gauge groups."
quoted from Krchov2000 See Yang-Mills theory (Wikipedia).
However, I can't just say that charge is a fundamental property which we must just accept to be true without an explanation.Rather, the explanation is too complicated to just 'answer', you need to study entire subjects to get the 'real' explanation. You need to learn stuff like Yang-Mills theory to actually understand the nature of subatomic charge.
$2$. What do physicists mean when they say that a proton is positively charged and electron is negatively charged?
Charge refers to a property. When a particle carries a charge of a certain magnitude, we get an electrical force of a certain magnitude according to equations like $F=\frac {kq_1 q_2}{r^2}$, where $q_1$ and $q_2$ are the charges of two particles.
The positive and negative signs are just a matter of definition: if we wanted, we could define protons to be negatively charged and electrons to be positively charged. What really matters is to understand that protons and electrons have opposite charges.
-
While this is correct, I'm not sure that it is the best approach to take when the questioner is unsure of what is meant by "charge". – dmckee♦ Dec 28 '12 at 23:51
http://math.stackexchange.com/questions/118633/summation-of-a-doubly-infinite-series
summation of a doubly infinite series
I want to show that $\sum\limits_{n=-\infty}^{\infty} \frac{(-1)^{n}}{x + \pi n} = \csc x$.
If you let $f(z) = \frac{\pi \csc \pi z}{x + \pi z}$ and integrate around the square in the complex plane that has vertices at $(\pm 1 \pm i)(N+\frac{1}{2})$, the integral does not evaluate to zero when you let $N$ go to $\infty$ (at least I don't think it does). So that must not be the right contour for this problem.
-
1 Answer
First of all, the series as written does not converge absolutely, which makes doing many manipulations difficult. Thus, use the principal value: $$f(z)=\frac1z+2z\sum_{n=1}^\infty\frac{(-1)^n}{z^2-\pi^2n^2}\tag{1}$$ Looking at $f$ as $\lim\limits_{N\to\infty}\sum\limits_{n=-N}^N \frac{(-1)^{n}}{z + \pi n}$ yields that $f$ is $2\pi$-periodic: $$\begin{align} &f(z+2\pi)-f(z)\\ &=\lim_{N\to\infty}\left(\frac{(-1)^{N-1}}{z+\pi(N+1)}+\frac{(-1)^N}{z+\pi(N+2)}-\frac{(-1)^{-N}}{z-\pi N}-\frac{(-1)^{-N+1}}{z-\pi(N-1)}\right)\\ &=0\tag{2} \end{align}$$ Pairing up odd and even terms and taking the limit as $\operatorname{Im}(z)\to\pm\infty$ $$\begin{align} \lim f(z) &=\frac1z+2z\sum_{n=1}^\infty\frac{(-1)^n}{z^2-\pi^2n^2}\\ &=\frac1z+2z\sum_{n=1}^\infty\frac{4\pi^2n-\pi^2}{(z^2-4\pi^2n^2)(z^2-4\pi^2n^2+4\pi^2n-\pi^2)}\\ &=\frac1z+\frac2z\sum_{n=1}^\infty\frac{(4\pi^2n-\pi^2)/z\;1/z}{(1-4\pi^2n^2/z^2)(1-(4\pi^2n^2+4\pi^2n-\pi^2)/z^2)}\\ &=\frac1z-\frac2z\int_0^\infty\frac{4\pi^2t\;\mathrm{d}t}{(1+4\pi^2t^2)^2}\\ &=0\tag{3} \end{align}$$
where the sum is (in the limit) a Riemann sum for the integral above.
Next, notice that the poles all match up and have the same residue. Thus, $f(z)-\csc(z)$ has no poles.
Therefore, $f(z)-\csc(z)$ is $2\pi$-periodic, has no poles, and $\lim\limits_{\operatorname{Im}(z)\to\pm\infty}f(z)-\csc(z)=0$ implies $f(z)-\csc(z)$ is bounded. By Liouville's Theorem, $f(z)-\csc(z)$ is a constant, which must be $0$.
Thus, $$\frac1z+2z\sum_{n=1}^\infty\frac{(-1)^n}{z^2-\pi^2n^2}=\csc(z)\tag{4}$$
-
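As a quick numerical sanity check of $(4)$ (added here, not part of the original answer; the symmetric truncation mirrors the principal-value reading used above):

```python
import math

def partial_sum(x, N):
    """Symmetric partial sum of sum_{n=-N}^{N} (-1)^n / (x + pi*n)."""
    return sum((-1) ** n / (x + math.pi * n) for n in range(-N, N + 1))

x = 0.7
for N in (10, 100, 1000):
    print(N, partial_sum(x, N))
print("csc(x) =", 1 / math.sin(x))   # the partial sums approach this value
```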
http://mathoverflow.net/revisions/11308/list
## Return to Answer
Post Undeleted by Steven Sam
2 deleted 136 characters in body
If I'm not mistaken, the extra information is not contained in the representation ring, you have to look at the category of representations. In particular, you want to look at the representation category equipped with its forgetful functor to vector spaces. Then the group can be recovered as the automorphisms of this functor. Here's a blogpost I wrote which may be helpful: http://concretenonsense.wordpress.com/2009/05/16/tannaka%E2%80%93krein-duality/
Revision 1 (the original answer, later deleted by Steven Sam):
The extra information is in the Hopf algebra structure. In particular, the group can be recovered as the group of group-like elements (if $\Delta$ denotes the coproduct, then the group-like elements are those which satisfy $\Delta(x) = x \otimes x$). Just considering the algebra structure (over the complex numbers), you only see the dimensions of the irreducible representations (by the Artin-Wedderburn's theorem http://en.wikipedia.org/wiki/Artin%E2%80%93Wedderburn_theorem).
Here's a related question: http://mathoverflow.net/questions/500/finite-groups-with-the-same-character-table
http://math.stackexchange.com/questions/22650/how-do-quaternions-represent-rotations
How do quaternions represent rotations?
I wonder how $qvq^{-1}$ gives the rotated vector of $v$. Is there any easy-to-understand proof for it?
I was on Wikipedia, but I could not understand the proof there because of the conversions.
Why is $uv-vu$ the same as $2(u \times v)$, and why is $uvu$ the same as $v(uu)-2(uv)u$?
-
2 Answers
A (real) quaternion is a "number" of the form $q=a+bi+cj+dk$ where the coefficients $a$, $b$, $c$ and $d$ are real numbers and $i^2=j^2=-1$, $ij=k$, $jk=i$, $ki=j$, $ji=-k$ and so on.
The conjugate quaternion is $\overline{q}=a-bi-cj-dk$, and the (reduced) quaternionic norm of $q$ is the real number $q\overline{q}$.
Pure quaternions, i.e. those for which $a=0$ or equivalently $\overline{q}=-q$ form a 3-dimensional real space, an obvious basis being $\{i,j,k\}$. The quaternionic norm restricted to the space of pure quaternions turns out to be simply the euclidean norm.
The transformation $u\mapsto quq^{-1}$ preserves the space of pure quaternions and preserves the norm, so it can be read as an orthogonal transformation of ${\Bbb R}^3$ which moreover preserves orientation.
One concludes by recalling that an orientation-preserving orthogonal transformation of ${\Bbb R}^3$ is a rotation around some axis.
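Not part of the answer above, but the claim is easy to test numerically. The sketch below stores a quaternion as an array $[a,b,c,d]=a+bi+cj+dk$ (an ad hoc convention) and checks that $v\mapsto qvq^{-1}$, with $q=\cos(\theta/2)+\sin(\theta/2)\,k$, rotates $(1,0,0)$ by $\theta$ about the $z$-axis while preserving its length.

```python
# Illustrative only: quaternion rotation u -> q u q^{-1} acting on pure quaternions.
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions stored as [a, b, c, d] = a + bi + cj + dk
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,
        a1*b2 + b1*a2 + c1*d2 - d1*c2,
        a1*c2 - b1*d2 + c1*a2 + d1*b2,
        a1*d2 + b1*c2 - c1*b2 + d1*a2,
    ])

def qconj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def rotate(q, v3):
    v = np.array([0.0, *v3])                 # embed v in the pure quaternions
    qinv = qconj(q) / np.dot(q, q)           # q^{-1} = conj(q) / |q|^2
    return qmul(qmul(q, v), qinv)[1:]        # back to R^3

theta = 2.0
axis = np.array([0.0, 0.0, 1.0])             # rotate by theta about the z-axis
q = np.concatenate(([np.cos(theta / 2)], np.sin(theta / 2) * axis))
v = np.array([1.0, 0.0, 0.0])
print(rotate(q, v))                                       # ~ [cos(theta), sin(theta), 0]
print(np.linalg.norm(rotate(q, v)), np.linalg.norm(v))    # the norms agree
```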
-
What does the adjective "reduced" signify in "(reduced) quaternionic norm"? – Jason DeVito Feb 18 '11 at 13:50
1
Well, if you have a finite algebra $A$ over a field $F$, the norm of an element $a\in A$ is the determinant of the multiplication by $a$ thought of as an $F$-linear map $A\rightarrow A$. In the quaternion case, reduced means that instead of taking this as the norm, you take its square root. Since the quaternions are 4-dimensional over $\Bbb R$, the reduced norm defines a quadratic form, which is what one would expect from an euclidean norm. – Andrea Mori Feb 18 '11 at 15:56
But I still don't understand the 'mechanics' behind the quaternion rotation (why $qvq^{-1}$ gives the rotated vector). And I am still curious how the mentioned conversions ($uv-vu \to 2(u\times v)$ and $uvu \to v(uu)-2(uv)u$) work. – user7217 Feb 18 '11 at 20:50
A proof is outlined here, although I skipped a few computations you should verify.
-
Not that I get any of it, but "maximally nice?" What is maximally nice? – bobobobo Oct 9 '11 at 21:46
@bobobobo: it was a mathematical figure of speech. I just mean "extremely nice." – Qiaochu Yuan Oct 9 '11 at 21:59
http://mathhelpforum.com/calculus/168522-double-integrals.html
1. ## double integrals
Could someone go through this step step by step?
I get stuck at part ii
Thanks
2. Originally Posted by bille
Could someone go through this step step by step?
I get stuck at part ii
Thanks
Assuming that you have done part (i), the essential next step is to draw a diagram of the domain D. That should enable you to describe D in terms of u and v, namely that D is enclosed by the lines $u=1$, $u=e$, $v=0$ and $v=\pi$. The change of variables formula then tells you that the integral is equal to
$\displaystyle\int_1^e\int_0^\pi\frac{\sin v}{x^2}\,\frac{x^2}u\,dvdu$.
The " $x^2$"s conveniently cancel, and from then on the integral should be easy.
3. Sorry, what is the change of variable formula?
The formula says that if you make a change of variables from $(x,y)$ to $(u,v)$, so that $f(x,y)$ becomes $g(u,v)$, then the integral $\displaystyle\iint_Df(x,y)\,dxdy$ becomes $\displaystyle\iint_{D}g(u,v)\left|\frac{\partial(x,y)}{\partial(u,v)}\right|dudv$, where the second integral is taken over the corresponding region in the $(u,v)$-plane.
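For what it's worth (this check is mine, not part of the thread, and assumes sympy is available), the integral that remains after the $x^2$'s cancel evaluates to $2$:

```python
# Evaluate the integral of sin(v)/u over 0 <= v <= pi, 1 <= u <= e.
import sympy as sp

u, v = sp.symbols('u v', positive=True)
print(sp.integrate(sp.sin(v) / u, (v, 0, sp.pi), (u, 1, sp.E)))   # prints 2
```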
http://mathoverflow.net/questions/16632/stiefel-whitney-classes-over-integers/57043
## Stiefel-Whitney Classes over Integers?
An interesting thing happened the other day. I was computing the Stiefel-Whitney numbers for $\mathbb{C}P^2$ connect sum $\mathbb{C}P^2$ to show that it was a boundary of another manifold. Of course, one can calculate the signature, check that it is non-zero and conclude that it can't be the boundary of an oriented manifold. I decided it might be interesting to calculate the first and only Pontrjagin number to check that it doesn't vanish. I believe Hirzebruch's Signature Theorem can be used to show that it is 6, but I was interested in relating the Stiefel-Whitney classes to the Pontrjagin classes.
I believe one relation is
$p_i (\mathrm{mod} 2) \equiv w_{2i}^2$ (pg. 181 Milnor-Stasheff)
So I went ahead and did a silly thing. I took the first Chern classes of the original connect-sum pieces, say $3a$ and $3b$, and used the fact that the inclusion should restrict my second "Stiefel-Whitney class" (scare quotes because we haven't reduced mod 2) on each piece to these two, to get $w_2(\text{connect sum})=(3\bar{a},3\bar{b})$. I can use the intersection form to square this and get $3\bar{a}^2+3\bar{b}^2=6c$, since the top-dimensional elements in a connect sum are identified. Evaluating this against the fundamental class gives us exactly the first Pontrjagin number! This is false: it should be $9+9=18$, as pointed out below, which does away with my supposed miracle example. My apologies!
This brings me to a broader question, namely of defining Stiefel-Whitney Classes over the integers. This was hinted at in Ilya Grigoriev's response to Solbap's question when he says
On thing that confuses me: why are the pullbacks of the integer cohomology of the real Grassmanian never called characteristic classes?
Of course the natural reason to restrict to $\mathbb{Z}/2$ coefficients is to get around orientability concerns. But it seems like if we restrict our attention to orientable bundles, we could use a construction analogous to that of the Chern classes, where Milnor-Stasheff inductively declare the top class to be the Euler class, then look at the orthogonal complement bundle over the total space minus its zero section and continue. I suppose the induction might break down because the complex structure is being used, but I don't see where explicitly. If someone could tell me where the complex structure is being used directly, I'd appreciate it. Note that the Euler class on odd-dimensional fibers will be 2-torsion, so this might produce interesting behavior in this proposed S-W class extension.
Another way of extending Stiefel-Whitney classes would be to use Steenrod squares. Bredon does use Steenrod powers with coefficient groups other than $\mathbb{Z}/2$ (generally $\mathbb{Z}/p$ $p\neq 2$), but this creates awkward constraints on the cohomology groups. Is this an obstruction to extending it to $\mathbb{Z}$ coefficients? It would be interesting to see what these two proposed extensions of S-W classes do and how they are related.
-
2
Stiefel-Whitney classes were originally defined as obstruction classes to sections of Stiefel bundles of a manifold. If you take the pull back of integral homology you no longer get these kinds of obstructions. Your question seems to be more aimed towards "why do we call the pull-back of certain cohomology classes characteristic, and others not?" – Ryan Budney Feb 27 2010 at 23:08
That's funny - I was just about to ask this question when I saw yours. – Ilya Grigoriev Feb 28 2010 at 2:26
## 5 Answers
I'm grateful to Allen Hatcher, who pointed out that this answer was incorrect. My apologies to readers and upvoters. I thought it more helpful to correct it than delete outright, but read critically.
If $X$ and $Y$ are cell complexes, finite in each degree, and two maps $f_0$ and $f_1\colon X\to Y$ induce the same map on cohomology with coefficients in $\mathbb{Q}$ and in $\mathbb{Z}/(p^l)$ for all primes $p$ and natural numbers $l$, then they induce the same map on cohomology with $\mathbb{Z}$ coefficients. To see this, write $H^n(Y;\mathbb{Z})$ as a direct sum of $\mathbb{Z}^{r}$ and various primary summands $\mathbb{Z}/(p^k)$, and note that the summand $\mathbb{Z}/(p^k)$ restricts injectively to the mod $p^l$ cohomology when $l\geq k$. One can take only those $p^l$ such that there is $p^l$-torsion in $H^\ast(Y;\mathbb{Z})$. (I previously claimed that one could take $l=1$, which on reflection is pretty implausible, and is indeed wrong.)
We can try to apply this to $Y=BG$, for $G$ a compact Lie group. For example, $H^{\ast}(BU(n))$ is torsion-free (and Chern classes generate the integer cohomology), and so rational characteristic classes suffice. In $H^{\ast}(BO(n))$ and $H^{\ast}(BSO(n))$ there's only 2-primary torsion. That leaves the possibility that the mod 4 cohomology contains sharper information than the mod 2 cohomology. It does not, because, as Allen Hatcher has pointed out in this recent answer, all the torsion is actually 2-torsion.
It's sometimes worthwhile to consider the integral Stiefel-Whitney classes $W_{i+1}=\beta_2(w_i)\in H^{i+1}(X;\mathbb{Z})$, the Bockstein images of the usual ones. These classes are 2-torsion, and measure the obstruction to lifting $w_i$ to an integer class. For instance, an oriented vector bundle has a $\mathrm{Spin}^c$-structure iff $W_3=0$.
[I'm sceptical of your example in $2\mathbb{CP}^2$. So far as I can see, $3a+3b$ squares to 18, not 6, and indeed, $p_1$ is not a square.]
-
Thank you for pointing out that! I guess I wanted to see 6 and I saw 6. – Justin Curry Feb 27 2010 at 23:13
This is a very interesting answer; I really enjoyed reading it. Also, it also made quite a few more questions appear in my mind. First, you imply that the cohomology of the Grassmanian with Q coefficients corresponds to the Pontryagin classes, is that true? Secondly, did people ever try to calculate the cohomology of the Grassmanian with Z/p coefficients where p is not 2? Are the resulting classes interesting? Are they related to Steenrod powers the same way as the Stiefel-Whitney classes related to Steenrod Squares? Oh, and can you see them geometrically (e.g. like obstructions) in any way? – Ilya Grigoriev Feb 28 2010 at 3:01
1
@Ilya: Thanks - the cohomology mod 2-torsion of a real (unoriented) Grassmannian is polynomial in the Pontraygin classes; see Hatcher, "Vector bundles and K-theory", Theorem 3.16. Over Z/p for p odd, you'll therefore just get the polynomials in the mod p Pontryagin classes. As you say, one should get examples of mod p char classes by applying the Steenrod p-powers in the Thom space; maybe it would be fun to work out a formula for them. To say that a char class is zero mod p (i.e. divisible by p) is the sort of thing that the signature theorem or other index theorems sometimes tell us. – Tim Perutz Feb 28 2010 at 4:33
Of course any cohomology class of $BG$, with any coefficients, serves as a characteristic class of $G$-principal bundles ($G$ is an arbitrary group). This is more or less the definition of characteristic classes. However, if $G=O(n)$ or $SO(n)$, it is quite difficult to get a hand on integral coefficients. John Klein gave a link here:
http://mathoverflow.net/questions/56932/what-characteristic-class-information-comes-from-the-2-torsion-of-hbsonz
To see the essential ingredients for the definition of Stiefel-Whitney classes for real vector bundles (and similar series), it is helpful to ignore Milnor-Stasheff and forget about the cell decompositions of Grassmannians for a moment. (I learnt the following definition from Matthias Kreck) Let $V \to X$ be a real $n$-dimensional vector bundle and $L \to RP^{\infty}$ be the universal line bundle. The external tensor product $V \boxtimes L$ is a bundle over $X \times RP^{\infty}$. It has an Euler class $e \in H^n (X \times RP^{\infty};Z/2)$. Use Kuenneth to write this group as $\oplus_{k=0,...,n} H^k (X) \otimes H^{n-k} (RP^{\infty})$. Under this isomorphism, $e$ becomes $\sum_k w_k(V) \otimes x^{n-k}$, $x$ the generator of $H^{\ast}(RP^{\infty})$.
The same construction yields the Chern classes, replacing $Z/2$ by $Z$ and $R$ by $C$ throughout.
What you see from this construction is that if you wish to have integral classes, you need the Euler class, i.e. orientability. But, no matter whether $V$ is oriented or not, the bundle $V \boxtimes L$ is not oriented.
What you can do is to replace $L \to RP^{\infty}$ by the universal $2$-dimensional oriented vector bundle $U \to BSO(2)=CP^{\infty}$. The point is that $U$ and hence $V \boxtimes U$ is a complex vector bundle and hence oriented. More precisely
$$V \boxtimes_R U \cong V \boxtimes_R (C \otimes_C U) = (V \otimes_R C) \boxtimes_C U.$$
You get the Pontrjagin classes! You can play the same game with the quaternions and the universal quaternionic line bundle $H \to HP^{\infty}$. Here it is important that for each quaternionic vector bundle $V \to X$, the bundle $V \boxtimes_H H$ is only real oriented and not complex. The classes obtained in this way are also called Pontrjagin classes.
Having defined these classes, one computes the cohomology of the classifying spaces $BG(n)$ ($G=U,O,SO,Sp$) with different coefficient rings $A$ with the help of the Gysin sequence of the sphere bundle $BG(n) \to BG(n+1)$ and induction on $n$. An important point is that the computation goes smoothly if (and only if!) the Euler numbers of the occurring spheres are either zero or invertible (in $A$). Of course, the two cases produce quite different-looking results.
If $G=U$ or $G=Sp$, all spheres are odd-dimensional and thus have zero Euler number. Thus the compuation goes well for any $A$.
If $G=O,SO$, then there are even-dimensional spheres around, with Euler number $2$. Therefore the computation is smooth with $Z/2$-coefficients and also if $2$ is invertible in the coefficient ring. But the results are really different in characteristic $2$ and $\neq 2$! If $2$ is neither zero nor invertible in the coefficient ring, things become messy at this point.
-
The definition you mention of the SW classes reminds me of the construction of the Steenrod operations given in Steenrod & Epstein. This answer is very enlightening! – Sean Tilson Mar 1 2011 at 22:24
@Johannes: Do you know if a similar construction (taking the Euler class of exterior tensor product with the canonical line bundle) yields the SW-classes for unoriented cobordism? – Mark Grant Sep 7 2011 at 11:22
@Mark: I am not sufficiently familiar with the computational aspects of bordism theory to answer this question quickly. – Johannes Ebert Sep 7 2011 at 18:22
@Johannes: Thanks anyway. I think the answer is no, the coefficients of the formal group law are involved in the formula (this is what led me to ask mathoverflow.net/questions/74770/…) – Mark Grant Sep 9 2011 at 14:52
The "orthogonal complement" construction of Chern classes works like this: if $V\to X$ is a $U(n)$-bundle, let $p:S(V)\to X$ be the sphere bundle, and write $p^*V\approx W\oplus \mathbb{C}$ for the decomposition of the pullback. We want to define $c_{n-1}(V)$ to be $e(W)$, but this is in the wrong group; to obtain a well-defined class in $H^{2n-2}X$, we need to know that $$p^*:H^{2n-2}X \to H^{2n-2}S(V)$$ is an isomorphism. It is, because the fiber of the bundle $p$ is $S^{2n-1}$, which is $(2n-2)$-connected.
If we try to do this for an $SO(n)$-bundle, we'd want to know that $$p^*: H^{n-1}X\to H^{n-1}S(V)$$ is an isomorphism. But now the fiber is $S^{n-1}$, which is only $(n-2)$-connected. So the map $p^*$ on cohomology can fail to be surjective. So you may not be able to lift your element to $H^{n-1}(X)$. This fails already for the universal $SO(3)$-bundle; in this case, the Euler class $e(W)\in H^2S(V)=H^2BSO(2)$ doesn't lift to $H^2BSO(3)$.
-
The integral cohomology rings of both $BO(n)$ and $BSO(n)$ were computed by E. H. Brown, Proceedings AMS, 85, 2, 1982, p. 283-288. These rings are generated by the Pontrjagin classes, Bocksteins of monomials in even Stiefel-Whitney classes and, in the case of $BSO(2k)$, the Euler class. The description is as follows. All torsion is 2-torsion. The subalgebra generated by the Pontrjagin classes (and the Euler class in the case of $BSO(2k)$) has no torsion and is subject to just one relation: the square of the Euler class is the corresponding Pontrjagin class in the $BSO(2k)$ case. The torsion ideal can be identified with the $A$-submodule of the mod 2 cohomology generated by the image of $Sq^1$ where $A$ is the subalgebra generated by the reductions of the Pontrjagin classes, and the reduction of the Euler class in the case of $BSO(2k)$. The key observation is Lemma 2.2.
The cohomology of $BO(n)\times BO(m)$ and $BSO(n)\times BSO(m)$ can be described in a similar way. E.H. Brown also computes the images of the Pontrjagin and Euler classes under the Whitney sum maps $BO(n)\times BO(m)\to BO(n+m),BSO(n)\times BSO(m)\to BSO(n+m)$. The Euler classes behave as expected; the torsion component of the images of the Pontrjagin classes is a bit more complicated. Finally, the image of the Bockstein of a monomial in the Siefel-Whitney classes can be computed using Lemma 2.2 and the action of the Steenrod algebra on the mod 2 cohomology.
So "integral characteristic classes" do not give any new tools for distinguishing real vector bundles up to isomorphism. However, in principle these classes may give new obstructions to representing bundles as Whitney sums and, by the splitting principle, as tensor products, symmetric or exterior powers etc.
-
I think (but I'm not sure) that you CAN define characteristic classes from $BSO(n)$ with $\mathbb{Z}$ coefficients, but it's not nearly as easy to work with them or compute them.
One of the big problems is that the integer cohomology ring of the infinite Grassmanians is quite nasty while with $\mathbb{Z}/2\mathbb{Z}$ or $\mathbb{Q}$ coefficients, it's not so bad (giving rise to the Stiefel-Whitney classes and with a bit more work, the Pontrjagin classes and Euler class).
Another general issue is the following: For the standard inclusions $SO(k)\rightarrow SO(n)$ as a block form, one wants to know what the induced maps $BSO(k)\rightarrow BSO(n)$ look like, or at least the induced map $H^{*}(BSO(n))\rightarrow H^{*}(BSO(k))$ looks like. For $\mathbb{Z}/2\mathbb{Z}$ coefficients, it's easy: the map has kernel all of the $w_i$ with $i > k$ and is an isomorphism on the rest. It's equally easy for rational coefficients. (And, as an aside, one can repeat the same questions with $BU(n)$ and the Chern classes. Turns out the cohomology ring of $BU(n)$ with $\mathbb{Z}$ coefficients is fine, and these induced maps are also easy to compute.)
Not only are the integral cohomology rings of $BSO(n)$ messy, these induced maps on cohomology are not nearly as well understood.
-
http://en.wikipedia.org/wiki/Hypergraph
# Hypergraph
An example of a hypergraph, with $X = \{v_1, v_2, v_3, v_4, v_5, v_6, v_7\}$ and $E = \{e_1,e_2,e_3,e_4\} =$ $\{\{v_1, v_2, v_3\},$ $\{v_2,v_3\},$ $\{v_3,v_5,v_6\},$ $\{v_4\}\}$.
In mathematics, a hypergraph is a generalization of a graph in which an edge can connect any number of vertices. Formally, a hypergraph $H$ is a pair $H = (X,E)$ where $X$ is a set of elements called nodes or vertices, and $E$ is a set of non-empty subsets of $X$ called hyperedges or edges. Therefore, $E$ is a subset of $\mathcal{P}(X) \setminus\{\emptyset\}$, where $\mathcal{P}(X)$ is the power set of $X$.
While graph edges are pairs of nodes, hyperedges are arbitrary sets of nodes, and can therefore contain an arbitrary number of nodes. However, it is often desirable to study hypergraphs where all hyperedges have the same cardinality; a k-uniform hypergraph is a hypergraph such that all its hyperedges have size k. (In other words, it is a collection of sets of size k.) So a 2-uniform hypergraph is a graph, a 3-uniform hypergraph is a collection of unordered triples, and so on.
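As a concrete illustration (not part of the article), the example hypergraph from the figure above can be stored as a vertex set together with a list of vertex subsets, and $k$-uniformity becomes a one-line check:

```python
# Illustrative only: the hypergraph from the figure, stored as (X, E).
from itertools import combinations

X = {1, 2, 3, 4, 5, 6, 7}
E = [frozenset({1, 2, 3}), frozenset({2, 3}), frozenset({3, 5, 6}), frozenset({4})]

def is_k_uniform(edges, k):
    """True if every hyperedge has exactly k vertices."""
    return all(len(e) == k for e in edges)

print(is_k_uniform(E, 3))   # False: the edge sizes are 3, 2, 3, 1
print(is_k_uniform([frozenset(c) for c in combinations(X, 2)], 2))  # True: an ordinary graph
```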
A hypergraph is also called a set system or a family of sets drawn from the universal set X. The difference between a set system and a hypergraph (which is not well defined) is in the questions being asked. Hypergraph theory tends to concern questions similar to those of graph theory, such as connectivity and colorability, while the theory of set systems tends to ask non-graph-theoretical questions, such as those of Sperner theory.
There are variant definitions; sometimes edges must not be empty, and sometimes multiple edges, with the same set of nodes, are allowed.
Hypergraphs can be viewed as incidence structures. In particular, there is a bipartite "incidence graph" or "Levi graph" corresponding to every hypergraph, and conversely, most, but not all, bipartite graphs can be regarded as incidence graphs of hypergraphs.
Hypergraphs have many other names. In computational geometry, a hypergraph may sometimes be called a range space and then the hyperedges are called ranges.[1] In cooperative game theory, hypergraphs are called simple games (voting games); this notion is applied to solve problems in social choice theory. In some literature edges are referred to as hyperlinks or connectors.[2]
Special kinds of hypergraphs include, besides k-uniform ones, clutters, where no edge appears as a subset of another edge; and abstract simplicial complexes, which contain all subsets of every edge.
The collection of hypergraphs is a category with hypergraph homomorphisms as morphisms.
## Terminology
Because hypergraph links can have any cardinality, there are several notions of the concept of a subgraph, called subhypergraphs, partial hypergraphs and section hypergraphs.
Let $H=(X,E)$ be the hypergraph consisting of vertices
$X = \lbrace x_i | i \in I_v \rbrace,$
and having edge set
$E = \lbrace e_i | i\in I_e, e_i \subseteq X \rbrace,$
where $I_v$ and $I_e$ are the index sets of the vertices and edges respectively.
A subhypergraph is a hypergraph with some vertices removed. Formally, the subhypergraph $H_A$ induced by a subset $A$ of $X$ is defined as
$H_A=\left(A, \lbrace e_i \cap A | e_i \cap A \neq \varnothing \rbrace \right).$
The partial hypergraph is a hypergraph with some edges removed. Given a subset $J \subset I_e$ of the edge index set, the partial hypergraph generated by $J$ is the hypergraph
$\left(X, \lbrace e_i | i\in J \rbrace \right).$
Given a subset $A\subseteq X$, the section hypergraph is the partial hypergraph
$H \times A = \left(A, \lbrace e_i | i\in I_e, e_i \subseteq A \rbrace \right).$
The dual $H^*$ of $H$ is a hypergraph whose vertices and edges are interchanged, so that the vertices are given by $\lbrace e_i \rbrace$ and whose edges are given by $\lbrace X_m \rbrace$ where
$X_m = \lbrace e_i | x_m \in e_i \rbrace.$
When a notion of equality is properly defined, as done below, the operation of taking the dual of a hypergraph is an involution, i.e.,
$\left(H^*\right)^* = H.$
A connected graph G with the same vertex set as a connected hypergraph H is a host graph for H if every hyperedge of H induces a connected subgraph in G. For a disconnected hypergraph H, G is a host graph if there is a bijection between the connected components of G and of H, such that each connected component G' of G is a host of the corresponding H'.
A hypergraph is bipartite if and only if its vertices can be partitioned into two classes U and V in such a way that each hyperedge with cardinality at least 2 contains at least one vertex from both classes.
The primal graph of a hypergraph is the graph with the same vertices of the hypergraph, and edges between all pairs of vertices contained in the same hyperedge. The primal graph is sometimes also known as the Gaifman graph of the hypergraph.
## Bipartite graph model
A hypergraph H may be represented by a bipartite graph BG as follows: the sets X and E are the partitions of BG, and (x1, e1) are connected with an edge if and only if vertex x1 is contained in edge e1 in H. Conversely, any bipartite graph with fixed parts and no unconnected nodes in the second part represents some hypergraph in the manner described above. This bipartite graph is also called incidence graph.
## Isomorphism and equality
A hypergraph homomorphism is a map from the vertex set of one hypergraph to another such that each edge maps to one other edge.
A hypergraph $H=(X,E)$ is isomorphic to a hypergraph $G=(Y,F)$, written as $H \simeq G$ if there exists a bijection
$\phi:X \to Y$
and a permutation $\pi$ of the edge index set $I_e$ such that
$\phi(e_i) = f_{\pi(i)}$
The bijection $\phi$ is then called the isomorphism of the graphs. Note that
$H \simeq G$ if and only if $H^* \simeq G^*$.
When the edges of a hypergraph are explicitly labeled, one has the additional notion of strong isomorphism. One says that $H$ is strongly isomorphic to $G$ if the permutation is the identity. One then writes $H \cong G$. Note that all strongly isomorphic graphs are isomorphic, but not vice-versa.
When the vertices of a hypergraph are explicitly labeled, one has the notions of equivalence, and also of equality. One says that $H$ is equivalent to $G$, and writes $H\equiv G$ if the isomorphism $\phi$ has
$\phi(x_n) = y_n$
and
$\phi(e_i) = f_{\pi(i)}$
Note that
$H\equiv G$ if and only if $H^* \cong G^*$
If, in addition, the permutation $\pi$ is the identity, one says that $H$ equals $G$, and writes $H=G$. Note that, with this definition of equality, graphs are self-dual:
$\left(H^*\right) ^* = H$
A hypergraph automorphism is an isomorphism from a vertex set into itself, that is a relabeling of vertices. The set of automorphisms of a hypergraph H (= (X, E)) is a group under composition, called the automorphism group of the hypergraph and written Aut(H).
### Examples
Consider the hypergraph $H$ with edges
$H = \lbrace e_1 = \lbrace a,b \rbrace, e_2 = \lbrace b,c \rbrace, e_3 = \lbrace c,d \rbrace, e_4 = \lbrace d,a \rbrace, e_5 = \lbrace b,d \rbrace, e_6 = \lbrace a,c \rbrace \rbrace$
and
$G = \lbrace f_1 = \lbrace \alpha,\beta \rbrace, f_2 = \lbrace \beta,\gamma \rbrace, f_3 = \lbrace \gamma,\delta \rbrace, f_4 = \lbrace \delta,\alpha \rbrace, f_5 = \lbrace \alpha,\gamma \rbrace, f_6 = \lbrace \beta,\delta \rbrace \rbrace$
Then clearly $H$ and $G$ are isomorphic (with $\phi(a)=\alpha$, etc.), but they are not strongly isomorphic. So, for example, in $H$, vertex $a$ meets edges 1, 4 and 6, so that,
$e_1 \cap e_4 \cap e_6 = \lbrace a\rbrace$
In graph $G$, there does not exist any vertex that meets edges 1, 4 and 6:
$f_1 \cap f_4 \cap f_6 = \varnothing$
In this example, $H$ and $G$ are equivalent, $H\equiv G$, and the duals are strongly isomorphic: $H^*\cong G^*$.
## Symmetric hypergraphs
The rank $r(H)$ of a hypergraph $H$ is the maximum cardinality of any of the edges in the hypergraph. If all edges have the same cardinality k, the hypergraph is said to be uniform or k-uniform, or is called a k-hypergraph. A graph is just a 2-uniform hypergraph.
The degree d(v) of a vertex v is the number of edges that contain it. H is k-regular if every vertex has degree k.
The dual of a uniform hypergraph is regular and vice-versa.
Two vertices x and y of H are called symmetric if there exists an automorphism such that $\phi(x)=y$. Two edges $e_i$ and $e_j$ are said to be symmetric if there exists an automorphism such that $\phi(e_i)=e_j$.
A hypergraph is said to be vertex-transitive (or vertex-symmetric) if all of its vertices are symmetric. Similarly, a hypergraph is edge-transitive if all edges are symmetric. If a hypergraph is both edge- and vertex-symmetric, then the hypergraph is simply transitive.
Because of hypergraph duality, the study of edge-transitivity is identical to the study of vertex-transitivity.
## Transversals
A transversal (or "hitting set") of a hypergraph H = (X, E) is a set $T\subseteq X$ that has nonempty intersection with every edge. A transversal T is called minimal if no proper subset of T is a transversal. The transversal hypergraph of H is the hypergraph (X, F) whose edge set F consists of all minimal transversals of H.
Computing the transversal hypergraph has applications in combinatorial optimization, in game theory, and in several fields of computer science such as machine learning, indexing of databases, the satisfiability problem, data mining, and computer program optimization.
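To make the definition concrete, here is a brute-force sketch (illustrative only; it is exponential in the number of vertices and not how the transversal hypergraph is computed in practice) that lists all minimal transversals of a small hypergraph:

```python
# Illustrative only: minimal transversals (hitting sets) by exhaustive search.
from itertools import combinations

def minimal_transversals(X, E):
    X = sorted(X)
    # all subsets of X that intersect every edge
    hitting = [set(c) for r in range(len(X) + 1)
               for c in combinations(X, r)
               if all(set(c) & set(e) for e in E)]
    # keep only the inclusion-minimal hitting sets
    return [T for T in hitting if not any(S < T for S in hitting)]

X = {1, 2, 3, 4}
E = [{1, 2}, {2, 3}, {3, 4}]
print(minimal_transversals(X, E))   # [{1, 3}, {2, 3}, {2, 4}]
```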
## Incidence matrix
Let $V = \{v_1, v_2, ~\ldots, ~ v_n\}$ and $E = \{e_1, e_2, ~ \ldots ~ e_m\}$. Every hypergraph has an $n \times m$ incidence matrix $A = (a_{ij})$ where
$a_{ij} = \left\{ \begin{matrix} 1 & \mathrm{if} ~ v_i \in e_j \\ 0 & \mathrm{otherwise}. \end{matrix} \right.$
The transpose $A^t$ of the incidence matrix defines a hypergraph $H^* = (V^*,\ E^*)$ called the dual of $H$, where $V^*$ is an m-element set and $E^*$ is an n-element set of subsets of $V^*$. For $v^*_j \in V^*$ and $e^*_i \in E^*, ~ v^*_j \in e^*_i$ if and only if $a_{ij} = 1$.
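A small illustration (not from the article) of the incidence matrix and of how the dual is read off from its transpose:

```python
# Illustrative only: incidence matrix of a tiny hypergraph and its dual.
import numpy as np

V = [1, 2, 3, 4]
E = [{1, 2, 3}, {2, 3}, {3, 4}]

# a_ij = 1 exactly when vertex v_i belongs to edge e_j
A = np.array([[1 if v in e else 0 for e in E] for v in V])
print(A)

# A.T is the incidence matrix of the dual H*: its vertices are the edges of H,
# and the edge X_m of H* consists of the edges of H that contain x_m.
dual_edges = [{j + 1 for j in range(len(E)) if A[i, j] == 1} for i in range(len(V))]
print(dual_edges)   # [{1}, {1, 2}, {1, 2, 3}, {3}]
```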
## Hypergraph coloring
Hypergraph coloring is defined as follows. Let $H=(V, E)$ be a hypergraph such that $\Vert V\Vert = n$. Then $C=\{c_1, c_2, \ldots, c_n\}$ is a proper coloring of $H$ if and only if, for all $e \in E, \vert e\vert > 1,$ there exists $v_i, v_j \in e$ such that $c_i \neq c_j$. In other words, there must be no monochromatic hyperedge with cardinality at least 2.
Hypergraphs for which there exists a coloring using up to k colors are referred to as k-colorable. The 2-colorable hypergraphs are exactly the bipartite ones.
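The coloring condition above amounts to a short predicate; the following sketch (illustrative only) checks that no hyperedge with at least two vertices is monochromatic:

```python
# Illustrative only: is a given color assignment a proper hypergraph coloring?
def is_proper_coloring(E, color):
    return all(len({color[v] for v in e}) > 1 for e in E if len(e) > 1)

E = [{1, 2, 3}, {2, 3}, {3, 4}]
print(is_proper_coloring(E, {1: 'red', 2: 'red', 3: 'blue', 4: 'red'}))   # True
print(is_proper_coloring(E, {1: 'red', 2: 'blue', 3: 'blue', 4: 'red'}))  # False: {2, 3} is monochromatic
```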
## Partitions
A partition theorem due to E. Dauber[3] states that, for an edge-transitive hypergraph $H=(X,E)$, there exists a partition
$(X_1, X_2,\cdots,X_K)$
of the vertex set $X$ such that the subhypergraph $H_{X_k}$ generated by $X_k$ is transitive for each $1\le k \le K$, and such that
$\sum_{k=1}^K r\left(H_{X_k} \right) = r(H)$
where $r(H)$ is the rank of H.
As a corollary, an edge-transitive hypergraph that is not vertex-transitive is bicolorable.
Graph partitioning (and in particular, hypergraph partitioning) has many applications to IC design[4] and parallel computing.[5][6][7]
## Theorems
Many theorems and concepts involving graphs also hold for hypergraphs. Ramsey's theorem and Line graph of a hypergraph are typical examples. Some methods for studying symmetries of graphs extend to hypergraphs.
Two prominent theorems are the Erdős–Ko–Rado theorem and the Kruskal–Katona theorem on uniform hypergraphs.
## Hypergraph drawing
This circuit diagram can be interpreted as a drawing of a hypergraph in which four vertices (depicted as white rectangles and disks) are connected by three hyperedges drawn as trees.
Although hypergraphs are more difficult to draw on paper than graphs, several researchers have studied methods for the visualization of hypergraphs.
In one possible visual representation for hypergraphs, similar to the standard graph drawing style in which curves in the plane are used to depict graph edges, a hypergraph's vertices are depicted as points, disks, or boxes, and its hyperedges are depicted as trees that have the vertices as their leaves.[8][9] If the vertices are represented as points, the hyperedges may also be shown as smooth curves that connect sets of points, or as simple closed curves that enclose sets of points.[10][11]
An order-4 Venn diagram, which can be interpreted as a subdivision drawing of a hypergraph with 15 vertices (the 15 colored regions) and 4 hyperedges (the 4 ellipses).
In another style of hypergraph visualization, the subdivision model of hypergraph drawing,[12] the plane is subdivided into regions, each of which represents a single vertex of the hypergraph. The hyperedges of the hypergraph are represented by contiguous subsets of these regions, which may be indicated by coloring, by drawing outlines around them, or both. An order-n Venn diagram, for instance, may be viewed as a subdivision drawing of a hypergraph with n hyperedges (the curves defining the diagram) and 2n − 1 vertices (represented by the regions into which these curves subdivide the plane). In contrast with the polynomial-time recognition of planar graphs, it is NP-complete to determine whether a hypergraph has a planar subdivision drawing,[13] but the existence of a drawing of this type may be tested efficiently when the adjacency pattern of the regions is constrained to be a path, cycle, or tree.[14]
## Generalizations
One possible generalization of a hypergraph is to allow edges to point at other edges. There are two variations of this generalization. In one, the edges consist not only of a set of vertices, but may also contain subsets of vertices, ad infinitum. Set membership then provides an ordering, but the ordering is neither a partial order nor a preorder, since it is not transitive. The graph corresponding to the Levi graph of this generalization is a directed acyclic graph. Consider, for example, the generalized hypergraph whose vertex set is $V= \{a,b\}$ and whose edges are $e_1=\{a,b\}$ and $e_2=\{a,e_1\}$. Then, although $b\in e_1$ and $e_1\in e_2$, it is not true that $b\in e_2$. However, the transitive closure of set membership for such hypergraphs does induce a partial order, and "flattens" the hypergraph into a partially ordered set.
Alternately, edges can be allowed to point at other edges, (irrespective of the requirement that the edges be ordered as directed, acyclic graphs). This allows graphs with edge-loops, which need not contain vertices at all. For example, consider the generalized hypergraph consisting of two edges $e_1$ and $e_2$, and zero vertices, so that $e_1 = \{e_2\}$ and $e_2 = \{e_1\}$. As this loop is infinitely recursive, sets that are the edges violate the axiom of foundation. In particular, there is no transitive closure of set membership for such hypergraphs. Although such structures may seem strange at first, they can be readily understood by noting that the equivalent generalization of their Levi graph is no longer bipartite, but is rather just some general directed graph.
The generalized incidence matrix for such hypergraphs is, by definition, a square matrix, of a rank equal to the total number of vertices plus edges. Thus, for the above example, the incidence matrix is simply
$\left[ \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} \right].$
## Notes
1. Haussler, David; Welzl, Emo (1987), "ε-nets and simplex range queries", 2 (2): 127–151, doi:10.1007/BF02187876, MR 884223 .
2. Judea Pearl, in HEURISTICS Intelligent Search Strategies for Computer Problem Solving, Addison Wesley (1984), p. 25.
3. E. Dauber, in Graph theory, ed. F. Harary, Addison Wesley, (1969) p. 172.
4. Karypis, G., Aggarwal, R., Kumar, V., and Shekhar, S. (March 1999), "Multilevel hypergraph partitioning: applications in VLSI domain", IEEE Transactions on Very Large Scale Integration (VLSI) Systems 7 (1): 69–79, doi:10.1109/92.748202.
5. Hendrickson, B., Kolda, T.G. (2000), "Graph partitioning models for parallel computing", Parallel Computing 26 (12): 1519–1545, doi:10.1016/S0167-8191(00)00048-X.
6. Catalyurek, U.V.; C. Aykanat (1995). "A Hypergraph Model for Mapping Repeated Sparse Matrix-Vector Product Computations onto Multicomputers". Proc. Internation Conference on Hi Performance Computing (HiPC'95).
7. Catalyurek, U.V.; C. Aykanat (1999), "Hypergraph-Partitioning Based Decomposition for Parallel Sparse-Matrix Vector Multiplication", IEEE Transactions on Parallel and Distributed Systems (IEEE) 10 (7): 673–693, doi:10.1109/71.780863.
8. Sander, G. (2003), "Layout of directed hypergraphs with orthogonal hyperedges", Lecture Notes in Computer Science 2912, Springer-Verlag, pp. 381–386.
9. Eschbach, Thomas; Günther, Wolfgang; Becker, Bernd (2006), "Orthogonal hypergraph drawing for improved visibility", 10 (2): 141–157.
10. Mäkinen, Erkki (1990), "How to draw a hypergraph", International Journal of Computer Mathematics 34 (3): 177–185, doi:10.1080/00207169008803875.
11. Bertault, François; Eades, Peter (2001), "Drawing hypergraphs in the subset standard", Lecture Notes in Computer Science 1984, Springer-Verlag, pp. 45–76, doi:10.1007/3-540-44541-2_15.
12. Kaufmann, Michael; van Kreveld, Marc; Speckmann, Bettina (2009), "Subdivision drawings of hypergraphs", Lecture Notes in Computer Science 5417, Springer-Verlag, pp. 396–407, doi:10.1007/978-3-642-00219-9_39.
13. Johnson, David S.; Pollak, H. O. (2006), "Hypergraph planarity and the complexity of drawing Venn diagrams", Journal of Graph Theory 11 (3): 309–325, doi:10.1002/jgt.3190110306.
14. Buchin, Kevin; van Kreveld, Marc; Meijer, Henk; Speckmann, Bettina; Verbeek, Kevin (2010), "On planar supports for hypergraphs", Lecture Notes in Computer Science 5849, Springer-Verlag, pp. 345–356, doi:10.1007/978-3-642-11805-0_33.
## References
• Claude Berge, Dijen Ray-Chaudhuri, "Hypergraph Seminar, Ohio State University 1972", Lecture Notes in Mathematics 411 Springer-Verlag
• Vitaly I. Voloshin. "Introduction to Graph and Hypergraph Theory". Nova Science Publishers, Inc., 2009.
http://mathoverflow.net/questions/67613?sort=oldest
## Collatz related question [closed]
Howdy,
Not sure this will be entirely clear, but when considering the relationship between a start value $n$ in the Collatz algorithm and the length of the sequence generated by $n$, is there a function $f$ such that $f(n) \ge \operatorname{length}(\operatorname{Collatz}(n))$ for every $n$?
-
I've voted to close. Your question isn't well formulated. I think you'd be better off asking your question on math.stackexchange.com, but you ought to be a little more careful in its formulation. – Ryan Budney Jun 12 2011 at 23:24
This is so close to a good question that I am sorry to see it closed. An answer to the good question could be along the lines of "Consider the directed graph which has (n, c(n)) as an edge, where c(n) is either n/2 or 3n+1. The desired length function is the number of edges from n to 4 (or 2 or 1). This is an example of a partial recursive function, for which we do not know if it is total or even (on values where it is known to be defined) if it is bounded above by a total recursive function." Perhaps someone will edit it to reopen. Gerhard "Ask Me About System Design" Paseman, 2011.06.12 – Gerhard Paseman Jun 13 2011 at 3:32
## 1 Answer
The existence of such a function is equivalent to the unproven statement that all n eventually settle into the $4, 2, 1$ cycle. Or do I misunderstand your question?
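For concreteness, the length function $c(n)$ described in the comments can be written as below (my addition, not from the thread); note that the loop is only observed to terminate for the values tested, which is precisely the content of the conjecture.

```python
# Illustrative only: number of Collatz steps from n down to 1.
def collatz_length(n):
    steps = 0
    while n != 1:                              # not known to terminate for all n
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print([collatz_length(n) for n in range(1, 11)])
# [0, 1, 7, 2, 5, 8, 16, 3, 19, 6]
```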
-
Although I suspect the poster is not asking the following, a nontrivial variant is: Is there an interesting partial recursive function f, an infinite interesting set D of positive integers, and a proof such that for all n in D, the collatz length function c(n) is defined and has a nonnegative value, similarly for f(n), and f(n) > c(n), and the proof demonstrates this? If c(n) is total, then it is recursive, but other than lower bounds, we know nothing of the asymptotic growth rate of c(n). Gerhard "Interesting Will Be Defined Later" Paseman, 2011.06.12 – Gerhard Paseman Jun 13 2011 at 6:18
http://math.stackexchange.com/questions/198794/find-all-integer-solution-of-the-equations?answertab=active
# Find all integer solutions of the equations:
Find all integer solutions of the equations: \begin{cases} \text{a)} \quad 6x+21y=33 \\ \text{b)} \quad 14x-49y=13 \end{cases}
I'm not sure how to find all integer solutions for a, but I know there are no integer solutions for b, because $\mbox{gcd}(14,49)=7$ and $7$ does not divide $13$.
-
In a), first step is to divide through by 3. – Gerry Myerson Sep 18 '12 at 23:46
I know that. Which gives 2x+7y=11. By looking at it I can tell one integer solution is x=2 and y=1, but I don't know how to find all integer solutions. – Pink Panda Sep 18 '12 at 23:47
Good. Now, solve for $x$ and figure out what $y$ has to do to make $x$ an integer. – Gerry Myerson Sep 19 '12 at 0:05
## 1 Answer
For a, can you find values $a$ and $b$ so that $2(x+a)+7(y+b)=2x+7y$? If so, then given one solution (which you have) you can get more by $x'=x+na, y'=y+nb$ for any integer $n$. Then you need to prove that these are all the solutions.
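As a quick check (mine, not part of the answer), the resulting family $x=2+7n$, $y=1-2n$ does satisfy both the reduced and the original equation for every integer $n$:

```python
# Illustrative only: the shift (a, b) = (7, -2) satisfies 2a + 7b = 0, so
# (x, y) = (2 + 7n, 1 - 2n) solves 2x + 7y = 11 for every integer n.
def solutions(n_range):
    return [(2 + 7 * n, 1 - 2 * n) for n in n_range]

for x, y in solutions(range(-3, 4)):
    assert 2 * x + 7 * y == 11
    assert 6 * x + 21 * y == 33        # the original equation, before dividing by 3
print(solutions(range(-3, 4)))
# [(-19, 7), (-12, 5), (-5, 3), (2, 1), (9, -1), (16, -3), (23, -5)]
```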
-
I'm not sure I follow, the only values for a, and b that would make 2(x+a)+7(y+b)=2x+7y would be a,b=0. What does this tell me? – Pink Panda Sep 19 '12 at 15:55
@PinkPanda: that is not true. You can cancel the terms in $x$ and $y$, leaving $2a+7b=0$. Doesn't that have a solution in the integers? In fact, many of them? – Ross Millikan Sep 19 '12 at 16:03
Oh, I see. It does have many, such as a=-7 and b=2. Still don't see how I would solve for all integer solutions. Do I solve for a and then solve for b? a=-7b/2 and b=-2a/7 – Pink Panda Sep 19 '12 at 16:17
@PinkPanda: so in addition to your solution of $(2,1)$, you can have $(2-7,1+2), (2-15\cdot 7, 1+15\cdot 2), \ldots$ – Ross Millikan Sep 19 '12 at 16:45
http://mathoverflow.net/questions/39029?sort=oldest
## Why doesn’t Stein effect happen for multinomial distributions?
(Medeen et al., 1998) show that the maximum likelihood estimate is admissible for the multinomial distribution under squared error. On the other hand, James and Stein showed that the arithmetic average is not an admissible estimator of the Gaussian location parameter in 3 dimensions. But since maximum likelihood estimates of multinomial parameters are averages of observed counts, which become normally distributed for large sample sizes, why doesn't the Stein effect happen here?
Here $\hat{p}$ is an inadmissible estimator of $\theta$ if there is an estimator that is no worse for every $\theta$ and better for at least one $\theta$.
-
Minor comment: I believe that was James and Stein, not "James Stein". – Mark Meckes Sep 17 2010 at 1:01
## 1 Answer
This is not an answer, but maybe worth thinking about (and I cannot yet leave comments). My intuition about the Stein phenomenon is that while the individual coordinates of the Gaussian random variable are independent, the loss function involves all of the location parameters jointly. Stein type estimators take this into account and by doing so outperform the MLE, making it inadmissible.
In the case of the multinomial parameters, they inherently have dependence via the sum-to-one constraint as a probability vector and you take this into account when averaging over possible parameter values. So a question related to yours, which may shed some light on it, is whether or not the MLE is admissible for a Gaussian location vector $\mu$ under the restriction that $\|\mu\| = c$ for some positive constant $c$.
UPDATE: "Admissibility and complete class results for the multinomial estimation problem...", Ighodaro, Thomas & Brown (Journal of Mult. Analysis '82) shows the MLE for the multinomial parameter becomes inadmissible if you remove the vertices of the simplex from the action space. It is a property of the risk behavior of the MLE at these extremal points that makes it admissible, then. Since the corresponding Gaussian problem has no such extremal points, this may constitute an explanation to your question.
-
I think normalization issue is minor because you can just drop the last parameter, and measure admissibility on the reduced parameter vector. To make it more in line with Gaussian example, let your random variable be an average of k observations with last component dropped. For large k it becomes distributed as a Gaussian. Then MLE estimate of the mean of your distribution (first d-1 components) is an arithmetic average of a sample of (approximately) Gaussian-distributed random variables – Yaroslav Bulatov Sep 17 2010 at 4:58
It isn't the normalization per se, it's the restriction of the action space. I've edited my answer to clarify this and included a reference. – R Hahn Sep 17 2010 at 6:25
Nice find! So I guess Stein effect does happen there. – Yaroslav Bulatov Sep 18 2010 at 19:08
http://physics.stackexchange.com/questions/tagged/nuclear-physics?page=6&sort=newest&pagesize=15
# Tagged Questions
Nuclear physics is the study of the composition, behavior and interaction of atomic nuclei and their constituent parts.
4answers
2k views
### Where does the energy from a nuclear bomb come from?
I'll break this down to two related questions: With a fission bomb, Uranium or Plutonium atoms are split by a high energy neutron, thus releasing energy (and more neutrons). Where does the energy ...
2answers
2k views
### How do alpha and beta particles ionise surrounding particles?
I've been wondering about this question for a while. If you have alpha and beta particles released from a radioactive core, how do they ionise surrounding particles?
1answer
687 views
### Why must the deuteron wavefunction be antisymmetric?
Wikipedia article on deuterium says this: The deuteron wavefunction must be antisymmetric if the isospin representation is used (since a proton and a neutron are not identical particles, ...
1answer
606 views
### Why doesn't orbital electron fall into the nucleus of Rb85, but falls into the nucleus of Rb83?
Rb83 is unstable and decays to Kr-83. The decay mode is electron capture. Rb85 is stable. The nuclei Rb83 and Rb85 have the same charge. Rb85 is heavier than Rb83, but gravitation is too weak to ...
3answers
874 views
### Age of the Earth and the star that preceded the Sun
One of the great unheralded advances made in the history of science was the ability to determine the age of Earth based on the decay of isotopic uranium. Based on the apparent abundance of uranium in ...
5answers
732 views
### How many times has the “stuff” in our solar system been recycled from previous stars?
Is there a cosmologist in the house? I've got a basic understanding (with some degree of error) of some simple facts: The Universe is a little over 13 billion years old. Our galaxy is almost that ...
1answer
284 views
### Cherenkov radiation in nuclear bomb
Would Cherenkov radiation occur at the explosion of a nuclear bomb? Suppose it would not be occluded by smoke or anything else for that matter.
6answers
778 views
### Is there any thing other than time that “triggers” a radioactive atom to decay?
Say you have a vial of tritium and monitor their atomic decay with a geiger counter. How does an atom "know" when it's time to decay? It seems odd that all the tritium atoms are identical except with ...
2answers
254 views
### Weak contribution to nuclear binding
Does the weak nuclear force play a role (positive or negative) in nuclear binding? Normally you only see discussions about weak decay and flavour changing physics, but is there a contribution to ...
3answers
674 views
### What does a nucleus look like?
It's a Christmas time and so I hope I'll be pardoned for asking a question which probably doesn't make much sense :-) In standard undergraduate nuclear physics course one learns about models such as ...
1answer
630 views
### Fermi's Golden Rule
It is well known that to calculate the probability of transition in the scattering processes, as a first approximation, we use the Fermi golden rule. This rule is obtained considering the initial ...
2answers
218 views
### Isotope properties plotting tool?
I'm looking for something that will generate scatter plots comparing different properties of isotopes. Ideally I'd like some web page that lets me select axis and click go but a CSV file with lost of ...
1answer
466 views
### Turned to steel in the great magnetic field
This is obviously a "fun" question, but I'm sure it still has valid physics in it, so bear with me. How great of a magnetic field would you need to transmute other elements into iron/nickel, if ...
5answers
2k views
### Is it possible to obtain gold through nuclear decay?
Is there a series of transmutations through nuclear decay that will result in the stable gold isotope ${}^{197}\mathrm{Au}$ ? How long will the process take?
3answers
762 views
### Why some nuclei with “magic” numbers of neutrons have a half-life less than their neighbor isotopes?
It's easy to find the "magic" numbers of neutrons on the diagrams of alpha-decay energy: 82, 126, 152, 162. Such "magic" nuclei should be more stable than their neighbors. But why some nuclei ...
3answers
4k views
### Why is the nucleus of an Iron atom so stable?
Lighter nuclei liberate energy when undergoing fusion, heavier nuclei when undergoing fission. What is it about the nucleus of an Iron atom that makes it so stable? Alternatively: Iron has the ...
5answers
705 views
### How does Positronium exist?
I've just recently heard of Positronium, an "element" with interesting properties formed by an electron and positron, and I was shocked to hear that physicists were actually working with this element, ...
1answer
182 views
### How is it possible to calculate the energy liberated by a given fission process?
How is it possible to calculate the energy liberated by a given fission process? For example, in the fission of a $^{235}$U induced by capturing a neutron?
1answer
210 views
### Obtaining isotope stability
For a given isotope, one can obtain the binding energy using the semi-empirical mass formula. For example, has a binding energy of 1782.8 MeV. From this information, how can the likelihood of the ...
http://en.wikipedia.org/wiki/Resampling_(statistics)
# Resampling (statistics)
In statistics, resampling is any of a variety of methods for doing one of the following:
1. Estimating the precision of sample statistics (medians, variances, percentiles) by using subsets of available data (jackknifing) or drawing randomly with replacement from a set of data points (bootstrapping)
2. Exchanging labels on data points when performing significance tests (permutation tests, also called exact tests, randomization tests, or re-randomization tests)
3. Validating models by using random subsets (bootstrapping, cross validation)
Common resampling techniques include bootstrapping, jackknifing and permutation tests.
## Bootstrap
Main article: Bootstrap (statistics)
Bootstrapping is a statistical method for estimating the sampling distribution of an estimator by sampling with replacement from the original sample, most often with the purpose of deriving robust estimates of standard errors and confidence intervals of a population parameter like a mean, median, proportion, odds ratio, correlation coefficient or regression coefficient. It may also be used for constructing hypothesis tests. It is often used as a robust alternative to inference based on parametric assumptions when those assumptions are in doubt, or where parametric inference is impossible or requires very complicated formulas for the calculation of standard errors.
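To make the idea concrete (this sketch is not from the original article), here is a minimal bootstrap estimate of the standard error and a percentile confidence interval for a sample mean; the data values are invented.
```python
import numpy as np

rng = np.random.default_rng(0)
sample = np.array([2.1, 3.4, 1.9, 4.2, 2.8, 3.1, 2.5, 3.9, 2.2, 3.6])  # made-up data

n_boot = 10_000
boot_means = np.empty(n_boot)
for b in range(n_boot):
    # resample n observations from the original sample, with replacement
    resample = rng.choice(sample, size=sample.size, replace=True)
    boot_means[b] = resample.mean()

se_hat = boot_means.std(ddof=1)              # bootstrap estimate of the standard error
ci = np.percentile(boot_means, [2.5, 97.5])  # simple percentile confidence interval
print(f"mean = {sample.mean():.3f}, bootstrap SE = {se_hat:.3f}, 95% CI = {ci}")
```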
## Jackknife
Jackknifing, which is similar to bootstrapping, is used in statistical inference to estimate the bias and standard error (variance) of a statistic, when a random sample of observations is used to calculate it. Historically, this method preceded the invention of the bootstrap, with Quenouille inventing it in 1949 and Tukey extending it in 1958.[1][2] The method was foreshadowed by Mahalanobis, who in 1946 suggested repeated estimates of the statistic of interest with half the sample chosen at random;[3] he coined the name 'interpenetrating samples' for this method.
Quenouille invented this method with the intention of reducing the bias of the sample estimate. Tukey extended it by showing that, if the replicates could be considered identically and independently distributed, then an estimate of the variance of the sample parameter could be made, and that this estimate would be approximately distributed as a t variate with n - 1 degrees of freedom (n being the sample size).
The basic idea behind the jackknife variance estimator lies in systematically recomputing the statistic estimate leaving out one or more observations at a time from the sample set. From this new set of replicates of the statistic, an estimate for the bias and an estimate for the variance of the statistic can be calculated.
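As a concrete companion to this description (not from the original article), here is a minimal delete-1 jackknife sketch for the sample mean, using invented data; it computes the usual jackknife bias and variance estimates from the leave-one-out replicates.
```python
import numpy as np

x = np.array([2.1, 3.4, 1.9, 4.2, 2.8, 3.1, 2.5, 3.9, 2.2, 3.6])  # made-up data
n = x.size
theta_hat = x.mean()  # statistic computed on the full sample

# leave-one-out replicates of the statistic
replicates = np.array([np.delete(x, i).mean() for i in range(n)])

theta_bar = replicates.mean()
bias_jack = (n - 1) * (theta_bar - theta_hat)                    # jackknife bias estimate
var_jack = (n - 1) / n * np.sum((replicates - theta_bar) ** 2)   # jackknife variance estimate
print(f"bias estimate: {bias_jack:.4g}, SE estimate: {np.sqrt(var_jack):.4g}")
```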
Instead of using the jackknife to estimate the variance directly, it may be applied to the log of the variance. This transformation may result in better estimates, particularly when the distribution of the variance itself is non-normal.
For many statistical parameters the jackknife estimate of variance tends asymptotically to the true value almost surely. In technical terms one says that the jackknife estimate is consistent. The jackknife is consistent for the sample means, sample variances, central and non-central t-statistics (with possibly non-normal populations), sample coefficient of variation, maximum likelihood estimators, least squares estimators, correlation coefficients and regression coefficients.
It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom.
The jackknife, like the original bootstrap, is dependent on the independence of the data. Extensions of the jackknife to allow for dependence in the data have been proposed.
Another extension is the delete-a-group method used in association with Poisson sampling.
## Comparison of Bootstrap and Jackknife
Both methods, the bootstrap and the jackknife, estimate the variability of a statistic from the variability of that statistic between subsamples, rather than from parametric assumptions. For the more general jackknife, the delete-m observations jackknife, the bootstrap can be seen as a random approximation of it. Both yield similar numerical results, which is why each can be seen as an approximation to the other. Although there are huge theoretical differences in their mathematical insights, the main practical difference for statistics users is that the bootstrap gives different results when repeated on the same data, whereas the jackknife gives exactly the same result each time. Because of this, the jackknife is popular when the estimates need to be verified several times before publishing (e.g. official statistics agencies). On the other hand, when this verification feature is not crucial and it is of interest to have not just a number but an idea of its distribution, the bootstrap is preferred (e.g. studies in physics, economics, biological sciences).
Whether to use the bootstrap or the jackknife may depend more on operational aspects than on statistical concerns of a survey. The jackknife, originally used for bias reduction, is more of a specialized method and only estimates the variance of the point estimator. This can be enough for basic statistical inference (e.g. hypothesis testing, confidence intervals). The bootstrap, on the other hand, first estimates the whole distribution (of the point estimator) and then computes the variance from that. While powerful and easy, this can become highly computer intensive.
"The bootstrap can be applied to both variance and distribution estimation problems. However, the bootstrap variance estimator is not as good as the jackknife or the balanced repeated replication (BRR) variance estimator in terms of the empirical results. Furthermore, the bootstrap variance estimator usually requires more computations than the jackknife or the BRR . Thus, the bootstrap is mainly recommended for distribution estimation." [4]
There is a special consideration with the jackknife, particularly with the delete-1 observation jackknife. It should only be used with smooth differentiable statistics, that is: totals, means, proportions, ratios, odds ratios, regression coefficients, etc.; but not with medians or quantiles. This clearly may become a practical disadvantage (or not, depending on the needs of the user). This disadvantage is usually the argument against the jackknife and in favour of the bootstrap. More general jackknifes than the delete-1, such as the delete-m jackknife, overcome this problem for the medians and quantiles by relaxing the smoothness requirements for consistent variance estimation.
Usually the jackknife is easier to apply to complex sampling schemes than the bootstrap. Complex sampling schemes may involve stratification, multiple stages (clustering), varying sampling weights (non-response adjustments, calibration, post-stratification) and unequal-probability sampling designs. Theoretical aspects of both the bootstrap and the jackknife can be found in Shao and Tu (1995),[5] whereas a basic introduction is given in Wolter (2007).[6]
## Cross-validation
Main article: Cross-validation (statistics)
Cross-validation is a statistical method for validating a predictive model. Subsets of the data are held out for use as validating sets; a model is fit to the remaining data (a training set) and used to predict for the validation set. Averaging the quality of the predictions across the validation sets yields an overall measure of prediction accuracy.
One form of cross-validation leaves out a single observation at a time; this is similar to the jackknife. Another, K-fold cross-validation, splits the data into K subsets; each is held out in turn as the validation set.
This avoids "self-influence". For comparison, in regression analysis methods such as linear regression, each y value draws the regression line toward itself, making the prediction of that value appear more accurate than it really is. Cross-validation applied to linear regression predicts the y value for each observation without using that observation.
This is often used for deciding how many predictor variables to use in regression. Without cross-validation, adding predictors always reduces the residual sum of squares (or possibly leaves it unchanged). In contrast, the cross-validated mean-square error will tend to decrease if valuable predictors are added, but increase if worthless predictors are added.[citation needed]
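As an illustration of K-fold cross-validation (not from the original article), the following sketch fits an ordinary least-squares regression on each training split and averages the squared prediction error on the held-out folds; the data are synthetic and the fold count is an arbitrary choice.
```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.0]) + rng.normal(scale=0.5, size=100)  # synthetic data

def kfold_mse(X, y, k=5):
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    errors = []
    for f in folds:
        train = np.setdiff1d(idx, f)
        # least-squares fit on the training folds only
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        errors.append(np.mean((y[f] - X[f] @ beta) ** 2))  # error on the held-out fold
    return np.mean(errors)

print("5-fold cross-validated mean-square error:", kfold_mse(X, y))
```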
## Permutation tests
Main article: Exact test
A permutation test (also called a randomization test, re-randomization test, or an exact test) is a type of statistical significance test in which the distribution of the test statistic under the null hypothesis is obtained by calculating all possible values of the test statistic under rearrangements of the labels on the observed data points. In other words, the method by which treatments are allocated to subjects in an experimental design is mirrored in the analysis of that design. If the labels are exchangeable under the null hypothesis, then the resulting tests yield exact significance levels; see also exchangeability. Confidence intervals can then be derived from the tests. The theory has evolved from the works of R.A. Fisher and E.J.G. Pitman in the 1930s.
To illustrate the basic idea of a permutation test, suppose we have two groups $A$ and $B$ whose sample means are $\bar{x}_{A}$ and $\bar{x}_{B}$, and that we want to test, at 5% significance level, whether they come from the same distribution. Let $n_{A}$ and $n_{B}$ be the sample size corresponding to each group. The permutation test is designed to determine whether the observed difference between the sample means is large enough to reject the null hypothesis H$_{0}$ that the two groups have identical probability distribution.
The test proceeds as follows. First, the difference in means between the two samples is calculated: this is the observed value of the test statistic, T(obs). Then the observations of groups $A$ and $B$ are pooled.
Next, the difference in sample means is calculated and recorded for every possible way of dividing these pooled values into two groups of size $n_{A}$ and $n_{B}$ (i.e., for every permutation of the group labels A and B). The set of these calculated differences is the exact distribution of possible differences under the null hypothesis that group label does not matter.
The one-sided p-value of the test is calculated as the proportion of sampled permutations where the difference in means was greater than or equal to T(obs). The two-sided p-value of the test is calculated as the proportion of sampled permutations where the absolute difference was greater than or equal to ABS(T(obs)).
If the only purpose of the test is to reject or not reject the null hypothesis, we can as an alternative sort the recorded differences, and then observe whether T(obs) is contained within the middle 95% of them. If it is not, we reject the hypothesis of identical probability curves at the 5% significance level.
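The two-sample procedure just described can be coded directly. The sketch below (not from the original article) enumerates every division of the pooled values into groups of sizes $n_A$ and $n_B$ and reports the one- and two-sided p-values; the group data are invented.
```python
import numpy as np
from itertools import combinations

a = np.array([5.1, 4.8, 6.0, 5.5, 5.9])   # made-up group A
b = np.array([4.2, 4.9, 4.4, 5.0, 4.6])   # made-up group B
pooled = np.concatenate([a, b])
t_obs = a.mean() - b.mean()               # observed difference in means, T(obs)

# exact distribution: every way of choosing which pooled values get label A
diffs = []
for idx in combinations(range(pooled.size), a.size):
    mask = np.zeros(pooled.size, dtype=bool)
    mask[list(idx)] = True
    diffs.append(pooled[mask].mean() - pooled[~mask].mean())
diffs = np.array(diffs)

p_one_sided = np.mean(diffs >= t_obs)
p_two_sided = np.mean(np.abs(diffs) >= abs(t_obs))
print(f"T(obs) = {t_obs:.3f}, one-sided p = {p_one_sided:.4f}, two-sided p = {p_two_sided:.4f}")
```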
### Relation to parametric tests
Permutation tests are a subset of non-parametric statistics. The basic premise is to use only the assumption that it is possible that all of the treatment groups are equivalent, and that every member of them is the same before sampling began (i.e. the slot that they fill is not differentiable from other slots before the slots are filled). From this, one can calculate a statistic and then see to what extent this statistic is special by seeing how likely it would be if the treatment assignments had been jumbled.
In contrast to permutation tests, the reference distributions for many popular "classical" statistical tests, such as the t-test, F-test, z-test and χ2 test, are obtained from theoretical probability distributions. Fisher's exact test is an example of a commonly used permutation test for evaluating the association between two dichotomous variables. When sample sizes are large, the Pearson's chi-square test will give accurate results. For small samples, the chi-square reference distribution cannot be assumed to give a correct description of the probability distribution of the test statistic, and in this situation the use of Fisher's exact test becomes more appropriate. A rule of thumb is that the expected count in each cell of the table should be greater than 5 before Pearson's chi-squared test is used.[citation needed]
Permutation tests exist in many situations where parametric tests do not (e.g., when deriving an optimal test when losses are proportional to the size of an error rather than its square). All simple and many relatively complex parametric tests have a corresponding permutation test version that is defined by using the same test statistic as the parametric test, but obtains the p-value from the sample-specific permutation distribution of that statistic, rather than from the theoretical distribution derived from the parametric assumption. For example, it is possible in this manner to construct a permutation t-test, a permutation chi-squared test of association, a permutation version of Aly's test for comparing variances and so on.
The major down-sides to permutation tests are that they
• Can be computationally intensive and may require "custom" code for difficult-to-calculate statistics. This must be rewritten for every case.
• Are primarily used to provide a p-value. The inversion of the test to get confidence regions/intervals requires even more computation.
### Advantages
Permutation tests exist for any test statistic, regardless of whether or not its distribution is known. Thus one is always free to choose the statistic which best discriminates between hypothesis and alternative and which minimizes losses.
Permutation tests can be used for analyzing unbalanced designs [7] and for combining dependent tests on mixtures of categorical, ordinal, and metric data (Pesarin, 2001). They can also be used to analyze qualitative data that has been quantitized (i.e., turned into numbers). Permutation tests may be ideal for analyzing quantitized data that do not satisfy statistical assumptions underlying traditional parametric tests (e.g., t-tests, ANOVA) (Collingridge, 2013).
Before the 1980s, the burden of creating the reference distribution was overwhelming except for data sets with small sample sizes.
Since the 1980s, the confluence of relatively inexpensive fast computers and the development of new sophisticated path algorithms applicable in special situations has made the application of permutation test methods practical for a wide range of problems. It has also initiated the addition of exact-test options in the main statistical software packages and the appearance of specialized software for performing a wide range of uni- and multi-variable exact tests and computing test-based "exact" confidence intervals.
### Limitations
An important assumption behind a permutation test is that the observations are exchangeable under the null hypothesis. An important consequence of this assumption is that tests of difference in location (like a permutation t-test) require equal variance. In this respect, the permutation t-test shares the same weakness as the classical Student's t-test (the Behrens–Fisher problem). A third alternative in this situation is to use a bootstrap-based test. Good (2000)[citation needed] explains the difference between permutation tests and bootstrap tests the following way: "Permutations test hypotheses concerning distributions; bootstraps test hypotheses concerning parameters. As a result, the bootstrap entails less-stringent assumptions." Of course, bootstrap tests are not exact.
### Monte Carlo testing
An asymptotically equivalent permutation test can be created when there are too many possible orderings of the data to allow complete enumeration in a convenient manner. This is done by generating the reference distribution by Monte Carlo sampling, which takes a small (relative to the total number of permutations) random sample of the possible replicates. The realization that this could be applied to any permutation test on any dataset was an important breakthrough in the area of applied statistics. The earliest known reference to this approach is Dwass (1957).[8] This type of permutation test is known under various names: approximate permutation test, Monte Carlo permutation tests or random permutation tests.[9]
After $\scriptstyle\ N$ random permutations, it is possible to obtain a confidence interval for the p-value based on the Binomial distribution. For example, if after $\scriptstyle\ N = 10000$ random permutations the p-value is estimated to be $\scriptstyle\ \hat{p}=0.05$, then a 99% confidence interval for the true $\scriptstyle\ p$ (the one that would result from trying all possible permutations) is $\scriptstyle\ [0.044, 0.056]$.
On the other hand, the purpose of estimating the p-value is most often to decide whether $\scriptstyle\ p \leq \alpha$, where $\scriptstyle\ \alpha$ is the threshold at which the null hypothesis will be rejected (typically $\scriptstyle\ \alpha=0.05$). In the example above, the confidence interval only tells us that there is roughly a 50% chance that the p-value is smaller than 0.05, i.e. it is completely unclear whether the null hypothesis should be rejected at a level $\scriptstyle\ \alpha=0.05$.
If it is only important to know whether $\scriptstyle\ p \leq \alpha$ for a given $\scriptstyle\ \alpha$, it is logical to continue simulating until the statement $\scriptstyle\ p \leq \alpha$ can be established to be true or false with a very low probability of error. Given a bound $\scriptstyle\ \epsilon$ on the admissible probability of error (the probability of finding that $\scriptstyle\ \hat{p} > \alpha$ when in fact $\scriptstyle\ p \leq \alpha$ or vice versa), the question of how many permutations to generate can be seen as the question of when to stop generating permutations, based on the outcomes of the simulations so far, in order to guarantee that the conclusion (which is either $\scriptstyle\ p \leq \alpha$ or $\scriptstyle\ p > \alpha$) is correct with probability at least as large as $\scriptstyle\ 1-\epsilon$. ($\scriptstyle\ \epsilon$ will typically be chosen to be extremely small, e.g. 1/1000.) Stopping rules to achieve this have been developed[10] which can be incorporated with minimal additional computational cost. In fact, depending on the true underlying p-value it will often be found that the number of simulations required is remarkably small (e.g. as low as 5 and often not larger than 100) before a decision can be reached with virtual certainty.
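For comparison with the exact enumeration above, here is a sketch of the Monte Carlo variant (not from the original article): shuffle the pooled data N times and attach a rough binomial confidence interval to the estimated p-value. The data and N are arbitrary demo values.
```python
import numpy as np

rng = np.random.default_rng(2)
a = np.array([5.1, 4.8, 6.0, 5.5, 5.9])   # made-up group A
b = np.array([4.2, 4.9, 4.4, 5.0, 4.6])   # made-up group B
pooled = np.concatenate([a, b])
t_obs = abs(a.mean() - b.mean())

N = 10_000
hits = 0
for _ in range(N):
    rng.shuffle(pooled)                   # random relabelling of the pooled observations
    t = abs(pooled[:a.size].mean() - pooled[a.size:].mean())
    hits += (t >= t_obs)

p_hat = hits / N
se = np.sqrt(p_hat * (1 - p_hat) / N)     # normal approximation to the binomial
print(f"p ~ {p_hat:.4f}, 99% CI ~ [{p_hat - 2.58*se:.4f}, {p_hat + 2.58*se:.4f}]")
```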
### References
1. Quenouille M (1949) Approximate tests of correlation in time series. J Roy Stat Soc Series B 11: 68-84
2. Tukey JW (1958) Bias and confidence in not quite large samples (abstract). Ann Math Stats 29: 614
3. Mahalanobis PC (1946). Recent experiments in statistical sampling in the Indian Statistical Institute. J Roy Stat Soc 109: 325-370
4. Shao, J. and Tu, D. (1995). The Jackknife and Bootstrap. Springer-Verlag, Inc. pp. 281.
5. Shao, J. and Tu, D. (1995). The Jackknife and Bootstrap. Springer-Verlag, Inc.
6. Wolter, K.M. (2007). Introduction to Variance Estimation. Second Edition. Springer, Inc.
7. Meyer Dwass, "Modified Randomization Tests for Nonparametric Hypotheses", The Annals of Mathematical Statistics, 28:181-187, 1957.
8. Gandy, Axel (2009). "Sequential implementation of Monte Carlo tests with uniformly bounded resampling risk". Journal of the American Statistical Association 104 (488): 1504–1511.
## Bibliography
### Introductory statistics
• Good, P. (2005) Introduction to Statistics Through Resampling Methods and R/S-PLUS. Wiley. ISBN 0-471-71575-1
• Good, P. (2005) Introduction to Statistics Through Resampling Methods and Microsoft Office Excel. Wiley. ISBN 0-471-73191-9
• Hesterberg, T. C., D. S. Moore, S. Monaghan, A. Clipson, and R. Epstein (2005). Bootstrap Methods and Permutation Tests.[full citation needed]
• Wolter, K.M. (2007). Introduction to Variance Estimation. Second Edition. Springer, Inc.
#### Bootstrapping
• Efron, Bradley (1979). "Bootstrap methods: Another look at the jackknife", The Annals of Statistics, 7, 1-26.
• Efron, Bradley (1981). "Nonparametric estimates of standard error: The jackknife, the bootstrap and other methods", Biometrika, 68, 589-599.
• Efron, Bradley (1982). The jackknife, the bootstrap, and other resampling plans, In Society of Industrial and Applied Mathematics CBMS-NSF Monographs, 38.
• Diaconis, P.; Efron, Bradley (1983), "Computer-intensive methods in statistics," Scientific American, May, 116-130.
• Efron, Bradley; Tibshirani, Robert J. (1993). An introduction to the bootstrap, New York: Chapman & Hall, software.
• Davison, A. C. and Hinkley, D. V. (1997): Bootstrap Methods and their Application, software.
• Mooney, C Z & Duval, R D (1993). Bootstrapping. A Nonparametric Approach to Statistical Inference. Sage University Paper series on Quantitative Applications in the Social Sciences, 07-095. Newbury Park, CA: Sage.
• Simon, J. L. (1997): Resampling: The New Statistics.
#### Jackknife
• Berger, Y.G. (2007). A jackknife variance estimator for unistage stratified samples with unequal probabilities. Biometrika. Vol. 94, 4, pp. 953–964.
• Berger, Y.G. and Rao, J.N.K. (2006). Adjusted jackknife for imputation under unequal probability sampling without replacement. Journal of the Royal Statistical Society B. Vol. 68, 3, pp. 531–547.
• Berger, Y.G. and Skinner, C.J. (2005). A jackknife variance estimator for unequal probability sampling. Journal of the Royal Statistical Society B. Vol. 67, 1, pp. 79–89.
• Jiang, J., Lahiri, P. and Wan, S-M. (2002). A unified jackknife theory for empirical best prediction with M-estimation. The Annals of Statistics. Vol. 30, 6, pp. 1782–810.
• Jones, H.L. (1974). Jackknife estimation of functions of stratum means. Biometrika. Vol. 61, 2, pp. 343–348.
• Kish, L. and Frankel M.R. (1974). Inference from complex samples. Journal of the Royal Statistical Society B. Vol. 36, 1, pp. 1–37.
• Krewski, D. and Rao, J.N.K. (1981). Inference from stratified samples: properties of the linearization, jackknife and balanced repeated replication methods. The Annals of Statistics. Vol. 9, 5, pp. 1010–1019.
• Quenouille, M.H. (1956). Notes on bias in estimation. Biometrika. Vol. 43, pp. 353–360.
• Rao, J.N.K. and Shao, J. (1992). Jackknife variance estimation with survey data under hot deck imputation. Biometrika. Vol. 79, 4, pp. 811–822.
• Rao, J.N.K., Wu, C.F.J. and Yue, K. (1992). Some recent work on resampling methods for complex surveys. Survey Methodology. Vol. 18, 2, pp. 209–217.
• Shao, J. and Tu, D. (1995). The Jackknife and Bootstrap. Springer-Verlag, Inc.
• Tukey, J.W. (1958). Bias and confidence in not-quite large samples (abstract). The Annals of Mathematical Statistics. Vol. 29, 2, pp. 614.
• Wu, C.F.J. (1986). Jackknife, Bootstrap and other resampling methods in regression analysis. The Annals of Statistics. Vol. 14, 4, pp. 1261–1295.
### Monte Carlo methods
• George S. Fishman (1995). Monte Carlo: Concepts, Algorithms, and Applications, Springer, New York. ISBN 0-387-94527-X.
• James E. Gentle (2009). Computational Statistics, Springer, New York. Part III: Methods of Computational Statistics. ISBN 978-0-387-98143-7.
• Dirk P. Kroese, Thomas Taimre and Zdravko I. Botev. Handbook of Monte Carlo Methods, John Wiley & Sons, New York. ISBN 978-0-470-17793-8.
• Christian P. Robert and George Casella (2004). Monte Carlo Statistical Methods, Second ed., Springer, New York. ISBN 0-387-21239-6.
• Shlomo Sawilowsky and Gail Fahoome (2003). Statistics via Monte Carlo Simulation with Fortran. Rochester Hills, MI: JMASM. ISBN 0-9740236-0-4.
#### Permutation test
Original references:
• Fisher, R.A. (1935) The Design of Experiments, New York: Hafner
• Pitman, E. J. G. (1937) "Significance tests which may be applied to samples from any population", Royal Statistical Society Supplement, 4: 119-130 and 225-32 (parts I and II). JSTOR 2984124 JSTOR 2983647
• Pitman, E. J. G. (1938) "Significance tests which may be applied to samples from any population. Part III. The analysis of variance test", Biometrika, 29 (3-4): 322-335. doi:10.1093/biomet/29.3-4.322
Modern references:
• Collingridge, D.S. (2013). A Primer on Quantitized Data Analysis and Permutation Testing. Journal of Mixed Methods Research, 7(1), 79-95.
• Edgington. E.S. (1995) Randomization tests, 3rd ed. New York: Marcel-Dekker
• Good, Phillip I. (2005) Permutation, Parametric and Bootstrap Tests of Hypotheses, 3rd ed., Springer ISBN 0-387-98898-X
• Good, P. (2002) "Extensions of the concept of exchangeability and their applications", J. Modern Appl. Statist. Methods, 1:243-247.
• Lunneborg, Cliff. (1999) Data Analysis by Resampling, Duxbury Press. ISBN 0-534-22110-6.
• Pesarin, F. (2001). Multivariate Permutation Tests : With Applications in Biostatistics, John Wiley & Sons. ISBN 978-0471496700
• Welch, W. J. (1990) "Construction of permutation tests", Journal of the American Statistical Association, 85:693-698.
Computational methods:
• Mehta, C. R.; Patel, N. R. (1983). "A network algorithm for performing Fisher's exact test in r x c contingency tables", Journal of the American Statistical Association, 78(382):427–434.
• Metha, C. R.; Patel, N. R.; Senchaudhuri, P. (1988). "Importance sampling for estimating exact probabilities in permutational inference", Journal of the American Statistical Association, 83(404):999–1005.
• Gill, P. M. W. (2007). "Efficient calculation of p-values in linear-statistic permutation significance tests", Journal of Statistical Computation and Simulation , 77(1):55-61. doi:10.1080/10629360500108053
### Resampling methods
• Good, P. (2006) Resampling Methods. 3rd Ed. Birkhauser.
• Wolter, K.M. (2007). Introduction to Variance Estimation. 2nd Edition. Springer, Inc.
http://unapologetic.wordpress.com/2011/05/04/the-existence-and-uniqueness-theorem-of-ordinary-differential-equations-statement/?like=1&source=post_flair&_wpnonce=8cb43e0c56
# The Unapologetic Mathematician
## The Existence and Uniqueness Theorem of Ordinary Differential Equations (statement)
I have to take a little detour for now to prove an important result: the existence and uniqueness theorem of ordinary differential equations. This is one of those hard analytic nubs that differential geometry takes as a building block, but it still needs to be proven once before we can get back away from this analysis.
Anyway, we consider a continuously differentiable function $F:U\to\mathbb{R}^n$ defined on an open region $U\subseteq\mathbb{R}^n$, and the initial value problem:
$\displaystyle\begin{aligned}v'(t)&=F(v(t))\\v(0)&=a\end{aligned}$
for some fixed initial value $a\in U$. I say that there is a unique solution to this problem, in the sense that there is some interval $(-\epsilon,\epsilon)$ around $0$ and a unique function $v:(-\epsilon,\epsilon)\to\mathbb{R}^n$ satisfying both conditions.
In fact, more is true: the solution varies continuously with the starting point. That is, there is an interval $I$ around $0\in\mathbb{R}$, some neighborhood $W$ of $a$ and a continuously differentiable function $\psi:I\times W\to U$ called the “flow” of the system defined by the differential equation $v'=F(v)$, which satisfies the two conditions:
$\displaystyle\begin{aligned}\frac{\partial}{\partial t}\psi(t,u)&=F(\psi(t,u))\\\psi(0,u)&=u\end{aligned}$
Then for any $w\in W$ we can get a curve $v_w:I\to U$ defined by $v_w(t)=\psi(t,w)$. The two conditions on the flow then tell us that $v_w$ is a solution of the initial value problem with initial value $w$.
This will take us a short while, but then we can put it behind us and get back to differential geometry. Incidentally, the approach I will use generally follows that of Hirsch and Smale.
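As a quick numerical illustration (not part of the original post), the sketch below takes the concrete field $F(v)=-v$, whose flow is $\psi(t,u)=ue^{-t}$, and runs the standard Picard iteration $v_{k+1}(t)=a+\int_0^t F(v_k(s))\,ds$ with a trapezoid rule; the grid and the number of iterations are arbitrary choices.
```python
import numpy as np

# Concrete example: F(v) = -v with initial value a = 1, whose flow is psi(t, u) = u * exp(-t).
F = lambda v: -v
a = 1.0
t = np.linspace(0.0, 1.0, 201)

v = np.full_like(t, a)  # start the Picard iteration from the constant curve v_0(t) = a
for _ in range(20):
    # v_{k+1}(t) = a + integral_0^t F(v_k(s)) ds, via a cumulative trapezoid rule
    integrand = F(v)
    integral = np.concatenate(([0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2 * np.diff(t))))
    v = a + integral

print("max error vs the exact flow u*exp(-t):", np.max(np.abs(v - a * np.exp(-t))))
```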
Posted by John Armstrong | Analysis, Differential Equations
http://math.stackexchange.com/questions/75371/the-pigeon-hole-principle-and-the-finite-subgroup-test/75376
# The Pigeon Hole Principle and the Finite Subgroup Test
I am currently reading this document and am stuck on Theorem 3.3 on page 11:
Let $H$ be a nonempty finite subset of a group $G$. Then $H$ is a subgroup of $G$ if $H$ is closed under the operation of $G$.
I have the following questions:
1.
It suffices to show that $H$ contains inverses.
I don't understand why that alone is sufficient.
2.
Choose any $a$ in $G$...then consider the sequence $a,a^2,..$ This sequence is contained in $H$ by the closure property.
I know that if $G$ is a group, then $ab$ is in $G$ for all $a$ and $b$ in $G$. But I don't understand why the sequence has to be contained in $H$ by the closure property.
3.
By the Pigeonhole Principle, since $H$ is finite, there are distinct $i,j$ such that $a^i=a^j$.
I understand the Pigeonhole Principle (as explained on page 2) and why $H$ is finite, but I don't understand how the Pigeonhole Principle was applied to arrive at $a^i=a^j$.
4. Reading the proof, it appears to me that $H$ = $\left \langle a \right \rangle$ where $a\in G$. Is this true?
-
I think #2 is supposed to read 'Choose any $a$ in $H$...' – REDace0 Oct 24 '11 at 13:00
@REDace0 After reading the answers to this question, I also think that it should read 'Choose any $a$ in $H$'. I guess it is a typo in the lecture then? – Sara Oct 31 '11 at 16:08
## 5 Answers
Question 1. Suppose $H$ is a nonempty subset of a finite (multiplicative) group $G$, such that whenever $a,b \in H$, the element $a \cdot b$ is also in $H$. We want to show that $H$ is a subgroup of $G$. That is, we want to show that $(H, \cdot)$ is a group. We verify the group axioms for $H$.
1. Closure under the $\cdot$ operation. If $a,b$ are elements of $H$, their product $a \cdot b$ is guaranteed to lie in $H$. So $H$ is indeed closed under $\cdot$. So there's nothing to prove.
2. Associativity. Suppose $a,b,c$ are elements of $H$, and hence of the group $G$. Then since the $\cdot$ is associative (remember that $G$ is a group), we have $a \cdot (b \cdot c) = (a \cdot b) \cdot c$. Thus the subgroup $H$ inherits the associativity of $\cdot$ from $G$. [Still nothing to prove! Don't we wish all proofs were as simple? :-)]
3. Existence of identity element $e$ in $H$. In this case, there is something to prove, but we will come to this after looking at inverses.
4. Existence of inverses. Again, there's something to prove. This was the point that was explicitly shown in great detail in the lecture (through the pigeonhole principle argument).
Coming back to item (3.), note that we still have not established that the identity element $e$ is in $H$. Let $a$ be an arbitrary element of $H$ (here we need the hypothesis that $H$ is nonempty). From item (4.), the inverse $a^{-1}$ is in $H$. Since $H$ is closed w.r.t. $\cdot$, it follows that $e = a \cdot a^{-1} \in H$ as well. $\Box$
Question 2. Suppose $a$ is in $H$. Then $a^2 = a \cdot a$ is also in $H$ by closure property. Then $a^3 = a \cdot a^2$ is also in $H$. Similarly, $a^4 = a \cdot a^3$ is also in $H$, and so on. More generally, we can use mathematical induction to prove the proposition that $a^n$ is in $H$ for each natural number $n$. (For the induction step, we need to show that if $a^{n-1} \in H$, then $a^n \in H$ as well. But this is true because $a^n = a \cdot a^{n-1}$ and $H$ is closed under the $\cdot$ operation.)
Question 3. Consider the sequence $\langle a, a^2, a^3, \ldots, a^n, \ldots \rangle$ of infinite length. Since the group $G$ is finite, there are only a finite number of distinct terms in that sequence. Therefore, some two terms, say the $i^{th}$ and $j^{th}$ terms, are equal. That is, $a^i = a^j$.
Question 4. That conclusion is false. What is true is that if $a$ is in $H$, then $\langle a \rangle \subseteq H$.
-
For question 1, you might note that the place where you implicitly used the hypothesis that $H$ is not empty is where you said "Suppose $a\in H$." – Gerry Myerson Oct 24 '11 at 12:27
@Gerry, Ah, good point. Thanks. – Srivatsan Oct 24 '11 at 12:29
I like this answer the best because it is presented in a neat manner as it clearly identify the question that is being answered and I know where an answer begins and ends.Thank you, Srivatsan for your effort. – Sara Oct 31 '11 at 16:14
You are welcome, @Sara. – Srivatsan Nov 1 '11 at 8:39
To show $H$ is a subgroup you must show it's closed, contains the identity, and contains inverses. But if it's closed, non-empty, and contains inverses, then it's guaranteed to contain the identity, because it's guaranteed to contain something, say, $x$, then $x^{-1}$, then $xx^{-1}$, which is the identity.
$H$ is assumed closed, so if it contains $a$ and $b$, it contains $ab$. But $a$ and $b$ don't have to be different: if it contains $a$, it contains $a$ and $a$, so it contains $aa$, which is $a^2$. But then it contains $a$ and $a^2$ so it contains $aa^2$ which is $a^3$. Etc.
So it contains $a,a^2,a^3,a^4,\dots$. $H$ is finite, so these can't all be different, so some pair is equal, that is, $a^i=a^j$ for some $i\ne j$.
As for your last question, do you know any example of a group with a non-cyclic subgroup?
-
I am guessing that by "closed under the operation of $G$" they mean closed under the multiplication on $G$. Then $H$ is a subgroup if it also contains inverses, since it then also contains the identity. Since $H$ is finite, then so must be $\{a^k:k\in\mathbb{Z}^+\}$ since it is contained in $H$. Thus, for some $i>j$, we must have $a^i=a^j$. Thus, $a^{i-j}$ must be the identity and $a^{i-j-1}=a^{-1}$.
$H$ is not necessarily cyclic. Here we have shown that $\{a^k:k\in\mathbb{Z}^+\}$ is a cyclic subgroup, but there is no guarantee that this is all of $H$. We may need to perform the above process on any $b\in H\setminus\{a^k:k\in\mathbb{Z}^+\}$, etc.
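A small computational illustration of this argument (not part of the original answer): take the nonzero residues mod 13 under multiplication, pick an element $a$, and list its powers until they repeat; the last new power is the identity and the one before it is $a^{-1}$, exactly as in the proof. The modulus and the element are arbitrary choices.
```python
# In a finite group the powers of a repeat, a^(i-j) = e and a^(i-j-1) = a^(-1).
# Group used here: nonzero residues mod 13 under multiplication (an arbitrary example).
p = 13
a = 6

powers = []          # a^1, a^2, ...
x = a
while x not in powers:
    powers.append(x)
    x = (x * a) % p  # closure: multiplying elements stays in the set

order = len(powers)
identity = powers[-1]                      # a^order, which equals 1
inverse = powers[-2] if order > 1 else a   # a^(order-1) = a^(-1)
print(f"powers of {a} mod {p}: {powers}")
print(f"a^{order} = {identity} (identity), a^{order-1} = {inverse}, check: {a * inverse % p}")
```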
-
I like this answer because it explains why my conclusion on Question 4 is wrong as opposed to just providing counter examples. – Sara Oct 31 '11 at 16:24
Let's start by 1. To show that $H$ is a subgroup you need three things
1. $e$ (the neutral element) is in $H$.
2. For every $x,y$ in $H$ $xy$ is in $H$ too.
3. For every $a \in H$ $a^{-1}$ (the inverse of $a$) is in $H$ too.
So by the first hypothesis on $H$, (2) is true. It remains to show (1) and (3). But if you have (3) and since $H$ is nonempty, there is at least one $a$ in $H$. By (3) (which we assume) and by (2) which is known already $a^{-1}$ belongs to $H$ and $aa^{-1}$ belongs to $H$. So $e=aa^{-1}$ belongs to $H$ too. Hence (1) is satisfied.
To show the second property, namely that if $a$ is in $H$ then the sequence $a,a^2,\dots$ is in $H$ by the closure property observe that if $a$ is in $H$ then necessarily $a^2=aa$ is in $H$ too by the closure property, hence also $a^3=a a^2$ and so on so the whole sequence lies in $H$.
For the third question, it suffices to consider the sequence $a, a^2, a^3, \dots$. By the second property, this sequence lies in $H$; but since $H$ is finite and the sequence is infinite, if the cardinality of $H$ is $n$ then the $n+1$ terms $a, a^2, \dots, a^{n+1}$ cannot all be distinct. It follows that there must be one bin (as in the pigeonhole principle) containing two of them, that is, $a^i=a^j$ for $i,j$ distinct.
Finally in the fourth question the statement is wrong. There are many non-cyclic finite groups for instance $\mathbb{Z} / 2\mathbb{Z} \times \mathbb{Z}/ 2\mathbb{Z}$.
-
I like this answer because it helped me understand exactly where the Pigeonhole Principle comes in for Question 3. – Sara Oct 31 '11 at 16:20
Let $a\in H$ and consider the map $\phi: H \to G$ given by $\phi(x)=ax$. Since $H$ is closed under the operation of $G$, we have that $\phi$ actually goes $H\to H$. Since the cancellation law holds for $G$, the map $\phi$ is injective. Thus, $\phi$ is a bijection because $H$ is finite. In particular, $\phi$ is surjective and there is $u\in H$ such that $a = \phi(u) = au$. This implies that $u=e$ and so $e\in H$. Now there is $b\in H$ such that $e=\phi(b)$. Of course, $b=a^{-1}$ and so $a^{-1}\in H$. We have proved that $H$ contains the identity element and is closed under inverses. Since it is closed under multiplication, $H$ is a subgroup.
-
Interesting answer! – user17090 Oct 24 '11 at 14:04
Unfortunately, I am not able to follow this proof as it makes use of concepts not covered in the lecture. – Sara Oct 31 '11 at 16:25
@Sara, which concepts were not covered in the lecture? The proof relies on two facts: cancellation law for groups, which I assume has been covered in the lectures, and the fact that injective=surjective for functions between finite sets of the same cardinality, which is a basic fact in set theory. – lhf Oct 31 '11 at 16:28
@lhf, I am actually studying linear algebra now and was trying to understand the formula for determinants. Specifically, I wanted to know why n!permutations will have n!/2 odd permutations and n!/2 even permutations. The answer is in Lecture 12, Theorem 5.7 but I started from Lecture 1 since I couldn't understand it. Hence, the reason I am not familiar with the cancellation law for groups. My module did briefly touch on set theory but did not mention injective or surjective functions. But I will definitely revisit your answer once I have sufficient background knowledge to comprehend it. – Sara Oct 31 '11 at 17:30
http://physics.stackexchange.com/questions/41433/how-to-simulate-wave-interference
# how to simulate wave interference [closed]
I need to simulate wave interference with reflection from surfaces.
What formulas do I need to use?
What differential equation do I need to solve? Could someone help me out?
-
What do you mean by simulate? Write software to do this? What fluid do you want to deal with - inviscid or viscous - and do you wish to include dissipation in your simulation? – Killercam Oct 22 '12 at 16:39
– Bernhard Oct 22 '12 at 18:22
@Killercam yes, I need to create software. – cristaloleg Oct 23 '12 at 4:56
@Bernhard your link is 404 – cristaloleg Oct 23 '12 at 4:56
@cristaloleg Yep, it was removed, but the question was identical to yours. – Bernhard Oct 23 '12 at 6:00
## closed as not a real question by Qmechanic♦, Manishearth♦, Ϛѓăʑɏ βµԂԃϔ, David Zaslavsky♦Dec 16 '12 at 7:31
It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, see the FAQ.
## 1 Answer
There are many different fluid systems that can produce waves and the associated interference: surface water waves, gas dynamics, etc. Each of these are treated very differently from a computational point of view - each using different numerical methods.
For your fairly poorly defined question, I would suggest you look at ideal gas dynamics. That is non-dissipative, non-viscous gas dynamics. The system of equations used to study such flows are quasi-linear hyperbolic system of PDEs, or conservation laws. In one-dimension (which will be the simplest case you want to consider as a starter) they take the generic form
$$\partial_{t} \mathbf{U} + \mathbf{\mathbf{A}}(\mathbf{U})\partial_{x}\mathbf{U} = 0,$$
where $\mathbf{U} = (\rho, \rho u, e)^{\mathsf{T}}$ is the vector of conserved quantities (for mass, momentum and energy) and $\mathbf{\mathbf{A}}(\mathbf{U})$ is the Jacobian matrix for the system. The eigenvalues of the Jacobian matrix provide the wavespeeds for the given system under consideration. The above equation can be recast using the vector of conservative fluxes $\mathbf{F}(\mathbf{U})$ via
$$\partial_{t} \mathbf{U} + \partial_{x} \mathbf{F}(\mathbf{U}) = 0.$$
Now, for one-dimensional gas dynamics this system can be solved analytically, but it is not easy and it is mathematically involved. In two dimensions there are exact solutions but they require an iterative solver. The most common way to get solutions for such systems is to use something called Godunov's theorem and an associated Riemann solver; these methods provide a very fast and accurate way of resolving the flow AND wave solutions for such systems.
Even to create your own one-dimensional solver for gas dynamics will not be easy. For two dimensions (with the reflection from walls that you mention above) will be another level up. My advice would be to first read this paper (from a masters thesis). This will provide a tutorial for creating your own one-dimensional solver for gas dynamics.
Good luck.
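Not part of the answer above, but a concrete starting point before attempting the Euler equations: a first-order upwind (Godunov-type) finite-volume scheme for the simplest one-dimensional conservation law, linear advection $\partial_t u + a\,\partial_x u = 0$, with periodic boundaries. All numerical choices (grid, wave speed, initial pulse) are arbitrary demo values.
```python
import numpy as np

a = 1.0                              # constant wave speed (demo value)
nx, L = 200, 1.0
dx = L / nx
x = (np.arange(nx) + 0.5) * dx       # cell centres
u = np.exp(-200 * (x - 0.3) ** 2)    # initial pulse (demo value)

dt = 0.9 * dx / abs(a)               # CFL-limited time step
t, t_end = 0.0, 0.5
mass0 = u.sum() * dx

while t < t_end:
    step = min(dt, t_end - t)
    # upwind interface fluxes: F_{i-1/2} = a*u_{i-1} for a > 0, and a*u_i for a < 0
    flux = a * np.roll(u, 1) if a > 0 else a * u
    u = u - step / dx * (np.roll(flux, -1) - flux)
    t += step

print("pulse has advected by roughly", a * t_end)
print("total mass change (should be ~0 with periodic boundaries):", u.sum() * dx - mass0)
```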
-
http://math.stackexchange.com/questions/262701/how-to-obtain-prove-5-stencil-formula-for-2nd-derivative
# How to obtain (prove) 5-stencil formula for 2nd derivative?
My question seems pretty easy. Prove the correctness of the following approximation:
$$f''(x)= \frac{-f(x-2h)+16f(x-h)-30f(x)+16f(x+h)-f(x+2h)}{12h^2}$$
I was deeply saddened upon stumbling on this and finding myself seemingly unable to solve it on my own. I also failed to find a proof anywhere online, only the final answer.
The way I tried to do it is via the pretty common Taylor series expansion:
$$f(x+h) = f(x) + f'(x)h+\frac{1}{2!}f''(x)h^2+\frac{1}{3!}f^{(3)}(x)h^3 + \cdots + \frac{1}{n!}f^{(n)}(x)h^n\quad (1)$$
I cut it off after $f^{(3)}$.
I use this formula to get the rest of the points needed for the 5-point stencil, simply by substituting $h$ with $\{-h, 2h, -2h\}$. Thus I get:
$$f(x-h) = f(x) - f'(x)h+\frac{1}{2!}f''(x)h^2-\frac{1}{3!}f^{(3)}(x)h^3\quad (2)$$ $$f(x+2h) = f(x) + 2f'(x)h+\frac{4}{2!}f''(x)h^2+\frac{8}{3!}f^{(3)}(x)h^3\quad (3)$$ $$f(x-2h) = f(x) - 2f'(x)h+\frac{4}{2!}f''(x)h^2-\frac{8}{3!}f^{(3)}(x)h^3\quad (4)$$
When I use equations $(1)$ and $(2)$ and add them side by side, I get the 3-point formula:
$$f(x+h) + f(x-h) = 2f(x) + f''(x)h^2\quad (5)$$
$$f''(x) = \frac{f(x-h) - 2f(x) + f(x+h)}{h^2}$$
However when I try to do this with all the equations $(1)$-$(4)$ I get:
$$f(x+h) + f(x-h) = 2f(x) + f''(x)h^2 \quad (6)$$
$$f(x+2h) + f(x-2h) = 2f(x) +4f''(x)h^2\quad (7)$$
Then I can try subtracting $(6)$ from $(7)$ and I get:
$$f(x+h) + f(x-h) - f(x+2h) - f(x-2h) = 3 f''(x)h^2\quad (8)$$
which gives
$$f''(x) = \frac{f(x+h) + f(x-h) - f(x+2h) - f(x-2h) } {3 h^2} \quad(9)$$
This is clearly different from what I am expecting. Also doing $(6)$+$(7)$ doesn't seem to yield the correct coefficients, even though it preserves the $f(x)$ term.
Could you point out the flaw in the approach and provide the correct reasoning or any materials? All I found were very general treatments or final answers with no explicit transformations. I feel kinda stupid being unable to get it right, but I can't spot the flaw.
-
Please check your first formula for typos: $dh$ for $h$, $16(x-dh)$ where presumably $16(x+h)$ is meant. – Christian Blatter Dec 20 '12 at 17:39
@ChristianBlatter Yes, sorry. h and dh mean exactly the same. Sorry for the inconsistency. I edited the question to unify it. – luk32 Dec 20 '12 at 17:43
– Amzoti Dec 20 '12 at 17:57
## 2 Answers
For more familiar notation, I write e.g. $x\pm h$ instead of $x \pm dh$ as in your question.
Accounting for typos (please see @ChristianBlatter 's comment on your question), note that the Taylor expansions you wrote out imply the following:
$16 f(x+h) + 16 f(x-h) = 32 f(x) + 16 h^2 f''(x)$
and
$f(x+2h) + f(x-2h) = 2f(x) + 4 h^2 f''(x).$
Subtracting the second equation from the first yields
$16 f(x+h) + 16 f(x-h) - f(x+2h) - f(x-2h) = 30 f(x) + 12 h^2 f''(x)$,
which can be solved for $f''(x)$ by moving $30 f(x)$ to the other side and dividing by $12 h^2$. We get
$f''(x) = \dfrac{16 f(x+h) + 16f(x-h) - 30 f(x) - f(x-2h) - f(x+2h)}{12 h^2}$,
which is perhaps that which you were looking for...
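A quick numerical check of the result (not part of the original answer), comparing the three-point and five-point stencils on $f(x)=\sin x$, whose exact second derivative is $-\sin x$; the evaluation point and step size are arbitrary choices.
```python
import numpy as np

f = np.sin
x0, h = 0.7, 0.1
exact = -np.sin(x0)  # f''(x0) for f = sin

three_pt = (f(x0 - h) - 2 * f(x0) + f(x0 + h)) / h**2
five_pt = (-f(x0 - 2*h) + 16*f(x0 - h) - 30*f(x0) + 16*f(x0 + h) - f(x0 + 2*h)) / (12 * h**2)

print("three-point error:", abs(three_pt - exact))   # roughly O(h^2)
print("five-point  error:", abs(five_pt - exact))    # roughly O(h^4), much smaller
```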
-
Ha! I knew it was stupid-level easy. Thank you. However, why use such weights, and is the formula (9) wrong or less accurate? It just looks like dropping in some weights to increase the importance of points closer to $x$. Am I right? – luk32 Dec 21 '12 at 5:00
@luk32: The question in your comment is answered by the last sentence of my answer. – John Bentin Dec 21 '12 at 9:56
@JohnBentin Yeah I got it after a while. Precisely after I read your answer for the 3rd time and already had my morning coffee. Too bad I can't accept both answers, but I upvoted yours too as it helped me understand the difference. Thank you! – luk32 Dec 21 '12 at 9:59
You are on the right track, but you need to take the Taylor series up to the term in $h^4$. Add the series for $f(x+h)$ to the one for $f(x-h)$, which cancels the terms in $h$ and $h^3$, to obtain an expression involving only even powers of $h$. Do the same for the series for $f(x+2h)$ and $f(x-2h)$ to get another expression involving $h^2$ and $h^4$. Now take $16$ times the first expression from the latter one to eliminate $h^4$, and rearrange algebraically to get the required result.
This method shows that the result is accurate up to the fourth order of Taylor approximation. You could get the same result by choosing suitable linear combinations of the second-order approximations, but that wouldn't demonstrate that the accuracy is any better than second-order.
-
http://mathhelpforum.com/new-users/205063-how-would-you-solve-2-practice-exam.html
# Thread:
1. ## How would you solve #2 on this practice exam
http://faculty.uml.edu/rbrent/131/spE1.pdf
2. ## Re: How would you solve #2 on this practice exam
a)
Find common denominator, multiply both sides by denominator. Solve basic equation for x. The answer is 1/2 if you need a reference.
b)
Add 5 to both sides, square both sides, add 4 to both sides, take square root.
3. ## Re: How would you solve #2 on this practice exam
2. c) is a quadratic equation if you let $\displaystyle \begin{align*} x = s^2 \end{align*}$, giving $\displaystyle \begin{align*} x^2 + 3x - 6 = 0 \end{align*}$. Solve for $\displaystyle \begin{align*} x \end{align*}$ using the Quadratic Formula, then use this to evaluate $\displaystyle \begin{align*} s \end{align*}$.
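Not part of the original thread, but a short check of that substitution, assuming (as the reply above states) that part c) reduces to $x^2 + 3x - 6 = 0$ with $x = s^2$.
```python
import math

# x^2 + 3x - 6 = 0 with x = s^2 (the substitution described above)
a, b, c = 1, 3, -6
disc = b**2 - 4*a*c
roots_x = [(-b + math.sqrt(disc)) / (2*a), (-b - math.sqrt(disc)) / (2*a)]

# only nonnegative x give real s, and each such x yields s = +sqrt(x) and s = -sqrt(x)
roots_s = [sign * math.sqrt(x) for x in roots_x if x >= 0 for sign in (+1, -1)]
print("x roots:", roots_x)      # (-3 +/- sqrt(33)) / 2
print("real s roots:", roots_s)
```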
http://mathoverflow.net/questions/95399/what-is-known-about-formality-of-flag-varieties
## What is known about formality of flag varieties?
Let $G$ be a connected, complex reductive algebraic group and $X=G/P$ a (partial) flag variety. For example by "Real homotopy theory of Kähler manifolds", we know that its cohomology with real coefficients is formal. I have a bunch of questions about other coefficients:
Does real formality automatically imply formality over $\mathbb Q$? Probably not in general, but rational formality of flag varieties should be known right?
Are flag varieties even formal over $\mathbb Z$? I guess not, since the cohomology is more complicated in that case. Are there counterexamples?
What about $\mathbb {CP}^n$? I guess it is formal over $\mathbb Z$, is there a proof written somewhere?
Are flag varieties formal over the integers with say the order of the Weyl group invertible? In this case at least the cohomology ring is isomorphic to the coinvariants.
-
## 1 Answer
Formality over $\mathbb{C}$ implies formality over $\mathbb{Q}$. This is Theorem 12.1. in Sullivan ''Infinitesimal computations in topology''.
-
Thanks, this is good to know! – Jan Weidner Apr 30 2012 at 15:18
Sullivan's proof uses algebraic groups. There is another proof, without algebraic geometry, in the book "Algebraic models in Geometry", by Felix, Oprea, Tanre. – Johannes Ebert Apr 30 2012 at 16:54
http://unapologetic.wordpress.com/2010/05/26/more-properties-of-integrals/?like=1&source=post_flair&_wpnonce=d0199d491e
# The Unapologetic Mathematician
## More Properties of Integrals
Today we will show more properties of integrals of simple functions. But the neat thing is that they will follow from the last two properties we showed yesterday. And so their proofs really have nothing to do with simple functions. We will be able to point back to this post once we establish the same basic linearity and order properties for the integrals of wider and wider classes of functions.
First up: if $f$ and $g$ are integrable simple functions with $f\geq g$ a.e. then
$\displaystyle\int f\,d\mu\geq\int g\,d\mu$
Indeed, the function $f-g$ is nonnegative a.e., and so we conclude that
$\displaystyle0\leq\int f-g\,d\mu=\int f\,d\mu-\int g\,d\mu$
Next, if $f$ and $g$ are integrable simple functions then
$\displaystyle\int\lvert f+g\rvert\,d\mu\leq\int\lvert f\rvert\,d\mu+\int\lvert g\rvert\,d\mu$
Here we use the triangle inequality $\lvert f+g\rvert\leq\lvert f\rvert+\lvert g\rvert$ and invoke the previous result.
Now, if $f$ is an integrable simple function then
$\displaystyle\left\lvert\int f\,d\mu\right\rvert\leq\int\lvert f\rvert\,d\mu$
The absolute value $\lvert f\rvert$ is greater than both $f$ and $-f$, and so we find
$\displaystyle\begin{aligned}\int f\,d\mu\leq&\int\lvert f\rvert\,d\mu\\-\int f\,d\mu\leq&\int\lvert f\rvert\,d\mu\end{aligned}$
which implies the inequality we asserted.
As a heuristic, this last result is sort of like the triangle inequality to the extent that the integral is like a sum; adding inside the absolute value gives a smaller result than adding outside the absolute value. However, we have to be careful here; the integral we’re working with is not the limit of a sum like the Riemann integral was. In fact, we have no reason yet to believe that this integral and the Riemann integral have all that much to do with each other. But that shouldn’t stop us from using this analogy to remember the result.
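Not part of the original post: a toy numerical check of the inequality $\left\lvert\int f\,d\mu\right\rvert\leq\int\lvert f\rvert\,d\mu$, representing an integrable simple function by (value, measure) pairs; the numbers are made up.
```python
# A simple function as pairs (value on a set, measure of that set); made-up numbers.
f = [(3.0, 0.5), (-2.0, 1.25), (1.0, 0.25)]

integral_f = sum(v * m for v, m in f)            # integral of f
integral_abs_f = sum(abs(v) * m for v, m in f)   # integral of |f|

print(abs(integral_f), "<=", integral_abs_f, ":", abs(integral_f) <= integral_abs_f)
```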
Finally, if $f$ is an integrable simple function, $E$ is a measurable set, and $\alpha$ and $\beta$ are real numbers so that $\alpha\leq f(x)\leq\beta$ for almost all $x\in E$, then
$\displaystyle\alpha\mu(E)\leq\int\limits_Ef\,d\mu\leq\beta\mu(E)$
Indeed, the assumed inequality is equivalent to the assertion that $\alpha\chi_E\leq f\chi_E\leq\beta\chi_E$ a.e., and so — as long as $\mu(E)<\infty$ — we conclude that
$\displaystyle\int\alpha\chi_E\,d\mu\leq\int f\chi_E\,d\mu\leq\int\beta\chi_E\,d\mu$
which is equivalent to the above. On the other hand, if $\mu(E)=\infty$, then $f$ must be zero on all but a portion of $E$ of finite measure or else it wouldn’t be integrable. Thus, in order for the assumed inequalities to hold, we must have $\alpha\leq0$ and $\beta\geq0$. The asserted inequalities are then all but tautological.
Posted by John Armstrong | Analysis, Measure Theory
## Comments
1. Many of the properties of integrals satisfy those of a measure. Could we not view the integral as a special kind of measure?
Comment by ip | May 30, 2010 | Reply
2. We could, ip. Look at the next entry, on indefinite integrals, to see where that line of thinking goes next.
Comment by John Armstrong | May 30, 2010 | Reply
http://mathoverflow.net/questions/89324
## Are all zeros of $\Gamma(s) \pm \Gamma(1-s)$ on a line with real part = $\frac12$ ?
The function $\Gamma(s)$ does not have zeros, but $\Gamma(s)\pm \Gamma(1-s)$ does.
Ignoring the real solutions for now and assuming $s \in \mathbb{C}$ then:
$\Gamma(s)-\Gamma(1-s)$ yields zeros at:
$\frac12 \pm 2.70269111740240387016556585336 i$
$\frac12 \pm 5.05334476784919736779735104686 i$
$\frac12 \pm 6.82188969510663531320292827393 i$
$\frac12 \pm 8.37303293891455628139008877004 i$
$\frac12 \pm 9.79770751746885191388078483695 i$
$\frac12 \pm 11.1361746342106720656243966380 i$
$\frac12 \pm 12.4106273718343980402685363665 i$
$\dots$
and
$\Gamma(s)+\Gamma(1-s)$ gives zeros at:
$\frac12 \pm 4.01094805906156869492043027819 i$
$\frac12 \pm 5.97476992595365858561703252235 i$
$\frac12 \pm 7.61704024553573658642606787126 i$
$\frac12 \pm 9.09805003388841581320246381948 i$
$\frac12 \pm 10.4760650707765536619292369200 i$
$\frac12 \pm 11.7804020877663106830617193188 i$
$\frac12 \pm 13.0283749883477570386353012761 i$
$\dots$
By multiplication, both functions can be combined into: $\Gamma(s)^2 - \Gamma(1-s)^2$
After playing with the domain of $s$ and inspecting the associated 3D output charts, I now dare to conjecture that all 'complex' zeros of this function must have a real part of $\frac12$.
Has this been proven? If not, appreciate any thoughts on possible approaches.
Thanks!
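As a numerical illustration of where these zeros come from (a sketch, not part of the original question), one can locate them with mpmath's root finder, starting from rough guesses on the critical line taken from the list above:

```python
# Sketch (not from the original post): numerically locate the first few
# zeros of Gamma(s) - Gamma(1-s) near the critical line with mpmath.
from mpmath import mp, gamma, findroot, mpc

mp.dps = 30  # working precision (decimal digits)

def F(s):
    return gamma(s) - gamma(1 - s)

# Rough imaginary parts taken from the list above, used as starting guesses.
guesses = [2.7, 5.05, 6.82, 8.37]
for t0 in guesses:
    root = findroot(F, mpc(0.5, t0))
    print(root)   # should match the quoted values, with real part 1/2
```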
Should be an easy consequence of Euler's reflection formula, I'd guess. $\Gamma(z) \Gamma(1-z) = \pi / \sin(\pi z)$. – Marty Feb 23 2012 at 20:59
@Marty: It does not seem so easy. – GH Feb 23 2012 at 21:27
Indeed. I rushed to judgment. And my rushing is not leading to an "easy" consequence. I'm getting more interested in the question now! – Marty Feb 23 2012 at 21:47
I guess something like this works: apply complex Stirling approximation to $|\Gamma(s)| = |\Gamma(1-s)|$ to show no non-real zeros with real part outside $[0,1]$, then contour integrate to count zeros of $\Gamma(s) \pm \Gamma(1-s)$ in a rectangle $[0,1]+i[-T,T]$, and compare with the number of roots of real part $1/2$ that can again be estimated by Stirling. – Noam D. Elkies Feb 23 2012 at 23:52
How are my approximations of zeros with 5000 digits precision explained? Checked with precision 100 in sage, pari and maple. – joro Apr 9 2012 at 9:07
## 6 Answers
Here is a partial answer, which shows that there are no zeros off the critical line for $z = s + i t$ with $|t| \ge 4$.
Let $\psi(z):= \Gamma'(z)/\Gamma(z)$ be the digamma function. If $z = s + i t$, then $$\frac{d}{ds} |\Gamma(z)|^2 = \frac{d}{ds} \Gamma(z) \Gamma(\overline{z}) = |\Gamma(z)|^2 \left(\psi(z) + \psi(\overline{z})\right).$$ (Both $\Gamma(z)$ and $\psi(z)$ are real for real $z$, and so satisfy the Schwartz reflection principle.) The product formula for the Gamma function implies that there is an identity $$\psi(z) = - \ \gamma + \sum_{n=1}^{\infty} \left(\frac{1}{n} - \frac{1}{z + n} \right) = 1 - \gamma + \sum_{n=1}^{\infty} \left(\frac{1}{n + 1} - \frac{1}{z + n} \right),$$ and hence $$\psi(z) + \psi(\overline{z}) = 2(1 - \gamma) + \sum_{n=1}^{\infty} \left(\frac{2}{n + 1} - \frac{1}{z + n} - \frac{1}{\overline{z} + n} \right).$$ Suppose that $z = s + i t$, and that $s \in [0,1]$. Then $$\frac{2}{n + 1} - \frac{1}{s + i t + n} - \frac{1}{s - i t + n} = \frac{2(s^2 + t^2 + n s - s - n)}{(1+n)(n^2 + 2 n s + s^2 + t^2)} \ge \frac{-2}{(n^2 + t^2)}.$$ (The last inequality comes from ignoring all the positive terms in the numerator, and then setting $s = 0$ in the denominator.) It follows that $$\psi(z) + \psi(\overline{z}) \ge 2(1 - \gamma) - \sum_{n=1}^{\infty} \frac{2}{n^2 + t^2},$$ which is positive for $t$ big enough, e.g. $|t| \ge 4$. On the other hand, $$\psi(z + 1) + \psi(\overline{z} + 1) = \psi(z) + \psi(\overline{z}) + \frac{1}{z} + \frac{1}{\overline{z}} = \psi(z) + \psi(\overline{z}) + \frac{2s}{|z|^2}.$$ In particular, if $\psi(z) + \psi(\overline{z})$ is positive for $s \in [0,1]$ for some particular $t$, it is positive for all $s$ and that particular $t$. It follows that, if $|t| > 4$, that $|\Gamma(s + it)|^2$ is increasing as a function of $s$. In particular, if $|t| > 4$, then any equality $$|\Gamma(s + i t)| = |\Gamma(1 - (s + i t))| = |\Gamma(1 - s + i t)|$$ implies that $s = 1/2$.
Since this method applies equally well to $\Gamma(z) + \theta \cdot \Gamma(1 - z)$ for any $|\theta| = 1$, it is not sufficient to answer the question.
(NDE's comment seems to suggest one can reduce to the case of $z$ with real part in $[0,1]$, which is handled by this method, but I don't understand the remark. I made this community wiki if someone wants to complete the argument.)
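A quick numerical sanity check of the conclusion (a sketch, not part of the answer): since $\psi(\overline{z}) = \overline{\psi(z)}$, the quantity $\psi(z) + \psi(\overline{z})$ is just $2\,\mathrm{Re}\,\psi(z)$, and sampling it for $s \in [0,1]$ at $t = 4$ with mpmath confirms positivity.

```python
# Sketch: sample psi(z) + psi(conj(z)) = 2*Re(psi(z)) on z = s + 4i, s in [0,1],
# and check that it is positive, as the argument above concludes.
from mpmath import mp, psi, mpc, re

mp.dps = 25
t = 4
worst = min(2 * re(psi(0, mpc(s / 100.0, t))) for s in range(101))
print(worst)      # positive (roughly 2*log|z| in size)
assert worst > 0
```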
There are some results in the literature that prove this monotonicity property for fairly small t but I don't recall offhand how small. – Matt Young Feb 24 2012 at 3:44
In fact the monotonicity property holds for $|t|>5/4$ but fails for $|t|\leq 1$. See math.ca/10.4153/CMB-2010-107-8 – GH Feb 25 2012 at 0:59
Just to point out that there are very good approximations to complex zeros off your line for $$\Gamma(s)-\Gamma(1-s) \qquad(1)$$
At $\rho \approx -1.69711183621729718874218687438 - 0.305228379993226071272967719419 i$ (1) appears to vanish while $\Gamma(\rho) \approx 1.4648039 + 0.3642699441 i$
Root finding with better precision converges to $\rho$ while (1) still appears to vanish in both sage and gp/pari (modulo bugs).
Checked to precision $5000$ digits and (1) still appears to vanish.
Here is $\rho$ with $100$ digits of precision:
````-1.697111836217297188742186874382163077146364585981726518217373889827452772242797069678994954785699956 - 0.3052283799932260712729677194188512919331197338088909477524842921187943642970297308885952936796125572*I
````
... for $\Gamma(s)+\Gamma(1-s)$, an approximate zero appears at $\rho \approx -0.60940537628997711023 - 0.82913081575572747216 i$, checked to $5000$ digits of precision.
With 100 digits:
```` -0.6094053762899771102337308158313839002012166649163876907688596366808893391382113824494098816671945331 - 0.8291308157557274721587141536678087800797120641344787653174391388417832472543392187032283839972409848*I
````
Edit: In comments juan suggested using an x-ray to investigate the zeros.
The primary reference for x-rays I found is "X-Ray of Riemann zeta-function" by J. Arias-de-Reyna.
AFAICT x-rays are the plots of Re(f(s))=0 and Im(f(s))=0; the zeros are their intersections.
The x-ray and juan's comments suggest the above quadruples of zeros are indeed zeros off $\frac12$, and possibly there are no more complex zeros off the line.
Here is the x-ray of $\Gamma(s)-\Gamma(1-s)$. Blue is $\Re(\Gamma(s)-\Gamma(1-s))=0$ and red is $\Im(\Gamma(s)-\Gamma(1-s))=0$.
@Joro, you are the "Master of the Counter Example" :-) As you pointed out, the zeros of $\Gamma(s) \pm \Gamma(1-s)$ are very small, so precision of the calculation can be an issue, however with 5000 digits accuracy, your two counter examples could also easily fall in the category: (...)this reduces the claim to z=s+it with |s|≤15 and |t|≤4 where the claim can be checked directly(...). If your counter examples are correct, then my only escape is to restrict the claim to the critical strip only (similar to the $\zeta(s)$ and $\zeta^{(k)}(s)$ equivalents). – Agno Apr 9 2012 at 11:04
We have
$\Gamma(1/2+it)=\sqrt{\pi/\cosh(\pi t)}\exp[i(2 \vartheta(t)+t \log(2\pi)+\arctan(\tanh(\pi t/2)))]$
where $\vartheta(t)$ is the Riemann-Siegel theta function. Hence the zeros on the critical line have ordinates at the zeros of the cosine or sine of the real function
$2 \vartheta(t)+t \log(2\pi)+\arctan(\tanh(\pi t/2))$
But there are real zeros, for example there is one at $s = 4.0260426340124070065475\dots$
X-ray:
juan, how do you explain the approximations of zeros in my answer with precision 100 checked in pari, sage and maple? Have I misunderstood the question? – joro Apr 9 2012 at 9:26
With mpmath I check also this zero. I think now that my parenthesis "(with some effort)" contains an error. :-/ What it is clear is that the real zeros are those of the function given. But the behavior of this function for t complex is really not simple. I will try to do an X-ray of this function. Later we will try to post it. If I know how to do it. – juan Apr 9 2012 at 19:39
Apparently I can not post a plot here. The x-ray leaves little doubt that the zeros of $\Gamma(s)-\Gamma(1-s)$ are the ones with real part $1/2$ computed in this question, plus the complex zero at $-1.69-0.30 i$, its complex conjugate, its reflection across the critical line at $2.69+0.30i$ and that point's conjugate, and then the real zeros, one at $0.5$. The other real zeros are best obtained from a real plot of the function. In the x-ray these zeros, which are very near the poles at $4$, $5$, $\dots$, cannot be seen since they are contained in very short lines. – juan Apr 10 2012 at 16:11
Of course the real zeros are symmetric with respect to 0.5 so that there are zeros near $-3$, $-4$, $\dots$ – juan Apr 10 2012 at 16:24
juan, you can post images (unless perhaps you don't have enough reputation). The format is HTML, i.e. write <img src="server/file.gif"> in your answer. The x-rays I found are at drememi.ludost.net/gamma1.png and drememi.ludost.net/gamma2.png – joro Apr 11 2012 at 5:19
I would like to expand on Guild of Pepperers's answer by noting that the zeros are essentially uniformly spaced and may easily be approximated to a high degree of accuracy. Using Stirling approximation, I obtained the formula $$\Gamma\left(\frac12+it\right) = \sqrt{\frac{2\pi}{1+e^{-2\pi|t|}}}\exp\left(-\frac\pi2|t|+i(t\log|t|-t+\varepsilon(t))\right),$$ valid for real $t$, where the error $\varepsilon(t)$ is an odd, bounded, real-valued function asymptotically equal to $\frac{1}{24t}$. (Indeed, $\varepsilon(t)$ has asymptotic and convergent expansions coming from the asymptotic and convergent versions of Stirling approximation, respectively.) We then have, for $s = \frac12+it$ on the critical line, $$\Gamma(s)+\Gamma(1-s) = 2\sqrt{\frac{2\pi}{1+e^{-2\pi|t|}}}e^{-\frac\pi2|t|}\cos\left(t\log|t|-t+\varepsilon(t)\right),$$ $$\Gamma(s)-\Gamma(1-s) = 2\sqrt{\frac{2\pi}{1+e^{-2\pi|t|}}}e^{-\frac\pi2|t|}\sin\left(t\log|t|-t+\varepsilon(t)\right).$$ One may show by means fair or foul that $t\log|t|-t+\varepsilon(t)$ is monotonically increasing for $|t|\geq1.05$, is bounded between $-0.96$ and $0.96$ for $|t|<1.05$, and is only zero when t = 0. Therefore, the zeros of $\Gamma(s)+\Gamma(1-s)$ on the critical line occur, with multiplicity one, very near those $t$ for which $t\log|t|-t$ is an odd integer multiple of $\frac{\pi}{2}$, and similarly for $\Gamma(s)-\Gamma(1-s)$ and the even integer multiples of $\frac{\pi}{2}$.
It's interesting that the number of zeros up to a given height $T$ is of the same order of magnitude, $T \log(T)$, as for the Riemann zeta function, but that these zeros have (essentially) uniform spacings rather than GUE spacings.
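As a quick check of this description (a sketch, not part of the answer), one can solve $t\log t - t + \varepsilon(t) = k\pi/2$ numerically with $\varepsilon(t)$ replaced by its leading term $1/(24t)$; even $k$ should reproduce the imaginary parts listed for $\Gamma(s)-\Gamma(1-s)$ and odd $k$ those for $\Gamma(s)+\Gamma(1-s)$.

```python
# Sketch: recover the critical-line zero ordinates from the phase condition
# t*log(t) - t + eps(t) = k*pi/2, approximating eps(t) by 1/(24*t).
from mpmath import mp, log, pi, findroot

mp.dps = 20

def phase(t):
    return t * log(t) - t + 1 / (24 * t)

guesses = [2.7, 4.0, 5.0, 6.0, 6.8, 7.6]   # rough starting points
for k, t0 in enumerate(guesses):
    t = findroot(lambda x, k=k: phase(x) - k * pi / 2, t0)
    kind = "Gamma(s)-Gamma(1-s)" if k % 2 == 0 else "Gamma(s)+Gamma(1-s)"
    print(k, kind, t)
# Expected ordinates: roughly 2.70, 4.01, 5.05, 5.97, 6.82, 7.62, matching the
# values quoted in the question.
```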
Given that $\Gamma(s)$ and $\Gamma(1-s)$ are complex conjugates when $\Re(s)=1/2$, it is not surprising that $\Gamma(s)+\theta\Gamma(1-s)$ has an infinitude of zeros on the line $\Re(s)=1/2$, as long as $|\theta|=1$. The monotonicity argument given in the first answer then shows that there are no other zeros with $0<\Re(s)<1$. With the possible exception when the imaginary part of $s$ is small, the zeros for two different $\theta$ should interlace (if $\theta$ goes around the unit circle once, a zero is carried to an adjacent zero).
This is a continuation of the argument above, and it completes the argument.
Let $C_n$ denote the rectangle with vertices $[n \pm 1/2, \pm 4 I]$ for a positive integer $n$. We have the following inequalities for $z \in C_n$ and $n \ge 15$: $$|\sin(\pi z)| \ge 1, \quad z \in C_n.$$ $$|\Gamma(z)| \ge \frac{1}{2} \Gamma(n - 1/2),$$ $$|\Gamma(1-z)| \le \frac{\pi}{\Gamma(n - 1/2)} \le 1,$$ $$|\psi(1-z)|, |\psi(z)| \le 2 \log(n).$$
The first is easy, the second follows from Stirling's formula (this requires $n$ to be big enough, and also requires $z$ to have imaginary part at most $4$), the third follows from the previous two by the reflection formula for $\Gamma(z)$, the last follows by induction and by the formula $\psi(z+1) = \psi(z) + 1/z$. It follows that $$\left| \frac{1}{2 \pi i} \oint_{C_n} \frac{\Gamma'(z)}{\Gamma(z)} - \frac{d/dz (\Gamma(z) + \theta \cdot \Gamma(1-z))}{\Gamma(z) + \theta\cdot \Gamma(1-z)} \right|$$ $$= \left| \frac{1}{2 \pi i} \oint_{C_n} \frac{\theta \Gamma(1-z) (\psi(1-z) + \psi(z))} {\Gamma(z) + \theta \cdot \Gamma(1-z)} \right|$$ $$\le \frac{8 |\theta| \cdot \log(n) \pi}{2 \pi \cdot \Gamma(n - 1/2)} \oint_{C_n} \frac{1} {|\Gamma(z) + \theta \cdot \Gamma(1-z)|}$$ $$\le \frac{8 |\theta| \cdot \log(n) \pi}{2 \pi \cdot \Gamma(n - 1/2)} \cdot \frac{1}{1/2 \Gamma(n - 1/2) + 1} \ll 1,$$ where $\theta = \pm 1$ (or anything small) and $n \ge 15$, where the final inequality holds by a huuuge margin. It follows that $\Gamma(z) + \theta \cdot\Gamma(1-z)$ and $\Gamma(z)$ have the same number of zeros minus the number of poles in $C_n$. Since $\Gamma(z)$ has no zeros and poles in $C_n$, it follows that $\Gamma(z) + \theta\cdot\Gamma(1-z)$ has the same number of zeros and poles. It has exactly one pole, and thus exactly one zero. If $\theta = \pm 1$ (and so in particular is real), by the Schwarz reflection principle, this zero is forced to be real. By symmetry, the same argument applies in the region $z = s + i t$ with $|t| \le 4$ and $s \le -15$. Combined with the above argument, this reduces the claim to $z = s + i t$ with $|s| \le 15$ and $|t| \le 4$ where the claim can be checked directly.
Hence all the zeros are either in $\mathbf{R}$, or lie on the line $1/2 + i \mathbf{R}$.
EDIT To clarify, I didn't actually check that there were no "exceptional" zeros in the box $\pm 15 \pm 4 I$, since I presumed that the original poster had done so. If $F(z) = \Gamma(z) - \Gamma(1-z)$, then computing the integral $$\frac{1}{2 \pi i} \oint \frac{F'(z)}{F(z)} dz$$ around that box, one obtains (numerically, and thus exactly) $1$. There are (assuming the OP at least computed the critical line zeros correctly) $2$ zeros in that range on the critical line. Along the real line in that range, there are $30$ poles and $25$ zeros. This means that there must be $1 + 30 - 25 = 6$ unaccounted-for zeros. For such a zero $\rho$ off the line, by symmetry one also has $\overline{\rho}$, $1 - \rho$ and $1 - \overline{\rho}$ as zeros. Hence there must be either $1$ or $3$ pairs of zeros on the critical line, and either $1$ or $0$ quadruples of roots off the line. Varying the parameters of the integral, one can confirm there is a zero with $\rho \sim 2.7 + 0.3 i$, which is one of the four conjugates of the root found by joro. A similar argument applies for $\Gamma(z)+\Gamma(1-z)$. Hence:
Any zero of $\Gamma(z) - \Gamma(1-z)$ is either in $\mathbf{R}$, on the line $1/2 + i \mathbf{R}$, or is one of the four exceptional zeros $\{\rho,1-\rho,\overline{\rho},1-\overline{\rho}\}$. A similar calculation implies the same for $\Gamma(z) + \Gamma(1-z)$, except now with an exceptional set $\{\mu,1-\mu,\overline{\mu},1-\overline{\mu}\}$.
Wonderful. You can simplify and strengthen the proof by using a generalized Rouché's theorem. This tells us that $\Gamma(z)+\theta\cdot\Gamma(1-z)$ and $\Gamma(z)$ have the same number of zeros minus the number of poles in $C_n$ when $|\Gamma(1-z)|<|\Gamma(z)|$ holds on the boundary. This is equivalent to $\pi/|\sin(\pi z)|<|\Gamma(z)|^2$, hence it suffices to have $\pi<|\Gamma(z)|^2$ on $\partial C_n$. It seems that the last inequality holds for $n\geq 5$. – GH Feb 25 2012 at 0:15
Very impressive, although I honestly have to say that fully understanding the proof is beyond my math skills. Still got the goosebumps from reading it though :-) The proof does induce two follow up questions: 1) could the function $\Gamma(s)^2 - \Gamma(1-s)^2$ be uniquely represented by an infinite product involving its 'complex' zeros (via Weierstrass factorization)? 2) is there a function for locating the zeros (similar to $Z(t)$ for the Riemann non trivial zeros)? Thanks. – Agno Feb 25 2012 at 0:33
@Agno: Rouché's theorem is contained in basic textbooks, and this is all you need (actually a slight generalization of it). Using this you can shorten the above proof to a few lines (e.g. no integrals), see my comment above. – GH Feb 25 2012 at 0:39
@GH: Rouché? Touché! – Lavender Honey Feb 25 2012 at 3:19
@Agno: the logarithmic derivative is the tool to count zeros and it is always available. – Marc Palm Mar 6 2012 at 18:30
http://mathdl.maa.org/mathDL/23/?pa=content&sa=viewDocument&nodeId=3342&pf=1
# Trisecting a Line Segment (With World Record Efficiency!)
by Robert Styer (Villanova Univ.)
### Abstract
The book Proofs without Words is a collection of many short simple proofs, including a construction to trisect a line segment. This short construction naturally leads one to wonder what the shortest construction is (that is, the construction using the least number of circles and lines). We find the shortest trisection of a line segment, then the shortest construction using only circles (Mohr-Mascheroni) and using only lines (Poncelet-Steiner).
This article uses Flash animations to illustrate many of the constructions. You will need a reasonably current version of the Flash Player plug-in installed for your browser in order to view the animations.
## Trisecting a Line
### Introduction
In high school geometry, one of the most useful ruler and compass constructions is bisecting a line segment. This beautiful construction can be done with only two circles and one line.
Euclid's very first proposition contains the essence of this construction, though he does not explicitly bisect a segment until the tenth proposition.
But what if we wanted to trisect the line segment? Given a line with segment $$AB$$, construct a point $$F$$ on the segment so that $$AF = (1/3) AB$$, using the classical straightedge and compass.
### Trisecting with Two Circles and Four Lines
Scott Coble found a clever construction, reprinted in the wonderful book Proofs without Words [10]. (Here is a proof why it works.)
This construction uses two circles and four additional lines. Certainly two is the fewest number of circles that is possible, since one needs two circles to construct a point not on the given line through $$AB$$.
Can we do better than two circles and four lines?
### Trisecting with Two Circles and Three Lines
This elegant construction takes two circles and only three additional lines. (Here is a proof why it works.)
### Trisecting with Three Circles and Two Lines
What if one allows a third circle?
Hartshorne in his wonderful textbook Companion to Euclid [3] says the average good geometer can trisect a segment with a "par" of 5 (average of five steps). Here is a version of the classical five step construction.
We have used three circles and two additional lines, which is Hartshorne's "par 5." (Here is a proof.)
### Trisecting with Four Circles
If we wanted to use circles and no additional lines, how many would it take?
Only four circles, one under "par"!
Here is a proof. This construction can be generalized to construct a segment of length $$1/n$$ by replacing the circle of radius 3 by one of radius $$n$$.
Four circles is in fact the best possible, since three circles or three circles and one line cannot suffice (details). Nor are two circles and two additional lines enough to trisect the segment (details).
By the way, it is impossible to trisect an arbitrary angle with unmarked ruler and compass.
(See also the Mohr-Mascheroni and Poncelet-Steiner constructions.)
## Proofs of Constructions with Five or Six Steps
### Picturing the Proof of the Coble construction (Two Circles and Four Lines)
The following diagrams outline the geometric reasoning behind the Coble construction.
Begin with the original diagram and add several auxiliary circles and lines. Note the three congruent rhombuses, $$BHCA$$, $$GBAD$$, and $$JGDI$$, with $$E$$ at the intersection of the diagonals of the middle rhombus $$DGBA$$, which is known to be at the midpoint of each diagonal. Note the similar triangles $$ACF$$ and $$ICJ$$.
Since $$|CA| = 1$$, $$|CI| = 3$$, and $$|IJ| =1$$, ratios of similar triangles shows that $$|AF| = 1/3$$.
A referee suggested a nice alternative outline of a proof. The triangles $$ADG$$ and $$ABG$$ are both equilateral triangles. Thus, the measure of angle $$DAE$$ equals the measure of angle $$BAE$$; by Side-Angle-Side, the triangles $$DAE$$ and $$BAE$$ are congruent. Then the length of the segments $$DE$$ and $$BE$$ are equal, and also the length of the segments $$CA$$ and $$AD$$ are equal, so the segments $$AB$$ and $$CE$$ are medians of the triangle $$BCD$$. It is now a well known fact that the intersection $$F$$ of the medians is at the centroid which is one-third of the distance along the median $$AB$$.
(Return to the Coble construction)
### Picturing the Proof of the Two Circles and Three Lines construction
Begin with the original diagram, add several auxiliary circles and lines, then note the similar triangles $$ACF$$ and $$GCH$$. Since $$|AC| = 1$$, $$|GC| = 3$$, and $$|GH| =1$$, we see that $$|AF| = 1/3$$.
Of course, the picture hides the hardest part of the proof: showing that the circle with center B and the two lines $$CH$$ and $$DJ$$ go through the point $$E$$. Possibly the easiest way to see this is to use analytic geometry. We will find the point $$E_2$$ on the intersection of the lines $$CH$$ and $$DJ$$, and then show this point is exactly distance one from $$B$$, that is, the point $$E_2$$ is the same as the point $$E$$. For convenience, we will make $$B$$ the origin, then one can easily verify the coordinates of these points:
• $$A\quad (-1,0)$$
• $$B\quad (0,0)$$
• $$C\quad (-1/2, \sqrt{3}/2)$$
• $$D\quad (-3/2, -\sqrt{3}/2)$$
• $$H\quad (-1, -\sqrt{3} )$$
• $$J\quad (1,0)$$.
Then the slope of the line $$DJ$$ is $$\sqrt{3}/5$$ and the equation is $$y = (\sqrt{3}/5) x - (\sqrt{3}/5)$$. The slope of the line $$CH$$ is $$3 \sqrt{3}$$ and the equation is $$y = 3 \sqrt{3} x + 2 \sqrt{3}$$. Some algebra shows that the point $$E_2$$ where these two lines intersect has $$x = -11/14$$, so $$y = -5 \sqrt{3} / 14$$. The point $$E_2$$ $$( -11/14, -5 \sqrt{3}/ 14 )$$ satisfies $$(11/14)^2 + ( 5 \sqrt{3} / 14 )^2 = ( 121 + 75)/ 14^2 = 1$$, thus showing it is the point $$E$$ from our construction.
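For readers who want to double-check the coordinate computation, here is a small sketch (not part of the article) that redoes it in exact arithmetic with sympy:

```python
# Sketch: verify in exact arithmetic that the lines CH and DJ meet at a
# point at distance 1 from B (the origin), as claimed above.
from sympy import Rational, sqrt, Point, Line, simplify

B = Point(0, 0)
C = Point(Rational(-1, 2), sqrt(3) / 2)
D = Point(Rational(-3, 2), -sqrt(3) / 2)
H = Point(-1, -sqrt(3))
J = Point(1, 0)

(E2,) = Line(C, H).intersection(Line(D, J))
print(E2)                                  # Point2D(-11/14, -5*sqrt(3)/14)
print(simplify(E2.distance(B)))            # 1, so E2 lies on the circle about B
```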
(Return to the 2 circle 3 line construction)
### Picturing the Proof of the Three Circles and Two Lines construction
Begin with the original diagram, add a few auxiliary circles and lines, then note the similar triangles $$ADF$$ and $$GDE$$. Since $$|AD|=1$$, $$|GD|=3$$, and $$|GE| =1$$, we see that $$|AF| = 1/3$$.
Elegant! Beautiful! Satisfying!
(Return to the 3 circle 2 line construction)
## Proofs of Constructions with Four Steps
### Picturing the Proof of the Four Circles Construction
Begin with the original diagram and add a few auxiliary lines. Angle $$ADG$$ is a right angle since it is an inscribed angle subtending a diameter. Now $$GDA$$ and $$DJA$$ are similar right triangles. By our construction, $$|AG| = 6$$ and $$|AD| = 1$$, and by similar triangles (or trigonometry) $$|AG| / |AD| = |AD| / |AJ|$$. Thus, $$|AJ| = 1 / |AG| = 1/6$$. Since $$DJ$$ is a perpendicular bisector of the chord $$AF$$, we have $$|AF| = 2 |AJ|$$ and so $$|AF| = 1/3$$.
(Return to the 4 circles construction)
### Proof that Three Circles and One Additional Line is Not Enough
Beginning with points $$A$$ and $$B$$ and the line through them, we construct the circle with center at $$B$$ and radius to $$A$$. There are only two ways, up to symmetry, to construct a second circle.
The following outlines all the possibilities for three circles and a line. We use symmetry whenever possible; for instance, we only need to consider third circles with centers in the "first quadrant" ($$E$$, $$D$$ or $$C$$) for the case of the two congruent circles. We only draw lines from points in the top half to points in the bottom half since a line through an existing point on the $$x$$-axis does not create a new point on the $$x$$-axis.
We assume that $$B$$ is at the origin and that $$A$$ is at -1 and $$C$$ is at 1 on the $$x$$-axis. Since we could equally well consider $$A$$ or $$C$$ to be the origin, we only consider the constructible values on the $$x$$-axis modulo 1, hence only need to list values between -0.5 and 0.5. Since we can flip the diagrams, we also can consider the constructible values on the x-axis modulo the +/- sign, hence we only need to list values between 0 and 0.5. For instance, in the third 3-circle-1-line construction below which adds the circle with radius $$DB$$, the point $$M$$ is actually at $$x =$$ 2.6180339887 which modulo 1 is -0.3819660113 which modulo the +/- sign is 0.3819660113. This happens to be the value of $$K$$ as well.
(Return to 3 Circles 2 Lines Construction)
### Proof that Two Circles and Two Lines are Not Enough
We first consider the two ways we can construct two circles. The second case does not allow any new lines to be drawn so may be eliminated. For the case with two congruent circles, by symmetry we only need to consider lines through the "first quadrant": through the point $$C$$, $$D$$ or $$E$$. The only possibilities are the blue lines. The only new points created are $$G$$, $$H$$, $$I$$, $$J$$. The only new lines that go through one of these new points and the original points off the $$x$$-axis are the two red lines which are parallel to the $$x$$-axis. In any case, the only constructible points on the $$x$$-axis are integral or half-integral.
## Mohr-Mascheroni and Poncelet-Steiner Constructions
### The Mohr-Mascheroni Construction
Someone might object that we began with the line through $$A$$ and $$B$$. What if we do not want to begin with this line? So our goal is to find a point $$F$$ such that the distance from $$A$$ to $$F$$ is one third that of $$A$$ to $$B$$ without using any line.
Mascheroni (1797) showed that all Euclidean constructions could be done using just a compass, without a ruler. More than a century later, a work by Georg Mohr from 1672 was discovered proving the same result. (details in [2])
Here is a Mohr-Mascheroni (uses no lines) trisection construction with seven circles. (Here is a proof that this construction works. Seven circles is in fact the least number possible: see the details.)
Beautiful!
### The Poncelet-Steiner Construction
I personally love circles, but... What if we like straight lines and really do not like circles?
In the 1800s, Poncelet and Steiner showed that all Euclidean constructions can be done with a ruler only, provided one is given a single circle, its center, and a couple of suitable points off the circle. Milos Tatarevic (emails dated 7 June 2003) found a construction using twelve lines. Here is his construction; he notes the key point is constructing the parallel line $$A'C'$$ and the midpoint $$E$$. (We cannot prove this is the shortest such construction.)
Martin [9] points out that when one wishes to construct a rational point, then one need not use the circle. Following his convention, we begin with the points $$(0,1)$$, $$(1,0)$$, $$(0,2)$$, $$(2,0)$$. Let the point $$A$$ be $$(1,0)$$ and the point $$B$$ be $$(2,0)$$ so we want to construct the trisecting point $$(4/3, 0)$$. Of course we easily construct the origin $$(0,0)$$.
Here is a Poncelet-Steiner construction, using Martin's starting convention, of the trisecting point $$(4/3, 0)$$ using eight lines. Here is a proof that it trisects, illustrated with a slightly more general starting convention. This is the least number of lines needed: see the details.
For more information on trisections and geometric constructions, see the annotated reference page.
### Picturing the Proof of the Mohr-Mascheroni Construction
Begin with the original diagram and add a few auxiliary lines. The angle $$CDG$$ is a right angle since this angle subtends a diameter of the circle. Then $$CDG$$ and $$CHD$$ are similar right triangles. By our construction, $$|CD| = 2$$ and $$|CG| = 6$$, and by similar triangles (or trigonometry) $$|CH| / |CD| = |CD| / |CG|$$ so $$|CH| = 2/3$$. Since $$DH$$ is a perpendicular bisector of the chord $$CF$$, we have $$|CF| = 4/3$$. Since $$|CA| = 1$$, we have $$|AF| = |CF| - |CA| = 1/3$$.
(Return to the Mohr-Mascheroni construction.)
### Proof that Six Circles are Not Enough for Mohr-Mascheroni
We have seen a trisecting construction using seven circles.
We prove six circles is not enough by using a Maple program that creates all points that can be constructed with a given number of circles (Maple file, pdf). Essentially, we construct all four-circle constructions (there are 14 up to symmetry) and note that none of these go through the point $$(1/3, 0)$$ or $$(-1/3, 0)$$. If a six circle construction exists, the next two circles added must go through the desired trisecting point. It is then easy to verify that no fifth circle goes through the desired $$(1/3,0)$$ or $$(-1/3,0)$$ point, hence one requires at least seven circles.
We summarize by giving the number of points generated by N circles by our Mohr-Mascheroni construction.
For convenience, we assume we begin with the two points, $$(1,0)$$ and $$(-1,0)$$. By symmetry, we need only list those points in the first quadrant. Here are the number of new points in the first quadrant generated by $$N$$ circles:
• $$N =$$ 2 circles: 2 points
• $$N =$$ 3 circles: 2 new points
• $$N =$$ 4 circles: 11 new points
• $$N =$$ 5 circles: 300 new points
A list of all points that can be constructed by 2, 3, 4, or 5 circles is in this Maple file (pdf).
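The Maple worksheet itself is not reproduced here, but the geometric core of such a search is just circle-circle intersection plus enumeration over the allowable centers and radii. The following Python sketch shows only that core; it makes no attempt to reproduce the article's symmetry reduction or its exact point counts.

```python
# Sketch of the core of a compass-only (Mohr-Mascheroni) enumeration: a new
# circle must have a constructed center and a radius equal to the distance
# between two constructed points; new points are circle-circle intersections.
from itertools import combinations, product
from math import hypot, sqrt

def circle_intersections(c1, r1, c2, r2, eps=1e-12):
    """Intersection points of two circles (empty list if they do not meet)."""
    (x1, y1), (x2, y2) = c1, c2
    d = hypot(x2 - x1, y2 - y1)
    if d < eps or d > r1 + r2 + eps or d < abs(r1 - r2) - eps:
        return []
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)
    h = sqrt(max(r1 ** 2 - a ** 2, 0.0))
    mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    return [(mx + h * (y2 - y1) / d, my - h * (x2 - x1) / d),
            (mx - h * (y2 - y1) / d, my + h * (x2 - x1) / d)]

def candidate_circles(points):
    """Circles with a constructed center and a radius equal to some distance
    between two constructed points."""
    radii = {hypot(p[0] - q[0], p[1] - q[1]) for p, q in combinations(points, 2)}
    return [(c, r) for c, r in product(points, sorted(radii))]

def new_points(points, drawn):
    """Points obtainable by intersecting one more circle with those already drawn."""
    found = set()
    for cand_c, cand_r in candidate_circles(points):
        for old_c, old_r in drawn:
            for p in circle_intersections(cand_c, cand_r, old_c, old_r):
                found.add((round(p[0], 9), round(p[1], 9)))
    return found - {(round(p[0], 9), round(p[1], 9)) for p in points}

# Start with A = (-1, 0) and B = (1, 0); the first two circles are centered at
# A and B with radius |AB| = 2, and they meet at (0, +/- sqrt(3)).
A, B = (-1.0, 0.0), (1.0, 0.0)
print(circle_intersections(A, 2.0, B, 2.0))
pts = [A, B, (0.0, sqrt(3)), (0.0, -sqrt(3))]
print(len(new_points(pts, [(A, 2.0), (B, 2.0)])))   # what a third circle could add
```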
(Return to Mohr-Mascheroni construction.)
### Picturing the Proof of Martin's Poncelet-Steiner Construction
In order to see the essence of the proof, we draw a more general picture: the key assumptions are that $$|OA| = |AB|$$ and that $$|OG| = |GH|$$.
(Return to the Poncelet-Steiner construction.)
### Proof that Seven Lines are Not Enough For Poncelet-Steiner
Beginning with Martin's four points, we have seen a trisecting construction using eight lines.
The proof that seven lines are not enough is contained in this Maple file (pdf). This worksheet generates all points that can be constructed by line-only constructions up to ten lines. A list of all points that can be constructed by two through ten lines is given in this Maple file (pdf). Note that seven lines can generate the point $$(2/3, 0)$$, so seven lines do trisect a line segment, though not the segment $$AB$$.
We summarize by giving the number of points generated by $$N$$ lines with Martin's starting configuration.
We begin with Martin's four points $$(1,0)$$, $$(2,0)$$, $$(0,1)$$ and $$(0,2)$$. If we use $$N$$ lines, we have $$M$$ possible new points where
• $$N=$$ 2 lines: $$M =$$ 2 new points
• $$N=$$ 3, 4, 5 lines: no additional points
• $$N=$$ 6 lines: $$M =$$ 2 new points
• $$N =$$ 7: $$M =$$ 6
• $$N=$$ 8: $$M =$$ 11
• $$N=$$ 9: $$M =$$ 70
• $$N =$$ 10: $$M =$$ 309
(Return to Poncelet-Steiner construction.)
## Appendix: Trisecting Angles
A very famous problem is trisecting arbitrary angles. This problem occupied mathematicians for thousands of years, and although in 1837 Pierre Wantzel proved that arbitrary angles cannot be trisected by Euclidean methods, people still keep trying (see references, especially Dudley's Budget of Trisections [1]).
If you change the rules and allow something other than a straightedge and compass, you can often trisect angles. The most famous method is Archimedes', who used a straightedge with two marked points on it; see the Geometry Forum's page at http://www.geom.uiuc.edu/docs/forum/angtri/. Origami paper folding is another elegant way to trisect an angle.
Here are a few of many web pages discussing angle trisection:
MacTutor History of Mathematics
http://www-history.mcs.st-and.ac.uk/history/HistTopics/Trisecting_an_angle.html
MathWorld
http://mathworld.wolfram.com/AngleTrisection.html
Math Forum's Ask Dr. Math FAQ has a link to an outstanding web page by Jim Loy:
http://www.jimloy.com/geometry/trisect.htm
The Mathematical Atlas
http://www.math.niu.edu/~rusin/known-math/index/51M15.html
One of the earliest methods, the Quadratrix of Hippias
http://www.perseus.tufts.edu/GreekScience/Students/Tim/Trisection.page.html
## Annotated References
[1] Dudley, Underwood, A Budget of Trisections, Springer Verlag, 1987.
This delightful book begins with a historical overview of angle trisections, using devices other than a straightedge and compass, such as Archimedes' angle trisection using a compass and a straightedge with two marks. He then describes the personalities of some would-be angle trisectors, then details dozens of angle trisection attempts.
[2] Eves, Howard, A Survey of Geometry, Allyn and Bacon, 1963.
This encyclopedic text is chock full of fascinating tidbits. In particular, Section 4.4 (pp. 198-204) discuss the Mohr-Mascheroni construction theorem which states that all Euclidean constructions could be carried out with a compass alone and no straightedge. Section 4.5 (pp.204-210) details the Poncelet-Steiner theorem, which shows that a straightedge along with one given circle and its center is sufficient to carry out any desired Euclidean construction. Section 4.6 (pp. 210-217) discusses other construction results, and mentions Lemoine's 1907 geometrography, a counting scheme for determining the complexity of a construction. Lemoine generally counts three operations where Hartshorne's construction counts one "step." We have followed Hartshorne's simpler counting method.
[3] Hartshorne, Robin, Companion to Euclid, American Mathematical Society, Berkeley Mathematics Lecture Notes, Volume 9, 1997
[4] Hartshorne, Robin, Geometry: Euclid and Beyond, Springer, Undergraduate Texts in Mathematics, 2000.
The 2000 title is an update of the 1997 version. Pages 20-22 in the newer version discuss the number of steps Euclid used for a proof versus the number needed for a mere construction. The homework following this section contains several constructions with an average or "par" estimate of how many steps an experienced geometer might use. Page 25, problem 2.14, asks for both of the trisection points of a segment and rates it "par 6". Of course, we are only asking for the left trisection point, so our equivalent is "par 5." Fascinatingly, in the 1997 version, the same problem 2.14, only now on page 23, says par=9. Either experienced geometers improved a lot in three years, or the original textbook had a typo.
[5] Heath, Sir Thomas, The Thirteen Books of Euclid's Elements with introduction and commentary, Dover Publications, 1956.
The bisection construction pictured is very close to that in Euclid's first proposition. Euclid actually does not prove the bisection construction until Proposition 10. (The relevant Propositions 1 and 10 are pages 241 and 267-268.) The bisection construction we have pictured is said to be due to Apollonius. Euclid clearly is more interested in the logical development of the proofs than in the shortest constructions.
[6] Hull, Thomas, http://kahuna.merrimack.edu/~thull/origamimath.html
Tom Hull has a fascinating set of origami geometric constructions, including how to trisect an angle. He has numerous books on origami and math, such as Project Origami: Activities for Exploring Mathematics.
[7] Lang, Robert J., Origami and Geometric Constructions, pdf file available at http://www.langorigami.com/science/hha/origami_constructions.pdf
Robert Lang is a physicist who is an expert on mathematics and origami. In particular, he summarizes the best set of mathematical axioms for paper-folding, and discusses the most efficient ways to trisect a line segment. His web site http://www.langorigami.com/ has a beautiful collection of origami objects he has folded.
[8] Lemoine, Emile, Geometrographie, ou Art des Constructions Geometriques, Scientia, Phys.-Math. no. 18, Paris, February 1902.
Lemoine invented a method to measure the complexity of geometric constructions. His method has four parameters; one gives the required number of lines, a second the number of circles, while two others count the moves needed to place the ruler and the compass. Trisection is not explicitly mentioned in this monograph, although pages 34-36 give a more general construction, a corollary of which is essentially our third trisection method (see Reusch and Ringenberg below).
[9] Martin, George, Geometric Constructions, Springer, 1998.
This nice undergraduate geometry textbook explicitly develops the Mohr-Mascheroni and the Poncelet-Steiner constructions. We use his convention for "ruler points", pages 69-82, for the Poncelet-Steiner type of constructions.
[10] Nelson, Roger, Proofs Without Words, Mathematical Association of America, 1993.
Our first trisection is taken from page 13, attributed to Scott Coble.
[11] Reusch, J., Planimetrische Konstruktionen in Geometrographischer Ausfuhrung, Druck and Verlag von B. G. Teubner, Leipzig and Berlin, 1904.
Reusch expands on Lemoine's monograph with many diagrams and even more explicit analysis of the basic geometric constructions. Pages 17-20 explicitly deal with trisections; he gives three different constructions. The one he calls classical takes 4 lines and 6 circles; his second is our third one, a la Hartshorne, that we are calling classical. Reusch's third construction uses four circles and a line (another "par" 5 construction). Here are scans of the relevant pages: V, VI, 17, 18, 19, 20
[12] Ringenberg, Lawrence, Informal Geometry, John Wiley and Sons, 1967.
This standard text contains the typical method of trisecting a segment. Here is his version of the classical construction, page 139. In our third construction, we use two circles to construct his point C and draw the line AC, and then a third circle constructs C2. We do not need to draw C3 nor BC3 since the geometry is such that our line C2D is already parallel to BC3. Thus, this slick version of the classical construction takes only three circles and two additional lines.
Our second and fourth trisection constructions do not seem to appear on the web or in standard modern texts, nor are our Mohr-Mascheroni and the Poncelet-Steiner trisection constructions explicitly shown anywhere. (The 12 line Poncelet-Steiner construction illustrated is due to Milos Tatarevic, who sent it in a couple emails dated June 7, 2003, and is used by permission.)
The third construction given, using three circles and two lines, is well known. Here is the typical diagram that is used to trisect the segment AB (due to Dr. Math at the Math Forum.) In our third trisection, the two initial circles construct the point C in Dr. Math's and Ringenberg's classical construction.
(ASCII diagram omitted: the classical construction, with auxiliary points C and D on a ray above the segment, points E and F on a ray below it, and the segment AB marked at its two interior division points.)
Here are some trisection constructions that appeared in a Google search.
http://forum.swarthmore.edu/dr.math/problems/cbr.8.13.99.html
http://www.math.niu.edu/~rusin/known-math/98/trisect
http://www.cut-the-knot.com/arithmetic/rational.html#const
Using origami for geometric constructions can delight one for hours: Robert Lang has a nice discussion showing a trisection using four paper folds.
These emails from Milos Tatarevic discuss the general Poncelet-Steiner constructions: he has generalized the 1/3 construction to 1/n with beautiful combinatorics and conjectures how many lines are needed; for instance, constructing 1/7 probably takes 14 lines.
I received an email explaining how carpenters use a square and a ruler to trisect line segments:
Subject:
Date: Sun, 23 Feb 2003 20:36:11 -0500
From: "T. Wilson"
Enjoyed your methods of trisecting a line segment. Now if the criteria include the use of only a compass and an unmarked straightedge, then the method I was taught as an apprentice carpenter would qualify, wouldn't it?
Take line AB. Extend two parallel lines perpendicular to A and B. Take the straightedge and mark off four equidistant points along one edge. Place the straightedge at an angle between the parallel lines so that the first and fourth points coincide with the parallel lines drawn from A and B. With the compass determine the perpendicular distance of each of the three middle points from either parallel line and transfer that distance to the line AB. Voila.
Simple strokes for simple minds (mine, of course, not yours!).
Best wishes, T. Wilson, Richmond, VA
Note that carpenters use a marked ruler, which opens a fascinating world of constructions going beyond Euclidean, in particular, one can use a marked ruler to trisect angles.
I received another email with a beautifully symmetric construction using two circles and four lines:
Subject: Trisecting a line segment
Date: Sun, 3 Aug 2003 12:19:14 -0500
From: "Fred Barnes"
To:
I just stumbled across your wonderful web page on trisecting a line segment. I've included a method, not on your page, using four circles and two lines. See attachment.
Fred Barnes
http://en.wikipedia.org/wiki/Phase_transition
# Phase transition
This diagram shows the nomenclature for the different phase transitions.
A phase transition is the transformation of a thermodynamic system from one phase or state of matter to another.
A phase of a thermodynamic system, like a state of matter, has uniform physical properties.
During a phase transition of a given medium, certain properties of the medium change, often discontinuously, as a result of a change in some external condition, such as temperature or pressure. For example, a liquid may become gas upon heating to the boiling point, resulting in an abrupt change in volume. The value of the external condition at which the transformation occurs is termed the phase transition point.
Phase transitions are common occurrences observed in nature and many engineering techniques exploit certain types of phase transition.
The term is most commonly used to describe transitions between solid, liquid and gaseous states of matter, and, in rare cases, plasma.
## Types of phase transition
Examples of phase transitions include:
• The transitions between the solid, liquid, and gaseous phases of a single component, due to the effects of temperature and/or pressure:
| From \ To | Solid | Liquid | Gas | Plasma |
|-----------|-------|--------|-----|--------|
| Solid | Solid-solid transformation | Melting / fusion | Sublimation | — |
| Liquid | Freezing | — | Boiling / evaporation | — |
| Gas | Deposition | Condensation | — | Ionization |
| Plasma | — | — | Recombination / deionization | — |
• (see also vapor pressure and phase diagram)
A typical phase diagram. The dotted line gives the anomalous behavior of water.
A small piece of rapidly melting argon ice simultaneously shows the transitions from solid to liquid to gas.
• A eutectic transformation, in which a two component single phase liquid is cooled and transforms into two solid phases. The same process, but beginning with a solid instead of a liquid is called a eutectoid transformation.
• A peritectic transformation, in which a two component single phase solid is heated and transforms into a solid phase and a liquid phase.
• A spinodal decomposition, in which a single phase is cooled and separates into two different compositions of that same phase.
• Transition to a mesophase between solid and liquid, such as one of the "liquid crystal" phases.
• The transition between the ferromagnetic and paramagnetic phases of magnetic materials at the Curie point.
• The transition between differently ordered, commensurate or incommensurate, magnetic structures, such as in cerium antimonide.
• The martensitic transformation which occurs as one of the many phase transformations in carbon steel and stands as a model for displacive phase transformations.
• Changes in the crystallographic structure such as between ferrite and austenite of iron.
• Order-disorder transitions such as in alpha-titanium aluminides.
• The emergence of superconductivity in certain metals and ceramics when cooled below a critical temperature.
• The transition between different molecular structures (polymorphs, allotropes or polyamorphs), especially of solids, such as between an amorphous structure and a crystal structure, between two different crystal structures, or between two amorphous structures.
• Quantum condensation of bosonic fluids (Bose-Einstein condensation). The superfluid transition in liquid helium is an example of this.
• The breaking of symmetries in the laws of physics during the early history of the universe as its temperature cooled.
Phase transitions occur when the thermodynamic free energy of a system is non-analytic for some choice of thermodynamic variables (cf. phases). This condition generally stems from the interactions of a large number of particles in a system, and does not appear in systems that are too small.
At the phase transition point (for instance, boiling point) the two phases of a substance, liquid and vapor, have identical free energies and therefore are equally likely to exist. Below the boiling point, the liquid is the more stable state of the two, whereas above the gaseous form is preferred.
It is sometimes possible to change the state of a system diabatically (as opposed to adiabatically) in such a way that it can be brought past a phase transition point without undergoing a phase transition. The resulting state is metastable, i.e., less stable than the phase to which the transition would have occurred, but not unstable either. This occurs in superheating, supercooling, and supersaturation, for example.
## Classifications
### Ehrenfest classification
Paul Ehrenfest classified phase transitions based on the behavior of the thermodynamic free energy as a function of other thermodynamic variables. Under this scheme, phase transitions were labeled by the lowest derivative of the free energy that is discontinuous at the transition. First-order phase transitions exhibit a discontinuity in the first derivative of the free energy with respect to some thermodynamic variable.[1] The various solid/liquid/gas transitions are classified as first-order transitions because they involve a discontinuous change in density, which is the first derivative of the free energy with respect to chemical potential. Second-order phase transitions are continuous in the first derivative (the order parameter, which is the first derivative of the free energy with respect to the external field, is continuous across the transition) but exhibit discontinuity in a second derivative of the free energy.[1] These include the ferromagnetic phase transition in materials such as iron, where the magnetization, which is the first derivative of the free energy with respect to the applied magnetic field strength, increases continuously from zero as the temperature is lowered below the Curie temperature. The magnetic susceptibility, the second derivative of the free energy with the field, changes discontinuously. Under the Ehrenfest classification scheme, there could in principle be third, fourth, and higher-order phase transitions.
Though useful, Ehrenfest's classification has been found to be an inaccurate method of classifying phase transitions, for it does not take into account the case where a derivative of free energy diverges (which is only possible in the thermodynamic limit). For instance, in the ferromagnetic transition, the heat capacity diverges to infinity.
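As a toy illustration of the first-order case (a sketch, not from the article), take two phases whose free energies cross at a transition temperature: the equilibrium free energy is their minimum, which stays continuous, while its first derivative (minus the entropy) jumps at the crossing.

```python
# Toy model of a first-order transition in the Ehrenfest sense: two phases
# with free energies g1, g2 crossing at Tc = 1; G = min(g1, g2) is continuous
# but its first derivative jumps at Tc. The linear forms and the entropy
# values s1, s2 are illustrative assumptions, not physical data.
def G(T, s1=1.0, s2=3.0, Tc=1.0):
    g1 = -s1 * (T - Tc)      # phase 1, entropy s1
    g2 = -s2 * (T - Tc)      # phase 2, entropy s2
    return min(g1, g2)

h = 1e-6
for T in (0.5, 0.999, 1.001, 1.5):
    dGdT = (G(T + h) - G(T - h)) / (2 * h)
    print(T, G(T), dGdT)     # slope -1 below Tc, -3 above: a finite jump
```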
### Modern classifications
In the modern classification scheme, phase transitions are divided into two broad categories, named similarly to the Ehrenfest classes:
First-order phase transitions are those that involve a latent heat. During such a transition, a system either absorbs or releases a fixed (and typically large) amount of energy. During this process, the temperature of the system will stay constant as heat is added: the system is in a "mixed-phase regime" in which some parts of the system have completed the transition and others have not. Familiar examples are the melting of ice or the boiling of water (the water does not instantly turn into vapor, but forms a turbulent mixture of liquid water and vapor bubbles).
Second-order phase transitions are also called continuous phase transitions. They are characterized by a divergent susceptibility, an infinite correlation length, and a power-law decay of correlations near criticality. Examples of second-order phase transitions are the ferromagnetic transition, superconducting transition (for a Type-I superconductor the phase transition is second-order at zero external field and for a Type-II superconductor the phase transition is second-order for both normal state-mixed state and mixed state-superconducting state transitions) and the superfluid transition. Lev Landau gave a phenomenological theory of second order phase transitions.
Several transitions are known as the infinite-order phase transitions. They are continuous but break no symmetries. The most famous example is the Kosterlitz–Thouless transition in the two-dimensional XY model. Many quantum phase transitions in two-dimensional electron gases belong to this class.
The liquid-glass transition is observed in many polymers and other liquids that can be supercooled far below the melting point of the crystalline phase. This is atypical in several respects. It is not a transition between thermodynamic ground states: it is widely believed that the true ground state is always crystalline. Glass is a quenched disorder state, and its entropy, density, and so on, depend on the thermal history. Therefore, the glass transition is primarily a dynamic phenomenon: on cooling a liquid, internal degrees of freedom successively fall out of equilibrium. Some theoretical methods predict an underlying phase transition in the hypothetical limit of infinitely long relaxation times.[2][3] No direct experimental evidence supports the existence of these transitions.
## Characteristic properties
### Critical points
In any system containing liquid and gaseous phases, there exists a special combination of pressure and temperature, known as the critical point, at which the transition between liquid and gas becomes a second-order transition. Near the critical point, the fluid is sufficiently hot and compressed that the distinction between the liquid and gaseous phases is almost non-existent. This is associated with the phenomenon of critical opalescence, a milky appearance of the liquid due to density fluctuations at all possible wavelengths (including those of visible light).
### Order parameters
The order parameter is normally a quantity which is zero in one phase (usually above the critical point), and non-zero in the other. It characterises the onset of order at the phase transition. The order parameter susceptibility will usually diverge approaching the critical point. For a ferromagnetic system undergoing a phase transition, the order parameter is the net magnetization. For liquid/gas transitions, the order parameter is the difference of the densities.
When symmetry is broken, one needs to introduce one or more extra variables to describe the state of the system. For example, in the ferromagnetic phase, one must provide the net magnetization, whose direction was spontaneously chosen when the system cooled below the Curie point. Such variables are examples of order parameters. An order parameter is a measure of the degree of order in a system; it ranges between zero for total disorder and the saturation value for complete order.[4] For example, an order parameter can indicate the degree of order in a liquid crystal. However, note that order parameters can also be defined for non-symmetry-breaking transitions. Some phase transitions, such as superconducting and ferromagnetic, can have order parameters for more than one degree of freedom. In such phases, the order parameter may take the form of a complex number, a vector, or even a tensor, the magnitude of which goes to zero at the phase transition.
There also exist dual descriptions of phase transitions in terms of disorder parameters. These indicate the presence of line-like excitations such as vortex- or defect lines.
### Relevance in cosmology
Symmetry-breaking phase transitions play an important role in cosmology. It has been speculated that, in the hot early universe, the vacuum (i.e. the various quantum fields that fill space) possessed a large number of symmetries. As the universe expanded and cooled, the vacuum underwent a series of symmetry-breaking phase transitions. For example, the electroweak transition broke the SU(2)×U(1) symmetry of the electroweak field into the U(1) symmetry of the present-day electromagnetic field. This transition is important to understanding the asymmetry between the amount of matter and antimatter in the present-day universe (see electroweak baryogenesis.)
Progressive phase transitions in an expanding universe are implicated in the development of order in the universe, as is illustrated by the work of Eric Chaisson[5] and David Layzer.[6] See also Relational order theories.
See also: Order-disorder
### Critical exponents and universality classes
Main article: critical exponent
Continuous phase transitions are easier to study than first-order transitions due to the absence of latent heat, and they have been discovered to have many interesting properties. The phenomena associated with continuous phase transitions are called critical phenomena, due to their association with critical points.
It turns out that continuous phase transitions can be characterized by parameters known as critical exponents. The most important one is perhaps the exponent describing the divergence of the thermal correlation length on approaching the transition. For instance, let us examine the behavior of the heat capacity near such a transition. We vary the temperature T of the system while keeping all the other thermodynamic variables fixed, and find that the transition occurs at some critical temperature Tc. When T is near Tc, the heat capacity C typically has a power-law behavior:
$C \propto |T_c - T|^{-\alpha}.$
A similar behavior, but with the exponent $\nu$ instead of $\alpha$, applies for the correlation length.
The exponent $\nu$ is positive; $\alpha$, by contrast, can be of either sign, and its actual value depends on the type of phase transition we are considering.
For -1 < α < 0, the heat capacity has a "kink" at the transition temperature. This is the behavior of liquid helium at the lambda transition from a normal state to the superfluid state, for which experiments have found α = -0.013±0.003. At least one experiment was performed in the zero-gravity conditions of an orbiting satellite to minimize pressure differences in the sample.[7] This experimental value of α agrees with theoretical predictions based on variational perturbation theory.[8]
For 0 < α < 1, the heat capacity diverges at the transition temperature (though, since α < 1, the enthalpy stays finite). An example of such behavior is the 3-dimensional ferromagnetic phase transition. In the three-dimensional Ising model for uniaxial magnets, detailed theoretical studies have yielded the exponent α ∼ +0.110.
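The qualitative difference between these two ranges of α is easy to see numerically. Below is a minimal sketch (the values of α are the ones quoted above; the background and amplitude are arbitrary illustrative choices):

```python
# Sketch: the singular part of C = background + A*|T - Tc|^(-alpha) stays finite
# when alpha < 0 (a cusp or "kink" on top of the regular background) but grows
# without bound when alpha > 0. Amplitudes here are arbitrary.
def heat_capacity(t, alpha, background=1.0, amplitude=1.0):
    """t is the distance from the critical temperature, |T - Tc|."""
    return background + amplitude * abs(t) ** (-alpha)

for alpha in (-0.013, 0.110):              # lambda transition vs. 3D Ising
    values = [heat_capacity(t, alpha) for t in (1e-2, 1e-4, 1e-8)]
    print(f"alpha = {alpha:+.3f}:", ", ".join(f"{v:.3f}" for v in values))
# alpha < 0: C stays finite as t -> 0; alpha > 0: C diverges.
```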
Some model systems do not obey a power-law behavior. For example, mean field theory predicts a finite discontinuity of the heat capacity at the transition temperature, and the two-dimensional Ising model has a logarithmic divergence. However, these systems are limiting cases and an exception to the rule. Real phase transitions exhibit power-law behavior.
Several other critical exponents (β, γ, δ, ν, and η) are defined by examining the power-law behavior of a measurable physical quantity near the phase transition. The exponents are related by scaling relations such as $\beta=\gamma/(\delta-1)$ and $\nu=\gamma/(2-\eta)$. It can be shown that there are only two independent exponents, e.g. $\nu$ and $\eta$.
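As a concrete illustration, the scaling relations above, together with the hyperscaling relation 2 − α = νd and the Rushbrooke relation α + 2β + γ = 2 (assumptions added here, not stated in the text), allow the remaining exponents to be reconstructed from ν and η alone. A minimal sketch using approximate literature values for the three-dimensional Ising class:

```python
# Sketch: derive alpha, beta, gamma, delta for the 3D Ising universality class
# from nu and eta, using the Fisher, Josephson (hyperscaling), Rushbrooke and
# Widom relations. Input values are rounded literature estimates.
d = 3                            # spatial dimensionality
nu, eta = 0.630, 0.036           # correlation-length and correlation-function exponents

gamma = nu * (2 - eta)           # Fisher:     nu = gamma / (2 - eta)
alpha = 2 - nu * d               # Josephson:  2 - alpha = nu * d
beta = (2 - alpha - gamma) / 2   # Rushbrooke: alpha + 2*beta + gamma = 2
delta = 1 + gamma / beta         # Widom:      beta = gamma / (delta - 1)

print(f"alpha ~ {alpha:.3f}, beta ~ {beta:.3f}, gamma ~ {gamma:.3f}, delta ~ {delta:.2f}")
# -> alpha ~ 0.110, beta ~ 0.326, gamma ~ 1.237, delta ~ 4.79, close to accepted values.
```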
It is a remarkable fact that phase transitions arising in different systems often possess the same set of critical exponents. This phenomenon is known as universality. For example, the critical exponents at the liquid-gas critical point have been found to be independent of the chemical composition of the fluid. More remarkably, and understandably in light of the above, they exactly match the critical exponents of the ferromagnetic phase transition in uniaxial magnets. Such systems are said to be in the same universality class. Universality is a prediction of the renormalization group theory of phase transitions, which states that the thermodynamic properties of a system near a phase transition depend only on a small number of features, such as dimensionality and symmetry, and are insensitive to the underlying microscopic properties of the system. Again, the divergence of the correlation length is the essential point.
### Critical slowing down and other phenomena
There are also other critical phenomena; e.g., besides static functions there is also critical dynamics. As a consequence, at a phase transition one may observe critical slowing down or speeding up. The large static universality classes of a continuous phase transition split into smaller dynamic universality classes. In addition to the critical exponents, there are also universal relations for certain static or dynamic functions of the magnetic fields and temperature differences from the critical value.
### Percolation theory
Another phenomenon which shows phase transitions and critical exponents is percolation. The simplest example is perhaps percolation in a two-dimensional square lattice. Sites are randomly occupied with probability p. For small values of p the occupied sites form only small clusters. At a certain threshold $p_c$ a giant cluster forms and we have a second-order phase transition.[9] The order parameter $P_\infty$, the probability that a site belongs to the giant cluster, behaves near $p_c$ as $P_\infty \sim (p-p_c)^{\beta}$, where β is a critical exponent.
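A finite-size estimate of this behavior is easy to obtain by direct simulation. The following is a minimal sketch (assuming NumPy and SciPy are available; the spanning-cluster fraction serves as a finite-size proxy for $P_\infty$, and $p_c \approx 0.5927$ for site percolation on the square lattice):

```python
# Sketch: site percolation on an L x L square lattice. Occupy sites with
# probability p, label 4-connected clusters, and measure the fraction of sites
# belonging to a cluster that spans from the top row to the bottom row.
import numpy as np
from scipy.ndimage import label

def spanning_fraction(L, p, rng):
    occupied = rng.random((L, L)) < p
    labels, _ = label(occupied)                      # 4-connected clusters
    spanning = (set(labels[0, :]) & set(labels[-1, :])) - {0}
    if not spanning:
        return 0.0
    return np.isin(labels, list(spanning)).sum() / L**2

rng = np.random.default_rng(0)
L = 200
for p in (0.55, 0.59, 0.63):                         # below, near, and above p_c
    est = np.mean([spanning_fraction(L, p, rng) for _ in range(10)])
    print(f"p = {p:.2f}: spanning-cluster fraction ~ {est:.3f}")
```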
## References
1. ^ a b Blundell, Stephen J.; Katherine M. Blundell (2008). Concepts in Thermal Physics. Oxford University Press. ISBN 978-0-19-856770-7.
2. Gotze, Wolfgang. "Complex Dynamics of Glass-Forming Liquids: A Mode-Coupling Theory."
3. Lubchenko, V.; Wolynes, P. G. "Theory of Structural Glasses and Supercooled Liquids". Annual Review of Physical Chemistry, 2007, Vol. 58, p. 235.
4. A. D. McNaught and A. Wilkinson (ed.). "Compendium of Chemical Terminology" (commonly called The Gold Book). IUPAC. ISBN 0-86542-684-8. Retrieved 2007-10-23.
5. Chaisson, “Cosmic Evolution”, Harvard, 2001
6. David Layzer, "Cosmogenesis: The Development of Order in the Universe", Oxford Univ. Press, 1991
7. Lipa, J.; Nissen, J.; Stricker, D.; Swanson, D.; Chui, T. (2003). "Specific heat of liquid helium in zero gravity very near the lambda point". Physical Review B 68 (17). arXiv:cond-mat/0310163. Bibcode:2003PhRvB..68q4518L. doi:10.1103/PhysRevB.68.174518.
8. Kleinert, Hagen (1999). "Critical exponents from seven-loop strong-coupling φ4 theory in three dimensions". Physical Review D 60 (8). arXiv:hep-th/9812197. Bibcode:1999PhRvD..60h5001K. doi:10.1103/PhysRevD.60.085001.
9. Armin Bunde and Shlomo Havlin (1996). Fractals and Disordered Systems. Springer.
### Further reading
• Anderson, P.W., Basic Notions of Condensed Matter Physics, Perseus Publishing (1997).
• Goldenfeld, N., Lectures on Phase Transitions and the Renormalization Group, Perseus Publishing (1992).
• Ivancevic, Vladimir G; Ivancevic, Tijana T (2008), Chaos, Phase Transitions, Topology Change and Path Integrals, Berlin: Springer, ISBN 978-3-540-79356-4, e-ISBN 978-3-540-79357-1, retrieved 14 March 2013
• Krieger, Martin H., Constitutions of matter : mathematically modelling the most everyday of physical phenomena, University of Chicago Press, 1996. Contains a detailed pedagogical discussion of Onsager's solution of the 2-D Ising Model.
• Landau, L.D. and Lifshitz, E.M., Statistical Physics Part 1, vol. 5 of Course of Theoretical Physics, Pergamon Press, 3rd Ed. (1994).
• Kleinert, H., Gauge Fields in Condensed Matter, Vol. I, "Superfluid and Vortex Lines; Disorder Fields, Phase Transitions", pp. 1–742, World Scientific (Singapore, 1989); Paperback ISBN 9971-5-0210-0 (readable online physik.fu-berlin.de)
• Kleinert, H. and Verena Schulte-Frohlinde, Critical Properties of φ4-Theories, World Scientific (Singapore, 2001); Paperback ISBN 981-02-4659-5 (readable online here).
• Mussardo G., "Statistical Field Theory. An Introduction to Exactly Solved Models of Statistical Physics", Oxford University Press, 2010.
• Schroeder, Manfred R., Fractals, chaos, power laws : minutes from an infinite paradise, New York: W.H. Freeman, 1991. Very well-written book in "semi-popular" style—not a textbook—aimed at an audience with some training in mathematics and the physical sciences. Explains what scaling in phase transitions is all about, among other things.
• Yeomans J. M., Statistical Mechanics of Phase Transitions, Oxford University Press, 1992.
• H. E. Stanley, Introduction to Phase Transitions and Critical Phenomena (Oxford University Press, Oxford and New York 1971).
http://mathoverflow.net/questions/65250?sort=oldest
Subspaces isomorphic to C[0,omega_1]
Let $\omega_1$ be the smallest uncountable ordinal. I am trying to understand the possible "large" subspaces of $C[0,\omega_1]$, namely those which are isomorphic to the whole space. Therefore I have the following question:
Does every subspace of $C[0,\omega_1]$ isomorphic to $C[0,\omega_1]$ contain a complemented copy isomorphic to itself? The only (complemented) examples that I can "construct by hand", excluding the finite-codimensional ones, are of the form
$\mbox{cl lin}(\mathbf{1}_{[0,\gamma_{\sigma}]}\colon \sigma\leq \omega_1)$
where $(\gamma_\sigma)_{\sigma\leq\omega_1}$ is an increasing long sequence of limit ordinals with $\gamma_{\omega_1}=\omega_1$ (note that the family $(\mathbf{1}_{[0,\alpha]}\colon \alpha\leq \omega_1)$ forms the long Schauder basis for $C[0,\omega_1]$).
Thank you, T.
1 Answer
Have you searched the literature? There are a large number of papers about uncomplemented and complemented embeddings of $C(K)$ into $C(K)$. In particular, you should look at papers of Bessaga and Pelczynski from the 1960s (especially their fourth one on spaces of continuous functions), papers of Dan Amir from the late 1960s and 1970s, and Alspach and Benyamini from the 1970s. For an overview of part of the material, read Rosenthal's article in volume 2 of the Handbook of the Geometry of Banach Spaces. In particular, in Section 3C of that article you will find that there is an isomorphic copy of $C[0,\omega^\omega]$ in itself that is not complemented; from that it is very easy to prove that there is an isomorphic copy of $C[0,\omega_1]$ in itself that is not complemented.
Please read what I wrote. From the uncomplementation result for $C[0,\omega^\omega]$ you immediately get the answer to your first question, so I do not understand why you asked the first question. (Just observe that $C[0,\omega_1]$ is isomorphic to $C[0,\omega_1]\oplus C[0,\omega^\omega]$.) – Bill Johnson May 17 2011 at 18:08
Indeed, sometimes I write faster than I think but I'm trying to fight with that, sorry. By the way, it is not hard to prove that every closed non-separable subspace of $C[0,\omega_1]$ contains a complemented copy of $c_0(\omega_1)$... – Tomek Kania May 17 2011 at 19:54
BTW: For deep results on non separable $C(K)$ spaces, some of which rely on special set theoretic axioms, look at recent papers by Koszmider. (Probably you know this, but I mention it just in case.) – Bill Johnson May 17 2011 at 20:04
Now, we know that each subspace of $C[0,\omega_1]$ isomorphic to $C[0,\omega_1]$ contains a further subspace isomorphic to itself and complemented in $C[0,\omega_1]$. The preprint is about to be uploaded. – Tomek Kania Jun 18 at 12:11
http://physics.stackexchange.com/questions/41998/is-single-photon-annihination-of-electron-positron-pair-prohibited-by-feynman-di/42002
# Is single-photon annihilation of an electron-positron pair prohibited by Feynman diagram analysis?
It is obvious that an electron-positron pair cannot annihilate into a single photon, since that would violate momentum conservation. My question is: can we get this knowledge from Feynman diagrams or the perturbative expansion of QED? If we can't, how can we make sure not to over-count diagrams when we draw Feynman diagrams to compute the scattering amplitude?
## 2 Answers
If you want to see this from a straightforward implementation of the Feynman rules: you can always calculate the diagram for $e^- e^+ \rightarrow \gamma$, and for arbitrary momentum it will be nonzero. After all, this is how we calculate the effective vertex in a low-energy effective Lagrangian in a theory like QED. When you go to calculate a physical result like a cross section, though, you will multiply this amplitude by a momentum-conserving delta function. The delta function won't have any support in the physical region, so it will just kill the whole thing right there and you will get zero. So, to answer your question, you don't have to worry about removing things like this by hand from your calculation; they will take care of themselves as long as you are careful.
The delta function in the Feynman integral for the energy-momentum 4-vectors of the two ingoing e+ e- lines and one outgoing photon line will vanish for on-shell momenta.
I don't understand the last part of the question.
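For completeness, the kinematic fact both answers rely on can be stated invariantly (standard special-relativistic reasoning, added here as a note rather than part of either answer). The invariant mass of the pair satisfies
$$(p_{e^-}+p_{e^+})^2 \;\ge\; (2m_e)^2 \;>\; 0,$$
while any single on-shell photon has $p_\gamma^2 = 0$, so four-momentum conservation $p_{e^-}+p_{e^+}=p_\gamma$ cannot hold. Equivalently, in the center-of-momentum frame the total three-momentum vanishes, but a photon of energy $E>0$ necessarily carries momentum $E/c$.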
http://math.stackexchange.com/questions/tagged/elementary-number-theory+totient-function
# Tagged Questions
3answers
144 views
### Show that the only solution to $\phi(n) =n-2$ is $n=4$
Came across this question in Number Theory. Let $\phi$ denote Euler's totient function; Show that the only solution to $\phi(n) =n-2$ is $n=4$ My workings so far have included, firstly ...
2answers
115 views
### Find all the natural numbers where $ϕ(n)=110$ (Euler's totient function)
Find all the natural numbers where $ϕ(n)=110$ (Euler's totient function) What the idea behind this kind of questions?
1answer
103 views
### Finding All Integers in such that $\phi(n)=80$
I don't know where to start with this problem so please help. The problem is: Find all integers n such that $\phi(n) = 80$.
1answer
46 views
### Why is this fact about the totient function true? [duplicate]
$\displaystyle \sum_{\substack{k<n\\ \gcd(k,n)=1}} k = \frac{1}{2} n \phi(n)$ This is a homework problem. I would ideally like to get to the final proof on my own. But at the moment I can't even decide how to ...
2answers
52 views
### Totient function and Euler's Theorem
Given $\big(m, n\big) = 1$, Prove that $$m^{\varphi(n)} + n^{\varphi(m)} \equiv 1 \pmod{mn}$$ I have tried saying $$\text{let }(a, mn) = 1$$ $$a^{\varphi(mn)} \equiv 1 \pmod{mn}$$ ...
1answer
150 views
### Modified Euler's Totient function for counting constellations in reduced residue systems
I am working on a modified totient function for counting constellations in reduced residue systems for the same range that Euler's totient function is defined over. This post is separated into three ...
0answers
91 views
### Partition minimizing maximum of Euler's totient function across terms
Given natural numbers $M$ and $N$, I'd like to find a partition of $2^N$ with $M$ or fewer terms, $t_1 + t_2 + ... + t_M$, such that $\max(\phi(t_1), \phi(t_2), ..., \phi(t_M))$ is minimized, where ...
2answers
125 views
### Problems with Euler $\phi$ function (2)
If $a$, $b$ are coprime, then $$a^{\phi(b)}+b^{\phi(a)}\equiv 1 \bmod (ab) \, .$$ If $\left(n=2\phi(n)\right)$, then $n$ is a power of $2$.
2answers
157 views
### Is my shorter expression for $s_m(n)= 1^m+2^m+3^m+\cdots+(n-1)^m \pmod n$ true?
I'm considering the following sums for natural numbers n,m $$s_m(n)= \sum_{k=1}^{n-1} k^m =1^m+2^m+3^m+\cdots+(n-1)^m$$ modulo n . Looking at odd n first, I found by analysis of the pattern of ...
1answer
171 views
### Seeking a proof of $\sum_{d|n}\phi(\frac{n}{d})a^d\equiv 0 \mod{n}$, where $\phi$ is the Euler Totient Function.
I need to prove the proposition. Let $a$ be an arbitrary integer. Then for every positive integer $n$, we have $$\sum_{d \mid n}\phi\left(\frac{n}{d}\right)a^d\equiv0\pmod{n}.$$
3answers
268 views
### Properties of Euler's $\phi()$ function
This is part of the $\phi(mn) = \phi(m)\cdot \phi(n)$ theorem. For some integer $a$ relatively prime to $m\cdot n$ how do I know the following: $a\mod m$ is relatively prime to $m$ $a \mod n$ is ...
0answers
47 views
### (Please check working) Given RSA encoding function $E: x\to x^{11} \pmod{3737}$ find the decoding function $D$
Please check the working and final answer to the question: Question: Given RSA encoding function $E: x\to x^{11} \pmod{3737}$ find the decoding function $D$ My working: $\phi(3737) = \phi(37) \times$ ...
3answers
85 views
### Given RSA encoding function $E: x\to x^7 \pmod{6161}$ find decoding function D
So far I got: $7\alpha \equiv 1 \pmod{\phi(6161)}$, $\phi(6161) = \phi(61) \times \phi(101) = 6000$, so $7\alpha \equiv 1 \pmod{6000}$. At this point we are supposed to do Euclid's algorithm and somehow ...
0answers
232 views
### Induction in proof of multiplicativity of Euler totient function
(Updated below) I'm working through John Stillwell's Elements of Algebra, and while his exercises are generally crafted to be not too difficult, there's one that I don't even understand what it's ...
2answers
93 views
### Show that $c^{\varphi(m)/2} \equiv 1 \pmod{m}$ if $m$ has two odd prime divisors
The following problem is one of the exercises in Topics in the Theory of Numbers (Erdős et al.) Show that if the positive integer $m$ has at least two distinct odd prime divisors, and $c$ is ...
5answers
966 views
### What's the proof that the Euler totient function is multiplicative?
That is, why is $\varphi (A\cdot B)=\varphi (A)\cdot \varphi (B)$, if A and B are coprime? It's not just a technical trouble—I can't see why this should be, intuitively: I bellyfeel that its ...
3answers
312 views
### Does knowing the totient of a number help factoring it? [duplicate]
Possible Duplicate: Factoring a number $p^a q^b$ knowing its totient Edit: The quoted question addresses only numbers of the form $p^a q^b$, I asked a general question for arbitrary $n$. ...
2answers
153 views
### Solutions of $\phi(x)=n$ for a given n.
I need to prove for a given n, if $\phi(x)=n$ has a solution for x, it always has another? We know $\phi(2)=\phi(1)=1$ and can easily prove that n must be even for x>2. So, n can be of the form ...
3answers
467 views
### $\phi(n)=\frac{n}{2}$ if and only if $n=2^k$ for some positive integer k
Show that $\phi(n)=\frac{n}{2}$ if and only if $n=2^k$ for some positive integer k. I think I have it figured and would like to see if I am on the right track. Thank you.
0answers
188 views
### Sum of floor function $\pmod{n}$
Let $n$ be a positive integer. Let $a$ be a nonzero integer such that $\gcd(a,n)=1$. How to show that $\frac{a^{\phi (n)}-1}{n} \equiv \sum_i \frac{1}{ai} \left \lfloor \frac{ai}{n} \right \rfloor$ ...
1answer
464 views
### Show that $\phi(mn) = \phi(m)\phi(n)\frac{d}{\phi(d)}$ [duplicate]
Possible Duplicate: Proof of a formula involving Euler's totient function. For positive integers $m$ and $n$ where $d=\gcd(m,n)$, show that $\phi(mn) = \phi(m)\phi(n)\frac{d}{\phi(d)}$.
2answers
158 views
### How to prove $n*\varphi(n)/2$ sum?
How do I prove the second formula from Euler's totient function? $$\sum_{\substack{1\le k\le n\\(k,n)=1}} k=\frac 12 n \varphi(n)$$ for $n>1$.
1answer
113 views
### How to show $\varphi (ab) = d\varphi(a)\varphi(b) / \varphi(d)$? [duplicate]
Possible Duplicate: Proof of a formula involving Euler's totient function. I have this interesting question that I have difficulty to prove. I know that: $\gcd(a,b) = d$ And I need ...
2answers
140 views
### If $n=2\phi(n)$, then $n=2^j$.
I need to show that if $n=2\phi(n)$, then $n=2^j$, where $n,j\in\mathbb{N}$. I have a strong feeling that this can only be shown by contradiction. Therefore, I assumed that both $n=2\phi(n)$ and ...
2answers
2k views
### Find all positive integers $n$ such that $\phi(n)=6$.
I am asked to find all positive integers $n$ such that $\phi(n)=6$, and to prove that I have found all solutions. The way I am tackling this is by constructing all combinations of prime powers such ...
2answers
551 views
### How to prove $\phi(n) = n/2$ iff $n = 2^k$?
How can I prove this statement ? $\phi(n) = n/2$ iff $n = 2^k$ I'm thinking n can be decomposed into its prime factors, then I can use multiplicative property of the euler phi function to get the ...
2answers
617 views
### How to prove $\phi(mn) > \phi(m)\phi(n)$ if $(m,n) \ne 1$
I need to prove that $$\phi(mn) > \phi(m)\phi(n)$$ if $m$ and $n$ have a common factor greater than 1. I have read up on the case where $m$ and $n$ are relatively prime, then ...
2answers
124 views
### Calculate $a^8 \bmod 15$ for $a = 1,2,\dots,14$
I am trying to calculate $a^8 \bmod 15$ for $a = 1,2,\dots,14$ I get that because $a = 2,4,7,8,11,13,14$ are relatively prime to $15$, the answer will be $1$ in those cases. But how to get this for ...
1answer
263 views
### An approximate relationship between the totient function and sum of divisors
I was playing around with a few of the number theory functions in Mathematica when I found an interesting relationship between some of them. Below I have plotted points with coordinates ...
2answers
194 views
### Nice formula for $\sum\limits_{d|n}(-1)^{n/d}\Phi(d)$?
How do I evaluate $$\sum_{d|n}(-1)^{n/d}\Phi(d)?$$ $\Phi(d)$ is Euler's totient function. Thanks.
2answers
280 views
### Is there a methodical way to compute Euler's Phi function
Is there an algorithmic or methodical way to "factorise" the numbers in euler's phi function such that it becomes easily computable? For example, $\phi(7000) = \phi(2^3 \cdot 5^3 \cdot 7)$ I'm ...
2answers
337 views
### Finding the maximum number with a certain Euler's totient value
Euler's totient function has a lower bound for large values, but is there any way to pick out maximums for specific values of the function? That is, how would I find the maximum number n such that ...
3answers
218 views
### Why is 2 totative of 36?
Based on my understanding, the totient of any number K is the number of relative primes to K, i.e. numbers less than or equal to K that do not share a divisor. Everywhere I look is telling me that 2 ...
2answers
569 views
### Why does $\phi(pq)=\phi(p)\phi(q)$?
In an RSA paper I am reading it is assumed that where $p$ and $q$ are distinct prime numbers: $\phi(pq)=\phi(p)\phi(q)=(p-1)(q-1)$ I would love to know why/how this is so? Is there some way to prove ...
1answer
235 views
### How many irreducible fractions between 0 and 1 have denominator less than $n$?
Or, in an $n\times n$ grid of dots, how many distinct lines pass through at least two of the dots, one of which is the lower left dot? Is there a good way to do this? Thanks.
4answers
1k views
### Identity involving Euler's totient function: $\sum \limits_{k=1}^n \left\lfloor \frac{n}{k} \right\rfloor \varphi(k) = \frac{n(n+1)}{2}$
Let $\varphi(n)$ be Euler's totient function, the number of positive integers less than or equal to $n$ and relatively prime to $n$. Challenge: Prove \sum_{k=1}^n \left\lfloor \frac{n}{k} ...
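Several of the identities collected above are easy to check numerically before attempting a proof. A small sketch, using a deliberately naive $\varphi$ implementation (illustrative only, not efficient):

```python
# Sketch: naive Euler totient plus numerical sanity checks of two identities
# that appear in the questions above (checks, not proofs).
from math import gcd

def phi(n: int) -> int:
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

for n in range(2, 50):
    # sum of the totatives of n equals n*phi(n)/2 for n > 1
    assert sum(k for k in range(1, n) if gcd(k, n) == 1) == n * phi(n) // 2
    # sum_{k=1}^{n} floor(n/k) * phi(k) = n*(n+1)/2
    assert sum((n // k) * phi(k) for k in range(1, n + 1)) == n * (n + 1) // 2

print("both identities hold for n = 2..49")
```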
http://physics.stackexchange.com/questions/56444/varying-an-action-cosmological-perturbation-theory
# Varying an action (cosmological perturbation theory)
I am stuck varying an action, trying to get an equation of motion. (Going from eq. 91 to eq. 92 in the image.) This is the action
$$S~=~\int d^{4}x \frac{a^{2}(t)}{2}(\dot{h}^{2}-(\nabla h)^2).$$
And this is the solution,
$$\ddot{h} + 2 \frac{\dot{a}}{a}\dot{h} - \nabla^{2}h~=~0.$$
This is what I get
$$\partial_{0}(a^{2}\partial_{0}h)-\partial_{0}(a^{2}\nabla h)-\nabla(a^{2}\partial_{0}h)+\nabla^{2}(ha^{2})~=~0.$$
I don't really see my mistake; perhaps I am missing something. (The dot represents $\partial_{0}$.)
It is this problem (see Lectures on the Theory of Cosmological Perturbations, by Brandenberger):
Comment to the question (v1): How do you get the second and third term with mixed temporal and spatial derivatives? – Qmechanic♦ Mar 10 at 17:41
– Qmechanic♦ Apr 23 at 20:18
## 1 Answer
Hints:
1. The Lagrangian density in the $(+,-,-,-)$ convention is $${\cal L}~=~\frac{a^2}{2}d_{\mu}h ~d^{\mu}h.$$
2. The corresponding Euler-Lagrange equation (by varying the action $S[h]=\int \!d^4x ~{\cal L}$ wrt. the field $h$) is $$d_{\mu}(a^2 ~d^{\mu}h)~=~0.$$
3. Or equivalently, under the assumption that $a=a(t)$, $$\frac{2\dot{a}\dot{h}}{a} + d_{\mu}d^{\mu}h~=~0.$$
4. Finally, Fourier transform the three spatial directions to get eq. (92).
thx of course, just to check does the Fourier transform mean that one should sub in $h=h(t)e^{ikx}$ or is it $h=h(t)e^{ikx}+h(t)^{*}e^{-ikx}$ ? I always get confused with doing this, how would one simplify it to get the desired result? – user21119 Mar 10 at 18:07
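As a cross-check of steps 2 and 3 above, one can let a computer algebra system grind out the Euler-Lagrange equation. A minimal sketch, assuming SymPy is available and restricting to a single spatial dimension x for brevity (SymPy's convention gives the equation up to an overall sign):

```python
# Sketch: derive the equation of motion for L = a(t)^2/2 * (h_t^2 - h_x^2)
# symbolically and compare with h'' + 2*(a'/a)*h' - h_xx = 0.
import sympy as sp
from sympy.calculus.euler import euler_equations

t, x = sp.symbols('t x')
a = sp.Function('a')(t)         # scale factor a(t)
h = sp.Function('h')(t, x)      # perturbation h(t, x)

L = a**2 / 2 * (sp.diff(h, t)**2 - sp.diff(h, x)**2)

eq, = euler_equations(L, [h], [t, x])
# Divide through by a^2 (and flip the overall sign) to match eq. (92):
eom = sp.expand(-eq.lhs / a**2)
print(sp.Eq(eom, 0))
# Expected output (schematically): h_tt + 2*(a'/a)*h_t - h_xx = 0
```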
http://en.wikipedia.org/wiki/Black_Hole
# Black hole
For other uses, see Black hole (disambiguation).
Simulated view of a black hole (center) in front of the Large Magellanic Cloud. Note the gravitational lensing effect, which produces two enlarged but highly distorted views of the Cloud. Across the top, the Milky Way disk appears distorted into an arc.
A black hole is a region of spacetime from which gravity prevents anything, including light, from escaping.[1] The theory of general relativity predicts that a sufficiently compact mass will deform spacetime to form a black hole. Around a black hole there is a mathematically defined surface called an event horizon that marks the point of no return. The hole is called "black" because it absorbs all the light that hits the horizon, reflecting nothing, just like a perfect black body in thermodynamics.[2][3] Quantum field theory in curved spacetime predicts that event horizons emit radiation like a black body with a finite temperature. This temperature is inversely proportional to the mass of the black hole, making it difficult to observe this radiation for black holes of stellar mass or greater.
Objects whose gravity fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. The first modern solution of general relativity that would characterize a black hole was found by Karl Schwarzschild in 1916, although its interpretation as a region of space from which nothing can escape was first published by David Finkelstein in 1958. Long considered a mathematical curiosity, it was during the 1960s that theoretical work showed black holes were a generic prediction of general relativity. The discovery of neutron stars sparked interest in gravitationally collapsed compact objects as a possible astrophysical reality.
Black holes of stellar mass are expected to form when very massive stars collapse at the end of their life cycle. After a black hole has formed it can continue to grow by absorbing mass from its surroundings. By absorbing other stars and merging with other black holes, supermassive black holes of millions of solar masses may form. There is general consensus that supermassive black holes exist in the centers of most galaxies.
Despite its invisible interior, the presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as light. Matter falling onto a black hole can form an accretion disk heated by friction, forming some of the brightest objects in the universe. If there are other stars orbiting a black hole, their orbit can be used to determine its mass and location. These data can be used to exclude possible alternatives (such as neutron stars). In this way, astronomers have identified numerous stellar black hole candidates in binary systems, and established that the core of our Milky Way galaxy contains a supermassive black hole of about 4.3 million solar masses.
## History
Simulation of gravitational lensing by a black hole, which distorts the image of a galaxy in the background (larger animation)
The idea of a body so massive that even light could not escape was first put forward by geologist John Michell in a letter written in 1783 to Henry Cavendish of the Royal Society:
If the semi-diameter of a sphere of the same density as the Sun were to exceed that of the Sun in the proportion of 500 to 1, a body falling from an infinite height towards it would have acquired at its surface greater velocity than that of light, and consequently supposing light to be attracted by the same force in proportion to its vis inertiae, with other bodies, all light emitted from such a body would be made to return towards it by its own proper gravity.
—John Michell[4]
In 1796, mathematician Pierre-Simon Laplace promoted the same idea in the first and second editions of his book Exposition du système du Monde (it was removed from later editions).[5][6] Such "dark stars" were largely ignored in the nineteenth century, since it was not understood how a massless wave such as light could be influenced by gravity.[7]
### General relativity
In 1915, Albert Einstein developed his theory of general relativity, having earlier shown that gravity does influence light's motion. Only a few months later, Karl Schwarzschild found a solution to Einstein field equations, which describes the gravitational field of a point mass and a spherical mass.[8] A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution for the point mass and wrote more extensively about its properties.[9][10] This solution had a peculiar behaviour at what is now called the Schwarzschild radius, where it became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this surface was not quite understood at the time. In 1924, Arthur Eddington showed that the singularity disappeared after a change of coordinates (see Eddington–Finkelstein coordinates), although it took until 1933 for Georges Lemaître to realize that this meant the singularity at the Schwarzschild radius was an unphysical coordinate singularity.[11]
In 1931, Subrahmanyan Chandrasekhar calculated, using special relativity, that a non-rotating body of electron-degenerate matter above a certain limiting mass (now called the Chandrasekhar limit at 1.4 solar masses) has no stable solutions.[12] His arguments were opposed by many of his contemporaries like Eddington and Lev Landau, who argued that some yet unknown mechanism would stop the collapse.[13] They were partly correct: a white dwarf slightly more massive than the Chandrasekhar limit will collapse into a neutron star,[14] which is itself stable because of the Pauli exclusion principle. But in 1939, Robert Oppenheimer and others predicted that neutron stars above approximately three solar masses (the Tolman–Oppenheimer–Volkoff limit) would collapse into black holes for the reasons presented by Chandrasekhar, and concluded that no law of physics was likely to intervene and stop at least some stars from collapsing to black holes.[15]
Oppenheimer and his co-authors interpreted the singularity at the boundary of the Schwarzschild radius as indicating that this was the boundary of a bubble in which time stopped. This is a valid point of view for external observers, but not for infalling observers. Because of this property, the collapsed stars were called "frozen stars",[16] because an outside observer would see the surface of the star frozen in time at the instant where its collapse takes it inside the Schwarzschild radius.
### Golden age
See also: Golden age of general relativity
In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, "a perfect unidirectional membrane: causal influences can cross it in only one direction".[17] This did not strictly contradict Oppenheimer's results, but extended them to include the point of view of infalling observers. Finkelstein's solution extended the Schwarzschild solution for the future of observers falling into a black hole. A complete extension had already been found by Martin Kruskal, who was urged to publish it.[18]
These results came at the beginning of the golden age of general relativity, which was marked by general relativity and black holes becoming mainstream subjects of research. This process was helped by the discovery of pulsars in 1967,[19][20] which, by 1969, were shown to be rapidly rotating neutron stars.[21] Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities; but the discovery of pulsars showed their physical relevance and spurred a further interest in all types of compact objects that might be formed by gravitational collapse.
In this period more general black hole solutions were found. In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the axisymmetric solution for a black hole that is both rotating and electrically charged.[22] Through the work of Werner Israel,[23] Brandon Carter,[24][25] and David Robinson[26] the no-hair theorem emerged, stating that a stationary black hole solution is completely described by the three parameters of the Kerr–Newman metric; mass, angular momentum, and electric charge.[27]
At first, it was suspected that the strange features of the black hole solutions were pathological artifacts from the symmetry conditions imposed, and that the singularities would not appear in generic situations. This view was held in particular by Vladimir Belinsky, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions. However, in the late 1960s Roger Penrose[28] and Stephen Hawking used global techniques to prove that singularities appear generically.[29]
Work by James Bardeen, Jacob Bekenstein, Carter, and Hawking in the early 1970s led to the formulation of black hole thermodynamics.[30] These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed when Hawking, in 1974, showed that quantum field theory predicts that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole.[31]
The term "black hole" was first publicly used by John Wheeler during a lecture in 1967. Although he is usually credited with coining the phrase, he always insisted that it was suggested to him by somebody else. The first recorded use of the term is by a journalist Ann Ewing in her article "'Black Holes' in Space", dated 18 January 1964, which was a report on a meeting of the American Association for the Advancement of Science.[32] After Wheeler's use of the term, it was quickly adopted in general use.
## Properties and structure
The no-hair theorem states that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, charge, and angular momentum.[27] Any two black holes that share the same values for these properties, or parameters, are indistinguishable according to classical (i.e. non-quantum) mechanics.
These properties are special because they are visible from outside a black hole. For example, a charged black hole repels other like charges just like any other charged object. Similarly, the total mass inside a sphere containing a black hole can be found by using the gravitational analog of Gauss's law, the ADM mass, far away from the black hole.[33] Likewise, the angular momentum can be measured from far away using frame dragging by the gravitomagnetic field.
When an object falls into a black hole, any information about the shape of the object or distribution of charge on it is evenly distributed along the horizon of the black hole, and is lost to outside observers. In this situation the horizon behaves as a dissipative system, closely analogous to a conductive stretchy membrane with friction and electrical resistance (the membrane paradigm).[34] This is different from other field theories like electromagnetism, which do not have any friction or resistivity at the microscopic level, because they are time-reversible. Because a black hole eventually achieves a stable state with only three parameters, there is no way to avoid losing information about the initial conditions: the gravitational and electric fields of a black hole give very little information about what went in. The information that is lost includes every quantity that cannot be measured far away from the black hole horizon, including approximately conserved quantum numbers such as the total baryon number and lepton number. This behavior is so puzzling that it has been called the black hole information loss paradox.[35][36]
### Physical properties
The simplest black holes have mass but neither electric charge nor angular momentum. These black holes are often referred to as Schwarzschild black holes after Karl Schwarzschild who discovered this solution in 1916.[8] According to Birkhoff's theorem, it is the only vacuum solution that is spherically symmetric.[37] This means that there is no observable difference between the gravitational field of such a black hole and that of any other spherical object of the same mass. The popular notion of a black hole "sucking in everything" in its surroundings is therefore only correct near a black hole's horizon; far away, the external gravitational field is identical to that of any other body of the same mass.[38]
Solutions describing more general black holes also exist. Charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes a rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum.[39]
While the mass of a black hole can take any positive value, the charge and angular momentum are constrained by the mass. In Planck units, the total electric charge Q and the total angular momentum J are expected to satisfy
$Q^2+\left ( \tfrac{J}{M} \right )^2\le M^2\,$
for a black hole of mass M. Black holes saturating this inequality are called extremal. Solutions of Einstein's equations that violate this inequality exist, but they do not possess an event horizon. These solutions have so-called naked singularities that can be observed from the outside, and hence are deemed unphysical. The cosmic censorship hypothesis rules out the formation of such singularities, when they are created through the gravitational collapse of realistic matter.[40] This is supported by numerical simulations.[41]
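A small numerical sketch of this bound, in geometrized units G = c = 1 with the spin parameter a = J/M (the expressions are the standard Kerr–Newman horizon radii, quoted here for illustration rather than taken from this article):

```python
# Sketch: the Kerr-Newman metric has an (outer) event horizon at
# r_+ = M + sqrt(M^2 - a^2 - Q^2), which exists only when Q^2 + (J/M)^2 <= M^2.
from math import sqrt

def outer_horizon_radius(M, J, Q):
    a = J / M                        # spin parameter
    disc = M**2 - a**2 - Q**2
    if disc < 0:
        return None                  # bound violated: naked singularity, no horizon
    return M + sqrt(disc)

print(outer_horizon_radius(1.0, 0.0, 0.0))   # Schwarzschild limit: r_+ = 2M
print(outer_horizon_radius(1.0, 0.9, 0.0))   # sub-extremal rotating hole
print(outer_horizon_radius(1.0, 0.0, 1.1))   # over-charged: None
```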
Due to the relatively large strength of the electromagnetic force, black holes forming from the collapse of stars are expected to retain the nearly neutral charge of the star. Rotation, however, is expected to be a common feature of compact objects. The black-hole candidate binary X-ray source GRS 1915+105[42] appears to have an angular momentum near the maximum allowed value.
Black hole classifications

| Class | Mass | Size |
| --- | --- | --- |
| Supermassive black hole | ~10^5–10^10 M_Sun | ~0.001–400 AU |
| Intermediate-mass black hole | ~10^3 M_Sun | ~10^3 km ≈ R_Earth |
| Stellar black hole | ~10 M_Sun | ~30 km |
| Micro black hole | up to ~M_Moon | up to ~0.1 mm |
Black holes are commonly classified according to their mass, independent of angular momentum J or electric charge Q. The size of a black hole, as determined by the radius of the event horizon, or Schwarzschild radius, is roughly proportional to the mass M through
$r_\mathrm{sh} =\frac{2GM}{c^2} \approx 2.95\, \frac{M}{M_\mathrm{Sun}}~\mathrm{km,}$
where $r_\mathrm{sh}$ is the Schwarzschild radius and $M_\mathrm{Sun}$ is the mass of the Sun.[43] This relation is exact only for black holes with zero charge and angular momentum; for more general black holes it can differ by up to a factor of 2.
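For a sense of scale, the formula above is easy to evaluate directly. A minimal sketch with rounded constants and masses (the Sgr A* mass is the ~4.3 million solar masses quoted earlier in the article):

```python
# Sketch: Schwarzschild radius r_sh = 2*G*M/c^2 for a few representative masses.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

def schwarzschild_radius(m_kg):
    return 2 * G * m_kg / c**2   # metres

for name, m in [("Sun", M_sun),
                ("Earth", 5.972e24),
                ("Sgr A* (~4.3e6 M_Sun)", 4.3e6 * M_sun)]:
    print(f"{name}: r_sh ~ {schwarzschild_radius(m):.3e} m")
# -> roughly 2.95 km for the Sun, 9 mm for the Earth, 1.3e10 m for Sgr A*.
```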
### Event horizon
Main article: Event horizon
Far away from the black hole a particle can move in any direction, as illustrated by the set of arrows. It is only restricted by the speed of light.
Closer to the black hole spacetime starts to deform. There are more paths going towards the black hole than paths moving away.[Note 1]
Inside of the event horizon all paths bring the particle closer to the center of the black hole. It is no longer possible for the particle to escape.
The defining feature of a black hole is the appearance of an event horizon—a boundary in spacetime through which matter and light can only pass inward towards the mass of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach an outside observer, making it impossible to determine if such an event occurred.[45]
As predicted by general relativity, the presence of a mass deforms spacetime in such a way that the paths taken by particles bend towards the mass.[46] At the event horizon of a black hole, this deformation becomes so strong that there are no paths that lead away from the black hole.
To a distant observer, clocks near a black hole appear to tick more slowly than those further away from the black hole.[47] Due to this effect, known as gravitational time dilation, an object falling into a black hole appears to slow down as it approaches the event horizon, taking an infinite time to reach it.[48] At the same time, all processes on this object slow down, for a fixed outside observer, causing emitted light to appear redder and dimmer, an effect known as gravitational redshift.[49] Eventually, at a point just before it reaches the event horizon, the falling object becomes so dim that it can no longer be seen.
On the other hand, an observer falling into a black hole does not notice any of these effects as he crosses the event horizon. According to his own clock, he crosses the event horizon after a finite time without noting any singular behaviour. In particular, he is unable to determine exactly when he crosses it, as it is impossible to determine the location of the event horizon from local observations.[50]
The shape of the event horizon of a black hole is always approximately spherical.[Note 2][53] For non-rotating (static) black holes the geometry is precisely spherical, while for rotating black holes the sphere is somewhat oblate.
### Singularity
Main article: Gravitational singularity
At the center of a black hole as described by general relativity lies a gravitational singularity, a region where the spacetime curvature becomes infinite.[54] For a non-rotating black hole, this region takes the shape of a single point and for a rotating black hole, it is smeared out to form a ring singularity lying in the plane of rotation.[55] In both cases, the singular region has zero volume. It can also be shown that the singular region contains all the mass of the black hole solution.[56] The singular region can thus be thought of as having infinite density.
Observers falling into a Schwarzschild black hole (i.e., non-rotating and not charged) cannot avoid being carried into the singularity, once they cross the event horizon. They can prolong the experience by accelerating away to slow their descent, but only up to a point; after attaining a certain ideal velocity, it is best to free fall the rest of the way.[57] When they reach the singularity, they are crushed to infinite density and their mass is added to the total of the black hole. Before that happens, they will have been torn apart by the growing tidal forces in a process sometimes referred to as spaghettification or the "noodle effect".[58]
In the case of a charged (Reissner–Nordström) or rotating (Kerr) black hole, it is possible to avoid the singularity. Extending these solutions as far as possible reveals the hypothetical possibility of exiting the black hole into a different spacetime with the black hole acting as a wormhole.[59] The possibility of traveling to another universe is however only theoretical, since any perturbation will destroy this possibility.[60] It also appears to be possible to follow closed timelike curves (going back to one's own past) around the Kerr singularity, which lead to problems with causality like the grandfather paradox.[61] It is expected that none of these peculiar effects would survive in a proper quantum treatment of rotating and charged black holes.[62]
The appearance of singularities in general relativity is commonly perceived as signaling the breakdown of the theory.[63] This breakdown, however, is expected; it occurs in a situation where quantum effects should describe these actions, due to the extremely high density and therefore particle interactions. To date, it has not been possible to combine quantum and gravitational effects into a single theory, although there exist attempts to formulate such a theory of quantum gravity. It is generally expected that such a theory will not feature any singularities.[64][65]
### Photon sphere
Main article: Photon sphere
The photon sphere is a spherical boundary of zero thickness such that photons moving along tangents to the sphere will be trapped in a circular orbit. For non-rotating black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius. The orbits are dynamically unstable, hence any small perturbation (such as a particle of infalling matter) will grow over time, either setting the photon on an outward trajectory that escapes the black hole or on an inward spiral that eventually crosses the event horizon.[66]
While light can still escape from inside the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Hence any light reaching an outside observer from inside the photon sphere must have been emitted by objects inside the photon sphere but still outside of the event horizon.[66]
Other compact objects, such as neutron stars, can also have photon spheres.[67] This follows from the fact that the gravitational field of an object does not depend on its actual size, hence any object that is smaller than 1.5 times the Schwarzschild radius corresponding to its mass will indeed have a photon sphere.
### Ergosphere
Main article: Ergosphere
The ergosphere is an oblate spheroid region outside of the event horizon, where objects cannot remain stationary.
Rotating black holes are surrounded by a region of spacetime in which it is impossible to stand still, called the ergosphere. This is the result of a process known as frame-dragging; general relativity predicts that any rotating mass will tend to slightly "drag" along the spacetime immediately surrounding it. Any object near the rotating mass will tend to start moving in the direction of rotation. For a rotating black hole, this effect becomes so strong near the event horizon that an object would have to move faster than the speed of light in the opposite direction to just stand still.[68]
The ergosphere of a black hole is bounded by the (outer) event horizon on the inside and an oblate spheroid, which coincides with the event horizon at the poles and is noticeably wider around the equator. The outer boundary is sometimes called the ergosurface.
Objects and radiation can escape normally from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered. This energy is taken from the rotational energy of the black hole causing it to slow down.[69]
## Formation and evolution
Considering the exotic nature of black holes, it may be natural to question if such bizarre objects could exist in nature or to suggest that they are merely pathological solutions to Einstein's equations. Einstein himself wrongly thought that black holes would not form, because he held that the angular momentum of collapsing particles would stabilize their motion at some radius.[70] This led the general relativity community to dismiss all results to the contrary for many years. However, a minority of relativists continued to contend that black holes were physical objects,[71] and by the end of the 1960s, they had persuaded the majority of researchers in the field that there is no obstacle to forming an event horizon.
Once an event horizon forms, Penrose proved that a singularity will form somewhere inside it.[28] Shortly afterwards, Hawking showed that many cosmological solutions describing the Big Bang have singularities without scalar fields or other exotic matter (see Penrose-Hawking singularity theorems). The Kerr solution, the no-hair theorem and the laws of black hole thermodynamics showed that the physical properties of black holes were simple and comprehensible, making them respectable subjects for research.[72] The primary formation process for black holes is expected to be the gravitational collapse of heavy objects such as stars, but there are also more exotic processes that can lead to the production of black holes.
### Gravitational collapse
Main article: Gravitational collapse
Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. For stars this usually occurs either because a star has too little "fuel" left to maintain its temperature through stellar nucleosynthesis, or because a star that would have been stable receives extra matter in a way that does not raise its core temperature. In either case the star's temperature is no longer high enough to prevent it from collapsing under its own weight.[73] The collapse may be stopped by the degeneracy pressure of the star's constituents, condensing the matter in an exotic denser state. The result is one of the various types of compact star. The type of compact star formed depends on the mass of the remnant, that is, the matter left over after the outer layers have been blown away, such as by a supernova explosion or by pulsations leading to a planetary nebula. Note that this mass can be substantially less than that of the original star: remnants exceeding 5 solar masses are produced by stars that were over 20 solar masses before the collapse.[73]
If the mass of the remnant exceeds about 3–4 solar masses (the Tolman–Oppenheimer–Volkoff limit[15])—either because the original star was very heavy or because the remnant collected additional mass through accretion of matter—even the degeneracy pressure of neutrons is insufficient to stop the collapse. No known mechanism (except possibly quark degeneracy pressure, see quark star) is powerful enough to stop the implosion and the object will inevitably collapse to form a black hole.[73]
The gravitational collapse of heavy stars is assumed to be responsible for the formation of stellar mass black holes. Star formation in the early universe may have resulted in very massive stars, which upon their collapse would have produced black holes of up to 10^3 solar masses. These black holes could be the seeds of the supermassive black holes found in the centers of most galaxies.[74]
While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time from the reference frame of infalling matter, a distant observer sees the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the light emitted just before the event horizon forms delayed an infinite amount of time. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away.[75]
#### Primordial black holes in the Big Bang
Gravitational collapse requires great density. In the current epoch of the universe these high densities are only found in stars, but in the early universe shortly after the big bang densities were much greater, possibly allowing for the creation of black holes. The high density alone is not enough to allow the formation of black holes since a uniform mass distribution will not allow the mass to bunch up. In order for primordial black holes to form in such a dense medium, there must be initial density perturbations that can then grow under their own gravity. Different models for the early universe vary widely in their predictions of the size of these perturbations. Various models predict the creation of black holes, ranging from a Planck mass to hundreds of thousands of solar masses.[76] Primordial black holes could thus account for the creation of any type of black hole.
### High-energy collisions
A simulated event in the CMS detector, a collision in which a micro black hole may be created.
Gravitational collapse is not the only process that could create black holes. In principle, black holes could be formed in high-energy collisions that achieve sufficient density. As of 2002, no such events have been detected, either directly or indirectly as a deficiency of the mass balance in particle accelerator experiments.[77] This suggests that there must be a lower limit for the mass of black holes. Theoretically, this boundary is expected to lie around the Planck mass (m_P = √(ħc/G) ≈ 1.2×10^19 GeV/c² ≈ 2.2×10^−8 kg), where quantum effects are expected to invalidate the predictions of general relativity.[78] This would put the creation of black holes firmly out of reach of any high-energy process occurring on or near the Earth. However, certain developments in quantum gravity suggest that the Planck mass could be much lower: some braneworld scenarios for example put the boundary as low as 1 TeV/c².[79] This would make it conceivable for micro black holes to be created in the high-energy collisions occurring when cosmic rays hit the Earth's atmosphere, or possibly in the new Large Hadron Collider at CERN. Yet these theories are very speculative, and the creation of black holes in these processes is deemed unlikely by many specialists.[80] Even if micro black holes should be formed in these collisions, it is expected that they would evaporate in about 10^−25 seconds, posing no threat to the Earth.[81]
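A quick back-of-the-envelope check (mine, not the article's) shows why the Planck mass marks this boundary: the Schwarzschild radius of a Planck-mass object is of the order of the Planck length, the scale at which quantum-gravity effects are expected to dominate.

$$r_s = \frac{2Gm_P}{c^2} = 2\sqrt{\frac{\hbar G}{c^3}} = 2\,\ell_P \approx 3.2\times10^{-35}\ \mathrm{m}.$$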
### Growth
Once a black hole has formed, it can continue to grow by absorbing additional matter. Any black hole will continually absorb gas and interstellar dust from its direct surroundings and omnipresent cosmic background radiation. This is the primary process through which supermassive black holes seem to have grown.[74] A similar process has been suggested for the formation of intermediate-mass black holes in globular clusters.[82]
Another possibility is for a black hole to merge with other objects such as stars or even other black holes. This is thought to have been important especially for the early development of supermassive black holes, which could have formed from the coagulation of many smaller objects.[74] The process has also been proposed as the origin of some intermediate-mass black holes.[83][84]
### Evaporation
Main article: Hawking radiation
In 1974, Hawking showed that black holes are not entirely black but emit small amounts of thermal radiation,[31] an effect that has become known as Hawking radiation. By applying quantum field theory to a static black hole background, he determined that a black hole should emit particles in a perfect black body spectrum. Since Hawking's publication, many others have verified the result through various approaches.[85] If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time because they lose mass by the emission of photons and other particles.[31] The temperature of this thermal spectrum (Hawking temperature) is proportional to the surface gravity of the black hole, which, for a Schwarzschild black hole, is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes.[86]
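For reference, the standard expression for the Hawking temperature of a Schwarzschild black hole (not spelled out in the text above) makes this inverse dependence on mass explicit:

$$T_\mathrm{H} = \frac{\hbar c^3}{8\pi G M k_\mathrm{B}} \approx 6\times10^{-8}\ \mathrm{K}\,\left(\frac{M_\odot}{M}\right),$$

so doubling the mass halves the temperature.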
A stellar black hole of one solar mass has a Hawking temperature of about 100 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus will grow instead of shrink. To have a Hawking temperature larger than 2.7 K (and be able to evaporate), a black hole needs to have less mass than the Moon. Such a black hole would have a diameter of less than a tenth of a millimeter.[87]
If a black hole is very small the radiation effects are expected to become very strong. Even a black hole that is heavy compared to a human would evaporate in an instant. A black hole the weight of a car would have a diameter of about 10^−24 m and take a nanosecond to evaporate, during which time it would briefly have a luminosity more than 200 times that of the Sun. Lower-mass black holes are expected to evaporate even faster; for example, a black hole of mass 1 TeV/c² would take less than 10^−88 seconds to evaporate completely. For such a small black hole, quantum gravitation effects are expected to play an important role and could even—although current developments in quantum gravity do not indicate so[88]—hypothetically make such a small black hole stable.[89]
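As a rough guide (a standard order-of-magnitude estimate rather than a figure from the article), the lifetime of an evaporating Schwarzschild black hole scales as the cube of its mass:

$$t_\mathrm{evap} \approx \frac{5120\,\pi\, G^2 M^3}{\hbar c^4} \approx 8\times10^{-17}\ \mathrm{s}\,\left(\frac{M}{1\ \mathrm{kg}}\right)^3,$$

which gives the tiny lifetimes quoted above for small holes and lifetimes of order $10^{67}$ years for a solar-mass hole.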
## Observational evidence
By their very nature, black holes do not directly emit any signals other than the hypothetical Hawking radiation; since the Hawking radiation from an astrophysical black hole is predicted to be very weak, it is effectively impossible to detect astrophysical black holes directly from the Earth. A possible exception to the Hawking radiation being weak is the last stage of the evaporation of light (primordial) black holes; searches for such flashes in the past have proven unsuccessful and provide stringent limits on the possibility of existence of light primordial black holes.[90] NASA's Fermi Gamma-ray Space Telescope, launched in 2008, will continue the search for these flashes.[91]
Astrophysicists searching for black holes thus have to rely on indirect observations. A black hole's existence can sometimes be inferred by observing its gravitational interactions with its surroundings. A project run by MIT's Haystack Observatory is attempting to observe the event horizon of a black hole directly. Initial results are encouraging.[92]
### Accretion of matter
See also: Accretion disc
A computer simulation of a star being consumed by a black hole. The blue dot indicates the location of the black hole.
Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disc-like structure around the object. Friction within the disc causes angular momentum to be transported outward, allowing matter to fall further inward, releasing potential energy and increasing the temperature of the gas.[93] In the case of compact objects such as white dwarfs, neutron stars, and black holes, the gas in the inner regions becomes so hot that it will emit vast amounts of radiation (mainly X-rays), which may be detected by telescopes. This process of accretion is one of the most efficient energy-producing processes known; up to 40% of the rest mass of the accreted material can be emitted in radiation.[93] (In nuclear fusion only about 0.7% of the rest mass will be emitted as energy.) In many cases, accretion discs are accompanied by relativistic jets emitted along the poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood.
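To put the quoted efficiencies side by side (a simple illustration, not from the source), the energy released per unit of accreted rest mass is $E = \eta\, m c^2$:

$$E/m \lesssim 0.4\,c^2 \approx 3.6\times10^{16}\ \mathrm{J/kg}\ \text{for accretion}, \qquad E/m \approx 0.007\,c^2 \approx 6\times10^{14}\ \mathrm{J/kg}\ \text{for hydrogen fusion},$$

a difference of roughly a factor of fifty in favor of accretion.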
As such, many of the universe's more energetic phenomena have been attributed to the accretion of matter onto black holes. In particular, active galactic nuclei and quasars are believed to be the accretion discs of supermassive black holes.[94] Similarly, X-ray binaries are generally accepted to be binary star systems in which one of the two stars is a compact object accreting matter from its companion.[94] It has also been suggested that some ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes.[95]
### X-ray binaries
See also: X-ray binary
X-ray binaries are binary star systems that are luminous in the X-ray part of the spectrum. These X-ray emissions are generally thought to be caused by one of the component stars being a compact object accreting matter from the other (regular) star. The presence of an ordinary star in such a system provides a unique opportunity for studying the central object and determining if it might be a black hole.
If such a system emits signals that can be directly traced back to the compact object, it cannot be a black hole. The absence of such a signal does not, however, exclude the possibility that the compact object is a neutron star. By studying the companion star it is often possible to obtain the orbital parameters of the system and an estimate of the mass of the compact object. If this is much larger than the Tolman–Oppenheimer–Volkoff limit (that is, the maximum mass a neutron star can have before collapsing), then the object cannot be a neutron star and is generally expected to be a black hole.[94]
This animation compares the X-ray 'heartbeats' of GRS 1915 and IGR J17091, two black holes that ingest gas from companion stars.
The first strong candidate for a black hole, Cygnus X-1, was discovered in this way by Charles Thomas Bolton,[96] Louise Webster and Paul Murdin[97] in 1972.[98][99] Some doubt remained, however, due to the uncertainties resulting from the companion star being much heavier than the candidate black hole.[94] Currently, better candidates for black holes are found in a class of X-ray binaries called soft X-ray transients.[94] In this class of system the companion star is of relatively low mass, allowing for more accurate estimates of the black hole's mass. Moreover, these systems are active in X-rays for only several months once every 10–50 years. During the period of low X-ray emission (called quiescence), the accretion disc is extremely faint, allowing detailed observation of the companion star. One of the best such candidates is V404 Cyg.
#### Quiescence and advection-dominated accretion flow
The faintness of the accretion disc during quiescence is suspected to be caused by the flow entering a mode called an advection-dominated accretion flow (ADAF). In this mode, almost all the energy generated by friction in the disc is swept along with the flow instead of radiated away. If this model is correct, then it forms strong qualitative evidence for the presence of an event horizon,[100] because if the object at the center of the disc had a solid surface, it would emit large amounts of radiation as the highly energetic gas hit the surface, an effect that is observed for neutron stars in a similar state.[93]
#### Quasi-periodic oscillations
Main article: Quasi-periodic oscillations
The X-ray emission from accretion disks sometimes flickers at certain frequencies. These signals are called quasi-periodic oscillations and are thought to be caused by material moving along the inner edge of the accretion disk (the innermost stable circular orbit). As such their frequency is linked to the mass of the compact object. They can thus be used as an alternative way to determine the mass of potential black holes.[101]
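As an illustration of that link (assuming a non-rotating, Schwarzschild black hole; this formula is not given in the article), the orbital frequency at the innermost stable circular orbit, $r_\mathrm{ISCO}=6GM/c^2$, is

$$f_\mathrm{ISCO} = \frac{1}{2\pi}\sqrt{\frac{GM}{r_\mathrm{ISCO}^{3}}} = \frac{c^3}{2\pi\,6^{3/2}\,GM} \approx 2.2\ \mathrm{kHz}\,\left(\frac{M_\odot}{M}\right),$$

so higher-frequency oscillations point to lower-mass compact objects.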
### Galactic nuclei
See also: Active galactic nucleus
Astronomers use the term "active galaxy" to describe galaxies with unusual characteristics, such as unusual spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the activity in these active galactic nuclei (AGN) may be explained by the presence of supermassive black holes. The models of these AGN consist of a central black hole that may be millions or billions of times more massive than the Sun; a disk of gas and dust called an accretion disk; and two jets that are perpendicular to the accretion disk.[102][103]
Although supermassive black holes are expected to be found in most AGN, only some galaxies' nuclei have been more carefully studied in attempts to both identify and measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, M32, M87, NGC 3115, NGC 3377, NGC 4258, NGC 4889, NGC 1277, OJ 287, APM08279+5255 and the Sombrero Galaxy.[104]
It is now widely accepted that the center of nearly every galaxy, not just active ones, contains a supermassive black hole.[105] The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M-sigma relation, strongly suggests a connection between the formation of the black hole and the galaxy itself.[106]
Simulation of gas cloud after close approach to the black hole at the centre of the Milky Way.[107]
Currently, the best evidence for a supermassive black hole comes from studying the proper motion of stars near the center of our own Milky Way.[108] Since 1995 astronomers have tracked the motion of 90 stars in a region called Sagittarius A*. By fitting their motion to Keplerian orbits, they were able to infer in 1998 that 2.6 million solar masses must be contained in a volume with a radius of 0.02 light-years.[109] Since then one of the stars—called S2—has completed a full orbit. From the orbital data, they were able to place better constraints on the mass and size of the object causing the orbital motion of stars in the Sagittarius A* region, finding that there is a spherical mass of 4.3 million solar masses contained within a radius of less than 0.002 light-years.[108] While this is more than 3000 times the Schwarzschild radius corresponding to that mass, it is at least consistent with the central object being a supermassive black hole, since no "realistic cluster [of stars] is physically tenable".[109]
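The mass estimate is essentially an application of Kepler's third law. As an illustrative check (the round orbital numbers for S2 used here are my assumptions, not figures from the text), a semi-major axis of roughly 1000 AU and a period of roughly 16 years give

$$M \approx \frac{4\pi^2 a^3}{G P^2} = \left(\frac{a}{1\ \mathrm{AU}}\right)^{3}\left(\frac{1\ \mathrm{yr}}{P}\right)^{2} M_\odot \approx \frac{1000^3}{16^2}\,M_\odot \approx 4\times10^{6}\,M_\odot,$$

consistent with the 4.3 million solar mass figure quoted above.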
### Effects of strong gravity
Another way that the black hole nature of an object may be tested in the future is through observation of effects caused by strong gravity in their vicinity. One such effect is gravitational lensing: the deformation of spacetime around a massive object causes light rays to be deflected much like light passing through an optical lens. Observations have been made of weak gravitational lensing, in which light rays are deflected by only a few arcseconds. However, it has never been directly observed for a black hole.[110] One possibility for observing gravitational lensing by a black hole would be to observe stars in orbit around the black hole. There are several candidates for such an observation in orbit around Sagittarius A*.[110]
Another option would be the direct observation of gravitational waves produced by an object falling into a black hole, for example a compact object falling into a supermassive black hole through an extreme mass ratio inspiral. Matching the observed waveform to the predictions of general relativity would allow precision measurements of the mass and angular momentum of the central object, while at the same time testing general relativity.[111] These types of events are a primary target for the proposed Laser Interferometer Space Antenna.
### Alternatives
The evidence for stellar black holes strongly relies on the existence of an upper limit for the mass of a neutron star. The size of this limit heavily depends on the assumptions made about the properties of dense matter. New exotic phases of matter could push up this bound.[94] A phase of free quarks at high density might allow the existence of dense quark stars,[112] and some supersymmetric models predict the existence of Q stars.[113] Some extensions of the standard model posit the existence of preons as fundamental building blocks of quarks and leptons, which could hypothetically form preon stars.[114] These hypothetical models could potentially explain a number of observations of stellar black hole candidates. However, it can be shown from general arguments in general relativity that any such object will have a maximum mass.[94]
Since the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass, supermassive black holes are much less dense than stellar black holes (the average density of a 10^8 solar mass black hole is comparable to that of water).[94] Consequently, the physics of matter forming a supermassive black hole is much better understood and the possible alternative explanations for supermassive black hole observations are much more mundane. For example, a supermassive black hole could be modelled by a large cluster of very dark objects. However, such alternatives are typically not stable enough to explain the supermassive black hole candidates.[94]
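The inverse-square scaling follows directly from the Schwarzschild radius (a short derivation, not in the original text): with $r_s = 2GM/c^2$,

$$\bar\rho = \frac{M}{\tfrac{4}{3}\pi r_s^{3}} = \frac{3c^6}{32\pi G^3 M^2},$$

which falls to roughly the density of water, about $10^3\ \mathrm{kg/m^3}$, near $M \sim 10^{8} M_\odot$.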
The evidence for stellar and supermassive black holes implies that in order for black holes not to form, general relativity must fail as a theory of gravity, perhaps due to the onset of quantum mechanical corrections. A much anticipated feature of a theory of quantum gravity is that it will not feature singularities or event horizons (and thus no black holes).[115] In recent years, the fuzzball model in string theory has drawn much attention. Based on calculations in specific situations in string theory, the proposal suggests that generically the individual states of a black hole solution do not have an event horizon or singularity, but that for a classical/semi-classical observer the statistical average of such states does appear just like an ordinary black hole in general relativity.[116]
## Open questions
### Entropy and thermodynamics
Further information: Black hole thermodynamics
The Bekenstein–Hawking entropy of a black hole is S = kc³A/(4ħG), where A is the area of the black hole's event horizon and the constants are the speed of light (c), the Boltzmann constant (k), Newton's constant (G), and the reduced Planck constant (ħ).
In 1971, Hawking showed under general conditions[Note 3] that the total area of the event horizons of any collection of classical black holes can never decrease, even if they collide and merge.[117] This result, now known as the second law of black hole mechanics, is remarkably similar to the second law of thermodynamics, which states that the total entropy of a system can never decrease. As with classical objects at absolute zero temperature, it was assumed that black holes had zero entropy. If this were the case, the second law of thermodynamics would be violated by entropy-laden matter entering a black hole, resulting in a decrease of the total entropy of the universe. Therefore, Bekenstein proposed that a black hole should have an entropy, and that it should be proportional to its horizon area.[118]
The link with the laws of thermodynamics was further strengthened by Hawking's discovery that quantum field theory predicts that a black hole radiates blackbody radiation at a constant temperature. This seemingly causes a violation of the second law of black hole mechanics, since the radiation will carry away energy from the black hole causing it to shrink. The radiation, however, also carries away entropy, and it can be proven under general assumptions that the sum of the entropy of the matter surrounding a black hole and one quarter of the area of the horizon as measured in Planck units is in fact always increasing. This allows the formulation of the first law of black hole mechanics as an analogue of the first law of thermodynamics, with the mass acting as energy, the surface gravity as temperature and the area as entropy.[118]
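Written out schematically in geometrized units ($G=c=1$; this explicit form is a standard statement of the analogy rather than something quoted from the text), the first law for a rotating, charged black hole reads

$$dM = \frac{\kappa}{8\pi}\,dA + \Omega_\mathrm{H}\,dJ + \Phi_\mathrm{H}\,dQ,$$

with the surface gravity $\kappa$ playing the role of temperature and the horizon area $A$ the role of entropy, up to the constant factors fixed by Hawking's calculation.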
One puzzling feature is that the entropy of a black hole scales with its area rather than with its volume, since entropy is normally an extensive quantity that scales linearly with the volume of the system. This odd property led Gerard 't Hooft and Leonard Susskind to propose the holographic principle, which suggests that anything that happens in a volume of spacetime can be described by data on the boundary of that volume.[119]
Although general relativity can be used to perform a semi-classical calculation of black hole entropy, this situation is theoretically unsatisfying. In statistical mechanics, entropy is understood as counting the number of microscopic configurations of a system that have the same macroscopic qualities (such as mass, charge, pressure, etc.). Without a satisfactory theory of quantum gravity, one cannot perform such a computation for black holes. Some progress has been made in various approaches to quantum gravity. In 1995, Andrew Strominger and Cumrun Vafa showed that counting the microstates of a specific supersymmetric black hole in string theory reproduced the Bekenstein–Hawking entropy.[120] Since then, similar results have been reported for different black holes both in string theory and in other approaches to quantum gravity like loop quantum gravity.[121]
### Information loss paradox
Main article: Black hole information paradox
Is physical information lost in black holes?
Because a black hole has only a few internal parameters, most of the information about the matter that went into forming it is lost. It does not matter whether it was formed from television sets or chairs; in the end the black hole only remembers the total mass, charge, and angular momentum. As long as black holes were thought to persist forever, this information loss was not that problematic, as the information could be thought of as existing inside the black hole, inaccessible from the outside. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any detailed information about the stuff that formed the black hole, meaning that this information appears to be gone forever.[122]
For a long time, the question of whether information is truly lost in black holes (the black hole information paradox) has divided the theoretical physics community (see Thorne–Hawking–Preskill bet). In quantum mechanics, loss of information corresponds to the violation of a vital property called unitarity, which has to do with the conservation of probability. It has been argued that loss of unitarity would also imply violation of conservation of energy.[123] Over recent years, evidence has been building that information and unitarity are indeed preserved in a full quantum gravitational treatment of the problem.[124]
## Notes
1. The set of possible paths, or more accurately the future light cone containing all possible world lines (in this diagram represented by the yellow/blue grid), is tilted in this way in Eddington–Finkelstein coordinates (the diagram is a "cartoon" version of an Eddington–Finkelstein coordinate diagram), but in other coordinates the light cones are not tilted in this way, for example in Schwarzschild coordinates they simply narrow without tilting as one approaches the event horizon, and in Kruskal–Szekeres coordinates the light cones don't change shape or orientation at all.
2. This is true only for 4-dimensional spacetimes. In higher dimensions more complicated horizon topologies like a black ring are possible.
## References
1. Schutz, Bernard F. (2003). Gravity from the ground up. Cambridge University Press. p. 110. ISBN 0-521-45506-5.
2. Davies, P. C. W. (1978). "Thermodynamics of Black Holes". 41 (8): 1313–1355. Bibcode:1978RPPh...41.1313D. doi:10.1088/0034-4885/41/8/004.
3. Michell, J. (1784). "On the Means of Discovering the Distance, Magnitude, &c. of the Fixed Stars, in Consequence of the Diminution of the Velocity of Their Light, in Case Such a Diminution Should be Found to Take Place in any of Them, and Such Other Data Should be Procured from Observations, as Would be Farther Necessary for That Purpose". 74 (0): 35–57. Bibcode:1784RSPT...74...35M. doi:10.1098/rstl.1784.0008. JSTOR 106576.
4. Gillispie, C. C. (2000). Pierre-Simon Laplace, 1749–1827: a life in exact science. Princeton paperbacks. Princeton University Press. p. 175. ISBN 0-691-05027-9.
5. Israel, W. (1989). "Dark stars: the evolution of an idea". In Hawking, S. W.; Israel, W. 300 Years of Gravitation. Cambridge University Press. ISBN 978-0-521-37976-2.
6. ^ a b
7. Droste, J. (1917). "On the field of a single centre in Einstein's theory of gravitation, and the motion of a particle in that field". 19 (1): 197–215.
8. Kox, A. J. (1992). "General Relativity in the Netherlands: 1915–1920". In Eisenstaedt, J.; Kox, A. J. Studies in the history of general relativity. Birkhäuser. p. 41. ISBN 978-0-8176-3479-7. []
9. 't Hooft, G. (2009). Introduction to the Theory of Black Holes. Institute for Theoretical Physics / Spinoza Institute. pp. 47–48.
10. Venkataraman, G. (1992). Chandrasekhar and his limit. Universities Press. p. 89. ISBN 81-7371-035-X.
11. Detweiler, S. (1981). "Resource letter BH-1: Black holes". 49 (5): 394–400. Bibcode:1981AmJPh..49..394D. doi:10.1119/1.12686.
12. Harpaz, A. (1994). Stellar evolution. A K Peters. p. 105. ISBN 1-56881-012-1.
13. ^ a b Oppenheimer, J. R.; Volkoff, G. M. (1939). "On Massive Neutron Cores". 55 (4): 374–381. Bibcode:1939PhRv...55..374O. doi:10.1103/PhysRev.55.374.
14. Ruffini, R.; Wheeler, J. A. (1971). "Introducing the black hole". (1): 30–41.
15. Finkelstein, D. (1958). "Past-Future Asymmetry of the Gravitational Field of a Point Particle". 110 (4): 965–967. Bibcode:1958PhRv..110..965F. doi:10.1103/PhysRev.110.965.
16. Kruskal, M. (1960). "Maximal Extension of Schwarzschild Metric". 119 (5): 1743. Bibcode:1960PhRv..119.1743K. doi:10.1103/PhysRev.119.1743.
17. Hewish, A. et al. (1968), "Observation of a Rapidly Pulsating Radio Source", 217 (5130): 709–713, Bibcode:1968Natur.217..709H, doi:10.1038/217709a0
18. Pilkington, J. D. H. et al. (1968), "Observations of some further Pulsed Radio Sources", 218 (5137): 126–129, Bibcode:1968Natur.218..126P, doi:10.1038/218126a0
19. Hewish, A. (1970). "Pulsars". 8 (1): 265–296. Bibcode:1970ARA&A...8..265H. doi:10.1146/annurev.aa.08.090170.001405.
20. Newman, E. T. et al. (1965), "Metric of a Rotating, Charged Mass", 6 (6): 918, Bibcode:1965JMP.....6..918N, doi:10.1063/1.1704351
21. Israel, W. (1967). "Event Horizons in Static Vacuum Space-Times". 164 (5): 1776. Bibcode:1967PhRv..164.1776I. doi:10.1103/PhysRev.164.1776.
22. Carter, B. (1971). "Axisymmetric Black Hole Has Only Two Degrees of Freedom". 26 (6): 331. Bibcode:1971PhRvL..26..331C. doi:10.1103/PhysRevLett.26.331.
23. Carter, B. (1977). "The vacuum black hole uniqueness theorem and its conceivable generalisations". Proceedings of the 1st Marcel Grossmann meeting on general relativity. pp. 243–254.
24. Robinson, D. (1975). "Uniqueness of the Kerr Black Hole". 34 (14): 905. Bibcode:1975PhRvL..34..905R. doi:10.1103/PhysRevLett.34.905.
25. ^ a b Heusler, M. (1998). "Stationary Black Holes: Uniqueness and Beyond". Living Reviews in Relativity 1 (6). Archived from the original on 1999/02/03. Retrieved 2011-02-08.
26. ^ a b Penrose, R. (1965). "Gravitational Collapse and Space-Time Singularities". 14 (3): 57. Bibcode:1965PhRvL..14...57P. doi:10.1103/PhysRevLett.14.57.
27. Ford, L. H. (2003). "The Classical Singularity Theorems and Their Quantum Loopholes". 42 (6): 1219. doi:10.1023/A:1025754515197.
28. Bardeen, J. M.; Carter, B.; Hawking, S. W. (1973). "The four laws of black hole mechanics". 31 (2): 161–170. Bibcode:1973CMaPh..31..161B. doi:10.1007/BF01645742. MR MR0334798. Zbl 1125.83309.
29. ^ a b c Hawking, S. W. (1974). "Black hole explosions?". Nature 248 (5443): 30–31. Bibcode:1974Natur.248...30H. doi:10.1038/248030a0.
30. Quinion, M. (26 April 2008). "Black Hole". . Retrieved 2008-06-17.
31. Thorne, K. S.; Price, R. H. (1986). Black holes: the membrane paradigm. Yale University Press. ISBN 978-0-300-03770-8.
32. Anderson, Warren G. (1996). "The Black Hole Information Loss Problem". Usenet Physics FAQ. Retrieved 2009-03-24.
33. Preskill, J. (1994-10-21). "Black holes and information: A crisis in quantum physics". Caltech Theory Seminar.
34. Seeds, Michael A.; Backman, Dana E. (2007), Perspectives on Astronomy, Cengage Learning, p. 167, ISBN 0-495-11352-2
35. Shapiro, S. L.; Teukolsky, S. A. (1983). Black holes, white dwarfs, and neutron stars: the physics of compact objects. John Wiley and Sons. p. 357. ISBN 0-471-87316-0.
36. Wald, R. M. (1997). "Gravitational Collapse and Cosmic Censorship". arXiv:gr-qc/9710068 [gr-qc].
37. Berger, B. K. (2002). "Numerical Approaches to Spacetime Singularities". Living Reviews in Relativity 5. Retrieved 2007-08-04.
38. McClintock, J. E.; Shafee, R.; Narayan, R.; Remillard, R. A.; Davis, S. W.; Li, L.-X. (2006). "The Spin of the Near-Extreme Kerr Black Hole GRS 1915+105". Astrophysical Journal 652 (1): 518–539. arXiv:astro-ph/0606076. Bibcode:2006ApJ...652..518M. doi:10.1086/508457.
39. "Inside a black hole". Knowing the universe and its secrets. Retrieved 2009-03-26.
40. Emparan, R.; Reall, H. S. (2008). "Black Holes in Higher Dimensions". Living Reviews in Relativity 11 (6). arXiv:0801.3471. Bibcode:2008LRR....11....6E. Retrieved 2011-02-10.
41. Obers, N. A. (2009). "Black Holes in Higher-Dimensional Gravity". In Papantonopoulos, Eleftherios. Lecture Notes in Physics 769: 211–258. arXiv:0802.0519. doi:10.1007/978-3-540-88460-6.
42. Lewis, G. F.; Kwan, J. (2007). "No Way Back: Maximizing Survival Time Below the Schwarzschild Event Horizon". Publications of the Astronomical Society of Australia 24 (2): 46–52. arXiv:0705.1029. Bibcode:2007PASA...24...46L. doi:10.1071/AS07012.
43. Droz, S.; Israel, W.; Morsink, S. M. (1996). "Black holes: the inside story". Physics World 9 (1): 34–37. Bibcode:1996PhyW....9...34D.
44. Poisson, E.; Israel, W. (1990). "Internal structure of black holes". Physical Review D 41 (6): 1796. Bibcode:1990PhRvD..41.1796P. doi:10.1103/PhysRevD.41.1796.
45. Hamade, R. (1996). "Black Holes and Quantum Gravity". Cambridge Relativity and Cosmology. University of Cambridge. Retrieved 2009-03-26.
46. Palmer, D. "Ask an Astrophysicist: Quantum Gravity and Black Holes". NASA. Retrieved 2009-03-26.
47. ^ a b Nitta, Daisuke; Chiba, Takeshi; Sugiyama, Naoshi (September 2011), "Shadows of colliding black holes", Physical Review D 84 (6), arXiv:1106.242, Bibcode:2011PhRvD..84f3008N, doi:10.1103/PhysRevD.84.063008
48. Nemiroff, R. J. (1993). "Visual distortions near a neutron star and black hole". American Journal of Physics 61 (7): 619. arXiv:astro-ph/9312003. Bibcode:1993AmJPh..61..619N. doi:10.1119/1.17224.
49. Einstein, A. (1939). "On A Stationary System With Spherical Symmetry Consisting of Many Gravitating Masses". Annals of Mathematics 40 (4): 922–936. doi:10.2307/1968902.
50. Kerr, R. P. (2009). "The Kerr and Kerr-Schild metrics". In Wiltshire, D. L.; Visser, M.; Scott, S. M. The Kerr Spacetime. Cambridge University Press. arXiv:0706.1109. ISBN 978-0-521-88512-6.
51. Hawking, S. W.; Penrose, R. (January 1970). "The Singularities of Gravitational Collapse and Cosmology". 314 (1519): 529–548. Bibcode:1970RSPSA.314..529H. doi:10.1098/rspa.1970.0021. JSTOR 2416467.
52. ^ a b c
53. ^ a b c Rees, M. J.; Volonteri, M. (2007). "Massive black holes: formation and evolution". In Karas, V.; Matt, G. Black Holes from Stars to Galaxies—Across the Range of Masses. Cambridge University Press. pp. 51–58. arXiv:astro-ph/0701512. ISBN 978-0-521-86347-6.
54. Penrose, R. (2002). "Gravitational Collapse: The Role of General Relativity". General Relativity and Gravitation 34 (7): 1141. Bibcode:2002GReGr..34.1141P. doi:10.1023/A:1016578408204.
55. Carr, B. J. (2005). "Primordial Black Holes: Do They Exist and Are They Useful?". In Suzuki, H.; Yokoyama, J.; Suto, Y. et al. Inflating Horizon of Particle Astrophysics and Cosmology. Universal Academy Press. arXiv:astro-ph/0511743. ISBN 4-946443-94-0.
56. Giddings, S. B.; Thomas, S. (2002). "High energy colliders as black hole factories: The end of short distance physics". Physical Review D 65 (5): 056010. arXiv:hep-ph/0106219. Bibcode:2002PhRvD..65e6010G. doi:10.1103/PhysRevD.65.056010.
57. Harada, T. (2006). "Is there a black hole minimum mass?". Physical Review D 74 (8): 084004. arXiv:gr-qc/0609055. Bibcode:2006PhRvD..74h4004H. doi:10.1103/PhysRevD.74.084004.
58. Arkani–Hamed, N.; Dimopoulos, S.; Dvali, G. (1998). "The hierarchy problem and new dimensions at a millimeter". Physics Letters B 429 (3–4): 263. arXiv:hep-ph/9803315. Bibcode:1998PhLB..429..263A. doi:10.1016/S0370-2693(98)00466-3.
59.
60. Cavaglià, M. (2010). "Particle accelerators as black hole factories?". Einstein-Online (Max Planck Institute for Gravitational Physics (Albert Einstein Institute)) 4: 1010.
61. Vesperini, E.; McMillan, S. L. W.; D'Ercole, A. et al. (2010). "Intermediate-Mass Black Holes in Early Globular Clusters". The Astrophysical Journal Letters 713 (1): L41–L44. arXiv:1003.3470. Bibcode:2010ApJ...713L..41V. doi:10.1088/2041-8205/713/1/L41.
62. Zwart, S. F. P.; Baumgardt, H.; Hut, P. et al. (2004). "Formation of massive black holes through runaway collisions in dense young star clusters". Nature 428 (6984): 724. arXiv:astro-ph/0402622. Bibcode:2004Natur.428..724P. doi:10.1038/nature02448. PMID 15085124.
63. O'Leary, R. M.; Rasio, F. A.; Fregeau, J. M. et al. (2006). "Binary Mergers and Growth of Black Holes in Dense Star Clusters". The Astrophysical Journal 637 (2): 937. arXiv:astro-ph/0508224. Bibcode:2006ApJ...637..937O. doi:10.1086/498446.
64. Page, D. N. (2005). "Hawking radiation and black hole thermodynamics". New Journal of Physics 7: 203. arXiv:hep-th/0409024. Bibcode:2005NJPh....7..203P. doi:10.1088/1367-2630/7/1/203.
65. "Evaporating black holes?". Einstein online. Max Planck Institute for Gravitational Physics. 2010. Retrieved 2010-12-12.
66. Giddings, S. B.; Mangano, M. L. (2008). "Astrophysical implications of hypothetical stable TeV-scale black holes". Physical Review D 78 (3): 035009. arXiv:0806.3381. Bibcode:2008PhRvD..78c5009G. doi:10.1103/PhysRevD.78.035009.
67. Peskin, M. E. (2008). "The end of the world at the Large Hadron Collider?". Physics 1: 14. Bibcode:2008PhyOJ...1...14P. doi:10.1103/Physics.1.14.
68. Fichtel, C. E.; Bertsch, D. L.; Dingus, B. L. et al. (1994). "Search of the energetic gamma-ray experiment telescope (EGRET) data for high-energy gamma-ray microsecond bursts". Astrophysical Journal 434 (2): 557–559. Bibcode:1994ApJ...434..557F. doi:10.1086/174758.
69. Naeye, R. "Testing Fundamental Physics". NASA. Retrieved 2008-09-16.
70. "Event Horizon Telescope". MIT Haystack Observatory. Retrieved 6 April 2012.
71. ^ a b c McClintock, J. E.; Remillard, R. A. (2006). "Black Hole Binaries". In Lewin, W.; van der Klis, M. Compact Stellar X-ray Sources. Cambridge University Press. arXiv:astro-ph/0306213. ISBN 0-521-82659-4. section 4.1.5.
72. Celotti, A.; Miller, J. C.; Sciama, D. W. (1999). "Astrophysical evidence for the existence of black holes". Classical and Quantum Gravity 16 (12A): A3–A21. arXiv:astro-ph/9912186. doi:10.1088/0264-9381/16/12A/301.
73. Winter, L. M.; Mushotzky, R. F.; Reynolds, C. S. (2006). "XMM‐Newton Archival Study of the Ultraluminous X‐Ray Population in Nearby Galaxies". The Astrophysical Journal 649 (2): 730. arXiv:astro-ph/0512480. Bibcode:2006ApJ...649..730W. doi:10.1086/506579.
74. Bolton, C. T. (1972). "Identification of Cygnus X-1 with HDE 226868". Nature 235 (5336): 271–273. Bibcode:1972Natur.235..271B. doi:10.1038/235271b0.
75. Webster, B. L.; Murdin, P. (1972). "Cygnus X-1—a Spectroscopic Binary with a Heavy Companion ?". Nature 235 (5332): 37–38. Bibcode:1972Natur.235...37W. doi:10.1038/235037a0.
76. Rolston, B. (10 November 1997). "The First Black Hole". The bulletin. University of Toronto. Archived from the original on 2008-05-02. Retrieved 2008-03-11.
77. Shipman, H. L. (1 January 1975). "The implausible history of triple star models for Cygnus X-1 Evidence for a black hole". Astrophysical Letters 16 (1): 9–12. Bibcode:1975ApL....16....9S. doi:10.1016/S0304-8853(99)00384-4.
78. Narayan, R.; McClintock, J. (2008). "Advection-dominated accretion and the black hole event horizon". New Astronomy Reviews 51 (10–12): 733. arXiv:0803.0322. Bibcode:2008NewAR..51..733N. doi:10.1016/j.newar.2008.03.002.
79. "NASA scientists identify smallest known black hole" (Press release). Goddard Space Flight Center. 2008-04-01. Retrieved 2009-03-14.
80. Krolik, J. H. (1999). Active Galactic Nuclei. Princeton University Press. Ch. 1.2. ISBN 0-691-01151-6.
81. Sparke, L. S.; Gallagher, J. S. (2000). Galaxies in the Universe: An Introduction. Cambridge University Press. Ch. 9.1. ISBN 0-521-59740-4.
82. Kormendy, J.; Richstone, D. (1995). "Inward Bound—The Search For Supermassive Black Holes In Galactic Nuclei". Annual Reviews of Astronomy and Astrophysics 33 (1): 581–624. Bibcode:1995ARA&A..33..581K. doi:10.1146/annurev.aa.33.090195.003053.
83. King, A. (2003). "Black Holes, Galaxy Formation, and the MBH-σ Relation". The Astrophysical Journal Letters 596 (1): 27–29. arXiv:astro-ph/0308342. Bibcode:2003ApJ...596L..27K. doi:10.1086/379143.
84. Ferrarese, L.; Merritt, D. (2000). "A Fundamental Relation Between Supermassive Black Holes and their Host Galaxies". The Astrophysical Journal Letters 539 (1): 9–12. arXiv:astro-ph/0006053. Bibcode:2000ApJ...539L...9F. doi:10.1086/312838.
85. "A Black Hole's Dinner is Fast Approaching". ESO Press Release. Retrieved 6 February 2012.
86. ^ a b Gillessen, S.; Eisenhauer, F.; Trippe, S. et al. (2009). "Monitoring Stellar Orbits around the Massive Black Hole in the Galactic Center". The Astrophysical Journal 692 (2): 1075. arXiv:0810.4674. Bibcode:2009ApJ...692.1075G. doi:10.1088/0004-637X/692/2/1075.
87. ^ a b Ghez, A. M.; Klein, B. L.; Morris, M. et al. (1998). "High Proper‐Motion Stars in the Vicinity of Sagittarius A*: Evidence for a Supermassive Black Hole at the Center of Our Galaxy". The Astrophysical Journal 509 (2): 678. arXiv:astro-ph/9807210. Bibcode:1998ApJ...509..678G. doi:10.1086/306528.
88. ^ a b Bozza, V. (2010). "Gravitational Lensing by Black Holes". General Relativity and Gravitation (42): 2269–2300. arXiv:0911.2187. Bibcode:2010GReGr..42.2269B. doi:10.1007/s10714-010-0988-2.
89. Barack, L.; Cutler, C. (2004). "LISA capture sources: Approximate waveforms, signal-to-noise ratios, and parameter estimation accuracy". Physical Review D (69): 082005. arXiv:gr-qc/0310125. Bibcode:2004PhRvD..69h2005B. doi:10.1103/PhysRevD.69.082005.
90. Kovacs, Z.; Cheng, K. S.; Harko, T. (2009). "Can stellar mass black holes be quark stars?". Monthly Notices of the Royal Astronomical Society 400 (3): 1632–1642. arXiv:0908.2672. Bibcode:2009MNRAS.400.1632K. doi:10.1111/j.1365-2966.2009.15571.x.
91. Kusenko, A. (2006). "Properties and signatures of supersymmetric Q-balls". arXiv:hep-ph/0612159 [hep-ph].
92. Hansson, J.; Sandin, F. (2005). "Preon stars: a new class of cosmic compact objects". Physics Letters B 616 (1–2): 1. arXiv:astro-ph/0410417. Bibcode:2005PhLB..616....1H. doi:10.1016/j.physletb.2005.04.034.
93. Kiefer, C. (2006). "Quantum gravity: general introduction and recent developments". Annalen der Physik 15 (1–2): 129. arXiv:gr-qc/0508120. Bibcode:2006AnP...518..129K. doi:10.1002/andp.200510175.
94. Skenderis, K.; Taylor, M. (2008). "The fuzzball proposal for black holes". Physics Reports 467 (4–5): 117. arXiv:0804.0552. Bibcode:2008PhR...467..117S. doi:10.1016/j.physrep.2008.08.001.
95. Hawking, S. W. (1971). "Gravitational Radiation from Colliding Black Holes". Physical Review Letters 26 (21): 1344–1346. Bibcode:1971PhRvL..26.1344H. doi:10.1103/PhysRevLett.26.1344.
96. ^ a b Wald, R. M. (2001). "The Thermodynamics of Black Holes". Living Reviews in Relativity 4 (6). arXiv:gr-qc/9912119. Bibcode:1999gr.qc....12119W. Retrieved 2011-02-10.
97. 't Hooft, G. (2001). "The Holographic Principle". In Zichichi, A. Basics and highlights in fundamental physics. Subnuclear series 37. World Scientific. arXiv:hep-th/0003004. ISBN 978-981-02-4536-8.
98. Strominger, A.; Vafa, C. (1996). "Microscopic origin of the Bekenstein-Hawking entropy". Physics Letters B 379 (1–4): 99. arXiv:hep-th/9601029. Bibcode:1996PhLB..379...99S. doi:10.1016/0370-2693(96)00345-0.
99. Carlip, S. (2009). "Black Hole Thermodynamics and Statistical Mechanics". Lecture Notes in Physics 769: 89. arXiv:0807.4520. doi:10.1007/978-3-540-88460-6_3.
100. Hawking, S. W. "Does God Play Dice?". www.hawking.org.uk. Retrieved 2009-03-14.
101. Giddings, S. B. (1995). "The black hole information paradox". Particles, Strings and Cosmology. Johns Hopkins Workshop on Current Problems in Particle Theory 19 and the PASCOS Interdisciplinary Symposium 5. arXiv:hep-th/9508151.
102. Mathur, S. D. (2011). "The information paradox: conflicts and resolutions". XXV International Symposium on Lepton Photon Interactions at High Energies. arXiv:1201.2079.
## Further reading
Popular reading
• Ferguson, Kitty (1991). Black Holes in Space-Time. Watts Franklin. ISBN 0-531-12524-6.
• Hawking, Stephen (1988). A Brief History of Time. Bantam Books, Inc. ISBN 0-553-38016-8.
• Hawking, Stephen; Penrose, Roger (1996). The Nature of Space and Time. Princeton University Press. ISBN 0-691-03791-4.
• Melia, Fulvio (2003). The Black Hole at the Center of Our Galaxy. Princeton U Press. ISBN 978-0-691-09505-9.
• Melia, Fulvio (2003). The Edge of Infinity. Supermassive Black Holes in the Universe. Cambridge U Press. ISBN 978-0-521-81405-8.
• Pickover, Clifford (1998). Black Holes: A Traveler's Guide. Wiley, John & Sons, Inc. ISBN 0-471-19704-1.
• Thorne, Kip S. (1994). Black Holes and Time Warps: Einstein's Outrageous Legacy. Norton, W. W. & Company, Inc. ISBN 0-393-31276-3.
• Wheeler, J. Craig (2007). Cosmic Catastrophes (2nd ed.). Cambridge University Press. ISBN 0-521-85714-7.
University textbooks and monographs
• Carroll, Sean M. (2004). Spacetime and Geometry. Addison Wesley. ISBN 0-8053-8732-3. The lecture notes on which the book was based are available for free from Sean Carroll's website.
• Carter, B. (1973). "Black hole equilibrium states". In DeWitt, B. S.; DeWitt, C. Black Holes.
• Chandrasekhar, Subrahmanyan (1999). Mathematical Theory of Black Holes. Oxford University Press. ISBN 0-19-850370-9.
• Frolov, V. P.; Novikov, I. D. (1998). Black hole physics.
• Frolov, Valeri P.; Zelnikov, Andrei (2011). Introduction to Black Hole Physics. Oxford: Oxford University Press. ISBN 978-0-19-969229-3. Zbl 1234.83001.
• Hawking, S. W.; Ellis, G. F. R. (1973). The Large Scale Structure of Space-Time. Cambridge University Press. ISBN 0-521-09906-4.
• Melia, Fulvio (2007). The Galactic Supermassive Black Hole. Princeton U Press. ISBN 978-0-691-13129-0.
• Taylor, Edwin F.; Wheeler, John Archibald (2000). Exploring Black Holes. Addison Wesley Longman. ISBN 0-201-38423-X.
• Thorne, Kip S.; Misner, Charles; Wheeler, John (1973). Gravitation. W. H. Freeman and Company. ISBN 0-7167-0344-0.
• Wald, Robert M. (1984). General Relativity. University of Chicago Press. ISBN 978-0-226-87033-5.
• Wald, Robert M. (1992). Space, Time, and Gravity: The Theory of the Big Bang and Black Holes. University of Chicago Press. ISBN 0-226-87029-4.
Review papers
• Gallo, Elena; Marolf, Donald (2009). "Resource Letter BH-2: Black Holes". American Journal of Physics 77 (4): 294. arXiv:0806.2316. Bibcode:2009AmJPh..77..294G. doi:10.1119/1.3056569.
• Hughes, Scott A. (2005). "Trust but verify: The case for astrophysical black holes". arXiv:hep-ph/0511217 [hep-ph]. Lecture notes from 2005 SLAC Summer Institute.
http://mathematica.stackexchange.com/questions/3311/variable-sized-lists-and-using-lists-as-variables?answertab=votes
# variable sized lists and using lists as variables
I am trying to scan a parameter space of varying numbers of parameters subject to some constraints (I am interested in any number of constraints just out of curiosity, but in reality no more than 2 constraints in the actual application). So it would look something like:
```` TestScan[c1,...,cm,{list1,....listn}]
````
Where the ci are constraints and the list_i's take the form
```` {x_i,x_i_initial, x_i_final, x_i_increment}
````
that will be fed to a Do loop.
The problem is that I want to change the actual value of the x_i so that the c_j's (which are a function of x_i) know that x_i has changed value.
For example I might have a constraint like
```` x^2+y^2+z^2=1,
````
I want to scan over values of y and z, solve for the corresponding values of x, then store all the possible solutions in a list. Putting the variables I want to scan over in an array seems easier, since then I can have Mathematica see the size of the array and run the proper number of loops.
Obviously if I just put the variables in a function like
```` TestScan[c1,...,cm,x_i,x_i_initial, x_i_final, x_i_increment,....]
````
and then I would call it like
```` TestScan[c[x1,x2],x1,x1initial, x1final, x1increment,x2,x2initial, x2final, x2increment]
````
it works since now Mathematica knows that the argument of c is the same thing as the x1 that appears elsewhere, but I would have to change the code every time I change the number of variables in the constraints. My naive, straightforward generalization of the above doesn't work when I try to pass all
```` {x_i,x_i_initial, x_i_final, x_i_increment}
````
in a list. In this case Mathematica might replace the first element in the list with something else, but I need to be able to change the value of x_i itself.
Sorry, but I don't understand how you plan to link the c's with the corresponding x_i's if you don't declare them explicitly – belisarius Mar 21 '12 at 15:23
If I call it like TestScan[c[x1,x2],{x1,x1initial, x1final, x1increment}] they should be linked no? – DJBunk Mar 21 '12 at 15:29
It's usually better for questions like these to focus on explaining what the problem is, and not focusing on your idea how to solve it. The latter might help, the first will leave everyone the freedom of unbiased thinking about how to solve things. For example, your suggestion of a `Do` loop is often considered bad practice in Mathematica. – David Mar 21 '12 at 15:32
## 1 Answer
I would start by creating a function that returns a value based on your equation, i.e. the solution of your example $x^2+y^2+z^2=1$:
````findSolutions[y_, z_] := Module[{x},
x /. Solve[x^2 + y^2 + z^2 == 1, x]
]
````
The output looks like this:
````findSolutions[1, 2]
````
````{-2 I, 2 I}
````
Next, set up the table of $(y,z)$ values you want to feed that function, for example by generating random numbers or by using some formula:
````sampleData = Table[{y, z}, {y, -2, 2, 1/2}, {z, -1, 1, 1/3}];
sampleData = Flatten[sampleData, 1]
````
````{{-2, -1}, {-2, -2/3}, {-2, -1/3}, ...}
````
The second line is necessary since `Table` creates an additional nested layer for each variable it cycles through, so that the data initially looks like `{{{ ... }}}`. The `Flatten` gets rid of this (here) unnecessary layer.
Alright, let's apply our function to the data,
````findSolutions @@@ sampleData
````
````{{-2 I, 2 I}, {-((I Sqrt[31])/3), (I Sqrt[31])/3}, ...}
````
`findSolutions @@@ sampleData` applies `findSolutions` to every sublist of `sampleData`; the result is a list of all results of the function based on the data provided. You can now do additional stuff with that, for example use `Union` (which will sort the list as well) or `DeleteDuplicates` (which won't) to get rid of double entries; you may also want to flatten the result, since `findSolutions` returns tuples of all possible solutions for a given $(y,z)$, etc. For example `// Flatten // Union` yields
````{-1, 0, -I/3, I/3, ...}
````
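Not part of the original answer, but here is a sketch of how the same idea extends to the variable-length argument list the question asks about. All names are made up for illustration; it assumes a single constraint equation in one unknown, with each remaining argument of the form `{var, min, max, step}`, and that the scan variables have no global values assigned.

````(* Sketch only: eq is one equation in the unknown solveVar;
   each spec is {var, min, max, step}; names are hypothetical. *)
scanConstraint[eq_, solveVar_, specs : ({_, _, _, _} ..)] :=
 Module[{vars, grid},
  vars = First /@ {specs};                      (* the scan variables *)
  grid = Tuples[Range @@@ (Rest /@ {specs})];   (* every combination of scan values *)
  Flatten[
   Table[
    solveVar /. Solve[eq /. Thread[vars -> point], solveVar],
    {point, grid}],
   1]]

(* Example call, mirroring the constraint above: *)
scanConstraint[x^2 + y^2 + z^2 == 1, x, {y, -1, 1, 1/2}, {z, -1, 1, 1/2}]
````

Whether this is preferable to mapping `findSolutions` over a pre-built table is largely a matter of taste; the `Tuples`/`Solve` route just avoids hard-coding the number of scan variables.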
Hi Dave, Thanks for all your time - this definitely helps a lot and puts me on a good track. In working through this, I had another question, which I posted separately. Thanks again. – DJBunk Mar 22 '12 at 13:27
http://unapologetic.wordpress.com/2007/02/27/direct-products-of-groups/?like=1&_wpnonce=81a70c627a
# The Unapologetic Mathematician
## Direct Products of Groups
There are two sorts of products on groups that I’d like to discuss. Today I’ll talk about direct products.
The direct product says that we can take two groups, form the Cartesian product of their sets, and put the structure of a group on that. Given groups $G$ and $H$ we form the group $G\times H$ as the set of pairs $(g,h)$ with $g$ in $G$ and $h$ in $H$. We compose them term-by-term: $(g_1,h_1)(g_2,h_2)=(g_1g_2,h_1h_2)$. It can be verified that this gives us a group.
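A tiny concrete example (mine, not part of the post) may help fix the idea: take $G=\mathbb{Z}_2$ and $H=\mathbb{Z}_3$, both written additively. Then composition is componentwise, for instance

$$(1,2)+(1,2) = (1+1 \bmod 2,\; 2+2 \bmod 3) = (0,1),$$

and since the element $(1,1)$ has order $\mathrm{lcm}(2,3)=6$, the direct product $\mathbb{Z}_2\times\mathbb{Z}_3$ is cyclic of order six, i.e. isomorphic to $\mathbb{Z}_6$.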
There’s a very interesting property about this group. It comes equipped with two homomorphisms, $\pi_G$ and $\pi_H$, the “projections” of $G\times H$ onto $G$ and $H$, respectively. As one might expect, $\pi_G(g,h)=g$, and similarly for $\pi_H$. Even better, let’s consider any other group $X$ with homomorphisms $f_G:X\rightarrow G$ and $f_H:X\rightarrow H$. There is a unique homomorphism $f_G\times f_H:X\rightarrow G\times H$ — defined by $f_G\times f_H(x)=(f_G(x),f_H(x))$ — so that $\pi_G(f_G\times f_H(x))=f_G(x)$ and $\pi_H(f_G\times f_H(x))=f_H(x)$. Here’s the picture.
The vertical arrow from $X$ to $G\times H$ is $f_G\times f_H$, and I assert that that’s the only homomorphism from $X$ to $G\times H$ so that both paths from $X$ to $G$ are the same, as are both paths from $X$ to $H$. When we draw a diagram like this with groups on the points and homomorphisms for arrows, we say that the diagram “commutes” if any two paths joining the same point give the same homomorphism between those two groups.
To restate it again, $G\times H$ has homomorphisms to $G$ and $H$, and any other group $X$ with a pair of homomorphisms to $G$ and $H$ has a unique homomorphism from $X$ to $G\times H$ so that the above diagram commutes. This uniqueness means that a group having this property is unique up to isomorphism.
Let’s say two groups $P_1$ and $P_2$ have this product property. That is, each has given homomorphisms to $G$ and $H$, and given any other group with a pair of homomorphisms there is a unique homomorphism to $P_1$ and one to $P_2$ that make the diagrams commute (with $P_1$ or $P_2$ in the place of $G\times H$). Then from the $P_1$ diagram with $P_2$ in place of $X$ we get a unique homomorphism $f_1:P_2\rightarrow P_1$. On the other hand, from the $P_2$ diagram with $P_1$ in place of $X$, we get a unique homomorphism $f_2:P_1\rightarrow P_2$. Putting these two together we get homomorphisms $f_1f_2:P_2\rightarrow P_2$ and $f_2f_1:P_1\rightarrow P_1$.
Now if we think of the diagram for $P_1$ with $P_1$ itself in place of $X$, we see that there’s a unique homomorphism from $P_1$ to itself making the diagram commute. We just made one called $f_2f_1$, but the identity homomorphism on $P_1$ also works, so they must be the same! Similarly, $f_1f_2$ must be the identity on $P_2$, so $f_1$ and $f_2$ are inverses of each other, and $P_1$ and $P_2$ are isomorphic!
So let’s look back at this whole thing again. I take two groups $G$ and $H$, and I want a new group $G\times H$ that has homomorphisms to $G$ and $H$ and so any other such group with two homomorphisms has a unique homomorphism to $G\times H$. Any two groups satisfying this property are isomorphic, so if we can find any group satisfying this property we know that any other one will be essentially the same. The group structure we define on the Cartesian product of the sets $G$ and $H$ satisfies just such a property, so we call it the direct product of the two groups.
This method of defining things is called a “universal property”. The argument I gave to show that the product is essentially unique works for any such definition, so things defined to satisfy universal properties are unique (up to isomorphism) if they actually exist at all. This is a viewpoint on group theory that often gets left out of basic treatments of the subject, but one that I feel gets right to the heart of why the theory behaves the way it does. We’ll definitely be seeing more of it.
## 13 Comments »
1. [...] course, by the exact same sort of argument I gave when discussing direct products of groups, once we have a universal property any two things satisfying that property are isomorphic. This is [...]
Pingback by | March 5, 2007 | Reply
2. [...] Exact Sequences and Semidirect Products The direct product of two groups provides a special sort of short exact sequence. We know that there is a surjection , [...]
Pingback by | March 8, 2007 | Reply
3. [...] sums of Abelian groups Let’s go back to direct products and free products of groups and consider them just in the context of abelian [...]
Pingback by | April 12, 2007 | Reply
4. [...] First Isomorphism theorems, for example. I’ll also show how, in the language of categories, direct products of groups are like greatest lower [...]
Pingback by | May 20, 2007 | Reply
5. [...] categories and we define the product category like we did the direct product of groups and other such algebraic gadgets. We need a category with “projection functors” and [...]
Pingback by | June 1, 2007 | Reply
6. I wonder if universal properties could be made easier for beginners to grasp if more of the structure of the concept was displayed directly in the language. For example, an attempted definition of direct product:
$H$ has the direct product property ‘flabbily’ for $F, G$ via functions $f, g$ iff $f$ is a homomorphism from $H$ to $F$ and $g$ is a homomorphism from $H$ to $G$.
$H$ has the direct product property universally for $F, G$ via functions $f, g$ if it does so flabbily, and for any other $H', f', g'$ which also does so flabbily, there is a unique homomorphism $h:H'\rightarrow H$ such that $f'=fh$ and $g'=gh$.
$H, f, g$ are a direct product of $F, G$ if $H$ has the direct product property universally for $F, G$ via $f, g$.
Grammatically, the first part (the ‘flabby’ version of the property) is supposed to be a predicate with four arguments, corresponding to various bits of the cone construction: the apex, which is involved in the unique arrow, the ingredients (objects and arrows) of the base, the property that characterizes how these ingredients and the `via’ arrows are supposed to be related, and the via-arrows that connect the apex to the ingredient objects.
Then a lot of ‘verb-phrase anaphora’ is used in defining the strict/universal part, in order to grammatically display the fact that the concepts from the flabby part are being re-used.
Maybe this is too cumbersome to be useful to beginners, but, speaking for myself, I was never able to understand the usual verbiage concerning universal properties until I sort of understood how cones/limits worked. So the idea here is to express the structure of the cone (and cocone) ideas more directly than usual in the language, without dragging in as much abstraction.
Comment by MathOutsider | October 9, 2007 | Reply
7. Well, MO, that’s an idea that actually shows up in some circles. Witness the term “weak Natural Numbers Object” in topos theory. Unfortunately, “weak” already tends to mean something else in other areas of category theory…
Comment by | October 9, 2007 | Reply
8. That’s why I chose ‘flabbily’ – doesn’t sound very nice but everything else I could think of that sounded better already had some other meanings that even I have encountered, although not necessarily understood.
Comment by MathOutsider | October 10, 2007 | Reply
9. Maybe smoother reformulation of 6, which is supposed to follow the standard formulation a bit more closely:
$C, f, g$ is a flabby product of $A, B$ iff $f:C\rightarrow A, g:C\rightarrow B$.
$C, f, g$ is a (real/universal) product of $A, B$ if it is a flabby one such that for any (possibly other) flabby one $C', f', g'$ there is a unique $h:C'\rightarrow C$ s.t. $f'=fh, g'=gh$.
‘wannabe’ would perhaps be an alternative to ‘flabby’. Well only the actual beginners can judge whether something like this is helpful, assuming it isn’t actively misleading.
Comment by MathOutsider | October 10, 2007 | Reply
10. The (co-)universal property behind commutator subgroups in the above tediously explicit format:
$A, f$ is a wannabe ‘abelianator’ of group $G$ if $A$ is an abelian group and $A$ is the image of $G$ under the homomorphism $f$.
$A, f$ is a co-universal abelianator of group $G$ if it is a wannabe, and, for any other such wannabe $A',f'$, there is a unique group homomorphism $h:A\rightarrow A'$ such that $f'=hf$.
Putting it in this form caused me to see that the relevant property was co-universal rather than just universal, tho presumably people with more talent or experience would see this immediately.
Reformulating in terms of the subgroup embeddings rather than homomorphisms between quotients, things get rearranged a bit:
$C,\iota$ is a wannabe-commutator of $G$ if $\iota$ is an injection of $C$ into $G$ and $G/C$ is commutative.
$C, \iota$ is a (co?)universal commutator of $G$ if it is a wannabe, and for any other such wannabe $C',\iota'$, there is a unique $h:C\rightarrow C'$ such that $\iota=\iota' h$.
Here the general format of a (co-)cone isn't being followed, since the universal object's arrows are factoring through the wannabe's rather than vice-versa. I don't know what the terminology for this is.
Comment by MathOutsider | October 12, 2007 | Reply
11. Well, no. In that case it’s not really a cone. It’s a universal object in some category — one which you described correctly — but not a category of cones.
Limits and colimits are terminal and initial objects in categories of cones and cocones, but not all universal properties come from these sorts of categories.
Comment by | October 12, 2007 | Reply
12. [...] elements altogether and draw this diagram: What does this mean? Well, it’s like the diagram I drew for products of groups. The product of and is a set with functions and so that for any other set with functions to [...]
Pingback by | December 5, 2007 | Reply
13. [...] Product Groups An important construction for groups is their direct product. Given two groups and we take the cartesian product of their underlying sets and put a group [...]
Pingback by | November 1, 2010 | Reply
http://nrich.maths.org/1173
### F'arc'tion
At the corner of the cube circular arcs are drawn and the area enclosed shaded. What fraction of the surface area of the cube is shaded? Try working out the answer without recourse to pencil and paper.
### Do Unto Caesar
At the beginning of the night three poker players; Alan, Bernie and Craig had money in the ratios 7 : 6 : 5. At the end of the night the ratio was 6 : 5 : 4. One of them won \$1 200. What were the assets of the players at the beginning of the evening?
### Plutarch's Boxes
According to Plutarch, the Greeks found all the rectangles with integer sides, whose areas are equal to their perimeters. Can you find them? What rectangular boxes, with integer sides, have their surface areas equal to their volumes?
# Egyptian Fractions
##### Stage: 3 Challenge Level:
The ancient Egyptians didn't write fractions with a numerator greater than 1 - they wouldn't, for example, write $\frac{2}{7}$, $\frac{5}{9}$, $\frac{123}{467}$.....
Instead they wrote fractions like these as a sum of different unit fractions.
Experiment with this interactivity to see some of the different ways of writing numbers as Egyptian Fractions:
There are several NRICH problems based on Egyptian fractions. You can start by exploring unit fractions at Keep it Simple
In this problem we are going to start by considering how the Egyptians might have written fractions with a numerator of 2 (i.e. of the form $\frac{2}{n}$).
For example
$\frac{2}{3} = \frac{1}{3} + \frac{1}{3}$ (but since these are the same, this wasn't allowed.)
or
$\frac{2}{3} = \frac{1}{3} + \frac{1}{4} + \frac{1}{12}$
or
$\frac{2}{3} = \frac{1}{3} + \frac{1}{5} + \frac{1}{20} + \frac{1}{12}$
or
$\frac{2}{3} = \frac{1}{3} + \frac{1}{6} + \frac{1}{30} + \frac{1}{20} + \frac{1}{12}$
or
$\frac{2}{3} = \frac{1}{4} + \frac{1}{12} + \frac{1}{7} + \frac{1}{42} + \frac{1}{31} + \frac{1}{930} + \frac{1}{21} + \frac{1}{420} + \frac{1}{13} + \frac{1}{156}$
and so on, and so on!!
You might want to check that these are correct.
(If you can't see how these have been generated, take a look at Jamie's method in Keep it Simple )
#### BUT wouldn't it be simpler to write it as the sum of just two different unit fractions?
For $\frac{2}{3}$ that's quite easy.........$\frac{2}{3} = \frac{1}{2} + \frac{1}{6}$
But is it always so easy?
Try some other fractions with a numerator of 2.
Can they also be written as the sum of just two different unit fractions?
#### Can all fractions with a numerator of 2 (i.e. of the form $\frac{2}{n}$) be written as the sum of just two different unit fractions?
Can you find an efficient method for doing this?
You might want to explore fractions of the form $\frac{3}{n}$, $\frac{4}{n}$, $\frac{5}{n}$...... and think about how the Egyptians would have represented these, using sums with the least number of unit fractions.
You might like to take a look at a follow up problem, The Greedy Algorithm
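If you would like to experiment with that greedy idea on a computer, here is a minimal sketch in Python (standard library only; the function name is just for illustration). It repeatedly subtracts the largest unit fraction that still fits:

```python
from fractions import Fraction
from math import ceil

def greedy_egyptian(p, q):
    """Write the proper fraction p/q as a sum of unit fractions by always
    taking the largest unit fraction not exceeding what remains."""
    remainder = Fraction(p, q)
    denominators = []
    while remainder > 0:
        d = ceil(1 / remainder)        # smallest d with 1/d <= remainder
        denominators.append(d)
        remainder -= Fraction(1, d)
    return denominators

print(greedy_egyptian(2, 3))    # [2, 6], i.e. 2/3 = 1/2 + 1/6
print(greedy_egyptian(5, 121))  # the greedy method can produce very large denominators
```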
NOTES AND BACKGROUND
The ancient Egyptians lived thousands of years ago, so how do we know what they thought about numbers? A little research on this topic will show that famous mathematicians have asked and answered questions about the Egyptian fraction system for hundreds of years. You can find references to results in this field that were proved in the 1200s and in the 2000s, and you can also find some open questions - things mathematicians think are true, but have not been proved yet.
Throughout history, different civilisations have had different ways of representing numbers. Some of these systems seem strange or complicated from our perspective. The ancient Egyptian ideas about fractions are quite surprising.
For example, they wrote $\frac{1}{5}$, $\frac{1}{16}$ and $\frac{1}{429}$ in their own notation (using their own numerals).
They didn't write fractions with a numerator greater than 1 - they wouldn't, for example, write $\frac{2}{7}$, $\frac{5}{9}$, $\frac{123}{167}$.... although there is evidence that the specific fraction $\frac{2}{3}$ was used by the Egyptians, and $\frac{3}{4}$ sometimes as well. They had special symbols for these two fractions.
The Rhind Mathematical Papyrus is an important historical source for studying Egyptian fractions - it was probably a reference sheet, or a lesson sheet and contains Egyptian fraction sums for all the fractions $\frac{2}{3}$, $\frac{2}{5}$, $\frac{2}{7}... \frac{2}{101}$.
Why did they only include the odd ones?
#### $\frac{4}{n}$ and $\frac{3}{n}$
In the 1940s, the mathematicians Paul Erdős and Ernst G. Straus conjectured that every fraction with numerator 4 can be written as an Egyptian fraction sum with three terms. If you have found an example that appears to need more than three, can you find an alternative sum? Can you find a reason why it must work, or a counter-example? The conjecture isn't yet proved. It is proved for $\frac{3}{n}$.
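As a rough illustration of the conjecture (a brute-force sketch with exact rational arithmetic, not an efficient method; the function name is mine), you can search for three-term decompositions of $\frac{4}{n}$ for small $n$:

```python
from fractions import Fraction
from math import ceil, floor

def three_unit_fractions(p, q):
    """Look for a <= b <= c with 1/a + 1/b + 1/c = p/q.
    Plain brute force; only sensible for small q."""
    target = Fraction(p, q)
    for a in range(ceil(1 / target), floor(3 / target) + 1):      # 1/a <= target <= 3/a
        r1 = target - Fraction(1, a)
        if r1 <= 0:
            continue
        for b in range(max(a, ceil(1 / r1)), floor(2 / r1) + 1):  # 1/b <= r1 <= 2/b
            r2 = r1 - Fraction(1, b)
            if r2 > 0 and r2.numerator == 1 and r2.denominator >= b:
                return a, b, r2.denominator
    return None

for n in range(2, 30):
    print(n, three_unit_fractions(4, n))
```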
http://mathoverflow.net/questions/40442/product-of-two-riemann-integrable-is-riemann-integrable/40443
## product of two riemann integrable is riemann integrable [closed]
First show you only need to consider squares of functions, since $fg = \frac{1}{4}\left[(f+g)^2 - (f-g)^2\right]$. Then show that you only need to consider non-negative functions, because $f(x)\cdot f(x) = |f(x)|^2$. Then, if $0 \le f(x) \le M$ on $[a,b]$, show that $f^2(x) - f^2(y) \le 2M\,(f(x)-f(y))$.
Does anyone know how I would answer this?
-
If this is a homework problem, then -- as stated in the FAQ mathoverflow.net/faq#whatnot -- your question would be better suited to one of the sites mentioned there. – Yemon Choi Sep 29 2010 at 7:58
This question has been reasked on MSE, so could be closed here without loss. – Mariano Suárez-Alvarez Sep 29 2010 at 8:07
## 2 Answers
It follows from Lebesgue's characterization of Riemann integrable functions as bounded functions continuous outside a set of Lebesgue measure zero.
-
thanks for your advice, but is there a simpler approach because i am only a second year student and we have not covered Lebesgue's characterization of Riemann integrable functions at all. – sam Sep 29 2010 at 7:53
So this was a homework question? – Robin Chapman Sep 29 2010 at 18:23
If $f$ and $g$ are Riemann integrable over the interval $[a,b]$ then there is an $M$ such that $|f|$ and $|g|$ are both $\le M$ on $[a,b]$. The Riemann integrability of $f g$ then immediately follows from the inequality $$|f(x)g(x)-f(x')g(x')|\le |f(x)-f(x')||g(x)|+|f(x')||g(x)-g(x')|$$ $$\le M(|f(x)-f(x')| +|g(x)-g(x')|)$$ for all $x, x'\in [a,b]$.
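To spell out the step from this inequality to integrability: on each subinterval $I$ of a partition $P$ it bounds the oscillation of $fg$,
$$\sup_I (fg) - \inf_I (fg) \;\le\; M\Big[\big(\sup_I f - \inf_I f\big) + \big(\sup_I g - \inf_I g\big)\Big],$$
so summing over the subintervals gives
$$U(fg,P) - L(fg,P) \;\le\; M\Big[\big(U(f,P) - L(f,P)\big) + \big(U(g,P) - L(g,P)\big)\Big],$$
which can be made arbitrarily small by Riemann's criterion applied to $f$ and $g$.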
-
http://math.stackexchange.com/questions/165797/is-it-possible-to-combine-two-integers-in-such-a-way-that-you-can-always-take-th?answertab=active
Is it possible to combine two integers in such a way that you can always take them apart later?
Given two integers $n$ and $m$ (assuming for both $0 < n < 1000000$) is there some function $f$ so that if I know $f(n, m) = x$ I can determine $n$ and $m$, given $x$? Order is important, so $f(n, m) \not= f(m,n)$ (unless $n=m$).
-
It rather depends what you want to achieve. It is certainly possible because $\mathbb Z \times \mathbb Z$ is countable – Mark Bennet Jul 2 '12 at 18:22
The most obvious such function is $f(n,m)=(n,m)$. Or do you want the function to output integers or something? – Chris Eagle Jul 2 '12 at 18:32
Google "pairing function". This is a duplicate of many prior questions. – Gone Jul 2 '12 at 18:41
@BillDubuque This is one of those cases where as soon as I knew what the term was, I'd have my answer. But, of course, I didn't. – Jordan Reiter Jul 2 '12 at 20:35
## 5 Answers
If there are no bounds on your integers, use the Cantor pairing function. It is pleasantly easy to compute, as are its two "inverses."
For the case where your integers are bounded by say $10^6$, you can simply concatenate the decimal expansions, padding with initial $0$'s as appropriate. Or do something similar with binary expansions. Dirt cheap to combine and uncombine, an easy string manipulation even when we allow bounds much larger than $10^6$.
-
Of course the concatenation procedure can also be written mathematically: $f(m,n) = 10^6 m + n$. Given $f(m,n)$, you get back $m$ and $n$ as quotient and reminder when dividing by $10^6$. – celtschk Jul 2 '12 at 18:42
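For reference, here is a minimal Python sketch of the Cantor pairing function and its inverse (the formulas are standard; the function names are just for illustration):

```python
from math import isqrt

def cantor_pair(m, n):
    """Map the pair (m, n) of non-negative integers to a single one, bijectively."""
    return (m + n) * (m + n + 1) // 2 + n

def cantor_unpair(x):
    """Invert cantor_pair: recover (m, n) from x."""
    w = (isqrt(8 * x + 1) - 1) // 2   # largest w with w*(w+1)/2 <= x
    t = w * (w + 1) // 2
    n = x - t
    m = w - n
    return m, n

assert all(cantor_unpair(cantor_pair(m, n)) == (m, n)
           for m in range(100) for n in range(100))
```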
$f(m,n) = m + \frac{1}{n}$.
If there are digits to the right of the decimal point, then $m$ is the number to the left of the decimal and $n$ is the reciprocal of the part to the right of the decimal. Example: $f(m,n) = 10.5$ gives $m = 10$ and $n = 1/0.5 = 2$.
If there are no digits to the right of the decimal point (i.e. $f(m,n)$ is itself an integer), then $m = f(m,n) - 1$ and $n = 1$. Example: $f(m,n) = 20$ gives $m = 19$ and $n = 1$.
-
Since others have answered, here is another idea - less easy to write down explicitly as a mathematical function, but easy to describe and easy to implement if you have a machine which can handle strings. Send the digits of the first number to odd positions and the digits of the other to even positions so (31, 5681) would go to 50,608,311.
-
But this would only work if the numbers had the same number of digits, right? – Jordan Reiter Jul 2 '12 at 20:33
@JordanReiter I have deliberately used an example where the number of digits is different. – Mark Bennet Jul 2 '12 at 21:03
This is the first answer so far that doesn't necessarily use at least one more than the maximum number of digits each integer is allowed, plus it has an infinite range and it's trivially easy to parse back out without knowing the original maximums. It also works just as well in binary as decimal. If storage is a premium - and I can't think of any other reason to do something like this - yours would be my first choice. – SilverbackNet Jul 2 '12 at 23:02
@MarkBennet sorry, I see that now! – Jordan Reiter Jul 3 '12 at 16:19
Define $f(n,m) = n+1000000m$. Then $n = f(n,m) \mod 1000000$, and $m = \frac{f(n,m) - n}{1000000}$.
-
$f(n,m) = 2^n 3^m$.
Alternatively, use the bijection between $\mathbb N \times \mathbb N$ and $\mathbb N$ which is given by $$f(n,m) = \frac{(n+m)(n+m+1)}{2} + m$$
-
The advantage the $2^n 3^m$ approach has over all the other suggestions so far is that it gives you a way of encoding an ordered $r$-tuple without knowing in advance what $r$ is, by adding in powers of 5, 7 ... . – Mark Bennet Jul 2 '12 at 18:50
@MarkBennet This scaling advantage is also shared by the Cantor pairing function, through recursive application. – Michael Boratko Jul 2 '12 at 19:46
@MichaelBoratko If you have an integer given by the Cantor Pairing you cannot decode it without knowing how many iterations you have to go through. The prime product gives you this, at the cost of not being surjective, and this also means that the coded values will tend to be significantly higher for the prime product version. – Mark Bennet Jul 2 '12 at 19:53
@MarkBennet Ah, now I understand what you meant. – Michael Boratko Jul 2 '12 at 23:53
http://physics.stackexchange.com/questions/46110/canonical-commutation-relations?answertab=active
# Canonical Commutation Relations
Is it logically sound to accept the canonical commutation relation (CCR)
$$[x,p]~=~i\hbar$$
as a postulate of quantum mechanics? Or is it more correct to derive it given some form for $p$ in the position basis?
I understand that the QM formalism works; it's just that I sometimes end up thinking in circles when I try to see where the postulates are.
Could someone give me a clear and logical account of what should be taken as a postulate in this regard, and an explanation as to why their viewpoint is the most right, in some sense!
-
## 3 Answers
You can either accept it as a postulate (in which case it is often more convenient to postulate the CCR and CAR for creation and annihilation operators) or you can derive the relation in the position basis with
$$\hat x = x,\qquad \hat p = -i \hbar \nabla \quad\Rightarrow\quad [ \hat x , \hat p ] = - i \hbar x \nabla + i \hbar \nabla x = - i \hbar x \nabla + i \hbar + i \hbar x \nabla = i\hbar,$$
since you have to use the product rule when you apply $\nabla x$ to a function $f$.
You could also get these from the correspondence with classical mechanics, which says that $\{ q , p \} = 1$ for the Poisson bracket $\{\cdot,\cdot\}$, which is related to the commutator by a factor of $i \hbar$. That this correspondence holds is visible for example in the Ehrenfest theorem.
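If you want to convince yourself of the position-basis computation symbolically, here is a small sketch using SymPy (assuming it is installed; the helper names are just for illustration):

```python
import sympy as sp

x, hbar = sp.symbols('x hbar', real=True)
f = sp.Function('f')(x)

def X(psi):   # position operator: multiply by x
    return x * psi

def P(psi):   # momentum operator in the position basis
    return -sp.I * hbar * sp.diff(psi, x)

commutator = X(P(f)) - P(X(f))    # [x, p] applied to a test function f
print(sp.simplify(commutator))    # prints I*hbar*f(x)
```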
-
The choice of postulates is somewhat arbitrary in the sense that given a set of postulates you almost always can find an alternative set. The choice is guided by subjective criteria such as simplicity, closeness to experiment, or theoretical elegance.
However there are situations where some postulates/theorems do not make sense. For instance, $[\hat{x},\hat{p}] = i\hbar$ makes no sense in the Wigner & Moyal formulation of quantum mechanics, neither as postulate nor as theorem, because this formulation of quantum mechanics does not use operators:
The chief advantage of the phase space formulation is that it makes quantum mechanics appear as similar to Hamiltonian mechanics as possible by avoiding the operator formalism, thereby "'freeing' the quantization of the 'burden' of the Hilbert space.
Although the phase space formulation of quantum mechanics does not use commutation relations, they can still be obtained as a theorem when one makes the transition from the general phase space state to the configuration space wavefunction: $W(p,x;t) \rightarrow \Psi(x;t)$. Precisely, an explicit derivation of $[\hat{x},\hat{p}] = i\hbar$ is given in my paper Positive definite phase space quantum mechanics.
-
Wigner-Moyal still needs to construct a Hilbert space to be a complete foundation, and then there are operators, and the CCR makes sense, though not as a postulate. – Arnold Neumaier Dec 7 '12 at 11:49
@ArnoldNeumaier: Before answering I would like to know what exactly you mean by "Wigner-Moyal still needs to construct a Hilbert space to be a complete foundation". I can interpret that in several ways. – juanrga Dec 7 '12 at 14:30
The OP asked about CCR as part of a foundation for QM. You mentioned Wigner-Moyal. No matter what you meant by it, it either constructs a Hilbert space, in which case the CCR makes sense there but not as a postulate, or it doesn't, in which case it is a lousy foundation. – Arnold Neumaier Dec 7 '12 at 14:57
@ArnoldNeumaier: Sorry, but this continues being unclear to me. I still can interpret your words in several alternative ways and cannot chose the correct answer. Let me be more specific. Why do you think/believe that it is needed to construct a Hilbert space to give a foundation to the Wigner-Moyal formulation, when it does not even require wavefunctions? – juanrga Dec 7 '12 at 15:07
Without a wave function, the setting is far too restricted. How do you compute the color of gold in the Wigner-Moyal setting, without having a wave function? – Arnold Neumaier Dec 7 '12 at 18:15
Your running in circles will stop once you commit yourself to a choice.
What to regard as postulate is always a matter of choice (by you or by whoever writes an exposition of the basics). One starts from a point where the development is in some sense simplest. And one may motivate the postulates by analogies or whatever. The CCR are a simple coordinate-independent starting point.
However it is more sensible to introduce the momentum as the infinitesimal generator of a translation in position space. This is its fundamental meaning and essential for Noether's theorem, and has the CCR as a simple corollary.
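To make "infinitesimal generator of a translation" concrete in the position representation (a standard sketch): if $(T_a\psi)(x) = \psi(x-a)$ and one writes $T_a = e^{-ia\hat p/\hbar}$, then comparing first-order terms in $a$,
$$\psi(x) - a\,\partial_x\psi(x) + O(a^2) \;=\; \psi(x) - \frac{ia}{\hbar}\,(\hat p\,\psi)(x) + O(a^2) \quad\Rightarrow\quad \hat p = -i\hbar\,\partial_x,$$
and the CCR then follows from the product-rule computation in the first answer.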
-
+1 for Emmy :-) – Tobias Kienzler Dec 7 '12 at 11:11
http://www.physicsforums.com/showthread.php?p=3356482
Physics Forums
## invariance of scalar products on Lie algebras
Hi folks,
If I have a Lie algebra $$\mathfrak{g}$$ with an invariant (under the adjoint action ad of the Lie algebra) scalar product, what are the conditions that this scalar product is also invariant under the adjoint action Ad of the group? For instance, the Killing form is invariant under both actions. Is this also true in general?
My idea for the proof would be the following: If I know that the scalar product is invariant under Ad, then for any fixed vectors v,w in the Lie algebra, the function
$$f: G \rightarrow \mathbf{R}$$
$$\ g \mapsto \langle Ad(g)v, Ad(g)w \rangle$$
is constant, i.e.
$$f(g)=f(1)=k$$
By differentiating this function, I should be able to obtain the converse of the statement I need. I hope that this can be used to derive a condition for the invariance under Ad, given the invariance under ad.
I would be grateful for any hints since I'm stuck with this very crude ansatz.
If the group is connected, then ad-invariance will automatically imply Ad-invariance. Basically, ad-invariance implies that this function f that you've defined is locally constant (since its differential will be 0). If G is connected, then locally constant implies constant. Hope this helps!
Quote by rmehta If the group is connected, then ad-invariance will automatically imply Ad-invariance. Basically, ad-invariance implies that this function f that you've defined is locally constant (since its differential will be 0). If G is connected, then locally constant implies constant. Hope this helps!
Thank you!
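To flesh out the differentiation step: for fixed $v, w$ and any $g \in G$, $X \in \mathfrak{g}$, differentiate $f$ along the curve $t \mapsto g\exp(tX)$, using $\mathrm{Ad}(g\exp(tX)) = \mathrm{Ad}(g)\,e^{t\,\mathrm{ad}(X)}$:
$$\frac{d}{dt}\Big|_{t=0} f(g\exp(tX)) = \langle \mathrm{Ad}(g)\,\mathrm{ad}(X)v,\ \mathrm{Ad}(g)w\rangle + \langle \mathrm{Ad}(g)v,\ \mathrm{Ad}(g)\,\mathrm{ad}(X)w\rangle.$$
Since $\mathrm{Ad}(g)\,\mathrm{ad}(X)\,\mathrm{Ad}(g)^{-1} = \mathrm{ad}(\mathrm{Ad}(g)X)$, this equals $\langle \mathrm{ad}(Y)v', w'\rangle + \langle v', \mathrm{ad}(Y)w'\rangle$ with $Y = \mathrm{Ad}(g)X$, $v' = \mathrm{Ad}(g)v$, $w' = \mathrm{Ad}(g)w$, which vanishes by ad-invariance. So every directional derivative of $f$ is zero, hence $f$ is locally constant, and on a connected group it is constant, which is exactly Ad-invariance.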
http://math.stackexchange.com/questions/176559/lp-relaxation-for-ilp-ip-integer-linear-programming?answertab=votes
# LP relaxation for ILP\IP (integer linear programming)
I am familiar with LP relaxation for ILP (or IP). Assume we are concerned with an integer minimization problem, which we formalize as an ILP; we then relax the ILP into an LP, and we say that the LP provides a lower bound for the ILP. Why is this correct?
I do understand that a feasible solution for the ILP is a feasible solution for the LP, and the reverse is not always true, i.e. a feasible solution for the LP is not necessarily a feasible solution for the ILP.
Can someone please point out and explain briefly why this is so?
-
## 1 Answer
As you say, a feasible solution for the ILP is a feasible solution for the LP. So if the LP has an optimal solution with objective value $\alpha$, this implies there is no feasible solution for the LP with objective value $< \alpha$, and in particular no feasible solution for the ILP with objective value $<\alpha$. That's what it means for $\alpha$ to be a lower bound for the ILP.
-
Thank you for your answer, but I still do not get it... why is it a lower bound rather than an upper bound? Could you please elaborate on it a little bit more? – user36774 Jul 29 '12 at 19:29
"This implies there is no feasible solution for the LP with objective value $< \alpha$." Doesn't it define a different goal function, whose optimum might exist? – user36774 Jul 29 '12 at 19:35
@user36774: it's a lower bound because this is a minimization problem. – Robert Israel Jul 29 '12 at 22:50
For example, take this simple ILP: minimize $2 x$ subject to $2 x \ge 3$, $x$ an integer. Leaving out the "integer" requirement, the linear programming problem has optimal solution $x=3/2$ with objective value $3$. That says there is no real solution with $2x < 3$, and therefore no integer solution with $2x < 3$, so $3$ is a lower bound on the objective values for the ILP. Of course the actual optimal solution for the ILP has objective value $4$ in this case. – Robert Israel Jul 29 '12 at 22:56
Maybe I am wrong, but x=2 is a feasible solution for the ILP, so why do you say that the optimal solution for the ILP has an objective value of 4? – user36774 Jul 30 '12 at 5:17
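For anyone who wants to see the bound numerically, here is a small sketch of the toy example from the comments (minimize $2x$ subject to $2x \ge 3$), using SciPy's `linprog`; this assumes SciPy is installed, and the rounding step at the end is specific to this one-variable example, not a general ILP method.

```python
from math import ceil
from scipy.optimize import linprog

# LP relaxation: minimize 2x subject to 2x >= 3 (written as -2x <= -3), x >= 0
lp = linprog(c=[2], A_ub=[[-2]], b_ub=[-3], bounds=[(0, None)])
print(lp.x[0], lp.fun)        # x = 1.5, objective 3.0 -- a lower bound for the ILP

# ILP: x must be an integer; for this one-variable example rounding up happens to be optimal
x_int = ceil(lp.x[0])
print(x_int, 2 * x_int)       # x = 2, objective 4 >= 3
```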
http://physics.stackexchange.com/questions/24054/shouldnt-gravity-travel-at-light-speed-immediately?answertab=oldest
# shouldn't gravity travel at light speed immediately
If gravity travels at c (light speed), why aren't objects pulled to Earth at that speed?
Since the velocity of gravity is 9.8 meters per second squared, will it eventually accelerate until it maxes out at c then hold constant?
And if that is the case, then why doesn't the gravitational pull between objects and earth immediately travel at c like photons?
So the acceleration of gravity is 9.8 meters per second squared only on Earth.
The gravitational pull is contingent on the bodies' masses and the distance between them.
Gravity waves travel at the speed of light.
So if a gravity wave extending from one primary object of greater mass to another object of lesser mass were to move at light speed, it wouldn't affect the speed or acceleration of the secondary object. The secondary object would just react to the primary object at the speed of light, but the reaction itself is dependent upon the size and distance between them?
-
Where did you get that the velocity of gravity is 9.8 meters per second squared? Notice that velocity is measured in meters per second, not in meters per second squared. – Anixx Apr 20 '12 at 1:12
## 2 Answers
I think there's a combination of terminology and information misunderstanding going on here, so let me try and explain this at an appropriate level. First, the phrase "travel at light speed immediately" doesn't make much sense. In physics, there's not really any such thing as "immediately." "Immediately" is synonymous with "instantaneously," and there's nothing we've ever measured that we can call instantaneous communication. But, regardless of a lack of instantaneous anything, phrased this way, I hope it's clear that saying something moves instantaneously at a finite speed is nonsensical from a conceptual point of view. It's like saying something is moving at 5 m/s and 20 m/s at the same time; 5 just doesn't equal 20, no matter how you slice it.
When you talk about the velocity of gravity, you're talking about the speed at which the force carrier of gravity (or the spacetime disturbance) propagates outward. That value is c, as far as we can tell. When you start talking about 9.8 meters per second squared, you're talking about the acceleration due to gravity, which is not the same thing. How hard you push on something and how fast it moves are related, but they're not the same, right? It's the difference between velocity and acceleration.
Now, if something provides a continuous acceleration, the object that is accelerating will keep going faster and faster, approaching a velocity of c. It doesn't matter what provides the acceleration; could be gravity, could be a rocket booster with infinite fuel. The point is, the speed that gravity propagates has nothing to do with how hard it pulls on objects. Those are completely different properties that are unrelated.
Finally, the gravitational pull between all objects does respond at speed c. This includes the earth and moon. But just because gravity "gets from" the earth to the moon at a speed c, that doesn't mean it causes the moon to move toward the earth with velocity c. I hope that clears up some of the confusion.
-
Also, this 9.8 $m/s^2$ is only true near the earth's surface, so there is not enough time/distance to accelerate to $c$ anyhow. – Bernhard Apr 20 '12 at 5:35
9.8 m/sec/sec is not the speed of gravity, it is the acceleration due to gravity at the surface of the earth. At the surface of the moon it is a good deal less. At the surface of the sun it is a lot more.
It is true that if you could fall in a straight line, gaining 9.8 m/sec every second, after about a year you would approach the speed of light, but you would never surpass it. It would be hard to find such a building to jump out of. You could do it in space if you had a good enough rocket motor and enough fuel.
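For what it's worth, the "about a year" figure is easy to check with a back-of-the-envelope Newtonian estimate (it deliberately ignores relativity, which is exactly why you would never actually reach c), e.g. in Python:

```python
c = 299_792_458          # speed of light, m/s
g = 9.8                  # acceleration, m/s^2

seconds = c / g
print(seconds / 86400)   # roughly 354 days of constant 9.8 m/s^2 acceleration
```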
Think about sound in air. The speed of sound is about 340 meters/second, but that does not mean if the wind blows something around, it blows it at that speed. What it means is if someone claps their hands 340 meters away, you hear it one second later.
Gravity is like that. If a big piece of matter, like a planet, suddenly moves into position 30 000 kilometers away, its gravity is felt by you 1/10 second later. But that only means you feel the force at that time, not that you are traveling at that speed.
-
http://cstheory.stackexchange.com/questions/tagged/hypergraphs
# Tagged Questions
The hypergraphs tag has no wiki summary.
4 answers, 480 views
### hardness of approximating the chromatic number in graphs with bounded degree
I am looking for hardness results on vertex coloring of graphs with bounded degree. Given a graph $G(V,E)$, we know that for any $\epsilon>0$, it's hard to approximate $\chi(G)$ within a factor ...
2 answers, 1k views
### Recognizing line graphs of hypergraphs
The line graph of a hypergraph $H$ is the (simple) graph $G$ having edges of $H$ as vertices with two edges of $H$ are adjacent in $G$ if they have nonempty intersection. A hypergraph is an ...
0 answers, 137 views
### k-uniform k-partite hypergraph matching in polynomial time
I have what seems like an elementary question, but google didn't throw up any answers for it. I would appreciate any pointers users here may provide. Please note that I have also asked this question ...
4 answers, 474 views
### What are the root difficulties in going from graphs to hypergraphs?
There are many examples in combinatorics and computer science where we can analyze a graph-theoretic problem but for the problem's hypergraph analog, our tools are lacking. Why do you think problems ...
2 answers, 418 views
### Do Shift-chains have Property B?
For $A\subset [n]$ denote by $a_i$ the $i^{th}$ smallest element of $A$. For two $k$-element sets, $A,B\subset [n]$, we say that $A\le B$ if $a_i\le b_i$ for every $i$. A $k$-uniform hypergraph ...
1 answer, 176 views
### CSPs with unbounded fractional hypertree width
At SODA 2006, Martin Grohe and Dániel Marx's paper "Constraint solving via fractional edge covers" (ACM citation) showed that for the class of hypergraphs $H$ with bounded fractional ...
1 answer, 361 views
### Max-clique in line graph of hypergraph
Suppose we have a multigraph (later, a multihypergraph). An edge-clique is a set of edges which all pairwise intersect (have at least one common vertex). Then any edge-clique $C$ in a multigraph ...
0 answers, 196 views
### Follow-up on Nair/Tetali's Correlation Decay Tree for hypergraphs?
I'm wondering if anybody has followed up on Nair/Tetali's correlation decay tree construction for hypergraphs. I didn't find anything relevant in back-citation on google scholar. Interesting question ...
2 answers, 292 views
### Consequences of lower bounds for $\epsilon$-nets on approximation
Many here are probably aware of Alon's recent super-linear lower bounds for $\epsilon$-nets in a natural geometric setting [PDF]. I would like to know what, if anything, such a lower bound implies ...
1 answer, 291 views
### Efficient algorithm for near-optimal edge-colourings of hypergraphs
Graph colouring problems are, already, hard enough for most people. Even so, I'm going to have to be difficult and ask a problem about hypergraph colouring. Question. What efficient algorithms are ...
4 answers, 359 views
### “All-different hypergraph coloring” - known problem?
I am interested in the following problem: Given a set X and subsets X_1, ..., X_n of X, find a coloring of the elements of X with k colors such that the elements in each X_i are all differently ...
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aoms/1177699597
### Limiting Behavior of Posterior Distributions when the Model is Incorrect
Robert H. Berk
Source: Ann. Math. Statist. Volume 37, Number 1 (1966), 51-58.
#### Abstract
The large sample behavior of posterior distributions is examined without the assumption that the model is correct. Under certain conditions it is shown that asymptotically, the posterior distribution for a parameter $\theta$ is confined to a set (called the asymptotic carrier) which may, in general, contain more than one point. The asymptotic carrier depends on the model, the carrier of the prior distribution and the actual distribution of the observations. An example shows that, in general, there need be no convergence (in any sense) of the posterior distribution to a limiting distribution over the asymptotic carrier. This is in contrast to the (known) asymptotic behavior when the model is correct; see e.g. [7], p. 304: the asymptotic carrier then contains only one point, the "true value" of $\theta$ and the posterior distribution converges in distribution to the distribution degenerate at the "true value."
#### Related Works:
See Correction: Robert H. Berk. Correction Notes: Correction to Limiting Behavior of Posterior Distributions when the Model is Incorrect. Ann. Math. Statist., Volume 37, Number 3 (1966), 745--746.
Project Euclid: euclid.aoms/1177699477
Full-text: Open access
Permanent link to this document: http://projecteuclid.org/euclid.aoms/1177699597