# Typeclassopedia
By Brent Yorgey, byorgey@cis.upenn.edu
As published 12 March 2009, issue 13 of the Monad.Reader, with tiny November 2011 updates by Geheimdienst
Alternate formats: PDF / tex source / bibliography
The standard Haskell libraries feature a number of type classes with algebraic or category-theoretic underpinnings. Becoming a fluent Haskell hacker requires intimate familiarity with them all, yet acquiring this familiarity often involves combing through a mountain of tutorials, blog posts, mailing list archives, and IRC logs.
The goal of this article is to serve as a starting point for the student of Haskell wishing to gain a firm grasp of its standard type classes. The essentials of each type class are introduced, with examples, commentary, and extensive references for further reading.
# 1 Introduction
Have you ever had any of the following thoughts?
• What the heck is a monoid, and how is it different from a monad?
• I finally figured out how to use Parsec with do-notation, and someone told me I should use something called `Applicative` instead. Um, what?
• Someone in the #haskell IRC channel used `(***)`, and when I asked lambdabot to tell me its type, it printed out scary gobbledygook that didn’t even fit on one line! Then someone used `fmap fmap fmap` and my brain exploded.
• When I asked how to do something I thought was really complicated, people started typing things like `zip.ap fmap.(id &&& wtf)` and the scary thing is that they worked! Anyway, I think those people must actually be robots because there’s no way anyone could come up with that in two seconds off the top of their head.
If you have, look no further! You, too, can write and understand concise, elegant, idiomatic Haskell code with the best of them.
There are two keys to an expert Haskell hacker’s wisdom:
1. Understand the types.
2. Gain a deep intuition for each type class and its relationship to other type classes, backed up by familiarity with many examples.
It’s impossible to overstate the importance of the first; the patient student of type signatures will uncover many profound secrets. Conversely, anyone ignorant of the types in their code is doomed to eternal uncertainty. “Hmm, it doesn’t compile ... maybe I’ll stick in an `fmap` here ... nope, let’s see ... maybe I need another `(.)` somewhere? ... um ...”
The second key—gaining deep intuition, backed by examples—is also important, but much more difficult to attain. A primary goal of this article is to set you on the road to gaining such intuition. However—
There is no royal road to Haskell. —Euclid
This article can only be a starting point, since good intuition comes from hard work, not from learning the right metaphor. Anyone who reads and understands all of it will still have an arduous journey ahead—but sometimes a good starting point makes a big difference.
It should be noted that this is not a Haskell tutorial; it is assumed that the reader is already familiar with the basics of Haskell, including the standard `Prelude`, the type system, data types, and type classes.
The type classes we will be discussing and their interrelationships:
∗ When Typeclassopedia was originally written, `Pointed` and `Comonad` were in the category-extras library. It has since been deprecated and they have moved to the pointed package and the comonad package. —Geheimdienst, Nov 2011
• Solid arrows point from the general to the specific; that is, if there is an arrow from Foo to Bar it means that every Bar is (or should be, or can be made into) a Foo.
• Dotted arrows indicate some other sort of relationship.
• `Monad` and `ArrowApply` are equivalent.
• `Pointed` and `Comonad` are greyed out since they are not actually (yet) in the standard Haskell libraries ∗.
One more note before we begin. I’ve seen “type class” written as one word, “typeclass,” but let’s settle this once and for all: the correct spelling uses two words (the title of this article notwithstanding), as evidenced by, for example, the Haskell 98 Revised Report, early papers on type classes like Type classes in Haskell and Type classes: exploring the design space, and Hudak et al.’s history of Haskell.
We now begin with the simplest type class of all: `Functor`.
# 2 Functor
The `Functor` class (haddock) is the most basic and ubiquitous type class in the Haskell libraries. A simple intuition is that a `Functor` represents a “container” of some sort, along with the ability to apply a function uniformly to every element in the container. For example, a list is a container of elements, and we can apply a function to every element of a list using `map`. A binary tree is also a container of elements, and it’s not hard to come up with a way to recursively apply a function to every element in a tree.
Another intuition is that a `Functor` represents some sort of “computational context.” This intuition is generally more useful, but is more difficult to explain, precisely because it is so general. Some examples later should help to clarify the `Functor`-as-context point of view.
In the end, however, a `Functor` is simply what it is defined to be; doubtless there are many examples of `Functor` instances that don’t exactly fit either of the above intuitions. The wise student will focus their attention on definitions and examples, without leaning too heavily on any particular metaphor. Intuition will come, in time, on its own.
## 2.1 Definition
The type class declaration for `Functor`:
```class Functor f where
fmap :: (a -> b) -> f a -> f b```
`Functor` is exported by the `Prelude`, so no special imports are needed to use it.
First, the `f a` and `f b` in the type signature for `fmap` tell us that `f` isn’t just a type; it is a type constructor which takes another type as a parameter. (A more precise way to say this is that the kind of `f` must be `* -> *`.) For example, `Maybe` is such a type constructor: `Maybe` is not a type in and of itself, but requires another type as a parameter, like `Maybe Integer`. So it would not make sense to say `instance Functor Integer`, but it could make sense to say `instance Functor Maybe`.
Now look at the type of `fmap`: it takes any function from `a` to `b`, and a value of type `f a`, and outputs a value of type `f b`. From the container point of view, the intention is that `fmap` applies a function to each element of a container, without altering the structure of the container. From the context point of view, the intention is that `fmap` applies a function to a value without altering its context. Let’s look at a few specific examples.
## 2.2 Instances
∗ Recall that `[]` has two meanings in Haskell: it can either stand for the empty list, or, as here, it can represent the list type constructor (pronounced “list-of”). In other words, the type `[a]` (list-of-`a`) can also be written `([] a)`.
∗ You might ask why we need a separate `map` function. Why not just do away with the current list-only `map` function, and rename `fmap` to `map` instead? Well, that’s a good question. The usual argument is that someone just learning Haskell, when using `map` incorrectly, would much rather see an error about lists than about `Functor`s.
As noted before, the list constructor `[]` is a functor ∗; we can use the standard list function `map` to apply a function to each element of a list ∗. The `Maybe` type constructor is also a functor, representing a container which might hold a single element. The function `fmap g` has no effect on `Nothing` (there are no elements to which `g` can be applied), and simply applies `g` to the single element inside a `Just`. Alternatively, under the context interpretation, the list functor represents a context of nondeterministic choice; that is, a list can be thought of as representing a single value which is nondeterministically chosen from among several possibilities (the elements of the list). Likewise, the `Maybe` functor represents a context with possible failure. These instances are:
```instance Functor [] where
fmap _ [] = []
fmap g (x:xs) = g x : fmap g xs
-- or we could just say fmap = map
instance Functor Maybe where
fmap _ Nothing = Nothing
fmap g (Just a) = Just (g a)```
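A quick GHCi check of these instances (an illustrative session, not part of the original text):

```
> fmap (+1) [1,2,3]
[2,3,4]
> fmap show (Just 3)
Just "3"
> fmap (+1) Nothing
Nothing
```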
As an aside, in idiomatic Haskell code you will often see the letter `f` used to stand for both an arbitrary `Functor` and an arbitrary function. In this tutorial, I will use `f` only to represent `Functor`s, and `g` or `h` to represent functions, but you should be aware of the potential confusion. In practice, what `f` stands for should always be clear from the context, by noting whether it is part of a type or part of the code.
There are other `Functor` instances in the standard libraries; below are a few. Note that some of these instances are not exported by the `Prelude`; to access them, you can import `Control.Monad.Instances`.
• `Either e` is an instance of `Functor`; `Either e a` represents a container which can contain either a value of type `a`, or a value of type `e` (often representing some sort of error condition). It is similar to `Maybe` in that it represents possible failure, but it can carry some extra information about the failure as well.
• `((,) e)` represents a container which holds an “annotation” of type `e` along with the actual value it holds.
• `((->) e)`, the type of functions which take a value of type `e` as a parameter, is a `Functor`. It would be clearer to write it as `(e ->)`, by analogy with an operator section like `(1 +)`, but that syntax is not allowed. However, you can certainly think of it as `(e ->)`. As a container, `(e -> a)` represents a (possibly infinite) set of values of `a`, indexed by values of `e`. Alternatively, and more usefully, `(e ->)` can be thought of as a context in which a value of type `e` is available to be consulted in a read-only fashion. This is also why `((->) e)` is sometimes referred to as the reader monad; more on this later.
• `IO` is a `Functor`; a value of type `IO a` represents a computation producing a value of type `a` which may have I/O effects. If `m` computes the value `x` while producing some I/O effects, then `fmap g m` will compute the value `g x` while producing the same I/O effects.
• Many standard types from the containers library (such as `Tree`, `Map`, `Sequence`, and `Stream`) are instances of `Functor`. A notable exception is `Set`, which cannot be made a `Functor` in Haskell (although it is certainly a mathematical functor) since it requires an `Ord` constraint on its elements; `fmap` must be applicable to any types `a` and `b`.
A good exercise is to implement `Functor` instances for `Either e`, `((,) e)`, and `((->) e)`.
## 2.3 Laws
As far as the Haskell language itself is concerned, the only requirement to be a `Functor` is an implementation of `fmap` with the proper type. Any sensible `Functor` instance, however, will also satisfy the functor laws, which are part of the definition of a mathematical functor. There are two:
```fmap id = id
fmap (g . h) = (fmap g) . (fmap h)```
∗ Technically, these laws make `f` and `fmap` together an endofunctor on Hask, the category of Haskell types (ignoring ⊥, which is a party pooper). See Wikibook: Category theory.
Together, these laws ensure that `fmap g` does not change the structure of a container, only the elements. Equivalently, and more simply, they ensure that `fmap g` changes a value without altering its context ∗.
The first law says that mapping the identity function over every item in a container has no effect. The second says that mapping a composition of two functions over every item in a container is the same as first mapping one function, and then mapping the other.
As an example, the following code is a “valid” instance of `Functor` (it typechecks), but it violates the functor laws. Do you see why?
```-- Evil Functor instance
instance Functor [] where
fmap _ [] = []
fmap g (x:xs) = g x : g x : fmap g xs```
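If not, here is the first law failing concretely. (This is a hypothetical check; the evil `fmap` is renamed `fmapEvil` so it can be tried without shadowing the real instance.)

```
-- The same definition as above, written as a standalone function.
fmapEvil :: (a -> b) -> [a] -> [b]
fmapEvil _ []     = []
fmapEvil g (x:xs) = g x : g x : fmapEvil g xs

-- fmapEvil id [1,2,3] evaluates to [1,1,2,2,3,3],
-- violating the first law, fmap id = id.
```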
Any Haskeller worth their salt would reject this code as a gruesome abomination.
## 2.4 Intuition
There are two fundamental ways to think about `fmap`. The first has already been touched on: it takes two parameters, a function and a container, and applies the function “inside” the container, producing a new container. Alternately, we can think of `fmap` as applying a function to a value in a context (without altering the context).
Just like all other Haskell functions of “more than one parameter,” however, `fmap` is actually curried: it does not really take two parameters, but takes a single parameter and returns a function. For emphasis, we can write `fmap`’s type with extra parentheses: `fmap :: (a -> b) -> (f a -> f b)`. Written in this form, it is apparent that `fmap` transforms a “normal” function (`g :: a -> b`) into one which operates over containers/contexts (`fmap g :: f a -> f b`). This transformation is often referred to as a lift; `fmap` “lifts” a function from the “normal world” into the “`f` world.”
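As a small illustration of this lifting view (the names `double` and `liftedDouble` are invented for the example):

```
double :: Int -> Int
double = (* 2)

-- fmap lifts double to work on any Functor containing Ints.
liftedDouble :: Functor f => f Int -> f Int
liftedDouble = fmap double

-- liftedDouble [1,2,3]  == [2,4,6]
-- liftedDouble (Just 5) == Just 10
```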
## 2.5 Further reading
A good starting point for reading about the category theory behind the concept of a functor is the excellent Haskell wikibook page on category theory.
# 3 Pointed
∗ The `Pointed` type class lives in the pointed library, moved from the category-extras library. The `point` function was originally named `pure`.
Edward Kmett, the author of category-extras, pointed, and many related packages, has since moved his focus to semigroupoids and semigroups. He finds them more interesting and useful, and considers `Pointed` to be historical now (he still provides the pointed package only because “people were whinging”). Nevertheless, `Pointed` has kept its value for explaining, and its place in Typeclassopedia. —Geheimdienst, Nov 2011
The `Pointed` type class represents pointed functors. It is not actually a type class in the standard libraries ∗. But it could be, and it’s useful in understanding a few other type classes, notably `Applicative` and `Monad`, so let’s pretend for a minute.
Given a `Functor`, the `Pointed` class represents the additional ability to put a value into a “default context.” Often, this corresponds to creating a container with exactly one element, but it is more general than that. The type class declaration for `Pointed` is:
```class Functor f => Pointed f where
point :: a -> f a -- aka pure, singleton, return, unit```
Most of the standard `Functor` instances could also be instances of `Pointed`—for example, the `Maybe` instance of `Pointed` is `point = Just`; there are many possible implementations for lists, the most natural of which is `point x = [x]`; for `((->) e)` it is ... well, I’ll let you work it out. (Just follow the types!)
One example of a `Functor` which is not `Pointed` is `((,) e)`. If you try implementing `point :: a -> (e,a)` you will quickly see why: since the type `e` is completely arbitrary, there is no way to generate a value of type `e` out of thin air! However, as we will see, `((,) e)` can be made `Pointed` if we place an additional restriction on `e` which allows us to generate a default value of type `e` (the most common solution is to make `e` an instance of `Monoid`).
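A minimal sketch of that restricted instance, assuming the `Pointed` class declared above, `Data.Monoid` in scope, and an existing `Functor` instance for `((,) e)`:

```
instance Monoid e => Pointed ((,) e) where
  point x = (mempty, x)   -- the annotation defaults to the monoid's identity element
```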
∗ For those interested in category theory, this law states precisely that `point` is a natural transformation from the identity functor to `f`.
The `Pointed` class has only one law ∗:
`fmap g . point = point . g`
∗ ... modulo ⊥, `seq`, and assuming a lawful `Functor` instance.
However, you need not worry about it: this law is actually a so-called “free theorem” guaranteed by parametricity (see Wadler’s Theorems for free!); it’s impossible to write an instance of `Pointed` which does not satisfy it ∗.
# 4 Applicative
A somewhat newer addition to the pantheon of standard Haskell type classes, applicative functors represent an abstraction lying exactly in between `Functor` and `Monad`, first described by McBride and Paterson. The title of their classic paper, Applicative Programming with Effects, gives a hint at the intended intuition behind the `Applicative` type class. It encapsulates certain sorts of “effectful” computations in a functionally pure way, and encourages an “applicative” programming style. Exactly what these things mean will be seen later.
## 4.1 Definition
The `Applicative` class adds a single capability to `Pointed` functors. Recall that `Functor` allows us to lift a “normal” function to a function on computational contexts. But `fmap` doesn’t allow us to apply a function which is itself in a context to a value in another context. `Applicative` gives us just such a tool. Here is the type class declaration for `Applicative`, as defined in `Control.Applicative`:
```class Functor f => Applicative f where
pure :: a -> f a -- aka point
(<*>) :: f (a -> b) -> f a -> f b```
Note that every `Applicative` must also be a `Functor`. In fact, as we will see, `fmap` can be implemented using the `Applicative` methods, so every `Applicative` is a functor whether we like it or not; the `Functor` constraint forces us to be honest.
∗ Recall that `($)` is just function application: `f $ x = f x`.
As always, it’s crucial to understand the type signature of `(<*>)`. The best way of thinking about it comes from noting that the type of `(<*>)` is similar to the type of `($)` ∗, but with everything enclosed in an `f`. In other words, `(<*>)` is just function application within a computational context. The type of `(<*>)` is also very similar to the type of `fmap`; the only difference is that the first parameter is `f (a -> b)`, a function in a context, instead of a “normal” function `(a -> b)`.
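A side-by-side comparison of the three types just mentioned may help (class constraints are omitted for alignment; this is only a reminder, not a new definition):

```
-- ($)   ::   (a -> b) ->   a ->   b
-- fmap  ::   (a -> b) -> f a -> f b
-- (<*>) :: f (a -> b) -> f a -> f b
```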
Of course, `pure` looks rather familiar: it is just the `point` function from the `Pointed` type class under another name. If `Pointed` were actually in the standard library (and the renaming didn’t bother you), `Applicative` could instead be defined as:
```class Pointed f => Applicative' f where
(<*>) :: f (a -> b) -> f a -> f b```
## 4.2 Laws
∗ For the curious, the other four laws are: identity, `pure id <*> v = v`; composition, `pure (.) <*> u <*> v <*> w = u <*> (v <*> w)`; homomorphism, `pure g <*> pure x = pure (g x)`; and interchange, `u <*> pure y = pure ($ y) <*> u`.
There are several laws that `Applicative` instances should satisfy ∗, but only one is crucial to developing intuition, because it specifies how `Applicative` should relate to `Functor` (the other four mostly specify the exact sense in which `pure` deserves its name). This law is:
`fmap g x = pure g <*> x`
It says that mapping a pure function `g` over a context `x` is the same as first injecting `g` into a context with `pure`, and then applying it to `x` with `(<*>)`. In other words, we can decompose `fmap` into two more atomic operations: injection into a context, and application within a context. The `Control.Applicative` module also defines `(<$>)` as a synonym for `fmap`, so the above law can also be expressed as:
`g <$> x = pure g <*> x`.
## 4.3 Instances
Most of the standard types which are instances of `Functor` are also instances of `Applicative`.
`Maybe` can easily be made an instance of `Applicative`; writing such an instance is left as an exercise for the reader.
The list type constructor `[]` can actually be made an instance of `Applicative` in two ways; essentially, it comes down to whether we want to think of lists as ordered collections of elements, or as contexts representing multiple results of a nondeterministic computation (see Wadler’s How to replace failure by a list of successes).
Let’s first consider the collection point of view. Since there can only be one instance of a given type class for any particular type, one or both of the list instances of `Applicative` need to be defined for a `newtype` wrapper; as it happens, the nondeterministic computation instance is the default, and the collection instance is defined in terms of a `newtype` called `ZipList`. This instance is:
```newtype ZipList a = ZipList { getZipList :: [a] }
instance Applicative ZipList where
pure = undefined -- exercise
(ZipList gs) <*> (ZipList xs) = ZipList (zipWith ($) gs xs)```
To apply a list of functions to a list of inputs with `(<*>)`, we just match up the functions and inputs elementwise, and produce a list of the resulting outputs. In other words, we “zip” the lists together with function application, `($)`; hence the name `ZipList`. As an exercise, determine the correct definition of `pure`—there is only one implementation that satisfies the law (see section “Laws”).
The other `Applicative` instance for lists, based on the nondeterministic computation point of view, is:
```instance Applicative [] where
pure x = [x]
gs <*> xs = [ g x | g <- gs, x <- xs ]```
Instead of applying functions to inputs pairwise, we apply each function to all the inputs in turn, and collect all the results in a list.
Now we can write nondeterministic computations in a natural style. To add the numbers `3` and `4` deterministically, we can of course write `(+) 3 4`. But suppose instead of `3` we have a nondeterministic computation that might result in `2`, `3`, or `4`; then we can write
`pure (+) <*> [2,3,4] <*> pure 4`
or, more idiomatically,
`(+) <$> [2,3,4] <*> pure 4.`
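Both expressions evaluate to the list of all possible sums (a quick check, not from the original text):

```
> (+) <$> [2,3,4] <*> pure 4
[6,7,8]
```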
There are several other `Applicative` instances as well:
• `IO` is an instance of `Applicative`, and behaves exactly as you would think: when `g <$> m1 <*> m2 <*> m3` is executed, the effects from the `mi`’s happen in order from left to right.
• `((,) a)` is an `Applicative`, as long as `a` is an instance of `Monoid` (section Monoid). The `a` values are accumulated in parallel with the computation.
• The `Control.Applicative` module also defines the `Const` type constructor; a value of type `Const a b` simply contains an `a`. This is an instance of `Applicative` for any `Monoid a`; this instance becomes especially useful in conjunction with things like `Foldable` (section Foldable). A sketch of `Const` and its instances follows this list.
• The `WrappedMonad` and `WrappedArrow` newtypes make any instances of `Monad` (section Monad) or `Arrow` (section Arrow) respectively into instances of `Applicative`; as we will see when we study those type classes, both are strictly more expressive than `Applicative`, in the sense that the `Applicative` methods can be implemented in terms of their methods.
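Here is the promised sketch of `Const` and its instances; it is roughly what `Control.Applicative` provides, reproduced here for illustration:

```
import Control.Applicative (Applicative(..))
import Data.Monoid (Monoid(..))

newtype Const a b = Const { getConst :: a }

instance Functor (Const a) where
  fmap _ (Const x) = Const x            -- the function is simply ignored

instance Monoid a => Applicative (Const a) where
  pure _ = Const mempty                 -- nothing to store, so use the identity element
  Const x <*> Const y = Const (x `mappend` y)
```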
## 4.4 Intuition
McBride and Paterson’s paper introduces the notation $[[g \; x_1 \; x_2 \; \cdots \; x_n]]$ to denote function application in a computational context. If each $x_i$ has type $f \; t_i$ for some applicative functor $f$, and $g$ has type $t_1 \to t_2 \to \dots \to t_n \to t$, then the entire expression $[[g \; x_1 \; \cdots \; x_n]]$ has type $f \; t$. You can think of this as applying a function to multiple “effectful” arguments. In this sense, the double bracket notation is a generalization of `fmap`, which allows us to apply a function to a single argument in a context.
Why do we need `Applicative` to implement this generalization of `fmap`? Suppose we use `fmap` to apply `g` to the first parameter `x1`. Then we get something of type `f (t2 -> ... t)`, but now we are stuck: we can’t apply this function-in-a-context to the next argument with `fmap`. However, this is precisely what `(<*>)` allows us to do.
This suggests the proper translation of the idealized notation $[[g \; x_1 \; x_2 \; \cdots \; x_n]]$ into Haskell, namely
`g <$> x1 <*> x2 <*> ... <*> xn,`
recalling that `Control.Applicative` defines `(<$>)` as convenient infix shorthand for `fmap`. This is what is meant by an “applicative style”—effectful computations can still be described in terms of function application; the only difference is that we have to use the special operator `(<*>)` for application instead of simple juxtaposition.
## 4.5 Further reading
There are many other useful combinators in the standard libraries implemented in terms of `pure` and `(<*>)`: for example, `(*>)`, `(<*)`, `(<**>)`, `(<$)`, and so on (see haddock for Applicative). Judicious use of such secondary combinators can often make code using `Applicative`s much easier to read.
McBride and Paterson’s original paper is a treasure-trove of information and examples, as well as some perspectives on the connection between `Applicative` and category theory. Beginners will find it difficult to make it through the entire paper, but it is extremely well-motivated—even beginners will be able to glean something from reading as far as they are able.
∗ Introduced by an earlier paper that was since superseded by Push-pull functional reactive programming. —Geheimdienst, Nov 2011
Conal Elliott has been one of the biggest proponents of `Applicative`. For example, the Pan library for functional images and the reactive library for functional reactive programming (FRP) ∗ make key use of it; his blog also contains many examples of `Applicative` in action. Building on the work of McBride and Paterson, Elliott also built the TypeCompose library, which embodies the observation (among others) that `Applicative` types are closed under composition; therefore, `Applicative` instances can often be automatically derived for complex types built out of simpler ones.
Although the Parsec parsing library (paper) was originally designed for use as a monad, in its most common use cases an `Applicative` instance can be used to great effect; Bryan O’Sullivan’s blog post is a good starting point. If the extra power provided by `Monad` isn’t needed, it’s usually a good idea to use `Applicative` instead.
A couple other nice examples of `Applicative` in action include the ConfigFile and HSQL libraries and the formlets library.
# 5 Monad
It’s a safe bet that if you’re reading this article, you’ve heard of monads—although it’s quite possible you’ve never heard of `Applicative` before, or `Arrow`, or even `Monoid`. Why are monads such a big deal in Haskell? There are several reasons.
• Haskell does, in fact, single out monads for special attention by making them the framework in which to construct I/O operations.
• Haskell also singles out monads for special attention by providing a special syntactic sugar for monadic expressions: the `do`-notation.
• `Monad` has been around longer than various other abstract models of computation such as `Applicative` or `Arrow`.
• The more monad tutorials there are, the harder people think monads must be, and the more new monad tutorials are written by people who think they finally “get” monads (the monad tutorial fallacy).
I will let you judge for yourself whether these are good reasons.
In the end, despite all the hoopla, `Monad` is just another type class. Let’s take a look at its definition.
## 5.1 Definition
The type class declaration for `Monad` is:
```class Monad m where
return :: a -> m a
(>>=) :: m a -> (a -> m b) -> m b
(>>) :: m a -> m b -> m b
m >> n = m >>= \_ -> n
fail :: String -> m a```
The `Monad` type class is exported by the `Prelude`, along with a few standard instances. However, many utility functions are found in `Control.Monad`, and there are also several instances (such as `((->) e)`) defined in `Control.Monad.Instances`.
Let’s examine the methods in the `Monad` class one by one. The type of `return` should look familiar; it’s the same as `pure`. Indeed, `return` is `pure`, but with an unfortunate name. (Unfortunate, since someone coming from an imperative programming background might think that `return` is like the C or Java keyword of the same name, when in fact the similarities are minimal.) From a mathematical point of view, every monad is a pointed functor (indeed, an applicative functor), but for historical reasons, the `Monad` type class declaration unfortunately does not require this.
We can see that `(>>)` is a specialized version of `(>>=)`, with a default implementation given. It is only included in the type class declaration so that specific instances of `Monad` can override the default implementation of `(>>)` with a more efficient one, if desired. Also, note that although `_ >> n = n` would be a type-correct implementation of `(>>)`, it would not correspond to the intended semantics: the intention is that `m >> n` ignores the result of `m`, but not its effects.
The `fail` function is an awful hack that has no place in the `Monad` class; more on this later.
The only really interesting thing to look at—and what makes `Monad` strictly more powerful than `Pointed` or `Applicative`—is `(>>=)`, which is often called bind. An alternative definition of `Monad` could look like:
```class Applicative m => Monad' m where
(>>=) :: m a -> (a -> m b) -> m b```
We could spend a while talking about the intuition behind `(>>=)`—and we will. But first, let’s look at some examples.
## 5.2 Instances
Even if you don’t understand the intuition behind the `Monad` class, you can still create instances of it by just seeing where the types lead you. You may be surprised to find that this actually gets you a long way towards understanding the intuition; at the very least, it will give you some concrete examples to play with as you read more about the `Monad` class in general. The first few examples are from the standard `Prelude`; the remaining examples are from the monad transformer library (mtl).
• The simplest possible instance of `Monad` is `Identity`, which is described in Dan Piponi’s highly recommended blog post on The Trivial Monad. Despite being “trivial,” it is a great introduction to the `Monad` type class, and contains some good exercises to get your brain working.
• The next simplest instance of `Monad` is `Maybe`. We already know how to write `return`/`pure` for `Maybe`. So how do we write `(>>=)`? Well, let’s think about its type. Specializing for `Maybe`, we have
`(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b.`
If the first argument to `(>>=)` is `Just x`, then we have something of type `a` (namely, `x`), to which we can apply the second argument—resulting in a `Maybe b`, which is exactly what we wanted. What if the first argument to `(>>=)` is `Nothing`? In that case, we don’t have anything to which we can apply the `a -> Maybe b` function, so there’s only one thing we can do: yield `Nothing`. This instance is:
```instance Monad Maybe where
return = Just
(Just x) >>= g = g x
Nothing >>= _ = Nothing```
We can already get a bit of intuition as to what is going on here: if we build up a computation by chaining together a bunch of functions with `(>>=)`, as soon as any one of them fails, the entire computation will fail (because `Nothing >>= f` is `Nothing`, no matter what `f` is). The entire computation succeeds only if all the constituent functions individually succeed. So the `Maybe` monad models computations which may fail.
• The `Monad` instance for the list constructor `[]` is similar to its `Applicative` instance; I leave its implementation as an exercise. Follow the types!
• Of course, the `IO` constructor is famously a `Monad`, but its implementation is somewhat magical, and may in fact differ from compiler to compiler. It is worth emphasizing that the `IO` monad is the only monad which is magical. It allows us to build up, in an entirely pure way, values representing possibly effectful computations. The special value `main`, of type `IO ()`, is taken by the runtime and actually executed, producing actual effects. Every other monad is functionally pure, and requires no special compiler support. We often speak of monadic values as “effectful computations,” but this is because some monads allow us to write code as if it has side effects, when in fact the monad is hiding the plumbing which allows these apparent side effects to be implemented in a functionally pure way.
• As mentioned earlier, `((->) e)` is known as the reader monad, since it describes computations in which a value of type `e` is available as a read-only environment. It is worth trying to write a `Monad` instance for `((->) e)` yourself.
The `Control.Monad.Reader` module provides the `Reader e a` type, which is just a convenient `newtype` wrapper around `(e -> a)`, along with an appropriate `Monad` instance and some `Reader`-specific utility functions such as `ask` (retrieve the environment), `asks` (retrieve a function of the environment), and `local` (run a subcomputation under a different environment).
• The `Control.Monad.Writer` module provides the `Writer` monad, which allows information to be collected as a computation progresses. `Writer w a` is isomorphic to `(a,w)`, where the output value `a` is carried along with an annotation or “log” of type `w`, which must be an instance of `Monoid` (see section Monoid); the special function `tell` performs logging.
• The `Control.Monad.State` module provides the `State s a` type, a `newtype` wrapper around `s -> (a,s)`. Something of type `State s a` represents a stateful computation which produces an `a` but can access and modify the state of type `s` along the way. The module also provides `State`-specific utility functions such as `get` (read the current state), `gets` (read a function of the current state), `put` (overwrite the state), and `modify` (apply a function to the state). A small usage sketch follows this list.
• The `Control.Monad.Cont` module provides the `Cont` monad, which represents computations in continuation-passing style. It can be used to suspend and resume computations, and to implement non-local transfers of control, co-routines, other complex control structures—all in a functionally pure way. `Cont` has been called the “mother of all monads” because of its universal properties.
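As promised above, here is a small sketch of the `State` monad in action (not from the original article; it is written with `(>>=)` since `do` notation is only introduced later, and the names `tick` and `threeTicks` are invented for the example):

```
import Control.Monad.State

-- Return the current counter value and increment the state.
tick :: State Int Int
tick = get >>= \n -> put (n + 1) >> return n

-- Run tick three times, collecting the values produced.
threeTicks :: State Int [Int]
threeTicks =
  tick >>= \a ->
  tick >>= \b ->
  tick >>= \c ->
  return [a, b, c]

-- runState threeTicks 0 == ([0,1,2], 3)
```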
## 5.3 Intuition
Let’s look more closely at the type of `(>>=)`. The basic intuition is that it combines two computations into one larger computation. The first argument, `m a`, is the first computation. However, it would be boring if the second argument were just an `m b`; then there would be no way for the computations to interact with one another. So, the second argument to `(>>=)` has type `a -> m b`: a function of this type, given a result of the first computation, can produce a second computation to be run. In other words, `x >>= k` is a computation which runs `x`, and then uses the result(s) of `x` to decide what computation to run second, using the output of the second computation as the result of the entire computation.
Intuitively, it is this ability to use the output from previous computations to decide what computations to run next that makes `Monad` more powerful than `Applicative`. The structure of an `Applicative` computation is fixed, whereas the structure of a `Monad` computation can change based on intermediate results.
To see the increased power of `Monad` from a different point of view, let’s see what happens if we try to implement `(>>=)` in terms of `fmap`, `pure`, and `(<*>)`. We are given a value `x` of type `m a`, and a function `k` of type `a -> m b`, so the only thing we can do is apply `k` to `x`. We can’t apply it directly, of course; we have to use `fmap` to lift it over the `m`. But what is the type of `fmap k`? Well, it’s `m a -> m (m b)`. So after we apply it to `x`, we are left with something of type `m (m b)`—but now we are stuck; what we really want is an `m b`, but there’s no way to get there from here. We can add `m`’s using `pure`, but we have no way to collapse multiple `m`’s into one.
This ability to collapse multiple `m`’s is exactly the ability provided by the function `join :: m (m a) -> m a`, and it should come as no surprise that an alternative definition of `Monad` can be given in terms of `join`:
```class Applicative m => Monad'' m where
join :: m (m a) -> m a```
In fact, monads in category theory are defined in terms of `return`, `fmap`, and `join` (often called η, T, and μ in the mathematical literature). Haskell uses the equivalent formulation in terms of `(>>=)` instead of `join` since it is more convenient to use; however, sometimes it can be easier to think about `Monad` instances in terms of `join`, since it is a more “atomic” operation. (For example, `join` for the list monad is just `concat`.) An excellent exercise is to implement `(>>=)` in terms of `fmap` and `join`, and to implement `join` in terms of `(>>=)`.
## 5.4 Utility functions
The `Control.Monad` module provides a large number of convenient utility functions, all of which can be implemented in terms of the basic `Monad` operations (`return` and `(>>=)` in particular). We have already seen one of them, namely, `join`. We also mention some other noteworthy ones here; implementing these utility functions oneself is a good exercise. For a more detailed guide to these functions, with commentary and example code, see Henk-Jan van Tuyl’s tour.
∗ Still, it is unclear how this "bug" should be fixed. Making `Monad` require a `Functor` instance has some drawbacks, as mentioned in this 2011 mailing-list discussion. —Geheimdienst
• `liftM :: Monad m => (a -> b) -> m a -> m b`. This should be familiar; of course, it is just `fmap`. The fact that we have both `fmap` and `liftM` is an unfortunate consequence of the fact that the `Monad` type class does not require a `Functor` instance, even though mathematically speaking, every monad is a functor. However, `fmap` and `liftM` are essentially interchangeable, since it is a bug (in a social rather than technical sense) for any type to be an instance of `Monad` without also being an instance of `Functor` ∗.
• `ap :: Monad m => m (a -> b) -> m a -> m b` should also be familiar: it is equivalent to `(<*>)`, justifying the claim that the `Monad` interface is strictly more powerful than `Applicative`. We can make any `Monad` into an instance of `Applicative` by setting `pure = return` and `(<*>) = ap`.
• `sequence :: Monad m => [m a] -> m [a]` takes a list of computations and combines them into one computation which collects a list of their results. It is again something of a historical accident that `sequence` has a `Monad` constraint, since it can actually be implemented only in terms of `Applicative`. There is also an additional generalization of `sequence` to structures other than lists, which will be discussed in the section on `Traversable`.
• `replicateM :: Monad m => Int -> m a -> m [a]` is simply a combination of `replicate` and `sequence`.
• `when :: Monad m => Bool -> m () -> m ()` conditionally executes a computation, evaluating to its second argument if the test is `True`, and to `return ()` if the test is `False`. A collection of other sorts of monadic conditionals can be found in the IfElse package.
• `mapM :: Monad m => (a -> m b) -> [a] -> m [b]` maps its first argument over the second, and `sequence`s the results. The `forM` function is just `mapM` with its arguments reversed; it is called `forM` since it models generalized `for` loops: the list `[a]` provides the loop indices, and the function `a -> m b` specifies the “body” of the loop for each index.
• `(=<<) :: Monad m => (a -> m b) -> m a -> m b` is just `(>>=)` with its arguments reversed; sometimes this direction is more convenient since it corresponds more closely to function application.
• `(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c` is sort of like function composition, but with an extra `m` on the result type of each function, and the arguments swapped. We’ll have more to say about this operation later.
• The `guard` function is for use with instances of `MonadPlus`, which is discussed at the end of the `Monoid` section.
Many of these functions also have “underscored” variants, such as `sequence_` and `mapM_`; these variants throw away the results of the computations passed to them as arguments, using them only for their side effects.
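For a flavour of a few of these functions, here is a hypothetical GHCi session (`replicateM` lives in `Control.Monad`):

```
> sequence [Just 1, Just 2]
Just [1,2]
> sequence [Just 1, Nothing]
Nothing
> replicateM 2 [0,1]
[[0,0],[0,1],[1,0],[1,1]]
```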
## 5.5 Laws
There are several laws that instances of `Monad` should satisfy (see also the Monad laws wiki page). The standard presentation is:
```return a >>= k = k a
m >>= return = m
m >>= (\x -> k x >>= h) = (m >>= k) >>= h
fmap f xs = xs >>= return . f = liftM f xs```
The first and second laws express the fact that `return` behaves nicely: if we inject a value `a` into a monadic context with `return`, and then bind to `k`, it is the same as just applying `k` to `a` in the first place; if we bind a computation `m` to `return`, nothing changes. The third law essentially says that `(>>=)` is associative, sort of. The last law ensures that `fmap` and `liftM` are the same for types which are instances of both `Functor` and `Monad`—which, as already noted, should be every instance of `Monad`.
∗ I like to pronounce this operator “fish,” but that’s probably not the canonical pronunciation ...
However, the presentation of the above laws, especially the third, is marred by the asymmetry of `(>>=)`. It’s hard to look at the laws and see what they’re really saying. I prefer a much more elegant version of the laws, which is formulated in terms of `(>=>)` ∗. Recall that `(>=>)` “composes” two functions of type `a -> m b` and `b -> m c`. You can think of something of type `a -> m b` (roughly) as a function from `a` to `b` which may also have some sort of effect in the context corresponding to `m`. (Note that `return` is such a function.) `(>=>)` lets us compose these “effectful functions,” and we would like to know what properties `(>=>)` has. The monad laws reformulated in terms of `(>=>)` are:
```return >=> g = g
g >=> return = g
(g >=> h) >=> k = g >=> (h >=> k)```
∗ As fans of category theory will note, these laws say precisely that functions of type `a -> m b` are the arrows of a category with `(>=>)` as composition! Indeed, this is known as the Kleisli category of the monad `m`. It will come up again when we discuss `Arrow`s.
Ah, much better! The laws simply state that `return` is the identity of `(>=>)`, and that `(>=>)` is associative ∗. Working out the equivalence between these two formulations, given the definition `g >=> h = \x -> g x >>= h`, is left as an exercise.
There is also a formulation of the monad laws in terms of `fmap`, `return`, and `join`; for a discussion of this formulation, see the Haskell wikibook page on category theory.
## 5.6 `do` notation
Haskell’s special `do` notation supports an “imperative style” of programming by providing syntactic sugar for chains of monadic expressions. The genesis of the notation lies in realizing that something like `a >>= \x -> b >> c >>= \y -> d ` can be more readably written by putting successive computations on separate lines:
```a >>= \x ->
b >>
c >>= \y ->
d```
This emphasizes that the overall computation consists of four computations `a`, `b`, `c`, and `d`, and that `x` is bound to the result of `a`, and `y` is bound to the result of `c` (`b`, `c`, and `d` are allowed to refer to `x`, and `d` is allowed to refer to `y` as well). From here it is not hard to imagine a nicer notation:
```do { x <- a ;
b ;
y <- c ;
d
}```
(The curly braces and semicolons may optionally be omitted; the Haskell parser uses layout to determine where they should be inserted.) This discussion should make clear that `do` notation is just syntactic sugar. In fact, `do` blocks are recursively translated into monad operations (almost) like this:
``` do e ⇨ e
do { e; stmts } ⇨ e >> do { stmts }
do { v <- e; stmts } ⇨ e >>= \v -> do { stmts }
do { let decls; stmts} ⇨ let decls in do { stmts }
```
This is not quite the whole story, since `v` might be a pattern instead of a variable. For example, one can write
```do (x:xs) <- foo
bar x```
but what happens if `foo` produces an empty list? Well, remember that ugly `fail` function in the `Monad` type class declaration? That’s what happens. See section 3.14 of the Haskell Report for the full details. See also the discussion of `MonadPlus` and `MonadZero` in the section on other monoidal classes.
A final note on intuition: `do` notation plays very strongly to the “computational context” point of view rather than the “container” point of view, since the binding notation `x <- m` is suggestive of “extracting” a single `x` from `m` and doing something with it. But `m` may represent some sort of a container, such as a list or a tree; the meaning of `x <- m` is entirely dependent on the implementation of `(>>=)`. For example, if `m` is a list, `x <- m` actually means that `x` will take on each value from the list in turn.
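For instance (a small illustration, not from the original text), in the list monad each `<-` enumerates the possibilities in turn:

```
> do { x <- [1,2,3]; y <- [10,20]; return (x + y) }
[11,21,12,22,13,23]
```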
## 5.7 Monad transformers
One would often like to be able to combine two monads into one: for example, to have stateful, nondeterministic computations (`State` + `[]`), or computations which may fail and can consult a read-only environment (`Maybe` + `Reader`), and so on. Unfortunately, monads do not compose as nicely as applicative functors (yet another reason to use `Applicative` if you don’t need the full power that `Monad` provides), but some monads can be combined in certain ways.
The monad transformer library mtl provides a number of monad transformers, such as `StateT`, `ReaderT`, `ErrorT` (haddock), and (soon) `MaybeT`, which can be applied to other monads to produce a new monad with the effects of both. For example, `StateT s Maybe` is an instance of `Monad`; computations of type `StateT s Maybe a` may fail, and have access to a mutable state of type `s`. These transformers can be multiply stacked. One thing to keep in mind while using monad transformers is that the order of composition matters. For example, when a `StateT s Maybe a` computation fails, the state ceases being updated; on the other hand, the state of a `MaybeT (State s) a` computation may continue to be modified even after the computation has failed. (This may seem backwards, but it is correct. Monad transformers build composite monads “inside out”; for example, `MaybeT (State s) a` is isomorphic to `s -> Maybe (a, s)`. Lambdabot has an indispensable `@unmtl` command which you can use to “unpack” a monad transformer stack in this way.)
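To make the `StateT s Maybe` example concrete, here is a hedged sketch (the function name `decrement` is invented for illustration) of a computation that both updates state and may fail:

```
import Control.Monad.State
import Control.Monad.Trans (lift)

-- Decrement a counter, failing (via the inner Maybe monad) at zero.
decrement :: StateT Int Maybe ()
decrement = do
  n <- get
  if n > 0
    then put (n - 1)
    else lift Nothing

-- runStateT (decrement >> decrement) 2 == Just ((), 0)
-- runStateT (decrement >> decrement) 1 == Nothing   -- failure discards the state
```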
All monad transformers should implement the `MonadTrans` type class, defined in `Control.Monad.Trans`:
```class MonadTrans t where
lift :: Monad m => m a -> t m a```
It allows arbitrary computations in the base monad `m` to be “lifted” into computations in the transformed monad `t m`. (Note that type application associates to the left, just like function application, so `t m a = (t m) a`. As an exercise, you may wish to work out `t`’s kind, which is rather more interesting than most of the kinds we’ve seen up to this point.) However, you should only have to think about `MonadTrans` when defining your own monad transformers, not when using predefined ones.
∗ The only problem with this scheme is the quadratic number of instances required as the number of standard monad transformers grows—but as the current set of standard monad transformers seems adequate for most common use cases, this may not be that big of a deal.
There are also type classes such as `MonadState`, which provides state-specific methods like `get` and `put`, allowing you to conveniently use these methods not only with `State`, but with any monad which is an instance of `MonadState`—including `MaybeT (State s)`, `StateT s (ReaderT r IO)`, and so on. Similar type classes exist for `Reader`, `Writer`, `Cont`, `IO`, and others ∗.
There are two excellent references on monad transformers. Martin Grabmüller’s Monad Transformers Step by Step is a thorough description, with running examples, of how to use monad transformers to elegantly build up computations with various effects. Cale Gibbard’s article on how to use monad transformers is more practical, describing how to structure code using monad transformers to make writing it as painless as possible. Another good starting place for learning about monad transformers is a blog post by Dan Piponi.
## 5.8 MonadFix
The `MonadFix` class describes monads which support the special fixpoint operation `mfix :: (a -> m a) -> m a`, which allows the output of monadic computations to be defined via recursion. This is supported in GHC and Hugs by a special “recursive do” notation, `mdo`. For more information, see Levent Erkök’s thesis, Value Recursion in Monadic Computations.
## 5.9 Further reading
Philip Wadler was the first to propose using monads to structure functional programs. His paper is still a readable introduction to the subject.
Much of the monad transformer library mtl, including the `Reader`, `Writer`, `State`, and other monads, as well as the monad transformer framework itself, was inspired by Mark Jones’s classic paper Functional Programming with Overloading and Higher-Order Polymorphism. It’s still very much worth a read—and highly readable—after almost fifteen years.
There are, of course, numerous monad tutorials of varying quality.
A few of the best include Cale Gibbard’s Monads as containers and Monads as computation; Jeff Newbern’s All About Monads, a comprehensive guide with lots of examples; and Dan Piponi’s You Could Have Invented Monads!, which features great exercises. If you just want to know how to use `IO`, you could consult the Introduction to IO. Even this is just a sampling; the monad tutorials timeline is a more complete list. (All these monad tutorials have prompted parodies like think of a monad ... as well as other kinds of backlash like Monads! (and Why Monad Tutorials Are All Awful) or Abstraction, intuition, and the “monad tutorial fallacy”.)
Other good monad references which are not necessarily tutorials include Henk-Jan van Tuyl’s tour of the functions in `Control.Monad`, Dan Piponi’s field guide, and Tim Newsham’s What’s a Monad?. There are also many blog articles which have been written on various aspects of monads; a collection of links can be found under Blog articles/Monads.
One of the quirks of the `Monad` class and the Haskell type system is that it is not possible to straightforwardly declare `Monad` instances for types which require a class constraint on their data, even if they are monads from a mathematical point of view. For example, `Data.Set` requires an `Ord` constraint on its data, so it cannot be easily made an instance of `Monad`. A solution to this problem was first described by Eric Kidd, and later made into a library named rmonad by Ganesh Sittampalam and Peter Gavin.
There are many good reasons for eschewing `do` notation; some have gone so far as to consider it harmful (see the Do notation considered harmful wiki page).
Monads can be generalized in various ways; for an exposition of one possibility, see Robert Atkey’s paper on parameterized monads, or Dan Piponi’s Beyond Monads.
For the categorically inclined, monads can be viewed as monoids (From Monoids to Monads) and also as closure operators (Triples and Closure). Derek Elkins’s article in issue 13 of the Monad.Reader contains an exposition of the category-theoretic underpinnings of some of the standard `Monad` instances, such as `State` and `Cont`. There is also an alternative way to compose monads, using coproducts, as described by Lüth and Ghani, although this method has not (yet?) seen widespread use.
Links to many more research papers related to monads can be found under Research papers/Monads and arrows.
# 6 Monoid
A monoid is a set $S$ together with a binary operation $\oplus$ which combines elements from $S$. The $\oplus$ operator is required to be associative (that is, $(a \oplus b) \oplus c = a \oplus (b \oplus c)$, for any $a,b,c$ which are elements of $S$), and there must be some element of $S$ which is the identity with respect to $\oplus$. (If you are familiar with group theory, a monoid is like a group without the requirement that inverses exist.) For example, the natural numbers under addition form a monoid: the sum of any two natural numbers is a natural number; $(a+b)+c = a+(b+c)$ for any natural numbers $a$, $b$, and $c$; and zero is the additive identity. The integers under multiplication also form a monoid, as do natural numbers under $\max$, Boolean values under conjunction and disjunction, lists under concatenation, functions from a set to itself under composition ... Monoids show up all over the place, once you know to look for them.
## 6.1 Definition
The definition of the `Monoid` type class (defined in `Data.Monoid`; haddock) is:
```class Monoid a where
mempty :: a
mappend :: a -> a -> a
mconcat :: [a] -> a
mconcat = foldr mappend mempty```
The `mempty` value specifies the identity element of the monoid, and `mappend` is the binary operation. The default definition for `mconcat` “reduces” a list of elements by combining them all with `mappend`, using a right fold. It is only in the `Monoid` class so that specific instances have the option of providing an alternative, more efficient implementation; usually, you can safely ignore `mconcat` when creating a `Monoid` instance, since its default definition will work just fine.
The `Monoid` methods are rather unfortunately named; they are inspired by the list instance of `Monoid`, where indeed `mempty = []` and `mappend = (++)`, but this is misleading since many monoids have little to do with appending (see these Comments from OCaml Hacker Brian Hurt on the haskell-cafe mailing list).
## 6.2 Laws
Of course, every `Monoid` instance should actually be a monoid in the mathematical sense, which implies these laws:
```mempty `mappend` x = x
x `mappend` mempty = x
(x `mappend` y) `mappend` z = x `mappend` (y `mappend` z)```
## 6.3 Instances
There are quite a few interesting `Monoid` instances defined in `Data.Monoid`.
• `[a]` is a `Monoid`, with `mempty = []` and `mappend = (++)`.
It is not hard to check that `(x ++ y) ++ z = x ++ (y ++ z)` for any lists `x`, `y`, and `z`, and that the empty list is the identity: `[] ++ x = x ++ [] = x`.
• As noted previously, we can make a monoid out of any numeric type under either addition or multiplication. However, since we can’t have two instances for the same type, `Data.Monoid` provides two `newtype` wrappers, `Sum` and `Product`, with appropriate `Monoid` instances.
```> getSum (mconcat . map Sum $ [1..5])
15
> getProduct (mconcat . map Product $ [1..5])
120```
This example code is silly, of course; we could just write `sum [1..5]` and `product [1..5]`. Nevertheless, these instances are useful in more generalized settings, as we will see in the section `Foldable`.
• `Any` and `All` are `newtype` wrappers providing `Monoid` instances for `Bool` (under disjunction and conjunction, respectively).
• There are three instances for `Maybe`: a basic instance which lifts a `Monoid` instance for `a` to an instance for `Maybe a`, and two `newtype` wrappers `First` and `Last` for which `mappend` selects the first (respectively last) non-`Nothing` item.
• `Endo a` is a newtype wrapper for functions `a -> a`, which form a monoid under composition.
• There are several ways to “lift” `Monoid` instances to instances with additional structure. We have already seen that an instance for `a` can be lifted to an instance for `Maybe a`. There are also tuple instances: if `a` and `b` are instances of `Monoid`, then so is `(a,b)`, using the monoid operations for `a` and `b` in the obvious pairwise manner. Finally, if `a` is a `Monoid`, then so is the function type `e -> a` for any `e`; in particular, ``g `mappend` h`` is the function which applies both `g` and `h` to its argument and then combines the results using the underlying `Monoid` instance for `a`. This can be quite useful and elegant (see example).
• The type `Ordering = LT | EQ | GT` is a `Monoid`, defined in such a way that `mconcat (zipWith compare xs ys)` computes the lexicographic ordering of `xs` and `ys`. In particular, `mempty = EQ`, and `mappend` evaluates to its leftmost non-`EQ` argument (or `EQ` if both arguments are `EQ`). This can be used together with the function instance of `Monoid` to do some clever things (example); a small sketch follows this list.
• There are also `Monoid` instances for several standard data structures in the containers library (haddock), including `Map`, `Set`, and `Sequence`.
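Here is the promised sketch of the “clever things”: combining the function instance with the `Ordering` instance gives lexicographic comparisons almost for free (the name `comparePair` is invented for the example):

```
import Data.Monoid
import Data.Function (on)

-- Compare pairs by their first component, breaking ties with the second.
-- The outer mappend uses the function instance of Monoid;
-- the values being combined use the Ordering instance.
comparePair :: (Int, String) -> (Int, String) -> Ordering
comparePair = (compare `on` fst) `mappend` (compare `on` snd)

-- comparePair (1, "b") (1, "a") == GT
-- comparePair (0, "z") (1, "a") == LT
```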
`Monoid` is also used to enable several other type class instances. As noted previously, we can use `Monoid` to make `((,) e)` an instance of `Applicative`:
```instance Monoid e => Applicative ((,) e) where
pure x = (mempty, x)
(u, f) <*> (v, x) = (u `mappend` v, f x)```
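With this instance in scope (it is shown above for exposition; older versions of the standard libraries do not export it), one could evaluate, for example:

```
> ("hello ", (+1)) <*> ("world", 3)
("hello world",4)
```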
`Monoid` can be similarly used to make `((,) e)` an instance of `Monad` as well; this is known as the writer monad. As we’ve already seen, `Writer` and `WriterT` are a newtype wrapper and transformer for this monad, respectively.
`Monoid` also plays a key role in the `Foldable` type class (see section Foldable).
## 6.4 Other monoidal classes: Alternative, MonadPlus, ArrowPlus
The `Alternative` type class (haddock) is for `Applicative` functors which also have a monoid structure:
```class Applicative f => Alternative f where
empty :: f a
(<|>) :: f a -> f a -> f a```
Of course, instances of `Alternative` should satisfy the monoid laws.
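For instance, `Maybe` and `[]` are both instances of `Alternative`; a quick GHCi session (assuming `Control.Applicative` is imported):

```
> Just 3 <|> Nothing
Just 3
> Nothing <|> Just 5
Just 5
> [1,2] <|> [3,4]
[1,2,3,4]
```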
Likewise, `MonadPlus` (haddock) is for `Monad`s with a monoid structure:
```class Monad m => MonadPlus m where
mzero :: m a
mplus :: m a -> m a -> m a```
The `MonadPlus` documentation states that it is intended to model monads which also support “choice and failure”; in addition to the monoid laws, instances of `MonadPlus` are expected to satisfy
```mzero >>= f = mzero
v >> mzero = mzero```
which explains the sense in which `mzero` denotes failure. Since `mzero` should be the identity for `mplus`, the computation `m1 `mplus` m2` succeeds (evaluates to something other than `mzero`) if either `m1` or `m2` does; so `mplus` represents choice. The `guard` function can also be used with instances of `MonadPlus`; it requires a condition to be satisfied and fails (using `mzero`) if it is not. A simple example of a `MonadPlus` instance is `[]`, which is exactly the same as the `Monoid` instance for `[]`: the empty list represents failure, and list concatenation represents choice. In general, however, a `MonadPlus` instance for a type need not be the same as its `Monoid` instance; `Maybe` is an example of such a type. A great introduction to the `MonadPlus` type class, with interesting examples of its use, is Doug Auclair’s MonadPlus: What a Super Monad! in the Monad.Reader issue 11.
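For example, here is a small use of `guard` with the list instance (a sketch; it assumes only that `Control.Monad` is imported):

```
import Control.Monad (guard)

-- Keep only the even numbers between 1 and n; guard uses mzero ([] here)
-- to prune the branches where the condition fails.
evensUpTo :: Int -> [Int]
evensUpTo n = do
  x <- [1..n]
  guard (even x)
  return x

-- evensUpTo 10 == [2,4,6,8,10]
```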
There used to be a type class called `MonadZero` containing only `mzero`, representing monads with failure. The `do`-notation requires some notion of failure to deal with failing pattern matches. Unfortunately, `MonadZero` was scrapped in favor of adding the `fail` method to the `Monad` class. If we are lucky, someday `MonadZero` will be restored, and `fail` will be banished to the bit bucket where it belongs (see MonadPlus reform proposal). The idea is that any `do`-block which uses pattern matching (and hence may fail) would require a `MonadZero` constraint; otherwise, only a `Monad` constraint would be required.
Finally, `ArrowZero` and `ArrowPlus` (haddock) represent `Arrow`s (see below) with a monoid structure:
```class Arrow (~>) => ArrowZero (~>) where
zeroArrow :: b ~> c
class ArrowZero (~>) => ArrowPlus (~>) where
(<+>) :: (b ~> c) -> (b ~> c) -> (b ~> c)```
## 6.5 Further reading
Monoids have gotten a fair bit of attention recently, ultimately due to a blog post by Brian Hurt, in which he complained about the fact that the names of many Haskell type classes (`Monoid` in particular) are taken from abstract mathematics. This resulted in a long haskell-cafe thread arguing the point and discussing monoids in general.
∗ May its name live forever.
However, this was quickly followed by several blog posts about `Monoid` ∗. First, Dan Piponi wrote a great introductory post, [Haskell Monoids and their Uses](http://blog.sigfpe.com/2009/01/haskell-monoids-and-their-uses.html). This was quickly followed by Heinrich Apfelmus’s Monoids and Finger Trees, an accessible exposition of Hinze and Paterson’s classic paper on 2-3 finger trees, which makes very clever use of `Monoid` to implement an elegant and generic data structure. Dan Piponi then wrote two fascinating articles about using `Monoid`s (and finger trees): Fast Incremental Regular Expressions and Beyond Regular Expressions.
In a similar vein, David Place’s article on improving `Data.Map` in order to compute incremental folds (see the Monad Reader issue 11) is also a good example of using `Monoid` to generalize a data structure.
Some other interesting examples of `Monoid` use include [building elegant list sorting combinators](http://www.reddit.com/r/programming/comments/7cf4r/monoids_in_my_programming_language/c06adnx), collecting unstructured information, and a brilliant series of posts by Chung-Chieh Shan and Dylan Thurston using `Monoid`s to [elegantly solve a difficult combinatorial puzzle](http://conway.rutgers.edu/~ccshan/wiki/blog/posts/WordNumbers1/) (followed by part 2, part 3, part 4).
As unlikely as it sounds, monads can actually be viewed as a sort of monoid, with `join` playing the role of the binary operation and `return` the role of the identity; see Dan Piponi’s blog post.
# 7 Foldable
The `Foldable` class, defined in the `Data.Foldable` module (haddock), abstracts over containers which can be “folded” into a summary value. This allows such folding operations to be written in a container-agnostic way.
## 7.1 Definition
The definition of the `Foldable` type class is:
```class Foldable t where
fold :: Monoid m => t m -> m
foldMap :: Monoid m => (a -> m) -> t a -> m
foldr :: (a -> b -> b) -> b -> t a -> b
foldl :: (a -> b -> a) -> a -> t b -> a
foldr1 :: (a -> a -> a) -> t a -> a
foldl1 :: (a -> a -> a) -> t a -> a```
This may look complicated, but in fact, to make a `Foldable` instance you only need to implement one method: your choice of `foldMap` or `foldr`. All the other methods have default implementations in terms of these, and are presumably included in the class in case more efficient implementations can be provided.
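To get a feel for how this works, here is one way to recover `foldr` from `foldMap` (using the `Endo` monoid from the section on `Monoid`), and `foldMap` from `foldr`; these are close in spirit to the defaults actually used by `Data.Foldable`:

```
import Prelude hiding (foldr)
import Data.Foldable (Foldable, foldMap, foldr)
import Data.Monoid

-- foldr via foldMap: turn each element into an endomorphism b -> b,
-- compose them all with mappend, and apply the composite to z.
foldrViaFoldMap :: Foldable t => (a -> b -> b) -> b -> t a -> b
foldrViaFoldMap f z t = appEndo (foldMap (Endo . f) t) z

-- foldMap via foldr.
foldMapViaFoldr :: (Foldable t, Monoid m) => (a -> m) -> t a -> m
foldMapViaFoldr g = foldr (\a m -> g a `mappend` m) mempty
```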
## 7.2 Instances and examples
The type of `foldMap` should make it clear what it is supposed to do: given a way to convert the data in a container into a `Monoid` (a function `a -> m`) and a container of `a`’s (`t a`), `foldMap` provides a way to iterate over the entire contents of the container, converting all the `a`’s to `m`’s and combining all the `m`’s with `mappend`. The following code shows two examples: a simple implementation of `foldMap` for lists, and a binary tree example provided by the `Foldable` documentation.
```instance Foldable [] where
foldMap g = mconcat . map g
data Tree a = Empty | Leaf a | Node (Tree a) a (Tree a)
instance Foldable Tree where
foldMap f Empty = mempty
foldMap f (Leaf x) = f x
foldMap f (Node l k r) = foldMap f l ++ f k ++ foldMap f r
where (++) = mappend```
The `foldr` function has a type similar to the `foldr` found in the `Prelude`, but more general, since the `foldr` in the `Prelude` works only on lists.
The `Foldable` module also provides instances for `Maybe` and `Array`; additionally, many of the data structures found in the standard containers library (for example, `Map`, `Set`, `Tree`, and `Sequence`) provide their own `Foldable` instances.
## 7.3 Derived folds
Given an instance of `Foldable`, we can write generic, container-agnostic functions such as:
```-- Compute the size of any container.
containerSize :: Foldable f => f a -> Int
containerSize = getSum . foldMap (const (Sum 1))
-- Compute a list of elements of a container satisfying a predicate.
filterF :: Foldable f => (a -> Bool) -> f a -> [a]
filterF p = foldMap (\a -> if p a then [a] else [])
-- Get a list of all the Strings in a container which include the
-- letter a.
aStrings :: Foldable f => f String -> [String]
aStrings = filterF (elem 'a')```
The `Foldable` module also provides a large number of predefined folds, many of which are generalized versions of `Prelude` functions of the same name that only work on lists: `concat`, `concatMap`, `and`, `or`, `any`, `all`, `sum`, `product`, `maximum`(`By`), `minimum`(`By`), `elem`, `notElem`, and `find`. The reader may enjoy coming up with elegant implementations of these functions using `fold` or `foldMap` and appropriate `Monoid` instances.
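To give the flavor, here are possible definitions of a few of them (one sketch among many; the library’s actual implementations may differ):

```
import Prelude hiding (concat, and, sum, elem)
import Data.Foldable (Foldable, fold, foldMap)
import Data.Monoid

concat :: Foldable t => t [a] -> [a]
concat = fold

and :: Foldable t => t Bool -> Bool
and = getAll . foldMap All

sum :: (Foldable t, Num a) => t a -> a
sum = getSum . foldMap Sum

elem :: (Foldable t, Eq a) => a -> t a -> Bool
elem x = getAny . foldMap (Any . (== x))
```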
There are also generic functions that work with `Applicative` or `Monad` instances to generate some sort of computation from each element in a container, and then perform all the side effects from those computations, discarding the results: `traverse_`, `sequenceA_`, and others. The results must be discarded because the `Foldable` class is too weak to specify what to do with them: we cannot, in general, make an arbitrary `Applicative` or `Monad` instance into a `Monoid`. If we do have an `Applicative` or `Monad` with a monoid structure—that is, an `Alternative` or a `MonadPlus`—then we can use the `asum` or `msum` functions, which can combine the results as well. Consult the `Foldable` documentation for more details on any of these functions.
Note that the `Foldable` operations always forget the structure of the container being folded. If we start with a container of type `t a` for some `Foldable t`, then `t` will never appear in the output type of any operations defined in the `Foldable` module. Many times this is exactly what we want, but sometimes we would like to be able to generically traverse a container while preserving its structure—and this is exactly what the `Traversable` class provides, which will be discussed in the next section.
## 7.4 Further reading
The `Foldable` class had its genesis in McBride and Paterson’s paper introducing `Applicative`, although it has been fleshed out quite a bit from the form in the paper.
An interesting use of `Foldable` (as well as `Traversable`) can be found in Janis Voigtländer’s paper Bidirectionalization for free!.
# 8 Traversable
## 8.1 Definition
The `Traversable` type class, defined in the `Data.Traversable` module (haddock), is:
```class (Functor t, Foldable t) => Traversable t where
traverse :: Applicative f => (a -> f b) -> t a -> f (t b)
sequenceA :: Applicative f => t (f a) -> f (t a)
mapM :: Monad m => (a -> m b) -> t a -> m (t b)
sequence :: Monad m => t (m a) -> m (t a)```
As you can see, every `Traversable` is also a foldable functor. Like `Foldable`, there is a lot in this type class, but making instances is actually rather easy: one need only implement `traverse` or `sequenceA`; the other methods all have default implementations in terms of these functions. A good exercise is to figure out what the default implementations should be: given either `traverse` or `sequenceA`, how would you define the other three methods? (Hint for `mapM`: `Control.Applicative` exports the `WrapMonad` newtype, which makes any `Monad` into an `Applicative`. The `sequence` function can be implemented in terms of `mapM`.)
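For readers who would rather check their answers than work them out, here is one consistent set of definitions, written as standalone functions in terms of `traverse` (the class defaults are essentially these, together with `traverse f = sequenceA . fmap f` in the other direction):

```
import Control.Applicative (Applicative, WrappedMonad(..))
import Data.Traversable (Traversable, traverse)

sequenceA' :: (Traversable t, Applicative f) => t (f a) -> f (t a)
sequenceA' = traverse id

-- WrapMonad turns the Monad m into an Applicative, so that traverse applies.
mapM' :: (Traversable t, Monad m) => (a -> m b) -> t a -> m (t b)
mapM' f = unwrapMonad . traverse (WrapMonad . f)

sequence' :: (Traversable t, Monad m) => t (m a) -> m (t a)
sequence' = mapM' id
```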
## 8.2 Intuition
The key method of the `Traversable` class, and the source of its unique power, is `sequenceA`. Consider its type:
`sequenceA :: Applicative f => t (f a) -> f (t a)`
This answers the fundamental question: when can we commute two functors? For example, can we turn a tree of lists into a list of trees? (Answer: yes, in two ways. Figuring out what they are, and why, is left as an exercise. A much more challenging question is whether a list of trees can be turned into a tree of lists.)
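A couple of quick GHCi examples of such commuting, using the list and `Maybe` functors (assuming `sequenceA` from `Data.Traversable` is in scope):

```
> sequenceA [[1,2],[3,4]]
[[1,3],[1,4],[2,3],[2,4]]
> sequenceA [Just 1, Just 2]
Just [1,2]
> sequenceA [Just 1, Nothing]
Nothing
```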
The ability to compose two monads depends crucially on this ability to commute functors. Intuitively, if we want to build a composed monad `M a = m (n a)` out of monads `m` and `n`, then to be able to implement `join :: M (M a) -> M a`, that is, `join :: m (n (m (n a))) -> m (n a)`, we have to be able to commute the `n` past the `m` to get `m (m (n (n a)))`, and then we can use the `join`s for `m` and `n` to produce something of type `m (n a)`. See Mark Jones’s paper for more details.
## 8.3 Instances and examples
What’s an example of a `Traversable` instance? The following code shows an example instance for the same `Tree` type used as an example in the previous `Foldable` section. It is instructive to compare this instance with a `Functor` instance for `Tree`, which is also shown.
```data Tree a = Empty | Leaf a | Node (Tree a) a (Tree a)
instance Traversable Tree where
traverse g Empty = pure Empty
traverse g (Leaf x) = Leaf <$> g x
traverse g (Node l x r) = Node <$> traverse g l
<*> g x
<*> traverse g r
instance Functor Tree where
fmap g Empty = Empty
fmap g (Leaf x) = Leaf $ g x
fmap g (Node l x r) = Node (fmap g l)
(g x)
(fmap g r)```
It should be clear that the `Traversable` and `Functor` instances for `Tree` are almost identical; the only difference is that the `Functor` instance involves normal function application, whereas the applications in the `Traversable` instance take place within an `Applicative` context, using `(<$>)` and `(<*>)`. In fact, this will be true for any type.
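As a quick illustration (assuming the instances above are in scope and a `Show` instance has been derived for `Tree`), traversing with a `Maybe`-producing function succeeds only if it succeeds at every element:

```
> traverse (\x -> if even x then Just x else Nothing) (Node (Leaf 2) 4 (Leaf 6))
Just (Node (Leaf 2) 4 (Leaf 6))
> traverse (\x -> if even x then Just x else Nothing) (Node (Leaf 2) 3 (Leaf 6))
Nothing
```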
Any `Traversable` functor is also `Foldable`, and a `Functor`. We can see this not only from the class declaration, but by the fact that we can implement the methods of both classes given only the `Traversable` methods. A good exercise is to implement `fmap` and `foldMap` using only the `Traversable` methods; the implementations are surprisingly elegant. The `Traversable` module provides these implementations as `fmapDefault` and `foldMapDefault`.
The standard libraries provide a number of `Traversable` instances, including instances for `[]`, `Maybe`, `Map`, `Tree`, and `Sequence`. Notably, `Set` is not `Traversable`, although it is `Foldable`.
## 8.4 Further reading
The `Traversable` class also had its genesis in [McBride and Paterson’s `Applicative` paper](http://www.soi.city.ac.uk/~ross/papers/Applicative.html), and is described in more detail in Gibbons and Oliveira, The Essence of the Iterator Pattern, which also contains a wealth of references to related work.
# 9 Category
`Category` is another fairly new addition to the Haskell standard libraries; you may or may not have it installed depending on the version of your `base` package. It generalizes the notion of function composition to general “morphisms.”
The definition of the `Category` type class (from `Control.Category`—haddock) is shown below. For ease of reading, note that I have used an infix type constructor `(~>)`, much like the infix function type constructor `(->)`. This syntax is not part of Haskell 98. The second definition shown is the one used in the standard libraries. For the remainder of the article, I will use the infix type constructor `(~>)` for `Category` as well as `Arrow`.
```class Category (~>) where
id :: a ~> a
(.) :: (b ~> c) -> (a ~> b) -> (a ~> c)
-- The same thing, with a normal (prefix) type constructor
class Category cat where
id :: cat a a
(.) :: cat b c -> cat a b -> cat a c```
Note that an instance of `Category` should be a type constructor which takes two type arguments, that is, something of kind `* -> * -> *`. It is instructive to imagine the type constructor variable `cat` replaced by the function constructor `(->)`: indeed, in this case we recover precisely the familiar identity function `id` and function composition operator `(.)` defined in the standard `Prelude`.
Of course, the `Category` module provides exactly such an instance of `Category` for `(->)`. But it also provides one other instance, shown below, which should be familiar from the previous discussion of the `Monad` laws. `Kleisli m a b`, as defined in the `Control.Arrow` module, is just a `newtype` wrapper around `a -> m b`.
```newtype Kleisli m a b = Kleisli { runKleisli :: a -> m b }
instance Monad m => Category (Kleisli m) where
id = Kleisli return
Kleisli g . Kleisli h = Kleisli (h >=> g)```
The only law that `Category` instances should satisfy is that `id` and `(.)` should form a monoid—that is, `id` should be the identity of `(.)`, and `(.)` should be associative.
Finally, the `Category` module exports two additional operators: `(<<<)`, which is just a synonym for `(.)`, and `(>>>)`, which is `(.)` with its arguments reversed. (In previous versions of the libraries, these operators were defined as part of the `Arrow` class.)
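Here is a tiny GHCi session exercising both instances (assuming `Control.Arrow` is imported, which also re-exports `(>>>)` from `Control.Category`):

```
> ((+1) >>> (*2)) 3
8
> runKleisli (Kleisli (\x -> [x, x+1]) >>> Kleisli (\y -> [y, y*10])) 5
[5,50,6,60]
```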
## 9.1 Further reading
The name `Category` is a bit misleading, since the `Category` class cannot represent arbitrary categories, but only categories whose objects are objects of `Hask`, the category of Haskell types. For a more general treatment of categories within Haskell, see the category-extras package. For more about category theory in general, see the excellent Haskell wikibook page, Steve Awodey’s new book, Benjamin Pierce’s Basic category theory for computer scientists, or Barr and Wells’s category theory lecture notes. Benjamin Russell’s blog post is another good source of motivation and category theory links. You certainly don’t need to know any category theory to be a successful and productive Haskell programmer, but it does lend itself to much deeper appreciation of Haskell’s underlying theory.
# 10 Arrow
The `Arrow` class represents another abstraction of computation, in a similar vein to `Monad` and `Applicative`. However, unlike `Monad` and `Applicative`, whose types only reflect their output, the type of an `Arrow` computation reflects both its input and output. Arrows generalize functions: if `(~>)` is an instance of `Arrow`, a value of type `b ~> c` can be thought of as a computation which takes values of type `b` as input, and produces values of type `c` as output. In the `(->)` instance of `Arrow` this is just a pure function; in general, however, an arrow may represent some sort of “effectful” computation.
## 10.1 Definition
The definition of the `Arrow` type class, from `Control.Arrow` (haddock), is:
```class Category (~>) => Arrow (~>) where
arr :: (b -> c) -> (b ~> c)
first :: (b ~> c) -> ((b, d) ~> (c, d))
second :: (b ~> c) -> ((d, b) ~> (d, c))
(***) :: (b ~> c) -> (b' ~> c') -> ((b, b') ~> (c, c'))
(&&&) :: (b ~> c) -> (b ~> c') -> (b ~> (c, c'))```
∗ In versions of the `base` package prior to version 4, there is no `Category` class, and the `Arrow` class includes the arrow composition operator `(>>>)`. It also includes `pure` as a synonym for `arr`, but this was removed since it conflicts with the `pure` from `Applicative`.
The first thing to note is the `Category` class constraint, which means that we get identity arrows and arrow composition for free: given two arrows `g :: b ~> c` and `h :: c ~> d`, we can form their composition `g >>> h :: b ~> d` ∗.
As should be a familiar pattern by now, the only methods which must be defined when writing a new instance of `Arrow` are `arr` and `first`; the other methods have default definitions in terms of these, but are included in the `Arrow` class so that they can be overridden with more efficient implementations if desired.
## 10.2 Intuition
Let’s look at each of the arrow methods in turn. Ross Paterson’s web page on arrows has nice diagrams which can help build intuition.
• The `arr` function takes any function `b -> c` and turns it into a
generalized arrow `b ~> c`. The `arr` method justifies the claim that arrows generalize functions, since it says that we can treat any function as an arrow. It is intended that the arrow `arr g` is “pure” in the sense that it only computes `g` and has no “effects” (whatever that might mean for any particular arrow type).
• The `first` method turns any arrow from `b` to `c` into an arrow
from `(b,d)` to `(c,d)`. The idea is that `first g` uses `g` to process the first element of a tuple, and lets the second element pass through unchanged. For the function instance of `Arrow`, of course, `first g (x,y) = (g x, y)`.
• The `second` function is similar to `first`, but with the elements of the
tuples swapped. Indeed, it can be defined in terms of `first` using an auxiliary function `swap`, defined by `swap (x,y) = (y,x)`.
• The `(***)` operator is “parallel composition” of arrows: it takes two
arrows and makes them into one arrow on tuples, which has the behavior of the first arrow on the first element of a tuple, and the behavior of the second arrow on the second element. The mnemonic is that `g *** h` is the product (hence `*`) of `g` and `h`. For the function instance of `Arrow`, we define `(g *** h) (x,y) = (g x, h y)`. The default implementation of `(***)` is in terms of `first`, `second`, and sequential arrow composition `(>>>)`. The reader may also wish to think about how to implement `first` and `second` in terms of `(***)`.
• The `(&&&)` operator is “fanout composition” of arrows: it takes two arrows
`g` and `h` and makes them into a new arrow `g &&& h` which supplies its input as the input to both `g` and `h`, returning their results as a tuple. The mnemonic is that `g &&& h` performs both `g` and `h` (hence `&`) on its input. For functions, we define `(g &&& h) x = (g x, h x)`.
## 10.3 Instances
The `Arrow` library itself only provides two `Arrow` instances, both of which we have already seen: `(->)`, the normal function constructor, and `Kleisli m`, which makes functions of type `a -> m b` into `Arrow`s for any `Monad m`. These instances are:
```instance Arrow (->) where
arr g = g
first g (x,y) = (g x, y)
newtype Kleisli m a b = Kleisli { runKleisli :: a -> m b }
instance Monad m => Arrow (Kleisli m) where
arr f = Kleisli (return . f)
first (Kleisli f) = Kleisli (\ ~(b,d) -> do c <- f b
return (c,d) )```
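Here is how the combinators from the previous section behave for these two instances (a short GHCi session, assuming `Control.Arrow` is imported):

```
> first (+1) (3, "x")
(4,"x")
> ((+1) *** negate) (3, 4)
(4,-4)
> ((+1) &&& (*2)) 3
(4,6)
> runKleisli (first (Kleisli (\x -> [x, x+1]))) (10, "unchanged")
[(10,"unchanged"),(11,"unchanged")]
```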
## 10.4 Laws
There are quite a few laws that instances of `Arrow` should satisfy ∗:
```arr id = id
arr (h . g) = arr g >>> arr h
first (arr g) = arr (g *** id)
first (g >>> h) = first g >>> first h
first g >>> arr (id *** h) = arr (id *** h) >>> first g
first g >>> arr fst = arr fst >>> g
first (first g) >>> arr assoc = arr assoc >>> first g
assoc ((x,y),z) = (x,(y,z))```
Note that this version of the laws is slightly different from the laws given in the original presentations of arrows, since several of the laws have now been subsumed by the `Category` laws (in particular, the requirements that `id` is the identity arrow and that `(>>>)` is associative). The laws shown here follow those in Paterson’s Programming with Arrows, which uses the `Category` class.
∗ Unless category-theory-induced insomnolence is your cup of tea.
The reader is advised not to lose too much sleep over the `Arrow` laws ∗, since it is not essential to understand them in order to program with arrows. There are also laws that `ArrowChoice`, `ArrowApply`, and `ArrowLoop` instances should satisfy; the interested reader should consult Paterson: Programming with Arrows.
## 10.5 ArrowChoice
Computations built using the `Arrow` class, like those built using the `Applicative` class, are rather inflexible: the structure of the computation is fixed at the outset, and there is no ability to choose between alternate execution paths based on intermediate results. The `ArrowChoice` class provides exactly such an ability:
```class Arrow (~>) => ArrowChoice (~>) where
left :: (b ~> c) -> (Either b d ~> Either c d)
right :: (b ~> c) -> (Either d b ~> Either d c)
(+++) :: (b ~> c) -> (b' ~> c') -> (Either b b' ~> Either c c')
(|||) :: (b ~> d) -> (c ~> d) -> (Either b c ~> d)```
A comparison of `ArrowChoice` to `Arrow` will reveal a striking parallel between `left`, `right`, `(+++)`, `(|||)` and `first`, `second`, `(***)`, `(&&&)`, respectively. Indeed, they are dual: `first`, `second`, `(***)`, and `(&&&)` all operate on product types (tuples), and `left`, `right`, `(+++)`, and `(|||)` are the corresponding operations on sum types. In general, these operations create arrows whose inputs are tagged with `Left` or `Right`, and can choose how to act based on these tags.
• If `g` is an arrow from `b` to `c`, then `left g` is an arrow
from `Either b d` to `Either c d`. On inputs tagged with `Left`, the `left g` arrow has the behavior of `g`; on inputs tagged with `Right`, it behaves as the identity.
• The `right` function, of course, is the mirror image of `left`. The arrow `right g`
has the behavior of `g` on inputs tagged with `Right`.
• The `(+++)` operator performs “multiplexing”: `g +++ h` behaves as `g`
on inputs tagged with `Left`, and as `h` on inputs tagged with `Right`. The tags are preserved. The `(+++)` operator is the sum (hence `+`) of two arrows, just as `(***)` is the product.
• The `(|||)` operator is “merge” or “fanin”: the arrow `g ||| h`
behaves as `g` on inputs tagged with `Left`, and `h` on inputs tagged with `Right`, but the tags are discarded (hence, `g` and `h` must have the same output type). The mnemonic is that `g ||| h` performs either `g` or `h` on its input. (All four combinators are exercised on the function instance in the short GHCi session after this list.)
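For the ordinary function instance these combinators specialize to familiar `Either`-shuffling operations; a short GHCi session (assuming `Control.Arrow` is imported):

```
> left (+1) (Left 3)
Left 4
> left (+1) (Right "untouched")
Right "untouched"
> ((+1) +++ length) (Right "hello")
Right 5
> ((+1) ||| length) (Right "hello")
5
```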
The `ArrowChoice` class allows computations to choose among a finite number of execution paths, based on intermediate results. The possible execution paths must be known in advance, and explicitly assembled with `(+++)` or `(|||)`. However, sometimes more flexibility is needed: we would like to be able to compute an arrow from intermediate results, and use this computed arrow to continue the computation. This is the power given to us by `ArrowApply`.
## 10.6 ArrowApply
The `ArrowApply` type class is:
```class Arrow (~>) => ArrowApply (~>) where
app :: (b ~> c, b) ~> c```
If we have computed an arrow as the output of some previous computation, then `app` allows us to apply that arrow to an input, producing its output as the output of `app`. As an exercise, the reader may wish to use `app` to implement an alternative “curried” version, `app2 :: b ~> ((b ~> c) ~> c)`.
This notion of being able to compute a new computation may sound familiar: this is exactly what the monadic bind operator `(>>=)` does. It should not particularly come as a surprise that `ArrowApply` and `Monad` are exactly equivalent in expressive power. In particular, `Kleisli m` can be made an instance of `ArrowApply`, and any instance of `ArrowApply` can be made a `Monad` (via the `newtype` wrapper `ArrowMonad`). As an exercise, the reader may wish to try implementing these instances:
```instance Monad m => ArrowApply (Kleisli m) where
app = -- exercise
newtype ArrowApply a => ArrowMonad a b = ArrowMonad (a () b)
instance ArrowApply a => Monad (ArrowMonad a) where
return = -- exercise
(ArrowMonad a) >>= k = -- exercise```
## 10.7 ArrowLoop
The `ArrowLoop` type class is:
```class Arrow a => ArrowLoop a where
loop :: a (b, d) (c, d) -> a b c
trace :: ((b,d) -> (c,d)) -> b -> c
trace f b = let (c,d) = f (b,d) in c```
It describes arrows that can use recursion to compute results, and is used to desugar the `rec` construct in arrow notation (described below).
Taken by itself, the type of the `loop` method does not seem to tell us much. Its intention, however, is a generalization of the `trace` function which is also shown. The `d` component of the first arrow’s output is fed back in as its own input. In other words, the arrow `loop g` is obtained by recursively “fixing” the second component of the input to `g`.
It can be a bit difficult to grok what the `trace` function is doing. How can `d` appear on the left and right sides of the `let`? Well, this is Haskell’s laziness at work. There is not space here for a full explanation; the interested reader is encouraged to study the standard `fix` function, and to read Paterson’s arrow tutorial.
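To see `loop` at work in the simplest case, the `(->)` instance (for which `loop` is essentially `trace`), here is a small GHCi example in which the function to be applied is itself produced as part of the output and fed back in as input (this relies on the `ArrowLoop (->)` instance exported by `Control.Arrow`):

```
> loop (\(x, f) -> (f x, (*2))) 5
10
```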
## 10.8 Arrow notation
Programming directly with the arrow combinators can be painful, especially when writing complex computations which need to retain simultaneous reference to a number of intermediate results. With nothing but the arrow combinators, such intermediate results must be kept in nested tuples, and it is up to the programmer to remember which intermediate results are in which components, and to swap, reassociate, and generally mangle tuples as necessary. This problem is solved by the special arrow notation supported by GHC, similar to `do` notation for monads, that allows names to be assigned to intermediate results while building up arrow computations. An example arrow implemented using arrow notation, taken from Paterson, is:
```class ArrowLoop (~>) => ArrowCircuit (~>) where
delay :: b -> (b ~> b)
counter :: ArrowCircuit (~>) => Bool ~> Int
counter = proc reset -> do
rec output <- idA -< if reset then 0 else next
next <- delay 0 -< output + 1
idA -< output```
This arrow is intended to represent a recursively defined counter circuit with a reset line.
There is not space here for a full explanation of arrow notation; the interested reader should consult [Paterson’s paper introducing the notation](http://www.soi.city.ac.uk/~ross/papers/notation.html), or his later [tutorial which presents a simplified version](http://www.soi.city.ac.uk/~ross/papers/fop.html).
## 10.9 Further reading
An excellent starting place for the student of arrows is the arrows web page, which contains an introduction and many references. Some key papers on arrows include Hughes’s original paper introducing arrows, Generalising monads to arrows, and Paterson’s paper on arrow notation.
Both Hughes and Paterson later wrote accessible tutorials intended for a broader audience: Paterson: Programming with Arrows and Hughes: Programming with Arrows.
Although Hughes’s goal in defining the `Arrow` class was to generalize `Monad`s, and it has been said that `Arrow` lies “between `Applicative` and `Monad`” in power, they are not directly comparable. The precise relationship remained in some confusion until analyzed by Lindley, Wadler, and Yallop, who also invented a new calculus of arrows, based on the lambda calculus, which considerably simplifies the presentation of the arrow laws (see The arrow calculus).
Some examples of `Arrow`s include Yampa, the Haskell XML Toolkit, and the functional GUI library Grapefruit.
Some extensions to arrows have been explored; for example, the `BiArrow`s of Alimarine et al., for two-way instead of one-way computation.
The Haskell wiki has links to many additional research papers relating to `Arrow`s.
# 11 Comonad
The final type class we will examine is `Comonad`. The `Comonad` class is the categorical dual of `Monad`; that is, `Comonad` is like `Monad` but with all the function arrows flipped. It is not actually in the standard Haskell libraries, but it has seen some interesting uses recently, so we include it here for completeness.
## 11.1 Definition
The `Comonad` type class, defined in the `Control.Comonad` module of the category-extras library, is:
```class Functor f => Copointed f where
extract :: f a -> a
class Copointed w => Comonad w where
duplicate :: w a -> w (w a)
extend :: (w a -> b) -> w a -> w b```
As you can see, `extract` is the dual of `return`, `duplicate` is the dual of `join`, and `extend` is the dual of `(>>=)` (although its arguments are in a different order). The definition of `Comonad` is a bit redundant (after all, the `Monad` class does not need `join`), but this is so that a `Comonad` can be defined by `fmap`, `extract`, and either `duplicate` or `extend`. Each has a default implementation in terms of the other.
A prototypical example of a `Comonad` instance is:
```-- Infinite lazy streams
data Stream a = Cons a (Stream a)
instance Functor Stream where
fmap g (Cons x xs) = Cons (g x) (fmap g xs)
instance Copointed Stream where
extract (Cons x _) = x
-- 'duplicate' is like the list function 'tails'
-- 'extend' computes a new Stream from an old, where the element
-- at position n is computed as a function of everything from
-- position n onwards in the old Stream
instance Comonad Stream where
duplicate s@(Cons x xs) = Cons s (duplicate xs)
extend g s@(Cons x xs) = Cons (g s) (extend g xs)
-- = fmap g (duplicate s)```
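As a small example of using this instance, `extend` can compute a “windowed” view of a stream; here each element is replaced by the sum of itself and its successor (a sketch building on the definitions above, with a hypothetical helper `takeS` to observe finite prefixes):

```
-- Sum each element with its successor.
sumWithNext :: Num a => Stream a -> Stream a
sumWithNext = extend (\(Cons x (Cons y _)) -> x + y)

-- Observe a finite prefix of a stream.
takeS :: Int -> Stream a -> [a]
takeS 0 _           = []
takeS n (Cons x xs) = x : takeS (n-1) xs

-- The stream of natural numbers 1, 2, 3, ...
nats :: Stream Integer
nats = go 1 where go n = Cons n (go (n+1))

-- takeS 5 (sumWithNext nats) == [3,5,7,9,11]
```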
## 11.2 Further reading
Dan Piponi explains in a blog post what [cellular automata have to do with comonads](http://blog.sigfpe.com/2006/12/evaluating-cellular-automata-is.html). In another blog post, Conal Elliott has examined [a comonadic formulation of functional reactive programming](http://conal.net/blog/posts/functional-interactive-behavior/). Sterling Clover’s blog post Comonads in everyday life explains the relationship between comonads and zippers, and how comonads can be used to design a menu system for a web site.
Uustalu and Vene have a number of papers exploring ideas related to comonads and functional programming:
# 12 Acknowledgements
A special thanks to all of those who taught me about standard Haskell type classes and helped me develop good intuition for them, particularly Jules Bean (quicksilver), Derek Elkins (ddarius), Conal Elliott (conal), Cale Gibbard (Cale), David House, Dan Piponi (sigfpe), and Kevin Reid (kpreid).
I also thank the many people who provided a mountain of helpful feedback and suggestions on a first draft of this article: David Amos, Kevin Ballard, Reid Barton, Doug Beardsley, Joachim Breitner, Andrew Cave, David Christiansen, Gregory Collins, Mark Jason Dominus, Conal Elliott, Yitz Gale, George Giorgidze, Steven Grady, Travis Hartwell, Steve Hicks, Philip Hölzenspies, Edward Kmett, Eric Kow, Serge Le Huitouze, Felipe Lessa, Stefan Ljungstrand, Eric Macaulay, Rob MacAulay, Simon Meier, Eric Mertens, Tim Newsham, Russell O’Connor, Conrad Parker, Walt Rorie-Baety, Colin Ross, Tom Schrijvers, Aditya Siram, C. Smith, Martijn van Steenbergen, Joe Thornber, Jared Updike, Rob Vollmert, Andrew Wagner, Louis Wasserman, and Ashley Yakeley, as well as a few only known to me by their IRC nicks: b_jonas, maltem, tehgeekmeister, and ziman. I have undoubtedly omitted a few inadvertently, which in no way diminishes my gratitude.
Finally, I would like to thank Wouter Swierstra for his fantastic work editing the Monad.Reader, and my wife Joyia for her patience during the process of writing the Typeclassopedia.
# 13 About the author
Brent Yorgey (blog, homepage) is a first-year Ph.D. student in the programming languages group at the University of Pennsylvania. He enjoys teaching, creating EDSLs, playing Bach fugues, musing upon category theory, and cooking tasty lambda-treats for the denizens of #haskell.
# 14 Colophon
The Typeclassopedia was written by Brent Yorgey and initially published in March 2009. Painstakingly converted to wiki syntax by User:Geheimdienst in November 2011, after asking Brent’s permission.
If something like this tex to wiki syntax conversion ever needs to be done again, here are some vim commands that helped:
• %s/\\section{\([^}]*\)}/=\1=/gc
• %s/\\subsection{\([^}]*\)}/==\1==/gc
• %s/^ *\\item /\r* /gc
• %s/---/—/gc
• %s/\\$\([^\$]*\)\\$/<math>\1\\ <\/math>/gc Appending “\ ” forces images to be rendered. Otherwise, Mediawiki would go back and forth between one font for short <math> tags, and another more Tex-like font for longer tags (containing more than a few characters).
• %s/|\([^|]*\)|/<code>\1<\/code>/gc
• %s/\\dots/.../gc
• %s/^\\label{.*\$//gc
• %s/\\emph{\([^}]*\)}/''\1''/gc
• %s/\\term{\([^}]*\)}/''\1''/gc
The biggest issue was taking the academic-paper-style citations and turning them into hyperlinks with an appropriate title and an appropriate target. In most cases there was an obvious thing to do (e.g. online PDFs of the cited papers or Citeseer entries). Sometimes, however, it’s less clear and you might want to check the original Typeclassopedia PDF with the original bibliography file.
To get all the citations into the main text, I first tried processing the source with Tex or Lyx. This didn’t work due to missing unfindable packages, syntax errors, and my general ineptitude with Tex.
I then went for the next best solution, which seemed to be extracting all instances of “\cite{something}” from the source and in that order pulling the referenced entries from the .bib file. This way you can go through the source file and sorted-references file in parallel, copying over what you need, without searching back and forth in the .bib file. I used:
• egrep -o "\cite\{[^\}]*\}" ~/typeclassopedia.lhs | cut -c 6- | tr "," "\n" | tr -d "}" > /tmp/citations
• for i in \$(cat /tmp/citations); do grep -A99 "\$i" ~/typeclassopedia.bib|egrep -B99 '^\}\$' -m1 ; done > ~/typeclasso-refs-sorted
http://www.physicsforums.com/showthread.php?p=4236982
## Why does mass warp spacetime?
Quote by Zmunkz I've heard Einstein attributed to this sort of interpretation on some of those science channel shows, like Through the Wormhole (or one of those). I've never been able to find a technical discussion of what this interpretation comes from, or if Einstein ever said anything like it
This is why the rules of the forum specify mainstream scientific references, not pop-sci references. Do a search for "Brian Greene" on this forum and you will get a taste of the headache that pop-sci treatments cause.
Although there is no real scientific answer to your question, string theory offers some interesting insights. [That mass and its equivalent, gravity, can affect space as well as the relative passage of time is one of the most profound findings of all time. It's downright 'crazy' based on our everyday intuitions.] In string theory, fundamental components of particles, strings, are vibrating energy modes. It turns out that these interact with the degrees of freedom, or geometrical dimensions, in which we all find ourselves. Different sizes and shapes of additional dimensions can be mathematically associated with different characteristics of strings: varying vibration patterns correspond to things like particle size, charge, spin that we observe macroscopically. So strings and geometry, spacetime, interact analogously to mass/energy in general relativity. This offers some insight, perhaps, into why not only mass, but also energy and momentum density, warps spacetime.
It was described more-or-less as above: every single object has a "total velocity" of c through space and time. Photons move entirely along the space axis, and everything else has a vector with components in both space and time, changing in proportion according to relativity, but always maintaining a magnitude of c.
I've got Brian Greene's book FABRIC OF THE COSMOS where he explains acceleration using the above concepts. I found it useful as ONE perspective, but it seems unpopular here among some. I found it helpful when approaching light cones for the first time....I don't see much difference...
I find Greene's above description along the lines of the 'rubber sheet' analogy for gravity...[which Greene discusses right after in his book] or the 'balloon analogy' for cosmology, useful as a perspective, but they all come with limitations.
Quote by Zmunkz Perhaps we can dismiss RotatingFrame's delivery and treat it as a question: is there any validity to that interpretation?
Definitely. Depending on what this 4-velocity actually "means," if we manage to interpret it correctly, we're going to be able to predict Special Relativity.
Though this has nothing, or almost nothing, to do with gravity.
Quote by Rotating Frame A body at rest moves through time at the speed of light. An body in motion has some of it's energy diverted away from the time dimension, into the spatial dimensions, causing warps.
Brian Greene actually uses language like that, but (as you can see in his books) it doesn't have anything to do with "warps" (i.e. gravity). He uses it to explain time dilation and other SR phenomena. Here's a quote from "The fabric of the cosmos".
And just as Bart’s speed in the northward direction slowed down when he diverted some of his northward motion into eastward motion, the speed of the car through time slows down when it diverts some of its motion through time into motion through space. This means that the car’s progress through time slows down and therefore time elapses more slowly for the moving car and its driver than it elapses for you and everything else that remains stationary.
Quote by Zmunkz I've never been able to find a technical discussion of what this interpretation comes from, ... It was described more-or-less as above: every single object has a "total velocity" of c through space and time. ... is there any validity to that interpretation?
I think this way of looking at it was made popular by Brian Greene. He uses it in both "The elegant universe" and "The fabric of the cosmos". The technical explanation (in units such that c=1, and with a -+++ signature) is as follows.
In my own rest frame, my world line coincides with the time axis. So the tangent to my world line is in the 0 direction (axes numbered from 0 to 3, with "time" being 0). Every vector of the form
$$\begin{pmatrix}r\\ 0\\ 0\\ 0\end{pmatrix}$$ where r is a real number is a tangent vector to the world line. The tangent vector with Minkowski "norm" -1 (-c for those who don't set c=1) is called my four-velocity. I will denote its coordinate matrix in my own rest frame by u. We have
$$u=\begin{pmatrix}1\\ 0\\ 0\\ 0\end{pmatrix},\qquad u^2=u^T\eta u=\begin{pmatrix}1 & 0 & 0 & 0\end{pmatrix}\begin{pmatrix}-1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{pmatrix}\begin{pmatrix}1\\ 0\\ 0\\ 0\end{pmatrix}=-1.$$
Greene calls ##\sqrt{-u^2}## the speed through spacetime. We have ##\sqrt{-u^2}=1##. If we restore factors of c, this is ##\sqrt{-u^2}=c##. (Greene uses the metric signature +--- instead of -+++, so when he does this, he gets ##u^2=c^2##, and is therefore able to write the speed through spacetime as ##\sqrt{u^2}##. His ##u^2## is equal to my ##-u^2##).
Let's boost my four velocity to the rest frame of an observer who has velocity -v in my coordinate system. I should have velocity v in his.
$$u'=\Lambda(-v)u=\gamma\begin{pmatrix}1 & v^1 & v^2 & v^3\\ v^1 & * & * & *\\ v^2 & * & * & *\\ v^3 & * & * & *\end{pmatrix}\begin{pmatrix}1\\ 0\\ 0\\ 0\end{pmatrix}=\gamma\begin{pmatrix}1\\ v^1\\ v^2\\ v^3\end{pmatrix}.$$ The asterisks denote matrix elements that are irrelevant to what we're doing here. If anyone cares, they are the components of the 3×3 matrix
$$\frac{1}{\gamma}I+\left(1-\frac 1 \gamma\right)\frac{vv^T}{v^Tv}.$$ The velocity components can be calculated like this:
$$\frac{dx^i}{dt}=\frac{u'^i}{u'^0}=\frac{\gamma v^i}{\gamma}=v^i.$$ As expected, my velocity in the new coordinate system is minus the velocity of the boost. This result is the reason why the normalized tangent vector is called the four-velocity.
The world line is the range of a curve ##C:\mathbb R\to M## where M is Minkowski spacetime. Its representation in a global coordinate system ##x:M\to\mathbb R^4## is the curve ##x\circ C:\mathbb R\to\mathbb R^4##. The world line is said to be parametrized by proper time if the curve C that we use to represent it has the property that for each point p on the world line, the number ##\tau## such that ##C(\tau)=p##, is the proper time along the curve from C(0) to p. Such a C has the advantage that the four-vector with components ##(x\circ C)^\mu{}'(\tau)## is automatically normalized. So if y is my rest frame, and x is the coordinate system we transformed to above, we have ##u^\mu=(y\circ C)^\mu{}'(\tau)## and ##u'^\mu=(x\circ C)^\mu{}'(\tau)##. It's conventional to denote ##(x\circ C)^\mu{}'(\tau)## by ##dx^\mu/d\tau##, so we have
$$u'^\mu=\frac{dx^\mu}{d\tau}.$$ Now let's use the fact that ##u^2## is Lorentz invariant.
$$-1=u^2=u'^2 =-(u'^0)^2+(u'^1)^2+(u'^2)^2+(u'^3)^2 =-\left(\frac{dt}{d\tau}\right)^2+\sum_{i=1}^3 \left(\frac{dx^i}{d\tau}\right)^2.$$ Let's manipulate this result with some non-rigorous physicist mathematics. (These things can of course be made rigorous).
\begin{align}
&\frac{dt}{d\tau} =\sqrt{1+\sum_{i=1}^3\left(\frac{dx^i}{d\tau} \right)^2}\\
&\frac{d\tau}{dt} =\frac{1}{\sqrt{1 +\sum_{i=1}^3\left(\frac{dx^i}{d\tau} \right)^2}}\\
&1=\left(\frac{d\tau}{dt}\right)^2 \left(1+\sum_{i=1}^3\left(\frac{dx^i}{d\tau} \right)^2\right) =\left(\frac{d\tau}{dt}\right)^2+\sum_{i=1}^3 \left(\frac{dx^i}{dt}\right)^2.
\end{align} Greene calls the square root of the first term the speed through time and the square root of the second term the speed through space. (This is according to note 6 for chapter 2 (p. 392) of "The elegant universe"). This allows him to say that an increase of the speed through space must be accompanied by a decrease of the speed through time.
If we had been talking about the motion of a massless particle (i.e. light) instead of the motion of an observer, we would have had ##u^2=0## instead of ##u^2=-1##. It's easy to see that what this does to the calculation above is to eliminate the first term on the right-hand side above. Since ##\tau=0## along the world line of a massless particle, this means that the result
$$\left(\frac{d\tau}{dt}\right)^2+\sum_{i=1}^3 \left(\frac{dx^i}{dt}\right)^2=1$$ holds for massless particles too.
Fredrick:
He uses it to explain time dilation and other SR phenomena.
yes,
it is A way to view the Lorentz transforms.....that space and time 'morph' into each other as a result of speed...are seen differently by different observers....in his explanation is the [unstated, I think] assumption that space and time remain a fixed background.
He also uses it to explain that acceleration is a curve in space-time, fixed velocity plots as a straight line, while rotational motion appears as a corkscrew. It is easy to picture yourself riding along in such situations where you 'see' space-time different from your neighbor, and they different from you.
He does NOT [and cannot] use it to explain the dynamical nature of space-time due to mass,energy or gravity.
I haven't thought about this for too long, so it might not be an airtight argument, but hopefully it's a good qualitative explanation. Since you have the equivalence principle, you cannot distinguish acceleration from gravity (an effect of spacetime) directly. However, you can distinguish indirectly, by observing the matter around you, which is a subject of common confusion for students first learning GR (if you had a window in your accelerating elevator, you could see things accelerating outside of it which would be still if it were only gravitation). So the distribution of matter is important for probing the structure of spacetime.

This suggests that one can write a relation between the stress-energy tensor (a tensor that contains the information about the matter distribution of the universe) and the metric (another tensor that contains the information about the structure of spacetime). Note, we have not yet implied that matter warps spacetime, only that since you can use matter to measure spacetime, the two are related somehow, and we simply propose that there is a relation. From here we follow with some mathematics, using one thing we know about matter: 4-momentum is conserved. In the language of the stress-energy tensor, this means that the divergence of the stress-energy tensor is 0. So we write one side of the relation as the stress-energy tensor, and on the other side we write the most general mathematically sensible tensor we can make out of the metric, while making sure that this general tensor has the property that its divergence is zero. If you do this you get Einstein's equation, at which point you have GR and gravity warping spacetime etc. So it's all an observation that the equivalence principle makes it so that the only way you can measure spacetime is through matter distributions.

Also, the idea that matter warps spacetime is a little misleading. The stress-energy tensor typically depends on the metric, and thus depends on the spacetime itself. So spacetime and matter are determined simultaneously. It's better to think that the two are closely related, but one does not cause the other. Quantum mechanically this might change though (for example, string theory is typically written as perturbations of a fixed background spacetime), but classically this is the most "complete" understanding of GR.
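Written out, the endpoint of that argument is the usual field equation: the most general divergence-free, second-order tensor built from the metric is the Einstein tensor (plus an optional cosmological term), so one is led to
$$G_{\mu\nu}+\Lambda g_{\mu\nu}=\frac{8\pi G}{c^4}\,T_{\mu\nu},\qquad \nabla^\mu T_{\mu\nu}=0,$$
with the second relation expressing the local conservation of 4-momentum mentioned above.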
hmmm - doesn't the "fact" that higgs bosons have been "discovered" now, which disconnects mass as an intrinsic aspect of matter, mean that there is some particle nature of gravity, or some connection between the action of the higgs boson and the gravitational field, or some other ridiculously confusing "explanation" for gravity now? i am unable to fathom how gravity is merely a warped field since we have now introduced the idea that the higgs boson is responsible for "mass" in some manner that is separate from the matter itself...
i am unable to fathom how gravity is merely a warped field since we have now introduced the idea that the higgs boson is responsible for "mass" in some manner that is separate from the matter itself...
Most of the particle mass does NOT come from the Higgs field, but from intrinsic particle energy. Anyway, the QM description [Higgs] is a distinctly different mathematical formalism from GR...we'll have to await 'quantum gravity' to combine them.
Quote by Zmunkz Although phrased as his own idea, this is not. I've heard Einstein attributed to this sort of interpretation on some of those science channel shows, like Through the Wormhole (or one of those). I've never been able to find a technical discussion of what this interpretation comes from, or if Einstein ever said anything like it, but I just wanted to throw it out there that this reasoning has worked it's way into the non-technical science mainstream. It was described more-or-less as above: every single object has a "total velocity" of c through space and time. Photons move entirely along the space axis, and everything else has a vector with components in both space and time, changing in proportion according to relativity, but always maintaining a magnitude of c. Perhaps we can dismiss RotatingFrame's delivery and treat it as a question: is there any validity to that interpretation?
The interpretation is valid, but apparently not very useful to do math. However it allows some nice, geometrically intuitive explanations. Basically you just rearrange the formula for time-like worldlines:
##d\tau^2 = dt^2 - dx^2##
to:
##dt^2 = d\tau^2 + dx^2##
So instead of a pseudo Euclidean line element dtau, you have an ordinary Euclidean line element dt and tau as a dimension.
- proper time rate is a projection of c on the time dimension
- spatial speed is a projection of c on the space dimension (the identity behind both statements is spelled out just below)
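To spell this out with the factors of c restored: dividing ##c^2\,dt^2 = c^2\,d\tau^2 + dx^2## by ##dt^2## gives
$$\left(c\,\frac{d\tau}{dt}\right)^2+\left(\frac{dx}{dt}\right)^2=c^2,$$
so the (scaled) proper time rate and the ordinary spatial speed are the two legs of a right triangle whose hypotenuse always has length c.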
It was used by Lewis C. Epstein in his book Relativity Visualized. Here are some visualizations based on this:
http://www.adamtoons.de/physics/relativity.swf (relation of speed to length contraction and time dilation)
http://www.adamtoons.de/physics/twins.swf (visual comparison to the Minkowski interpretation)
http://www.physics.ucla.edu/demoweb/...spacetime.html (some of Epstein's original illustrations)
http://www.adamtoons.de/physics/gravitation.swf (relation of gravity and gravitational time dilation)
http://www.relativitet.se/Webtheses/lic.pdf (technical discussion in Chapter 6)
The neat thing is that if you have the direction in space-propertime, then you can replace:
space -> momentum
propertime -> restmass
coordinate time -> Energy (relativistic mass)
##E^2 = m^2 + p^2##
Which again allows a geometrical interpretation:
- rest mass is a projection of energy on the time dimension
- momentum is a projection of energy on the space dimension
Quote by A.T. So instead of a pseudo Euclidean line element dtau, you have an ordinary Euclidean line element dt and tau as a dimension.
I would be careful with this. The term "dimension" doesn't seem to apply to dtau. It isn't a basis vector in any vector space, like dt and dx are.
Quote by jnorman hmmm - doesnt the "fact" that higgs bosons have been "discovered" now, which disconnects mass as an instrinsic aspect of matter, mean that there is some particle nature of gravity, or some connection between the action of the higgs boson and the gravitational field, or some other ridiculously confusing "explanation" for gravity now? i am unable to fathom how gravity is merely a warped field since we have now introduced the idea that the higgs boson is responsible for "mass" in some manner that is separate from the matter itself...
No, the higgs boson doesn't imply anything of that sort. First off, "mass" is not required for gravitation, photons themselves cause gravitational curvature but are massless. Anything with a non-zero stress-energy tensor causes gravitation. The higgs boson does explain features of mass, which helps dictate which particle can decay into which other particle etc. but it doesn't cause gravitation (the same particles without the higgs would still cause gravity).
So you can claim that ANY particle is related to the gravitational field, so the higgs doesn't hold an especially important place in it.
inre: "photons themselves cause gravitational curvature" i do not think this is true. a photon has no location, and does not cause curvature of spacetime.
Quote by jnorman inre: "photons themselves cause gravitational curvature" i do not think this is true. a photon has no location, and does not cause curvature of spacetime.
A classical electromagnetic field certainly contributes to the stress-energy tensor of the fields in spacetime, and therefore to the metric (which determines the curvature).
Things get more complicated with photons. The term is defined by quantum electrodynamics, so now we're talking about a quantum field's contribution to the metric. We seem to need a quantum theory of gravity to determine that, but there's no reason to think that it would be zero.
Quote by jnorman inre: "photons themselves cause gravitational curvature" i do not think this is true. a photon has no location, and does not cause curvature of spacetime.
Actually, you have it the wrong way around. General relativity works worse for point particles (not test particles) than for spread out fields. By photons, I was referring more to classical light than to the quantum description of the photon. But classically at least, the electromagnetic field, and thus light, has a stress-energy tensor and does in fact cause gravity. The electromagnetic field is the limit of the quantum field the photon is part of, and that's why I say that "photons cause gravitational curvature".
Things get more complicated with photons. The term is defined by quantum electrodynamics, so now we're talking about a quantum field's contribution to the metric. We seem to need a quantum theory of gravity to determine that, but there's no reason to think that it would be zero.
Hopefully that result will be the theory that 'corrects' GR and current quantum mechanics near space-time singularities...divergences.....like the big bang and the 'center' of a black hole.
http://physics.stackexchange.com/questions/24190/a-charged-particle-moves-in-a-plane-subject-to-the-oscillatory-potential
# A charged particle moves in a plane subject to the oscillatory potential
A charged particle moves in a plane subject to the oscillatory potential:
$U(r)=\frac{m\omega^2 r^2}{2}$
There is also a constant EM-field described by:
$\vec{A}=\frac{1}{2}[\vec{B}\times\vec{r}]$
where B is normal to the plane.
This produces the Lagrangian:
$L=\frac{m}{2}\dot{\vec{r}}^2+\frac{e}{2}\dot{\vec{r}}\vec{A}-U(r)$
Now my friend says we need to transform this into polar coordinates and that produces:
$L=\frac{m}{2}(\dot{r}^2+r^2\dot{\phi}^2)-mr^2\omega_L\dot{\phi}-U(r)$
where $\omega_L$ is the Larmor precession frequency:
$\omega_L=-\frac{eB}{2mc}$
My question is, How does he get this transformation? I don't really understand where the second term is coming from in the mechanical kinetic energy.
## 2 Answers
In polar coordinates $d\vec{r}=\hat{e}_r dr+\hat{e}_{\phi}rd\phi$. Divide it by $dt$ and you will have the particle velocity $\dot{\vec{r}}$. Square the latter and you will get the kinetic energy.
-
Okay this makes a lot of sense. Thanks. This notation is much easier to read. – mnky9800n Apr 21 '12 at 21:50
@mnky9800n: Note that you can "accept" an answer by clicking the green tick if you feel that it helped you (and you don't want to wait for another answer). – Manishearth♦ Apr 22 '12 at 2:46
$\newcommand{\er}{\hat e_r} \newcommand{\et}{\hat e_\tau} \newcommand{\d}{\dot} \newcommand{\m}{\frac{1}{2}m}$
This one gave me a feeling of déjà vu, since I'd already answered a similar one. Here's the relevant part of the derivation:
My $\theta$ is your $\phi$ (usually $\phi$ is used for the azimuthal angle in spherical coordinates--which are a 3D extension of polar coordinates)
In radial coordinates, $\d\er=\d\theta \et$, and (useless here) $\d\et= -\d\theta \er$. $\er,\et$ are unit vectors in the radial and tangential directions respectively. Due to this mixing of unit vectors (they move along with the particle), things get a little more complicated than the plain ol' Cartesian system, where the unit vectors are constant. $$\vec p= r\er$$ $$\therefore \vec v=\d{\vec p}= \d r\er + r\d\er=\d r \er + r\d\theta\et$$ $$\therefore v^2= \vec v\cdot\vec v= \d r^2+r^2\d\theta^2$$
$$\therefore KE=\frac12m\vec v\cdot\vec v=\frac12m|\vec v|^2=\frac12m (\d r^2+r^2\d\theta^2)$$
So basically it's just a few steps of math that he neglected (IIRC this is usually considered an identity).
-
http://math.stackexchange.com/questions/43197/what-is-the-connection-between-grothendiecks-differential-operators-and-hochsch/43214
# What is the connection between Grothendieck's Differential Operators and Hochschild Cohomology
For a given commutative algebra $A$ over a field $\mathbb{K}$ (with char $=0$), the algebra of differential operators on $A$ is the set of endomorphisms $D$ of $A$ such that for some $n$ we have that for any sequence $\left\lbrace a_i\right\rbrace_{0\leq i\leq n}$ with $a_i\in A$ one has $[\ldots[[D,a_0],a_1],\ldots,a_n]=0$.
By analogy, I think of Hochschild Cohomology as a sort of "algebraic differential forms"(maybe this isn't the right approach, but $C^{n}(A,A)=Hom(A^{\otimes n},A)$ and it gives a cohomology theory etc.), thus it seems like there should be a connection to differential operators on the algebra.
Sorry if the question is a little general, but I am open to accepting a variety of answers.
Thanks in advance!
-
The first sentence seems to contain some kind of structural error: I am not so knowledgeable in this field, but I doubt that the algebra of differential operators on $A$ is an endomorphism of $A$... – Pete L. Clark Jun 4 '11 at 16:52
hehe, thanks @Pete L. Clark and @Mariano, the edit was what I intended. :) – BBischof Jun 4 '11 at 17:06
## 3 Answers
If $X$ is a smooth affine variety over a field of characteristic zero, with coordinate ring $\mathcal O$, the ring of global differential operators $\mathcal{D}$ on $X$ is the same thing as the Grothendieck algebra of differential operators on $A$.
One can show that the Hochschild homology $HH_\bullet(\mathcal D)$ of the algebra $\mathcal D$ is, according to a theorem of Mariusz Wodzicki, precisely the same thing as the algebraic de Rham cohomology $H_{\mathrm{dR}}(X)$ of $X$, which is the same thing as the de Rham cohomology of $A$ and, if the base field is $\mathbb C$, classical theorems of Grothendieck and Hartshorne then tell you that these two latter cohomologies are precisely the same thing as the topological de Rham cohomology of the analytic variety $X_{\mathrm{an}}$.
(All this can be globalized to schemes, but one needs to be careful. For example, not all schemes are $\mathcal D$-affine, &c.)
-
Sorry, but I have a few more questions, first is $\mathcal{O}$ the same as $A$ here? Second, what do you mean de Rham of $A$? If you mean de Rham on the variety with coordinate ring $A$, then how is that different than the topological de Rham. Sorry if these questions are stupid. – BBischof Jun 4 '11 at 22:41
@BBischof: Indeed: $A$ was supposed to be $\mathcal O$ here. By "de Rham cohomology of $A$" I mean "construct the module of Kähler differential forms on $A$, from it construct its exterior algebra and define a differential in complete analogy of the exterior differential, and take the homology of the resulting complex". It is a non-trivial result that this gives the same thing (over $\mathbb C$) as the topological de Rham cohomology of the underlying analytic variety. – Mariano Suárez-Alvarez♦ Jun 6 '11 at 16:10
Like Grigory M says, look up the Hochschild-Kostant-Rosenberg theorem.
This states that for a smooth algebra $A$, the Hochschild chain complex is quasi-isomorphic to the chain complex $(\Omega^\bullet_A, d=0)$ of differential forms with zero differential.
There is another version of HKR which states that, again for smooth $A$, the Hochschild cochain complex is quasi-isomorphic to the cochain complex $(\Lambda^\bullet T_A, d=0)$ of polyvector fields with zero differential. So here you see (poly)vector fields (i.e. derivations), so maybe that gives you one connection to differential operators.
Now in this second version of HKR, these are actually more than just chain complexes but dg Lie algebras. The formality theorem of Kontsevich -- it's in his deformation quantization paper -- says that, as dg Lie algebras (or $L_\infty$ algebras) the two sides are still quasi-isomorphic.
Moreover, if you look at the deformation quantization paper, you'll notice that Kontsevich's definition of Hochschild cohomology is not the "standard" definition (that is, Hochschild's original definition) involving $\operatorname{Hom}(A^{\otimes n}, A)$. Instead, he takes the subcomplex of the Hochschild cochain complex $\operatorname{Hom}(A^{\otimes n}, A)$ consisting of those maps which are polydifferential operators. [However, see e.g. the paper "The Continuous Hochschild Cochain Complex of a Scheme" by Yekutieli for comparison of different definitions of Hochschild cohomology (both for algebras and more generally for schemes).] Then he shows that this thing is quasi-isomorphic to $(\Lambda^\bullet T_A, d=0)$ (as a chain complex and as a dg Lie algebra). So that gives you another connection to differential operators...
-
The usual way to relate Hochschild (co)homology of a commutative algebra to "differential-geometric" concepts is the Hochschild-Kostant-Rosenberg theorem: if A is a smooth commutative k-algebra, there exists a (graded) isomorphism $\Omega^\bullet A\to HH_\bullet(A,A)$ (see e.g. theorem 3.4.4 in Loday's Cyclic homology book or the nLab entry and links there).
Now, I don't know about any direct connection to differential operators. But there is a general way to get differential operators from differential forms: to apply (relative) Koszul duality to de Rham complex (it's described... well, e.g. in Positselski's "Two kinds of derived categories...", although it's an overkill, perhaps; see also "What is Koszul duality?").
Hope, it helps.
-
http://math.stackexchange.com/questions/58329/looking-for-a-function-f-that-is-n-differentiable-but-fn-is-not-cont/58370
# Looking for a function $f$ that is $n$-differentiable, but $f^{(n)}$ is not continuous
I am looking for a real valued function of real variable that is $n$-differentiable, but whose $n$th derivative is not continuous.
This is my function: $f_n(x) = x^{n+1} \cdot \sin{\frac{1}{x}}$ if $x \neq 0$, and $0$ if $x=0$, with $n\in \{0,1,2,\ldots\}$. For example, if $n=0$, then $f_0$ is continuous and not differentiable at $0$.
-
@Sivaram: According to his title, Mario is looking for a function that is $n$ times differentiable but has a discontinuous $n$-th derivative. His example, however, suggests that he’s actually looking (for each $n$) for functions that are $C^n$ but not $C^{n-1}$. – Brian M. Scott Aug 18 '11 at 16:04
@Brian M. Scott read my mind. – Mario De León Urbina Aug 18 '11 at 16:06
@Mario: Your post should be self contained. You cannot have part of the question in your title and expect people to understand what is in your mind and answer. – user17762 Aug 18 '11 at 16:10
@Sivaram: You are right. Next time I'm going to write the question in the title. – Mario De León Urbina Aug 18 '11 at 16:17
## 4 Answers
Let $n$ be a positive integer and let
$$f(x) = x^{2n} \cdot \sin\left(\frac{1}{x}\right)$$ $$f(0) = 0$$
Then mathematical induction can be used to prove: (a) The $n$th derivative of $f(x)$ exists for each value of $x$. (b) The $n$th derivative of $f(x)$ is not continuous at $x = 0$.
Let $n$ be a positive integer and let
$$g(x) = x^{2n+1} \cdot \sin\left(\frac{1}{x}\right)$$ $$g(0) = 0$$
Then mathematical induction can be used to prove: (a) The $n$th derivative of $g(x)$ is continuous at each value of $x$. (b) The $(n+1)$st derivative of $g(x)$ does not exist at $x = 0$.
More generally, let $a$, $b$ be positive real numbers, let $n$ be a positive integer, and define $h(x)$ by:
$$h(x) = x^a \cdot \sin\left(\frac{1}{x^b}\right)$$ $$h(0) = 0$$
1. The $n$th derivative of $h(x)$ exists for all values of $x$ if and only if $a > n + (n-1)b$.
2. The $n$th derivative of $h(x)$ is bounded on every bounded interval if and only if $a \geq n + nb$.
3. The $n$th derivative of $h(x)$ is continuous at each point if and only if $a > n + nb$.
To prove these statements, you can use mathematical induction to prove that the $n$th derivative of $h(x)$ has the form
$$P_{n}(x) \cdot \cos\left(\frac{1}{x^b}\right) + Q_{n}(x) \cdot \sin\left(\frac{1}{x^b}\right),$$
where $P_n$ and $Q_n$ are polynomials such that at least one of them has a lowest degree term that is a NONZERO multiple of $x^{a-n-nb}$ and neither has a lower degree term. [The added emphasis on nonzero is because this becomes vital for the "only if" halves of the statements above.] To prove the "if" halves, you may want to incorporate into the induction statement the fact that the $n$th derivative at $x = 0$ is zero.
-
Once you do it for the first derivative, indefinitely integrate $n-1$ times. Your original function is continuous (since differentiable), so it is integrable.
-
My question is: does $f$ satisfy that $f^{(n)}$ exists on all of $\mathbb{R}$ and that $f^{(n)}$ is not continuous? – Mario De León Urbina Aug 18 '11 at 16:03
Once you find an example that does this for $n=1$, get the larger values of $n$ as I said. – GEdgar Aug 19 '11 at 2:18
I think you want $f_n(x) = x^{2n} \sin(1/x)$.
-
Your function works for $n=1$ but not for $n=2$.
For $n=1$, the function is everywhere differentiable, and it holds $f'_1(x) = 2x \sin(1/x) - \cos(1/x)$ for $x \neq 0$, and $f'_1 (0) = 0$; hence $f'_1$ is not continuous at $0$.
For $n=2$, on the other hand, the function is everywhere differentiable but not twice differentiable on $\mathbb{R}$. Indeed, $f'_2$ is given by $$f'_2 (x) = 3x^2 \sin (1/x) - x\cos (1/x),\;\; x \neq 0,$$ and $$f'_2 (0) = 0.$$ However, for $h \neq 0$, $$\frac{{f'_2 (h) - f'_2 (0)}}{{h - 0}} = \frac{{3h^2 \sin (1/h) - h\cos (1/h)}}{h} = 3h\sin (1/h) - \cos (1/h).$$ So, letting $h \to 0$, this shows that $f'_2$ is not differentiable at $0$. Hence $f_2$ is not twice differentiable.
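The two formulas above are also easy to reproduce with a short sympy sketch (assuming sympy is available; this is only a check of the algebra):

```python
# Check f_2'(x) for f_2(x) = x**3*sin(1/x) and the difference quotient at 0.
import sympy as sp

x, h = sp.symbols('x h')

f2 = x**3 * sp.sin(1/x)
print(sp.diff(f2, x))          # 3*x**2*sin(1/x) - x*cos(1/x)

quotient = (3*h**2*sp.sin(1/h) - h*sp.cos(1/h)) / h
print(sp.expand(quotient))     # 3*h*sin(1/h) - cos(1/h), which has no limit as h -> 0
```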
-
http://quant.stackexchange.com/questions/1891/how-much-data-is-needed-to-validate-a-short-horizon-trading-strategy
# How much data is needed to validate a short-horizon trading strategy?
Suppose one has an idea for a short-horizon trading strategy, which we will define as having an average holding period of under 1 week and a required latency between signal calculation and execution of under 1 minute. This category includes much more than just high-frequency market-making strategies. It also includes statistical arbitrage, news-based trading, trading earnings or economics releases, cross-market arbitrage, short-term reversal/momentum, etc. Before even thinking about trading such a strategy, one would obviously want to backtest it on a sufficiently long data sample.
How much data does one need to acquire in order to be confident that the strategy "works" and is not a statistical fluke? I don't mean confident enough to bet the ranch, but confident enough to assign significant additional resources to forward testing or trading a relatively small amount of capital.
Acquiring data (and not just market price data) could be very expensive or impossible for some signals, such as those based on newer economic or financial time-series. As such, this question is important both for deciding what strategies to investigate and how much to expect to invest on data acquisition.
A complete answer should depend on the expected Information Ratio of the strategy, as a low IR strategy would take a much longer sample to distinguish from noise.
-
## 2 Answers
Consider the standard error, and in particular the distance between the upper and lower limits:
\begin{equation} \Delta = (\bar{x} + SE \cdot \alpha) - (\bar{x} - SE \cdot \alpha) = 2 \cdot SE \cdot \alpha \end{equation}
Using the formula for standard error, we can solve for sample size:
\begin{equation} n = \left(\frac{2 \cdot s \cdot \alpha}{\Delta}\right)^{2} \end{equation}
where $s$ is the measured standard deviation, which you already have from your IR calculation.
High-frequency Example
I was testing a market-making model recently that was expected to return a couple basis points for each trade and I wanted to be confident that my returns were really positive (ie, not a fluke). So, I chose a distance of 3 bps $(\Delta = .0003)$. My sample's measured standard deviation was 45 bps $(s = .0045)$. For a confidence interval of 95% $(\alpha = 1.96)$, my sample size needs to be $n = 3458$ trades. I would have picked a tighter distance if I had been simulating this model, but I was trading live and I couldn't be too choosy with money on the line.
Low-frequency Example
I imagine that for a low-frequency model that was expected to return 1.5% per month, I'd want maybe 1% as the distance $(\Delta = .01)$. If the hoped-for Sharpe ratio were 3, then the standard deviation would be 1.7% $(s = .017)$, which I came up with by backing out the monthly returns. So for a confidence interval of 95% $(\alpha = 1.96)$, I'd need 45 months of data.
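As a quick sanity check, here is a small Python sketch (the function name and the rounding up to a whole number of observations are my own) that plugs both examples into the formula above:

```python
# Plug the two examples into n = (2*s*alpha/Delta)**2 and round up.
import math

def sample_size(s, alpha, delta):
    """Number of observations needed for a confidence interval of total width delta."""
    return math.ceil((2.0 * s * alpha / delta) ** 2)

print(sample_size(s=0.0045, alpha=1.96, delta=0.0003))  # 3458 trades (high-frequency)
print(sample_size(s=0.017, alpha=1.96, delta=0.01))     # 45 months (low-frequency)
```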
-
Good answer. Can u post the chat transcript here also for completeness. I get a page not found error for the link above. – Suminda Sirinath Salpitikorala Jan 9 '12 at 3:49
how exactly did you get s=1.7% from r=1.5% and SR=3? – eyaler Jan 27 '12 at 17:38
I would also note that you need to watch out for correlations between data points. (E.g., if you have a data point proving this works for oil company X, another data point for oil company Y may not actually count as separate.)
If you are looking at 5 day holding periods, why not just grab all the EOD data that you can as well. EOD data is obviously not tradeable but can be used as a sanity check for long term trading strategy returns when you do not actually have the data.
-
Hi Michael RB, welcome to quant.SE and thanks for contributing an answer. Do you have any ideas on how the correlation reduces the confidence? As for EOD data, of course it will be used as appropriate, but the question here is how much intraday data I need for a theoretical strategy. – Tal Fishman Sep 14 '11 at 0:40
honestly, it was just an example. for equities, you might want to remove sector returns/ market returns. etc. – Michael RB Sep 14 '11 at 1:36
http://mathoverflow.net/questions/35622?sort=oldest
## 2-groups are to crossed modules as 2-categories are to…?
Given a 2-group $\mathcal{G}$, you can construct a crossed module $(G,H,t,\alpha)$ and vice versa.
Is there something similar you can say for strict 2-categories?
In a personal attempt to understand strict 2-categories, I ended up constructing a speculative conceptual tool (whose validity remains to be seen) that I call the boundary of a 2-morphism. I've written up some raw notes here:
The basic idea is that given morphisms $f,g:x\to y$ and a 2-morphism $\alpha:f\Rightarrow g$, we define its boundary as an endomorphism
$$\partial\alpha:y\to y$$
satisfying
$$\partial\alpha\circ f = g.$$
When the source of the 2-morphism is an identity morphism, then we have
$$\partial\alpha = t(\alpha),$$
which seems to relate things well to cross modules when all morphisms are invertible.
I'm curious if there is anything like a crossed module, but where we're not dealing with groups and morphisms are not invertible. What I'm trying to cook up seems like it might be related to such a thing if it exists.
Any thoughts and/or any comments on my notes would be greatly appreciated.
PS: Apologies in advance if my writing is not very clear. I'm not a mathematician, but am trying to teach myself some basic higher category theory.
-
It might make sense to first ask your question for 2-categories with one single object. Indeed, 2-groups are 2-categories with one single object, and with the additional requirement that all 1- and 2-morphisms are invertible. – André Henriques Aug 15 2010 at 6:52
## 2 Answers
This is not an answer, but rather a "no go" observation. I claim that you should not expect 2-categories in general to have "crossed-module" like descriptions, or at least not any such description that's any easier to think about than "2-category". Part of what makes 2-groups easy is that they have lots and lots of symmetry. Ignoring the 2-morphisms (and 2-composition), the 1-morphisms form a group, so by group translation you can relate the structure between any two 1-morphisms to the structure between some 1-morphism and the identity. And that structure is group- or torsor-like, since if you ignore the 0-morphism and the 1-composision, the 1-morphisms are a groupoid.
I expect that you can construct something for a 2-category with (1) only one 0-morphism and (2) all 1-morphisms invertible. I.e. this is a 2-group but relaxing the invertibility condition on arbitrary 2-morphisms. Then I would expect that this should correspond to a "crossed module of groups" where the second "group" $H$ need only be a monoid, although I haven't thought about the details.
-
Thank you Theo. I'm sure you're right about the "no go" in general. In trying to answer my own question, I had the idea to look at Cat with (small) categories, functors, and natural transformations. If there were no "no go", then every natural transformation would have a boundary. I'm still trying to work out the conditions under which a boundary exists. I don't think the morphisms need to be all invertible, but maybe they need to have a "right inverse" (?) There may be something in between general strict 2-category and 2-groups for which we can define some crossed module-like construction. – EricForgy Aug 16 2010 at 5:57
There are various notions in the literature that correspond to analogues of crossed modules where the top group H is just a monoid, usually an algebra. Plenty of these appear for instance in the literature on orbifold/equivariant CFT. These indeed define 2-categories with invertible 1-morphisms and non-invertible 2-morphisms. Maybe I find the time to dig out some references from when I looked into this... – Urs Schreiber Aug 16 2010 at 14:58
Thanks Urs. I think Theo and Chris have made the "no go" observation clear for general strict 2-categories. But it is interesting to hear about this crossed module-like construction where the top group is just a monoid. I can't help but think there is a little bit further we can go. Perhaps where even the bottom group is a monoid, but this may put restrictions on the allowed 2-categories. For example, I'm thinking perhaps the 2-category must be "directed" somehow. – EricForgy Aug 17 2010 at 5:31
I think Theo's "no go" is exactly right. Here is an example which might make things easier to understand: Let X be any category. I am going to construct an interesting 2-category with one object which is like a 2-group, but without the invertibility. So there is a single 0-morphism p. The morphisms from p to itself form a category which is a disjoint union of X and two points: $$0 \sqcup X \sqcup \infty$$ This is the disjoint union of categories so X and these other points don't interact. That completely describes the vertical composition. Now I need to tell you the horizontal composition. The element 0 is the (strict) identity for the horizontal composition. The point $\infty$ has the property that $z \cdot \infty = \infty = \infty \cdot z$ for any z. Finally the horizontal composite of any two things in X results in $\infty$.
Equivalently we can describe this as a monoidal structure on $0 \sqcup X \sqcup \infty$. It is actually strictly commutative too.
The reason this an important example is that we have embedded the category X fully-faithfully into this monoidal category. So any sort of algebraic description of monoidal categories or 2-categories or even strict 2-categories must be at least as complicated as the theory of all categories. This is in severe contrast with the situation for 2-groups for the reasons that Theo pointed out.
This example is also related to Reid Barton's answer to my question: Hom alg for comm. monoids. See also the related questions: A peculiar model structure on simplicial sets? and simplicial commutative monoids group completion. The example I just described also works to give a simplicial commutative monoid where now X is any simplicial set. However when you apply the "Dold-Kan correspondence" you always get the zero chain complex. This shows that the Dold-Kan correspondence fails to be an equivalence for commutative monoids. It also says that in order to describe higher categories in terms of something like a chain complex (e.g. something like a crossed module) you absolutely need some invertibility.
-
Awesome example. – Theo Johnson-Freyd Aug 16 2010 at 21:45
http://mathoverflow.net/questions/86477/simple-question-in-the-representation-of-sl2-c/86529
## Simple question in the representation of SL(2,C)
Let $V$ be the standard two dimensional representation of SL(2,C). Fulton's book on representation theory says on page 156 that $Sym^3(Sym^2V)=Sym^6(V) \oplus Sym^2(V)$. In exercise 11.23, the book asks to prove the decomposition
$$Sym^3(Sym^3V) = Sym^9(V) \oplus Sym^5(V) \oplus Sym^3V$$
In my work, I ran into $Sym^5(Sym^3V)$ and $Sym^k(Sym^3V)$, so I was looking for a similar decomposition. I am not familiar enough with the theory, and studying the subject would take me a little too far from my current work. So I decided to ask here (sorry if the question is too simple).
Thanks for any help!!
-
## 3 Answers
You're looking at plethysm of $SL_2(\mathbb{C})$-modules. According to a paper of Manivel (An extension of the Cayley-Sylvester formula, 2008) the answer is given by the Cayley-Sylvester formula. In your case it states that the multiplicity of $Sym^e(V)$ in $Sym^n(Sym^3(V))$ is $$Par(n,3;(3n-e)/2) - Par(n,3;(3n-e)/2 - 1),$$ where $Par(n,k,m)$ is the number of partitions in an $n$-by-$k$ box of size $m$. For example, if $n=3$ and $e=5$ then $Par(3,3;2)=2$ and $Par(3,3;1) = 1$, which agrees with what you have above.
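For what it's worth, the formula is easy to evaluate by brute force. Here is a small Python sketch (the helper names are mine) that counts partitions fitting in a box and reproduces the multiplicities for $n=3$:

```python
# Brute-force check of the Cayley-Sylvester formula for n = 3.
from functools import lru_cache

@lru_cache(maxsize=None)
def par(n, k, m):
    """Partitions of m with at most n parts, each part at most k (an n-by-k box)."""
    if m == 0:
        return 1
    if m < 0 or n == 0 or k == 0:
        return 0
    # Either no part equals k, or remove one part equal to k.
    return par(n, k - 1, m) + par(n - 1, k, m - k)

def multiplicity(n, e):
    """Multiplicity of Sym^e(V) inside Sym^n(Sym^3(V))."""
    if (3 * n - e) % 2:
        return 0
    m = (3 * n - e) // 2
    return par(n, 3, m) - par(n, 3, m - 1)

print([multiplicity(3, e) for e in (9, 7, 5, 3, 1)])  # [1, 0, 1, 1, 0]
```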
-
Highest weight theory is ideally suited to answer just this sort of question. Here's how to figure out your problem.
1. First recall that $Sym^k(V)$ is the irreducible representation of highest weight $k$. So, it has weight spaces with weights $-k,-k+2,\ldots,k-2,k$ occurring with multiplicity one. In particular, $Sym^3(V)$ has weights $-3,-1,1,3$.
2. Then the weights occurring in $Sym^k(Sym^3(V))$ correspond to all possible ways of adding $k$ of the weights $-3,-1,1,3$ together. For example, $3k$ will be a weight occurring, and in fact it will be the highest weight, corresponding to the fact that $Sym^{3k}(V)$ will always be a direct summand of $Sym^k(Sym^3(V))$.
3. Having found all the weights, you now need to know the multiplicity with which they occur. For example, the reason that $Sym^3(Sym^2(V)) = Sym^6(V) \oplus Sym^2(V)$ is that the weights $-6,-4,\ldots,6$ all occur, but the weights $-2,0,2$ all occur with multiplicity two.
I am not sufficiently motivated to write out a formula for $Sym^k(Sym^3(V))$ right now, but at this point it's a matter of combinatorics.
-
«at this point it's a matter of combinatorics»: great last words! :) – Mariano Suárez-Alvarez Jan 24 2012 at 6:53
If insufficiently motivated, you can always ask the computer (young.sp2mi.univ-poitiers.fr/cgi-bin/form-prep/…) to do the work (available from www-math.univ-poitiers.fr/~maavl/LiE/form.html). Of course you'll need to look at the source code to understand how the answer was obtained. – Marc van Leeuwen Jan 24 2012 at 14:53
This is a bit of expansion on the answer of Mike Skirvin; in particular it gives one way of explicitly calculating the combinatorics involved. My previous answer, although correct in its result, is horribly roundabout and overly computational, a mathematical Rube Goldberg machine if you will; so after waking up this morning I realized there is a much easier approach using a recursion on Symmetric powers.
Define:
$L = U^3 + U + U^{-1} + U^{-3}$
$M = U^4 + U^2 + 2 + U^{-2} + U^{-4}$
Now define a sequence $S_i$ by:
$S_{-2} = 0$
$S_{-1} = 0$
$S_0 = 1$
$S_1 = L$
$S_k = L\cdot S_{k-1} - M\cdot S_{k-2} + L\cdot S_{k-3} - S_{k-4}$ for $k\geq 2$.
Note that the exponents of $U$ in $S_1$ are exactly the weights mentioned by Mike in his comment (1). It turns out the same is true for all the $S_k$: the exponents of $U$ in $S_k$ are exactly the set of weights of $Sym^k(Sym^3(V))$ and the coefficient of $U^\ell$ is exactly the multiplicity of the weight $\ell$ in $Sym^k(Sym^3(V))$.
From this, you can pick out the subrepresentations by looking at where coefficients change; since the weights of any $Sym^\ell(V)$ occur with multiplicity 1, the only time the coefficients change is when a new summand occurs.
For example, working out $Sym^3(Sym^3(V))$ one gets the following expression:
$U^9 + U^7 + 2U^5 + 3U^3 + 3U + 3U^{-1} + 3U^{-3} + 2U^{-5} + U^{-7} + U^{-9}$
For the module corresponding to the leading coefficient, subtract 1 from the coefficient of each of the exponents $9, 7, \ldots, -9$ (that is, remove one copy of the character of $Sym^9(V)$), giving a copy of $Sym^9(V)$ and leaving:
$U^5 + 2U^3 + 2U + 2U^{-1} + 2U^{-3} + U^{-5}$
Repeat this process to pull out a copy of $Sym^5(V)$ and finally a copy of $Sym^3(V)$; there are no more terms left, so this is the complete decomposition of $Sym^3(Sym^3(V))$. In general, the expression for $Sym^k(Sym^3(V))$ in $U$ so obtained is of the form:
$a_0U^{3k} + a_2U^{3k-2} + a_4U^{3k-4} + ... + a_4U^{-3k+4} + a_2U^{-3k+2} + a_0U^{-3k}$
Then $a_0 = 1$ by Mike's comment (2) and the multiplicity of $Sym^\ell(V)$ for $\ell\geq 0$ in the decomposition is just $(a_\ell - a_{\ell+2})$ and the multiplicity of $Sym^{3k}(V)$ is 1 since $a_{-2} = 0$.
As for the recursion, it ultimately expresses symmetric powers in terms of lower symmetric powers and exterior powers; this can be proven using multiplication of Young diagrams and inclusion-exclusion although I don't have a good reference at hand.
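If you want to see it run, here is a sympy sketch (assuming sympy is available; the function names are mine) of the recursion together with the peeling-off procedure described above:

```python
# Run the recursion for S_k and peel off irreducible characters as described above.
import sympy as sp

U = sp.symbols('U')
L = U**3 + U + U**-1 + U**-3
M = U**4 + U**2 + 2 + U**-2 + U**-4

def S(k):
    seq = {-2: sp.Integer(0), -1: sp.Integer(0), 0: sp.Integer(1), 1: L}
    for i in range(2, k + 1):
        seq[i] = sp.expand(L*seq[i-1] - M*seq[i-2] + L*seq[i-3] - seq[i-4])
    return seq[k]

def weight_multiplicities(char):
    mults = {}
    for term in sp.Add.make_args(sp.expand(char)):
        c, e = term.as_coeff_exponent(U)   # term = c * U**e
        mults[int(e)] = int(c)
    return mults

def decompose(char):
    # Assumes the remaining coefficients stay nonnegative, which holds here.
    mults, parts = weight_multiplicities(char), []
    while any(mults.values()):
        top = max(w for w, c in mults.items() if c)   # highest remaining weight
        parts.append(top)
        for w in range(-top, top + 1, 2):             # strip one copy of Sym^top(V)
            mults[w] = mults.get(w, 0) - 1
    return parts

print(S(3))            # U**9 + U**7 + 2*U**5 + 3*U**3 + 3*U + 3/U + ... (as above)
print(decompose(S(3))) # [9, 5, 3], i.e. Sym^9 + Sym^5 + Sym^3
```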
-
http://physics.stackexchange.com/questions/8341/does-throwing-a-watch-into-the-air-cause-it-to-gain-or-lose-time?answertab=active
# Does throwing a watch into the air cause it to gain or lose time?
Suppose I'm on a non rotating planet. I have two identical, perfect watches. I synchronize them. Then I throw one of them into the air and catch it. Does the one I throw into the air gain or lose time with respect to the one I was holding?
-
@Carl: you have a point there but still, if the answers are essentially the same it means the questions are too, in my opinion. That's because answers usually tend to be a little longer than a single digit. Also, if some question has an answer as short as one sentence (as yours does) I don't think it's a very good question. By the way, from the close button box: "This question covers exactly the same ground as earlier questions on this topic; its answers may be merged with another identical question." I think this is satisfied. – Marek Apr 9 '11 at 22:00
@Carl: I agree with Marek. This question is not necessary. Especially if you thought of it after reading the other question. – MBN Apr 9 '11 at 22:22
I agree with @Marek, too. Sorry, @Carl. Moreover, I don't think that an Internet user who doesn't think about these matters physically will find it via Google, anyway. There are many ways (wordings) to formulate the same problem. A female extraterrestrial alien is in love but she's older than her sweetheart, and is offered to be repeatedly shot by a cannon by the Jovian Hegaxon Department of Defense. Will she accept? I write it to Google and it doesn't give me your question (later, it will give, when my comment is added to the index haha). – Luboš Motl Apr 10 '11 at 5:15
You also write: "In addition, acceleration and gravitational fields are only theoretically identical. One can imagine gravitational theories where they are different. Consequently the physical situation is not identical." - Nope. You are completely wrong. First of all, gravitation appeared in both question: the two questions are identical. Moreover, the effect of gravitation and acceleration is identical in practice, not just theory. The fact that you don't believe general relativity has nothing to do with it: GR has been proved by science while your-like alternatives have been refuted. – Luboš Motl Apr 10 '11 at 5:21
## 2 Answers
Lawrence has the details, but this question can actually be determined exactly without assuming anything about mass distribution, etc. A geodesic (which is what the watch in free-fall follows) in GR has maximal proper time along it. The "stationary" watch, which is actually accelerated, is following some other path and so must experience a shorter proper time.
-
And so the held or "stationary" watch loses time, and the thrown watch gains time. – Carl Brannen Apr 12 '11 at 21:35
Hmmm. But the watch is not in free-fall while it is being thrown and caught. So it seems to me this answer is not sufficient, we would have to give a reason why that is negligible. (We are dealing with very small time differences in the first place.) – Retarded Potential Mar 28 at 17:39
What astounds me is that there is considerable quibbling over the nature of the question, but nobody answers it! This is comparatively simple to address. Let us consider the Schwarzschild metric in a weak gravity field $$ds^2~=~-(1~-~2\phi/c^2)dt^2~+~dr^2~+~r^2d\Omega^2$$ for $\phi~=~GM/r$ the Newtonian gravity potential. The unit velocity is then $$1~=~-(1~-~2\phi/c^2)u_t^2~+~u_r^2~-~\dots$$ where we can consider the motion in the radial direction for simplicity. The derivative of this with respect to the proper time $s$ is then $$0~=~-(1~-~2\phi/c^2)u_ta_t~+~u_ra_r.$$ If the gravity potential is zero the solutions are $t~=~g^{-1}\sinh(gs)$, $r~=~g^{-1}\cosh(gs)$, for $g$ the acceleration parameter. Here $g$ counters the gravitation of the Earth. If the gravity potential is turned on we can then write the time solution as $t~=~g^{-1}\sinh(gs~+~\gamma)$, which we input into the third equation $$0~=~-(1~-~2\phi/c^2)g\cosh(gs)\sinh(gs)~+~g\cosh(gs~+~\gamma)\sinh(gs~+~\gamma)$$ $$=~-(1~-~2\phi/c^2)\frac{g}{2}\sinh(2gs)~+~\frac{g}{2}\sinh(2gs~+~2\gamma).$$ If we consider weak fields, small accelerations and small proper time $s$ we have $$0~\simeq~-(1~-~2\phi/c^2)g^2s~+~(g^2s~+~g\gamma),$$ where $\gamma~\simeq~-2\phi gs/c^2$.
The coordinate time is reduced with the turning on of the acceleration. This implies that the watch on the accelerated frame will mark off a shorter interval of time than the watch which is placed on a geodesic motion in the local gravity field with acceleration $g$. This is a gravitational version of the twin paradox. The twin which travels outwards and back is on an accelerated frame, which is a path in spacetime that is not extremal (not of maximal proper time). As a result the proper time marked off is shorter.
-
Schwarzschild metric seems like overkill -- if we are talking about a normal person throwing a normal watch, we could just use an accelerated frame of reference in flat space. Also the comment I made to genneth's answer about throw/catch time applies also -- this is qualitatively different from the twin paradox in that respect. – Retarded Potential Mar 28 at 17:18
http://physics.stackexchange.com/questions/tagged/electricity+superconductivity
# Tagged Questions
### How can Ohm's law be correct if superconductors have 0 resistivity (5 answers, 1k views)
Ohm's law states that the relationship between current (I), voltage (V) and resistance (R) is $$I = \frac{V}{R}$$ However superconductors cause the resistance of a material to go to zero, and ...
### Drift velocity of electrons in a superconductor (1 answer, 186 views)
Is there a formula for the effective speed of electron currents inside superconductors? The formula for normal conductors is: $$V = \frac{I}{nAq}$$ I wonder if there are any changes to this ...
### Faraday's law and superconductivity (1 answer, 173 views)
According to Faraday's law of induction, volts = -Number of coils in a solenoid * change in strength of magnet / change in time. This doesn't take into account distance or speed, only time. If amps = ...
### If something has zero resistance, does it have infinite amperage? (4 answers, 1k views)
If amps = volts / ohms, and ohms is 0, then what is x volts / 0 ohms?
### How can I measure the conductivity of a copper rod? (2 answers, 330 views)
I would like to perform an experiment to measure the conductivity of a copper rod. What device can I use to perform the experiment? Is there such a thing as a conductivity meter? All I found was an ...
### Superconductors and electrical fields (1 answer, 90 views)
I have been looking around to figure out how superconductors are made. What ways are there to create a superconductor that don't involve a coolant like liquid nitrogen? Is it possible to cause a ...
http://www.territorioscuola.com/wikipedia/en.wikipedia.php?title=Change_of_variables
# Change of variables
In mathematics, a change of variables is a basic technique used to simplify problems in which the original variables are replaced with other variables derived from the originals; the new and old variables being related in some specified way. The intent is that when expressed in new variables, the problem may become simpler, or equivalent to a better understood problem.
A very simple example of a useful variable change can be seen in the problem of finding the roots of the sixth order polynomial:
$x^6 - 9 x^3 + 8 = 0. \,$
Polynomial equations of degree five and higher are in general impossible to solve in terms of radicals. This particular equation, however, may be simplified by defining a new variable $x^3 = u$. Substituting this into the polynomial:
$u^2 - 9 u + 8 = 0 \,$
which is just a quadratic equation with solutions:
$u = 1 \quad \mbox{and} \quad u = 8.$
The solution in terms of the original variable is obtained by replacing the original variable:
$x^3 = 1 \quad \mbox{and} \quad x^3 = 8 \quad \Rightarrow \qquad x = (1)^{1/3} = 1 \quad \mbox{and} \quad x = (8)^{1/3} = 2.\,$
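The substitution can also be carried out mechanically; here is a minimal sympy sketch (assuming sympy is available and, following the text above, taking only the real cube roots):

```python
# Solve x**6 - 9*x**3 + 8 = 0 by the substitution u = x**3.
import sympy as sp

u = sp.symbols('u')
roots_u = sp.solve(u**2 - 9*u + 8, u)              # [1, 8]
roots_x = [sp.real_root(r, 3) for r in roots_u]    # real cube roots: [1, 2]
print(roots_u, roots_x)
```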
## Simple Example
Consider the system of equations
$xy+x+y=71$
$x^2y+xy^2=880$
where $x$ and $y$ are positive integers with $x>y$. (Source: 1991 AIME)
Solving this normally is not terrible, but it may get a little tedious. However, we can rewrite the second equation as $xy(x+y)=880$. Making the substitution $s=x+y, t=xy$ reduces the system to $s+t=71, st=880.$ Solving this gives $(s,t)=(16,55)$ or $(s,t)=(55,16).$ Back-substituting the first ordered pair gives us $x+y=16, xy=55$, which easily gives the solution $(x,y)=(11,5).$ Back-substituting the second ordered pair gives us $x+y=55, xy=16$, which gives no solutions. Hence the solution that solves the system is $(x,y)=(11,5)$.
## Formal introduction
Let $A$, $B$ be smooth manifolds and let $\Phi: A \rightarrow B$ be a $C^r$-diffeomorphism between them, that is: $\Phi$ is a $r$ times continuously differentiable, bijective map from $A$ to $B$ with $r$ times continuously differentiable inverse from $B$ to $A$. Here $r$ may be any natural number (or zero), $\infty$ (smooth) or $\omega$ (analytic).
The map $\Phi$ is called a regular coordinate transformation or regular variable substitution, where "regular" refers to the $C^r$-ness of $\Phi$. Usually one will write $x = \Phi(y)$ to indicate the replacement of the variable $x$ by the variable $y$ by substituting the value of $\Phi$ in $y$ for every occurrence of $x$.
## Other examples
### Coordinate transformation
Some systems can be more easily solved when switching to cylindrical coordinates. Consider for example the equation
$U(x, y, z) := (x^2 + y^2) \sqrt{ 1 - \frac{x^2}{x^2 + y^2} } = 0.$
This may be a potential energy function for some physical problem. If one does not immediately see a solution, one might try the substitution
$\displaystyle (x, y, z) = \Phi(r, \theta, z)$ given by $\displaystyle \Phi(r, \theta, z) = (r \cos(\theta), r \sin(\theta), z)$.
Note that if $\theta$ runs outside a $2\pi$-length interval, for example $[0, 2\pi]$, the map $\Phi$ is no longer bijective. Therefore $\Phi$ should be limited to, for example, $(0, \infty) \times [0, 2\pi) \times (-\infty, \infty)$. Notice how $r = 0$ is excluded, for $\Phi$ is not bijective at the origin ($\theta$ can take any value, and the point will be mapped to $(0, 0, z)$). Then, replacing all occurrences of the original variables by the new expressions prescribed by $\Phi$ and using the identity $\sin^2 x + \cos^2 x = 1$, we get
$V(r, \theta, z) = r^2 \sqrt{ 1 - \frac{r^2 \cos^2 \theta}{r^2} } = r^2 \sqrt{1 - \cos^2 \theta} = r^2 \sin\theta$.
Now the solutions can be readily found: $\sin(\theta) = 0$, so $\theta = 0$ or $\theta = \pi$. Applying the inverse of $\Phi$ shows that this is equivalent to $y = 0$ while $x \not= 0$. Indeed we see that for $y = 0$ the function vanishes, except for the origin.
Note that, had we allowed $r = 0$, the origin would also have been a solution, though it is not a solution to the original problem. Here the bijectivity of $\Phi$ is crucial.
### Differentiation
Main article: Chain rule
The chain rule is used to simplify complicated differentiation. For example, to calculate the derivative
$\frac{d}{d x}\left(\sin(x^2)\right)\,$
the variable x may be changed by introducing $x^2 = u$. Then, by the chain rule:
$\frac{d}{d x} = \frac{d}{d u} \frac{d u}{d x} = \frac{d}{d x}\left(u\right) \frac{d}{d u} = \frac{d}{d x}\left(x^2\right) \frac{d}{d u} = 2 x \frac{d}{d u}\,$
so that
$\frac{d}{d x}\left(\sin(x^2)\right) = 2 x \frac{d}{d u}\left(\sin(u)\right) = 2 x \cos(x^2)\,$
where in the very last step u has been replaced with $x^2$.
### Integration
Main article: Integration by substitution
Difficult integrals may often be evaluated by changing variables; this is enabled by the substitution rule and is analogous to the use of the chain rule above. Difficult integrals may also be solved by simplifying the integral using a change of variables given by the corresponding Jacobian matrix and determinant. Using the Jacobian determinant and the corresponding change of variable that it gives is the basis of coordinate systems such as polar, cylindrical, and spherical coordinate systems.
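For example, to evaluate
$\int 2 x \cos(x^2)\, d x$
one may substitute $u = x^2$, so that $d u = 2 x \, d x$ and
$\int 2 x \cos(x^2)\, d x = \int \cos(u)\, d u = \sin(u) + C = \sin(x^2) + C,$
where in the last step $u$ has been replaced with $x^2$, just as in the differentiation example above.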
### Differential equations
Variable changes for differentiation and integration are taught in elementary calculus and the steps are rarely carried out in full.
The very broad use of variable changes is apparent when considering differential equations, where the independent variables may be changed using the chain rule or the dependent variables are changed resulting in some differentiation to be carried out. Exotic changes, such as the mingling of dependent and independent variables in point and contact transformations, can be very complicated but allow much freedom.
Very often, a general form for a change is substituted into a problem and parameters picked along the way to best simplify the problem.
### Scaling and shifting
Probably the simplest change is the scaling and shifting of variables, that is replacing them with new variables that are "stretched" and "moved" by constant amounts. This is very common in practical applications to get physical parameters out of problems. For an nth order derivative, the change simply results in
$\frac{d^n y}{d x^n} = \frac{y_\text{scale}}{x_\text{scale}^n} \frac{d^n \hat y}{d \hat x^n}$
where
$x = \hat x x_\text{scale} + x_\text{shift}$
$y = \hat y y_\text{scale} + y_\text{shift}.$
This may be shown readily through the chain rule and linearity of differentiation. Consider, for example, the boundary value problem
$\mu \frac{d^2 u}{d y^2} = \frac{d p}{d x} \quad ; \quad u(0) = u(L) = 0$
describes parallel fluid flow between flat solid walls separated by a distance $L$; $\mu$ is the viscosity and $d p/d x$ the pressure gradient, both constants. By scaling the variables the problem becomes
$\frac{d^2 \hat u}{d \hat y^2} = 1 \quad ; \quad \hat u(0) = \hat u(1) = 0$
where
$y = \hat y L \qquad \mbox{and} \qquad u = \hat u \frac{L^2}{\mu} \frac{d p}{d x}.$
Scaling is useful for many reasons. It simplifies analysis both by reducing the number of parameters and by simply making the problem neater. Proper scaling may normalize variables, that is make them have a sensible unitless range such as 0 to 1. Finally, if a problem mandates numeric solution, the fewer the parameters the fewer the number of computations.
### Momentum vs. velocity
Consider a system of equations
$m \dot v = - \frac{ \partial H }{ \partial x }$
$m \dot x = \frac{ \partial H }{ \partial v }$
for a given function $H(x, v)$. The mass can be eliminated by the (trivial) substitution $\Phi(p) = p/m$. Clearly this is a bijective map from $\mathbb{R}$ to $\mathbb{R}$. Under the substitution $v = \Phi(p)$ the system becomes
$\dot p = - \frac{ \partial H }{ \partial x }$
$\dot x = \frac{ \partial H }{ \partial p }$
### Lagrangian mechanics
Main article: Lagrangian mechanics
Given a force field $\phi(t, x, v)$, Newton's equations of motion are
$m \ddot x = \phi(t, x, v)$.
Lagrange examined how these equations of motion change under an arbitrary substitution of variables $x = \Psi(t, y)$, $v = \frac{\partial \Psi(t, y)}{\partial t} + \frac{\partial\Psi(t, y)}{\partial y} \cdot w$.
He found that the equations
$\frac{ \partial{L} }{ \partial y} = \frac{\mathrm{d}}{\mathrm{d}t} \frac{\partial{L}}{\partial{w}}$
are equivalent to Newton's equations for the function $L = T - V$, where T is the kinetic, and V the potential energy.
In fact, when the substitution is chosen well (exploiting for example symmetries and constraints of the system) these equations are much easier to solve than Newton's equations in Cartesian coordinates.
http://mathhelpforum.com/discrete-math/129252-combinations-permutations-counting-probability.html
1. ## combinations permutations counting and probability?
It turns out counting things really isn't that easy. Any help with explanations and formulae would be greatly appreciated.
A group of 30 people consists of 15 men and 15 women. Each of the 30 choose one cooldrink from 10 types:
(1) How many ways could they do this? (I'm thinking 10^30?)
(2) How many different combinations of cooldrinks could be ordered from the cafeteria where you don't care who gets which but only want to get the number of each type right? (Is this in terms of n because they don't tell us the number of each type? Formula? I don't understand this question.)
Given the group orders
15 of type 1
6 of type 2
3 of type 3
6 of type 4
(3)In how many different ways can I distribute the cooldrinks amongst the 30 people if I don't care who ordered what?(30! ways?)
(4) What is the probability that a random division of the cooldrink amongst the 30 people give each what they ordered?
I don't even know where to start with the second and fourth questions so any input would be an immense help. Thanks in advance
2. Originally Posted by chocaholic
A group of 30 people consists of 15 men and 15 women. Each of the 30 choose one cooldrink from 10 types:
(1) How many ways could they do this? (I'm thinking 10^30?)
(2) How many different combinations of cooldrinks could be ordered from the cafeteria where you don't care who gets which but only want to get the number of each type right? (Is this in terms of n because they don't tell us the number of each type? Formula? I don't understand this question.)
Given the group orders
15 of type 1
6 of type 2
3 of type 3
6 of type 4
(3)In how many different ways can I distribute the cooldrinks amongst the 30 people if I don't care who ordered what?(30! ways?)
(4) What is the probability that a random division of the cooldrink amongst the 30 people give each what they ordered?
Number 1 is correct.
Number 2 is a multi-selection (multi-set) problem.
The number of ways to place $N$ identical objects into $K$ distinct cells is $\binom{N+K-1}{N}$.
So in #2 $N=30~\&~K=10$, we are selecting 30 from 10.
#3 is a matter of combinations: $\binom{30}{15}\binom{15}{6} \binom{9}{3} \binom{6}{6}$
What do you think #3 has to do with #4?
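If it helps to see the actual sizes of these numbers, here is a quick Python sketch (the variable names are mine) evaluating the counts above:

```python
# Evaluate the counts from the formulas above.
from math import comb

ways_each_person_chooses = 10 ** 30                                     # question (1)
ways_order_totals = comb(30 + 10 - 1, 30)                               # question (2)
ways_distribute = comb(30, 15) * comb(15, 6) * comb(9, 3) * comb(6, 6)  # question (3)

print(ways_each_person_chooses)  # 1000000000000000000000000000000
print(ways_order_totals)         # 211915132
print(ways_distribute)           # 65214507758400
```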
3. Hi Plato
Thanks for the input. I still don't quite understand the multi-set one but I'll read up on it. As to the relation between #3 and #4, I think the probability = the number of combinations / the ways to distribute = answer#2/answer#3. Am I on the right track?
http://unapologetic.wordpress.com/2008/04/15/convex-functions-are-continuous/?like=1&source=post_flair&_wpnonce=5d5a17dad5
# The Unapologetic Mathematician
## Convex Functions are Continuous
Yesterday we defined a function $f$ defined on an open interval $I$ to be “convex” if its graph lies below all of its secants. That is, given any $x_1<x_2$ in $I$, for any point $x\in\left[x_1,x_2\right]$ we have
$\displaystyle f(x)\leq f(x_1)+\frac{f(x_2)-f(x_1)}{x_2-x_1}(x-x_1)$
which we can rewrite as
$\displaystyle\frac{f(x)-f(x_1)}{x-x_1}\leq\frac{f(x_2)-f(x_1)}{x_2-x_1}$
or (with a bit more effort) as
$\displaystyle\frac{f(x_2)-f(x_1)}{x_2-x_1}\leq\frac{f(x_2)-f(x)}{x_2-x}$
That is, the slope of the secant above $\left[x_1,x\right]$ is less than that above $\left[x_1,x_2\right]$, which is less than that above $\left[x,x_2\right]$. Here’s a graph to illustrate what I mean:
The slope of the red line segment is less than that of the green, which is less than that of the blue.
In fact, we can push this a bit further. Let $s$ be the function which takes a subinterval $\left[a,b\right]\subseteq I$ and gives back the slope of the secant over that subinterval:
$\displaystyle s(\left[a,b\right])=\frac{f(b)-f(a)}{b-a}$
Now if $\left[x_1,x_2\right]$ and $\left[x_3,x_4\right]$ are two subintervals of $I$ with $x_1\leq x_3$ and $x_2\leq x_4$ then we find
$s(\left[x_1,x_2\right])\leq s(\left[x_1,x_4\right])\leq s(\left[x_3,x_4\right])$
by using the above restatements of the convexity property. Roughly, as we move to the right our secants get steeper.
If $\left[a,b\right]$ is a subinterval of $I$, I claim that we can find a constant $C$ such that $\left|s(\left[x_1,x_2\right])\right|\leq C$ for all $\left[x_1,x_2\right]\subseteq\left[a,b\right]$. Indeed, since $I$ is open we can find points $a'$ and $b'$ in $I$ with $a'<a$ and $b<b'$. Then since secants get steeper we find that
$s(\left[a',a\right])\leq s(\left[x_1,x_2\right])\leq s(\left[b,b'\right])$
giving us the bound we need. This tells us that within $\left[a,b\right]$ we have $|f(x_2)-f(x_1)|\leq C|x_2-x_1|$ (the technical term here is that $f$ is “Lipschitz”, which is what Mr. Livshits kept blowing up about), and it’s straightforward from here to show that $f$ must be uniformly continuous on $\left[a,b\right]$, and thus continuous everywhere in $I$ (but maybe not uniformly so!)
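Explicitly, given $\epsilon>0$ we may take $\delta=\epsilon/C$: for $x_1,x_2\in\left[a,b\right]$ with $|x_2-x_1|<\delta$ we get $|f(x_2)-f(x_1)|\leq C|x_2-x_1|<\epsilon$, which is exactly uniform continuity on $\left[a,b\right]$.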
Posted by John Armstrong | Analysis, Calculus
## 2 Comments »
1. Nice blog you have here!
Indeed, a convex function defined on an open interval I needs to be continuous. The crucial part of the proof seems to be the existence of a’ and b’, which can fail if we consider cl(I), in which case the function can have a discontinuity at the boundary of the domain. Although it would be a fairly funky function, it would still be one.
Comment by rasha | November 19, 2009 | Reply
2. Not even that funky; it’s just a step discontinuity. We should require the convexity criterion to hold over a (non-compact) open set.
Comment by | May 14, 2011 | Reply
http://physics.stackexchange.com/questions/56553/are-newtons-law-of-cooling-and-stefans-law-related
# Are Newton's law of Cooling and Stefan's law related?
Many Indian school textbooks claim to prove Newton's law of cooling from Stefan's law of black-body radiation.
As far as I am aware, Newton's law is based on cooling by convection currents and Stefan's law on radiation. There is not supposed to be any relation between them.
Question: Is there any relation between them, and can Newton's law of cooling be derived from Stefan's law?
I found many answers and resources on Google, but an answer referring to a well-established paper, book or resource of same kind will be highly appreciated.
UPDATE
There's a question in the syllabus they teach us, such as "Derive Newton's law of Cooling from Stefan's Law." And here's one of the links which shows the solution. They are approximating Stefan's law to obtain Newton's law (mathematically) by considering $T-T_0$ very small. They even claim that Newton's law is applicable only for small temperature differences, whereas in reality Newton's law is applicable for all temperature ranges.
Please help. Is there any strong reference which can help shut down this misconception? I understand why they are different, but a reference would help a lot.
## 2 Answers
You are correct - the Stefan-Boltzmann law and Newton's Law of Cooling are unrelated. The former deals only with radiation heat exchange, whereas the latter deals with conduction.
This can be considered mathematically as well. The Stefan-Boltzmann law states that heat is transferred at a rate proportional to the fourth power of temperature:
$\frac{dT}{dt} = -k(T^4 - T^4_0)$
whereas Newton's Law of Cooling involves a first power rate of heat transfer:
$\frac{dT}{dt} = -k(T - T_0)$
This causes the two laws to be fundamentally different and unrelated.
I know. I made an update in the question. What I am telling you now is a question in exams for years. Can you help justify that with a strong reference? – Cheeku Mar 11 at 23:12
Newton's Law of Cooling is fundamentally an empirical relation for the rate of heat transfer into a body in the limit of a small temperature difference between the body and its surroundings. Given any arbitrary heat transfer law,
$\dot{Q} = f(T)$,
a corresponding first order law of cooling can be derived by performing a Taylor expansion around the equilibrium temperature $T_0$ as follows:
$\dot{Q} = f'(T_0) \cdot (T - T_0) + \mathcal{O}(T-T_0)^2$
The mechanism of heat transfer here can be arbitrary, since Newton's Law of Cooling holds for any such mechanism in the limit where $T$ does not differ too much from $T_0$. Indeed, in the heat transfer literature, you will find that heat transfer coefficients are reported for systems involving static conduction, convection, radiation, or any combination of these mechanisms.
From the equation above, we can see by inspection that given a heat transfer law $f(T)$, the heat transfer coefficient $h$ in Newton's Law of Cooling is given by
$h = f'(T_0)$
Thus, in the specific example of the Stefan-Boltzmann law, we have
\begin{align*} \dot{Q} &= \sigma_B \,(T^4-T_0^4)\\&= 4\, \sigma_B\,T_0^3 \,\left(T - T_0 \right ) + \mathcal{O} \left(T-T_0\right)^2 \\ h &= 4 \, \sigma_B \, T_0^3 \end{align*}
Your confusion arises from wrongly thinking of Newton's Law of Cooling as a fundamental law of heat transfer, where in fact it is simply an approximation that makes solving heat transfer problems much easier in the limit of small temperature differences. So Newton's Law of Cooling is not strictly valid for all temperatures, or put in a different way, the heat transfer coefficient $h$ in the law will take on different values at different temperatures. For simple systems like the one above, $h$ can be derived from first principles, but in practice it must be estimated from experimental data.
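As a numerical illustration of the expansion above (a rough sketch in Python; the ambient temperature $T_0 = 300$ K and the use of the bare Stefan-Boltzmann constant are arbitrary choices, ignoring emissivity and area), one can compare the exact radiative rate with the linearized Newton-type rate $h(T-T_0)$, $h = 4\sigma_B T_0^3$:

````
sigma = 5.67e-8           # Stefan-Boltzmann constant, W m^-2 K^-4
T0 = 300.0                # equilibrium / ambient temperature in K (arbitrary choice)
h = 4 * sigma * T0**3     # heat transfer coefficient from the Taylor expansion

for dT in (1.0, 10.0, 50.0, 200.0):
    T = T0 + dT
    exact = sigma * (T**4 - T0**4)      # Stefan-Boltzmann rate (per unit area)
    linear = h * (T - T0)               # Newton's-law approximation
    rel_err = abs(exact - linear) / exact
    print(f"dT = {dT:6.1f} K   exact = {exact:9.2f}   linear = {linear:9.2f}   rel. error = {rel_err:.1%}")

# The approximation is very good for small dT and degrades as dT grows,
# which is the precise sense in which Newton's law follows from Stefan's law.
````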
So do you suggest that deriving Newton's law of cooling from Stefan's law is valid? – Cheeku Mar 12 at 1:31
Yes it is indeed valid, and as I said above a version of Newton's law of cooling can be derived from ANY fundamental heat transfer law. Newton's "law" is just a Taylor expansion put in fancy terms. However, when the actual heat transfer law depends highly nonlinearly on temperature (like the Stefan-Boltzmann law), the Newton's law approximation will only be valid over a very narrow temperature range. In contrast, in the static conduction systems we are more accustomed to, the heat transfer rate is linear in temperature over a much wider range. – Arvind Kannan Mar 12 at 1:38
Doesn't Stefan's law talk about radiation, while Newton's law of cooling, as empirically stated, talks about convection and conduction? How then can Newton's law of Cooling be an approximation of Stefan's law? – Cheeku Mar 12 at 1:41
Incorrect, Newton's law of cooling does not imply any particular form of heat transfer - it is just a convenient functional form in which to express the heat transfer rate. It is an approximation of WHATEVER law happens to be governing the heat transfer process - in conduction this is the heat equation, in convection it is the heat equation + Navier-Stokes equations, in radiation it is the Stefan-Boltzmann law – Arvind Kannan Mar 12 at 1:45
http://math.stackexchange.com/questions/tagged/differential-equations+reference-request
# Tagged Questions
1answer
46 views
### Problem books in ODE
I'm studying Ordinary differential equations right now in the level of Hartman's book. I've never seen problem books in ODE in this level even if you consider it without solutions. I would like to ...
2answers
69 views
### resources to study PDE from
I am an undergrad engineering student. I recently completed my second year, with that said, I have taken several calculus courses. Most recently I completed differential equations and multivariable ...
1answer
30 views
### Dynamical Systems. Bendixson's and Dulac-Bendixson's theorems.
I am looking for a place to read the proofs of Bendixsons and Dulac-Bendixsons theorems. Namely let D be a simply connected set and the following system be defined in D. $$\dot x=P(x,y)$$ \dot ...
1answer
60 views
### Results for $y^{\prime\prime}(x) = a(x)y(x)$, where $a(x) > 0$.
I'm looking for references to any known results regarding solutions to the following 2nd order ODE $y''(x)=a(x)y(x)$, where $a(x)>0$ and $x \in \mathbb{R}$. Any help would be appreciated.
1answer
109 views
### Does a bound on a solution to an ODE allow for it to be defined over all $t \in \Bbb R$?
Consider the ODE $$x^{(n)}(t) = f(t, x, x^{(1)}, \dots, x^{(n-1)})$$ Much of the books I have read through talk about results for very loose conditions on $f$. My first question is are there any ...
0answers
32 views
### References that discuss systems of ODEs on the non-negative orthant of $\mathbb{R}^n$?
Does anyone know of any references discussing initial value problems on the non-negative orthant? More specifically, consider the initial value problem \$\frac{dx}{dt}=f(x),\quad\quad ...
0answers
28 views
### Are there references on solving a system of first order nonlinear odes?
I would like to learn about what kinds of systems of first order nonlinear odes may have exact solutions, while trying to solve my previous question (which is a system of nonlinear, separable and ...
0answers
90 views
### George Simmons' “Differential Equations with Applications and Historical Notes” vs. “Differential Equations: Theory, Technique, and Practice”
I've heard much acclaim for George F. Simmons' "Differential Equations with Applications and Historical Notes" (2nd edition). I've noticed there's a newer book by Simmons and Krantz entitled ...
0answers
34 views
### Solutions of linear ODE with quadratic coefficients (reference request)
I am interested in the linear differential equation: $$\dot{x}(t) = (A t^2 + Bt + C) x(t)$$ where $A, B, C \in \mathbb{R}_{n \times n}$ and $x(t) \in \mathbb{R}^n$. Does anyone know about the ...
1answer
127 views
### Looking for a logically coherent book for the self-study of differential equations
I'm looking for a logically coherent book for the self-study of differential equations. Let me clarify. By logically coherent, I don't mean proofs of the limit laws, uniqueness theorems etc. By ...
3answers
203 views
### Mathematical applications of ordinary differential equations.
I'm looking for more mathematically oriented applications of ODEs (if possible of first order equations). I've browsed through several books and they are all full of physics applications and very ...
0answers
26 views
### Links to pdf-articles or books where there is an information on some linear integral operator
Please write me links to pdf-articles or books where there is some information on properties of operators like these: $$(Af)(x,y)=\int_{D}\frac{f(z) \, dz}{|x-z| |z-y|}$$ or (Bf)(x,y)=\int_D ...
2answers
63 views
### Can someone recommend a good textbook for a 3rd year ordinary differential equations class?
The class that I'm taking is called, Intermediate Ordinary Differential Equations and our required textbook is called Differential equations, Dynamical Systems and an Introduction to Chaos by Morris ...
0answers
10 views
### Qualifying Parameters
you have two parameters, 1) rates of trees per land size, ranging from 30%-100%, and 2) rates of birds per land size, ranging from 5%-30% goal is that you're trying to find out which is overall ...
0answers
46 views
### ODE: continuous dependence on parameters
Is it true that the solutions of the problem: \begin{cases} \frac{\text{d}}{\text{d} s} [s^{2-2/N} u^\prime (s)] + \frac{\lambda}{c_N^2}\ u(s)=0 \\ u(\bar{s})=1\\ u^\prime ...
6answers
390 views
### Free differential equations textbook?
I've seen questions on what are some good differential equations textbook and people generally points to Ordinary Differential Equations by Morris Tenenbaum and Harry Pollard and so on I was ...
1answer
210 views
### How does one parameterize the surface formed by a *real paper* Möbius strip?
Here is a picture of a Möbius strip, made out of some thick green paper: I want to know either an explicit parametrization, or a description of a process to find the shape formed by this strip, as ...
1answer
84 views
### Harmonic Extension
Let be $u$ a harmonic function defined on an open set $\Omega \setminus \{p\} \subset \mathbb{C}$ of the complex plane. Show that if $u$ is bounded in a neighborhood of $p$ then $u$ admits a harmonic ...
3answers
67 views
### Is following system of nonlinear ODEs recognized?
The following system of ODEs – is it recognized as distinct system, with meaningful background and uses? $$\frac{dx}{dt} = - [x(t)]^2 - x(t)y(t)$$ $$\frac{dy}{dt} = - [y(t)]^2 - x(t)y(t)$$ This is ...
1answer
355 views
### Looking for a book on Differential Equations *with solutions*
I'm studying differential equations (specifically Laplace Transforms) right now with my college assigned 'Differential Equations with Application and Historical Notes'-George F Simmons. While I like ...
0answers
154 views
### Addition formula for $f_n(x+y)$ in closed form.
$n$ is a positive integer. $$f_n(x)^n+\left(\frac{df_n(x)}{dx}\right)^n=1$$ $f_n(0)=0$, $f_n'(0)=1$ then I am looking for the addition formula for $f_n(x+y)$ in closed form. if $n=1$ then ...
1answer
111 views
### What is a strong stable manifold?
For a dynamical system in $\mathbb{R}^n$ given by $\dot{x} = f(x)$, and a fixed point $p$, one defines stable and unstable manifolds at the point $p$. These are well documented, and a quick ...
0answers
88 views
### Differential Equations, Probability/Statistics, Optimization Problem - Relations?
While I am working on some physical/mathematical problems, I feel strongly that these three areas are almost the identical thing, except that they have different methods/from different aspects to ...
0answers
75 views
### Measure-driven differential equations
Background: I need some help to understand the concept behind measure-driven differential equations. The solution of an ordinary differential equation is continuous. In order to describe discontinuous ...
2answers
68 views
### What do I need to know to simulate many particles, waves, or fluids?
I've never had a numerical analysis course so I don't know what I need to know. I'm just wondering what kind of books I should get to make me able to simulate these things. I'm wanting to simulate ...
2answers
86 views
### Reference request: stability theory in infinite dimensional dynamical systems/ partial differential equations
I am looking for some references (text books, elementary review papers, journal articles etc) regarding the phenomenon of breakdown in stability for (nonlinear) partial differential equations, i.e if ...
2answers
360 views
### ODE book recommendation
I have just completed my first year study and know elementary analysis and a little bit functional analysis. I found that most of the ODE books just focus on calculation but no substantial explanation ...
2answers
126 views
### Method to solve $xx'-x=f(t)$
I would like to resolve this differential equation: $xx'-x=f(t)$ any suggestions (or any online texts on similar differential equation) please? Thanks.
1answer
60 views
### Comparison theorem for systems of ODE
Let vector-function $x(t)$ satisfy a differential equation $$\dot x = f(x),$$ and a vector-function $y(t)$ satisfy a differential inequality $$\dot y \leq f(y)$$ with starting positions \$y(0) ...
1answer
80 views
### How to show existence and uniqueness of a SL problem with von Neumann BCs
Let $f\in C[0,1]$ be a continuous function and consider for $x\in(0,1)$ the Sturm-Liouvile problem $$-u''(x)+x\cdot u(x)=f(x) \tag1$$ where $u'(0)=u'(1)=0.$ I need to show that for any $f\in C[0,1]$ ...
1answer
87 views
### Citable Reference for Picard's Theorem in Banach Space
I was wondering if anyone knew of a legitimate citable reference where Picard's Theorem on the existence of solutions to ODEs in Banach space is proven? For some reason I can only find proofs for the ...
0answers
77 views
### Literature on Riccati equations (algebraic and differential)
Advise me please some book on algebraic and differential Riccati equations: I'm interested in such questions as theorems of existence, uniqueness and extendibility of solutions of differential ...
1answer
69 views
### Exponential stability of inhomogeneous linear ODE's
Can anybody give me a good reference which under suitable assumptions discusses exponential stability of $0$ for the equation $\dot{u}_t = A(t)u(t) + b(t)$ Here $u_t\in\mathbb R^n$ is the unknown, ...
2answers
137 views
### Newtonian potential of a rotationally-invariant function
Lately I read up in the wikipedia article about the Newtonian potential, that for any compactly supported continuous function $f: \mathbb{R}^d \rightarrow \mathbb{R}$ that is rotationally invariant ...
0answers
64 views
### test function to check region of stability - A stability
I have an ODE that increments exponentially and need to 'use the test function method' to describe its stability region and whether it's A-stable. Can anyone point me to a resource that's written for ...
1answer
266 views
### References to re-learn differentiation and integration
I'm looking to re-learn "differentiation and integration", it has really been a long time since I touched the subject. I'm considering starting with Algebra then differentiation and integration. ...
3answers
100 views
### Concise ODEs reference?
Is there any text that I can use as a short reference for the standard techniques for solving basic ODEs? I currently have been using Boyce and diPrima as my ODEs text, and it is far too wordy for my ...
1answer
88 views
### Basic Reference material about ODEs such as saparability with calculations and examples?
I am trying to show this kind of non-linear $y''''=y'y''/(1+x)$ in normal form. For example here if $y=e^{x}\rightarrow y^{(n)}=e^{x}\rightarrow x=-1$, where $y^{(n)}$ ...
1answer
133 views
### Suggestions for a Global Analysis book
can somebody tell me some good books or lecture notes in "global analysis" ? I am a newcomer in this subject. thanks in advance. greetings trito
3answers
285 views
### Theory of the Mathieu Operator
How important is the theory of the Mathieu operator in mathematics/applied mathematics? What are the major mathematical concepts required to study it? The Mathieu operator is an ordinary periodic ...
2answers
293 views
### Essay about the art and applications of differential equations?
I teach a high school calculus class. We've worked through the standard derivatives material, and I incorporated a discussion of antiderivatives throughout. I've introduced solving "area under a ...
3answers
1k views
### What is a good differential equations textbook?
I have taken a lot of math in university, but chose to omit differential equations. Unfortunately, now I have to read computer science proofs that use them, mostly ODEs, and this is always a struggle. ...
1answer
343 views
### Numerical solving a constrained system of differential equation
I am in trouble on finding a numerical technique to solve the following system of equations $$\ddot q_1(t)=f_1(q_1(t),q_2(t))$$ $$\ddot q_2(t)=f_2(q_1(t),q_2(t))$$ with a constrain of the kind: ...
1answer
126 views
### Can all first order ODEs be made exact?
Elementary differential equations classes usually cover exact differential equations. These are equations of the form: M(x,y)+N(x,y)y'=0 \qquad \mathrm{such\;that} \qquad \frac{\partial ...
2answers
278 views
### Differential Equations reference for Putnam preparation
I have two problem collections I am currently working through, the "Berkeley Problems in Mathematics" book, and the first of the three volumes of Putnam problems compiled by the MAA. These both ...
2answers
93 views
### High order methods for solving ODEs
I would like to know about really high order methods for solving ODEs. Say of order 30 and higher. What are they? Any surveys/reviews?
1answer
82 views
### Book on stability theory
I am looking for a book on stability theory. More precisely, I am interested in the case of a system of differential equations $\frac{dx}{dt}=Ax + F(x),$ where $A$ is a constant matrix, such that two ...
4answers
154 views
### Where can I find good, free resources on differential equations?
I'd like to know if there are any good online books, lecture notes, videos, tutorials, or similar that are free to the public (on differential equations). Suggestions are welcome!
3answers
199 views
### Numerical Analysis References
Could anyone suggest any good (perhaps online ref papers) reference material on numerical analysis focusing on determining accuracy/estimated errors, rates/orders of convergence especially when ...
0answers
233 views
### Which branch of mathematics is this and what are the introductory references?
I am self-studying a physics textbook on waves. While discussing solutions to linear homogeneous ODEs, the author talked about the exponential as "irreducible" solutions and on a footnote, said that ...
http://math.stackexchange.com/questions/50625/algorithm-to-determine-if-a-diophantine-equation-has-an-infinite-number-of-solut/50715
# Algorithm to determine if a Diophantine Equation has an infinite number of solutions
In their paper, Marker and Slaman proved the decidability of the theory of the natural numbers with the quantifier "for all but finitely many". One can obviously encode the question of whether any diophantine equation has an infinite number of solutions as the question of whether a statement belonging to this language is satisfied by the Naturals or not. Since they proved the theory is decidable, we know that such an algorithm exists. My question is: has anyone tried to come up with an efficient algorithm for that?
The proof of decidability shows that the theory is the same as the corresponding theory of the reals. So the algorithms for deciding sentences about the field of real numbers will also work for the naturals in this language. That gives an upper bound, at least. – Carl Mummert Jul 10 '11 at 11:43
That sounds like an answer to me, Carl (or at any rate, more than just a comment). – Gerry Myerson Jul 10 '11 at 12:29
## 2 Answers
There is a fundamental error in this question.
For a one-variable diophantine equation, the question whether it has infinitely many solutions is rather trivial to answer (and the answer is the same over $\mathbb{Z}$ and over $\mathbb{R}$, as Carl pointed out).
The situation is different with more than one variable. The basic reason for this is that $$\exists^\infty x \exists^\infty y \cdots$$ does not mean
There are infinitely many pairs $x, y$ such that $\cdots$
It rather means,
There are infinitely many $x$ for each of which there are infinitely many $y$ such that $\cdots$
In fact, it is not possible to say there are infinitely many pairs (triples, quadruples, etc.) in the fragment of first-order logic that Marker and Slaman consider in their paper (for the reason that David pointed out).
What about Goedel coding? "There are infinitely many w such that $w=2^x3^y$ and..." – Charles Jul 11 '11 at 1:01
@Charles: The paper does mention that the same result holds when adding exponentials provided Schanuel's Conjecture is true. However, even then, how do you define the corresponding logarithms without $\exists$ and $\forall$? – François G. Dorais Jul 11 '11 at 2:15
Hmm. Is there no total recursive function we can use? I wasn't able to get one, but the requirements for a Goedel numbering are pretty low. To put it another way: if I had an odd() predicate I could make a coding 2^a * (2b+1) recursively, so we can't even test the parity of a number if pairing is impossible. Right? – Charles Jul 11 '11 at 2:43
@Charles: Without $\exists$ and $\forall$ you don't even have bounded quantifiers, so it would take some serious effort to get most primitive recursive functions in this context. – François G. Dorais Jul 11 '11 at 2:48
Exponentials are a distraction, you can encode ordered pairs as "$x$ and $y$ are positive and $w = x + (x+y)(x+y+1)/2$." The trouble is that you don't have $\exists$. So you want to say "$\exists^{\infty} w \exists x \exists y : w = x + (x+y)(x+y+1)/2$ and ...", but you can't ask for $x$ and $y$ to exist. – David Speyer Jul 11 '11 at 15:28
The question of whether a diophantine equation has infinitely many solutions is at least as hard as the question of whether a diophantine equation has any solutions, and is therefore undecidable. (As JDH comments below, in a certain technical sense, the question of whether there are infinitely many solutions may be harder.)
Proof: Let $F(x_1, x_2, \ldots, x_m)=0$ be any diophantine equation. Then $F=0$ has a solution if and only if $F(x_1, x_2, \ldots, x_{m-1}, y-z)=0$ has infinitely many solutions.
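To make the reduction concrete, here is a tiny brute-force sketch (in Python; the one-variable polynomial $F(x)=x^2-4$ and the search bound are arbitrary choices). Since $F=0$ has a solution, $F(y-z)=0$ has infinitely many, one for each representation of $\pm 2$ as a difference $y-z$:

````
def F(x):
    return x * x - 4          # sample diophantine polynomial, F(x) = x^2 - 4

B = 50                        # search bound (illustrative only)
roots = [x for x in range(-B, B + 1) if F(x) == 0]
pairs = [(y, z) for y in range(-B, B + 1)
                for z in range(-B, B + 1) if F(y - z) == 0]

print(roots)       # [-2, 2]
print(len(pairs))  # 198 here; the count grows linearly with B, i.e. infinitely many solutions
````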
Your error is equating "infinitely many" with the much larger "all but finitely many".
UPDATE: As Mohammed points out "It is false that all but finitely many bagels are delicious" is equivalent to "infinitely many bagels are not delicious". And the theorem of Marker and Slaman is for all first order statements, not just for diophantine equations, so we should be able to use this rewording.
On skimming their paper, the first thing that I notice is that they have the quantifier $\forall^{\infty}$ ranging over a single integer variable, not over a $k$-tuple of integers, so you can say "For all but finitely many $x$, the following holds for all but finitely many $y$..." but can't directly say the stronger "For all but finitely many ordered pairs $(x,y)$..." But I'm not sure whether this is the heart of the difference, or just a minor technical issue.
But aren't both quantifiers dual? For example consider the statement $\neg \forall^{\infty} \neg F = 0$, where $\forall^{\infty}$ is the quantifier "for all but finitely many"; doesn't this statement mean there is an infinite number of solutions? – Mohamed Alaa El Behairy Jul 10 '11 at 17:25
(I removed my comment and posted it as an answer.) – François G. Dorais Jul 10 '11 at 21:59
David, you claim that the infinite-solution problem is "as hard as" the solution problem for diophantine equations, and this suffices to answer the question, but it would be more correct to say that it is "at least as hard as," since in fact it is strictly harder. The problem of determining whether a diophantine equation has a integer solution has complexity $\Sigma^0_1$---it is computably enumerable---but the problem of determining in general whether a diophantine equation has infinitely many solutions has complexity $\Pi^0_2$; indeed, it is $\Pi^0_2$-complete, hence strictly harder. – JDH Jul 10 '11 at 23:03
http://mathhelpforum.com/algebra/83151-another-geometric-series-question-print.html
# Another Geometric Series Question.
• April 10th 2009, 12:30 PM
db5vry
Another Geometric Series Question.
A geometric series has first term a and common ratio r. The sum of the first two terms of the geometric series is 7.2. The sum to infinity of the series is 20. Given that r is positive, find the values of r and a. [6]
First I look at the sum to infinity and see that 20 - 20r = a
And then I put 7.2 = $\frac{a (1 - r^2)}{(1 - r)}$
And subbing what I worked out above, $\frac{(20 - 20r) (1 - r^2)}{(1 - r)}$
And then I end up with 7.2 = $\frac{20 - 20r - 20r^2 - 20r^3}{(1 - r)}$
• April 10th 2009, 01:11 PM
stapel
I think you're on the right track, but you maybe went slightly askew.... (Blush)
From the formula for the sum of a geometric series, you have arrived at $a\, =\, 20\, -\, 20r$, where $a$ is the first term of the series.
Now use the fact that the second term is $ar$:
. . . . . $20\, -\, 20r + (20\, -\, 20r)r\, =\, 20\, -\, 20r^2\, =\, 7.2$
Solve by square roots. (Wink)
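For the record, the arithmetic then works out as $20 - 20r^2 = 7.2 \;\Rightarrow\; r^2 = 0.64 \;\Rightarrow\; r = 0.8$ (taking the positive root, as the problem requires), and $a = 20 - 20r = 4$. As a check: $a + ar = 4 + 3.2 = 7.2$ and $\frac{a}{1 - r} = \frac{4}{0.2} = 20$.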
http://math.stackexchange.com/questions/68966/mean-value-of-arithmetic-function/68976
# Mean value of arithmetic function
Suppose we define a mean value $G(f)$ of an arithmetic function $f$ as $$G(f)=\lim_{x \rightarrow \infty} \frac{1}{x \log{x}} \sum_{n \leq x} f(n) \log{n},$$ and suppose now that for an arithmetic function $f$, $G(f)$ exists and is equal to $A$. How can we use this result to show that the ordinary mean value $M(f)=\lim_{x \rightarrow \infty} \frac{1}{x} \sum_{n \leq x} f(n)$ also exists?
Punctuation should go inside double dollar signs (ideally separated by "`\;`"), otherwise it appears on the next line. – joriki Oct 1 '11 at 8:46
## 2 Answers
Via Abel's summation formula: $$\sum_{n\le x} (f(n)\log n)\frac{1}{\log n}=\left(\sum_{n\le x}f(n)\log n\right)\frac{1}{\log x}+\int_2^x \left(\sum_{m\le u} f(m)\log m\right)\frac{du}{u\log^2 u}.$$ Divide by $x$ and subtract, obtain: $$M_x(f)-G_x(f)=\frac{1}{x}\int_2^xG_u(f)\frac{du}{\log u}=\frac{\mathrm{Li}(x)}{x}\left(A+O(1)\right)\to0.$$
You'll want to use "partial summation", also called "summation by parts". Define $G(f;x) = \sum_{n\le x} f(n)\log n$ and $M(f;x) = \sum_{n\le x} f(n)$. Then you can write $M(f;x)$ as a Riemann-Stieltjes integral $$M(f;x) = \int_1^x \frac1{\log t} \, dG(f;t).$$ (Technically the lower endpoint should be $1-\epsilon$.) Then integrating by parts gives $$M(f;x) = \frac{G(f;x)}{\log x} + \int_1^x \frac{G(f;t)}{t(\log t)^2} \,dt. \tag1$$ (Even if you don't know Riemann-Stieltjes integrals, you can still verify this last identity by hand - just split the integral up into intervals of length 1, on which $G$ is constant.)
When you divide both sides of equation (1) by $x$ and take the limit as $x\to\infty$, all that remains to show is that the term with the integral tends to $0$. (Note that $G(f;t)=0$ for $t<2$, so there's no problem with the integral at the lower endpoint.)
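For a numerical sanity check (a rough sketch in Python; taking $f$ to be the indicator of the squarefree numbers, whose mean value is $6/\pi^2$, is just an illustrative choice), one can compare $G_x(f)$ and $M_x(f)$ for moderately large $x$; note that $G_x$ converges slowly, lagging by a factor of roughly $1 - 1/\log x$:

````
import math

def squarefree(n):                      # 1 if n is squarefree, 0 otherwise
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return 0
        d += 1
    return 1

x = 10**5
vals = [squarefree(n) for n in range(1, x + 1)]
M = sum(vals) / x                                    # ordinary mean value M_x(f)
G = sum(v * math.log(n)
        for n, v in enumerate(vals, start=1)) / (x * math.log(x))   # G_x(f)
print(M, G, 6 / math.pi**2)   # M is already close to 6/pi^2; G trails by about a factor 1 - 1/log x
````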
(+1) I just realized my answer is essentially the same thing, I didn't see it at first because I'm not used to RS integration. The integrand in $\mathrm{(1)}$ is $(A+o(1))/t$ so its integral divided by $x$ is $\frac{\log x}{x}(A+O(1))$ which is $o(1)$. Thus the arithmetic mean exists and equals $A$. – anon Oct 1 '11 at 9:40
True ... the integrand is in fact $(A+o(1))/\log t$, but the integral still turns out to be $o(1)$. – Greg Martin Oct 2 '11 at 3:29
Oh yes, you're correct. – anon Oct 2 '11 at 3:43
http://mathoverflow.net/questions/90820/set-theories-without-junk-theorems/90843
## Set theories without “junk” theorems?
Clearly I first need to formally define what I mean by "junk" theorem. In the usual construction of natural numbers in set theory, a side-effect of that construction is that we get such theorems as $2\in 3$, $4\subset 33$, $5 \cap 17 = 5$ and $1\in (1,3)$ but $3\notin (1,3)$ (as ordered pairs, in the usual presentation).
Formally: Given an axiomatic theory T, and a model of the theory M in set theory, a true sentence $S$ in the language of set theory is a junk theorem if it does not express a true sentence in T.
Would it be correct to say that structural set theory is an attempt to get rid of such junk theorems?
EDIT: as was pointed out, $5 \cap 17 = 5$ could be correctly interpreted in lattice theory as not being a junk theorem. The issue I have is that (from a computer science perspective) this is not modular: one is confusing the concrete implementation (in terms of sets) with the abstract signature of the ADT (of lattices). Mathematics is otherwise highly modular (that's what Functors, for example, capture really well), so why not set theory too?
I believe the (nowadays) usual set-theoretic coding of ordered pairs is Kuratowski's `$\{\{x\},\{x,y\}\}$`. So 1 would not be a member of (1,3), but it would be a member of (0,3), which is probably worse junk. – Andreas Blass Mar 10 2012 at 16:41
Some time ago, the question was raised (perhaps by Peter Freyd), on the categories discussion list, whether a finite simple group could be a zero of the Riemann zeta function. I believe someone checked that, with the usual set-theoretic codings of such entities, the answer is negative. Whew! – Andreas Blass Mar 10 2012 at 16:44
Isn't the set theory you are looking for type theory? Why do you want sets, or global membership? – Andrej Bauer Mar 10 2012 at 18:12
But mathematical practice uses type theory, not set theory! It is not acceptable to have junk theorems. Mathematicians want variables to have types, either explicitly ("In this paper we assume that $G$ is a simple group...") or by convention ($f$ is a function, $k$ is an integer, etc). What would happen if a student wrote on a math exam "If $1 \in (x,y)$ then $x = 0$ or $y = 0$"? They would say the statement makes no sense and would refuse to judge its truth value. These are clear indications that we have a type theory. – Andrej Bauer Mar 11 2012 at 6:18
Both some pro-junk and some anti-junk commenters are implicitly using the term ‘set theory’ to mean only material set theory, that is set theory based on a global membership relation. But the OP and other commenters are also talking about structural set theory. What's important is that mathematicians can keep on talking about sets like we always do but still purge the junk statements from our formal language. A structural set theory, while arguably a type theory and certainly quite similar to a type theory, is still a set theory because it is a theory of sets. – Toby Bartels Mar 11 2012 at 16:18
## 7 Answers
What you are describing is the idea of "breaking" an abstraction. That there is an abstraction to be broken is pretty much intrinsic to the very notion of "model theory", where we interpret the concepts in one theory in terms of objects and operations in another one (typically set theory).
It may help to see a programming analogy of what you're doing:
````
#include <assert.h>
#include <stdint.h>
#include <gmp.h>  // GNU multiple-precision library

int main(void)
{
    uint32_t x = 0x12345678;
    unsigned char *ptr = (unsigned char *) &x;  // peek at the bytes of x
    assert( ptr[0] == 0x12 || ptr[0] == 0x78 ); // Junk! (which one depends on endianness)
    const char text[] = "This is a string of text.";
    assert( text[0] == 84 );                    // Junk! ('T' happens to have ASCII code 84)
    // Using the GMP library.
    mpz_t big_number;
    mpz_init_set_ui(big_number, 1234);
    assert( big_number[0]._mp_d[0] == 1234 );   // Junk! (peeks at GMP's internal limb array)
    return 0;
}
````
All of these are examples of the very same thing you are complaining about in the mathematical setting: when you are presented with some sort of 'type', and operations for working on that type, but it is actually implemented in terms of some other underlying notions. In the above:
• I've broken the abstraction of a `uint32_t` representing a number modulo $2^{32}$, by peeking into its byte representation and extracting a byte.
• I've broken the abstraction of a string being made out of characters, by using knowledge that the character `'T'` and the ASCII value `84` are the same thing
• In the third, I've broken the abstraction that `big_number` is an object of type integer, and peeked into the internals of how the GMP library stores such things.
In order to avoid "junk", I think you are going to have to do one of two things:
• Abandon the notion of model entirely
• Realize that you were actually lying in your theorems: it's not that $2 \in 3$ for natural numbers $2$ and $3$, but $i(2) \in i(3)$ for a particular interpretation $i$ of Peano arithmetic. Maybe making the interpretation explicit would let you be more comfortable?
(Or, depending on exactly what you mean by the notation, the symbols $2$ and $3$ aren't expressing constants in the theory of natural numbers, but are instead expressing constants in set theory)
Yes, that accords perfectly with my thinking. But I am not sure one has to abandon the idea of a model entirely - just a model which is in the same 'universe' as the abstraction. Your second point (about $i(2)\in i(3)$) is exactly right, in that making the interpretation explicit would make me hugely more comfortable. [I am working on combining code generation and symbolic computation in a typed setting, where all these interpretations are fully explicit, which made me 'see' these subtleties in mathematics more clearly] – Jacques Carette Mar 10 2012 at 19:17
Here's a question to ponder: does making a model of peano arithmetic out of the real numbers count as being in the same 'universe'? If so, is $\sqrt{11 - 6 \sqrt{2}} + \sqrt{11 + 6 \sqrt{2}} = 6$ a junk theorem? – Hurkyl Mar 10 2012 at 19:35
In computer science, there is a deep theory on how to avoid this kind of junk by forcing type safety. One approach is to view objects as though they were hidden by a non-computable oracle, and you can only access their properties by querying the oracle. – David Harris Mar 10 2012 at 22:03
@David: and a much, much better and more advanced approach is to use techniques of programming language design, of which all the good ones are forms of type theory. @Hurkyl: your question in the context of $\lambda$-calculus and programming languages is answered by realizing that there are two kinds of semantics, Church-style or intrinsic, and Curry-style or extrinsic. – Andrej Bauer Mar 11 2012 at 6:22
@Hurkyl: This is going the other way; a junk theorem is a formal theorem that is not informally correct, while you have an informal theorem that is not formally correct. However, such theorems always have a correct formal analogue, in this case $\sqrt{11 - 6\sqrt{2}} + \sqrt{11 + 6\sqrt{2}} = i(6)$, where $i$ is now the type conversion from natural numbers to real numbers (and actually there are some more applications of $i$ in there). This type conversion is not an artefact but rather something that we very much want; but we suppress all mention of it by abuse of notation. – Toby Bartels Mar 11 2012 at 16:34
I apologize for posting as an answer what should really be a comment, connected to one of Jacques Carette's comments on my earlier answer. Unfortunately, this is way too long for a comment. Jacques asked why we would bother with set-theoretic foundations at all. It happens that I wrote down my opinion about that about 15 years ago (in a private e-mail) and repeated some of it on the fom (= foundations of mathematics) e-mail list. Here's a slightly edited version of that:
Mathematicians generally reason in a theory T which (up to possible minor variations between individual mathematicians) can be described as follows. It is a many-sorted first-order theory. The sorts include numbers (natural, real, complex), sets, ordered pairs and other tuples, functions, manifolds, projective spaces, Hilbert spaces, and whatnot. There are axioms asserting the basic properties of these and the relations between them. For example, there are axioms saying that the real numbers form a complete ordered field, that any formula determines the set of those reals that satisfy it (and similarly with other sorts in place of the reals), that two tuples are equal iff they have the same length and equal components in all positions, etc.
There are no axioms that attempt to reduce one sort to another. In particular, nothing says, for example, that natural numbers or real numbers are sets of any kind. (Different mathematicians may disagree as to whether, say, the real numbers are a subset of the complex ones or whether they are a separate sort with a canonical embedding into the complex numbers. Such issues will not affect the general idea that I'm trying to explain.) So mathematicians usually do not say that the reals are Dedekind cuts (or any other kind of sets), unless they're teaching a course in foundations and therefore feel compelled (by outside forces?) to say such things.
This theory T, large and unwieldy though it is, can be interpreted in far simpler-looking theories. ZFC, with its single sort and single primitive predicate, is the main example of such a simpler theory. (I've left large categories out of T in order to make this literally true, but Feferman has shown how to interpret most of category theory, including large categories, in a conservative extension of ZFC.)
The simplicity and efficiency of ZFC and the fact that T can be interpreted in it (i.e., that all the concepts of T have set-theoretic definitions which make all the axioms of T set-theoretically provable) have, as far as I can see, two main uses. One is philosophical: one doesn't need to understand the nature of all these different abstract entities; if one understands sets (philosophically) then one can explain all the rest. The other is in proofs of consistency and independence. To show that some problem, say in topology, can't be decided in current mathematics means to show it's independent of T. So you'd want to construct lots of models of T to get lots of independence results. But models of T are terribly complicated objects. So instead we construct models of ZFC, which are not so bad, and we rely on the interpretation to convert them into models of T. And usually we don't mention T at all and just identify ZFC with "current mathematics" via the interpretation.
No need to apologize for this second, and wonderful answer. +1! – Asaf Karagila Mar 11 2012 at 23:08
Indeed, that is a wonderful answer. Thank you for posting it, it really does add something new. I wish that, as OP, I could upvote more that once! – Jacques Carette Mar 12 2012 at 2:36
Structural set theory, as described on the nlab page you linked to, is probably the best answer to your question. To avoid junk theorems, one must deviate somewhat from ordinary ZF-style set theory where everything is a set. That's because, once you decide, in the context of such a "material" set theory, that 5 and 17 are to be sets (because there's nothing else for them to be), they have to have a union, and there's no intuitively reasonable choice for that. (I said "union" rather than "intersection" because one might consider the empty set a reasonable intersection; but the union can't be empty unless both sets are.) A very elementary (undergraduate) presentation of some mathematics from this viewpoint is in the book "Sets for Mathematics" by Lawvere and Rosebrugh; a more advanced presentation is (if I remember correctly) Paul Taylor's "Practical Foundations of Mathematics".
The natural follow-up question would be: why stick to 'set theory' and not advocate type theory? – Jacques Carette Mar 10 2012 at 19:11
Even though MO seems to really like your answer, I think that Hurkyl's is actually closer to what I was looking for. – Jacques Carette Mar 10 2012 at 19:18
The question being, "Would it be correct to say that structural set theory is an attempt to get rid of such junk theorems?", the answer I think is "only partly or only if extremely limited."
Clicking on the link, I find a theory called ETCS as an example of structural set theory. ETCS has 0, N (the natural numbers), and S (the successor function) as primitives in its language, and it assumes effectively as axioms the normal assumptions about them (e.g. it assumes the existence and uniqueness of recursion).
Obviously, if you assume 0, N, and S as primitives and make the normal assumptions about them, rather than constructing them and proving the normal assumptions (Russell's honest toil rather than theft), then one can avoid junk theorems about the natural numbers. The same effect could be achieved, by modifying ZFC by introducing the same primitives and assuming, on top of the normal ZFC axioms, the Peano Axioms.
ETCS does not, however, get rid of all junk theorems unless it is only supposed to be about arithmetic and the natural numbers. If it, for instance, is also supposed to allow the construction of the real numbers and the development of analysis, then it will still get junk theorems about the real numbers.
"The same effect could be achieved, by modifying ZFC by introducing the same primitives and assuming, on top of the normal ZFC axioms, the Peano Axioms." No, even if you add $0$, $\mathbb{N}$ and $S$ as primitives, you will still be able to ask whether $\mathbb{N}\in S$ which is a "junk question" (and its answer will be a junk theorem) – Guillaume Brunerie Mar 11 2012 at 15:21
The language about natural numbers isn't in ETCS to avoid junk statements about natural numbers but to serve in an axiom of infinity. But you make a good point, that there can still be junk statements involving things like real numbers. For example, if a real number is defined as a lower set of rational numbers, then we have such junk theorems as $3 \in \pi$. We even have $2 \in 3$, where here $2$ is the rational number $2$ and $3$ is the real number $3$. (The abuse of language here is essentially the same as in material set theory.) – Toby Bartels Mar 11 2012 at 16:26
@Toby: I'm not sure what you mean. I suspect you and I have different systems in mind. In ETCS, I'd define an element of a set $X$ to be a map $1 \to X$. So for "$3 \in \pi$" to make sense, $\pi$ would have to be a set and $3$ would have to be a map $1 \to \pi$. And this isn't the case. @abo: for the same reason, I don't see any evidence of junk theorems in ETCS. – Tom Leinster Mar 12 2012 at 15:37
@Tom - You are right, but there will still be junk theorems about the embeddings of one set into another. For example, if you are representing the real numbers as a monomorphism to 2^Q, you are going to have theorems about "evaluating a real number at a rational", which wouldn't make sense to most people out of context. – Steven Gubkin Mar 12 2012 at 20:43
Tom, in ETCS there are two distinct meanings of $x \in y$. In one of these, $y$ is an abstract set (an object of the category $Set$) and $x$ is an element of $y$ as you said. In the other, there is some abstract set $z$ lying around but unmentioned, $x$ is an element of $z$, and $y$ is a subset of $z$ (meaning a monomorphism to $z$). This latter sense may be written $x \in_z y$, but ordinary mathematical practice abused the notation. If real numbers are encoded in ETCS as lower subsets of rational numbers, then $2_\mathbb{Q} \in_\mathbb{Q} 3_\mathbb{R}$ is a theorem. – Toby Bartels Mar 13 2012 at 6:33
Many of these answers are quite satisfying, but I'd just like to emphasize that much of the confusion may come from overloading of symbols like "$\in$", "$\subset$", "$\cap$", and "$2$", that is, such symbols have multiple context-dependent meanings. In particular, the junk theorems you provide are situations where some kind of overloading has been misinterpreted - indeed, the validity of the theorems may change if you switch to viewing the natural numbers as complex numbers.
The overloading of symbols is useful, because many algebraic and geometric structures like rings and manifolds admit a notion of "underlying set", but we should be careful not to confuse the $\subset$ attached to manifolds-as-we-use-them with the $\subset$ attached to a chosen pure set-theoretic encoding of manifolds. For example, the intersection of submanifolds is likely to look quite complicated once we choose a method to unfold such an operation into a pure set-theoretic formula.
Another way to view junk theorems is to say that they are statements that depend on a non-canonical choice of encoding of mathematical objects as pure sets. This is not to be interpreted as a claim that I know a way to sort out the foundations attached to notions like "non-canonical choice of encoding".
I think the construction of formal theories has a price. The price is that you will get a lot of propositions (not theorems) that would be natural and simple, but that is part of life; you will meet a lot of them all the time.
Among the many subtle realities of mathematics in the 21st century, the most amazing is the lack of imagination. The language of set theory is built from the ground up to be as simple as possible. To appreciate the complexity inherent and information encoded in such simple statements (even the ones you might not find aesthetically pleasing) requires detachment.
This detachment I'm talking about is the clear distinction between syntax and semantics. Statements made in the formal language have absolutely no meaning outside of formal manipulation, and so are not meant to be seen as anything more than symbols without meaning.
It is only when you attach meaning (or an interpretation) to these symbols that something of value can be said.
That having been said:
The examples you give are not actually statements in the language of set theory; they are artifacts of a general lack of communication between logic/model theory and the rest of mathematics. The symbols you strung together (1, $2$, 5, $4 \subset 54$, $\cap$, and so on) are examples of defined notions, which are used as a convenience.
And when we attach meaning to these statements something amazing happens:
What was $2 \in 3$ becomes the obviously true
$\{ \{\}, \{\{\}\} \} \in \{\{\}, \{\{\}\}, \{ \{\}, \{\{\}\} \}\}$
and $1 \in \langle 0, 3 \rangle$ becomes
$\{\{\}\} \in \{ \{ \{\} \}, \{ \{\}, \{ \{\}, \{\{\}\}, \{ \{\}, \{\{\}\} \} \} \} \}$
In Summary:
You are confusing the formal language with the actual interpretation of the language.
As such you are faced with something everybody has known since the 19th century:
Our perception imposes "phantom" structure on the universe in an attempt to have it make sense; not the other way around.
PS: Feel free to edit. You also might want to change the title, since the post I wanted to put here would have gotten me banned.
There's nothing wrong with being honest, but you suggest Jacques has a misunderstanding of set theory, which doesn't seem to be supported by the evidence. You may disagree with him about whether this phenomenon is important, or how to think about it philosophically, but there's no doubt that the statements he points out are true theorems of ZFC (under standard definitions) that would be marked as wrong or meaningless in many elementary mathematics courses. That's at least a strange situation, even though of course they don't look objectionable when the definitions are expanded out. – Henry Cohn Mar 11 2012 at 6:39
OK, maybe I mischaracterized it as a misunderstanding of set theory, but you do refer to "artifacts of your misunderstanding". I don't think it's productive to write MO answers in a tone that can reasonably be read as insulting. – Henry Cohn Mar 11 2012 at 7:04
Changed the wording. – Michael Blackmon Mar 11 2012 at 7:09
ALthough I agree that it's important to distinguish syntax from semantics, I don't see how that distinction helps with the original question. You seem to say that, since 2 is defined in set theory as `$\{\{\},\{\{\}\}\}$`, this is the only meaning for 2; anything else is mere syntax. Jacques's question is based on the fact that mathematicians generally intend a different meaning for 2, not a set at all but a natural number. By formalizing everything in set theory, do we lose the original meaning of 2 and retain only its set-theoretic surrogate? – Andreas Blass Mar 11 2012 at 22:08
Steven: to say that set theory exists just to serve as a formal framework is like saying mathematics exists just to allow physicists use the language. – Asaf Karagila Mar 13 2012 at 11:07
http://mathoverflow.net/questions/121437/complexifying-a-real-banach-space-and-its-dual/121545
## Complexifying a real Banach space and its dual
A standard way to define the "complexification" $E_\mathbb{C}$ of a real Banach space $E$ is to define a complex linear structure on $E\times E$ by (1) $(x,y)+(u,v)=(x+u, y+v)$, (2) $(a+ib)(x,y)=(ax-by, bx+ay)$ and a norm by (3) $\|(x,y)\|^\mathbb{C}=\sup_{\theta\in [0,2 \pi]}\|\cos(\theta)x+\sin(\theta)y\|$. (That $\|\cdot\|^\mathbb{C}$ is a norm requires a little proving.)
This definition gives us what we want when going from real $C(K)$ or $l_p$, etc., to their complex versions.
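As a quick sanity check on definition (3) (a small numerical sketch in Python; the simplest case $E=\mathbb{R}$ with the absolute value is chosen just for illustration), the complexification norm of $(x,y)$ recovers the usual complex modulus $|x+iy|$:

````
import math

def complexification_norm(x, y, norm, samples=100000):
    # ||(x, y)||^C = sup over theta in [0, 2*pi] of ||cos(theta) x + sin(theta) y||,
    # approximated here by sampling theta.
    return max(norm(math.cos(t) * x + math.sin(t) * y)
               for t in (2 * math.pi * k / samples for k in range(samples)))

# E = R with the absolute value: the complexification norm is the complex modulus.
for (x, y) in [(3.0, 4.0), (1.0, 0.0), (-2.0, 5.0)]:
    print(complexification_norm(x, y, abs), abs(complex(x, y)))   # agree up to sampling error
````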
1. I believe I can show that the (complex) Banach dual of $E_\mathbb{C}$ is the complexification of the real dual $E^*$, i.e., $(E_\mathbb{C})^* = (E^*)_\mathbb{C}$. But my proof is somewhat messy. This must surely be in the literature somewhere! Can anyone suggest a reference?
2. How about the converse? What if we know that $E_\mathbb{C} = V^*$ for some complex Banach space $V$. Is $E$ the (real) Banach dual of some Banach space? (Again, a reference would be appreciated).
[Edit Feb. 13, 2013] Based on Bill Johnson's comment below, perhaps I should motivate the question a bit, and revise question 2. For a compact space $K$, we know that the Banach space $C(K)$ over the real scalars is isometrically the dual of a real Banach space if and only if $C(K)$ over the complex scalars is isometrically the dual of some complex Banach space. A standard proof is particular to the situation, going through hyperstonian spaces $K$ and normal measures. I was wondering if this is just a special case of a more general fact. So here is a revision of 2.
Q3. Suppose $E$ is a real Banach lattice and the complexification $E_\mathbb{C}$ is a Banach dual space. Must $E$ then be the dual of some real Banach space?
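As a quick sanity check on the norm in (3) (not part of the original question), here is a small Python sketch for the simplest case $E=\mathbb{R}$, where the complexification should just be $\mathbb{C}$ with the modulus. The supremum is only approximated by sampling $\theta$, and the helper name is an ad hoc choice, not taken from any reference.

```python
import math

def complexification_norm(x, y, base_norm=abs, samples=100000):
    # Approximates ||(x, y)||^C = sup_theta ||cos(theta) x + sin(theta) y||
    # by sampling theta over [0, 2*pi]; base_norm plays the role of ||.|| on E.
    return max(base_norm(math.cos(t) * x + math.sin(t) * y)
               for t in (2 * math.pi * k / samples for k in range(samples + 1)))

# For E = R with the absolute value, the formula should recover the modulus:
# sup_theta |cos(theta) x + sin(theta) y| = sqrt(x^2 + y^2).
x, y = 3.0, 4.0
print(complexification_norm(x, y))   # approximately 5.0
print(math.hypot(x, y))              # 5.0
```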
-
Isn't $E_{\mathbb{C}}$ just $E \otimes_{\mathbb{R}} \mathbb{C}$ with the injective norm? Do you know Grothendieck's work on topological vector spaces? – Martin Brandenburg Feb 11 at 2:35
I believe the standard way is due to Dieudonné: dx.doi.org/10.1090/S0002-9939-1952-0047252-8 where he also proves that the James space is not the underlying real space of a complex Banach space thus disproving a conjecture of Banach. I think Ivan Singer's Bases in Banach spaces, I contains a discussion of the complexification in quite some detail on the first few pages. – Martin Feb 11 at 2:58
Tensor product seems related, but I don't see right now. Thanks to Martin for reference to Dieudonne and to Singer's book. But I still don't see an answer to my question 2 above. – Fred Dashiell Feb 11 at 12:32
I think the norm of the OP is $E\otimes^2_{\mathbb R} \mathbb C$ for the $\ell^2$-tensor product norm (i.e., the one such that the tensor product would be the space of $\mathbb R$-linear Hilbert-Schmidt operators $\mathbb C\to E$). Thus it should be compatible with duality. The injective norm would be the one induced from $L_{\mathbb R}(\mathbb C, E)$ with the operator norm. – Peter Michor Feb 11 at 15:37
## 2 Answers
Not an answer, but this is a bit long for a comment.
A Banach space is a dual space iff there is a total family of continuous linear functionals so that the unit ball of the space is compact in the weak topology on the space generated by the family of functionals. From this it is easy to see that if $E_\Bbb{C}$ is a dual space, then $E_\Bbb{C}$ is a dual space when considered as a real Banach space, which implies that there is a norm on $E\oplus E$ which is a dual norm and the projections onto the copies of $E$ have norm one.
This suggests the following questions, which as far as I know are open problems.
1. If $E\oplus E$ is isomorphic to a dual space, is $E$ isomorphic to a dual space? This question is equivalent to: if $E_\Bbb{C}$ is isomorphic to a dual space, is $E$ isomorphic to a dual space?
2. Same as (1), but with the additional condition that $E$ be separable.
3. Is every complemented subspace of a separable dual space isomorphic to a dual space?
-
This is a response to the request for references, but not an answer to Q3. I came across a related paper: "Complexifications of real Banach spaces, polynomials and multilinear maps", by Munoz, Sarantopoulos, and Tonge, Studia Math. 134 (1999), 1-33.
They point out that there are many different ways to put a reasonable norm on the algebraic complexification $E\times E$.
-
http://mathhelpforum.com/advanced-algebra/189834-x-gxg-1-a.html
1. ## |x|=|gxg^{-1}|
Let G be a group and let G act on itself by conjugation, so each g in G maps G to G by $x\mapsto gxg^{-1}$. For each g in G, prove that conjugation by g is an isomorphism from G onto itself. Deduce that x and $gxg^{-1}$ have the same order for all x in G and that for any subset A of G, $|A|=|gAg^{-1}|$.
I have shown this is an isomorphism. I am not sure how to show x and $gxg^{-1}$ have the same order.
2. ## Re: |x|=|gxg^{-1}|
Originally Posted by dwsmith
Let G be a group and let G act on itself by conjugation, so each g in G maps G to G by $x\mapsto gxg^{-1}$. For each g in G, prove that conjugation by g is an isomorphism from G onto itself. Deduce that x and $gxg^{-1}$ have the same order for all x in G and that for any subset A of G, $|A|=|gAg^{-1}|$.
I have shown this is an isomorphism. I am not sure how to show x and $gxg^{-1}$ have the same order.
If $\phi$ is an isomorphism then $\phi(x)^{|x|}=\phi(x^{|x|})=\phi(e)=e$ and $e=\phi^{-1}(e)=\phi^{-1}(\phi(x)^{|\phi(x)|})=x^{|\phi(x)|}$ imply $|\phi(x)|\mid |x|$ and $|x|\mid |\phi(x)|$ respectively....so.
3. ## Re: |x|=|gxg^{-1}|
there are different ways to do this:
1). suppose that x has order n. then x^n = e, so (gxg^-1)^n = (gxg^-1)(gxg^-1).....(gxg^-1) (n times)
= gx(g^-1g)x(g^-1g).....xg^-1 = (gx)(x)(x)....(x)g^-1 = g(x^n)g^-1 = geg^-1 = gg^-1 = e.
(ok, technically using induction on n would be better, but you get the idea). thus |gxg^-1| divides n.
now suppose that |gxg^-1| = k < n. then (gxg^-1)^k = g(x^k)g^-1 = e, so x^k = g^-1g = e, a contradiction.
hence |gxg^-1| = |x|.
2). since x-->gxg^-1 is an isomorphism, the order of gxg^-1 must be the order of x. why? suppose not.
case 2a) |x| < |gxg^-1|. in this case we can find a k with x^k = e, but (gxg^-1)^k ≠ e.
using our isomorphism e = geg^-1 = g(x^k)g^-1 = (gxg^-1)^k, contradicting our choice of k.
case 2b) |gxg^-1| < |x|. then we have (gxg^-1)^k = e, but x^k ≠ e, for some k.
since e = (gxg^-1)^k = g(x^k)g^-1, we have that x^k is in the kernel of our isomorphism.
but since an isomorphism is injective, the kernel is {e}, contradiction.
thus |gxg^-1| = |x|.
now, since x-->gxg^-1 is an isomorphism, it is bijective, so |A| and |gAg^-1| have to be equal.
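For anyone who wants to see the statement in action, here is a brute-force check (a Python sketch, not required by the exercise) that conjugation preserves order in the small finite group $S_4$, with permutations stored as tuples and composition written out by hand:

```python
from itertools import permutations

def compose(p, q):
    # (p * q)(i) = p[q[i]], with permutations of {0,1,2,3} stored as tuples
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def order(p):
    # smallest n >= 1 with p^n = identity
    e = tuple(range(len(p)))
    q, n = p, 1
    while q != e:
        q, n = compose(p, q), n + 1
    return n

G = list(permutations(range(4)))
assert all(order(x) == order(compose(compose(g, x), inverse(g)))
           for g in G for x in G)
print("|x| == |g x g^-1| for every x, g in S_4")
```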
4. ## Re: |x|=|gxg^{-1}|
Originally Posted by Drexel28
$\phi^{-1}(\phi(x)^{|\phi(x)|})$
Why is this phi of x to the order of phi of x?
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9171209335327148, "perplexity_flag": "middle"}
|
http://www.physicsforums.com/showthread.php?t=584945
|
Gradient Confusion
Hi, I am having trouble getting my head around the definition of a gradient. I know a gradient tells us the direction of steepest slope that one must follow to arrive at a maximum, and I know it is defined as $\nabla f= \partial f/\partial x\,\vec{i}+ \partial f/\partial y\,\vec{j}+ \partial f/\partial z\,\vec{k}$.
However I haven't got a gut feeling for it. I need these questions answered before I can accept it:
Where is the definition derived from?
Why does adding the partial derivatives tell us the direction of maximum gradient? I know this sounds stupid, but if a function has gradients of 4, 5, 6 in the (x, y, z) directions, what does that exactly mean? And why does adding them up point in the direction of the greatest rate of increase?
Can someone explain this in simple layman's terms to me, preferably using an example...
I thank you in advance guys and gals
If you were to draw a line through point $(x_0, y_0, z_0)$ in the direction of vector $v= <v_x, v_y, v_z>$ you can write the line in parametric equations as $x= v_xt+ x_0$, $y=v_yt+ y_0$, $z= v_zt+ z_0$. On that line we can write the function f(x,y,z) as f(x(t),y(t), z(t)). Applying the chain rule to that, we have $df/dt= (\partial f/\partial x)(dx/dt)+ (\partial f/\partial y)(dy/dt)+ (\partial f/\partial z)(dz/dt)$. We can write that as a dot product: $\left<\partial f/\partial x, \partial f/\partial y, \partial f/\partial z\right>\cdot\left<dx/dt, dy/dt, dz/dt\right>$. That is why it is useful to define the gradient $\nabla f= \partial f/\partial x\vec{i}+ \partial f/\partial y \vec{j}+ \partial f/\partial z\vec{k}$.
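A quick numerical check of the identity above (a Python sketch; the test function and point are arbitrary choices, nothing special): differentiate f along the parametrized line with a central difference and compare with the dot product of the gradient and the direction vector.

```python
import math

def f(x, y, z):
    return x * y + math.sin(z)        # an arbitrary smooth test function

def grad_f(x, y, z):
    return (y, x, math.cos(z))        # its gradient, computed by hand

x0, y0, z0 = 1.0, 2.0, 0.5            # base point
vx, vy, vz = 0.3, -0.7, 1.1           # direction of the line

def along_line(t):
    return f(vx * t + x0, vy * t + y0, vz * t + z0)

h = 1e-6
df_dt = (along_line(h) - along_line(-h)) / (2 * h)       # df/dt at t = 0
gx, gy, gz = grad_f(x0, y0, z0)
print(df_dt, gx * vx + gy * vy + gz * vz)                # agree to ~6 decimals
```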
Quote by HallsofIvy If you were draw a line through point $(x_0, y_0, z_0)$ in the direction of vector $v= <v_x, v_y, v_z>$ you can write the line in parametric equations as $x= v_xt+ x_0$, $y=v_yt+ y_0$, $z= v_zt+ z_0$. On that line we can write the function f(x,y,z) as f(x(t),y(t), z(t)). Applying the chain rule to that, we have $df/dt= (\partial f/\partial x)(dx/dt)+ (\partial f/\partial y)(dy/dt)+ (\partial f/\partial z)(dz/dt)$. We can write that as a dot product: $\left<\partial f/\partial x, \partial f/\partial y, \partial f/\partial z\right>\cdot\left<dx/dt, dy/dt, dz/dt\right>$. That is why it is useful to define the gradient $\nabla f= \partial f/\partial x\vec{i}+ \partial f/\partial y \vec{j}+ \partial f/\partial z\vec{k}$.
Thanks for your reply.....you lost me at the chain rule. I know how to perform the chain rule, but I don't know how you managed to apply the chain rule on:
$x= v_xt+ x_0$, $y=v_yt+ y_0$, $z= v_zt+ z_0$
to arrive at:
$df/dt= (\partial f/\partial x)(dx/dt)+ (\partial f/\partial y)(dy/dt)+ (\partial f/\partial z)(dz/dt)$
This is the part I am getting stuck on, can you break it down for me...
Thanks a lot
I have a question on the application of the gradient vector:
Say a textbook question told me to "Find the directional derivative of f(x,y,z)=xy+z^2 at (1,1,1) in the direction towards (5,-3,3)."
I have attempted this problem 3 times yet my answer is not the answer in the back of the answer guide: 2/3
I found the gradient vector: <y, x, 2z>; at the point (1,1,1) => (1,1,2)
I found the unit vector in the given direction: <5/√43,-3/√43,3/√43>
and the dot product of the gradient and unit vector: 5/√43 - 3/√43 + 6/√43 = 8/√43
8/√43 is not the correct answer in the back of the book... I know typos exist but I suspect I've done something wrong or misunderstood the question. any help?
Don't worry. I don't get it, either. Your textbook is WRONG, no matter what it thinks about.
Quote by arildno Don't worry. I don't get it, either. your textbook is WRONG, no matter what it thinks about.
Thanks for confirming...
haha you completely hi-jacked my thread....
Quote by jonlg_uk haha you completely hi-jacked my thread....
relax. If anything i'm making your thread more popular, but now that i've properly hi-jacked this thread.. LET ME FLY IT THROUGH THE GROUND! http://upload.wikimedia.org/wikipedi...7_pentagon.gif
Quote by jonlg_uk Thanks for your reply.....you lost me at the chain rule. I know how to perform the chain rule, but I don't know how you managed to apply the chain rule on: $x= v_xt+ x_0$, $y=v_yt+ y_0$, $z= v_zt+ z_0$ to arrive at: $df/dt= (\partial f/\partial x)(dx/dt)+ (\partial f/\partial y)(dy/dt)+ (\partial f/\partial z)(dz/dt)$ This is the part I am getting stuck on, can you break it down for me... Thanks alot
I'm not sure how to "break it down" any more- that is the "chain rule" for functions of more than one variable: if f is a function of x, y, and z, and x, y, and z are themselves functions of the variable t, then we could replace each by its expression as a function of t so that f is itself a function of t and then
$$\frac{df}{dt}= \frac{\partial f}{\partial x}\frac{dx}{dt}+\frac{\partial f}{\partial y}\frac{dy}{dt}+\frac{\partial f}{\partial z}\frac{dz}{dt}$$
and variations on that:
If x, y, and z are functions of u, v, and w, say, then
$$\frac{\partial f}{\partial u}= \frac{\partial f}{\partial x}\frac{\partial x}{\partial u}+\frac{\partial f}{\partial y}\frac{\partial y}{\partial u}+\frac{\partial f}{\partial z}\frac{\partial z}{\partial u}$$
etc.
Or if f is a function of the single variable x and x is a function of u, v, and w,
$$\frac{\partial f}{\partial u}= \frac{df}{dx}\frac{\partial x}{\partial u}$$
etc.
If you have a simple function like f(x,y,z) = x+2y+3z, then does it make sense that the direction in which it increases fastest is perpendicular to the directions in which it is constant? If so, then the direction of fastest increase, say at (0,0,0), is perpendicular to the plane x+2y+3z = 0. Do you know why this direction is along the vector (1,2,3)? In general any smooth function f(x,y,z) is constant along a surface f(x,y,z) = c, whose tangent plane is parallel to ∂f/∂x x + ∂f/∂y y + ∂f/∂z z = 0. Then the direction of greatest increase is perpendicular to that surface and hence to that plane and hence is parallel to the gradient (∂f/∂x, ∂f/∂y, ∂f/∂z).
For intuition about the gradient, I like to think about ideas of potential energy in classical physics. The work done by a force field on a particle - e.g. gravity on a point mass or an electric field on an electron - is the integral of the inner product of the field with the velocity vector of the particle along the curve. This is just the law work = Force x distance on a curved path. In many situations, this work represents a change in the potential energy of the particle, e.g. a change in gravitational potential or electrostatic potential. The force field is just the gradient of the potential. Along a surface where the potential is constant, the force field does no work on the moving particle since the potential does not change. This means that the inner product of the force field with the velocity vector of a curve is zero if the curve lies on the constant potential surface. In other words the gradient of the potential is perpendicular to the surface. This perpendicular direction must be the direction of maximum change in potential energy since any other direction will have a component tangent to the surface that will have no effect on the change in potential.
Good illustration. E.g. you do no work against gravity unless you move something perpendicular to the surface of the earth, i.e. up or down.
Quote by HallsofIvy I'm not sure how to "break it down" any more- that is the "chain rule" for functions of more than one variable: if f is a function of x, y, and z, and x, y, and z are themselves functions of the variable t, then we could replace each by its expression as a function of t so that f is itself a function of t and then $$\frac{df}{dt}= \frac{\partial f}{\partial x}\frac{dx}{dt}+\frac{\partial f}{\partial y}\frac{dy}{dt}+\frac{\partial f}{\partial z}\frac{dz}{dt}$$ and variations on that: If x, y, and z are functions of u, v, and w, say, then $$\frac{\partial f}{\partial u}= \frac{\partial f}{\partial x}\frac{\partial x}{\partial u}+\frac{\partial f}{\partial y}\frac{\partial y}{\partial u}+\frac{\partial f}{\partial z}\frac{\partial z}{\partial t}$$ etc. Or if f is a function of the single variable x and x is a function of u, v, and w, $$\frac{\partial f}{\partial u}= \frac{df}{dx}\frac{\partial x}{\partial u}$$ etc.
Thanks for that. I understand the chain rule, but I don't understand why you start adding the gradients in each direction. Why not just leave the comma between them in place?
OK, I have identified where I am getting confused: I do not know how to derive the multivariable chain rule. I am going to try and figure out how this works.
Quote by Cloudzero I have a question on the application of the gradient vector: say A textbook question told me to "Find the directional derivative of f(x,y,z)=xy+z^2 at (1,1,1) in the direction towards (5,-3,3)." I have attempted this problem 3 times yet my answer is not the answer in the back of the answer guide: 2/3 I found the gardient vector:<y,x,2z >; @ the point (1,1,1) => (1,1,2) I found the unit vector in the given direction: <5/√43,-3/√43,3/√43> and the dot product of the gardient and unit vector: 5/√43 - 3/√43 + 6/√43= 8/√43 8/√43 is not the correct answer in the back of the book... I know typos exist but I suspect I've done something wrong or misunderstood the question. any help?
Quote by arildno Don't worry. I don't get it, either. your textbook is WRONG, no matter what it thinks about.
You both may be misunderstanding the phrase "in the direction towards (5, -3, 3)". I interpret that as the vector from (1, 1, 1) to (5, -3, 3) which is <4, -4, 2> which has length $\sqrt{16+ 16+ 4}= 6$. The unit vector in that direction is <2/3, -2/3, 1/3>. The directional derivative is <1, 1, 2>.<2/3, -2/3, 1/3>= 2/3.
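A small Python check of the two readings (just arithmetic, nothing beyond what is already in the posts above): interpreting "towards (5,-3,3)" as the vector from (1,1,1) gives the book's 2/3, while treating (5,-3,3) itself as the direction gives 8/√43.

```python
import math

def f(x, y, z):
    return x * y + z**2

p = (1.0, 1.0, 1.0)                               # base point
q = (5.0, -3.0, 3.0)                              # "towards" this point

v = tuple(qi - pi for qi, pi in zip(q, p))        # (4, -4, 2)
length = math.sqrt(sum(vi**2 for vi in v))        # 6
u = tuple(vi / length for vi in v)                # (2/3, -2/3, 1/3)

grad = (p[1], p[0], 2 * p[2])                     # grad f = (y, x, 2z) at p
print(sum(g * ui for g, ui in zip(grad, u)))      # 0.666..., the book's 2/3

# The other reading: use (5, -3, 3) itself as the direction vector.
w = math.sqrt(5**2 + (-3)**2 + 3**2)              # sqrt(43)
print((1*5 + 1*(-3) + 2*3) / w)                   # 8/sqrt(43), about 1.22
```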
Quote by HallsofIvy If you were draw a line through point $(x_0, y_0, z_0)$ in the direction of vector $v= <v_x, v_y, v_z>$ you can write the line in parametric equations as $x= v_xt+ x_0$, $y=v_yt+ y_0$, $z= v_zt+ z_0$. On that line we can write the function f(x,y,z) as f(x(t),y(t), z(t)). Applying the chain rule to that, we have $df/dt= (\partial f/\partial x)(dx/dt)+ (\partial f/\partial y)(dy/dt)+ (\partial f/\partial z)(dz/dt)$. We can write that as a dot product: $\left<\partial f/\partial x, \partial f/\partial y, \partial f/\partial z\right>\cdot\left<dx/dt, dy/dt, dz/dt\right>$. That is why it is useful to define the gradient $\nabla f= \partial f/\partial x\vec{i}+ \partial f/\partial y \vec{j}+ \partial f/\partial z\vec{k}$.
Ok I have gone away and done some work. So far I understand the explanation up to applying the chain rule and deriving :
$df/dt= (\partial f/\partial x)(dx/dt)+ (\partial f/\partial y)(dy/dt)+ (\partial f/\partial z)(dz/dt)$.
After that can you explain, in a little more detail, how to get from that ^ to this :
$\nabla f= \partial f/\partial x\vec{i}+ \partial f/\partial y \vec{j}+ \partial f/\partial z\vec{k}$.
Why have you applied the dot product?
Is it correct to just cancel out all the dx's, dy's and dz's, in the chain rule representation and just replace them with a unit vector?
Thanks a bunch
jon
There is no worthwhile answer to your "why" other than that it is VALID to represent the sum you understand in terms of the dot product. (Multiplying out the dot product gives you the sum back!) Do you see that?
http://mathoverflow.net/questions/34077?sort=votes
What are the obstructions for a Henstock-Kurzweil integral in more than one dimension?
I have recently come across the book The Kurzweil-Henstock Integral and its Differentials by Solomon Leader, in which, if I understand correctly, the HK integration process is modified in a way that makes it also work for dimensions higher than 1 (there's a proof of Green's theorem at the end). It has always been my impression that HK-integration doesn't extend to n dimensions, but truth be told, I don't actually know why.
So my question(s) is (are):
1. In what sense can the Henstock-Kurzweil integral not be extended to more than one dimension?
2. Leader's construction via summants below seems very reminiscent of Jenny Harrison's work on chainlets. Are the two related?
3. Does the relationship to measures from the one-dimensional case go both ways, i.e. every measure is a differential? Would this relationship be preserved in higher dimensions?
Below I've summarized the key features of Leader's construction.
A cell is a closed interval [a,b] in $[-\infty, \infty]$. A figure is a finite union of cells. A tagged cell in a figure K is a pair (I,t) where I is an interval contained in K, and t is an endpoint of I (according to Leader, the restriction of tags to be endpoints is key).
A gauge is a function $\delta:[-\infty,\infty]\to (0,\infty)$. Every gauge associates to every point t a neighborhood $N_\delta(t)$, which is $(t-\delta(t),t+\delta(t))$ for finite t, and $[-\infty,-\frac1{\delta(-\infty)}]$ and $[\frac1{\delta(\infty)},\infty]$ for the infinite points. This ensures that $N_\alpha(t) \subset N_\beta(t)$ if $\alpha(t)\leq\beta(t)$, and then we can define a division of a figure K into tagged cells to be $\delta$-fine if for each tagged cell $(I,t)$ we have $I\subset N_\delta(t)$.
Here is where I understand Leader's theory to take a departure from the normal development: he defines a summant S to be a function on tagged cells, and then he constructs $\int_K S$ of a summant S over a figure K as the directed limit of $\sum_{(I,t)\in\mathcal{K}} S(I,t)$ over gauges $\delta$, where $\mathcal{K}$ ranges over $\delta$-fine divisions of K (he actually defines a limit supremum and a limit infimum and works with those).
Some summants are, for example, $\Delta([a,b])=b-a$ and $|\Delta|([a,b])=|b-a|$. Any summant S can be multiplied by a function by way of $(fS)(I,t)=f(t)S(I,t)$, and any function can be canonically extended to a summant $f\Delta$.
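To make the bookkeeping concrete, here is a toy Python sketch (my own rendering, and only over a fixed uniform tagged division rather than the $\delta$-fine divisions chosen by a gauge) of summants as functions of tagged cells and of the sum $\sum_{(I,t)} S(I,t)$ over a division:

```python
# A summant is any function of a tagged cell ((left, right), tag).
def Delta(cell, tag):
    a, b = cell
    return b - a                      # the summant Delta([a,b]) = b - a

def abs_Delta(cell, tag):
    a, b = cell
    return abs(b - a)                 # the summant |Delta|

def times(f, S):
    # (fS)(I, t) = f(t) S(I, t)
    return lambda cell, tag: f(tag) * S(cell, tag)

def sum_over_division(S, division):
    return sum(S(cell, tag) for cell, tag in division)

# A uniform, left-tagged division of [0, 1]; summing f*Delta over finer and
# finer divisions approximates the integral of f (here f(t) = t^2, so ~1/3).
n = 1000
division = [((k / n, (k + 1) / n), k / n) for k in range(n)]
f = lambda t: t * t
print(sum_over_division(times(f, Delta), division))    # ~0.333
```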
Where his theory really gets interesting is that he defines differentials as equivalence classes of summants under the equivalence relation S~T if $\int_K|S-T|=0$. From there he defines the differential of any function g by $dg=[g\Delta]$, where [S] is the equivalence class of the summant S.
From this he calls a differential integrable if its representative summants S are integrable, and shows that every integrable differential is of the form $df$ where f is a function. Then absolutely integrable differentials (ones such that $|df|$ is integrable) give rise to measures. For example, the differential dx, where x is the identity function, corresponds to standard Lebesgue measure.
A final point of interest is that the fundamental theorem of calculus can be formulated as $f\Delta=f'df$ where f' is the usual derivative of f (which I actually think is a great way to motivate the definition of pointwise derivative in the first place).
On to n-dimensional theory, though. An n-cell [a,b] consists of the parallelepiped with opposite vertices (a,a,...,a) and (b,b,...,b). A tagged n-cell (I,t) has as its tag one of the vertices; it is $\delta$-fine if its diameter is less than $\delta(t)$. The summant $\Delta^{(n)}g$ is given by the alternating sum $\sum_{v\in V_I} (-1)^{\mathcal{N}_I(v)}g(v)$, where $V_I$ is the set of vertices of I, and $\mathcal{N}_I(v)$ is the number of coordinates of v that are the same as those of the tag t.
Allegedly (Leader doesn't give the details in his book), integration goes through, though some results, such as the fundamental theorem, allegedly do not. I am unclear on what happens to the relationship with measures.
-
3 Answers
This is referring to question 1.
There's a choice we have to make when defining a real-valued two-dimensional (or n-dimensional) nonabsolute integral: would we rather have a class of integrable functions which includes all divergences of differentiable functions, or would we want to have some sort of Fubini's Theorem working? This conflict was known from the early development of the HK integral and was also pointed out by Pfeffer in the book mentioned above.
If we define the HK integral in the obvious way (call it the standard two-dimensional HK integral), we get Fubini's theorem, but no fully general divergence theorem. Many authors made modifications to the definition of this integral, and a relatively satisfactory definition was given by Jarník, Kurzweil and Schwabik in "On Mawhin's approach to multiple nonabsolutely convergent integral", Casopis. Pest. Mat. There they defined the $M_1$-integral, which satisfies a fully general divergence theorem, and has a simple enough definition so we can prove some convergence theorems. It is shown, though, that this integral does not satisfy Fubini's theorem when the corresponding one-dimensional integrals are considered to be the HK integral. The original example from that paper can be used to show that, in a more general setting, an interval-based two-dimensional integral that satisfies a full divergence theorem will in some sense fail Fubini's theorem (see Proposição 2.5 here if you are not afraid of reading in Portuguese).
Another problem that is frequently overlooked when defining interval-based two-dimensional nonabsolute integrals is that the integral can be sensitive to rotations, that is, we can get integrable functions such that a certain rotation of that function is not integrable. We have this unpleasant effect for the $M_1$-integral and even for the standard two-dimensional HK integral (see the main theorem of "K teorii vícerozmerného integrálu", Casopis. Pest. Mat. by K. Kartak, if you are not afraid of reading Czech, or Proposição 1.8 in the aforementioned Thesis, which is for the $M_1$-integral but easily adaptable to the standard two-dimensional HK one).
Then there is a new challenge: trying to define an integral which is not based on intervals but that still will be simple enough to prove convergence theorems. Kurzweil himself defined an integral where the domain is partitioned into sets with boundaries continuously differentiable by parts; it's a lot of trouble even to prove Saks' Lemma for this integral. See also this article for an integral where we use triangular partitions. This integral satisfies many of the commonly desired theorems, but it is unknown to me for example if it satisfies a nice change of variables formula.
-
There is a chapter on this (multi-dimensional gauge integral) in: W. Pfeffer, The Riemann Approach to Integration (Cambridge, 1993)
-
Not many, except the need of a positive integrable function for Fubini to hold.
-
http://mathoverflow.net/questions/115687/vortex-equations-on-cylinder
## Vortex equations on cylinder
Solutions to the vortex equations for a closed Riemann surface are well known (the moduli space is a symmetric power). What do we know about solutions on surfaces with boundary or non-compact surfaces? In particular I am interested in the case of an infinite cylinder $S^1 \times \mathbb{R}$.
-
## 1 Answer
For finite-energy vortices on a finite-type Riemannian surface with cylindrical ends, there is still a non-negative integer parameter, the vortex number $N$, and the moduli space is still canonically diffeomorphic to the $N$th symmetric product by the map that takes a gauge-equivalence class of vortices $[A,\phi]$ to $\phi^{-1}(0)$. One can prove that such vortices extend over the puncture, whereupon the usual methods apply.
Some references:
1) The case of the complex plane was treated in the book "Vortices and monopoles" by Jaffe-Taubes.
2) The case of a cylinder is explicitly treated, by a different method, in a paper by Frauenfelder:
http://arxiv.org/abs/math/0507285
3) One can regard the vortex equations as dimensional reductions of the Seiberg-Witten equations. There is a comprehensive treatment of those equations in the presence of cylindrical ends in Kronheimer and Mrowka's book "Monopoles and 3-manifolds". They also discuss Atiyah-Patodi-Singer boundary conditions in the case where there is a boundary.
-
http://nrich.maths.org/6516
# Constantly Changing
##### Stage: 4 Challenge Level:
Physical constants can only be determined by experiment and can never be known exactly, even if in principle an exact value does exist. As a result, physical quantities are given as a probable range of values with an uncertainty registered in the last two digits, as follows:
$1.234\, 5678(32) \rightarrow 1.234\, 5678 \pm 0.000\, 0032$
$245.234\, 789\, 123(45) \rightarrow 245.234\, 789\, 123\pm 0.000\, 000\, 045$
The following table contains the best currently known measurements for various physical quantities:
| Name | Value |
|------|-------|
| Avogadro constant | $6.022\, 141\, 79(30) \times 10^{23}$ mol$^{-1}$ |
| Atomic mass constant | $1.660\, 538\, 782(83) \times 10^{-27}$ kg |
| Electron mass | $9.109\, 382\, 15(45) \times 10^{-31}$ kg |
| Proton-electron mass ratio | $1836.152\, 672\, 4718(80)$ |
| Proton mass | $1.672\, 621\, 637(83) \times 10^{-27}$ kg |
| Neutron mass | $1.674\, 927\, 211(84) \times 10^{-27}$ kg |
| Speed of light in vacuum | $299\, 792\, 458$ m s$^{-1}$ (exact) |
Consider the relationship between the error bounds for the proton-electron mass ratio and those for the electron mass and the proton mass. Are they consistent? Which appears to be known to best experimental accuracy?
Using this data, can you work out an upper limit on the mass of a mole of water? What is a lower limit?
How much uncertainty is there in the energy contained within the mass of a mole of water, according to Einstein's energy-mass equation $E=mc^2$?
If the specific heat capacity of liquid water is about $4.1813$ kJ kg$^{-1}$ K$^{-1}$, make an estimate of the number of cups of tea that you could make with this uncertain amount of energy.
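One possible way to attack the first question numerically (a Python sketch using only the tabulated constants; modelling a water molecule as 10 protons + 8 neutrons + 10 electrons and ignoring binding energy and minor isotopes is an extra simplification, not part of the problem data):

```python
# Each constant is (value, uncertainty); bounds are combined pessimistically.
N_A = (6.02214179e23, 0.00000030e23)      # Avogadro constant, mol^-1
m_p = (1.672621637e-27, 0.000000083e-27)  # proton mass, kg
m_n = (1.674927211e-27, 0.000000084e-27)  # neutron mass, kg
m_e = (9.10938215e-31, 0.00000045e-31)    # electron mass, kg
c = 299792458.0                           # speed of light, m/s, exact

lo = lambda x: x[0] - x[1]
hi = lambda x: x[0] + x[1]

# One H2O molecule ~ 10 protons + 8 neutrons + 10 electrons (simplification).
molecule_lo = 10 * lo(m_p) + 8 * lo(m_n) + 10 * lo(m_e)
molecule_hi = 10 * hi(m_p) + 8 * hi(m_n) + 10 * hi(m_e)

mole_lo = lo(N_A) * molecule_lo           # lower limit on the mass of a mole
mole_hi = hi(N_A) * molecule_hi           # upper limit (both in kg)
print(mole_lo, mole_hi)

# Spread in E = m c^2 coming from that mass uncertainty, in joules.
print((mole_hi - mole_lo) * c**2)
```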
NOTES AND BACKGROUND
National Institute of Standards and Technology Reference on Constants, Units and Uncertainty provides detailed information on the bounds of measurements of physical constants.
See http://physics.nist.gov/cuu/Constants/index.html for more details .
For a list of all constants, see http://physics.nist.gov/cuu/Constants/Table/allascii.txt
Interestingly, there is a strong element of statistics used to determine the probable values of constants. Key to this idea are the concepts of error and uncertainty in measurement. Cleverly designed experiments based on a strong understanding of statistics can be used to minimise this uncertainty.
To read about the essentials of expressing measurement uncertainty see http://physics.nist.gov/cuu/Uncertainty/index.html
Note that the speed of light given is a numerically exact quantity because the length of a metre has now been defined in terms of the speed of light!
http://mathoverflow.net/questions/117491?sort=newest
## Asymptotic formula for an expression in terms of the second kind of stirling numbers
We have proved that the limit of $\sum_{k=0}^n r^2 k^m / (1+r)^{k+1}$ as $n$ approaches infinity is $\sum_{k=1}^m S(m,k)k!/r^{k-1}$, where $S(m,k)$ is the Stirling number of the second kind.
Is there a simple asymptotic or approximate formula for the result $\sum_{k=1}^m S(m,k)k!/r^{k-1}$ with $m$ fixed and $r$ near $1$?
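For what it is worth, the stated identity is easy to check numerically (a Python sketch; the truncation point n = 10000 is an arbitrary choice, but ample for the values of r shown):

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def S(m, k):
    # Stirling numbers of the second kind: S(m,k) = k*S(m-1,k) + S(m-1,k-1)
    if m == k == 0:
        return 1
    if m == 0 or k == 0:
        return 0
    return k * S(m - 1, k) + S(m - 1, k - 1)

def lhs(m, r, n=10000):
    return sum(r**2 * k**m / (1 + r)**(k + 1) for k in range(n + 1))

def rhs(m, r):
    return sum(S(m, k) * factorial(k) / r**(k - 1) for k in range(1, m + 1))

for m in range(1, 6):
    for r in (0.5, 1.0, 2.0):
        print(m, r, lhs(m, r), rhs(m, r))   # the last two columns agree
```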
-
## 1 Answer
For the question to make sense, you have to specify what the asymptotics is with respect to. For example, which variables are fixed and which are going to infinity. If $r>0$ is fixed and $m\to\infty$ (and probably in some other cases too), you are better off analyzing your initial sum rather than the Stirling version. The largest term is around $k= m/\ln(1+r)$ and the terms near that have a Gaussian shape with standard deviation $m^{1/2}/\ln(1+r)$. Euler-Maclaurin summation for the main part plus crude bounds for the tails will give it to you.
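A quick check of the claimed peak location (Python, with arbitrary sample values of m and r):

```python
import math

m, r = 8, 1.0
terms = [r**2 * k**m / (1 + r)**(k + 1) for k in range(200)]
k_peak = max(range(200), key=lambda k: terms[k])
print(k_peak, m / math.log(1 + r))   # 12 versus m/ln(1+r), about 11.54
```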
-
A good suggestion. In fact, and for our actual applications, $m$ is usually fixed and usually less than $10$, and $r$ will vary near 1, often not more than $10$ and not less than $1/2$. Therefore we do want an approximate result for $\sum_{k=1}^m S(m,k)k!/r^{k-1}$. – liaomingxue Dec 30 at 11:30
So you have an exact expression with usually less than 10 terms. Why do you think there should be something better? – Brendan McKay Dec 30 at 14:03
http://www.haskell.org/haskellwiki/index.php?title=User:Michiexile/MATH198/Lecture_2&diff=32219&oldid=30271
User:Michiexile/MATH198/Lecture 2
From HaskellWiki
(Difference between revisions)
| | | | |
|---------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| | | Current revision (06:10, 7 December 2009) (edit) (undo) | |
| (7 intermediate revisions not shown.) | | | |
| Line 1: | | Line 1: | |
| - | IMPORTANT NOTE: THESE NOTES ARE STILL UNDER DEVELOPMENT. PLEASE WAIT UNTIL AFTER THE LECTURE WITH HANDING ANYTHING IN, OR TREATING THE NOTES AS READY TO READ. | + | This lecture covers material occurring in Awodey sections 2.1-2.4. |
| | | | |
| | ===Morphisms and objects=== | | ===Morphisms and objects=== |
| Line 8: | | Line 8: | |
| | * cancellability - the categorical notion corresponding to properties we use when solving, e.g., equations over <math>\mathbb N</math>: | | * cancellability - the categorical notion corresponding to properties we use when solving, e.g., equations over <math>\mathbb N</math>: |
| | ::<math>3x = 3y \Rightarrow x = y</math> | | ::<math>3x = 3y \Rightarrow x = y</math> |
| | | + | :See also Wikipedia [http://en.wikipedia.org/wiki/Cancellation_property], where the relevant definitions and some interesting keywords occur. The article is technical and terse, though. |
| | * existence of inverses - which is stronger than cancellability. If there are inverses around, this implies cancellability, by applying the inverse to remove the common factor. Cancellability, however, does not imply that inverses exist: we can cancel the 3 above, but this does not imply the existence of <math>1/3\in\mathbb N</math>. | | * existence of inverses - which is stronger than cancellability. If there are inverses around, this implies cancellability, by applying the inverse to remove the common factor. Cancellability, however, does not imply that inverses exist: we can cancel the 3 above, but this does not imply the existence of <math>1/3\in\mathbb N</math>. |
| | | | |
| Line 21: | | Line 22: | |
| | | | |
| | In a category of sets with structure with morphisms given by functions that respect the set structure, isomorphism are bijections respecting the structure. In the category of sets, the isomorphisms are bijections. | | In a category of sets with structure with morphisms given by functions that respect the set structure, isomorphism are bijections respecting the structure. In the category of sets, the isomorphisms are bijections. |
| | | + | |
| | | + | In wikipedia: [http://en.wikipedia.org/wiki/Bijection] |
| | | | |
| | ====Representative subcategories==== | | ====Representative subcategories==== |
| Line 29: | | Line 32: | |
| | | | |
| | Doing this, we get a ''representative subcategory'': a subcategory such that every object of the supercategory is isomorphic to some object in the subcategory. | | Doing this, we get a ''representative subcategory'': a subcategory such that every object of the supercategory is isomorphic to some object in the subcategory. |
| | | + | |
| | | + | The representative subcategory ends up being a more categorically interesting concept than the idea of a wide subcategory: it doesn't hit every object in the category, but it hits every object worth hitting in order to capture all the structure. |
| | | + | |
| | | + | '''Example''' The category of finite sets has a representative subcategory given by all sets <math>[n]=\{1,\ldots,n\}</math>. |
| | | | |
| | ====Groupoids==== | | ====Groupoids==== |
| | | | |
| | A ''groupoid'' is a category where ''all'' morphisms are isomorphisms. The name originates in that a groupoid with one object is a bona fide group; so that groupoids are the closest equivalent, in one sense, of groups as categories. | | A ''groupoid'' is a category where ''all'' morphisms are isomorphisms. The name originates in that a groupoid with one object is a bona fide group; so that groupoids are the closest equivalent, in one sense, of groups as categories. |
| | | + | |
| | | + | A very rich starting point is the wikipedia page [http://en.wikipedia.org/wiki/Groupoid]. In the categorical definition on this page, the difference to a category is in the existence and properties of the function inv. |
| | | | |
| | ===Monomorphisms=== | | ===Monomorphisms=== |
| Line 47: | | Line 56: | |
| | ::<math>x\neq y\Rightarrow f(x)\neq f(y)</math> or moving out the negations, | | ::<math>x\neq y\Rightarrow f(x)\neq f(y)</math> or moving out the negations, |
| | ::<math>f(x)=f(y) \Rightarrow x=y</math>. | | ::<math>f(x)=f(y) \Rightarrow x=y</math>. |
| | | + | |
| | | + | In Wikipedia: [http://en.wikipedia.org/wiki/Injective_function]. |
| | | | |
| | ====Subobjects==== | | ====Subobjects==== |
| Line 52: | | Line 63: | |
| | Consider the subset <math>\{1,2\}\subset\{1,2,3\}</math>. This is the image of an accordingly chosen injective map from any 2-element set into <math>\{1,2,3\}</math>. Thus, if we want to translate the idea of a subset into categorical language, it is not enough talking about monomorphisms, though the fact that inclusion is an injection indicates that we are on the right track. | | Consider the subset <math>\{1,2\}\subset\{1,2,3\}</math>. This is the image of an accordingly chosen injective map from any 2-element set into <math>\{1,2,3\}</math>. Thus, if we want to translate the idea of a subset into categorical language, it is not enough talking about monomorphisms, though the fact that inclusion is an injection indicates that we are on the right track. |
| | | | |
| - | The trouble that remains is that we do not want to view <math>\{1,2\}</math> when it occurs as the image of <math>\{1,2\}</math> as a different subset from <math>\{5,6\}</math> mapping to <math>\{1,2\}</math>. So we need some way of figuring out how to catch these situations and parry for them. | + | The trouble that remains is that we do not want to view <math>\{1,2\}</math> as different subsets when it occurs as an image of the 2-element set <math>\{1,2\}</math> or when it occurs as an image of the 2-element set <math>\{5,6\}</math>. So we need some way of figuring out how to catch these situations and parry for them. |
| | | | |
| | We'll say that a morphism <math>f</math> ''factors through'' a morphism <math>g</math> if there is some morphism <math>h</math> such that <math>f=gh</math>. | | We'll say that a morphism <math>f</math> ''factors through'' a morphism <math>g</math> if there is some morphism <math>h</math> such that <math>f=gh</math>. |
| Line 61: | | Line 72: | |
| | | | |
| | Equipped with this equivalence relation, we define a ''subobject'' of an object <math>A</math> to be an equivalence class of monomorphisms. | | Equipped with this equivalence relation, we define a ''subobject'' of an object <math>A</math> to be an equivalence class of monomorphisms. |
| | | + | |
| | | + | Wikipedia has an accurate exposition [http://en.wikipedia.org/wiki/Subobject]. |
| | | | |
| | ===Epimorphisms=== | | ===Epimorphisms=== |
| | | | |
| - | ''Right cancellability'', by analogy, is the implication | + | ''Right cancellability'', by duality, is the implication |
| | :<math>g_1f = g_2f \Rightarrow g_1 = g_2</math> | | :<math>g_1f = g_2f \Rightarrow g_1 = g_2</math> |
| | The name, here comes from that we can remove the right cancellable <math>f</math> from the right of any equation it is involved in. | | The name, here comes from that we can remove the right cancellable <math>f</math> from the right of any equation it is involved in. |
| Line 72: | | Line 85: | |
| | ====In Set==== | | ====In Set==== |
| | | | |
| - | For epimorphims the interpretation in set functions is that whatever <math>f</math> does, it doesn't hide any part of the things <math>g_1</math> and <math>g_2</math> do. So applying <math>f</math> first doesn't influence the total available scope <mamth>g_1</math> and <math>g_2</math> have. | + | For epimorphims the interpretation in set functions is that whatever <math>f</math> does, it doesn't hide any part of the things <math>g_1</math> and <math>g_2</math> do. So applying <math>f</math> first doesn't influence the total available scope <math>g_1</math> and <math>g_2</math> have. |
| | | + | |
| | | + | In Wikipedia: [http://en.wikipedia.org/wiki/Surjective_function]. |
| | | | |
| | ===More on factoring=== | | ===More on factoring=== |
| Line 78: | | Line 93: | |
| | In Set, and in many other categories, any morphism can be expressed by a factorization of the form <math>f=ip</math> where <math>i</math> is a monomorphism and <math>p</math> is an epimorphism. For instance, in Set, we know that a function is surjective onto its image, which in turn is a subset of the domain, giving a factorization into an epimorphism - the projection onto the image - followed by a monomorphism - the inclusion of the image into the domain. | | In Set, and in many other categories, any morphism can be expressed by a factorization of the form <math>f=ip</math> where <math>i</math> is a monomorphism and <math>p</math> is an epimorphism. For instance, in Set, we know that a function is surjective onto its image, which in turn is a subset of the domain, giving a factorization into an epimorphism - the projection onto the image - followed by a monomorphism - the inclusion of the image into the domain. |
| | | | |
| | | + | A generalization of this situation is sketched out on the Wikipedia page for Factorization systems [http://en.wikipedia.org/wiki/Factorization_system]. |
| | ---- | | ---- |
| | | | |
| Line 95: | | Line 111: | |
| | * In the category of Vector spaces, the single element vector space 0 is both initial and terminal. | | * In the category of Vector spaces, the single element vector space 0 is both initial and terminal. |
| | | | |
| - | ===Zero objects=== | + | On Wikipedia, there is a terse definition, and a good range of examples and properties: [http://en.wikipedia.org/wiki/Initial_and_terminal_objects]. |
| | | + | |
| | | + | Note that terminal objects are sometimes called ''final'', and are as such used in the formal logic specification of algebraic structures. |
| | | + | |
| | | + | ====Zero objects==== |
| | | | |
| | This last example is worth taking up in higher detail. We call an object in a category a ''zero object'' if it is simultaneously initial and terminal. | | This last example is worth taking up in higher detail. We call an object in a category a ''zero object'' if it is simultaneously initial and terminal. |
| | | | |
| | | + | Some categories exhibit a richness of structure similar to the category of vectorspaces: all kernels exist (nullspaces), homsets are themselves abelian groups (or even vectorspaces), et.c. With the correct amount of richness, the category is called an ''Abelian category'', and forms the basis for ''homological algebra'', where techniques from topology are introduced to study algebraic objects. |
| | | | |
| | | + | One of the core requirements for an Abelian category is the existence of zero objects in it: if a category does have a zero object <math>0</math>, then for any <math>Hom(A,B)</math>, the composite <math>A\to 0\to B</math> is a uniquely determined member of the homset, and the addition on the homsets of an Abelian category has this particular morphism as its identity element. |
| | | | |
| | ====Pointless sets and generalized elements==== | | ====Pointless sets and generalized elements==== |
| Line 135: | | Line 157: | |
| | ===Internal and external hom=== | | ===Internal and external hom=== |
| | | | |
| - | If <math>f:B\to C</math>, then <math>f</math> induces a set function <math>Hom(A,f):Hom(A,B)\to Hom(A,C)</math> through <math>Hom(A,f)(g) = f\circ g</math>. Similarly, it induces a set function <math>Hom(f,A):Hom(C,A)\to Hom(C,B)</math> through <math>Hom(f,A)(g) = g\circ f</math>. | + | If <math>f:B\to C</math>, then <math>f</math> induces a set function <math>Hom(A,f):Hom(A,B)\to Hom(A,C)</math> through <math>Hom(A,f)(g) = f\circ g</math>. Similarly, it induces a set function <math>Hom(f,A):Hom(C,A)\to Hom(B,A)</math> through <math>Hom(f,A)(g) = g\circ f</math>.
| | | | |
| | Using this, we have an occasionally enlightening | | Using this, we have an occasionally enlightening |
| Line 160: | | Line 183: | |
| | We shall return to this situation later, when we are better equipped to give a formal scaffolding to the idea of having elements in objects in a category act as morphisms. For now, we shall introduce the notations <math>[A\to B]</math> or <math>B^A</math> to denote the ''internal'' hom - where the morphisms between two objects live as an object of the category. This distinguishes <math>B^A</math> from <math>Hom(A,B)</math>. | | We shall return to this situation later, when we are better equipped to give a formal scaffolding to the idea of having elements in objects in a category act as morphisms. For now, we shall introduce the notations <math>[A\to B]</math> or <math>B^A</math> to denote the ''internal'' hom - where the morphisms between two objects live as an object of the category. This distinguishes <math>B^A</math> from <math>Hom(A,B)</math>. |
| | | | |
| - | To gain a better understanding of the choice of notation, it is worth noting that <math>|Hom_{Set}(A,B)|=|B|^|A|</math>. | + | To gain a better understanding of the choice of notation, it is worth noting that <math>|Hom_{Set}(A,B)|=|B|^{|A|}</math>. |
| | | | |
| | ===Homework=== | | ===Homework=== |
| | | | |
| - | Passing mark requires at least 6 of 11. | + | Passing mark requires at least 4 of 11. |
| | | | |
| | # Suppose <math>g,h</math> are two-sided inverses to <math>f</math>. Prove that <math>g=h</math>. | | # Suppose <math>g,h</math> are two-sided inverses to <math>f</math>. Prove that <math>g=h</math>. |
Current revision
This lecture covers material occurring in Awodey sections 2.1-2.4.
1 Morphisms and objects
Some morphisms and some objects are special enough to garner special names that we will use regularly.
In morphisms, the important properties are
• cancellability - the categorical notion corresponding to properties we use when solving, e.g., equations over $\mathbb N$:
$3x = 3y \Rightarrow x = y$
See also Wikipedia [1], where the relevant definitions and some interesting keywords occur. The article is technical and terse, though.
• existence of inverses - which is stronger than cancellability. If there are inverses around, this implies cancellability, by applying the inverse to remove the common factor. Cancellability, however, does not imply that inverses exist: we can cancel the 3 above, but this does not imply the existence of $1/3\in\mathbb N$.
Thus, we'll talk about isomorphisms - which have two-sided inverses, monomorphisms and epimorphisms - which have cancellability properties, and split morphisms - which are monos and epis with corresponding one-sided inverses. We'll talk about how these concepts - defined in terms of equation solving with arrows - apply to more familiar situations. And we'll talk about how the semantics of some of the more well-known ideas in mathematics are captured by these notions.
For objects, the interesting properties concern what happens to homsets that have the special object as source or target. An empty homset is pretty boring, and so is a very large one. The real power, we find, comes when every homset with the specific source or target is a singleton set. This allows us to formulate the idea of a 0 in categorical terms, as well as to capture the roles of the empty set and of elements of sets - all using only arrows.
2 Isomorphisms
An arrow $f:A\to B$ in a category C is an isomorphism if it has a two-sided inverse g. In other words, we require the existence of a $g:B\to A$ such that fg = 1B and gf = 1A.
2.1 In Set
In a category of sets with structure, with morphisms given by functions that respect that structure, the isomorphisms are the bijections that respect the structure in both directions - that is, bijective morphisms whose inverse is again a morphism. In the category of sets, the isomorphisms are exactly the bijections.
In wikipedia: [2]
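As a small illustration (not part of the original notes; the names are just for this sketch), here is an isomorphism in the idealized Haskell category, together with its two-sided inverse:

```
-- Bool and Either () () are isomorphic: the two maps below are mutually
-- inverse, so each one is an isomorphism in the idealized Haskell category.
toEither :: Bool -> Either () ()
toEither False = Left ()
toEither True  = Right ()

fromEither :: Either () () -> Bool
fromEither (Left ())  = False
fromEither (Right ()) = True
-- fromEither . toEither == id  and  toEither . fromEither == id
```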
2.2 Representative subcategories
Very many mathematical properties and invariants are interesting because they hold for objects regardless of how, exactly, the object is built. As an example, most set theoretical properties are concerned with how large the set is, but not what the elements really are.
If all we care about are our objects up to isomorphisms, and how they relate to each other - we might as well restrict ourselves to one object for each isomorphism class of objects.
Doing this, we get a representative subcategory: a subcategory such that every object of the supercategory is isomorphic to some object in the subcategory.
The representative subcategory ends up being a more categorically interesting concept than the idea of a wide subcategory: it doesn't hit every object in the category, but it hits every object worth hitting in order to capture all the structure.
Example The category of finite sets has a representative subcategory given by all sets $[n]=\{1,\ldots,n\}$.
2.3 Groupoids
A groupoid is a category where all morphisms are isomorphisms. The name comes from the fact that a groupoid with one object is a bona fide group; in this sense, groupoids are the closest categorical counterpart of groups.
A very rich starting point is the wikipedia page [3]. In the categorical definition on this page, the difference to a category is in the existence and properties of the function inv.
3 Monomorphisms
We say that an arrow f is left cancellable if for any arrows g1,g2 we can show $fg_1 = fg_2 \Rightarrow g_1=g_2$. In other words, it is left cancellable, if we can remove it from the far left of any equation involving arrows.
We call a left cancellable arrow in a category a monomorphism.
3.1 In Set
Left cancellability means that if, when we do first g1 and then f we get the same as when we do first g2 and then f, then we had equality already before we followed with f.
In other words, when we work with functions on sets, f doesn't introduce relations that weren't already there. Anything non-equal before we apply f remains non-equal in the image. This, translated to formulae gives us the well-known form for injectivity:
$x\neq y\Rightarrow f(x)\neq f(y)$ or moving out the negations,
$f(x)=f(y) \Rightarrow x=y$.
In Wikipedia: [4].
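As a quick illustration (added here; not in the original notes), injectivity in the idealized Haskell category looks like this:

```
-- 2x+1 == 2y+1 forces x == y, so this function introduces no new relations.
injective :: Integer -> Integer
injective x = 2 * x + 1

-- abs identifies -3 and 3, so it is not injective (not a mono in Set).
notInjective :: Integer -> Integer
notInjective = abs
```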
3.2 Subobjects
Consider the subset $\{1,2\}\subset\{1,2,3\}$. This is the image of a suitably chosen injective map from any 2-element set into {1,2,3}. Thus, if we want to translate the idea of a subset into categorical language, it is not enough to talk about monomorphisms, though the fact that inclusion is an injection indicates that we are on the right track.
The trouble that remains is that we do not want to view {1,2} as different subsets when it occurs as the image of the 2-element set {1,2} and when it occurs as the image of the 2-element set {5,6}. So we need some way of detecting these situations and accounting for them.
We'll say that a morphism f factors through a morphism g if there is some morphism h such that f = gh.
We can also talk about a morphism $f:A\to C$ factoring through an object B by requiring the existence of morphisms $g:A\to B, h:B\to C$ that compose to f.
Now, we can form an equivalence relation on monomorphisms into an object A, by saying $f\sim g$ if f factors through g and g factors through f. The arrows implied by the factoring are inverse to each other, and the source objects of equivalent arrows are isomorphic.
Equipped with this equivalence relation, we define a subobject of an object A to be an equivalence class of monomorphisms.
Wikipedia has an accurate exposition [5].
4 Epimorphisms
Right cancellability, by duality, is the implication
$g_1f = g_2f \Rightarrow g_1 = g_2$
The name here comes from the fact that we can remove the right-cancellable f from the right of any equation it is involved in.
A right cancellable arrow in a category is an epimorphism.
4.1 In Set
For epimorphisms the interpretation in terms of set functions is that whatever f does, it doesn't hide any part of the things g1 and g2 do: applying f first doesn't restrict the total available scope g1 and g2 have. In Set, the epimorphisms are exactly the surjective functions.
In Wikipedia: [6].
5 More on factoring
In Set, and in many other categories, any morphism can be expressed by a factorization of the form f = ip where i is a monomorphism and p is an epimorphism. For instance, in Set, we know that a function is surjective onto its image, which in turn is a subset of the codomain, giving a factorization into an epimorphism - the projection onto the image - followed by a monomorphism - the inclusion of the image into the codomain.
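As a concrete example (added here for illustration): take $f:\{1,2,3\}\to\{a,b,c\}$ with $f(1)=f(2)=a$ and $f(3)=b$. Its image is $\{a,b\}$, and $f$ factors as the surjection $\{1,2,3\}\to\{a,b\}$ sending $1,2\mapsto a$ and $3\mapsto b$, followed by the inclusion $\{a,b\}\hookrightarrow\{a,b,c\}$ - an epimorphism followed by a monomorphism.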
A generalization of this situation is sketched out on the Wikipedia page for Factorization systems [7].
Note that in Set, every morphism that is both a mono and an epi is immediately an isomorphism. We shall see in the homework that in other categories this implication does not necessarily hold.
6 Initial and Terminal objects
An object 0 is initial if for every other object C, there is a unique morphism $0\to C$. Dually, an object 1 is terminal if there is a unique morphism $C\to 1$.
First off, we note that the uniqueness above makes initial and terminal objects unique up to isomorphism whenever they exist: we shall perform the proof for one of the cases, the other is almost identical.
Proposition Initial (terminal) objects are unique up to isomorphism.
Proof: Suppose C and C' are both initial (terminal). Then there is a unique arrow $C\to C'$ and a unique arrow $C'\to C$. The compositions of these arrows are endoarrows of one or the other. Since all arrows from (to) an initial (terminal) object are unique, these compositions have to be the identity arrows. Hence the arrows we found between the two objects are isomorphisms. QED.
• In Sets, the empty set is initial, and any singleton set is terminal.
• In the category of Vector spaces, the single element vector space 0 is both initial and terminal.
On Wikipedia, there is a terse definition, and a good range of examples and properties: [8].
Note that terminal objects are sometimes called final, and are as such used in the formal logic specification of algebraic structures.
6.1 Zero objects
This last example is worth taking up in higher detail. We call an object in a category a zero object if it is simultaneously initial and terminal.
Some categories exhibit a richness of structure similar to the category of vector spaces: all kernels exist (nullspaces), homsets are themselves abelian groups (or even vector spaces), etc. With the correct amount of richness, the category is called an Abelian category, and forms the basis for homological algebra, where techniques from topology are introduced to study algebraic objects.
One of the core requirements for an Abelian category is the existence of zero objects in it: if a category does have a zero object 0, then for any Hom(A,B), the composite $A\to 0\to B$ is a uniquely determined member of the homset, and the addition on the homsets of an Abelian category has this particular morphism as its identity element.
6.2 Pointless sets and generalized elements
Arrows to initial objects and from terminal objects are interesting too - and as opposed to the arrows from initial and to the terminals, there is no guarantee for these arrows to be uniquely determined. Let us start with arrows $A\to 0$ into initial objects.
In the category of sets, such an arrow only exists if A is already the empty set.
In the category of all monoids, with monoid homomorphisms, we have a zero object, so such an arrow is uniquely determined.
For arrows $1\to A$, however, the situation is significantly more interesting. Let us start with the situation in Set. 1 is some singleton set, hence a function from 1 picks out one element as its image. Thus, at least in Set, we get an isomorphism of sets $A \cong Hom(1,A)$.
As with so much else here, we build up a general definition by analogy to what we see happening in the category of sets. Thus, we shall say that a global element, or a point, or a constant of an object A in a category with terminal objects is a morphism $x:1\to A$.
This allows us to talk about elements without requiring our objects to even be sets to begin with, and thus reduces everything to a matter of just morphisms. This approach is fruitful both in topology and in Haskell, and is sometimes called pointless.
The important point here is that we can replace function application f(x) by the already existing and studied function composition. If a constant x is just a morphism $x:1\to A$, then the value f(x) is just the composition $f\circ x:1\to A\to B$. Note, also, that since 1 is terminal, it has exactly one point.
In the idealized Haskell category, we have the same phenomenon for constants, but slightly disguised: a global constant is a 0-ary function. Thus the type declaration
`x :: a`
can be understood as syntactic sugar for the type declaration
`x :: () -> a`
thus reducing everything to function types.
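Continuing this thought, here is a small sketch (added for illustration; the names are made up for this example) of "function application as composition" with points:

```
-- A point of a is an arrow from the terminal object () into a.
type Point a = () -> a

point :: a -> Point a
point a = \() -> a

-- Applying f to a point x is just composition, as described above.
applyAt :: (a -> b) -> Point a -> Point b
applyAt f x = f . x
```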
Similarly to the global elements, it may be useful to talk about variable elements, by which we mean non-specified arrows $f:T\to A$. Allowing T to range over all objects, and f to range over all morphisms into A, we are able to recover some of the element-centered styles of arguments we are used to. We say that f is parametrized over T.
Using this, it turns out that an arrow $f:A\to B$ is a monomorphism if and only if for any variable elements $x,y:T\to A$ with $x\neq y$ we have $f\circ x\neq f\circ y$.
7 Internal and external hom
If $f:B\to C$, then f induces a set function $Hom(A,f):Hom(A,B)\to Hom(A,C)$ through $Hom(A,f)(g) = f\circ g$. Similarly, it induces a set function $Hom(f,A):Hom(C,A)\to Hom( B,A)$ through $Hom(f,A)(g) = g\circ f$.
Using this, we have an occasionally enlightening
Proposition An arrow $f:B\to C$ is
1. a monomorphism if and only if Hom(A,f) is injective for every object A.
2. an epimorphism if and only if Hom(f,A) is injective for every object A.
3. a split monomorphism if and only if Hom(f,A) is surjective for every object A.
4. a split epimorphism if and only if Hom(A,f) is surjective for every object A.
5. an isomorphism if and only if any one of the following equivalent conditions hold:
1. it is both a split epi and a mono.
2. it is both an epi and a split mono.
3. Hom(A,f) is bijective for every A.
4. Hom(f,A) is bijective for every A.
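To see why the first item holds (a quick unpacking, added here; the other items follow by dual or analogous arguments): for any object $A$ and arrows $g_1,g_2:A\to B$,
$$Hom(A,f)(g_1)=Hom(A,f)(g_2) \iff f\circ g_1=f\circ g_2,$$
so injectivity of $Hom(A,f)$ for every $A$ says precisely that $f\circ g_1=f\circ g_2$ implies $g_1=g_2$, i.e., that $f$ is left cancellable.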
For any A,B in a category, the homset is a set of morphisms between the objects. For many categories, though, homsets may end up being objects of that category as well.
As an example, the set of all linear maps between two fixed vector spaces is itself a vector space.
Alternatively, the function type
a -> b
is an actual Haskell type, and captures the morphisms of the idealized Haskell category.
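A small sketch (added for illustration) of what it buys us that the internal hom a -> b is itself a type: morphisms can be passed around, stored and composed like any other value.

```
-- Functions are first-class values of the internal hom type a -> a.
twice :: (a -> a) -> (a -> a)
twice f = f . f

-- For example, twice (+3) 1 evaluates to 7.
```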
We shall return to this situation later, when we are better equipped to give a formal scaffolding to the idea of having elements in objects in a category act as morphisms. For now, we shall introduce the notations $[A\to B]$ or $B^A$ to denote the internal hom - where the morphisms between two objects live as an object of the category. This distinguishes $B^A$ from Hom(A,B).
To gain a better understanding of the choice of notation, it is worth noting that for finite sets $|Hom_{Set}(A,B)| = |B|^{|A|}$: a function $A\to B$ independently chooses one of the $|B|$ possible values for each of the $|A|$ elements of $A$.
8 Homework
Passing mark requires at least 4 of 11.
1. Suppose g,h are two-sided inverses to f. Prove that g = h.
2. (requires some familiarity with analysis) There is a category with object $\mathbb R$ (or even all smooth manifolds) and with morphisms smooth (infinitely differentiable) functions $f:\mathbb R\to\mathbb R$. Prove that being a bijection does not imply being an isomorphism. Hint: what about $x\mapsto x^3$? Wikipedia definition of smoothness: [9]. The moral of the definition is that all derivatives, and derivatives of derivatives, etc., are everywhere finite and continuous.
3. (try to do this if you don't do 2) In the category of posets, with order-preserving maps as morphisms, show that not all bijective homomorphisms are isomorphisms. See the notes for Lecture 1 for details on posets and order-preserving maps, as well as wikipedia links.
4. Consider the partially ordered set P as a category. Prove: every arrow is both monic and epic. Is every arrow thus an isomorphism?
5. What are the terminal and initial objects in a poset? Give an example each of a poset that has both, of one that has exactly one of them, and of one that has neither. Give an example of a poset that has a zero object.
6. What are the terminal and initial objects in the category with objects graphs and morphisms graph homomorphisms? Definition of a graph and a graph homomorphism occurred in Lecture 1.
7. Prove that if a category has a zero object, then all initial and all terminal objects are isomorphic, and they are all zero objects.
8. Prove that the composition of two monomorphisms is a monomorphism and that the composition of two epimorphisms is an epimorphism. If $g\circ f$ is monic, do any of g,f have to be monic? If the composition is epic, do any of the factors have to be epic?
9. Verify that the equivalence relation used in defining subobjects really is an equivalence relation. Further verify that this fixes the motivating problem.
10. Describe a representative subcategory each of:
• The category of vectorspaces over the reals.
• The category formed by the preordered set of the integers $\mathbb Z$ and the order relation $a\leq b$ if a | b. Recall that a preordered set is a set P equipped with a relation $\leq$ that fulfills transitivity and reflexivity, but not necessarily anti-symmetry.
11. * An arrow $f:A\to A$ in a category C is an idempotent if $f\circ f = f$. We say that f is a split idempotent if there is some $g:A\to B, h:B\to A$ such that $h\circ g=f$ and $g\circ h=1_B$. Show that in Set, f is idempotent if and only if its image equals its set of fixed points. Show that every idempotent in Set is split. Give an example of a category with a non-split idempotent.
http://mathhelpforum.com/math-topics/15396-force-pi-10j.html
# Thread:
1. ## A force of (pi + 10j)
i need to figure out the distance and the speed of the car, so i need 4 and 5. thank you... thanks in advance for the help!
Attached Thumbnails
2. Originally Posted by carlasader
i need to figure out the distance and the speed of the car, so i need 4 and 5. thank you... thanks in advance for the help!
Hello,
to #4:
You have 2 forces which can be described by vectors:
$\overrightarrow{f_1}=\left( \begin{array}{c}p \\ 10 \end{array} \right)$ and $\overrightarrow{f_2}=\left( \begin{array}{c}3 \\ 5 \end{array} \right)$
The resulting force can be described by: $\overrightarrow{f_r}=\left( \begin{array}{c}p+3 \\ 15 \end{array} \right)$
From $F = m \cdot a$ you know that $F =2 \frac{m}{s^2} \cdot 8.5 kg = 17 N = \left | \overrightarrow{f_r}\right|$
Thus you get:
$17^2 = (3+p)^2+15^2\ \ \Longleftrightarrow \ \ p^2+6p-55=0$
You'll get the solutions: $p_1=5 \text{ or } p_2=-11$
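As a quick check (added; not part of the original reply), substituting $p_1=5$ back in gives $(3+5)^2+15^2 = 64+225 = 289 = 17^2$, and $p_2=-11$ gives $(3-11)^2+15^2 = 64+225 = 289 = 17^2$ as well, so both solutions are consistent with the resultant force of 17 N.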
http://programmingpraxis.com/2009/06/19/monte-carlo-factorization/?like=1&_wpnonce=81a70c627a
# Programming Praxis
A collection of etudes, updated weekly, for the education and enjoyment of the savvy programmer
## Monte Carlo Factorization
### June 19, 2009
We have previously examined two methods of integer factorization, trial division using wheels and Fermat’s method of the differences of squares. In this exercise we examine a probabilistic method of integer factorization that takes only a small, fixed amount of memory and tends to be very fast. This method was developed by John Pollard in 1975 and is commonly called the Pollard rho algorithm.
The algorithm is based on the observation that, for any two random integers x and y that are congruent modulo p, the greatest common divisor of their difference and n, the integer being factored, will be 1 if p and n are coprime (have no factors in common) but will be between 1 and n if p is a factor of n. By the birthday paradox, p will be found with reasonably large probability after trying about $\sqrt{p}$ random pairs.
Pollard’s algorithm uses a function modulo n to generate a pseudo-random sequence. Two copies of the sequence are run, one twice as fast as the other, and their values are saved as x and y. At each step, we calculate gcd(x-y,n). If the greatest common divisor is one, we loop, since the two values are coprime. If the greatest common divisor is n, then the values of the two sequences have become equal and Pollard’s algorithm fails, since the sequences have fallen into a cycle, which is detected by Floyd’s tortoise-and-hare cycle-finding algorithm; that’s why we have two copies of the sequence, one (the “hare”) running twice as fast as the other (the “tortoise”). But if the greatest common divisor is between 1 and n, we have found a factor of n.
Failure doesn’t mean failure. It just means that the particular pseudo-random sequence that we chose doesn’t lead to success. Our response to failure is to try another sequence. We use the function x² + c (mod n), where c is initially 1. If Pollard’s algorithm fails, we increase c to 2, then 3, and so on. If we keep increasing c, we will eventually find a factor, though it may take a long time if n is large.
Your task is to implement Pollard's factorization algorithm. You can test it by calculating the factors of the 98th Mersenne number, 2^98 - 1. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
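Not part of the original exercise text: a minimal Haskell sketch of the gcd loop described above (the function name and structure are illustrative only, not the official suggested solution). The caller would retry with c+1 on a Nothing result, as discussed above.

```
-- Pollard rho step: x is the tortoise, y the hare, c the constant in x^2 + c (mod n).
rho :: Integer -> Integer -> Maybe Integer
rho c n = go (f 2) (f (f 2))
  where
    f x = (x * x + c) `mod` n
    go x y
      | d == 1    = go (f x) (f (f y))  -- still coprime: keep stepping
      | d == n    = Nothing             -- sequences met: this c fails
      | otherwise = Just d              -- nontrivial factor found
      where d = gcd (abs (x - y)) n
```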
Posted by programmingpraxis, filed in Exercises.
### 5 Responses to “Monte Carlo Factorization”
1. June 19, 2009 at 1:00 PM
[...] Praxis – Monte Carlo factorization By Remco Niemeijer In today’s Programming Praxis problem we have to implement John Pollard’s factorization algorithm. Our [...]
2. Remco Niemeijer said
June 19, 2009 at 1:01 PM
My Haskell solution (see http://bonsaicode.wordpress.com/2009/06/19/programming-praxis-monte-carlo-factorization/ for a version with comments):
```import Control.Arrow
import Data.Bits
import Data.List
import System.Random
isPrime :: Integer -> StdGen -> Bool
isPrime n g =
let (s, d) = (length *** head) . span even $ iterate (`div` 2) (n-1)
xs = map (expm n d) . take 50 $ randomRs (2, n - 2) g
in all (\x -> elem x [1, n - 1] ||
any (== n-1) (take s $ iterate (expm n 2) x)) xs
expm :: Integer -> Integer -> Integer -> Integer
expm m e b = foldl' (\r (b', _) -> mod (r * b') m) 1 .
filter (flip testBit 0 . snd) .
zip (iterate (flip mod m . (^ 2)) b) $
takeWhile (> 0) $ iterate (`shiftR` 1) e
factor :: Integer -> Integer -> Integer
factor c n = factor' 2 2 1 where
f x = mod (x * x + c) n
factor' x y 1 = factor' x' y' (gcd (x' - y') n) where
(x', y') = (f x, f $ f y)
factor' _ _ d = if d == n then factor (c + 1) n else d
factors :: Integer -> StdGen -> [Integer]
factors n g = sort $ fs n where
fs x | even x = 2 : fs (div x 2)
| isPrime x g = [x]
| otherwise = f : fs (div x f) where f = factor 1 x
main :: IO ()
main = print . factors (2^98 - 1) =<< getStdGen
```
3. July 1, 2009 at 4:38 PM
Here’s my attempt in Python. A couple of issues in the code remain. The factors that it discovers aren’t guaranteed to be prime. I cribbed the Miller-Rabin test from one of the python code repositories. And, I don’t really understand exactly how this works. :-) Back to the reference books.
```#!/usr/bin/env python
#
# a basic implementation of the Pollard rho factorization
# Written by Mark VandeWettering <markv@pixar.com>
#
import sys
import locale
import random
class FactorError(Exception):
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
def miller_rabin_pass(a, n):
d = n - 1
s = 0
while d % 2 == 0:
d >>= 1
s += 1
a_to_power = pow(a, d, n)
if a_to_power == 1:
return True
for i in xrange(s-1):
if a_to_power == n - 1:
return True
a_to_power = (a_to_power * a_to_power) % n
return a_to_power == n - 1
def isprime(n):
for repeat in xrange(20):
a = 0
while a == 0:
a = random.randrange(n)
if not miller_rabin_pass(a, n):
return False
return True
def gcd(a, b):
while b != 0:
a, b = b, a%b
return a
def findfactor(n):
for c in range(1, 50):
x = y = random.randint(1, n-1)
x = (x * x + c) % n
y = (y * y + c) % n
y = (y * y + c) % n
while True:
t = gcd(n, abs(x-y))
if t == 1:
x = (x * x + c) % n
y = (y * y + c) % n
y = (y * y + c) % n
elif t == n:
break
else:
return t
raise FactorError("couldn't find a factor.")
def factor(n):
r = []
while True:
if isprime(n):
r.append(n)
break
try:
f = findfactor(n)
r.append(f)
n = n / f
except FactorError, msg:
r.append(n)
break
r.sort()
return r
def doit(n):
flist = factor(n)
print locale.format("%d", n, True), "="
for f in flist:
print "\t%s" % locale.format("%d", f, True)
locale.setlocale(locale.LC_ALL, "")
doit(2**98-1)
```
4. July 1, 2009 at 9:48 PM
Okay, I fixed a couple of things, and extended the program a tiny bit. It now is a numeric calculator of sorts. It’s not industrial strength or anything, but you can basically type any python numeric expression, and it will use eval() (at least with a predefined environment) to evaluate the number. I’ve also predefined a couple of built in functions. prime(n) will return an n digit prime. rsa(n) will return an rsa key which is the combination of two n/2 digit primes. factor(n) factors n. I’ve also added code to do some trial division as well, to get rid of small factors, and it collapses multiple occurrences of a factor (instead of printing 128 copies of 2 when factoring 2^128, it outputs “2**128″).
```#!/usr/bin/env python
#
# a basic implementation of the Pollard rho factorization
# Written by Mark VandeWettering <markv@pixar.com>
#
import sys
import locale
import random
try:
import readline
except ImportError, msg:
print msg
print "Line editing disabled."
# an inefficient but straightforward way to find primes...
def primes(n):
primes = [2]
for x in range(3, n, 2):
prime = True
for p in primes:
if p * p > n:
break
if x % p == 0:
# it's composite..
prime = False
break
if prime:
primes.append(x)
return primes
class FactorError(Exception):
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
def miller_rabin_pass(a, n):
d = n - 1
s = 0
while d % 2 == 0:
d >>= 1
s += 1
a_to_power = pow(a, d, n)
if a_to_power == 1:
return True
for i in xrange(s-1):
if a_to_power == n - 1:
return True
a_to_power = (a_to_power * a_to_power) % n
return a_to_power == n - 1
def isprime(n):
for repeat in xrange(20):
a = 0
while a == 0:
a = random.randrange(n)
if not miller_rabin_pass(a, n):
return False
return True
def gcd(a, b):
while b != 0:
a, b = b, a%b
return a
def findfactor(n):
for c in range(1, 10):
x = y = random.randint(1, n-1)
x = (x * x + c) % n
y = (y * y + c) % n
y = (y * y + c) % n
while True:
t = gcd(n, abs(x-y))
if t == 1:
x = (x * x + c) % n
y = (y * y + c) % n
y = (y * y + c) % n
elif t == n:
break
else:
return t
raise FactorError("couldn't find a factor.")
def factor(n):
r = []
for p in primes(10000):
while n % p == 0:
r.append(p)
n = n / p
if n == 1:
return r
while True:
if isprime(n):
r.append(n)
break
try:
f = findfactor(n)
r.append(f)
n = n / f
except FactorError, msg:
r.append(n)
break
r.sort()
return r
# this function would be easier to write recursively, but
# python isn't good at tail recursion, so in theory, it could
# fail. Too bad.
def shorten(flist):
slist = []
idx = 0
while flist[idx:] != []:
hd = flist[idx]
idx = idx + 1
exp = 1
while flist[idx:] != [] and flist[idx] == hd:
exp = exp + 1
idx = idx + 1
if exp > 1:
slist.append(locale.format("%d", hd, True) + "**"+str(exp))
else:
slist.append(locale.format("%d", hd, True))
return slist
def factorit(n):
flist = factor(n)
print locale.format("%d", n, True), "="
for f in shorten(flist):
print "\t%s" % f
locale.setlocale(locale.LC_ALL, "")
import string
def prime(n):
"generate an n bit prime"
while True:
x = int(''.join([random.choice(string.digits) for i in range(n)]))
if isprime(x):
return x
def rsapair(n):
return prime(n//2)*prime(n//2)
env = { "prime" : prime,
"rsa" : rsapair,
"factor" : factorit}
while True:
try:
num = raw_input("enter a python numexpr >> ")
n = eval(num, env)
if n:
print n
except NameError, msg:
print >> sys.stderr, msg
except SyntaxError, msg:
print >> sys.stderr, msg
except KeyboardInterrupt, msg:
print >> sys.stderr, "**Interrupted**"
continue
except EOFError, msg:
print
break
```
5. programmingpraxis said
July 1, 2009 at 11:59 PM
Instead of eval, you might want to build your own calculator. See the very first Programming Praxis exercise for an RPN calculator.
http://mathhelpforum.com/algebra/210658-algebra-2-can-someone-show-me-how-you-would-go-about-solving-g-x.html
# Thread:
1. ## Algebra 2:Can someone show me how you would go about solving for G(x)?
The equation goes: 2G(x)-1/(G(x)+1=2√x2+1 -1/(√x2+1 +1)
I need to know how would you go about solving for the G(x) because i'm so stuck!
Note: The "x2+1" is under the square root. It jus didn't look right so I had to make a note.
2. ## Re: Algebra 2:Can someone show me how you would go about solving for G(x)?
is it (2G(x)-1)/(G(x)) or 2G(x)-(1/G(x))??
3. ## Re: Algebra 2:Can someone show me how you would go about solving for G(x)?
Because if it's (2G(x)-1)/(G(x)) then just split up the numerator and you'll get 2-1/G(x)= 2√x 2+1 -1/(√x2+1 +1) and that's the equivalent of 2-[2√x2+1 -1/(√x2+1 +1) ]=1/G(x)
So that'll give you G(x)=1/(2-[2√x2+1 -1/(√x2+1 +1)])
4. ## Re: Algebra 2:Can someone show me how you would go about solving for G(x)?
Hello, EJdive43!
How is this any different from any other "solve for G" problem?
$\text{Solve for G: }\:\frac{2G-1}{G+1} \:=\:\frac{2\sqrt{x^2+1} - 1}{\sqrt{x^2+1} + 1}$
We have: $(2G-1)(\sqrt{x^2+1} + 1) \:=\:(G+1)(2\sqrt{x^2+1} - 1)$
$2G\sqrt{x^2+1} + 2G - \sqrt{x^2+1} - 1 \:=\:2G\sqrt{x^2+1} - G + 2\sqrt{x^2+1} - 1$
$3G \:=\:3\sqrt{x^2+1}$
$G \:=\:\sqrt{x^2+1}$
http://math.stackexchange.com/questions/249381/extension-of-holder-continuous-function
# Extension of Hölder continuous function
Let $f:[0,c] \rightarrow \mathbb R$ be Hölder continuous with constant $M>0$ and power $p \in (0,1)$, and suppose it satisfies $f(0)=f(c)$. Let $g:[0,2c] \rightarrow \mathbb R$ be given by:
$$g(x)=f(x) \ \ \textrm{for} \ \ x \in [0,c]$$ and $$g(x)=f(x-c) \ \ \textrm{for} \ \ x \in [c,2c].$$
Then $g$ is Hölder continuous with the same power but I don't know if constant $M$ is the same.
I try in this way. Let $x,y \in [0,2c]$, $x<y$. It is obvious that if $x,y \in [0,c]$ or $x,y \in [c,2c]$ then $|g(x)-g(y)|\leq M |x-y|^p$. For $x\in [0,c]$, $y\in [c,2c]$ we have
$$|g(x)-g(y)| \leq |g(x)-g(c)|+|g(c)-g(y)| =|f(x)-f(c)|+|f(0)-f(y-c)|\leq M [(c-x)^p+(y-c)^p]\leq M 2^{1-p} (c-x+y-c)^p= M 2^{1-p} |y-x|^p.$$
(I have used inequality: $2^{p-1}(u^p+v^p)\leq (u+v)^p$ for $u,v \geq 0$, $p\in (0,1)$, which follows from concavity of $[0,\infty) \ni t\mapsto t^p$.)
Can the constant $M 2^{1-p}$ be improved to $M$?
Thanks
-
## 1 Answer
Suppose $f(x) = x^p$ for $0 \le x \le \epsilon$ and $f(x) = -(c-x)^p$ for $c-\epsilon \le x \le c$, which (at least if $c$ is large enough and you define $f$ suitably on $[\epsilon, c-\epsilon]$) has constant $M=1$. Then taking $x=c+\epsilon$, $y=c-\epsilon$, $$|g(x) - g(y)| = 2 \epsilon^p = 2^{1-p} |x-y|^p$$ and you can't get rid of the $2^{1-p}$.
-
http://mathoverflow.net/questions/90953/induced-paths-of-order-4/91019
## Induced Paths of Order 4
In a graph $G=(V,E)$ of order $n$, what fraction of the $\binom{n}{4}$ $4$-subsets of $V$ can induce the path of order four?
I looked at this question 30 years ago and was never able to come up with a respectable upper bound. The question has reared its head again. The answer appears to be somewhere between $1/4$ and $1/3$, though that upper bound is almost certainly weak. Ideas?
-
I don't understand the question. By path of order $4,$ do you mean a path $v_1, v_2, v_3, v_4?$ By order $n,$ I assume "with $n$ vertices"? How does a subset induce a path, since paths are order dependent? If you mean all possible orderings of the four elements, a subset can induce 24 path (or 12, if you don't care about the direction). Note further that if $G$ is the complete graph, then by the above, every such subset DOES induce 24 (or 12) paths. So what do you mean? – Igor Rivin Mar 12 2012 at 1:36
I interpret the question as follows: take all the 4-subsets of vertices and count as a hit each of those for which the induced graph on those 4 vertices is isomorphic to a 4-path. The complete graph has a count of 0 because every 4-subset of the vertices induces a complete graph, not a 4-path. – gordon-royle Mar 12 2012 at 2:25
Yes, by induced subgraph I mean the usual - you include all the edges in the original graph involving the four vertices. For a complete graph you get a count of zero. One can do better. By a path, I mean what is usually meant by a path in Graph Theory. For example, if G is the cycle of order 5, then all 5 induced subgraphs of order 4 are paths. But the maximum fraction that can induce $P_4$ in a graph of order $n$ is clearly a non-increasing function of $n$ (if $n \ge 4$, and it's bounded below, so there should be a limit. I would like to know the limit. – geoffreyexoo Mar 12 2012 at 2:47
There are results of Alon (tinyurl.com/nogapaper) and of Bollobas and Sarkar (myweb.facstaff.wwu.edu/sarkara/four.ps) on maximizing the number of copies of P_4 over graphs with a fixed number of edges. Not posting as an answer since the word "induced", and fixing the number of edges rather than of vertices, makes a pretty big difference. As a historical curiosity, this seems to be Noga Alon's first paper, according to the publication list on his web site. – Louigi Addario-Berry Mar 12 2012 at 14:09
## 1 Answer
The question appears to be difficult. The best lower bound that I am aware of is still the one provided by the question author in 1986:
$$\frac{960}{4877}\binom{n}{4}\sim 0.19684\binom{n}{4}.$$
An upper bound is referred to in the paper "The Inducibility of Graphs on Four Vertices" by James Hirst. It is
$$\sim 0.2064 \left( \binom{n}{4} + o(n^4)\right).$$
The bound is obtained via semi-definite programming using the flag algebra technique. This method was introduced by Razborov in 2007 and it can be used to automatically produce upper bounds on asymptotic number of induced configurations in graphs and hypergraphs. These bounds are occasionally tight. In particular, James Hirst in the paper linked above deduces asymptotically tight upper bounds on the number of induced subgraphs on $4$ vertices of any fixed type, except for the $4$ vertex path.
-
Thanks. I was not aware of this paper. Also, it appears that an upper bound of 156/495 can be achieved using current day computers by simply checking all graphs of order 12. – geoffreyexoo Mar 13 2012 at 0:15
http://www.nag.com/numeric/cl/nagdoc_cl23/html/S/s18cdc.html
# NAG Library Function Document: nag_bessel_k1_scaled (s18cdc)
## 1 Purpose
nag_bessel_k1_scaled (s18cdc) returns a value of the scaled modified Bessel function ${e}^{x}{K}_{1}\left(x\right)$.
## 2 Specification
#include <nag.h>
#include <nags.h>
double nag_bessel_k1_scaled (double x, NagError *fail)
## 3 Description
nag_bessel_k1_scaled (s18cdc) evaluates an approximation to ${e}^{x}{K}_{1}\left(x\right)$, where ${K}_{1}$ is a modified Bessel function of the second kind. The scaling factor ${e}^{x}$ removes most of the variation in ${K}_{1}\left(x\right)$.
The function uses the same Chebyshev expansions as nag_bessel_k1 (s18adc), which returns the unscaled value of ${K}_{1}\left(x\right)$.
## 4 References
Abramowitz M and Stegun I A (1972) Handbook of Mathematical Functions (3rd Edition) Dover Publications
## 5 Arguments
1: x – double (Input)
On entry: the argument $x$ of the function.
Constraint: ${\mathbf{x}}>0.0$. If x is too close to zero, there is a danger of overflow, and a failure will occur.
2: fail – NagError * (Input/Output)
The NAG error argument (see Section 3.6 in the Essential Introduction).
## 6 Error Indicators and Warnings
NE_REAL_ARG_LE
On entry, x must not be less than or equal to 0.0: ${\mathbf{x}}=〈\mathit{\text{value}}〉$.
${K}_{1}$ is undefined and the function returns zero.
NE_REAL_ARG_TOO_SMALL
On entry, x must be greater than $〈\mathit{\text{value}}〉$ : ${\mathbf{x}}=〈\mathit{\text{value}}〉$.
The function returns the value of the function at the smallest permitted value of the argument.
## 7 Accuracy
Relative errors in the argument are attenuated when propagated into the function value. When the accuracy of the argument is essentially limited by the machine precision, the accuracy of the function value will be similarly limited by at most a small multiple of the machine precision.
## 8 Further Comments
None.
## 9 Example
The following program reads values of the argument $x$ from a file, evaluates the function at each value of $x$ and prints the results.
### 9.1 Program Text
Program Text (s18cdce.c)
### 9.2 Program Data
Program Data (s18cdce.d)
### 9.3 Program Results
Program Results (s18cdce.r)
http://mathoverflow.net/questions/tagged/combinatorial-geometry
## Tagged Questions
0answers
63 views
### pavings and quadratic forms
Hi, let $L$ be a lattice isomorphic to $\mathbb{Z}^r$ for some positive integer $r$ and $E=L\otimes \mathbb{R}$. An integral paving in $E$ is a set $\Sigma$ of integral polytopes …
1answer
140 views
### Fano plane drawings: embedding PG(2,2) into the real plane
By a drawing of the Fano plane I mean a system of seven simple curves and seven points in the real plane such that every point lies on exactly three curves, and every curve conta …
0answers
81 views
### Maximum number of Vertices of Hypercube covered by Ball of radius R
Let $R>0$ be given and let $H^n$ be the unit hypercube in $\mathbb{R}^n$. The problem I am facing is to find the maximum number of vertices of $H^n$ which can be covered by a close …
1answer
165 views
### Does every convex polyhedron have a combinatorially isomorphic counterpart whose all faces have rational areas?
Does every convex polyhedron have a combinatorially isomorphic counterpart whose faces all have rational areas? Does every convex polyhedron have a combinatorially isomorphic coun …
0answers
57 views
### Realizability of extensions of a free oriented matroid by an independent set
Question: I am searching for a non-realizable matroid with few dependencies relative to the number of points. Precisely, I would like to find a non-realizable (over $\mathbb{R}$) o …
1answer
84 views
### The Cayley Menger Theorem and integer matrices with row sum 2
I just filled a gap in my education by learning about the Cayley-Menger theorem, and the Cayley-Menger determinant: If $P_0, \dots, P_n$ are $n+1$ point in $\mathbb{R}^n$, and \$d_ …
3answers
190 views
### Perimeter/Neighborhood of a graph on grid
Hello, I have a $\sqrt{n}\times\sqrt{n}$ lattice graph $G=(V,E)$ i.e. vertices on said 2-dim integer lattice, and two vertices have an edge if their $L_1$ distance is one. Now I w …
3answers
185 views
### Strong notions of general position
Hi! I am looking for notions of general position that are stronger than linear general position. To illustrate, 3 points in linear general position don't lie on a line. I want a …
0answers
50 views
### rigidity of isoradial graphs
Suppose given a $1$-separated net $\Gamma\subset\mathbb R^2$. Is it true or false that there exists $\delta>0$ and a $\delta$-isoradial graph containing $\Gamma$ as a subset of its …
0answers
63 views
### 2d bin packing problem, with opportunity to optimize the size of the bin
I have been tasked with optimizing a manufacturing process. It is a non-trivial, NP hard problem. The problem is similar to the 2d bin packing problem, but we are trying to optimiz …
2answers
650 views
### Access to a preprint by D. N. Verma
Some work I am doing is connected with a sequence 1, 3, 40, 1225, 67956, $\dots$ which agrees with http://oeis.org/A012250 for all eight terms. The only useful information in OEIS …
1answer
118 views
### What properties does generalized Delaunay triangulation have?
Suppose that instead of the usual circle, we pick some other convex set D and make the Delaunay triangulation of a finite planar point set with respect to this set, i.e. connect tw …
1answer
482 views
### Sane bound on number of moves for Maker-Breaker game on $\mathbb R^2$ for $\{0,1,2,3,4\}$
The description below comes from József Beck. Combinatorial games. Tic-tac-toe theory, Encyclopedia of Mathematics and its Applications, 114. Cambridge University Press, Cambrid …
1answer
420 views
### Can one recover the smooth Gauss Bonnet theorem form the combinatorial Gauss Bonnet theorem as an appropriate limit?
First let me state two known theorems Theorem 1 (for smooth manifolds): Let $(M,g)$ be a smooth compact two dimensional Riemannian manifold. Then \int \frac{K}{2 \pi} dA = \ …
1answer
179 views
### Ways to look at a polyhedral graph
Motivation There are at least three interpretations of an abstract polyhedral (= planar 3-vertex-connected) graph: the 1-skeleton of a convex polyhedron (when embedded into \$\ma …
http://math.stackexchange.com/questions/268268/argument-of-the-riemann-zeta-function/268325
# argument of the Riemann zeta function
what does it mean that the function $F(t)$
$$F(t)= \frac{\arg\zeta (\frac{1}{2}+it)}{\sqrt{\log\log(t)}}$$
is distributed as a 'Gaussian random variable' in the limit $t \to \infty$? Does it mean that
a) $$\arg\zeta (\frac{1}{2}+it)=(1+o(1))\sqrt{\log\log(t)}$$
b) the Argument of the Zeta function on the critical line is almost $\sqrt{\log\log(t)}$
-
A deterministic function of a deterministic variable cannot be distributed in any fashion as a random variable. On the other hand, it may be interesting to analyze the result when the independent variable $t$ is normally distributed. As $t \rightarrow \infty$ however, it may be that the phase of $F$ changes so rapidly that it may be better to treat it statistically. – Ron Gordon Dec 31 '12 at 17:47
## 2 Answers
Consider a measurable real-valued function $G$ defined on $(0,+\infty)$. For every $T\gt0$ and real number $x$, define $\ell_T(x)$ as the Lebesgue measure of the set $\{t\leqslant T\mid G(t)\leqslant x\}$.
One says that the function $G$ is asymptotically distributed as a standard normal random variable if, for every real number $x$, $$\lim\limits_{T\to\infty}T^{-1}\ell_T(x)=\frac1{\sqrt{2\pi}}\int_{-\infty}^x\mathrm e^{-z^2/2}\mathrm dz,$$ that is, $$\lim\limits_{T\to\infty}T^{-1}\int_0^T\mathbf 1_{G(t)\leqslant x}\,\mathrm dt=\mathbb P(Z\leqslant x),$$ where $Z$ is a standard normal random variable.
There exists some variants of this definition but the idea remains the same.
-
What Selberg proved is that for $E\subset \mathbb R$, the limit as $T\to\infty$ of $$\frac{1}{T}\,\mu\left(\left\{T\le t\le 2T \;:\; \arg\zeta(1/2+i t)/\sqrt{\tfrac12\log\log t}\in E\right\}\right)$$ where $\mu$ is Lebesgue measure, is equal to the integral over $E$ of the density of a Gaussian random variable with mean $0$ and standard deviation $1$: $$\frac{1}{\sqrt{2\pi}}\int_E \exp(-x^2/2)\, dx.$$
Edit: Here's an example that may help clarify, using the harmonic conjugate $\log|\zeta(1/2+i t)|$ (which is implemented in Mathematica). The analog of Selberg's theorem is true for this function as well. The plot is of $\log|\zeta(1/2+i t)|/\sqrt{\tfrac12\log\log t}$, for $50\le t\le 100$. Note it looks nothing like a Gaussian.
Here's a histogram of $50000$ equally spaced values values taken by this function:
Extreme negative values (near the Riemann zeros) are extremely scarce, as are large positive values. The fit to the bell curve is not yet good, but the Riemann zeta function approaches its asymptotic behavior very slowly.
-
then stopple what you mean is that $$\frac{1}{T}\int_{T}^{2T}dt \frac{arg \xi(1/2+it)}{\sqrt{(1/2)loglog(t)}}= \frac{1}{\sqrt{2\pi}}\int_{E}exp(-x^{2}/2)$$ is this true – Jose Garcia Jan 2 at 11:31
@Jose: Well, no. Your right side depends on $E$; your left side does not. – stopple Jan 2 at 15:29
@Jose: Very roughly what Selberg's theorem says is that if you make a histogram of the values of $\arg(\zeta(1/2+it))/\sqrt{1/2\log\log(t)}$, it will look like the Gaussian, i.e. a bell curve. – stopple Jan 2 at 17:01
sorry stopple, i am a physicist so i do not understand much about probability (only for quantum mechanics). so if you represent the function $$\arg\zeta(1/2+it)/\sqrt{1/2\log\log(t)}$$ will it have the form of a Gaussian?? thanks for your advice :) and your patience – Jose Garcia Jan 2 at 20:05
@Jose: see edit above. – stopple Jan 2 at 21:11
http://mathhelpforum.com/algebra/193049-locus-parabola-problem.html
# Thread:
1. ## Locus and the Parabola Problem
Hi there MHF,
After an hour of intensive algebra, I am still unable to solve this question.
Could someone please help me with Part A). And also, if they are kind enough, Part B).
Thank you.
2. ## Re: Locus and the Parabola Problem
Originally Posted by eskimogenius
Hi there MHF,
After an hour of intensive algebra, I am still unable to solve this question.
Could someone please help me with Part A). And also, if they are kind enough, Part B).
Thank you.
1. You probably have determined the coordinates of the focus as S(0, 1).
2. The equation of the parabola is $y=\frac14 \cdot x^2$.
That means the slope of the tangent to the parabola is calculated by $y'=\frac12 \cdot x$
3. Let $P\left(p,\frac14p^2 \right)$ be a point on the parabola. The tangent $t_P$ at P to the parabola has the equation:
$y-\frac14p^2=\frac12p \cdot (x-p)$
$t_P: y=\frac12p \cdot x - \frac14p^2$
4. The normal $n_P$ in P has the slope $m = -\frac2p$. Since the normal passes through P too it has the equation:
$y-\frac14p^2=-\frac2p \cdot (x-p)$
$y = -\frac2p \cdot x +2+\frac14p^2$
5. Now determine the point of intersection of the parabola and the normal.
6. Show that $\overrightarrow{SP} \cdot \overrightarrow{SQ} = 0$ that means the angle $\angle(PSQ) = 90^\circ$.
3. ## Re: Locus and the Parabola Problem
Having immense troubles here trying to determine the point of intersection of the parabola and the normal.
When you substitute it in, you have on one side, x^2, whilst on the other side, x. It is impossible to make x the subject of the equation.
How do I complete your step 5?
In other attempts, I have tried to use a separate variable for Q, with P being (2ap, ap^2) and Q being (2aq, aq^2). And then using Pythagoras' Theorem to prove true for the right angle. However, the algebra becomes intense and you are unable to eliminate and cancel down to the RHS. Also, I have attempted to use the gradients, then attempting to substitute the given condition of QS = 2PS, however, that fails also with the algebra coming up to powers of 4.
4. ## Re: Locus and the Parabola Problem
Originally Posted by eskimogenius
Having immense troubles here trying to determine the point of intersection of the parabola and the normal.
When you substitute it in, you have on one side, x^2, whilst on the other side, x. It is impossible to make x the subject of the equation.
How do I complete your step 5?
...
The intersection of the parabola and the normal:
equation of parabola: $y = \frac14 x^2$
equation of normal: $y = -\frac2p x+2+\frac14p^2$
Solve for x:
$\frac14 x^2 = -\frac2p x + 2 + \frac14p^2$
$x^2+\frac8p x - 8 - p^2=0$
Apply the quadratic formula:
$x = \frac{-\frac8p \pm \sqrt{\frac{64}{p^2} - 4(-8-p^2)}}2$
$x = \frac{-\frac8p \pm \frac2p \cdot \sqrt{p^4+8p^2+16 }}2$
The radicand is a complete square!
Therefore you'll get:
$\underbrace{x = p}_{P} ~\lor ~ \underbrace{x = -p-\frac8p}_{Q}$
So Q has the coordinates $Q\left(-p-\frac8p, \frac14\left(p+\frac8p \right)^2\right)$
To prove the orthogonality of $\overline{SP}$ and $\overline{SQ}$ respectively I would use vectors as I've described in my previous post.
http://math.stackexchange.com/questions/156462/prove-that-16-1156-111556-11115556-1111155556-are-squares?answertab=oldest
# Prove that 16, 1156, 111556, 11115556, 1111155556… are squares.
I'm 16 years old, and I'm studying for my exam maths coming this monday. In the chapter "sequences and series", there is this exercise:
Prove that a positive integer formed by $k$ times digit 1, followed by $(k-1)$ times digit 5 and ending on one 6, is the square of an integer.
I'm not a native English speaker, so my translation of the exercise might be a bit crappy. What is says is that 16, 1156, 111556, 11115556, 1111155556, etc are all squares of integers. I'm supposed to prove that. I think my main problem is that I don't see the link between these numbers and sequences.
Of course, we assume we use a decimal numeral system (= base 10)
Can anyone point me in the right direction (or simply prove it, if it is difficult to give a hint without giving the whole proof). I think it can't be that difficult, since I'm supposed to solve it.
For sure, by using the word "integer", I mean "natural number" ($\in\mathbb{N}$)
Thanks in advance.
As TMM pointed out, the square roots are 4, 34, 334, 3334, 33334, etc...
This sequence is given by any one of the following descriptions:
• $t_n = t_{n-1} + 3*10^{n-1}$
• $t_n = \lfloor\frac{1}{3}*10^{n}\rfloor + 1$
• $t_n = t_{n-1} * 10 - 6$
But I still don't see any progress towards a proof. A human being can see the pattern in these numbers and can tell it will hold for $k$ going to $\infty$. But this isn't enough for a mathematical proof.
-
Hint - try writing the general term in simple terms using the fact that a block of digits all the same can be summed as a geometric progression. So a sequence of $k$ '1's is $\frac{10^k-1}9$, then see what you have. – Mark Bennet Jun 10 '12 at 13:11
@MarkBennet: Thanks! I found it! – Martijn Courteaux Jun 10 '12 at 13:37
## 9 Answers
Mark Bennet already suggested looking at the numbers as geometric series, so I'll use a slightly different approach. Instead of writing the squares like that, try writing them as follows:
$$\begin{align} 15&.999\ldots = 16 \\ 1155&.999\ldots = 1156 \\ 111555&.999\ldots = 111556 \\ \vdots\end{align}$$
These numbers can be expressed as a sum of three numbers, as follows:
$$\begin{align} 111111&.111\ldots \\ 444&.444\ldots \\ 0&.444\ldots \\ \hline 111555&.999\ldots \end{align}$$
Since $1/9 = 0.111\ldots$, we get
$$\begin{align} 111111&.111\ldots = \frac{1}{9} \cdot 10^{2k} \\ 444&.444\ldots = \frac{1}{9} \cdot 4 \cdot 10^k \\ 0&.444\ldots = \frac{1}{9} \cdot 4 \\ \hline 111555&.999\ldots = \frac{1}{9} \left(10^{2k} + 4 \cdot 10^k + 4\right). \end{align}$$
But this can be written as a square:
$$\frac{1}{9} \left(10^{2k} + 4 \cdot 10^k + 4\right) = \left(\frac{10^k + 2}{3}\right)^2.$$
Since $10^k + 2$ is always divisible by $3$, this is indeed the square of an integer.
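A quick Python check of this closed form (just a sanity check I added; stopping at $k=9$ is an arbitrary cutoff):

```python
# Verify that ((10^k + 2)/3)^2 equals k ones, (k-1) fives, then a final 6.
for k in range(1, 10):
    target = int("1" * k + "5" * (k - 1) + "6")
    root = (10**k + 2) // 3          # 10^k + 2 is divisible by 3
    assert root * root == target
print("identity verified for k = 1..9")
```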
-
This is clever. – 000 Nov 21 '12 at 21:10
Hint: The square roots are 4, 34, 334, 3334, ...
Hint: Find the square roots.
-
In my opinion, an even better hint would be: Hint, find the square roots. – Gerry Myerson Jun 10 '12 at 13:00
@Gerry Thanks for the suggestion, I hope it looks better now! ;) – TMM Jun 10 '12 at 14:29
Somehow, keeping the old bit ruins the charm... – mixedmath♦ Jun 13 '12 at 11:53
Mark Bennet's hint seems to be a winner, so I'm reposting it CW:
Hint - try writing the general term in simple terms using the fact that a block of digits all the same can be summed as a geometric progression. So a sequence of $k$ '1's is $10^k−1\over9$, then see what you have.
-
k = 1$\rightarrow$ $4^2 = 16$
k = 2$\rightarrow$ $34^2 = 1156$
k = 3$\rightarrow$ $334^2 = 111556$
k = 4$\rightarrow$ $3334^2 = 11115556$
etc
So,
the left part is given by: $\left(\frac{10^k - 1}{3} + 1\right)^2$
the right part is given by: $\frac{10^{2k} - 1}{9} + 4\cdot\frac{10^k - 1}{9} + 1$
work out both parts and you will see that they are equal. It is now proven, since the base number of the left part (which is $\frac{10^k - 1}{3} + 1$) is always an integer.
-
Here's what I got from thinking about it for a little bit.
$u_1=16=1+5*10^0+10^1$
$u_2=1156=1+5*10^0+5*10^1+10^2+10^3$
$u_3=111556=1+5*10^0+5*10^1+5*10^2+10^3+10^4+10^5$
$u_k=1+\sum_{n=0}^{k-1} 5*10^n + \sum_{n=k}^{n=2k-1}10^n$
And $\sum_{n=k}^{n=2k-1}10^n=\sum_{n=0}^{n=2k-1}10^n-\sum_{n=0}^{n=k-1}10^n$
By the formula for the sum of a finite geometric series, we have: $$u_k=1+5 \frac{10^k-1}{9}+\frac{10^{2k}-1}{9} - \frac{10^k-1}{9}=1+ \frac{4(10^k)-4+10^{2k}-1}{9}$$ Bringing the 1 into the fraction and cancelling, we get$$u_k=\frac{10^{2k}+4(10^k)+4}{9}=\left(\frac{10^k+2}{3}\right)^2$$
And we are done.
-
Multiply one of these numbers by $9$, and you get $100...00400...004$, which is $100...002^2$.
-
Here's another way, using induction. I'm not a huge fan of proofs by induction, as they often seem to me to mask what's going on. In this case, though, induction allows you to stick to extremely elementary techniques, provided you can get your head around some notation and keep your columns in order.
Notation. Let $3_{(k)}$ denote $k$-many 3's in a row. So, $3_{(2)} = 33$ and $10_{(2)}2 = 1002$, for instance.
Our inductive hypothesis, following TMM's / Gerry Myerson's hint, is that $(3_{(n)}4)^2 = 1_{(n+1)}5_{(n)}6$. Checking the base case is trivial: sure enough $(3_{(0)}4)^2 = 1_{(1)}5_{(0)}6$, i.e., $4^2 = 16$.
We now assume the result for $n=k$ and aim to prove it for $n=k+1$. First, some elementary algebra, $(a+b)^2 = a^2 + 2ab + b^2$. We use this as follows:
$(3_{(k+1)}4)^2 = (30_{(k+1)} + 3_{(k)}4)^2 = 90_{(2k+2)} + 20_{(k)}40_{(k+1)} + (3_{(k)}4)^2$
We then apply the inductive hypothesis to $(3_{(k)}4)^2$:
$= 90_{(2k+2)} + 20_{(k)}40_{(k+1)} + 1_{(k+1)}5_{(k)}6$
It's now a matter of adding the digits in each column; a picture makes this easy, but it also works in words.
In words: the last $k+1$ digits are just $5_{(k)}6$, as the left and middle terms have $0$'s here. In the $k+2$th position, you have $4+1=5$, from the middle and last terms; so, cumulatively, we've now got $5_{(k+1)}6$. For the next $k$ positions, the first and middle term again supply only zeroes. So, we look to the final term, which gives us $k$ $1$'s. We're now up to $1_{(k)}5_{(k+1)}6$. At this point, the final term has run out and we look to the first and middle terms to fill the $2k+2$th column. These give $9+2=11$. So, we have $111_{(k)}5_{(k+1)}6$. That is:
$(3_{(k+1)}4)^2 = 1_{(k+2)}5_{(k+1)}6$
So, the inductive step is complete.
-
This is quite a nice result, and worth remembering; proving that each number in the sequence is the square of a natural number by 'guessing' what number each is the square of, and proving that this is indeed true for every number in the sequence. – Bill Michell Jun 10 '12 at 20:53
$\rm\begin{eqnarray} {\bf Hint}\ & &\,\ 9\ (11\ldots1155\ldots556) \\ &= &\,\ 9\,(11\ldots11 + 44\ldots44\,+\,1) \\ &=&\rm\ \ 10^{2k}-1\ +\ 4(10^k - 1) + 9\\ &=&\rm\ \ 10^{2k} +\, 4\cdot\!10^k\ +\ 4 \\ &=&\rm\ (10^k\ +\ \_\,)^2 \end{eqnarray}$
-
I like this a lot. – Mark Bennet Jun 10 '12 at 16:59
Why did you write "_" instead of "2"? – MJD Jun 10 '12 at 21:22
@Mark That (was) work left for the OP. – Gone Jun 10 '12 at 21:30
You may want to have a look at
http://www.cut-the-knot.org/arithmetic/NumberCuriosities/Squares.shtml
with a half dozen identities like that and links to as many of a similar sort. At one time there was a splash of activity on twitter.com around essentially the same question, but that was in 2011.
-
http://physics.stackexchange.com/questions/47870/what-is-the-simplest-possible-topological-bloch-function/48097
# What is the simplest possible topological Bloch function?
Kohmoto (1985) pointed out in Topological Invariant and the Quantization of the Hall Conductance how TKNN's calculation of Hall conductance is related to topology, in which topological nontriviality is said to be equivalent to the impossibility of choosing a global phase of the Bloch function $u_k (r)$ over the Brillouin zone. As shown in the Figure, we can choose two distinct gauges in sectors I and II, and the curvature is the loop integral of the phase mismatch on the boundary $\partial H$.
What is the simplest possible Bloch function that is
• topologically nontrivial, and
• an eigenstate of Bloch Hamiltonian?
Bloch Hamiltonian: $H(k_x,k_y) = \frac{1}{2m}(-i\partial + {\bf k}+e{\bf A}(x,y))^2 + U(x,y)$ where $U$ is lattice periodic.
-
## 1 Answer
Surprisingly, according to Immanuel Bloch's group (no relation to F. Bloch!), the simplest topological Bloch function is the 1D staggered lattice. The topological invariant is the Zak phase, the Berry phase accrued by walking across the Brillouin zone. The article will explain it better than I can: Direct Measurement of the Zak phase in Topological Bloch Bands
-
– Qmechanic♦ Jan 3 at 14:44
http://mathoverflow.net/questions/21397/what-should-be-taught-in-a-1st-course-on-riemann-surfaces/21560
## What should be taught in a 1st course on Riemann Surfaces?
I am teaching a topics course on Riemann Surfaces/Algebraic Curves next term. The course is aimed at 1st and 2nd year US graduate students who have taken basic coursework in algebra and manifold theory, but may not have had much exposure to algebraic geometry. I will loosely follow the book Introduction to Algebraic Curves by Griffiths. In particular, I hope to spend a minimum amount of time developing basic machinery (e.g. sheaf theory) and to start doing concrete geometry (e.g. canonical models of curves of genus up to 4) as soon as possible.
My question is: what are some good concrete, accessible geometric topics in Riemann Surface/Curve theory that aren't in the standard textbooks?
Let's say that the standard textbooks are the book I mentioned and those discussed: here.
-
What are the "standard textbooks"? – Kevin Lin Apr 14 2010 at 22:56
I just edited the post to address this. – jlk Apr 14 2010 at 23:14
1st and 2nd year students of what? I guess you mean graduate students? Or maybe master? Otherwise it is difficult to answer. – Andrea Ferretti Apr 15 2010 at 14:13
@Ferretti, thanks for pointing this out. I revised the question. – jlk Apr 15 2010 at 18:24
## 7 Answers
Good question. I bet you'll get many interesting answers.
About two years ago I taught an "arithmetically inclined" version of the standard course on algebraic curves. I had intended to talk about degenerating families of curves, arithmetic surfaces, semistable reduction and such things, but I ended up spending more time on (and enjoying) some very classical things about the geometry of curves. My lecture notes for that part of the course are available here:
http://www.math.uga.edu/~pete/8320notes6.pdf
Some things that I found fun:
1) Construction of curves with large gonality. For instance, after having given several examples of various curves, it occurred to me that I hadn't shown them a non-hyperelliptic curve in every genus g >= 3, so then I talked about trigonal curves, and then...Anyway, there is a very nice theorem here due to Accola and Namba: suppose a curve $C$ admits maps $x,y$ to $\mathbb{P}^1$ of degrees $d_1$ and $d_2$. If these maps are independent in the sense that $x$ and $y$ generate the function field of the curve (note that this must occur for easy algebraic reasons when $d_1$ and $d_2$ are coprime), then the genus of $C$ is at most $(d_1-1)(d_2-1)$.
I sketched the proof in an exercise, which was indeed solved in a problem session by one of the students.
2) Material on automorphism groups of curves: the Hurwitz bound, automorphisms of hyperelliptic curves, construction of curves with interesting automorphism group.
3) Weierstrass points, with applications to 2) above.
-
Professor Clark, I have been meaning to ask you -- do you now have an affirmation of, or counterexample to, Exercise 55? Thank you for assembling this beautiful collection of problems. – pmoduli Apr 15 2010 at 0:56
No, I don't know the answer (nor did anyone in my course work on it, that I recall). If only there were some kind of website where mathematicians could ask each other research level questions, you could ask the question on that site and probably get an answer... – Pete L. Clark Apr 15 2010 at 1:36
Pete, for Ex. 55, apart from possibility (for char. > 0 small compared to genus) that char. divides order of the automorphism, it's a consequence of Lefschetz trace formula: such a (finite order) aut. has "simple" fixed points and so via Lefschetz induces negation on the degree-1 cohomology and hence is an involution. Quotient curve by involution has degree-1 cohomology injecting into invariants upstairs, so quotient is genus 0, whence curve was hyperelliptic with given aut. as hyperell. involution. – BCnrd Apr 15 2010 at 8:24
The exercises in the early chapters of the book by Arbarello Cornalba Griffiths and Harris are very interesting. The book itself is a second course but the early chapters and execises are a recap with interesting side trips.
You can also look at Clemens's book, A scrapbook of complex curves, or something like that.
-
Agreed, there are some great exercises there, some of them quite challenging. (In at least one case, I had to recruit the help of the algebraic geometer in the office next door to mine.) – Pete L. Clark Apr 15 2010 at 1:39
One thing that I rather like (though I'm biased, and mentioning old work by my advisor...) is the theory of Prym varieties, which can be mentioned immediately after discussing Jacobians. In particular the $n$-gonal construction (see Donagi "Fibers of the Prym Map") has a lot of nice geometric consequences, including a proof that the intermediate Jacobian of a cubic threefold isn't the Jacobian of a curve (though that's not really a Riemann surface thing), but generally, Prym varieties are rather nice (as well as Weil pairing and theta characteristics) but aren't mentioned in most courses...I think they've got short appendices towards the end in Arbarello-Cornalba-Griffiths-Harris, and they leave out most of the details.
-
Although it is sort of indirectly related, it might be nice to talk about some introductory abelian variety things (as in the first few pages of Mumford's Abelian Varieties). The motivation would come from proving the equivalence of the definition of genus as the dimension of the Jacobian variety of the curve. When I took a "curves" class, I would have liked to see this rather than thinking the course was "self-contained".
Do not be afraid to show glimpses of huge areas of math that were motivated by the study of curves, even if you don't have time to do more than just mention it. I would have been far more excited and motivated to learn some of these things if I had seen it as motivated by curves, rather than the other way around (studying abelian varieties as interesting in their own right and only later learning a motivation).
-
I agree, and proving the universal property of the Albanese in general, and doing it specifically for Jacobians, is a useful thing. – Charles Siegel Apr 15 2010 at 11:25
These answers seem to have almost nothing on Riemann surfaces. I guess I am just too old-fashioned. In a first course on Riemann surfaces, I would like the student to get an understanding of the Riemann surface for log z, and for arcsin z, for example.
-
Given that the original poster says "I will loosely follow the book Introduction to Algebraic Curves by Griffiths.", I get the idea that he is really asking about what should be taught in a first course on complex algebraic geometry (focusing on complex algebraic curves). – Steven Gubkin Apr 16 2010 at 13:00
My training is in algebraic geometry, so the textbook choice reflects my professional biases. I am very interested in hearing the suggestions of people who are not algebraic geometers, though. – jlk Apr 16 2010 at 23:05
As Gerald said, really understanding some specific surfaces is useful. And not just the compact ones! Remember that Riemann's ideas were based on analytic continuation, not algebra.
And if you really mean Riemann surfaces, then Divisors and monodromy (compute some actual monodromy matrices!).
And if you want to cover material which is not in the standard textbooks, cover in depth the relation between Riemann surfaces and differential equations.
-
Did you have specific monodromy computations in mind? – jlk Apr 16 2010 at 23:32
Also, did you have specific examples of divisor computations in mind? I know interesting examples, but I'd be interested in seeing new ones. – jlk Apr 16 2010 at 23:39
Sorry, it's been too long, I can't remember the ones that I thought were interesting. But there is some code from Mark van Hoeij that helps with such computations which I would probably turn to. – Jacques Carette Apr 16 2010 at 23:47
Puiseux series and the Newton-Puiseux theorem are beautiful and very useful to understand ramification and related issues. They do appear in one of the "standard textbooks" of the list (Farkas-Kra) but it seems they are usually overlooked.
-
http://unapologetic.wordpress.com/2007/09/26/images-coimages-and-exactness/?like=1&source=post_flair&_wpnonce=fe7f791e1e
# The Unapologetic Mathematician
## Images, Coimages, and Exactness
By the first isomorphism theorem, we know that any morphism $f$ in an abelian category $\mathcal{C}$ factorizes as $f=m\circ e$ with $m=\mathrm{Ker}(\mathrm{Cok}(f))$, and $e$ is epic. Since $m$ is monic, $f\circ t=m\circ e\circ t=0$ exactly when $e\circ t=0$. That is, the kernel of $f$ is isomorphic to the kernel of $e$. Then since $e$ is epic, $e=\mathrm{Cok}(\mathrm{Ker}(e))=\mathrm{Cok}(\mathrm{Ker}(f))$. So there’s a sort of a symmetry here between the monic and the epic in the factorization of $f$.
Now let's consider another morphism $f'$ and a pair of morphisms $(g,h)$ so that $h\circ f=f'\circ g$. Then we can factorize each of $f$ and $f'$ as above to find $h\circ m\circ e=m'\circ e'\circ g$. Then there is a unique $k$ such that $e'\circ g=k\circ e$ and $m'\circ k=h\circ m$.
To see this, set $u=\mathrm{Ker}(f)=\mathrm{Ker}(e)$. Then $0=h\circ f\circ u=m'\circ e'\circ g\circ u$ so $e'\circ g\circ u=0$. Thus $e'\circ g$ factors uniquely through $e=\mathrm{Cok}(u)$ as $e'\circ g=k\circ e$. Then $m'\circ k\circ e=m'\circ e'\circ g=h\circ m\circ e$. And so since $e$ is epic we have $m'\circ k=h\circ m$.
Now, we’ll regard $f$ and $f'$ as objects in the arrow category $\mathcal{C}^\mathbf{2}$. Then the pair $(g,h)$ is a morphism from $f$ to $f'$. Similarly, the triangle $f=m\circ e$ is an object of $\mathcal{C}^\mathbf{3}$, and the triple $(g,k,h)$ is a morphism in this category.
What the above proof shows is that any object in $\mathcal{C}^\mathbf{2}$ can be assigned an object in $\mathcal{C}^\mathbf{3}$, and that any morphism in $\mathcal{C}^\mathbf{2}$ can be assigned one in $\mathcal{C}^\mathbf{3}$. Clearly this assignment amounts to a functor. In particular, if we start with the identity pair $(1,1)$ we must have an isomorphism for $k$, and thus any two factorizations are isomorphic.
Now, given this unique (up to isomorphism) factorization, we can define the image and coimage of $f=m\circ e:A\rightarrow B$ as $\mathrm{Im}(f)=m$ and $\mathrm{Coim}(f)=e$. Thus as expected the image of $f$ is a subobject of its target, and the coimage is a quotient object of its source.
Now that we have defined images and coimages we can define what it means for a composable sequence of morphisms to be exact. Let’s say we have $f:A\rightarrow B$ and $g:B\rightarrow C$. Both $\mathrm{Im}(f)$ and $\mathrm{Ker}(g)$ are subobjects of $B$, and we say that the pair $(f,g)$ is exact at $B$ when $\mathrm{Im}(f)=\mathrm{Ker}(g)$. We say that a longer string of composable arrows is exact if it is exact at each object inside the string.
As a special case, we say the sequence $\mathbf{0}\rightarrow A\rightarrow B\rightarrow C\rightarrow\mathbf{0}$ is short exact if it is exact. That is, if we let the two outer arrows be the unique such, let $f:A\rightarrow B$, and let $g:B\rightarrow C$, then the sequence is short exact if $\mathrm{Ker}(f)=\mathbf{0}$, $\mathrm{Cok}(g)=\mathbf{0}$, and $\mathrm{Ker}(g)=\mathrm{Im}(f)$. If we drop the left $\mathbf{0}$ we call the sequence short right exact, and short left exact sequences are defined similarly.
Now the factorization of $f:A\rightarrow B$ gives rise to two short exact sequences: $\mathbf{0}\rightarrow\mathrm{Ker}(f)\rightarrow A\rightarrow\mathrm{Coim}(f)\rightarrow\mathbf{0}$ and $\mathbf{0}\rightarrow\mathrm{Im}(f)\rightarrow B\rightarrow\mathrm{Cok}(f)\rightarrow\mathbf{0}$. Then because the objects of the coimage and the image are isomorphic, we can weave these two sequences together at that point. In fact, we did something just like this back when we talked about exact sequences of groups!
An $\mathbf{Ab}$-functor $T:\mathcal{C}\rightarrow\mathcal{D}$ is called left exact when it preserves all finite limits. In particular it preserves kernels — that is, left exact sequences. Since any $\mathbf{Ab}$-functor preserves biproducts, preserving kernels is enough to preserve all finite limits. Similarly, a right exact functor is one which preserves all finite colimits, or equivalently all cokernels — right exact sequences. Finally, a functor is exact if it is both left and right exact.
Posted by John Armstrong | Category theory
## 4 Comments »
1. [...] Exact Sequences Last time we defined a short exact sequence in an abelian category to be an exact sequence of the form . [...]
Pingback by | September 27, 2007 | Reply
2. [...] So let’s take this and consider a linear transformation . The first isomorphism theorem says we can factor as a surjection followed by an injection . We’ll just regard the latter as the inclusion of the image of as a subspace of . As for the surjection, it must be the linear map , just as in any abelian category. Then we can set up the short exact sequence [...]
Pingback by | June 27, 2008 | Reply
3. [...] all the machinery of homological algebra, if we should so choose. In particular, we can talk about exact sequences, which can be useful from time to time. Possibly related posts: (automatically [...]
Pingback by | December 15, 2008 | Reply
4. [...] are the morphisms in the category of -modules. It turns out that this category has kernels and has images. Those two references are pretty technical, so we’ll talk in more down-to-earth [...]
Pingback by | September 29, 2010 | Reply
http://mathhelpforum.com/pre-calculus/150437-show-help.html
1. ## 'Show that' help
Suppose that ab > 0. Show that if a < b, then 1/b < 1/a.
I know this is correct, but I don't know how to show it. I did this:
aa^-2 < ba^-2
1/a < ba^-2
(b^-2)/a < (a^-2)/b
Hoping that I could find a way to swap the inequality sign around whilst creating the fractions required. But I can't seem to do it.
Any assistance would be great!
2. Originally Posted by Glitch
Suppose that ab > 0. Show that if a < b, then 1/b < 1/a.
I know this is correct, but I don't know how to show it. I did this:
aa^-2 < ba^-2
1/a < ba^-2
(b^-2)/a < (a^-2)/b
Hoping that I could find a way to swap the inequality sign around whilst creating the fractions required. But I can't seem to do it.
Any assistance would be great!
You can consider two cases: (1) a and b are both positive, or (2) a and b are both negative.
3. Ok. But am I on the right track? Is there a method to proving this?
4. Originally Posted by undefined
You can consider two cases: (1) a and b are both positive, or (2) a and b are both negative.
We can do it without "consider two cases"...
1/b<1/a
1/b-1/a<0
{a-b}/ab<0
since ab>0, and a<b, it's obvious!
5. Thanks! Looks like I was way off! :P
If I were to write that in an exam, would I have to explain why it's obvious?
6. Yes, you explain it in the following form:
{a-b}<0 since a<b(given)
ab>0 {given}
so, {a-b}/ab={-}/{+}={-}<0
ok?
7. Yup. Thanks. I really do need more practise with this stuff.
8. Thanks Also sprach Zarathustra, my way was pretty tedious compared with yours.
Originally Posted by Glitch
Thanks! Looks like I was way off! :P
If I were to write that in an exam, would I have to explain why it's obvious?
You should explain it on an exam, yes, but it's simply that the numerator is negative while the denominator is positive...
Edit: Too slow.
9. One more thing worth mentioning since "show that" means we're dealing with formal proofs here.
Originally Posted by Also sprach Zarathustra
1/b<1/a
1/b-1/a<0
{a-b}/ab<0
These lines are connected implicitly by "if and only if" as in
1/b<1/a
$\displaystyle \iff$ 1/b-1/a<0
$\displaystyle \iff$ {a-b}/ab<0
For the last line, we don't need to think about whether it's "if and only if" because we only need to go in one direction
1/b<1/a
$\displaystyle \iff$ 1/b-1/a<0
$\displaystyle \iff$ {a-b}/ab<0
$\displaystyle \Longleftarrow ab > 0 \land a < b$
I write this mainly so that nobody gets confused, thinking the proof is not valid because we assumed what we wanted to prove.
10. I've been meaning to ask, what does that upside-down 'V' symbol mean?
11. Originally Posted by Glitch
I've been meaning to ask, what does that upside-down 'V' symbol mean?
AND
12. The simplest way is to multiply both sides by 1/(ab)! Then you won't have to say that this is obvious!
Just note that 1/ab>0 since ab>0.
a<b => a/ab<b/ab => 1/b<1/a .
13. Given $ab > 0$ and $a < b$.
Case 1: $a > 0$ and $b > 0$.
$a < b$
$\frac{a}{a} < \frac{b}{a}$
$1 < \frac{b}{a}$
$\frac{1}{b} < \frac{b}{ab}$
$\frac{1}{b} < \frac{1}{a}$.
Case 2: $a < 0$ and $b < 0$.
$a < b$
$\frac{a}{a} > \frac{b}{a}$
$1 > \frac{b}{a}$
$\frac{1}{b} < \frac{b}{ab}$
$\frac{1}{b} < \frac{1}{a}$.
Q.E.D.
14. Originally Posted by Glitch
Thanks! Looks like I was way off! :P
If I were to write that in an exam, would I have to explain why it's obvious?
The best thing to do would be to multiply both sides of $\frac{a-b}{ab}< 0$ by the positive value $ab$ to get $a-b< 0$, then add $b$ to both sides.
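Not a substitute for any of the proofs above, but here is a quick randomized sanity check of the statement (a small Python sketch I added; the sampling range is arbitrary):

```python
# If ab > 0 and a < b, then 1/b < 1/a: search randomly for a counterexample.
import random

random.seed(0)
for _ in range(100_000):
    a = random.uniform(-10, 10)
    b = random.uniform(-10, 10)
    if a * b > 0 and a < b:
        assert 1 / b < 1 / a
print("no counterexample found")
```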
http://math.stackexchange.com/questions/135714/calculating-volume-of-convex-polytopes-generated-by-inequalities
# Calculating volume of convex polytopes generated by inequalities
I have a set of inequalities, I am looking for a way to compute its volume. More specifically, I would like to compute the ratio of its volume with the volume if some more inequalities were added. I have seen this question, I think however that half-plane intersections can be general convex polytopes (not just simplexes, am I wrong?).
I do not think I can expect a formula; an efficient algorithm would do (even an inefficient one would be fine). Numerical method suggestions are also welcome.
-
## 1 Answer
If you are ok with an approximate result, just use the Monte Carlo method, and if you want an exact solution (like a rational number) then I think you will need to enumerate all the vertices (each vertex is given as the solution of a set of $n$ independent linear equations taken from those inequalities), split the polytope into $n$-cells (like a triangulation of a polygon) and sum the volumes of those $n$-cells.
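For the Monte Carlo route, a minimal sketch of the ratio asked about in the question (my own illustration; the example inequalities, the bounding box and the sample count are arbitrary choices, not from the question):

```python
import numpy as np

def inside(A, b, pts):
    """Which sample points satisfy every inequality A @ x <= b."""
    return np.all(pts @ A.T <= b, axis=1)

rng = np.random.default_rng(1)
# Example polytope: the unit square 0 <= x, y <= 1 ...
A1 = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
b1 = np.array([1, 0, 1, 0], dtype=float)
# ... and the same square with one extra inequality x + y <= 1.
A2 = np.vstack([A1, [[1.0, 1.0]]])
b2 = np.append(b1, 1.0)

pts = rng.uniform(-0.1, 1.1, size=(200_000, 2))  # box containing both sets
ratio = inside(A2, b2, pts).sum() / inside(A1, b1, pts).sum()
print(ratio)  # should be close to 0.5 for this example
```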
I think there might be some more efficient method by splitting this polytope by hyperplanes parallel to $\{x = 0\}$ or $\{y = 0\}$ or something similar, but this looks like a more complicated approach.
Finally there is yet another approach that splits your polytope by $|V|$ hyperplanes parallel to $\{x = 0\}$ passing through each vertex, and then volume of such slice can be computed by manipulating the $(n-1)$-volumes of the two sides, which can be computed by recursive call in a smaller dimension. This is not an efficient algorithm, but it will give a precise result and I think it won't be a nightmare to code (after all, I guess, the number of inequalities and the dimension of the space may vary, so it might be a good approach to code it independently of those two).
Edit: To answer your comment: Well, I don't have much time right now, but I will sketch a possible solution (there may be a better one, but I don't know, this is just a simple approach that came to me at the time of writing the post). Let $A \subset \mathbb{R}^n$ be your $n$-dimensional polytope, and $\mu_k$ be the $k$-dimensional Lebesgue measure (i.e. $k$-volume). Consider the function $$f(x) = \mu_{n-1}(\{p\in A \mid p_0 = x\}).$$ This is a piecewise $(n-1)$-th degree polynomial, e.g. if $n=2$ then you can split your polygon into trapezoids, and $f$ would be piecewise linear; if $n=3$ then you can split your polytope into prismatoids and $f$ would be piecewise quadratic. To obtain the volume of $A$ just integrate $f$ and you are done (it is a polynomial on each piece, so it is easy to get the exact result). And how to get $f$? Pieces are exactly created by the vertices, so in the $n=3$ case you will need both sides (which you are computing anyway) and something in-between (actually you need $n$ points total for each piece, that means $n-2$ additional points). However, each of these can be solved using a recursive call to this algorithm for a smaller dimension, and after that you have $n=3$ data points that uniquely describe a polynomial of $(n-1=2)$-nd degree.
Edit 2: After some reconsideration instead of integration consider just using some quadrature, the result still will be precise (if the order is high enough) and this should skip you the polynomial-fitting part.
However, I think that the approach using $n$-cells may be better. It is simple to calculate the volume of $n$-cell, so the algorithm could be as follows: take random $n$ points and split the polytope by resulting hyperplane into two parts and compute the results recursively. When the number of points is small, just do not take random points (choose them, so the resulting split is good), and finally for each part you will end up with $n+1$ points from which you can easily compute the volume.
Hope this helps!
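In the same spirit as the exact approach above (enumerate the vertices, split into $n$-cells, sum their volumes), here is a small Python sketch I added; instead of hand-rolling the split it hands the vertex set to scipy's ConvexHull (Qhull), which triangulates internally and reports the total volume. The names and the example polytope are mine.

```python
from itertools import combinations
import numpy as np
from scipy.spatial import ConvexHull

def polytope_volume(A, b, tol=1e-9):
    """Volume of {x : A @ x <= b}, assumed bounded and full-dimensional."""
    n = A.shape[1]
    vertices = []
    for rows in combinations(range(len(A)), n):
        M, rhs = A[list(rows)], b[list(rows)]
        if abs(np.linalg.det(M)) < tol:
            continue                      # these n inequalities are not independent
        x = np.linalg.solve(M, rhs)
        if np.all(A @ x <= b + tol):      # keep only feasible intersection points
            vertices.append(x)
    return ConvexHull(np.array(vertices)).volume

# Example: the simplex x, y, z >= 0, x + y + z <= 1 has volume 1/6.
A = np.array([[-1, 0, 0], [0, -1, 0], [0, 0, -1], [1, 1, 1]], dtype=float)
b = np.array([0, 0, 0, 1], dtype=float)
print(polytope_volume(A, b))
```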
-
I am interested in the last suggestion, I just don't fully understand it.. Can you maybe explain it in 3D as an example, or can you link to a longer description? – aelguindy Apr 23 '12 at 13:09
@aelguindy The comment was too short. I hope this helps ;-) – dtldarek Apr 23 '12 at 19:23
– aelguindy Apr 24 '12 at 9:45
http://scicomp.stackexchange.com/questions/1864/why-cant-householder-reflections-diagonalize-a-matrix
Why can't Householder reflections diagonalize a matrix?
When computing the QR factorization in practice, one uses Householder reflections to zero out the lower portion of a matrix. I know that for computing eigenvalues of symmetric matrices, the best you can do with Householder reflections is getting it to tridiagonal form. Is there an obvious way to see why it can't be fully diagonalized in this way? I am trying to explain this simply but I can't come up with a clear presentation.
-
4 Answers
When computing the eigenvalues of the symmetric matrix $M\in\mathbb{R}^{n\times n}$ the best you can do with Householder reflector is drive $M$ to a tridiagonal form. As was mentioned in a previous answer because $M$ is symmetric there is an orthogonal similarity transformation which results in a diagonal matrix, i.e., $D=S^TMS$. It would be convenient if we could find the action of the unknown orthogonal matrix $S$ strictly using Householder reflectors by computing a sequence of reflectors and applying $H^T$ from the left to $M$ and $H$ from the right to $M$. However this is not possible because of the way the Householder reflector is designed to zero out columns. If we were to compute the Householder reflector to zero out all the numbers below $M_{11}$ we find $$M=\left(\!\!{\begin{array}{ccccc} * &* & * & *&* \\ * &* & * & *&* \\ * &* & * & *&* \\ * &* & * & *&* \\ * &* & * & *&* \\ \end{array}}\!\!\right)\rightarrow H^T_1M=\left(\!\!{\begin{array}{ccccc} * &*' & *' & *'&*' \\ 0 &*' & *' & *'&*' \\ 0 &*' & *' & *'&*' \\ 0 &*' & *' & *'&*' \\ 0 &*' & *' & *'&*' \\ \end{array}}\!\!\right).$$ But now the entries $M_{12}-M_{1n}$ have been altered by the reflector $H^T_1$ applied on the left. Thus when we apply $H_1$ on the right it will no longer zero out the first row of $M$ leaving only $M_{11}$. Instead we will obtain $$H^T_1M=\left(\!\!{\begin{array}{ccccc} * &*' & *' & *'&*' \\ 0 &*' & *' & *'&*' \\ 0 &*' & *' & *'&*' \\ 0 &*' & *' & *'&*' \\ 0 &*' & *' & *'&*' \\ \end{array}}\!\!\right)\rightarrow H^T_1MH_1=\left(\!\!{\begin{array}{ccccc} * &*'' & *'' & *''&*'' \\ *' &*'' & *'' & *''&*'' \\ *' &*'' & *'' & *''&*'' \\ *' &*'' & *'' & *''&*'' \\ *' &*'' & *'' & *''&*'' \\ \end{array}}\!\!\right).$$ Where not only did we not zero out the row but we may destroy the zero structure we just introduced with the reflector $H^T_1$.
However, when you opt to drive $M$ to a tridiagonal structure you will leave the first row untouched by the action of $H^T_1$, so $$M=\left(\!\!{\begin{array}{ccccc} * &* & * & *&* \\ * &* & * & *&* \\ * &* & * & *&* \\ * &* & * & *&* \\ * &* & * & *&* \\ \end{array}}\!\!\right)\rightarrow H^T_1M=\left(\!\!{\begin{array}{ccccc} * &* & * & *&* \\ *' &*' & *' & *'&*' \\ 0 &*' & *' & *'&*' \\ 0 &*' & *' & *'&*' \\ 0 &*' & *' & *'&*' \\ \end{array}}\!\!\right).$$ Thus when we apply the same reflector from the right we obtain $$H^T_1M=\left(\!\!{\begin{array}{ccccc} * &* & * & *&* \\ *' &*' & *' & *'&*' \\ 0 &*' & *' & *'&*' \\ 0 &*' & *' & *'&*' \\ 0 &*' & *' & *'&*' \\ \end{array}}\!\!\right)\rightarrow H^T_1MH_1=\left(\!\!{\begin{array}{ccccc} * &*' & 0 & 0&0 \\ *' &*'' & *'' & *''&*'' \\ 0 &*'' & *'' & *''&*'' \\ 0 &*'' & *'' & *''&*'' \\ 0 &*'' & *'' & *''&*'' \\ \end{array}}\!\!\right).$$
Applied recursively this allows us to drive $M$ to a tridiagonal matrix $T$. You can complete the diagonalization of $M$ efficiently, as was mentioned previously, using Jacobi or Givens rotations both of which are found in the Golub and Van Loan book Matrix Computations. The accumulated actions of the sequence of Householder reflectors and Jacobi or Givens rotations allows us to find the action of the orthogonal matrices $S^T$ and $S$ without explicitly forming them.
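To make the two-sided reduction above concrete, here is a small NumPy sketch (my own illustration, not code from the answer): each reflector is built from the part of a column below the subdiagonal, so the similarity transform $H^T M H$ keeps the zeros it creates and drives a symmetric matrix to tridiagonal form while preserving its eigenvalues.

```python
import numpy as np

def householder(x):
    """Orthogonal reflector H (returned as a dense matrix) with H @ x ∝ e_1."""
    v = x.astype(float).copy()
    v[0] += (1.0 if v[0] >= 0 else -1.0) * np.linalg.norm(x)
    nv = np.linalg.norm(v)
    if nv == 0.0:                      # column already zero: nothing to do
        return np.eye(len(x))
    v /= nv
    return np.eye(len(x)) - 2.0 * np.outer(v, v)

def tridiagonalize(M):
    """Reduce a symmetric M to tridiagonal form by two-sided Householder steps."""
    A = M.astype(float).copy()
    n = A.shape[0]
    for k in range(n - 2):
        H = np.eye(n)
        H[k + 1:, k + 1:] = householder(A[k + 1:, k])  # acts below row k only
        A = H @ A @ H                                   # H is symmetric, so H^T = H
    return A

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
M = B + B.T                                             # symmetric test matrix
T = tridiagonalize(M)
assert np.allclose(np.triu(T, 2), 0) and np.allclose(np.tril(T, -2), 0)
assert np.allclose(np.linalg.eigvalsh(T), np.linalg.eigvalsh(M))
print(np.round(T, 3))
```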
-
As the Comments to other Answers clarify, the real issue here is not a shortcoming of Householder matrices but rather a question as to why iterative rather than direct ("closed-form") methods are used to diagonalize (real) symmetric matrices (via orthogonal similarity).
Indeed any orthogonal matrix can be expressed as a product of Householder matrices, so if we knew the diagonal form of a symmetric matrix (its eigenvalues), we could solve for a complete set of orthonormalized eigenvectors and represent the corresponding change of basis matrix as a product of Householder transformations in polynomial time.
So let's turn to Victor's parenthetical comment "other than Abel's theorem" because we are effectively asking why iterative methods should be used find the roots of a polynomial rather than a direct method. Of course the eigenvalues of a real symmetric matrix are the roots of its characteristic polynomial, and it is possible to go in the other direction as well. Given a real polynomial with only real roots, it is possible to construct a tridiagonal symmetric companion matrix from a Sturm sequence for the polynomial. See also that poster Denis Serre's Exercise 92 in this set. This is rather nice for showing the equivalence of those problems since we've seen (@AndrewWinters) the direct application of Householder matrices will tridiagonalize a real symmetric matrix.
Analysis of the arithmetic complexity for an iterative (root isolation) method is given in Reif (1999), An Efficient Algorithm for the Real Root and Symmetric Tridiagonal Eigenvalue Problems. Reif's approach improves on tailored versions of QR for companion matrices, giving $O(n \log^3 n)$ instead of $O(n^2)$ complexity.
The Abel-Galois-Ruffini Theorem says that no general formula for roots of polynomials above degree four can be given in terms of radicals (and usual arithmetic). However there are closed forms for roots in terms of more exotic operations. In principle one might base eigenvalue/diagonalization methods on such approaches, but one encounters some practical difficulties:
1. The Bring radical (aka ultraradical) is a function of one variable, in that respect like taking a square root. Jerrad (c. 1835) showed that solving the general quintic could be reduced to solving $t^5 + t - a = 0$, so that univariate function $t(a)$ (used in addition to radicals and other usual arithmetic) allows all quintics to be solved.
2. This breaks down with degree six polynomials and above, although various ways can be found to solve them using functions of just two variables. Hilbert's 13th Problem was the conjecture that general degree seven polynomials could not be solved using only functions of at most two variables, but in 1957 V.I. Arnold showed they could. Among the multivariable function families that can be used to get solutions to arbitrary degree polynomials are Mellin integrals, hypergeometric and Siegel theta functions.
3. Besides implementing somewhat exotic special functions of more than one argument, we need direct methods for solving polynomials which work for general degree $n$ rather than ad hoc or degree specific methods. Guàrdia (2002) gives "a very simple expression of the roots of a polynomial of arbitrary degree in terms of derivatives of hyperelliptic theta functions." However this approach requires making choices of Weierstrass points on hyperelliptic curve $C_f: Y^2 = f(x)$ where all roots of polynomial $f(x)$ are sought. A good choice leads to expressing less than half of those roots, and it appears this approach requires repeated trials to get all of them. Each trial involves solving a homogeneous linear system at $O(n^3)$ cost.
Therefore the indirect/iterative methods for isolating real roots (equiv. eigenvalues of symmetric matrices), even to high precision, currently have practical advantages over the known direct/exact methods for these problems.
-
Thank you for the excellent exposition! – Jack Poulson Apr 10 '12 at 16:33
Some notes: 1. a practical method for building the tridiagonal companion matrix from Sturm sequences was outlined in papers by Fiedler and Schmeisser; I gave a Mathematica implementation here, and it should not be too hard to implement in a more traditional language. – J. M. May 11 at 18:03
2. With respect to the "theta function" approach for polynomial roots (which I agree is a bit too unwieldy for practical use), Umemura outlines an approach using Riemann theta functions. – J. M. May 11 at 18:07
For what reason do you assume that this is impossible?
Any symmetric real matrix $S$ can be orthogonally diagonalized, i.e. $S = G D G^t$, where $G$ is orthogonal and $D$ is diagonal.
Any orthogonal matrix of size n×n can be constructed as a product of at most n such reflections (Wikipedia). Therefore you have this decomposition.
I am not sure about the last statement, I just cite it (and I think it is correct). As far as I understand your question, it boils down to whether any orthogonal matrix can be decomposed into a sequence of Householder transforms.
-
I should have been more specific. The first step to diagonalizing a symmetric matrix is applying Householder until it is tridiagonal. Next, QR iterations are performed. This process cannot be completed using only closed-form Householder transformations. Why? (other than Abel's theorem) – Victor Liu Apr 4 '12 at 23:49
You can do it with Jacobi rotations. Golub and Van Loan write that Jacobi is the same as Givens. Householder is just another way of doing Givens. In practice, the "correct" way might be with QR if it is faster. – power Apr 5 '12 at 3:07
If the eigenvalues are already known (from a preliminary calculation based on the usual approach), one can use them to triangularize a nonsymmetric matrix (or diagonalize a symmetric matrix) by a product of $n-1$ Householder reflections. In the $k$th step the $k$th column is brought to triangular form. (This also provides a simple inductive proof of the existence of the Schur factorization.)
It is actually useful for methods where one repeatedly needs the orthogonal matrix in a numerically stable factored form.
-
http://mathoverflow.net/revisions/36289/list
My favorite equation is
$$\frac{16}{64} = \frac{1}{4}.$$
What makes this equation interesting is that canceling the $6$'s yields the correct answer. I realized this in, perhaps, third grade. This was the great rebellion of my youth. Sometime later I generalized this to finding solutions to
$$\frac{pa +b}{pb + c} = \frac{a}{c}.$$
where $p$ is an integer greater than $1$. We require that $a$, $b$, and $c$ are integers between $1$ and $p - 1$, inclusive. Say a solution is trivial if $a = b = c$. Then $p$ is prime if and only if all solutions are trivial. One can also prove that if $p$ is an even integer greater than $2$ then $p - 1$ is prime if and only if every nontrivial solution $(a,b,c)$ has $b = p - 1$.
The key to these results is that if $(a, b, c)$ is a nontrivial solution then the greatest common divisor of $c$ and $p$ is greater than $1$ and the greatest common divisor of $b$ and $p - 1$ is also greater than $1$.
Two other interesting facts are (i) if $(a, b, c)$ is a nontrivial solution then $2a \leq c < b$ and (ii) the number of nontrivial solutions is odd if and only if $p$ is the square of an even integer. To prove the latter item it is useful to note that if $(a, b, c)$ is a nontrivial solution then so is $(b - c, b, b - a)$.
For what it is worth I call this demented division.
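A brute-force search makes these claims easy to play with (a small Python sketch I added; the function and variable names are mine):

```python
# Nontrivial solutions of (pa + b)/(pb + c) = a/c with 1 <= a, b, c <= p - 1.
def nontrivial_solutions(p):
    return [(a, b, c)
            for a in range(1, p)
            for b in range(1, p)
            for c in range(1, p)
            if (p * a + b) * c == (p * b + c) * a and not (a == b == c)]

print(nontrivial_solutions(10))  # contains (1, 6, 4), i.e. 16/64 = 1/4
print(nontrivial_solutions(7))   # empty, consistent with 7 being prime
```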
http://math.stackexchange.com/questions/209008/how-to-show-that-general-form-of-a-complex-contour-integration
# How to show that general form of a complex contour integration
I'm preparing for my exam on basic complex analysis, and I found this exercise that I think is nice. I need to show that
$$\int_{|z|=1}\exp\left(\frac{1}{z^k}\right)\,dz=\begin{cases} 2\pi i & k=1\\ 0 & \text{otherwise} \end{cases}$$ I really don't know how to deal with this problem. I know the tools (Laurent series and residue calculus) but not how to use them; the problem is that I don't know how to calculate the residue. Can you explain how to proceed and how to find the residue? Thanks
-
## 1 Answer
Hint: Apply the residue theorem. Recall that the residue is the coefficient of the $z^{-1}$ term in the Laurent series. Take the Taylor series for $\exp(x)$ and substitute $x\mapsto z^{-k}$. That will give you the corresponding Laurent series.
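As a cross-check on the claimed values (not part of the answer), one can also integrate numerically around the unit circle; a small NumPy sketch, parametrizing $z = e^{it}$:

```python
import numpy as np

def contour_integral(k, n=200_000):
    """Approximate the integral of exp(1/z^k) over |z| = 1, counterclockwise."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = np.exp(1j * t)
    integrand = np.exp(1.0 / z**k) * 1j * z   # f(z(t)) * dz/dt
    return integrand.mean() * 2.0 * np.pi     # uniform Riemann sum over [0, 2*pi]

print(contour_integral(1))  # close to 2*pi*i: the coefficient of 1/z is 1
print(contour_integral(2))  # close to 0: no 1/z term when k = 2
print(contour_integral(3))  # close to 0
```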
-
I'm not sure what you mean by that. There is no need to "choose" the residue; the residue is the coefficient of $z^{-1}$ in the Laurent series. That's all it can be. – EuYu Oct 8 '12 at 1:01
http://physics.stackexchange.com/questions/26883/sympletic-structure-of-general-relativity/26884
# Symplectic structure of General Relativity
Inspired by physics.SE: http://physics.stackexchange.com/questions/15571/does-the-dimensionality-of-phase-space-go-up-as-the-universe-expands/15613
It made me wonder about symplectic structures in GR, specifically, is there something like a Liouville form? In my dilettante understanding, the existence of the ADM formulation essentially answers that for generic cases, but it is unclear to me how boundaries change this. Specifically, I know that if one has an interior boundary, then generally the evolution is not hamiltonian; on the other hand, if the interior boundary is an isolated horizon, then it is hamiltonian iff the first law of black hole thermodynamics is obeyed (see http://arxiv.org/abs/gr-qc/0407042).
The sharper form of the question is thus what happens cosmologically?
-
Wald's book on GR has a section on the hamiltonian formalism in General Relativity. It is an infinite-dimensional system, so you have to be a little careful when you talk about a symplectic structure. It certainly has a Poisson structure and it is constrained. The Poisson reduction gives you formally symplectic structure. – José Figueroa-O'Farrill Oct 12 '11 at 12:19
## 1 Answer
Notice first that the phase space of any theory is nothing but the space of all its classical solutions. The traditional presentation of phase spaces by fields and their canonical momenta on a Cauchy surface is just a way of parameterizing all solutions by initial value data -- if possible. This is often possible, but comes with all the disadvantages that a choice of coordinates always comes with. The phase space itself exists independently of these choices and whether they exist in the first place. In order to emphasize this point one sometimes speaks of covariant phase space .
This is well known, even if it remains a bit hidden in many textbooks. For more details and an extensive and commented list of references on this see the $n$Lab entry phase space .
Then notice that the phase space of every field theory that comes from a local action functional (meaning that it is the integral of a Lagrangian which depends only on finitely many derivatives of the fields) comes canonically equipped with a canonical Liouville form and a canonical presymplectic form. The way this works is also discussed in detail at phase space. A good classical reference is Zuckerman; a more leisurely discussion is in Crncovic-Witten.
This canonical presymplectic form that exists on the phase space of every local theory becomes symplectic on the reduced phase space, which is the space obtained by quotienting out the gauge symmetries. This quotient is often very ill-behaved, but it always exists nicely as a "derived" quotient, and as such is modeled by the BV-BRST complex (as discussed there). The whole (Lagrangian) BV-BRST machinery is there to produce the canonical symplectic form existing on the reduced phase space of any local action functional.
Since the Einstein-Hilbert action and all of its usual variants with matter couplings etc. is a local action functional, all this applies to gravity. Recently Fredenhagen et al. have given careful discussions of the covariant phase space of gravity (and its Liouville form), see the references listed here .
It follows that the "dimension" of the covariant phase space of gravity does not depend on the "size of the universe", nor does it make much sense to ask this, in the first place. A given cosmology is one single point in this phase space (or rather it is so in the reduced phase space, after quotienting out symmetries).
However, you might be after some truncations or effective approximations or coarse graining to full covariant gravity. For these the story might be different.
-
Nice answer, the point about phase space being a covariant object should be more widely appreciated. – user566 Oct 12 '11 at 15:33
For the record, Ashtekar is no slouch when it comes to the covariant phase space construction of the symplectic structure. If you look at the list of references on the nLab page Urs cited, you'll see the papers by Lee-Wald and Ashtekar-Bombelli-Reula, which are also often used as standard references on this topic. In fact, the $\Omega_V$ term Ashtekar writes down in section 7.2 of the paper you cited is constructed using precisely this method. I may say more about the boundary term $\Omega_S$, but I'll have to look at it in a bit more detail first. – Igor Khavkine Oct 12 '11 at 15:48
I think most of the interesting physics is in the last sentence: the full solution extended beyond the cosmological horizon defines a point in this much too large phase space, the space of all Einsteinian metrics, but the original question was about the reduction of the phase space to describe the dynamics of a cosmological patch. This reduction should give that there are more effective degrees of freedom as the universe expands, and the reduction process is mysterious. I think that the spirit of the question is: can you make sense of a causal-patch reduction? – Ron Maimon Oct 12 '11 at 17:46
– Igor Khavkine Oct 13 '11 at 9:38
Finally, there is nothing particularly mysterious about restricting yourself to a cosmological patch or to any other kind of patch of spacetime. Given any manifolds $X$ and $Y$, the space of solutions of Einstein equations, $\Gamma(X)$ or $\Gamma(Y)$, on either of them is infinite dimensional. Moreover, a diffeomorphism $X\to Y$ naturally induces the map $\Gamma(Y)\to \Gamma(X)$, by differential pullback. One may think of $X$ as smaller than $Y$ and hence $\Gamma(Y)=\Gamma(X)\times$(extra degrees of freedom). But $X$ and $Y$ could also be exchanged. That's life with diff-inv and inf-dim. – Igor Khavkine Oct 13 '11 at 9:50
http://www.physicsforums.com/showthread.php?p=4122602
## Cosmological constant from first principles
Can the cosmological constant be derived from first principles? The answer appears to be - YES, according to this paper by Padmanabhan - 'The Physical Principle that determines the Value of the Cosmological Constant', http://arxiv.org/abs/1210.4174. This is, in part, an extension of Padmanabhan's earlier paper 'Emergent perspective of Gravity and Dark Energy', http://arxiv.org/abs/1207.0505.
There's a possible problem here: he's saying (I think) that λLF2 is ~1/nμ where n is the # of phase space cells within the Hubble radius (and μ turns out to be ~1.2). However, during the matter era, $n \propto \rho^{-3/4} \propto t^{3/2}$, which would make λ variable. This is not allowed in GR. (When I say "# of phase space cells", I mean the # of photons that would result if all energy in the observable universe were converted to BB radiation.)
I'm not sure if I understood correctly, so please explain if I didn't... But so it seems to me he says that there are three different phases of expansion: first de Sitter, then radiation dominated, then de Sitter again. If the Hubble parameter at first de Sitter phase is of order Planck mass, then the current Hubble parameter should be $$H_{now} = \frac{a_{then}^2}{a_{now}^2} H_{then} = \frac{a_{then}^2}{a_{now}^2} L_P^{-1}$$ and since in de Sitter, cosmological constant is related to H, one gets $$\Lambda = 3H_{now}^2 = 3 \frac{a_{then}^4}{a_{now}^4} L_P^{-2}$$ Then he goes about calculating $Q = a_{now}/a_{then}$. I don't understand the calculation. There has to be some clear assumption for when the second de Sitter phase starts, and it has to be put in by hand. Where does the value fundamentally come from?
Quote by clamtrox I'm not sure if I understood correctly, so please explain if I didn't... But so it seems to me he says that there are three different phases of expansion: first de Sitter, then radiation dominated, then de Sitter again. If the Hubble parameter at first de Sitter phase is of order Planck mass, then the current Hubble parameter should be $$H_{now} = \frac{a_{then}^2}{a_{now}^2} H_{then} = \frac{a_{then}^2}{a_{now}^2} L_P^{-1}$$ and since in de Sitter, cosmological constant is related to H, one gets $$\Lambda = 3H_{now}^2 = 3 \frac{a_{then}^4}{a_{now}^4} L_P^{-2}$$ Then he goes about calculating $Q = a_{now}/a_{then}$. I don't understand the calculation. There has to be some clear assumption for when the second de Sitter phase starts, and it has to be put in by hand. Where does the value fundamentally come from?
The Hubble parameter during the inflationary epoch [1st de Sitter phase] is the Planck length [Lp]. The inflationary epoch is assumed to end when the de Sitter temperature is reached, defined as Tp = 1/(2piLp). This occurs at point D on p3 graph, the beginning of the radiation epoch. The radiation epoch ends when the number of comoving wave vectors that reenter the Hubble radius is the same as the number that exited during the inflationary epoch. This occurs at point B on p3 graph, which also marks the beginning of the second de Sitter phase. Q is the expansion factor, which is expected to be the same during all three epochs. It appears to me you can use the point when accelerated expansion began as the start of the second de Sitter phase.
Quote by Chronos The Hubble parameter during the inflationary epoch [1st de Sitter phase] is the Planck length [Lp].
I suppose this depends on the chosen units and will be correct with everything expressed in Planck units, but then H_then = 1, not so?
Would Lp not be the Hubble radius, rather than the Hubble parameter, which would be extremely large, i.e. $H_{then} = 1/T_{Planck} \approx 10^{43} \, \, sec^{-1}$?
Quote by Jorrie I suppose this depends on the chosen units and will be correct with everything expressed in Planck units, but then H_then = 1, not so? Would Lp not be the Hubble radius, rather than the Hubble parameter, which would be extremely large, i.e. $H_{then} = 1/T_{Planck} \approx 10^{43} \, \, sec^{-1}$?
Agreed, the initial Hubble radius appears to be Lp, which expands by H~a during the inflationary epoch, followed by H~a^2 during the radiation epoch - which appears consistent with the LCDM model.
Quote by Chronos Agreed, the initial Hubble radius appears to be Lp, which expands by H~a during the inflationary epoch, followed by H~a^2 during the radiation epoch - which appears consistent with the LCDM model.
I thought that during inflation (which is the 1st de Sitter phase), the Hubble radius remained constant and only started to grow when inflation ended (point D in Padmanabhan Fig.1). It is $\dot{a}$ that initially increased exponentially, but $H = \dot{a}/a$ remained constant. Or am I mixing things up the wrong way here?
He calls the Hubble radius 'constant asymptotically' during inflation [p2], which led me to assume H could increase linearly while 'a' went wild. It seemed logical: the modes within the initial Hubble radius would be whisked away, unable to reenter the Hubble radius until the radiation epoch commenced. The change in the Hubble radius during inflation may, however, be too trivial to be of any consequence.
Quote by Chronos He calls the Hubble radius 'constant asymptotically' during inflation [p2], which led me to assume H could increase linearly while 'a' went wild.
Thanks, makes sense. Sharp slope changes on log-log plots are really gradual changes on linear plots.
From his page 7, second bullet:
Time translation invariance of the geometry suggests that de Sitter spacetime qualifies as some kind of “equilibrium” configuration. Given the two length scales, one can envisage two de Sitter phases for the universe, one governed by $H = L_P^{-1}$ and the other governed by $H = (\Lambda/3)^{1/2}$. Of these, I would expect the Planck scale inflationary phase to be an unstable equilibrium causing the universe to make a transition towards the second de Sitter phase governed by the cosmological constant. The transient stage is populated by matter emerging along with classical geometry around the point D in Fig. 1.
I don't quite catch the meaning of the last sentence. Does he mean that all the radiation and matter (energy) that we observe emerged around point D, or did it gradually emerge during the middle phase (D to B), i.e. migrated from the left side of the parallelogram to the right side? We are presumably situated very near point B, busy entering phase B to C.
Me neither. The emergent phase is not well characterized. It appears he asserts a quantum gravity solution is required on that count.
http://nrich.maths.org/752/solution
# Adding in Rows
##### Stage: 3 Challenge Level:
Congratulations to Soh Yong Sheng from Raffles Institution, Singapore for this excellent solution.
We have $0 < a < b$ which means $1/a > 1/b$ and so $3 + 1/a > 3 + 1/b$. Flipping over the fraction, we will get 1/(3 + 1/a) < 1/(3 + 1/b) and the inequality remains the same way round when 2 is added. Flipping over again for the last time we get $$\frac{1}{2+\frac{1}{3+\frac{1}{a}}}$$ is greater than $$\frac{1}{2+\frac{1}{3+\frac{1}{b}}}$$
The second part is a further expansion of the first, and in the process of repeating the above we know that it involves just one more flipping over of the fraction, thus $$\frac{1}{2+\frac{1}{3+\frac{1}{4 + \frac{1}{a}}}}$$ is less than the same thing with $b$ in place of $a$ as the inequality would be reversed again.
Lastly the continued fractions are expanded all the way down to $100 + 1/a$ and $100 + 1/b$. Observing the above process, we can tell that if the last or biggest number is odd then the continued fraction with $a$ in it is bigger. If the last or biggest number is even then the continued fraction with $b$ in it is bigger. Each successive continued fraction involves one more 'flipping over' and reverses the inequality one more time. The following continued fraction is smaller than the same thing with $b$ in place of $a$: $${1\over\displaystyle 2 + { 1 \over \displaystyle 3+ { 1\over \displaystyle 4 + \dots + {1\over\displaystyle 99+ {1\over \displaystyle {100 + {1 \over \displaystyle a}} }}}}}$$
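The alternating pattern is easy to check by brute force. Here is a short sketch using exact rational arithmetic (the particular values $a=2$ and $b=5$ are arbitrary choices, not part of the problem):

```python
from fractions import Fraction

def cont_frac(levels, tail):
    """Evaluate 1/(levels[0] + 1/(levels[1] + ... + 1/(levels[-1] + 1/tail)...))."""
    value = Fraction(1, 1) / tail
    for k in reversed(levels):
        value = Fraction(1, 1) / (k + value)
    return value

a, b = Fraction(2), Fraction(5)        # any 0 < a < b will do

for deepest in range(3, 11):           # deepest term runs 3, 4, ..., 10
    levels = list(range(2, deepest + 1))
    fa, fb = cont_frac(levels, a), cont_frac(levels, b)
    # Claim: the 'a' version is larger when the deepest term is odd,
    # smaller when it is even (one extra flip per extra level).
    as_claimed = (fa > fb) if deepest % 2 == 1 else (fa < fb)
    print(deepest, "a-version larger" if fa > fb else "b-version larger",
          "(as claimed)" if as_claimed else "(NOT as claimed)")
```

Each extra level adds one more reciprocal, which is exactly the "flipping over" that reverses the inequality.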
http://mathhelpforum.com/calculus/109539-derivative-integral.html
1. ## Derivative of an integral
Hi, how do you solve this (btw, sorry, I don't know how to make snazzy math graphics :/):
Find G'(x) for G(x) = the integral of exp(t^2) dt over the interval 1/x < t < x.
2. Originally Posted by Hampus
Hi, how do you solve this (btw, sorry, I don't know how to make snazzy math graphics :/):
Find G'(x) for G(x) = the integral of exp(t^2) dt over the interval 1/x < t < x.
$\frac{d}{dx} \int_v^u f(t) \, dt = f(u) \cdot \frac{du}{dx} - f(v) \cdot \frac{dv}{dx}$
$G(x) = \int_{\frac{1}{x}}^x e^{t^2} \, dt$
$G'(x) = e^{x^2} + \frac{e^{\frac{1}{x^2}}}{x^2}$
3. Could you maybe explain why this is the case? (please?)
4. Originally Posted by Hampus
Could you maybe explain why this is the case? (please?)
Say that $\int e^{t^2}\,dt=F(t)$. (It doesn't matter what $F(t)$ is.)
So $\int_{1/x}^x e^{t^2}\,dt=F(x)-F(1/x)$ by the Fundamental Theorem of Calculus.
To take the derivative of this expression, use the chain rule to get
$F'(x)\cdot 1-F'(1/x)\cdot\left(-\frac{1}{x^2}\right)$
But because of how we defined $F$, we know that $F'(x)=e^{x^2}$.
So the above is $e^{x^2}+\frac{e^{1/x^2}}{x^2}$.
5. Feels like something I could have figured out if I gave it a thought :/ Thanks a lot though, very helpful, it all makes sense
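The result can also be confirmed symbolically; a minimal sketch with sympy (assuming it is available), which evaluates the integral in terms of the imaginary error function and differentiates it directly:

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)

# G(x) = integral of exp(t^2) from t = 1/x to t = x; differentiate it directly
G = sp.integrate(sp.exp(t**2), (t, 1/x, x))
derived = sp.exp(x**2) + sp.exp(1/x**2) / x**2   # the answer obtained above

print(sp.simplify(sp.diff(G, x) - derived))      # should print 0
```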
http://www.reference.com/browse/co-arsen
# Co-NP
In computational complexity theory, co-NP is a complexity class. A problem $\mathcal{X}$ is a member of co-NP if and only if its complement $\overline{\mathcal{X}}$ is in the complexity class NP. In simple terms, co-NP is the class of problems for which efficiently verifiable proofs of no instances, sometimes called counterexamples, exist.
An example of an NP-complete problem is the subset sum problem: given a finite set of integers is there a non-empty subset which sums to zero? The complementary problem is in co-NP and asks: "given a finite set of integers, does every non-empty subset have a nonzero sum?" To give a proof of a "no" instance one must specify a non-empty subset which does sum to zero. This proof is then easy to verify.
P, the class of polynomial time solvable problems, is a subset of both NP and co-NP. P is thought to be a strict subset in both cases (and demonstrably cannot be strict in one case but not the other). NP and co-NP are also thought to be unequal. If so, then no NP-complete problem can be in co-NP and no co-NP-complete problem can be in NP.
This can be shown as follows. Assume that there is an NP-complete problem that is in co-NP. Since all problems in NP can be reduced to this problem it follows that for all problems in NP we can construct a non-deterministic Turing machine that decides the complement of the problem in polynomial time, i.e., NP is a subset of co-NP. From this it follows that the set of complements of the problems in NP is a subset of the set of complements of the problems in co-NP, i.e., co-NP is a subset of NP. Since we already knew that NP is a subset of co-NP it follows that they are the same. The proof for the fact that no co-NP-complete problem can be in NP is symmetrical.
If a problem can be shown to be in both NP and co-NP, that is generally accepted as strong evidence that the problem is probably not NP-complete (since otherwise NP = co-NP).
An example of a problem which is known to be in NP and in co-NP is integer factorization: given positive integers m and n determine if m has a factor less than n and greater than one. Membership in NP is clear; if m does have such a factor then the factor itself is a certificate. Membership in co-NP is more subtle; one must list the prime factors of m and provide a primality certificate for each one.
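To make the co-NP certificate concrete: for the claim "m has no factor greater than one and less than n", a certificate is a prime factorisation of m in which every prime is at least n. The verifier below is only an illustrative sketch (the function names are made up, and plain trial division stands in for checking a primality certificate):

```python
from math import prod

def is_prime(p):
    """Stand-in for checking a primality certificate (plain trial division)."""
    if p < 2:
        return False
    d = 2
    while d * d <= p:
        if p % d == 0:
            return False
        d += 1
    return True

def verify_no_small_factor(m, n, prime_factors):
    """Verify a certificate for the claim 'm has NO factor f with 1 < f < n'.

    The certificate is the full prime factorisation of m; the claim holds
    exactly when the factorisation is correct and every prime is >= n.
    """
    return (prod(prime_factors) == m
            and all(is_prime(p) for p in prime_factors)
            and all(p >= n for p in prime_factors))

print(verify_no_small_factor(221, 13, [13, 17]))   # True: 221 = 13*17, nothing below 13
print(verify_no_small_factor(221, 14, [13, 17]))   # False: 13 is a factor below 14
```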
Integer factorization is often confused with the closely related primality problem. Both primality testing and factorization have long been known to be in both NP and co-NP. The AKS primality test, published in 2002, proves that primality testing also lies in P, while factorization may or may not have a polynomial-time algorithm.
## External links
• Complexity Zoo: coNP
http://math.stackexchange.com/questions/135566/give-an-example-of-an-infinite-class-of-closed-sets-whose-union-is-not-closed?answertab=active
# give an example of an infinite class of closed sets whose union is not closed.
give an example of an infinite class of closed sets whose union is not closed. Thanks for your help
-
How about $1/n$? – Ross Millikan Apr 23 '12 at 0:09
## 5 Answers
I think probably the most instructive example is considering $\displaystyle A_n=\left[\frac{1}{n},\infty\right)$.
-
thank you very much for your help – Chalie Her Apr 23 '12 at 0:28
Why is it the most instructive example? – lhf Apr 23 '12 at 1:32
The union of intervals of the form $(1/n, 1-1/n)$, which is $(0,1)$, will be an example; the behaviour of the interval was already stated above.
-
How are these intervals closed? – Matt N. Oct 5 '12 at 16:26
As another example, let $X$ be any infinite set, and consider the cofinite topology on $X$ (i.e. all open sets are either the empty set or sets whose complement is finite). Every proper closed subset of $X$ is finite. So, fixing an element $x_0\in X$, we have a union of closed sets equaling an open set: $$X\setminus\{x_0\}=\bigcup\limits_{x\not=x_0} \{x\}$$
-
Every subset $S\subset X$ of a Hausdorff space is the union of its singleton subsets, which are closed : $$S=\bigcup_{s\in S} \lbrace s\rbrace$$
-
Can you express $(0,1)$ as an increasing union of closed sets? Maybe find a pair of sequences $a_n$ and $b_n$ with $a_n$ decreasing to $0$ and $b_n$ increasing to $1$? Then you can try taking $[a_n,b_n]$ and see if that works.
-
thank you very much for your help – Chalie Her Apr 23 '12 at 0:28
http://mathoverflow.net/questions/39599/convexity-of-injectivity-domains-on-riemannian-manifolds/39607
## Convexity of injectivity domains on Riemannian manifolds
Let $(M,g)$ be a smooth compact Riemannian manifold. Is there a link between the convexity of all injectivity domains and the sign of sectional curvatures? For example, is it true that a compact surface all of whose injectivity domains are convex (that is, for every $x \in M$, the injectivity domain of the exponential map $\exp_x$ is a convex subset of the vector plane $T_xM$) is necessarily nonnegatively curved?
-
## 2 Answers
For a non-injective map like $\exp$, there are many alternative choices of subsets of the domain where the restriction is injective. For a flat 2-torus like $\mathbb R^2/\mathbb Z^2$, one domain for $\exp$ is a square centered at the origin, but there are many others --- any shape that tiles the plane by translations in $\mathbb Z^2$ will do. For $\exp$, there is a canonical choice: the set of vectors whose geodesics don't reach the cut locus. I'll take this to be your meaning. Is there a standard terminology, perhaps even "injectivity domain" as you used? Even if so, I think it is misleading, so I'll refer to it as the pre-cut locus.
Consider the case $M^2 = \mathbb {RP}^2 = S^2 / \pm 1$. For the standard round metric, the pre-cut locus is the disk of radius $\pi$ in every tangent space. The cut locus remains smooth and convex under small perturbations of the metric, since it's far away from the conjugate locus. But there are also modifications of the metric that make the curvature negative in places without changing the geodesic flow very much.
A positively curved metric tends to focus a family of geodesics, making the wave fronts (curves perpendicular to the geodesics) bend toward the concave sense so that the total geodesic curvature of the wave front decreases (by an amount equal to the total Gaussian curvature they sweep through). Negative curvature bends in the opposite way. But just as camera lenses are often designed with both convex and concave elements, you can put some negative curvature into the mix and still make geodesics focus. The cut locus is determined by the shapes of wavefronts at the time they collide, and wherever the net geodesic curvature of any segment is less at the time of collision than the corresponding angle in the tangent space, the precut locus will be locally convex. (But note that the full local convexity description depends on geodesics from both sides.)
No: there are metrics on $\mathbb {RP}^2$ with small areas of negative curvature whose precut locus is still convex.
On $S^2$, another phenomenon takes place: the cut locus for the round metric is also the conjugate locus, where geodesics actually focus. In small perturbation of the metric, the cut locus typically becomes a planar tree, which can have arbitrary combinatorial complexity, and can even have infinitely many branches. This is closely connected to the possible cut loci for smooth curves in the plane, that is, the set of centers of disks with interiors inside the curve and boundary circles that touch the curve in two or more points.
Added: here is a picture of a cut locus that can occur for a perturbed metric, in a small area near the antipodal point on $S^2$. The computation is actually the Voronoi diagram for 200 points around the image of the unit circle under $z \to .03 z^5 + .01 z^2$. The wave front shown has started to look significantly irregular, but at a radius 100 or even 10 times as large, it would look very nearly round. Cusps are formed at the focal points at tips of trees, and along the edges, wave fronts arrive from two directions at a definite angle.
Correction If a tip of the cut locus is not a conjugate point, then the exponential map is a local diffeomorphism near its preimage, in which case the precut locus is obviously nonconvex. At an isolated tip, the only way to have local convexity is for a family of equal-length geodesics to converge to the tip from directions ranging over 180 degrees. The total curvature of the bigon swept out is the sum of its two angles, which is greater than $\pi$. In a positively curved metric, there can be at most 3 tips of the tree that are focal points in this way, since the total curvature is $4 \pi$, so for any positively curved metric on $S^2$, if any point has cut locus with more than 3 tips, then the precut locus at that point is not convex. The generic behavior is for the tips of branches to be instantaneous focal points only, so there would be no positive angle of equal geodesics. Now I realize, after further thought upon seeing Ludovic's comment: Even though the conjugate locus in the manifold has cusps, it is the image of a smooth curve in the unit tangent bundle whose preimage as the boundary of the precut locus is smooth, so it changes smoothly with sufficiently smooth changes of the metric and remains convex under perturbation.
The same trick can work to get metrics with convex precut locus on $S^2$ even though they have patches of negative curvature: a very localized $C^3$-bounded change of the metric can make curvature have localized areas of negativity, but it changes the exponential map near the cut locus by a $C^3$ small amounts. The precut locus will be perturbed by only small $C^2$ amount and remain convex.
Added For the case in the figure above, we can see the conjugate locus by drawing rays perpendicular to the curve, showing the geodesics; the fold lines in the picture give the conjugate locus. Each tip of the cut locus is a cusp of the conjugate locus; waves coming from one side of the point focus behind the cut locus. In this case, the conjugate locus is visible as an 8-pointed star with 4 long points and 4 short points ending at the ends of the cut locus tree. This figure shows 1400 very thin lines, and shows a larger region than above.
Additional: how to visualize the Jacobi equation. The behavior of geodesics is described by the Jacobi equation, which says that the second derivative of signed distance from a geodesic to infinitesimally nearby geodesics equals -(Gaussian curvature) times distance. Integrating this equation amounts to taking the limit of composition of sequence of elements of $SL(2,\mathbb R)$, acting on (distance, derivative of distance).
This can be visualized by looking at the action of $SL(2,\mathbb R)$ on the hyperbolic plane, which we can coordinatize as the upper half plane $im(z) > 0$. The boundary of the upper half-plane corresponds to the set of slopes of lines in the plane, with 0 corresponding to parallel variations of a geodesic and infinity corresponding to variations of a geodesic that keep distance 0 and turn at an angle. In general, each points on the circle at infinity tell us the curvature of an advancing wave-front. If the point is on the right, the wave front is convex; on the left, concave.
If curvature is 0, the point at infinity is fixed, and the action is a unit speed translation to the right in Euclidean terms, a parabolic transformation in hypebolic terms. Curvature creates an additional effect that moves the point at infinity: positive curvature moves it counterclockwise by adding a parabolic vector field fixing 0, and negative curvature moves it clockwise.
The instantaneous sum of these two motions is rotation around a point in the positive curvature case, and translation along a geodesic in the negative curvature case. Thus for positive curvature, the wave front shapes make complete circuits around the circle at infinity, alternating between local convexity and local concavity. In the negative curvature case, once convex they always remain convex.
I don't want to make this answer overly long, but it should be clear from this concrete picture of the Jacobi equation that we can put in smooth little blips of negative curvature without changing the qualitative nature of the wave fronts at the time of the collision that forms the cut locus, and thus not destroying convexity of the precut locus.
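As a purely illustrative sketch of the picture above (constant curvature $K$, nothing specific to the metrics discussed here), one can integrate the Jacobi equation $y'' = -Ky$ numerically and watch whether nearby geodesics refocus:

```python
import numpy as np

def separation(K, t_max=4.0, dt=1e-3):
    """Integrate the Jacobi equation y'' = -K*y with y(0)=0, y'(0)=1."""
    y, v, t = 0.0, 1.0, 0.0
    ys = [y]
    while t < t_max:
        v += -K * y * dt          # semi-implicit Euler step
        y += v * dt
        t += dt
        ys.append(y)
    return np.array(ys)

for K in (+1.0, 0.0, -1.0):
    ys = separation(K)
    refocuses = bool(np.any(ys[1:] <= 0.0))   # does the separation return to zero?
    print(f"K = {K:+.0f}: final separation {ys[-1]:8.3f}, geodesics refocus: {refocuses}")
# K = +1 refocuses near t = pi (a conjugate point); K = 0 and K = -1 never do.
```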
True but not relevant to the question: I don't think the large angles of focusing on all tips of a cut locus tree can happen for all points in an open set, although I haven't thought it through. If all trees collapse to points, then the sphere is swept out by equal length geodesics between those two points. I think it should be known that if this happens for any starting point, the metric is a constant curvature metric. (For small perturbations this is related to the Radon transform variant (functions on S^2) -- integrate over great circles --> (functions on S^2). Apply this to compute the derivatives of lengths of great circles under a perturbation. The transform has a simple form when applied to spherical harmonics, which gives an easy way to compute and deduce many things about perturbations).
-
@Ludovic: For a perturbation of the round metric, the conjugate locus behaves nicely in the tangent space, and deforms continuously in a smooth topology on metrics. So there is a convex domain where the exponential map is locally injective. The global minimum of distance behaves differently. Think of the image of the exponential map at time $\pi - \epsilon$, for a metric near the round metric. It's basically an arbitrary smooth curve that is nearly circular, enar the antipodal point. But the small deviations from roundness make the cut locus blow up into a little tree. – Bill Thurston Sep 22 2010 at 13:52
Right, the conjugate locus behaves smoothly while small deformations on the metric could induce very bad deformations on the boundary of injectivity domains. It is not the case near the round metric! Of course, the cut locus may become very bad, BUT for small perturbations of the round metric in $C^4$ topology, surprisingly all the injectivity domains remains (uniformly) convex. It seems that we are misunderstanding.. – Ludovic Rifford Sep 22 2010 at 13:57
@Ludovic: "you need to do a large perturbation in topology (on the metric)" It's true that it's large near certain points, but the effect on the exponential map depends on integrating over the length of the geodesic. If large changed are limited to a short portion of every path to the cut locus, then the integrated effect along all such geodesics can be kept small in the $C^2$ topology. – Bill Thurston Sep 22 2010 at 13:58
Here is a Mathematica code for the above figure: `Needs["ComputationalGeometry`"];Show[ ParametricPlot[{Re[#], Im[#]} & [ z + .03 z^5 + .01 I z^2 /. z -> Cos[t] + I Sin[t]], {t, 0, 2 \[Pi]}], DiagramPlot[ Table[{Re[#], Im[#]} & [ z + .03 z^5 + .01 I z^2 /. z -> Cos[t] + I Sin[t]], {t, 0, 2 \[Pi], \[Pi]/100}], PlotRange -> {{-2, 2}, {-2, 2}}], Axes -> None]` – Bill Thurston Sep 22 2010 at 14:28
Mathematica code for the conjugate locus picture: `Module[{ p = z + .03 z^5 + .01 I z^2 /. z -> Exp[I t], q}, q = D[p, t]; Graphics[{Thickness[.0001], Table[ Line[ {Re[#], Im[#]} & /@ {p, p + 4 I q}] , {t, 0, 2 \[Pi], \[Pi]/700}]}, PlotRange -> {{-2, 2}, {-2, 2}} ] ]` – Bill Thurston Sep 22 2010 at 14:51
Given $x\in M$, the domain of injectivity (of $\exp_x$) I am speaking about is the set of velocities $v\in T_xM$ such that the geodesic starting at $x$ with initial velocity $tv$ is minimizing between $x$ and $\exp_x(tv)$ for some $t>1$ sufficiently small. This set is an open bounded (star-shaped w.r.t. $0$) set with Lipschitz boundary in $T_xM$. The image of its boundary by $\exp_x$ is the cut locus (from $x$).
I agree that if you perform a $C^3$ perturbation of the round metric on $RP^n$, then the (uniform) convexity of injectivity domains is preserved. But if you perform a modification on the metric (on $RP^2$) to make it negatively curved near some point, you need to do a large perturbation in $C^2$ topology (on the metric). Thus, your geodesic flow will be deformed (much) in $C^1$ topology. You claim that the convexity property is preserved?
Concerning spheres, it can be shown that under small perturbations of the round metric in $C^4$ topology, the (uniform) convexity is preserved.
-
Hi Ludovic Rifford, you should copy this answer into a series of comments on Bill Thurston's answer so that the conversation will stay in one place. – jc Sep 22 2010 at 13:42
Thanks for the advice ! But since my comment was too long, I chosed to post it as a new answer. Sorry (that's my first question on this website), I am going to copy my answer into the comments. – Ludovic Rifford Sep 22 2010 at 13:50
http://physics.stackexchange.com/questions/56918/the-uncertainty-principle-and-black-holes
# The Uncertainty Principle and Black Holes
What are the consequences of applying the uncertainty principle to black holes?
Does the uncertainty principle need to be modified in the context of a black hole and if so what are the implications of these modifications?
-
Hi John, and welcome to Physics Stack Exchange! Your question right now seems pretty vague. Could you rewrite it to be more specific about what exactly you'd like to know? – David Zaslavsky♦ Mar 15 at 1:21
I think a better way to word this question would be "can the uncertainty principle be applied to black holes, and if so what are its implications?" When you say "what are some..." it sounds like you're asking for a list. – Nathaniel Mar 15 at 4:35
@Siva just a note: you can change your vote to a question or answer if it is edited after your vote. – anna v Mar 18 at 18:53
@Johannes though they both deal with the uncertainty principle and black holes, the post you linked to only deals with particles falling in; this one does not deal with a specific case like that but more the implications in general of how a black hole is affected by the uncertainty principle. So they are similar but not quite the same. Thanks for linking that, I am finding it interesting. – John Mar 25 at 22:01
## 2 Answers
The GUP (Generalised Uncertainty Principle): In view of the discussion generated on this question, and the answer by Dilaton, I have decided to add an addendum to my answer in the hope that it will generate further discussion.
ADDENDUM: UNCERTAINTY PRINCIPLE FOR A BLACK HOLE
The most famous effect where the uncertainty principle plays a very important part around a black hole is the Hawking radiation. In this, the usual quantum fluctuations of the vacuum just outside the event horizon of a black hole generate particle–antiparticle pairs which are “separated” by the immensely strong gravitational field of the black hole. The phenomenon then evolves by having the negative energy particle (antiparticle) fall into the black hole, hence reducing the energy of the black hole. The positive energy particle moves away from the black hole to reach an observer at some distance from the event horizon. As far as the observer is concerned, the black hole appears to radiate energy in the form of particles – black hole vaporisation. There is yet another level of uncertainty, however, and in this gravity plays a very important part. This is a string theory result, the GUP (generalised uncertainty principle), and goes as follows
$\Delta x\ge \frac{\hbar}{\Delta p}+ \frac{G\Delta p}{c^3}$.
One can see the effect of gravity in the above GUP. We can observe that, in the usual “low” energy uncertainty principle, $\Delta x\Delta p\sim\hbar/2$, large uncertainty in measuring the momentum of an electron, large $\Delta p$, implies small uncertainty in the measurement of its position, $\Delta x$. However, from the above equation, at the Planck scale, near the singularity of a black hole, this no longer is the case! We see that as $\Delta p$ increases so does $\Delta x$ due to the second term in the GUP. Hence gravity introduces an extra level of uncertainty so that $\Delta x$ and $\Delta p$ do not mutually exclude each other. This can be interpreted as saying that at the Planck scale both wave and particle behaviour are manifest simultaneously.
By completing the squares in the above quadratic form for $\Delta p$ and taking the "equal" sign one gets
$(\Delta p-\frac{c^3\Delta x}{2G})^2=c^3\frac {c^3\Delta x^2-4G\hbar}{4G^2}$
Due to the square on the LHS of the above equation we can see that
$\Delta x^2 \ge 4\frac{G\hbar}{c^3}$
This result has also been written down by Dilaton. This equation shows that gravity sets an ultimate accuracy in the measurement of the position of the electron, and this is of the order of the Planck length. This is what we should expect thinking in terms of string theory. $\Delta x$ can be interpreted as the wavelength of the electron field, which has to be $2L_p$.
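As a cross-check of the algebra, minimising the saturated GUP over $\Delta p$ gives the same bound; a short sympy sketch (symbol names are my own):

```python
import sympy as sp

dp, hbar, G, c = sp.symbols('Delta_p hbar G c', positive=True)

dx = hbar/dp + G*dp/c**3                     # GUP with the inequality saturated
dp_star = sp.solve(sp.diff(dx, dp), dp)[0]   # stationary point in Delta_p
dx_min = sp.simplify(dx.subs(dp, dp_star))

print(dp_star)   # the momentum uncertainty at the minimum, sqrt(hbar*c^3/G)
print(dx_min)    # equals 2*sqrt(G*hbar/c^3), i.e. twice the Planck length
```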
ORIGINAL ANSWER
The strong gravitational field of the black hole has a "dual" effect. Outside the event horizon normal quantum fluctuations of the vacuum can give rise to particle-antiparticle pairs, which then can be separated by the strong gravitational field of the black hole to lead to the famous Hawking radiation. However closer to the black hole there is an extra source of uncertainty due to gravity. The GUP (generalised uncertainty princple) is a result of string theory, and the Planck length begins to make crucial contribution to the minimal action. An interesting analysis and discussion of the effects can be found in this link:
http://arxiv.org/abs/gr-qc/0106080
I hope it will make an interesting reading.
-
@Dilaton You are right. The second term contains G and only becomes important at the Planck scale, where the momentum $\Delta p$ is very large, so that gravity is in control of the uncertainty in position, $\Delta x$. – JKL Mar 15 at 22:19
@Qmechanic, thanks, I was looking through that site and never actually came across this article, will read – John Mar 18 at 2:31
@Dilaton, thanks for putting in that question that is an interesting thought – John Mar 18 at 2:32
@JKL, thanks for answering his question, I am learning a lot – John Mar 18 at 2:35
To put what JKL said in a slightly different way, in situations where quantum gravity or Planck scale physics can not be ignored, such as in the context of black holes (or the very early universe too), the second stringy part of the generalized uncertainty principle
$$\Delta x = \frac{\hbar}{\Delta p} + \alpha' \frac{\Delta p}{\hbar}$$
where
$$\alpha' = \frac{1}{2\pi T}$$
is the slope of the Regge trajectories (and T is the string tension), becomes important.
The second term can be explained by the fact that string theory introduces a very small (at most 1000 times the Planck scale as I have heard) minimum (string) length scale
$$x_{min} \sim 2\sqrt{\alpha'} \sim \frac{l_{Planck}}{g^{\beta}_{closed}}$$
($l_{Planck}$ is the Planck length, $g_{closed} << 1$ is the coupling constant of closed strings, and $\beta > 1$) which can be neglected at low energy (or large length) everyday scales.
When trying to probe shorter and shorter distances down to the Planck length one has to put the energy of $10^{19}$ GeV into the colliding particles. Since the Schwarzschild radius of a particle with the corresponding Planck mass is the Planck length too, this means that one produces the smallest possible black holes by such Planck energy collisions. Increasing the energy further to try to probe distances even smaller leads to the production of larger black holes instead, and the length scale one attains by increasing the energy beyond the Planck energy starts to grow again.
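A quick numerical illustration of that turnaround, using the CODATA values shipped with scipy.constants (the factor of 2 simply reflects the Schwarzschild radius convention):

```python
from scipy.constants import G, c, hbar

m_planck = (hbar * c / G) ** 0.5        # Planck mass  ~ 2.2e-8 kg ~ 1.2e19 GeV/c^2
l_planck = (hbar * G / c**3) ** 0.5     # Planck length ~ 1.6e-35 m

r_s = 2 * G * m_planck / c**2           # Schwarzschild radius of a Planck-mass hole

print(f"Planck length     : {l_planck:.2e} m")
print(f"Schwarzschild r_s : {r_s:.2e} m  ({r_s / l_planck:.1f} Planck lengths)")
# Dumping more energy than this into a collision only makes r_s, and hence the
# probed length scale, grow again.
```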
My (if not correct please complain!) interpretation of the generalized uncertainty principle is that the second stringy term, which is proportional to the uncertainty in momentum (or energy) and which starts to dominate the short distance behavior already at the string scale which is assumed to be larger than the Planck length, correctly describes this at a first glance counter intuitive behavior.
-
Thank you, this does help. – John Mar 21 at 13:50
http://math.stackexchange.com/questions/181607/evaluating-int-01-cx-mathrm-dx-through-integration-by-parts?answertab=active
# Evaluating $\int_0^1 \! C(x) \, \mathrm dx$ through integration by parts
$$\int_0^1 \! C(x) \, \mathrm{d} x.$$
where $C(x) = \int_0^x \cos(t^2) \, \mathrm{d} t$.
I am really not quite sure how to go about this one, especially given that it needs to be calculated using integration by parts.
My lecturer has an example (done using integration by parts) for $\int_0^1 \! xC(x) \, \mathrm{d} x$.
In this example, he let $u = C(x)$, so that $\frac{du}{dx}\ = \cos(x^2)$, which worked out nicely.
In the 2nd last line of his solution, he had the term: $$\int_0^1 \! \sin(x^2) \, \mathrm{d} x.$$ and simply finished with leaving this part as $S(1)$.
However, in this question, I don't seem to have terms which I can choose as $u$ and $dv$?
Is anyone able to give me some direction?
Many thanks!
-
## 1 Answer
It's easier to do integration by parts here:
$$\int C(x)\mathrm dx=x\,C(x)-\int x\cos(x^2)\mathrm dx$$
Can you take it from here?
-
Thanks J.M., just to clarify: you have simply chosen C(x) for u, and x (from dx) as v'? Then I could just apply integration by parts to the last part right? – mathstudent Aug 12 '12 at 10:13
Right you are. The last part is more easily done through substitution, though: $u=x^2$, $\mathrm du=2x\,\mathrm dx$... – J. M. Aug 12 '12 at 10:16
gotcha. Thanks! – mathstudent Aug 13 '12 at 4:11
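Putting the two steps together, $\int_0^1 x\cos(x^2)\,dx = \tfrac12\sin 1$, so $\int_0^1 C(x)\,dx = C(1) - \tfrac12\sin 1$. A small numerical cross-check (scipy assumed available):

```python
from math import cos, sin
from scipy.integrate import quad

def C(x):
    """C(x) = integral of cos(t^2) from 0 to x, by numerical quadrature."""
    return quad(lambda t: cos(t**2), 0.0, x)[0]

direct   = quad(C, 0.0, 1.0)[0]        # brute-force value of the original integral
by_parts = C(1.0) - 0.5 * sin(1.0)     # x*C(x)|_0^1  -  integral of x*cos(x^2)

print(direct, by_parts)                # both come out ~ 0.4838
```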
http://physics.stackexchange.com/questions/22252/resistor-circuit-that-isnt-parallel-or-series/22381
# Resistor circuit that isn't parallel or series
What's the equivalent resistance in this circuit (between points A and B)?
-
You can solve this in the same way you can solve any resistor problem: write down Kirchoff's laws. The tricks for equivalent resistance for series and parallel circuits are useful shortcuts, but they don't provide a general algorithm for solving these problems. Kirchoff's laws do. – kleingordon Mar 12 '12 at 5:54
Ordinarily I close these kinds of questions, but (a) you've been around long enough that I shouldn't need to be trigger-happy with the close button ;-) and (b) you're basically asking about a Wheatstone bridge, which is pretty much the canonical example of a circuit element that can't be reduced by the series and parallel rules. I think asking how to find the resistance of that configuration is generally applicable enough that it's fine. Though I still do feel the question would be better with a bit more explanation. (Maybe it's just me) – David Zaslavsky♦ Mar 12 '12 at 6:17
David, you should add whatever text you think it needs to improve the question. To me, it's just a cute application of mathematics. The purpose is to point out kleingordon's comment. – Carl Brannen Mar 12 '12 at 7:24
## 4 Answers
I'll give the answer to this question using an unusual method that showed up in the American Mathematical Monthly's problem section perhaps in the late 1970s. This is not necessarily the easy way to solve the problem, but it works out nicely from an algebraic point of view.
The way most people solve most resistance problems is to use series and parallel resistor rules. These are mathematically elegant in that they involve only resistance. But this circuit cannot be reduced to series and parallel rules (is this true if you write an infinite series in R3, perhaps?), so probably the most straightforward method is to apply a voltage of V to the circuit and use algebra to work out the total current. This is inelegant (but physical) in that it introduces ideas other than resistance itself.
The "delta" method mentioned by Manishearth, (but at this time not actually worked out to the final answer) is how an EE would solve the problem. It has the advantage of sticking with resistance, but it involves somewhat unintuitive changes to the topology of the circuit.
The method I'm giving here uses only resistances and illustrates a general solution to this sort of problem. If one generalizes the $R_k$ to complex numbers $Z_k$, it can be used for general impedances (as can the delta method), but it is more general than the delta method. It also may help with student's understanding of sheet resistance so I think it's worth my time to type it in:
First, we replace the resistors with thin flat material that happens to have a "sheet resistance" of 1 ohm per square. With such a material, if we cut out a rectangle of dimensions 1 x R, we will obtain a resistance of R ohms between two conductors attached to the 1 length sides:
Now the thing about sheet resistance is that you can scale the resistor to whatever size you like; so long as you keep the ratio of the side lengths as "R", the resulting resistor will have resistance R. The sheet can be made up of little sheets that are pasted together. To do the pasting correctly, we need to use insulating glue for the horizontal connections and conducting glue for the vertical connections. This is because current only flows from left to right. So the insulating glue doesn't help or hinder the current flow, and the vertical connections don't matter because all the conducting glue has the same voltage anyway. I saw this method of computing resistors in a solution to problem E2459 at the American Mathematical Monthly, February 1975.
So replace the given circuit with one where each resistor is replaced by a rectangular region with dimensions appropriate for its resistance. In doing this, we have to make an assumption about which way current flows through resistor R3. I'll assume it flows from top to bottom. And in order to set a scale for the whole thing, let's make the vertical dimension of R3 to be length 1. This gives us the following drawing:
Now the overall circuit has a resistance given by the ratio of its length to its width:
$$R = L/W = (R_1 x_1 + R_2x_2)/(x_1+x_4)$$ There are four unknowns, $\{x_1,x_2,x_4,x_5\}$. Comparing horizontal dimensions gives two independent equations:
$$R_4x_4 = R_1x_1 + R_3,$$ $$R_2x_2 = R_3 + R_5x_5.$$ And comparing vertical dimensions gives:
$$x_1 = 1 + x_2,$$ $$x_5 = 1 + x_4.$$
This eliminates $x_1$ and $x_5$ to give two independent equations in two unknowns:
$$R_1 + R_1x_2 = R_4x_4 - R_3,$$ $$R_5 + R_5x_4 = R_2x_2 - R_3.$$ Or:
$$R_1x_2 - R_4x_4 = -(R_1+R_3),$$ $$-R_2x_2+ R_5x_4 = -(R_3+R_5).$$ These solve to give:
$$x_2 = -\frac{R_1R_5+R_3R_4+R_3R_5+R_4R_5}{R_1R_5-R_2R_4},$$
$$x_4 = -\frac{R_1R_2+R_1R_3+R_1R_5+R_2R_3}{R_1R_5-R_2R_4},$$
and so
$$x_1 = -\frac{R_2R_4+R_3R_4+R_3R_5+R_4R_5}{R_1R_5-R_2R_4},$$
$$x_5 = -\frac{R_1R_2+R_1R_3+R_2R_3+R_2R_4}{R_1R_5-R_2R_4}.$$ (The overall signs here simply track which way the current actually flows through $R_3$; the ratio $L/W$ below is unaffected.)
We need $W=x_1+x_4:$
$$W = -\frac{R_3(R_1+R_2+R_4+R_5)+(R_1+R_4)(R_2+R_5)}{R_1R_5-R_2R_4}$$ and $L = R_1x_1+R_2x_2:$
$$L = -\frac{R_3(R_1+R_2)(R_4+R_5)+R_1R_2(R_4+R_5)+R_4R_5(R_1+R_2)}{R_1R_5-R_2R_4}$$ so the total resistance is:
$$R = L/W = \frac{R_1R_2(R_4+R_5)+R_4R_5(R_1+R_2)+R_3(R_1+R_2)(R_4+R_5)}{(R_1+R_4)(R_2+R_5)+R_3(R_1+R_2+R_4+R_5)}.$$ In the above, I've grouped terms in order to make it clear that this gives the correct answer in the limits where $R_3$ goes to $0$ or $\infty$.
-
Awesome trick! If you want, I can work it out using the $Y-\Delta$ method. It may take me a while though, I'm slow with TeX. – Manishearth♦ Mar 15 '12 at 9:10
The answer may be correct (I have not verified it yet), but I don't think the transformation is fully justified. It seems to be making assumptions about the current path. Also, the resistance of a sheet will only be the ratio of its lengths if the terminals are parallel. Here, at the R4-R3 interface (and some other places), this is not the case. – Manishearth♦ Mar 15 '12 at 9:26
@Manishearth; I'd appreciate seeing the Delta method completed. (And maybe someone should type up the current method.) I'll add a correction for the connections, i.e. the vertical connections are conductors while the horizontal connections are insulators. And it's true that the choice of diagram depends on which way current flows through R3. You can determine the correct direction by comparing the ratios R1/R2 to R4/R5. But you end up with the same equations (as it is unchanged on swapping R1 for R4 and R2 for R5). – Carl Brannen Mar 15 '12 at 19:57
Now it's much clearer! (Also, it seems that Qmechanic has done the job of doing a bit of Y-$\Delta$ derivation (without every simplification step).) – Manishearth♦ Mar 16 '12 at 0:47
Use a star-delta transform to simplify part of the circuit. You may also use the principle of superposition.
-
I'd originally seen this in a math problem that was given as "what is the minimum number of resistors needed to obtain a resistance of pi to an accuracy of 1 part in a million?" The solution there transformed the problem into one of tiling a rectangular array with a minimum number of square tiles (of arbitrary size) with the ratio of the length and width being an integer approximation of pi. Translated back into a circuit, the winning circuit was of this form (with the resistances each a small multiple of one ohm). – Carl Brannen Mar 12 '12 at 8:04
@CarlBrannen Hmm... Such a problem I would try to solve by some infinite series of resistors. $\zeta(2)$ pops into my mind, as it's relatively simple to construct with integral resistors, but unfortunately you will finally get a resistance of $6/\pi^2$. Tiling six of these in parallel gets you $1/\pi^2$. I doubt that resistors can square-root stuff. – Manishearth♦ Mar 12 '12 at 8:20
Or just take a material of constant resistivity+cross section, draw a (semi)circle with radius=length of $1\Omega$ resistor. Lay your material on this circle. =D – Manishearth♦ Mar 12 '12 at 8:22
Another way to do it would be to use the expansion of $\arctan x$, but it has negatives in it. – Manishearth♦ Mar 12 '12 at 8:23
@CarlBrannen, why not ask that as a separate 'puzzle' question? – nibot Mar 12 '12 at 13:24
As suggested by Manishearth, one can perform a $Y$-$\Delta$ transform from $Y$-resistances $R_1$, $R_2$ and $R_3$, to $\Delta$-conductances $G_1$, $G_2$ and $G_3$ (using a $123$ symmetric labeling convention), cf. Fig.1 below.
```` A x----x------x-----[3]-----x------x----x B
| | | |
[4] [2] [1] [5]
| | | |
|------x-------------x------|
````
$\uparrow$ Fig.1. A $\Delta$-equivalent circuit to OP's original circuit.
In terms of formulas, the $Y$-$\Delta$ transformation is given as $$G_i ~:=~ \frac{R_i}{R_1R_2+R_2R_3+R_3R_1},\qquad\qquad i=1,2,3,$$ where $G_i$ is the conductance of the $\Delta$-edge opposite the $Y$-resistance $R_i$.
The $\Delta$-equivalent circuit in Fig.1 can be viewed as composed of only series and parallel resistors. The equivalent conductance between $A$ and $B$ therefore becomes
$$\frac{1}{R}~=~G_3+\frac{1}{\frac{1}{G_2+\frac{1}{R_4}} +\frac{1}{G_1+\frac{1}{R_5}}}.$$
(Finally let us mention that it is also possible to apply the $Y$-$\Delta$ transform to other triples of the five resistors than $123$.)
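As a numeric sanity check of this reduction, the sketch below applies the transformation for one arbitrary choice of resistor values (the values are assumptions, not from the question) and compares the series/parallel reduction of Fig. 1 with a direct solution of the two node equations:

```python
import numpy as np

R1, R2, R3, R4, R5 = 1.0, 2.0, 3.0, 4.0, 5.0          # arbitrary test values (ohms)

# Delta conductances replacing the Y formed by R1, R2, R3 (formula above)
P = R1*R2 + R2*R3 + R3*R1
G1, G2, G3 = R1/P, R2/P, R3/P

# Series/parallel reduction of the equivalent circuit in Fig. 1
R_delta = 1.0 / (G3 + 1.0 / (1.0/(G2 + 1.0/R4) + 1.0/(G1 + 1.0/R5)))

# Independent check: nodal analysis with V_A = 1 V, V_B = 0 V
A = np.array([[1/R1 + 1/R2 + 1/R3, -1/R3],
              [-1/R3, 1/R3 + 1/R4 + 1/R5]])
b = np.array([1/R1, 1/R4])
VC, VD = np.linalg.solve(A, b)
R_nodal = 1.0 / ((1.0 - VC)/R1 + (1.0 - VD)/R4)

print(R_delta, R_nodal)                # both ~ 2.2394 ohms for these values
```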
-
Here is how I would do it, following the method outlined by kleingordon in a comment. This method is less cool but more general than Carl Brannen's answer, because it will work even in the case where there are crossing wires and you can't rearrange it into a single sheet of resistive material.
Let the electric potential at $A$ be $V_A$ and that at $B$ be $V_B$. Also, let the potential on the wire that connects $R_1$ to $R_2$ and $R_3$ be $V_C$ and let the potential on the wire connecting $R_4$ to $R_3$ and $R_5$ be $V_D$. We know that the current through each resistor must equal the potential difference across it divided by the resistance, so we have $$I_1 = (V_A - V_C)/R_1$$ $$I_2 = (V_C - V_B)/R_2$$ $$I_3 = (V_C - V_D)/R_3$$ $$I_4 = (V_A - V_D)/R_4$$ $$I_5 = (V_D - V_B)/R_5.$$
We also know that the current must be conserved at every junction, which gives us $$I_1 + I_4 = I_2 + I_5$$ $$I_1 = I_2 + I_3$$ $$I_4 + I_3 = I_5,$$ but the last of these three equations is redundant because it can be derived from the other two, so there are seven equations in total, in nine unknowns (five currents and four potentials).
We want to calculate the resistance, which is given by $(V_A-V_B)/(I_1+I_4).$ Since everything's linear we can assume without loss of generality that $V_B=0$ and $V_A=1$. This gives us seven equations in seven unknowns, which we can solve to find the answer.
I haven't worked it through because it's a bit laborious (I'd probably use a computer algebra system rather than doing it by hand) but it should give the same answer as Carl Brannen's method.
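For completeness, here is a sketch of that computer-algebra route with sympy (reduced to the two internal node potentials rather than all seven unknowns): impose current conservation at the two internal nodes with $V_A=1$ and $V_B=0$, solve, and compare with the closed form quoted in the sheet-resistance answer above.

```python
import sympy as sp

R1, R2, R3, R4, R5 = sp.symbols('R1 R2 R3 R4 R5', positive=True)
VC, VD = sp.symbols('V_C V_D')

# Take V_A = 1 and V_B = 0; the unknowns are the internal node potentials.
I1 = (1 - VC)/R1
I2 = VC/R2
I3 = (VC - VD)/R3
I4 = (1 - VD)/R4
I5 = VD/R5

sol = sp.solve([sp.Eq(I1, I2 + I3),      # current conservation at C
                sp.Eq(I4 + I3, I5)],     # current conservation at D
               [VC, VD])

R_eq = sp.simplify(1 / (I1 + I4).subs(sol))   # (V_A - V_B) / total current

# Closed form from the sheet-resistance answer above
closed = (R1*R2*(R4 + R5) + R4*R5*(R1 + R2) + R3*(R1 + R2)*(R4 + R5)) \
         / ((R1 + R4)*(R2 + R5) + R3*(R1 + R2 + R4 + R5))

print(sp.simplify(R_eq - closed))             # should print 0
```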
-
http://mathhelpforum.com/calculus/20074-sequence-convergence-proofs-vs-continuity-function-proofs.html
# Thread:
1. ## Sequence Convergence Proofs vs. Continuity Function Proofs
Definition of convergence:
For each epsilon > 0 there exists a number N such that n > N implies
|s_n - s| < epsilon.
In this case the N you chose was usually a max {constant, f(epsilon)}.
Definition of continuity
For each epsilon > 0 there exists a delta>0 such that x element dom(f) and
|x-x0| < delta implies |f(x)-f(x0)|< epsilon.
In this case the delta you choose is usually a min{constant, f(epsilon)}
I was just wondering how to think about why for one you need a min and the other a max.
Thanks
2. Frankly, I do not think what you wrote above is correct.
Perhaps you can fill out what you wrote with more detail.
There is no way to know what you mean by “ $\left\{ \mbox{constant},f(\varepsilon ) \right\}$.
Please tell us.
3. Well, to prove that (4n^3 + 3n)/(n^3 - 6) converges to 4 you choose an
N = max{2, sqrt(54/epsilon)}
To prove that 2x^2 + 1 is continuous, you choose a
delta = min{1, epsilon/(2(2|x0|+1))}
Hopefully that's more clear.
4. While those exact examples may not work in all cases, I do get the idea.
In the case of the sequence, the subscript is approaching infinity.
Therefore, we take the maximum N to insure all conditions hold.
The case of continuity is a bit more abstract. In choosing the $\delta$ we are creating a bound on x, we want to be able to say that $\left| {x - x_0 } \right| < \delta \quad \Rightarrow \quad \left| {f(x) - f\left( {x_0 } \right)} \right| < \varepsilon$.
Knowing that $\delta < 1$ means $x_0 - 1 < x < x_0 + 1$.
This means we can construct bounds on $f(x)$.
So we want $\delta$ to be at most 1 (not more).
I hope that helps somewhat.
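The choice of N in post 3 can also be checked numerically. The sketch below (an illustration only; the sampled range of n is arbitrary) confirms that for several values of epsilon, every sampled n > N = max{2, sqrt(54/epsilon)} satisfies |s_n - 4| < epsilon:

```python
from math import sqrt

def s(n):
    return (4*n**3 + 3*n) / (n**3 - 6)

for eps in (1.0, 0.1, 0.01, 0.001):
    N = max(2, sqrt(54/eps))                       # the choice quoted in post 3
    start = int(N) + 1                             # first integer beyond N
    worst = max(abs(s(n) - 4) for n in range(start, start + 2000))
    print(f"eps = {eps:<6}  N = {N:8.2f}  worst |s_n - 4| over sampled n > N: {worst:.2e}")
# every sampled value stays below eps, as the definition of convergence requires
```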
http://physics.stackexchange.com/questions/tagged/experimental-physics
# Tagged Questions
For questions about design, process, data, or analysis of experiments and observations.
4answers
150 views
How can I determine whether the mass of an object is evenly distributed?
How can I determine whether the mass of an object is evenly distributed without doing any permanent damage? Suppose I got all the typical lab equipment. I guess I can calculate its center of mass and ...
1answer
35 views
Interpreting the results
I have preformed the muon lifetime experiment at my uni's lab, and got the data. It's text file with 8190 numbers. My TDC unit was set so that the time gates were at 10 $\mu s$, and it has 8192 ...
0answers
73 views
Easy question about magnetism?
I have to build a simple electric motor by attaching a magnet to a battery, extending the terminals of the battery (with stiff wires so they could act as supports), and placing a coil of wire on top ...
1answer
46 views
Optics alignment of scanning microscope
I am facing a challenge in my project regarding optical alignment. See the figure: The challenge is with the vertical optical system alignment. I considered placing a mirror and check back if the ...
0answers
70 views
Mechanical Equivalent of Heat
Recently I have been looking up James Joule's experiment regarding the mechanical equivalent of heat. After viewing some drawings of the apparatus, I assumed that the lines holding the weights would ...
0answers
49 views
List of cross sections?
Sometimes I need to look up a certain cross section, say the inclusive Z production cross section at $\sqrt{s}$ = 7 TeV. Is there a place where 'all the' cross sections are tabulated ...
2answers
67 views
Cosmological triangle with PLANCK results
Is there an updated version of the cosmological triangle with recent PLANCK results included?
1answer
26 views
Experiment to find Ammeter Resistance [closed]
I've been studying experimental electrodynamics and I needed to describe an experiment to find the resistance of an ammeter if I just have the ammeter together with one voltmeter and a protection ...
0answers
46 views
What is the difference between various fields of physics? [closed]
what is the difference between the fields of physics? like high energy physics, particle physics, cosmology, quantum physics, quantum mechanics, experimental physics, theoretical physics, applied ...
0answers
84 views
Status of experimental searches for tachyons?
Now that the dust has settled on the 2011 superluminal neutrino debacle at OPERA, I'm interested in understanding the current status of experimental searches for neutrinos. Although the OPERA claim ...
0answers
17 views
Concerning Scattering Intensity and Particle Concentration
I am trying to determine what governs my sensor output. I have an optical sensor that emits infrared radiation on a sample volume and gives me a voltage output from the scattering of (1 to 10 micron) ...
0answers
29 views
Experiments related to Capacitor and Prism [closed]
I want my students to learn experiments on prism and capacitor. Can you provide me different experiments related to prism and capacitor (except charging and discharging). The experiments will be given ...
1answer
152 views
How the inverse square law in electrodynamics is related to photon mass?
I have read somewhere that one of the tests of the inverse square law is to assume nonzero mass for photon and then, by finding a maximum limit for it , determine a maximum possible error in ...
1answer
77 views
measuring electromagnetic induction
There is a famous law which says that a potential difference is produced across a conductor when it is exposed to a varying MF. But, how do you measure it to prove? It is quite practical. ...
1answer
114 views
Can thought experiments qualify as actual research?
I wondered whether thought experiments actually can be substituted for actual experimentation. I understand that in some cases it might be necessary, but can it be unnecessary over thinking sometimes? ...
1answer
130 views
On the Aharonov-Bohm effect, and the reality of the classical fields
As far as I can check, the Aharonov-Bohm effect is not -- contrary to what is claimed in the historical paper -- a demonstration that the vector potential $A$ has an intrinsic existence in quantum ...
1answer
87 views
How important is it, really, to clean vacuum parts?
In every lab I've seen, people are quite meticulous about cleaning parts that are to be used in ultrahigh vacuum, as well as the components of the chamber itself. The parts may be put in an ultrasonic ...
2answers
83 views
Parabolic motion (experiment)
We performed a laboratory, performing six releases of a sphere with angles $15^\circ,30^\circ,45^\circ,60^\circ,75^\circ,40^\circ$ a parabolic movement, took five distances for each angle, the initial ...
0answers
70 views
Faster than the speed of light [closed]
If I were having a race against someone ten times bigger than me (building-sized), and the distance from the start line to the finish line is only 60 feet, it will take me a while to get there but it will take the other guy ...
0answers
46 views
Robot controling pouring process from a bottle
I need to solve a problem within mechanic of fluids for a part of my thesis. Robot will pick up a bottle of beer, cola, julebrus or any other kind of beverage. And then it has to bring it to the glass ...
1answer
21 views
Are “timed” measurements actually revealing error-distributions of the measurement apparati?
A thought experiment: Given some object moving (swinging) from left to right and back with constant velocity, imagine a camera set up to take a picture of the scene at a fixed interval so that we can ...
1answer
95 views
What is Transverse Energy?
What is transverse energy? Why do we use transverse energy instead of total energy, and transverse momentum in place of total momentum, in particle detectors?
1answer
31 views
Planes of graphite crystal on diffraction experiments
When doing electron diffraction on graphite (a popular experiment for students at universities) always diffraction at these two planes with distances $d_1$ and $d_2$ are observed: But a plane ...
1answer
67 views
Explain background pattern in particle tracing image
I'm trying to understand this image of a particle tracing experiment (which can be found all over the net if you google for "bubble chamber"): ( There are two things that I can't figure out: The ...
1answer
85 views
Whistling on bottle tops
It is well known that if you blow horizontally on a bottle top it creates a sound. Pouring water to the bottle changes the pitch. I have been doing experiments on the relation between the sound's ...
1answer
83 views
How do you find (initial) velocity using conservation of energy?
Without mass; only time, distance, and height is given. For example: For this lab, the reference level was 100cm above ground therefore the height of the object was 10cm. I determined time and ...
3answers
132 views
How has quark electric charge been directly measured?
How have quarks' electric charges been directly measured when quarks are never directly observed in isolation (due to a phenomenon known as color confinement)?
3answers
218 views
How can I determine the coefficient $k$ in $\dfrac{dT}{dt} = -k(T - 100 \mathrm{^\circ C})$?
I recently spend some time on cooking and I'm curious about the time evolution of the temperature of the water. I did some experiment and the temperature is of the form T = 100 \mathrm{^\circ C} + ...
1answer
139 views
Universal Sequence and relationship of mathematics and reality [closed]
In "The Special and General Theory of Relativity" Einstein says: How can it be that mathematics, being after all a product of human thought which is independent of experience, is so admirably ...
1answer
36 views
Why pulse waves results in wave packets?
I was doing experiments of measuring sonic velocity and I generate pulse waves from sensor 1, but when they are received by sensor 2, I saw wave packets on the oscilloscope, can you explain why? I was ...
3answers
523 views
How hot is the water in the pot?
Question: How hot is the water in the pot? More precisely speaking, how can I get a temperature of the water as a function of time a priori? Background & My attempt: Recently I started spend ...
2answers
299 views
Is there a compound denser than the densest element?
I'm musing about how to give students an intuitive feeling about density by letting them lift a same sized volume of different materials, e.g. 1 liter of water, a 10x10x10 cm cube of iron, lead etc. ...
1answer
115 views
Exploiting the Heisenberg Uncertainty Principle as a means to communicate
It seems as though I've come across a rather unusual conclusion that could either simply be a misinterpretation or a contradictory discovery. I seem to have found a way to utilize the Heisenberg ...
1answer
31 views
William Herschel discovering infrared problem
When William Herschel conducted the experiment of separating white light with a prism and measuring the different colors, he put a thermometer past the red color as a control, finding it to pick up the ...
1answer
71 views
Is there any experimental evidence to support the Terrell rotation?
The Shapiro delay was predicted in 1964 and observed by 1966, and is now a tool used to measure the mass of distant binary pulsars. The Terrell-Penrose rotation was published in 1959, but I can find ...
2answers
171 views
Why propagation of uncertainty is linear?
I'm in doubt with one thing: let's imagine that we have $n+1$ quantities, $n$ of them being directly measured, and the other one being related to the first $n$ by a function \$f : \mathbb{R}^n \to ...
1answer
117 views
Have general relativistic effects of the sun's rotation been measured?
I was wondering if general relativistic effects of the sun's rotation have also been measured, like gravity probes A and B measured GR effects from the earth.
1answer
75 views
Uncertainty in Acceleration given the uncertainty in position
I had a motion detector record the position of a dynamics cart and automatically plot Position vs Time and Velocity vs Time plots in Logger Pro on the computer. If the instrument uncertainty in ...
1answer
66 views
Experimental perspective in understanding the Heisenberg Uncertainty Principle
I need to confirm whether or not I understand Heisenberg Uncertainty Principle. So the crucial thing is that you need an "ensemble" of measurements: $$\delta x \delta p \ge \frac{\hbar}{2}.$$ If I ...
2answers
144 views
Multiple measurements of the same quantity - combining uncertainties
I have a number of measurements of the same quantity (in this case, the speed of sound in a material). Each of these measurements has their own uncertainty. $$v_{1} \pm \Delta v_{1}$$ v_{2} \pm ...
1answer
55 views
Does the method of an experiment always have to be numbered? [closed]
When writing a method for an experiment, does it always have to be set out in orderly numbered steps? Can it not also be a paragraph of text that outlines the method? A mundane example: Place a ...
2answers
186 views
What limits the maximum attainable Fermi Energy for a material experimentally?
Either through doping or gating. What are some good terms to search for if I'm looking for some experimentally obtained values for particular materials? I'm particularly interested in what the limit ...
3answers
151 views
Does light really “travel”?
From what I've so far understood about light, a photon is emitted somewhere and after some time it's absorbed somewhere else. Have we had experiments that confirm the path taken or something akin to ...
1answer
117 views
Dalitz plot analysis
I have seen a few Dalitz plots so far and tried to understand how they are useful. So one of the advantages of these plot is that the non-uniformity in the plots can tell something about the ...
2answers
139 views
Conical Pendulum — Can it rotate at 90 degrees?
I have a simple question, can you spin a conical pendulum fast enough so that it rotates at 90 degrees? The equation is $\tan(\theta)=v^2/rg$ , but at 90 degrees, $\tan(\theta)=\infty$ ... so what ...
1answer
57 views
More points vs. precision
I have a dilemma. In my lab exercise, I was measuring spectra with HPGe detector of several sources (gamma spectroscopy). To determine the energy of the unknown spectrum I first needed to calibrate ...
1answer
360 views
Determine viscosity using falling sphere (Stokes Law, Ladenburg correction)
Introduction I am trying to determine the viscosity of a fluid. Therefore, I let a sphere of known mass m and radius r fall ...
1answer
40 views
Dealing with experimental data
I have some experimental data about a value $n$, now, I am supposed to give, in the ending, a single value with an error: $n=a\pm b$. I have originally 6 values of $n$, each one comes as an indirect ...
1answer
47 views
Is this the correct way to I combine multiple interdependant pressure readings?
I want to measure the density in different layers of a suspension. To do this I want to place pressure sensors at different heights. Let's assume that the sensors are not by orders of magnitude more ...
1answer
107 views
Observable (in principle) signal of a bubble collision in eternal inflation
Assuming a scenario of eternal inflation with a lot of "bubble universes" expanding, Lenny Susskind explains here that a potential signal of a collision of our universe with another bubble could be a ...
http://math.stackexchange.com/questions/tagged/optimization+probability
# Tagged Questions
0answers
79 views
### pricing of heat rate-linked derivative [migrated]
It's a simplified model. Suppose $U_t$ is a random variable subject to a Lognormal($x_1$, $z_1^2$) distribution, and $V_t$ is a random variable subject to a Lognormal($x_2$, $z_2^2$) distribution. Suppose ...
1answer
32 views
### Optimize winnings in a money making game.
So, given a continuous random variable A (with some density and CDF function), and a value I choose V, what is the equation to determine the best value V to maximize my earnings given that I will be ...
2answers
35 views
### gradient descent optimal step size
Suppose a differentiable, convex function $F(x)$ exists. Then $b = a - \gamma\bigtriangledown F(a)$ implies that $F(b) \le F(a)$ given $\gamma$ is chosen properly. The goal is to find the optimal ...
1answer
47 views
### Optimal strategy puzzle
Play a game with an urn. $75$ blue balls. $25$ red balls. $1$ yellow ball. you get a dollar for every red and if you select the yellow you lose everything. what should be your strategy in the game. ...
0answers
31 views
### Facets of the convex hull as solution of an optimization problem?
Given $N$ points $x_1, x_2, ..., x_N \in \mathbb{R}^n$, consider their convex hull \mathcal{C} = \text{conv}( \{ x_1, ..., x_n \} ) = \bigcap_{j=1}^{J} \{ x \in \mathbb{R}^n : \ A_j x \leq b_j \} ...
1answer
22 views
### What is the optimal stopping point for an experiment when expecting unknown event
Assume we notice that stock prices are rising and we can deduce we are in a bubble. Assume we start at $w(0)=0$ worth at time $t=0$ and the value grows linearly with time $(w(t)=t)$. We know that ...
2answers
61 views
### Anyone saw this interesting function before?
Say $\theta\in\Re^n$ and $\theta_i\in(0,1)$ for all $i$. Define $$f(\theta) = \frac{1}{n}\sum_i^n\{(1-\theta_i)\log(1-\theta_i)+\theta_i\log\theta_i\}$$ It is easy to see the minimizer of ...
3answers
50 views
### Packing radios into cartons - why is my solution wrong?
A manufacturer of car radios ships them to retailers in cartons of $n$ radios. The profit per radio is $\$59.50$, minus a shipping cost of $\$25$ per carton, so the profit is $59.5n-25$ dollars per ...
1answer
77 views
### Inverse transform sampling
I know the basic idea is to generate a random number from $U(0,1)$, find the inverse cumulative distribution function $F^{-1}$ and then take $x = F^{-1}(U)$. If you were to plot a histogram of say 1000 ...
1answer
140 views
### Optimal Yahtzee (Dice roll) decisions: Probability and weighting choices
I'm a senior in computer science, and I have a hobby of taking on little projects that I find interesting. My current one is a Yahtzee optimal play solver. One would enter their current roll, and it ...
0answers
31 views
### Optimal distribution with moment conditions
Basically, I want to find a probability distribution which maximizes a convex objective function and satisfies two moment constraints. For given $\bar x$, $m_{n-1}$, $m_n$ \max_{f(x)} ...
1answer
135 views
### Need help with proof for arbitrage betting
Recently I came across this article about sports betting arbitrage: http://www.sportsbettingworm.com/arbitrage-calculations/index.html. The article gives formulas for calculating arbitrage profit and ...
1answer
48 views
### Optimizing a physician's medical test plan
I have come across the following optimization problem: "A patient presents himself with symptoms to a physician. The physician has a set of $n$ medical tests, where each test $i$ has costs $c_i$ ...
1answer
14 views
1answer
74 views
### what math topic is this kind of example part of? or what is needed to understand how to solve it? [closed]
We have 100000000 sets/locations. Each set has: A = % chance of finding a cure for cancer (there are many different types of cures), B = time it takes to extract a cure for cancer, C = the optimal % chance (IN ...
0answers
88 views
### Differential Equations, Probability/Statistics, Optimization Problem - Relations?
While I am working on some physical/mathematical problems, I feel strongly that these three areas are almost the identical thing, except that they have different methods/from different aspects to ...
1answer
116 views
### Secretary problem for unknown n?
So one of my good friends is starting to date again (after being out of the country for two years), and I think that it might be helpful, or at least fun, to keep track of her dates in a ranked ...
0answers
37 views
### How to go about optimizing this function? (Maximizing)
If we are given a fixed integer $N > 0$ of choices we can pick out of a pool of $k$ values $c_0, \cdots, c_k$ (with repetitions allowed and $c_i > 0 \forall i$) and we want to maximize the ...
2answers
37 views
### Maximizing the time we reach to a threshold in a series of numbers
I have a problem and I really don't know what kind of mathematical method should I apply to solve or model my problem. I would be thankful If anyone can give me some answer or help. Suppose we have ...
1answer
38 views
### Uniform Continuous R.V. - Optimization
working on this problem: A road construction company needs to decide where to place an emergency phone on a stretch of road of length L. Suppose that accidents can happen uniformly at random ...
0answers
78 views
### Select positions for strongest defense given probability(position, target) scores. [closed]
In a game of tower defense, I want to place archers to optimize survival time. I have ~10 towers, and I am allowed one archer per tower. The towers have 50 to 300 vantage points each. Once an archer ...
2answers
512 views
### One vs multiple servers - problem
Consider the following problem: We have a simple queueing system with $\lambda$, the probabilistic intensity of queries per some predefined time interval. Now, we can arrange the system as a single ...
1answer
104 views
### Strategy to maximize no. of balls from N boxes
If you have N boxes each containing distinct number of balls and you are allowed to choose at most ...
2answers
389 views
### Grad degree that mainly deals with probability/game theory/optimization?
I'm currently working but am going to take classes as a non-degree student to beef up the math part of my background. I've only taken calc 1-3, ODEs, linear algebra, logic, and decision theory so my ...
2answers
79 views
### is there a solution to the following maximization problem such that $a = b$?
Let $X = (X_1,...,X_n)$ be a vector of $n$ random variables. Consider the following maximization problem: $\max\limits_{a,b} \;\mathrm{Cov}(a\cdot X, b \cdot X)$ under the constraint that \$\|a\|_2 = ...
1answer
63 views
### Derivatives with respect to a symmetric matrix, with an application to maximum likelihood
I am quite unsure about this whole matter of differentiation with respect to a matrix. First, I'd like a good (online hopefully) reference for getting up to speed on the theory - as opposed to a bunch ...
4answers
103 views
### Optimizing the expectancy
The following problem is about optimization. It is not a homework, but rather a natural question to ask to oneself afterwards. Here it is. Consider a road of length $L$ between two cities $A$ and ...
1answer
179 views
### Stochastic assignment problem
Given an $n \times n$ real matrix $C$, we can try to maximize $$\Phi(C, \pi) = \frac{1}{n} \sum_{i} C_{i,\pi(i)}$$ over $\pi \in S_n$, the set of all permutations on $n$ objects. What can one say ...
1answer
295 views
### Manifold with minimum surface distance between two points
The book "The World is Flat" uses flatness as a metaphor for a global economy. In fact, a spherical world would seem to be better than a flat world in terms of reducing the distances between two ...
2answers
323 views
### Generalization of the Sultan's dowry problem
We know the solution of the Sultan's dowry problem: To reject the first $n/e$ candidates and then to select the first who exceeds the best of the sample. How to find the best strategy if we want ...
1answer
164 views
### Maximize normal density function over a subset
For a 2D Normal distribution $N(0, \left[ \begin{array}{cc} 1 & -1/4 \\ -1/4 & 1 \end{array} \right])$, I am now trying to maximize its density function over $\{ x\geq 10, y \geq 10 \}$. My ...
3answers
411 views
### Optimally combining samples to estimate averages
Suppose I have two tables, each of unknown size, and I'd like to estimate the average of their true sizes. I hire 2 contractors: one guarantees good precision (i.e., her measurement ...
http://mathoverflow.net/revisions/50079/list
## Return to Answer
5 added 61 characters in body
I think the source of the confusion here is the idea that all models of ZFC have the same notion of what a "natural number" (and hence, by an appropriate encoding, an "algorithm") is. Unfortunately, Godel's incompleteness theorem tells us that no recursively enumerable axiom system (of which ZFC is an example) can precisely pin down the theory of the true natural numbers (i.e. true arithmetic), which can thus only be fully described in the metatheory rather than in any formal system. As such, there exist statements G about natural numbers which are true in some models of ZFC and false in others, because these two models have genuinely different interpretations of the natural number system.
It is a priori conceivable (though, in my opinion, unlikely), that P=NP is one of these statements. Specifically, it is conceivable that SAT is not solvable in polynomial time in the standard model of the natural numbers, but is solvable in polynomial time in an exotic model of the natural numbers, even if both models of the natural numbers are part of respective models of set theory obeying ZFC. The point here is that the exotic algorithm could have a length which is an exotic natural number, which could be larger than every standard natural number; similarly, the constants in the polynomial run time for this exotic algorithm could also be larger than every standard number. So there is no obvious way to convert the exotic polynomial time SAT solver into a standardly polynomial time SAT solver; it may even be that the exotic algorithm cannot be described at all in the standard model, let alone have a polynomial run time.
[Edit: actually, with Levin's trick, if SAT is solvable, it is always solvable with a bounded-length algorithm (namely, "run all possible algorithms in parallel in a carefully chosen manner"), so exotic length is not a genuine issue. However, this still does not exclude the possibility of exotic run time constants.]
It is even conceivable (though, again, I believe it to be unlikely) that the reverse is true: SAT is solvable in polynomial time in the standard model, but not in an exotic model. Here, the standard algorithm has a length which is a standard natural number, so the algorithm can at least be described in the exotic world. But just because it has a polynomial run time in the standard model, this does not necessarily imply a polynomial run time in the exotic model (unless one has a transfer principle, as is the case in the models coming from nonstandard analysis, but not all exotic models are of this type); the algorithm may solve all standard SAT problems in a polynomial amount of time, but require super-polynomial time to solve an exotic SAT problem. [In this scenario, ZFC + P!=NP would be $\omega$-inconsistent, but could still be consistent.]
4 added 29 characters in body
I think the source of the confusion here is the idea that all models of ZFC have the same notion of what a "natural number" (and hence, by an appropriate encoding, an "algorithm") is. Unfortunately, Godel's incompleteness theorem tells us that no recursively enumerable axiom system (of which ZFC is an example) can precisely pin down the theory of the true natural numbers (i.e. true arithmetic), which can thus only be fully described in the metatheory rather than in any formal system. As such, there exist statements G about natural numbers which are true in some models of ZFC and false in others, because these two models have genuinely different interpretations of the natural number system.
It is a proiri conceivable (though, in my opinion, unlikely), that P=NP is one of these statements. Specifically, it is conceivable that SAT is not solvable in polynomial time in the standard model of the natural numbers, but is solvable in polynomial time in an exotic model of the natural numbers, even if both models of the natural numbers are part of respective models of set theory obeying ZFC. The point here is that the exotic algorithm could have a length which is an exotic natural number, which could be larger than every standard natural number; similarly, the constants in the polynomial run time for this exotic algorithm could also be larger than every standard number. So there is no obvious way to convert the exotic polynomial time SAT solver into a standardly polynomial time SAT solver; it may even be that the exotic algorithm cannot be described at all in the standard model, let alone have a polynomial run time.
[Edit: actually, with Levin's trick, if SAT is solvable, it is always solvable with a bounded-length algorithm (namely, "run all possible algorithms in parallel in a carefully chosen manner"), so exotic length is not a genuine issue. However, this still does not exclude the possibility of exotic run time constants.]
It is even conceivable (though, again, I believe it to be unlikely) that the reverse is true: SAT is solvable in polynomial time in the standard model, but not in a exotic model. Here, the standard algorithm has a length which is a standard natural number, so the algorithm can at least be described in the exotic world. But just because it has a polynomial run time in the standard model, this does not necessarily imply a polynomial run time in the exotic model (unless one has a transfer principle, as is the case in the models coming from nonstandard analysis, but not all exotic models are of this type); the algorithm may solve all standard SAT problems in a polynomial amount of time, but require super-polynomial time to solve an exotic SAT problem. [In this scenario, ZFC + P!=NP would be $\omega$-inconsistent, but could still be consistent.]
3 added 158 characters in body; added 10 characters in body; added 23 characters in body
I think the source of the confusion here is the idea that all models of ZFC have the same notion of what a "natural number" (and hence, by an appropriate encoding, an "algorithm") is. Unfortunately, Godel's incompleteness theorem tells us that no recursively enumerable axiom system (of which ZFC is an example) can precisely pin down the theory of the true natural numbers (i.e. true arithmetic), which can thus only be fully described in the metatheory rather than in any formal system. As such, there exist statements G about natural numbers which are true in some models of ZFC and false in others, because these two models have genuinely different interpretations of the natural number system.
It is a priori conceivable (though, in my opinion, unlikely), that P=NP is one of these statements. Specifically, it is conceivable that SAT is not solvable in polynomial time in the standard model of the natural numbers, but is solvable in polynomial time in an exotic model of the natural numbers, even if both models of the natural numbers are part of respective models of set theory obeying ZFC. The point here is that the exotic algorithm could have a length which is an exotic natural number, which could be larger than every standard natural number; similarly, the constants in the polynomial run time for this exotic algorithm could also be larger than every standard number. So there is no obvious way to convert the exotic polynomial time SAT solver into a standardly polynomial time SAT solver; it may even be that the exotic algorithm cannot be described at all in the standard model, let alone have a polynomial run time.
[Edit: actually, with Levin's trick, if SAT is solvable, it is always solvable with a bounded-length algorithm (namely, "run all possible algorithms in parallel"), so exotic length is not a genuine issue. However, this still does not exclude the possibility of exotic run time constants.]
It is even conceivable (though, again, I believe it to be unlikely) that the reverse is true: SAT is solvable in polynomial time in the standard model, but not in a exotic model. Here, the standard algorithm has a length which is a standard natural number, so the algorithm can at least be described in the exotic world. But just because it has a polynomial run time in the standard model, this does not necessarily imply a polynomial run time in the exotic model (unless one has a transfer principle, as is the case in the models coming from nonstandard analysis, but not all exotic models are of this type); the algorithm may solve all standard SAT problems in a polynomial amount of time, but require super-polynomial time to solve an exotic SAT problem. [In this scenario, ZFC + P!=NP would be $\omega$-inconsistent, but could still be consistent.]
2 added 294 characters in body
I think the source of the confusion here is the idea that all models of ZFC have the same notion of what a "natural number" (and hence, by an appropriate encoding, an "algorithm") is. Unfortunately, Godel's incompleteness theorem tells us that no recursively enumerable axiom system (of which ZFC is an example) can precisely pin down the theory of the true natural numbers (i.e. true arithmetic), which can thus only be fully described in the metatheory rather than in any formal system. As such, there exist statements G about natural numbers which are true in some models of ZFC and false in others, because these two models have genuinely different interpretations of the natural number system.
It is a proiri conceivable (though, in my opinion, unlikely), that P=NP is one of these statements. Specifically, it is conceivable that SAT is not solvable in polynomial time in the standard model of the natural numbers, but is solvable in polynomial time in an exotic model of the natural numbers, even if both models are part of a model of set theory obeying ZFC. The point here is that the exotic algorithm could have a length which is an exotic natural number, which could be larger than every standard natural number; similarly, the constants in the polynomial run time for this exotic algorithm could also be larger than every standard number. So there is no obvious way to convert the exotic polynomial time SAT solver into a standardly polynomial time SAT solver; it may even be that the exotic algorithm cannot be described at all in the standard model, let alone have a polynomial run time.
[Edit: actually, with Levin's trick, if SAT is solvable, it is always solvable with a bounded-length algorithm (namely, "run all possible algorithms in parallel"), so exotic length is not a genuine issue. However, this still does not exclude the possibility of exotic run time constants.]
It is even conceivable (though, again, I believe it to be unlikely) that the reverse is true: SAT is solvable in polynomial time in the standard model, but not in a exotic model. Here, the standard algorithm has a length which is a standard natural number, so the algorithm can at least be described in the exotic world. But just because it has a polynomial run time in the standard model, this does not necessarily imply a polynomial run time in the exotic model (unless one has a transfer principle, as is the case in the models coming from nonstandard analysis, but not all exotic models are of this type); the algorithm may solve all standard SAT problems in a polynomial amount of time, but require super-polynomial time to solve an exotic SAT problem.
1
I think the source of the confusion here is the idea that all models of ZFC have the same notion of what a "natural number" (and hence, by an appropriate encoding, an "algorithm") is. Unfortunately, Godel's incompleteness theorem tells us that no recursively enumerable axiom system (of which ZFC is an example) can precisely pin down the theory of the true natural numbers (i.e. true arithmetic), which can thus only be fully described in the metatheory rather than in any formal system. As such, there exist statements G about natural numbers which are true in some models of ZFC and false in others, because these two models have genuinely different interpretations of the natural number system.
It is a proiri conceivable (though, in my opinion, unlikely), that P=NP is one of these statements. Specifically, it is conceivable that SAT is not solvable in polynomial time in the standard model of the natural numbers, but is solvable in polynomial time in an exotic model of the natural numbers, even if both models are part of a model of set theory obeying ZFC. The point here is that the exotic algorithm could have a length which is an exotic natural number, which could be larger than every standard natural number; similarly, the constants in the polynomial run time for this exotic algorithm could also be larger than every standard number. So there is no obvious way to convert the exotic polynomial time SAT solver into a standardly polynomial time SAT solver; it may even be that the exotic algorithm cannot be described at all in the standard model, let alone have a polynomial run time.
It is even conceivable (though, again, I believe it to be unlikely) that the reverse is true: SAT is solvable in polynomial time in the standard model, but not in a exotic model. Here, the standard algorithm has a length which is a standard natural number, so the algorithm can at least be described in the exotic world. But just because it has a polynomial run time in the standard model, this does not necessarily imply a polynomial run time in the exotic model (unless one has a transfer principle, as is the case in the models coming from nonstandard analysis, but not all exotic models are of this type); the algorithm may solve all standard SAT problems in a polynomial amount of time, but require super-polynomial time to solve an exotic SAT problem.
http://mathhelpforum.com/calculus/28088-2-differentiate-following-function.html
# Thread:
1. ## 2. Differentiate the following function
$f(t) = \tfrac{1}{7}t^6 - 5t^4 + 9t$
2. Do you know the power rule?
$\tfrac{d}{dx} x^n = nx^{n-1}$
3. No, and I have a horrible teacher. Can you take me through the steps, please?
4. Originally Posted by plstevens
f (t) = 1/7t^6-5t^4+9t
Well, there's one rule and one property you have to know here. First is the power rule, as described above. Second is the fact that the derivative of x + y is the (derivative of x) + (derivative of y). In other words, differentiate each term separately, then add up all the derivatives at the end. Does this help?
5. Basically take whatever number that the variable is raised to. Multiply it by whatever constant was there before and then subtract 1 from the power.
So:
$\tfrac{d}{dx}x^2 = 2x$
$\tfrac{d}{dx}2x = 2$
$\tfrac{d}{dx}3x^{5.9} = 3 \cdot 5.9 x^{4.9}$
It's like that.
6. So is that the way to work out the problem I'm currently doing? Because I'm still lost and confused.
7. Originally Posted by plstevens
f (t) = 1/7t^6-5t^4+9t
First differentiate $\tfrac{1}{7}t^6$ using the power rule (giving $6 \cdot \tfrac{1}{7}\, t^{6-1}$, according to the rule), then do the same with $-5t^4$ and $9t$.
8. $(6/7)t^5 - 20t^3 + 9t^0$. Is this right? Tell me what I'm supposed to do with the $9t^0$.
9. Originally Posted by plstevens
(6/7)t^5-20t^3+9t^0 is this right, tell me what i'm supposed to do with the 9t^0
What is the value of $t^0$? If that is giving you problems, then consider something more fundamental: What is the value of 3^0? 5^0? 8^0? Plug them into your calculator if you don't remember.
-Dan
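For reference (an editorial addition, not part of the thread), the full derivative worked term by term with the power rule is
$$f(t) = \tfrac{1}{7}t^6 - 5t^4 + 9t \qquad\Longrightarrow\qquad f'(t) = \tfrac{6}{7}t^5 - 20t^3 + 9t^0 = \tfrac{6}{7}t^5 - 20t^3 + 9,$$
since $t^0 = 1$; equivalently, the derivative of the linear term $9t$ is just its slope $9$.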
http://mathoverflow.net/revisions/61996/list
## Return to Question
3 edited braces
This is a question I asked on Math.SE and got only a partial answer. I hope I will have better chances here.
Given the ring of polynomials $\mathbb{Z}_n[X]$, consider $$\mathbb{P}_n = \lbrace a_0 +a_1x+a_2x^2+\cdots+a_{n-1}x^{n-1} \mid a_i \in \mathbb{Z}_n \rbrace,$$ i.e. $\mathbb{P}_n$ is the set of all polynomials in $\mathbb{Z}_n[X]$ with exponents in $\mathbb{Z}_n$.
So, $\mathbb{P}_2 = \lbrace 0,1,x,1+x \rbrace,$
$$\mathbb{P}_3 = \lbrace 0, x^2, 2x^2, x, x+x^2, x+2x^2, 2x, 2x+x^2, 2x+2x^2, 1, 1+x^2, 1+2x^2, 1+x \rbrace \cup$$
$$\lbrace 1+x+x^2, 1+x+2x^2, 1+2x, 1+2x+2x^2, 2, 2+x^2, 2+2x^2, 2+x, 2+x+x^2 \rbrace \cup$$
$$\lbrace 2+x+2x^2, 2+2x, 2+2x+x^2, 2+2x+2x^2 \rbrace$$
The above ordering of the elements is based on the coefficient coordinates pattern: $(0,0,0), (0,0,1),(0,0,2), (0,1,0), (0,1,1), (0,1,2), \cdots, (2,2,0), (2,2,1), (2,2,2).$
Clearly, $\mathbb{P}_n$ has $n^n$ elements. I am counting the number of polynomials in $\mathbb{P}_n$ that vanish in $\mathbb{Z}_n$. Let's denote the count for $\mathbb{P}_n$ by $r_n$ ($r$ loosely stands for 'reducible'). Then, $r_2 = 3, r_3 = 19, \cdots$ It is very early to guess the growth of $r_n$ or its primality but I would like to know if there is any theorem that would help to count or reduce the number of polynomials I should check.
Some work:
1. Since $\mathbb{Z}_n \subset \mathbb{Z}_n[X]$, $r_n \leq n^n - (n-1)$. (there are $n-1$ nonzero elements)
2. There are $n^{n-1}$ polynomials with zero constant term and there are $n-1$ polynomials of degree $1$ with nonzero constant term all of which vanish for some $x$ in $\mathbb{Z}_n$. Hence $n^{n-1} + (n-1) \leq r_n$. This is not a good bound as it is far less than $n^n$ for large $n$.
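Not part of the original question: a brute-force Python sketch for computing $r_n$ for small $n$, testing every coefficient vector in $\mathbb{Z}_n^n$ for a root in $\mathbb{Z}_n$. It should reproduce the values $r_2 = 3$ and $r_3 = 19$ quoted above, and it is only practical for small $n$ since there are $n^n$ polynomials to check.

```python
from itertools import product

def r(n):
    """Count polynomials a_0 + a_1 x + ... + a_{n-1} x^{n-1}, a_i in Z_n,
    that have at least one root x in Z_n (the zero polynomial counts)."""
    count = 0
    for coeffs in product(range(n), repeat=n):          # (a_0, ..., a_{n-1})
        if any(sum(a * pow(x, k, n) for k, a in enumerate(coeffs)) % n == 0
               for x in range(n)):
            count += 1
    return count

print([r(n) for n in (2, 3, 4)])
```

A cleverer count would exploit the observations in the work items above, but brute force is enough to extend the sequence a little and sanity-check conjectures.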
2 edited formula
# Counting some polynomials that have a zero in $\mathbb{Z}_n[X]$
This is a question I asked on Math.SE and got only a partial answer. I hope I will have better chances here.
Given the ring of polynomials $\mathbb{Z}_n[X]$, consider $$\mathbb{P}_n = {a_0 +a_1x+a_2x^2+\cdots+a_{n-1}x^{n-1}| a_i \in \mathbb{Z}_n},$$ i.e. $\mathbb{P}_n$ is the set of all polynomials in $\mathbb{Z}_n[X]$ with exponents in $\mathbb{Z}_n$.
So, $\mathbb{P}_2 = {0,1,x,1+x},$
$$\mathbb{P}_3 = {0, x^2, 2x^2, x, x+x^2, x+2x^2, 2x, 2x+x^2, 2x+2x^2, 1, 1+x^2, 1+2x^2, 1+x } \cup$$
$${ 1+x+x^2, 1+x+2x^2, 1+2x, 1+2x+2x^2, 2, 2+x^2, 2+2x^2, 2+x, 2+x+x^2 } \cup$$
$${ 2+x+2x^2, 2+2x, 2+2x+x^2, 2+2x+2x^2}$$
The above ordering of the elements is based on the coefficient coordinates pattern: $(0,0,0), (0,0,1),(0,0,2), (0,1,0), (0,1,1), (0,1,2), \cdots, (2,2,0), (2,2,1), (2,2,2).$
Clearly, $\mathbb{P}_n$ has $n^n$ elements. I am counting the number of polynomials in $\mathbb{P}_n$ that vanish in $\mathbb{Z}_n$. Let's denote the count for $\mathbb{P}_n$ by $r_n$ ($r$ loosely stands for 'reducible'). Then, $r_2 = 3, r_3 = 19, \cdots$ It is very early to guess the growth of $r_n$ or its primality but I would like to know if there is any theorem that would help to count or reduce the number of polynomials I should check.
Some work:
1. Since $\mathbb{Z}_n \subset \mathbb{Z}_n[X]$, $r_n \leq n^n - (n-1)$. (there are $n-1$ nonzero elements)
2. There are $n^{n-1}$ polynomials with zero constant term and there are $n-1$ polynomials of degree $1$ with nonzero constant term all of which vanish for some $x$ in $\mathbb{Z}_n$. Hence $n^{n-1} + (n-1) \leq r_n$. This is not a good bound as it is far less than $n^n$ for large $n$.
1
# Counting some vanishing polynomials in $\mathbb{Z}_n[X]$
This is a question I asked on Math.SE and got only a partial answer. I hope I will have better chances here.
Given the ring of polynomials $\mathbb{Z}_n[X]$, consider $$\mathbb{P}_n = {a_0 +a_1x+a_2x^2+\cdots+a_{n-1}x^{n-1}| a_i \in \mathbb{Z}_n},$$ i.e. $\mathbb{P}_n$ is the set of all polynomials in $\mathbb{Z}_n[X]$ with exponents in $\mathbb{Z}_n$.
So, $\mathbb{P}_2 = {0,1,x,1+x},$
$$\mathbb{P}_3 = {0, x^2, 2x^2, x, x+x^2, x+2x^2, 2x, 2x+x^2, 2x+2x^2, 1, 1+x^2, 1+2x^2, 1+x } \cup$$
$${ 1+x+x^2, 1+x+2x^2, 1+2x, 1+2x+2x^2, 2, 2+x^2, 2+2x^2, 2+x, 2+x+x^2 } \cup$$
$${ 2+x+2x^2, 2+2x, 2+2x+x^2, 2+2x+2x^2}$$
The above ordering of the elements is based on the coefficient coordinates pattern: $(0,0,0), (0,0,1),(0,0,2), (0,1,0), (0,1,1), (0,1,2), \cdots, (2,2,0), (2,2,1), (2,2,2).$
Clearly, $\mathbb{P}_n$ has $n^n$ elements. I am counting the number of polynomials in $\mathbb{P}_n$ that vanish in $\mathbb{Z}_n$. Let's denote the count for $\mathbb{P}_n$ by $r_n$ ($r$ loosely stands for 'reducible'). Then, $r_2 = 3, r_3 = 19, \cdots$ It is very early to guess the growth of $r_n$ or its primality but I would like to know if there is any theorem that would help to count or reduce the number of polynomials I should check.
Some work:
1. Since $\mathbb{Z}_n \subset \mathbb{Z}_n[X]$, $r_n \leq n^n - (n-1)$. (there are $n-1$ nonzero elements)
2. There are $(n-1)^{n-1}$ polynomials with zero constant term and there are $n-1$ polynomials of degree $1$ with nonzero constant term all of which vanish for some $x$ in $\mathbb{Z}_n$. Hence $(n-1)^{n-1} + (n-1) \leq r_n$. This is not a good bound as it is far less than $n^n$ for large $n$.
http://mathematica.stackexchange.com/questions/7502/how-can-i-return-private-members-of-a-mathematica-package-as-the-output-of-packa
# How can I return private members of a Mathematica package as the output of package functions without the “PackageName`Private`” prefix?
I have created a Mathematica package that manipulates various types of input physics data into a common form of output data for further analysis. To make this process more efficient and manageable, I assign each specific set of data a different head and define the manipulation functions differently (as necessary) for each of the various heads. For example,
````getX[data_AA] := data[[1]];
getY[data_AA] := data[[2]];
getZ[data_AA] := data[[3]];
getR[data_AA] := Module[{x=getX[data],y=getY[data]},Return[x^2+y^2]]
...and so on...
getR[data_BB] := data[[1]];
getPhi[data_BB] := data[[2]];
getX[data_BB] := Module[{r=getR[data],phi=getPhi[data]},Return[r*Cos[phi]]]
...and so on...
getX[data_CC] := data[[5]];
...and so on...
````
Accordingly, I have written functions that change the heads of the data from List (since the data are imported from files into arrays) to their respective heads so that the data can then be used as arguments in the manipulation functions like the examples above: for example,
````makeAA[AAdata_] := Return[AAdata /. List -> AA]
````
Because I want this to all be done internally, so that one cannot simply initialize a random object in a notebook with a global head of AA and then use the manipulation functions on that arbitrarily created object (see Note below), I have made the heads private members of the module. If the data is returned, however, by the function, say,
````getAAdata[filename_]
````
then it is displayed as
````PackageName`Private`AA[PackageName`Private`AA[particle 1 data],PackageName`Private`AA[particle 2 data],...,PackageName`Private`AA[particle N data]]
````
instead of the more concise and, therefore, more desirable
````AA[AA[particle 1 data],AA[particle 2 data],...,AA[particle N data]]
````
(this is especially a problem when dealing with further nested arrays).
So the question is: how can I get private objects (heads, variables, etc.) to display in outputted data without the cumbersome and cluttering "PackageName`Private`" prefix?
Note: it is not sufficient to list the heads in the public part of the package because then, even though it would solve the above problem, it would introduce an additional problem that one would be able to initialize arbitrary objects with those heads -- say, `test=AA[2.1,3.2,5.4,2.3]` -- and then call `getX[test]`, which will then carry out the evaluation as it would any object with head `AA`. I want to reserve creating objects with those heads to other functions in the package (such as `makeAA`; see above).
This problem also shows up if I don't explicitly write a usage description/tag for a function and try to get information on the function by doing
````?Function
````
-- all the arguments and local variables are written with the "PackageName`Private`" prefix, making it rather difficult to read the body of the function.
Thank you very much for your help!
@Mr.Wizard -- The first link that you provided above is indeed an answer to my second question about obtaining context-free information/definitions of functions in my package. I believe that the main question I had, however, is not addressed by either of the above links, but is instead beautifully answered by Oleksandr below. Would you agree? – PhysicsCodingEnthusiast Jun 27 '12 at 15:58
## 3 Answers
A more object-oriented approach may be helpful here. For example:
````BeginPackage["example`"];
thing;
new;
Begin["`Private`"];
thing /: new[thing[contents_]] :=
Module[{thing},
(* Supported operations: *)
thing /: Plus[thing[x_], y_] := thing[x + y];
thing /: Times[thing[x_], y_] := thing[x y];
(* Example invalid operation: *)
thing /: Power[_thing, _] := $Failed;
(* Format value: *)
thing /: MakeBoxes[thing, StandardForm] = InterpretationBox["thing", thing];
(* Return symbol: *)
thing[contents]
];
End[];
EndPackage[];
````
Now we can write, e.g.,
````myThing = new@thing[73];
4 myThing + 3
````
which gives:
````thing[295] (* == thing[4 * 73 + 3] *)
````
However, despite the standard form (which can also be copied and pasted from output to input thanks to the `InterpretationBox`), the internal representation is quite different:
````% // InputForm
(* -> thing$576[295] *)
````
As a result we can't just conjure `thing`s up without invoking `new`, as the methods defined on real `thing`s simply won't exist in that case:
````myNewThing = thing[48];
myNewThing + 17
(* 17 + thing[48] *)
````
Strictly speaking, the format value would have been enough to answer your question as stated. However, it seems like an object oriented approach is quite a natural fit for what you're probably trying to achieve here, so the extra effort may be worthwhile for maintainability and robustness reasons if you intend to have a lot of different objects each with their own private methods and/or internal state.
Hi Oleksandr! `MakeBoxes` and `InterpretationBox` were exactly what I was looking for; the additional information about making Mathematica function more like an object-oriented language was the cherry on top! Thank you very much! – PhysicsCodingEnthusiast Jun 27 '12 at 15:47
@PhysicsCodingEnthusiast: you're very welcome. I think it's also worth noting if you use this object-oriented approach, you'll also be able to overload `Definition` (as suggested by R.M) without having to make global modifications. Indeed, making such modifications as part of a package is usually considered to be a very bad idea, because who knows what combination of packages users might load. – Oleksandr R. Jun 28 '12 at 21:46
Try this:
````Begin["PackageName`Private`"];
?`*
End[];
````
which puts you in the private context when you execute `?`*`. Also, note the grave mark (`) in `?`*`, it ensures that the wildcard (`*`) is only applied to the current context not your entire `$ContextPath`. Incidentally, this is the same reason that `Private` in `Begin["`Private`"]` is also preceded by a grave mark, it ensures that it is a sub-context of the package context.
The simplest way, though, is simply to type
````?PackageName`Private`*
````
which will not display the context path of the variables.
Hi rcollyer; thanks for your reply! Perhaps I should have been more explicit in stating that I was hoping to be able to add something to my package itself to suppress the "PackageNamePrivate" context prefix, at least for my self-defined heads of the various datasets. The only command I would want anyone to have to invoke in a Mathematica notebook/script would be the Get[Package.m] command, without having to worry about invoking Begin["PackageNamePrivate"]. Is that possible? – PhysicsCodingEnthusiast Jun 27 '12 at 1:57
The inner context is there to hide the implementation from casual inspection. In Mathematica, this isn't a strict barrier, but it is okay for its purpose. Outside of that, but within the package context itself, you expose the symbols you want the rest of the world to see, usually through a `symbol::usage` string. Using it that way, `Get[package.m]` works just fine, and as long as you're careful the things in the private context don't leak out. (But, you've read that question.) How do your usage requirements differ from this? – rcollyer Jun 27 '12 at 2:05
That functionality is mostly okay for me, as well. The only problem I have with that is when I have a function that outputs data with one of my self-defined private heads: for example, a function like `getAAdata[filename_]` will return data in the form `AA[AA[particle 1 data],AA[particle 2 data],....,AA[particle N data]]`. But the heads will, of course, not be displayed simply as `AA`, but rather as `PackageName`Private`AA`, so that the output will look rather messy. – PhysicsCodingEnthusiast Jun 27 '12 at 2:14
Of course, I could always convert the head back to `List`, so that the data is presented in nicely in array form, but I'm just curious. Overall, it's not the biggest problem in the world, but I do appreciate your humoring me! – PhysicsCodingEnthusiast Jun 27 '12 at 2:16
Also, sorry for the numerous comment postings, deletions, and edits -- I'm still getting used to the site! – PhysicsCodingEnthusiast Jun 27 '12 at 2:17
The only way to call a function/variable in a package without using the full context is to make it public, which is typically done with usage messages immediately after the `BeginPackage["Package`"]` and before `Begin["Private`"]`.
To me, it seems like you just want to be able to view the definitions, etc. without it looking messy with all the context information. To do that, here's a simple way to add a definition to, uhm, `Definition` that does this. Include this in your package:
````Unprotect@Definition;
Definition[x_Symbol] /; StringMatchQ[Context[x], "Package`" ~~ ___] :=
StringReplace[ToString@FullDefinition[x],
(WordCharacter .. ~~ DigitCharacter ... ~~ "`") .. ~~ s_ :> s
];
Protect@Definition;
````
Now, all functions in your package will not have the full context displayed when you call `?foo`. Note that it is not really `Definition`, as I'm using `FullDefinition`, which also gives you the definitions of all symbols `foo` depends on. This is only for convenience sake, so that you can still use `?`. If you strictly want only `Definition`, create a function `myDefinition` instead (otherwise you'll end up with a recursion) that uses the above example with `Definition` instead of `FullDefinition` (I didn't want to mess with the parsing of `?` since it's also used in pattern tests and could break a lot of things subtly).
Here's an example of it at work:
In the above code, I strip away all context information. If you want to strip only `Package`Private``, then you can modify the `StringReplace` part accordingly.
-
1
Thanks R.M; this was a great answer to my second question -- I will definitely be using this! I wish there were a way to accept more than one answer for this question, since it had two (related) parts to it. Out of curiosity, is it commonplace to modify default Mathematica functions/behaviors like you have done with `Definition` above? – PhysicsCodingEnthusiast Jun 27 '12 at 15:52
@PhysicsCodingEnthusiast It depends... I would normally not overload/redefine built-ins, unless there's a strong need for it and even then only if it is reasonably safe to do so. In this case, the additional definition is very narrow in scope and in the tests I did, I didn't come across any issues. However, there are some functions that I wouldn't touch, no matter what... things like `Set` for example, which are fundamental. As I said, if you don't mind not using `?`, then it might be better to use a custom definition and even assign it to an unused prefix operator – rm -rf♦ Jun 27 '12 at 17:14
http://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant
# Jacobian matrix and determinant
In vector calculus, the Jacobian matrix is the matrix of all first-order partial derivatives of a vector- or scalar-valued function with respect to another vector. Suppose $F : \mathbb{R}^n \rightarrow \mathbb{R}^m$ is a function from a real n-tuple to a real m-tuple. Such a function is given by m real-valued component functions, $F_1(x_1,\ldots,x_n),\ldots,F_m(x_1,\ldots,x_n)$. The partial derivatives of all these functions (if they exist) can be organized in an m-by-n matrix, the Jacobian matrix $J$ of $F$, as follows:
$J=\begin{bmatrix} \dfrac{\partial F_1}{\partial x_1} & \cdots & \dfrac{\partial F_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial F_m}{\partial x_1} & \cdots & \dfrac{\partial F_m}{\partial x_n} \end{bmatrix}.$
This matrix is also denoted by $J_F(x_1,\ldots,x_n)$ and $\frac{\partial(F_1,\ldots,F_m)}{\partial(x_1,\ldots,x_n)}$. If $(x_1, \ldots , x_n)$ are the usual orthogonal Cartesian coordinates, the i th row (i = 1, ..., m) of this matrix corresponds to the gradient of the i th component function Fi: $\left(\nabla F_i\right)$. Note that some books define the Jacobian as the transpose of the matrix given above.
The Jacobian determinant (often simply called the Jacobian) is the determinant of the Jacobian matrix (if $m=n$).
These concepts are named after the mathematician Carl Gustav Jacob Jacobi.
## Jacobian matrix
The Jacobian of a function describes the orientation of a tangent plane to the function at a given point. In this way, the Jacobian generalizes the gradient of a scalar valued function of multiple variables which itself generalizes the derivative of a scalar-valued function of a scalar. In other words, the Jacobian for a scalar valued multivariable function is the gradient and that of a scalar valued function of scalar is simply its derivative. Likewise, the Jacobian can also be thought of as describing the amount of "stretching" that a transformation imposes. For example, if $(x_2,y_2)=f(x_1,y_1)$ is used to transform an image, the Jacobian of $f$, $J(x_1,y_1)$ describes how much the image in the neighborhood of $(x_1,y_1)$ is stretched in the x and y directions.
If a function is differentiable at a point, its derivative is given in coordinates by the Jacobian, but a function doesn't need to be differentiable for the Jacobian to be defined, since only the partial derivatives are required to exist.
The importance of the Jacobian lies in the fact that it is a factor in one term of the best linear approximation to a differentiable function near a given point. In this sense, the Jacobian is the derivative of a multivariate function.
If $p$ is a point in $\mathbb{R}^n$ and $F$ is differentiable at $p$, then its derivative is given by $J_F(p)$. In this case, the linear map described by $J_F(p)$ is the best linear approximation of $F$ near the point $p$, in the sense that
$F(\mathbf{x}) = F(\mathbf{p}) + J_F(\mathbf{p})(\mathbf{x}-\mathbf{p}) + o(\|\mathbf{x}-\mathbf{p}\|)$
for x close to p and where o is the little o-notation (for $x\to p$) and $\|\mathbf{x}-\mathbf{p}\|$ is the distance between x and p.
Compare this to a Taylor series for a scalar function of a scalar argument, truncated to first order:
$f(x) = f(p) + f'(p) ( x - p ) + o(x-p).$
In a sense, both the gradient and Jacobian are "first derivatives" — the former the first derivative of a scalar function of several variables, the latter the first derivative of a vector function of several variables. In general, the gradient can be regarded as a special version of the Jacobian: it is the Jacobian of a scalar function of several variables.
The Jacobian of the gradient has a special name: the Hessian matrix, which in a sense is the "second derivative" of the scalar function of several variables in question.
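As a concrete illustration (my own example, not from the original article): take the scalar function $f(x,y)=x^2 y$; the Jacobian of its gradient is exactly its Hessian,
$\nabla f(x,y) = \begin{pmatrix} 2xy \\ x^2 \end{pmatrix}, \qquad J_{\nabla f}(x,y) = \begin{pmatrix} 2y & 2x \\ 2x & 0 \end{pmatrix} = H_f(x,y).$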
### Inverse
According to the inverse function theorem, the matrix inverse of the Jacobian matrix of an invertible function is the Jacobian matrix of the inverse function. That is, if the Jacobian of the function $F : \mathbb{R}^n \rightarrow \mathbb{R}^n$ is continuous and nonsingular at the point $p$ in $\mathbb{R}^n$, then $F$ is invertible when restricted to some neighborhood of $p$ and
$J_{F^{-1}}(F(p)) = [ J_F(p) ]^{-1}.$
### Uses
#### Dynamical systems
Consider a dynamical system of the form $x' = F(x)$, where $x'$ is the (component-wise) time derivative of $x$, and $F : \mathbb{R}^n \rightarrow \mathbb{R}^n$ is continuous and differentiable. If $F(x_0) = 0$, then $x_0$ is a stationary point (also called a critical point, not to be confused with a fixed point). The behavior of the system near a stationary point is related to the eigenvalues of $J_F(x_0)$, the Jacobian of $F$ at the stationary point.[1] Specifically, if all eigenvalues have negative real parts, then the system is stable near the stationary point; if any eigenvalue has a positive real part, then the point is unstable. If the largest real part of the eigenvalues is equal to 0, the Jacobian matrix does not allow for an evaluation of the stability.
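For instance (a minimal sketch, not from the original article; the damped-pendulum system and the damping constant 0.5 are just illustrative choices), stability can be checked numerically by inspecting the eigenvalues of the Jacobian at the stationary point:
```
import numpy as np

# Damped pendulum: theta' = omega, omega' = -sin(theta) - 0.5*omega.
# The stationary point is (theta, omega) = (0, 0); the Jacobian there is:
J = np.array([[ 0.0,  1.0],    # d(theta')/d(theta), d(theta')/d(omega)
              [-1.0, -0.5]])   # d(omega')/d(theta), d(omega')/d(omega)

eigenvalues = np.linalg.eigvals(J)
print(eigenvalues)                      # approx. -0.25 +/- 0.968j
print(np.all(eigenvalues.real < 0))     # True: the stationary point is stable
```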
#### Image Jacobian
In computer vision, the image Jacobian describes the relationship between the motion of the camera and the apparent motion of points in the image (optical flow).
A point in space with 3D coordinates $P=(X, Y, Z)$ in the frame of the camera is projected onto the image as a 2D point with coordinates $p=(x, y)$. Neglecting the intrinsic parameters of the camera, and assuming focal distance 1, the relation between them is:
$\left\{ \begin{matrix} x=\frac{X}{Z}\\ y=\frac{Y}{Z} \\ \end{matrix} \right.$
Differentiating this
$\left\{ \begin{matrix} \dot{x}=\frac{{\dot{X}}}{Z}-\frac{X\dot{Z}}{Z^{2}}=\frac{\dot{X}-x\dot{Z}}{Z} \\ \dot{y}=\frac{{\dot{Y}}}{Z}-\frac{Y\dot{Z}}{Z^{2}}=\frac{\dot{Y}-y\dot{Z}}{Z} \\ \end{matrix}\right.\qquad\qquad\dot{P}=-v_{c}-\omega _{c}\times P\Leftrightarrow \left\{ \begin{matrix} \dot{X}=-v_{x}-\omega _{y}Z+\omega _{z}Y \\ \dot{Y}=-v_{y}-\omega _{z}X+\omega _{x}Z \\ \dot{Z}=-v_{z}-\omega _{x}Y+\omega _{y}X \\ \end{matrix}\right.$
Grouping these equations
$\left\{ \begin{matrix} \dot{x}=\frac{-v_{x}}{Z}+\frac{xv_{z}}{Z}+xy\omega _{x}-(1+x^{2})\omega _{y}+y\omega _{z} \\ \dot{y}=\frac{-v_{y}}{Z}+\frac{yv_{z}}{Z}+(1+y^{2})\omega _{x}-xy\omega _{y}-x\omega _{z} \\ \end{matrix} \right.$
Finally, for a given pixel in the image with coordinates $(x,y)$, the apparent motion $(u,v)$ is given by[2]
$\begin{bmatrix} u\\v \end{bmatrix} = \begin{bmatrix} -\frac{1}{Z} & 0 & \frac{x}{Z} & xy & -(x^2+1) &y\\ 0 & -\frac{1}{Z} & \frac{y}{Z} & y^2+1 & -xy &-x\\ \end{bmatrix} \begin{bmatrix} V_x \\ V_y \\ V_z \\ \omega_x \\ \omega_y \\ \omega_z \\ \end{bmatrix}$
#### Newton's method
A system of coupled nonlinear equations can be solved iteratively by Newton's method. This method uses the Jacobian matrix of the system of equations.
The following is detailed example code in MATLAB (although there is a built-in `jacobian` command):
```
function s = jacobian(f, x, tol)
% f is a multivariable function handle, x is a starting point (row vector)
if nargin == 2
    tol = 10^(-5);
end
while 1
    % if x and f(x) are row vectors, we need transpose operations here
    y = x' - jacob(f, x)\f(x)';   % Newton step: get the next point
    if norm(f(y')) < tol          % check error tolerance
        s = y';
        return;
    end
    x = y';
end
```
```
function j = jacob(f, x) % approximately calculate the Jacobian matrix by forward differences
k = length(x);
j = zeros(k, k);
x2 = x;
dx = 0.001;
for m = 1: k
    x2(m) = x(m)+dx;
    j(:, m) = (f(x2)-f(x))'/dx; % partial derivatives with respect to x(m) in the m-th column
    x2(m) = x(m);
end
```
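For comparison, here is a rough NumPy version of the same iteration (my own sketch, not part of the original article; the circle-and-line system at the end is just an illustrative test case):
```
import numpy as np

def newton_system(f, x0, tol=1e-5, h=1e-3, max_iter=100):
    """Solve f(x) = 0 by Newton's method with a forward-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        # numerical Jacobian: J[i, j] = d f_i / d x_j
        J = np.empty((len(fx), len(x)))
        for j in range(len(x)):
            xh = x.copy()
            xh[j] += h
            J[:, j] = (f(xh) - fx) / h
        x = x - np.linalg.solve(J, fx)
    return x

# example: intersection of the circle x^2 + y^2 = 4 with the line x = y
print(newton_system(lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] - v[1]]),
                    [1.0, 0.5]))   # approx. [1.41421356, 1.41421356]
```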
## Jacobian determinant
If m = n, then F is a function from n-space to n-space and the Jacobian matrix is a square matrix. We can then form its determinant, known as the Jacobian determinant. The Jacobian determinant is sometimes simply called "the Jacobian."
The Jacobian determinant at a given point gives important information about the behavior of F near that point. For instance, the continuously differentiable function F is invertible near a point $p \in \mathbb{R}^n$ if the Jacobian determinant at p is non-zero. This is the inverse function theorem. Furthermore, if the Jacobian determinant at p is positive, then F preserves orientation near p; if it is negative, F reverses orientation. The absolute value of the Jacobian determinant at p gives us the factor by which the function F expands or shrinks volumes near p; this is why it occurs in the general substitution rule.
The Jacobian determinant is used when making a change of variables when evaluating a multiple integral of a function over a region within its domain. To accommodate the change of coordinates, the magnitude of the Jacobian determinant arises as a multiplicative factor within the integral. Normally it is required that the change of coordinates be done in a manner which maintains injectivity between the coordinates that determine the domain. Similarly, the Jacobian determinant represents the factor by which volumes change when space is distorted according to some function. The Jacobian determinant, as a result, is usually well-defined. The Jacobian can also be used to solve systems of differential equations at an equilibrium point or to approximate solutions near an equilibrium point.
## Examples
Example 1. The transformation from spherical coordinates $(r, \theta, \phi)$ to Cartesian coordinates $(x_1, x_2, x_3)$ is given by the function $F : \mathbb{R}^+ \times [0,\pi] \times [0,2\pi) \rightarrow \mathbb{R}^3$ with components:
$x_1 = r\, \sin\theta\, \cos\phi \,$
$x_2 = r\, \sin\theta\, \sin\phi \,$
$x_3 = r\, \cos\theta. \,$
The Jacobian matrix for this coordinate change is
$J_F(r,\theta,\phi) =\begin{bmatrix} \dfrac{\partial x_1}{\partial r} & \dfrac{\partial x_1}{\partial \theta} & \dfrac{\partial x_1}{\partial \phi} \\[3pt] \dfrac{\partial x_2}{\partial r} & \dfrac{\partial x_2}{\partial \theta} & \dfrac{\partial x_2}{\partial \phi} \\[3pt] \dfrac{\partial x_3}{\partial r} & \dfrac{\partial x_3}{\partial \theta} & \dfrac{\partial x_3}{\partial \phi} \\ \end{bmatrix}=\begin{bmatrix} \sin\theta\, \cos\phi & r\, \cos\theta\, \cos\phi & -r\, \sin\theta\, \sin\phi \\ \sin\theta\, \sin\phi & r\, \cos\theta\, \sin\phi & r\, \sin\theta\, \cos\phi \\ \cos\theta & -r\, \sin\theta & 0 \end{bmatrix}.$
The determinant is $r^2 \sin\theta$. As an example, since $dV = dx_1\, dx_2\, dx_3$, this determinant implies that the differential volume element is $dV = r^2 \sin\theta\, dr\, d\theta\, d\phi$. Nevertheless this determinant varies with the coordinates. To avoid any variation the new coordinates can be defined as $w_{1}=\frac{r^{3}}{3},\ w_{2}=-\cos\theta,\ w_{3}=\phi.\,$ [3] Now the determinant equals 1 and the volume element becomes $r^{2}dr\ \sin\theta\ d\theta\ d\phi=dw_{1}dw_{2}dw_{3}\,$.
Example 2. The Jacobian matrix of the function $F : \mathbb{R}^3 \rightarrow \mathbb{R}^4$ with components
$y_1 = x_1 \,$
$y_2 = 5x_3 \,$
$y_3 = 4x_2^2 - 2x_3 \,$
$y_4 = x_3 \sin(x_1) \,$
is
$J_F(x_1,x_2,x_3) =\begin{bmatrix} \dfrac{\partial y_1}{\partial x_1} & \dfrac{\partial y_1}{\partial x_2} & \dfrac{\partial y_1}{\partial x_3} \\[3pt] \dfrac{\partial y_2}{\partial x_1} & \dfrac{\partial y_2}{\partial x_2} & \dfrac{\partial y_2}{\partial x_3} \\[3pt] \dfrac{\partial y_3}{\partial x_1} & \dfrac{\partial y_3}{\partial x_2} & \dfrac{\partial y_3}{\partial x_3} \\[3pt] \dfrac{\partial y_4}{\partial x_1} & \dfrac{\partial y_4}{\partial x_2} & \dfrac{\partial y_4}{\partial x_3} \\ \end{bmatrix}=\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 5 \\ 0 & 8x_2 & -2 \\ x_3\cos(x_1) & 0 & \sin(x_1) \end{bmatrix}.$
This example shows that the Jacobian need not be a square matrix.
Example 3.
$x\,=r\,\cos\,\phi;$
$y\,=r\,\sin\,\phi.$
$J(r,\phi)=\begin{bmatrix} {\partial x\over\partial r} & {\partial x\over \partial\phi} \\ {\partial y\over \partial r} & {\partial y\over \partial\phi} \end{bmatrix}=\begin{bmatrix} {\partial (r\cos\phi)\over \partial r} & {\partial (r\cos\phi)\over \partial \phi} \\ {\partial(r\sin\phi)\over \partial r} & {\partial (r\sin\phi)\over \partial\phi} \end{bmatrix}=\begin{bmatrix} \cos\phi & -r\sin\phi \\ \sin\phi & r\cos\phi \end{bmatrix}$
The Jacobian determinant is equal to $r$. This shows how an integral in the Cartesian coordinate system is transformed into an integral in the polar coordinate system:
$\iint_A dx\, dy= \iint_B r \,dr\, d\phi.$
Example 4. The Jacobian determinant of the function $F : \mathbb{R}^3 \rightarrow \mathbb{R}^3$ with components
$\begin{align} y_1 &= 5x_2 \\ y_2 &= 4x_1^2 - 2 \sin (x_2x_3) \\ y_3 &= x_2 x_3 \end{align}$
is
$\begin{vmatrix} 0 & 5 & 0 \\ 8 x_1 & -2 x_3 \cos(x_2 x_3) & -2x_2\cos(x_2 x_3) \\ 0 & x_3 & x_2 \end{vmatrix} = -8 x_1 \cdot \begin{vmatrix} 5 & 0 \\ x_3 & x_2 \end{vmatrix} = -40 x_1 x_2.$
From this we see that $F$ reverses orientation near those points where $x_1$ and $x_2$ have the same sign; the function is locally invertible everywhere except near points where $x_1 = 0$ or $x_2 = 0$. Intuitively, if you start with a tiny object around the point $(1,1,1)$ and apply $F$ to that object, you will get an object with approximately 40 times the volume of the original one.
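This determinant is easy to double-check with a computer algebra system; for example, a short SymPy sketch (my own, not from the article):
```
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
F = sp.Matrix([5*x2, 4*x1**2 - 2*sp.sin(x2*x3), x2*x3])
J = F.jacobian([x1, x2, x3])   # the 3x3 Jacobian matrix of Example 4
print(sp.simplify(J.det()))    # -40*x1*x2
```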
## Notes
1. D.K. Arrowsmith and C.M. Place, Dynamical Systems, Section 3.3, Chapman & Hall, London, 1992. ISBN 0-412-39080-9.
2. Fermín, Leonardo; Medina et al (2009). "Estimation of Velocities Components using Optical Flow and Inner Product". Lecture Notes in Computer Science 396: 349–358.
3. Taken from http://www.sjcrothers.plasmaresources.com/schwarzschild.pdf – On the Gravitational Field of a Mass Point according to Einstein’s Theory by K. Schwarzschild – arXiv:physics/9905030 v1 (text of the original paper, in Wikisource).
http://divisbyzero.com/2010/05/09/volumes-of-n-dimensional-balls/
Division by Zero
A blog about math, puzzles, teaching, and academic technology
Posted by: Dave Richeson | May 9, 2010
Volumes of n-dimensional balls
We all know that the area of a circle is ${\pi r^{2}}$ and the volume of a sphere is ${\displaystyle \frac{4}{3}\pi r^{3}}$, but what about the volumes (or hypervolumes) of balls of higher dimension?
For a fun exercise I had my multivariable calculus class compute the volumes of various balls using multiple integrals. The surprising results inspired this post.
First some terminology. An ${(n-1)}$-dimensional hypersphere (or ${(n-1)}$-sphere) of radius ${R}$ is the set of points in ${\mathbb{R}^{n}}$ satisfying ${x_{1}^{2}+x_{2}^{2}+\cdots+x_{n}^{2}=R^{2}}$ (I’ll place the center at the origin for simplicity). For example, a 0-sphere is the two-point set ${\{\pm R\}}$ on the real number line, a 1-sphere is a circle of radius ${R}$ in the plane, and a 2-sphere is a spherical shell of radius ${R}$ in 3-dimensional space.
An ${n}$-dimensional ball (or ${n}$-ball) is the region enclosed by an ${(n-1)}$-sphere: the set of points in ${\mathbb{R}^{n}}$ satisfying ${x_{1}^{2}+x_{2}^{2}+\cdots+x_{n}^{2}\le R^{2}}$. For example, a 1-ball is the interval ${[-R,R]}$, a 2-ball is a disk in the plane, and a 3-ball is a solid ball in 3-dimensional space.
It is possible to define “volume” in ${\mathbb{R}^{n}}$—in ${\mathbb{R}}$ it is length, in ${\mathbb{R}^{2}}$ it is area, in ${\mathbb{R}^{3}}$ it is ordinary volume, and in ${\mathbb{R}^{n}}$ it is hypervolume. Let ${V_{n}(R)}$ denote the volume of the ${n}$-ball of radius ${R}$.
$\displaystyle \begin{array}{|c|c|c|c|} \hline n & \text{equation} & \text{shape} & V_{n}(R) \\ \hline 1 & x^2\le R^2 & \text{interval}& 2R\text{ (length)} \\ \hline 2 & x^2+y^{2}\le R^2 & \text{disk} & \pi R^{2}\text{ (area)} \\ \hline 3 & x^2+y^{2}+z^{2}\le R^2 & \text{ball} & \frac{4}{3}\pi R^{3}\text{ (volume)} \\ \hline 4 & x_{1}^2+x_{2}^2+x_{3}^2+x_{4}^2\le R^2 & \text{4-dimensional ball} & \text{?? (hypervolume)} \\ \hline \vdots &\vdots & \vdots & \vdots \\ \hline \end{array}$
It turns out that the volumes of ${n}$-balls satisfy the following remarkable recursion relation. (I’ll prove this relation at the end of the post.)
$\displaystyle V_{1}(R)=2R,\, V_{2}(R)=\pi R^{2},\,\text{and }V_{n}(R)=\frac{2\pi R^{2}}{n}V_{n-2}(R),\text{ for }n\ge 3.$
It is not difficult to use this recurrence relation to obtain a formula for ${V_{n}(R)}$. In particular, when ${n}$ is even ${\displaystyle V_{n}(R)=\frac{2^{\frac{n}{2}}\pi^{\frac{n}{2}}R^{n}}{2\cdot 4\cdot 6\cdots n}=\frac{\pi^{\frac{n}{2}}R^{n}}{(\frac{n}{2})!}}$ and when ${n}$ is odd ${\displaystyle V_{n}(R)=\frac{2^{\frac{n+1}{2}}\pi^{\frac{n-1}{2}}R^{n}}{1\cdot 3\cdot 5\cdots n}}$. (If you know what the gamma function is you can express this as a single function, ${\displaystyle V_{n}(R)=\frac{\pi^{\frac{n}{2}}R^{n}}{\Gamma(\frac{n}{2}+1)}.}$)
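A quick numerical check of these formulas (my own Python sketch, not part of the original post) confirms that the closed form and the recursion agree, and that the unit ball of largest volume occurs in dimension 5:
```
from math import gamma, pi

def ball_volume(n, R=1.0):
    """V_n(R) = pi^(n/2) * R^n / Gamma(n/2 + 1)."""
    return pi**(n / 2) * R**n / gamma(n / 2 + 1)

# the recursion V_n(1) = (2*pi/n) * V_{n-2}(1)
V = {1: 2.0, 2: pi}
for n in range(3, 16):
    V[n] = 2 * pi / n * V[n - 2]

assert all(abs(V[n] - ball_volume(n)) < 1e-12 for n in range(1, 16))
print(max(range(1, 16), key=ball_volume))   # 5
print(ball_volume(5))                        # 5.263789...
```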
The volumes of the ${n}$-balls in the first 15 dimensions are given in the following table.
$\displaystyle \begin{array}{|c|c|l|} \hline n & V_{n}(R) & V_{n}(1) \\ \hline 1 & 2R& 2 \\ \hline 2 & \pi R^{2}& 3.141592654\ldots \\ \hline 3 & \displaystyle\frac{4\pi R^{3}}{3}& 4.188790205\ldots\\ \hline 4 & \displaystyle\frac{\pi^{2}R^{4}}{2}&4.934802201\ldots\\ \hline 5 & \displaystyle\frac{8\pi^{2}R^{5}}{15}&5.263789014\ldots\\ \hline 6 & \displaystyle\frac{\pi^{3}R^{6}}{6}&5.16771278\ldots\\ \hline 7 & \displaystyle\frac{16\pi^{3}R^{7}}{105}& 4.72476597\ldots\\ \hline 8 & \displaystyle\frac{\pi^{4}R^{8}}{24}& 4.058712126\ldots\\ \hline 9 & \displaystyle\frac{32\pi^{4}R^{9}}{945}& 3.298508903\ldots\\ \hline 10 & \displaystyle\frac{\pi^{5}R^{10}}{120}& 2.55016404\ldots\\ \hline 11 & \displaystyle\frac{64\pi^{5}R^{11}}{10395}& 1.884103879\ldots\\ \hline 12 & \displaystyle\frac{\pi^{6}R^{12}}{720}& 1.335262769\ldots\\ \hline 13 & \displaystyle\frac{128\pi^{6}R^{13}}{135135}& 0.910628755\ldots\\ \hline 14 & \displaystyle\frac{\pi^{7}R^{14}}{5040}& 0.599264529\ldots\\ \hline 15 & \displaystyle\frac{256\pi^{7}R^{15}}{2027025}& 0.381443281\ldots\\ \hline \end{array}$
If you look at the volumes of the unit balls you’ll see they increase at first, reaching a maximum in dimension 5. Then they decrease and tend to zero as the dimension goes to infinity. Strange!
First, what is special about dimension 5? Why is the maximum achieved in this dimension? It turns out that there is nothing special about dimension 5. Below is a GeoGebra applet that allows you to adjust the radii of the balls. As we can see, the maximum volume is not always attained by the ball in dimension 5. Indeed, as the radius increases, the maximum volume occurs in higher dimensions. As John Moeller points out, the powers of ${R}$ in the numerator try to make ${V_{n}(R)}$ an increasing function; however, the factorials in the denominator always dominate in the end.
Second, what is the intuition behind this limit of zero? One way to see this is to observe that to be on the boundary of the unit ${n}$-ball, we must have ${x_{1}^{2}+x_{2}^{2}+\cdots+x_{n}^{2}=1}$, but for this to happen when ${n}$ is large, most of the ${x_{i}}$’s must be very close to zero. For example, the line ${x_{1}=x_{2}=\cdots=x_{n}}$ intersects the ${n}$-sphere at ${\pm(\frac{1}{\sqrt{n}},\ldots,\frac{1}{\sqrt{n}})}$. On the other hand, the corresponding corners of the hypercube that circumscribes the sphere are at ${\pm(1,\ldots,1)}$, ${\sqrt{n}}$ units from the origin. Thus the sphere fills up less and less of the hypercube that contains it. (Notice that the circumscribed hypercube has volume ${2^{n}}$, while the inscribed hypercube has volume ${2^{n}/n^{\frac{n}{2}}}$.)
My colleague informed me that this zero limit is related to the curse of dimensionality in statistics. Volume increases rapidly as dimension increases, so it requires many more data points to get a good estimate. As Wikipedia points out, “100 evenly-spaced sample points suffice to sample a unit interval with no more than 0.01 distance between points; an equivalent sampling of a 10-dimensional unit hypercube with a lattice with a spacing of 0.01 between adjacent points would require ${10^{20}}$ sample points.”
Proof
Now I will prove the recurrence relation that I gave above.
Clearly the relation is true for ${n=1}$ and ${n=2}$. Suppose ${n\ge 3}$.
First recall that if a solid in ${n}$-dimensional space is scaled by a factor of ${k}$, then its volume increases by a factor of ${k^{n}}$. In particular, this implies that
$\displaystyle V_{n}(R)=V_{n}(1)R^{n}.$
Observe that the intersection of the ${(x_{1},x_{2})}$-plane with the ${n}$-ball is a disk of radius ${R}$ centered at the origin (see image below). Use polar coordinates to describe points in this disk. Then the perpendicular cross section of the ${n}$-ball at the point ${(r,\theta)}$ is an ${(n-2)}$-ball of radius ${\sqrt{R^{2}-r^{2}}}$.
Thus we can compute ${V_{n}(R)}$ by integrating ${V_{n-2}(\sqrt{R^{2}-r^{2}})}$ over the disk. We do so using polar coordinates.
$\displaystyle \begin{array}{rcl} V_{n}(R)&=&\displaystyle \int_{0}^{R}\int_{0}^{2 \pi}V_{n-2}\big(\sqrt{R^{2}-r^{2}}\big)r\,d\theta\,dr \\ &=&\displaystyle \int_{0}^{R}\int_{0}^{2 \pi}V_{n-2}(1)(\sqrt{R^{2}-r^{2}}\big)^{n-2}r\,d\theta\,dr\\ &=&\displaystyle V_{n-2}(1)\int_{0}^{R}r\big(R^{2}-r^{2})^{\frac{n-2}{2}}\theta\Big|_{0}^{2\pi}\,dr\\ &=&\displaystyle 2\pi V_{n-2}(1)\int_{0}^{R}r(R^{2}-r^{2})^{\frac{n-2}{2}}\,dr\\ &=&\displaystyle -\frac{2\pi}{n} V_{n-2}(1)(R^{2}-r^{2})^{\frac{n}{2}}\Big|_{0}^{R}\\ &=&\displaystyle 2\pi V_{n-2}(1)\frac{R^{n}}{n}\\ &=&\displaystyle \frac{2\pi R^{2}}{n}V_{n-2}(R) \end{array}$
Posted in Math, Teaching | Tags: calculus, curse of dimensionality, dimension, hyperspheres, hypervolume, polar coordinates, volume
Responses
1. Great post!
This is another good example of why you have to be careful when dealing with high-dimensional geometry. I posted a similar counter-intuitive result to do with the relative volumes of cubes and spheres a while back.
By: Mark Reid on May 9, 2010
at 2:01 pm
• Wow, that is really excellent. Thanks for sharing the link.
By: Dave Richeson on May 10, 2010
at 3:10 pm
2. This volume computation is done in the book Symmetric Bilinear Forms (By Milnor, Husemoller) as part of the full classification of indefinite integral inner product spaces (by their rank, type, and signature).
There, it is shown that there is indeed something special about n=5; using Minkowski’s convex body theorem, it is more natural to look at the behavior of $4/(V_n(1))^{2/n}$, which is larger than 2 for n>4, which makes the solution to the classification problem essentially different for n=5.
By: Farbod Shokrieh on May 9, 2010
at 4:43 pm
• The “n=5” at the end of my previous comment should be “n ≥ 5”.
By: Farbod Shokrieh on May 9, 2010
at 8:12 pm
• Thanks. Very interesting. I’ll be sure to track down that reference.
By: Dave Richeson on May 10, 2010
at 8:30 am
3. Interesting post. I have to say though that comparing length to area to volume to hypervolume and so on makes little sense, IMHO it would be better to plot the fraction of the hypercube each sphere occupies.
(there is a typo in second table should be R^3 in n=3)
By: Paul on May 9, 2010
at 4:49 pm
• I agree—we’re comparing apples to oranges to bananas here (which doesn’t make it less fun to do). I like your idea. I’ll have to give that a try when I get a chance.
Thanks for catching the typo. It is fixed now.
By: Dave Richeson on May 10, 2010
at 8:32 am
4. There was a nice MathOverflow question about the volume tending to zero which attracted a lot of great answers.
By: Qiaochu Yuan on May 9, 2010
at 8:29 pm
• Wow. Excellent. Thanks for providing that link.
By: Dave Richeson on May 10, 2010
at 8:34 am
5. Fleming’s 5.9, Functions of Several Variables has a very nice discussion of this. Not the curse of dimensionality bit, though – that was a nice tie-together.
By: sherifffruitfly on May 12, 2010
at 11:55 pm
6. Wait a minute.
“Volume increases rapidly as dimension increases”
Now look at your dim/vol plot.
Huh?
By: sherifffruitfly on May 13, 2010
at 9:58 am
• Yes, that’s not so clear. Here’s what I meant: a hypercube of side length $x$ in dimension $n$ has hypervolume $x^n$. So if $x>1$ and $n$ is large, then this volume is very large.
By: Dave Richeson on May 13, 2010
at 12:45 pm
• Ah – different object – gotcha.
That suggests a possibly interesting investigation I haven’t seen before: how does the volume of the unit n-cube or sphere vary with the *metric*. (You changing objects from sphere to cube suggested metric change to me because under other metrics, “circles” become “squares”.)
I would be curious to see a discussion of what, if anything, general could be said about the measure of typical sets as one varies the metric – especially with convergent sequences of metrics and the like. Knew I should’ve taken that functional analysis class. :P
By: sherifffruitfly on May 13, 2010
at 1:58 pm
7. Really interesting. Thanks.
By: DavidC on May 13, 2010
at 11:20 pm
8. [...] I’ve also been meaning to point you to Dave Richeson’s recent blog post about the volume of n-Dimensional balls. I would also be remiss to not mention the recent passing of world famous, and well-loved [...]
By: More Press « Random Walks on June 5, 2010
at 11:26 am
9. Another Math Overflow posting that had some interesting thoughts on thinking in higher dimensions. Particularly interesting post by Terry Tao:
“For instance, the fact that most of the mass of a unit ball in high dimensions lurks near the boundary of the ball can be interpreted as a manifestation of the law of large numbers, using the interpretation of a high-dimensional vector space as the state space for a large number of trials of a random variable.”
By: Matt McDonnell on June 7, 2010
at 4:23 pm
• Thanks! I’ve just started reading MathOverflow—I’ve known about it for a while, but have been too busy to dive in. It is great. Thanks for pointing this question out to me.
By: Dave Richeson on June 7, 2010
at 4:46 pm
10. [...] have been several blog posts lately on the volume of balls in higher dimensions that correspond to the case p = 2. The formula [...]
By: Volumes of generalized unit balls — The Endeavour on July 3, 2010
at 12:14 am
11. Re string theory– if the visible universe is a 9-d object w/ 3d of ~ (10^26)m and 6d of ~(10^-35)m, its diameter is then ~(10^-14.67)m, about the diameter of a proton. No part of our universe is any farther than this from any other part- considered hyperspatially. This may be a boring and inconsequential factoid to STists, but it’s bogglesome enough that the pop-sci press should be all over it.
By: AGNOSTIKOS on July 12, 2010
at 6:39 pm
12. [...] To begin our exploration of this phenomenon in higher dimensions we turn to Dave Richeson’s excellent account of the volumes of balls in higher dimensional spaces. [...]
By: Some peculiarities of higher dimensional spaces « Republic of Mathematics on July 30, 2010
at 3:41 pm
13. For n=5, pi should be squared, not cubed.
By: Hawaii on December 3, 2012
at 1:58 am
• Thanks! Fixed.
By: Dave Richeson on December 4, 2012
at 6:32 pm
14. [...] ovan är baserad på följande artikel på engelska, som även innehåller beviset för rekursionsformeln. Stort [...]
By: Mattebloggen » Blog Archive Bollvolymer i n dimensioner » Mattebloggen on April 17, 2013
at 5:43 pm
http://crypto.stackexchange.com/tags/ecdsa/hot
Tag Info
Hot answers tagged ecdsa
12
How to provide secure “vanity” bitcoin address service?
I don't believe that there's any way to generate the vanity hashes without iterating. In base 58, there's $\log_2(58) \approx 5.858$ bits per letter, so fixing 8 letters would need on average $58^8/2 = 2^{\log_2(58)·8}/2 \approx 2^{46}$ iterations. Note that Bitcoin addresses always start with a 1 by convention (this comes from the version field), and ...
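A one-line sanity check of that estimate (my own sketch):
```
from math import log2

expected = 58**8 / 2                 # expected number of candidate keys to try
print(expected, log2(expected))      # approx. 6.4e13, i.e. about 2**45.9
```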
9
How strong is the ECDSA algorithm?
First of all, I'm no expert in this area. Generally $n$ bit ECC seems to have a security level of about $n/2$, but I found some claims that it's lower for certain types of curves. RFC4492 - Elliptic Curve Cryptography (ECC) Cipher Suites for Transport Layer Security (TLS) - contains the following table: Symmetric | ECC ...
7
Using same keypair for Diffie-Hellman and signing
On a general basis, you want to keep encryption and signature keys disjoint, because they tend to have distinct life cycles. In broad terms, an encryption key should be escrowed, because loss of the private key implies loss of the data which is encrypted relatively to the public key. However, a signature key must not be escrowed, since the proof value of a ...
7
What is the signature scheme with the fastest batch verification protocol for multiple signers?
Rabin signatures have a very fast verification algorithm: a simple squaring modulo some integer. RSA signature verification (with a public exponent equal to 3) is also very fast. These signature algorithms are simple to implement and will beat ECDSA for verification speed, even if batch verification is used for ECDSA. The Niederreiter digital signature ...
7
Secp256k1 test examples
Here are five test vectors for secp256k1, which I just generated with my own code. My code is a generic implementation of elliptic curves; it has been tested for many curves for which test vectors were available (in particular the NIST curves) so I tend to believe that it is correct. Each test vector is a value $m$ (chosen randomly modulo the curve order ...
7
Can one reduce the size of ECDSA-like signatures?
ECDSA is actually a kind-of computational zero-knowledge protocol, played by the signer, with a "reduction function" as impartial verifier. For that matter, ECDSA is not very different from plain DSA. Things basically go this way. There is a known public group $\mathbb{G}$ which I will denote additively, with $G$ as generator, and of size $q$ (a known prime ...
6
Can ECDSA signatures be safely made “deterministic”?
There is a draft RFC which describes a way to implement deterministic (EC)DSA (with test vectors). In this draft, both $h(m)$ (the hash of the message) and $x$ are used as input to a deterministic PRNG which uses HMAC (that's HMAC-DRBG as specified by NIST); the PRNG output is used to yield $k$. I am not sure your simple multiplication with $x$ would be ...
6
X9.62 Multiplying an elliptic curve point by a number
Well, you understand that Elliptic Curves define an operation on points we denote as +; that is, if $A$ and $B$ are two (not necessarily distinct) points, then $A+B$ is a third point (which will be distinct unless either $A$ or $B$ are the 'point-at-infinity'). If $A$ and $B$ are the same, the operation is usually called doubling instead of addition. Now, ...
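The (truncated) answer is heading toward scalar multiplication built from those two operations; here is a minimal double-and-add sketch of my own, with `point_add`/`point_double`/`identity` as placeholders for a concrete curve implementation:
```
def scalar_multiply(k, P, point_add, point_double, identity):
    """Compute k*P with about log2(k) group operations (left-to-right double-and-add)."""
    result = identity
    for bit in bin(k)[2:]:              # binary digits of k, most significant first
        result = point_double(result)   # doubling the identity leaves it unchanged
        if bit == '1':
            result = point_add(result, P)
    return result

# sanity check with a stand-in group (integers mod 97 under addition):
assert scalar_multiply(20, 5, lambda a, b: (a + b) % 97,
                       lambda a: (2 * a) % 97, 0) == (20 * 5) % 97
```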
6
What is the signature scheme with the fastest batch verification protocol for multiple signers?
There isn't a simple answer, as speed of batching depends on a number of parameters. First, the speed of the signature and the speed of the batching is largely independent. If you have two signature algorithms S1 and S2 that both permit batching technique B1, then generally they will both permit batching technique B2. If S1 is faster than S2 for individual ...
5
Why doesn't this replay attack work on ECDSA?
You got tripped up by the fact that there are two different group operations in play here, and they don't play nice with each other. This is implicit in the notation, and it's easy to get tripped up, because the notation expresses both operations in the same way -- but they are not the same. This is arguably a pitfall in the notation: the assumption is ...
5
What is the signature scheme with the fastest batch verification protocol for multiple signers?
I recommend you use Rabin signatures. Rabin signatures without batch verification are likely to be faster than most other signatures with batch verification. Moreover, read Dan Bernstein's work. He has shown how to make Rabin signatures even faster. For standard Rabin signatures, verification requires approximately one modular multiplication modulo n ...
4
Why are MACs in general deterministic, whereas digital signature constructions are randomized?
One rationale for avoiding randomized schemes in general, and in MACs in particular, is that the randomness in such schemes tends to increase the size of cryptograms or reduce the size of the payload. An example is scheme 2 in ISO/IEC 9796-2 RSA signature with message recovery, where the size of the random/salt field is directly antagonistic to the amount of ...
4
What is the signature scheme with the fastest batch verification protocol for multiple signers?
I'm surprised that Daniel J. Bernstein's EdDSA has not been mentioned. High-speed high-security signatures Even faster batch verification. The software performs a batch of 64 separate signature verifications (verifying 64 signatures of 64 messages under 64 public keys) in only 8.55 million cycles, i.e., under 134000 cycles per signature. The ...
4
Making ECDSA public keys one bit shorter
I think your question can be reduced to the question whether there is a significant subset of weak public/private key pairs in any of the EC groups you mention. I am not aware of any such weakness, but if it exists, it would put a large dent in the security of Elliptic Curve Cryptography as a whole. If there is no significant risk you will get a key pair ...
4
Converting a DER ECDSA signature to ASN.1
Disclaimer: I don't know Javascript and I do not practice BouncyCastle. However, I do know Java, and ASN.1. ASN.1 is a notation for structured data, and DER is a set of rules for transforming a data structure (described in ASN.1) into a sequence of bytes, and back. This is ASN.1, namely the description of the structure which an ECDSA signature exhibits: ...
4
Why is 2 the inverse of 10?
It looks like your main question is determining why $k{_{E}}^{-1} = 2$, correct? As mentioned in the comments to the question, this is because it is the modular multiplicative inverse. The multiplicative inverse is a number, $x^{-1}$, such that $x·x^{-1}=1$. However, since we are in modulo 19, we want to find $x^{-1}$ such that $x·x^{-1}\equiv1 \bmod 19$. ...
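In code this is a one-liner (Python 3.8+), just as a quick illustration of the modular inverse:
```
assert pow(10, -1, 19) == 2             # since 10 * 2 = 20 ≡ 1 (mod 19)
assert (10 * pow(10, -1, 19)) % 19 == 1
```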
4
Using same keypair for Diffie-Hellman and signing
The paper "On the Joint Security of Encryption and Signature in EMV" shows that ECIES and EC-Schnorr signatures can be used together without compromising security: In the random oracle model ECIES-KEM and EC-Schnorr are jointly secure if the gap-DLP problem and gap-DH problem are both hard Ed25519 is extremely similar to EC-Schnorr, and both ECIES ...
3
How to properly add ECDSA private keys?
Well, the normal rules apply, i.e. $(aG + bG) = (a + b)G$, so as long as you add $a$ and $b$ correctly, everything should work fine. Note that you don't actually have to reduce the result of the addition for the point multiplication to give the same result, however your implementation may require the number to be smaller than the order. Also make sure that ...
3
How is the x coordinate of a “point at infinity” encoded in a Secp256k1 signature?
Well, the most common representation of 'point at infinity' would be a value that consists solely of zeros; that is, if normal points are encoded as a series of 64 bytes, then the point at infinity would be encoded as 64 00 bytes. On the other hand, it wouldn't appear to apply to ECDSA; ECDSA signatures consist of two integers between 1 and the curve order, ...
3
How to generate a public key from a private ECDSA key?
The write up on Wikipedia is pretty good. I won't go into all the detail that they do there, but your private key is a randomly selected integer $d_A$ selected from $[1,n-1]$ where $n$ is the order of the group. The public key is $Q_A=d_AG$ where $G$ is the base point on the curve defined in the publicly agreed upon parameters.
3
Academic papers on ECDSA security
I'm just reading the book Advances in Elliptic Curve Cryptography, and Chapter II (by Dan Brown) is about provable security of ECDSA. It lists some necessary conditions for the ECDSA components (group, conversion function, RNG, hash function), each with an associated forgery. For example, the group has to be resistant against discrete logarithms, as well as ...
3
Can A PRNG Be Used To Generate Multiple Private Keys for ECDSA?
The most important property of cryptographic PRNGs is that it's indistinguishable from true random numbers, unless you know the seed, or you have huge computational resources. Two important consequences of this requirement are: You can't find the seed from observing the output You can't predict more outputs from observing some of them. An attacker who ...
3
Can ECDSA signatures be safely made “deterministic”?
In their 1998 SAC paper, M'Raihi et al showed how to use hash functions to turn Schnorr signatures (quite similar to (EC)DSA) deterministic, and proved that if the original signature scheme (with randomness) is secure, so is the deterministic one. Bernstein et al's recent EdDSA signature scheme uses the same technique to avoid randomness.
2
Can one reduce the size of ECDSA-like signatures?
I don't think there exists an algorithm that could exploit the public key recovery feature in order to compress digital signatures, but even if such an algorithm existed, you would typically not want to use it. If you remove the information that determines the public key $Q$ from the signature $(r,s)$, it would seem plausible to assume that it would become a ...
2
How can I use Weierstrass curve operations with a=-3 for implementing operations for a=0?
I have not thoroughly investigated golang's elliptic library (or Go at all), but I have implemented elliptic curves (with Jacobian coordinates) and I would say that your guess is correct. The "$a$" parameter is not used in the addition of two distinct points, but it appears in the formulas for doubling a point. With Jacobian coordinates, a normal ...
2
Elliptic curves for ECDSA
It is easier to generate a point with order $n$ than to find out the order of a random point: Generate a random point $G'$ (generate random $x$ and solve for $y$) Compute $G = hG'$ (multiply by cofactor) This is guaranteed to generate a point $G$ with order either $n$ or $1$ (the point at infinity). The chance of generating the point at infinity ...
2
How is the x coordinate of a “point at infinity” encoded in a Secp256k1 signature?
Standard encoding of the point at infinity is a single byte of value 0x00 (it is defined as such as least in P1363, possibly also in X9.62). Other representations may exist (such as a lot of bytes of value 0x00), but, in truth, the "point at infinity" does not have well-defined X and Y coordinates. In the case of ECDSA, you generate a random value k which ...
2
Storage of Private Keys
About the best you can do is have a master public/private key pair where the public key is stored on your server and the private key is stored offline. When you generate a new private key, encrypt it with the master public key and store that in the database. That way, if a password is ever lost, you can recover the user's private key by using the master ...
1
Is there a method to break an EC curve for all key-pairs (Q,d) such that (Q=d*G) faster than breaking every single key-pair?
Well, Big-Step/Little-Step can be written as a precompute-table and then lookup type algorithm, however, it doesn't become close to practical with a 160 bit field. Here's how Big-Step/Little-Step works; we first select two integers $a$ and $b$ with $ab \ge size(group)$ (I consistently talk about group rather than the curve; that's because ...
1
Signatures: RSA compared to ECDSA
I'm considering switching to ECDSA, would this require less space with the same level of encryption? The answer to that question is yes: both ECDSA signatures and public keys are much smaller than RSA signatures and public keys of similar security levels. If you compare a 192-bit ECDSA curve to a 1k RSA key (which are roughly the same security ...
http://mathoverflow.net/questions/120822?sort=votes
## When is a sheaf of groups (algebras, rings, modules) a group (algebra, ring, module)?
If $\pi:E\to M$ is a vector bundle then the set of sections $\Gamma(E)$ is naturally a vector space under fibrewise addition and scalar multiplication on the bundle $E$. This holds similarly for bundles of algebras or modules, though I'm not sure if it holds for bundles of groups (certainly not for principal bundles). The main example I have in mind is the algebra $\mathcal{C}^\infty (M)$ of smooth real-valued functions on a smooth manifold (just considering the ring structure of the reals, not the field structure).
Now, given a sheaf $\mathcal{O}$ with values in some category $\mathcal{C}$, when is $\mathcal{O}_X$ an object in $\mathcal{C}$?
As the examples above show (basically abelian groups with extra structures) this is true when $\mathcal{O}_X$ is the sheaf of sections of a fibre bundle whose fibres are objects of $\mathcal{C}$. Another relevant question would be if this is indeed the case for locally trivial fibrations only. Concretely:
Is it true that the sheaf $\mathcal{O}_X$ is an object of $\mathcal{C}$ only when $\mathcal{O}_X$ is the sheaf of sections of a fibre bundle $B$ over $X$ whose fibres $B_x$ are objects of $\mathcal{C}$?
-
1
"sheaf of sections of a fibre bundle $B$ over $X$ whose fibres $B_x$ are objects of $C$" Note that this makes $B$ a '$C$-object' in the category of spaces over $X$, when '$C$-object' is suitably interpreted. From your examples, you are considering algebras for finite-product theories, which make perfect sense in the slice category $Top/X$. Then as Steven points out in his answer, global sections form a $C$-object because of (some abstract nonsense about preservation of finite limits). – David Roberts Feb 5 at 2:46
## 1 Answer
A sheaf ${\cal O}$ is a fortiori a presheaf. This means it's a functor that takes the category of open sets in $X$ to your fixed category ${\cal C}$. So, by definition, $\Gamma(X,{\cal O})={\cal O}(X)$ is an object of ${\cal C}$.
-
http://mathhelpforum.com/geometry/155356-reflection-line.html
# Thread:
1. ## Reflection in a line
Couldn't find much useful on the net for this so I'll ask here.
If you're given a question such as find the reflection of a point (12, 1) in the line y=7x, how would you go about this?
I think there's a way using some trig to measure how far the point is away from the line and then using that some how but not really sure how to go about it or if it will work.
Thanks
2. Originally Posted by alexgeek
Couldn't find much useful on the net for this so I'll ask here.
If you're given a question such as find the reflection of a point (12, 1) in the line y=7x, how would you go about this?
I think there's a way using some trig to measure how far the point is away from the line and then using that some how but not really sure how to go about it or if it will work.
Thanks
To find the symmetric point across the line, you should first find the distance from the point to the line. Suppose the line is given by
$\displaystyle \frac {x-x_1}{m} = \frac {y-y_1}{n}=\frac {z-z_1}{p}$
and you have some point $A_2 (x_2 ,y_2 , z_2)$.
To find the distance between the line $l$ and the point $A_2$ you can do it like this (or you can use a formula, but let's do it this way so you understand what is done and how).
Let's first mark one point on the line as $A_1 (x_1, y_1, z_1)$ and put the start of the line's direction vector $\vec {q} = (m,n,p)$ at that point $A_1$ (now look at the picture shown, then continue).
The distance $d$ from the point $A_2$ to the line $l$ is equal to the ratio of the area of the parallelogram constructed on the vectors $\vec{q}$ and $\vec {A_1 A_2}$ to the length of the vector $\vec {q}$,
as shown:
$\displaystyle d= \frac {|\vec{q} \times \vec {A_1 A_2 }|}{|\vec {q}| }$
or in scalar form
$\displaystyle d = \frac {\sqrt{ \begin{vmatrix} y_1-y_2 & z_1-z_2 \\ n & p \end{vmatrix}^2 + \begin{vmatrix} x_1-x_2 & z_1-z_2 \\ m & p \end{vmatrix}^2 +\begin{vmatrix} x_1-x_2 & y_1-y_2 \\ m & n \end{vmatrix}^2 }}{\sqrt{m^2+n^2+p^2}}$
can you figure the rest ? i'll go now bye
Attached Thumbnails
3. Think I found what I need using the matrix $\begin{pmatrix} \cos 2 \theta & \sin 2 \theta \\ \sin 2 \theta & - \cos 2 \theta \end{pmatrix}$ to reflect a point in the line $y=(\tan \theta)x$
-edit-
Sorry seemed to post at the same time as you, I'm just reading your post now.
-edit 2-
Ok, brains stopped working now.. too much maths for one day. Have a look at that again tomorrow ha
Thanks!
4. Hello, alexgeek!
$\text{Find the reflection of a point }(12, 1)\text{ in the line }y\,=\,7x$
Code:
```
            |
   Q        |           / y = 7x
    o       |          /
       *    |         /
          * |        /
            |*      /
            |   *  /
            |     o M
            |    /  *
            |   /     *
            |  /         *
            | /            o (12,1)
            |/             P
  - - - - - + - - - - - - - - - -
            |
```
We want a line perpendicular to $y \,=\,7x.$ .[1]
This line has slope $\text{-}\frac{1}{7}$ and contains (12,1).
. . Its equation is: . $y -1 \:=\:\text{-}\frac{1}{7}(x-12) \quad\Rightarrow\quad y \:=\:\text{-}\frac{1}{7}x + \frac{19}{7}$ .[2]
$M$ is the intersection of [1] and [2].
. . $7x \:=\:\text{-}\frac{1}{7}x + \frac{19}{7} \quad\Rightarrow\quad 7x + \frac{1}{7}x \:=\:\frac{19}{7} \quad\Rightarrow\quad \frac{50}{7}x \:=\:\frac{19}{7}$
. . $x \:=\:\frac{19}{50} \quad\Rightarrow\quad y \:=\:\frac{133}{50}$
Point $M$ is $\left(\dfrac{19}{50},\:\dfrac{133}{50}\right)$
Let point $Q$ be $(x,y)$
Note that $M$ is the midpoint of $PQ.$
So we have: . $\begin{Bmatrix}\dfrac{x+12}{2} &=& \dfrac{19}{50} \\ \\[-3mm] \dfrac{y+1}{2} &=& \dfrac{133}{50} \end{Bmatrix}$
Solve the two equations: . $x \,=\,\text{-}\dfrac{281}{25},\;y \,=\,\dfrac{108}{25}$
Therefore: . $Q\left(\text{-}\dfrac{281}{25},\;\dfrac{108}{25}\right)$
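As a cross-check (my own sketch, not part of the original thread), the reflection matrix from post #3 with $\tan\theta = 7$ gives the same point:
```
from fractions import Fraction

# For y = 7x: tan(t) = 7, so cos(2t) = (1 - 49)/(1 + 49), sin(2t) = 2*7/(1 + 49)
c2 = Fraction(1 - 49, 1 + 49)    # -24/25
s2 = Fraction(2 * 7, 1 + 49)     #   7/25

x, y = 12, 1
print(c2 * x + s2 * y, s2 * x - c2 * y)   # -281/25 108/25
```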
http://mathoverflow.net/questions/60590?sort=newest
## Capacity of Balls in Hyperbolic Space
Given $M$ a Riemannian manifold and $\Omega\subset M$ the capacity of $\Omega$ is defined as $$\mathrm{cap}(\Omega)=\inf \int_{M\setminus\Omega}{|\mathrm{grad} \varphi|^2 dV}$$ where $\varphi$ ranges over all continuous, compactly supported functions on $M\setminus\Omega$ which are $C^{\infty}$ on $M\setminus\overline{\Omega}$ and which are equal to 1 on $\partial\Omega$.
Is it known what is the capacity of a ball of radius $r$ in the $n$-th dimensional hyperbolic space?
-
## 2 Answers
The capacity of a set $\Omega$ is known to be $$\mathrm{cap}(\Omega)=-\int_{\partial\Omega}{\frac{\partial \Phi}{\partial \nu}dA}$$ where $\Phi$ is a harmonic function on $M\setminus\Omega$ with $\Phi|_{\partial\Omega}=1$ (and tending to $0$ at infinity), $A$ is the $(n-1)$-dimensional area of $\partial\Omega$, and $\frac{\partial}{\partial\nu}$ is the normal derivative along $\partial\Omega$, exterior to $\Omega$.
The Laplacian in the hyperbolic space in polar coordinates has the form: $$\Delta_{H^{n}} f(t,\xi) = \sinh^{1-n}t \frac{\partial}{\partial t}\left(\sinh^{n-1}t\frac{\partial f}{\partial t}\right) + \sinh^{-2}t\Delta_\xi f$$ where $\Delta_\xi$ is the Laplace–Beltrami operator on the ordinary unit $(n-1)$-sphere.
Therefore, if $f$ is a radial harmonic function then: $$\Delta_{H^{n}}(f)=0 \iff (n-1)\cosh(t)f'(t)+\sinh(t)f''(t)=0$$ and, since the geodesic sphere of radius $r$ has area $\mathrm{vol}(S^{n-1})\sinh^{n-1}(r)$, $$\mathrm{cap}(B(x,r))=-\mathrm{vol}(S^{n-1})\sinh^{n-1}(r)\,f'(r)$$
I'm in a rush I hope it makes sense!
-
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
Sounds like homework; here are some hints:
The capacity is equal to the integral of |gradient|$^2$ of the spherically symmetric harmonic function which is 1 on the boundary of the ball and zero at infinity. The function $f$ depends only on the radius, say $r$. You can cook an ODE for $f$, something like $$f''(r)+\frac{(n-1){\cdot}\cosh r}{\sinh r}{\cdot}f'(r)=0.$$ Then you should solve it and integrate $$\mathrm{vol}\, S^{n-1}{\cdot}\int\limits_R^\infty(f')^2{\cdot}(\sinh r)^{n-1}\, dr.$$
(I do not know if you will get the answer in a simple form, but it will be good for all practical purposes...)
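Carrying the hint through (my own computation, not part of either answer): the ODE says $\sinh^{n-1}(r)\,f'(r)$ is constant, so with the boundary conditions $f(R)=1$ and $f(\infty)=0$ one gets $$f(r)=\frac{\int_r^\infty \sinh^{1-n}(t)\,dt}{\int_R^\infty \sinh^{1-n}(t)\,dt},\qquad \mathrm{cap}\big(B(R)\big)=\frac{\mathrm{vol}\,S^{n-1}}{\int_R^\infty \sinh^{1-n}(t)\,dt}.$$ For example, in $H^3$ the integral is $\coth R-1$, so $\mathrm{cap}(B(R))=\frac{4\pi}{\coth R-1}=2\pi\big(e^{2R}-1\big)$.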
-
http://mathhelpforum.com/math-topics/134001-astronaut.html
# Thread:
1. ## Astronaut
A 68.5 kg astronaut is doing a repair in space on the orbiting space station. She throws a 2.40 kg tool away from her at 3.60 m/s relative to the space station.
With what speed and in what direction will she begin to move?
I figured it would be 3.60 m/s in the opposite direction, but this answer doesn't seem to be correct and I'm not sure exactly why.
any help would be appreciated thank you.
2. Originally Posted by elexis10
A 68.5 kg astronaut is doing a repair in space on the orbiting space station. She throws a 2.40 kg tool away from her at 3.60 m/s relative to the space station.
With what speed and in what direction will she begin to move?
I figured it would be 3.60 m/s in the opposite direction, but this answer doesn't seem to be correct and I'm not sure exactly why.
any help would be appreciated thank you.
This is a conservation of momentum problem. The astronaut's momentum will be $-p$, where $p$ is the momentum of the tool. Momentum is $m \bold{v}$.
CB
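Working the numbers from that hint (assuming the masses are in kilograms and the speed in metres per second): the system starts at rest, so the astronaut's momentum must cancel the tool's,
$m_{tool} v_{tool} = (2.40)(3.60) = 8.64 \text{ kg m/s}, \qquad v_{astronaut} = \frac{8.64}{68.5} \approx 0.126 \text{ m/s},$
directed opposite to the thrown tool.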
http://math.stackexchange.com/questions/193977/question-on-logical-inferences/194007
# Question on logical inferences
The instructions for this question are:
Encode the following arguments and show whether they are valid or not. If not valid give countermodels i.e., truth assignments to the propositions which make them false.
If the investigation continues, then new evidence is brought to light. If new evidence is brought to light, then several leading citizens are implicated. If several leading citizens are implicated, then the newspapers stop publicizing the case. If continuation of the investigation implies that the newspapers stop publicizing the case, then bringing to light of new evidence implies that the investigation continues. The investigation does not continue. Therefore, new evidence is not brought to light.
My attempt:
Let $p$ denote "the investigation continues"
$\quad \space \space q$ denote "new evidence is brought to light"
$\quad \space \space r$ denote "several leading citizens are implicated"
$\quad \space \space s$ denote "the newspapers stop publicizing the case"
So, the premises are:
$p \to q$
$q \to r$
$r \to s$
$(p \to s) \to (q \to p)$
$\neg p$
$\therefore \neg q$
I am very much a beginner in logic so I am not sure if this is correct so far or how to prove or disprove this using the standard rules of inference. Any ideas?
-
## 5 Answers
Here's a simplification of Zhen Lin's argument. We know $$\tag A (p \to s) \to (q \to p)$$ Contrapose this: $$\tag B (q\land\neg p) \to (p\land \neg s)$$ Now I'm going to prove $q\to p$ by contradiction. We assume its negation $$\tag C q\land \neg p$$ and from this and (B) by modus ponens we can then conclude $p\land \neg s$ and in particular $p$. However (C) also trivially implies $\neg p$ which is a contradiction.
Thus, given (A) I have proved $$\tag D q\to p$$ (And in contrast to Zhen I didn't even need the $\neg p$ premise to do so. On the other hand, my proof is not intuitionistically valid).
Now the story explicitly tells us $\neg p$. Therefore, by modus tollens and (D), we have $\neg q$. Q.E.D.
-
Looks like you missed a couple.
The fourth statement should be:
````(p -> s) -> (q -> p)
````
So we have:
````p→q
q→r
r→s
(p→s)→(q→p)
¬p
∴¬q
````
So by the first three premises we have p->s. Hence q -> p. So not p implies not q.
I think this checks out, but my formal logic training is a little weak.
-
You meant, for the fourth premiss, $$(p \to s) \to (q \to r)$$ Which might help! Intuitively, the first three premisses in PL give you $$p \to s$$ Modus ponens with the corrected fourth premiss yields $$q \to r$$ Modus tollens with the fifth premiss gives the conclusion. So it is just a question of following these steps in your preferred formal system.
A general point about such questions in elementary logic texts, however. You should always ask yourself, when translations involving conditionals or "implies" are concerned, whether the validity of the supposed rendering into PL shows (i) that the original argument is valid, or (ii) that the material conditional is here a bad translation of the intuitive content of the original. (After all, you are being asked whether the original argument is valid, and showing a certain rendering of it into PL is valid only settles the matter if the rendering is a good one: and where vernacular conditionals are involved, things can easily go wrong. So you should always, when answering, indicate whether you think that showing the PL argument is valid establishes what is asked.)
-
Edit. I assume you meant $(p \to s) \to (q \to p)$ in your fourth premise – but it doesn't actually matter, as we shall see.
1. Ex falso quodlibet, so $$\lnot p, p \vdash s$$ and by conditional proof, $$\lnot p \vdash (p \to s)$$
2. By modus ponens, $$\lnot p, ((p \to s) \to (q \to p)) \vdash (q \to p)$$
3. By contraposition, $$\lnot p, (q \to p) \vdash \lnot q$$
4. Putting it all together, $$\lnot p, ((p \to s) \to (q \to p)) \vdash \lnot q$$
So it turns out the other premises $p \to q, q \to r, r \to s$ are irrelevant for this deduction.
-
+1 Sneaky! ${}{}$ – Henning Makholm Sep 11 '12 at 8:50
On the other hand, what this sneaky reasoning suggests to me is that perhaps $(p\to s)\to(q\to p)$ doesn't fully represent the meaning of "If continuation of the investigation implies that the newspapers stop publicizing the case, then bringing to light of new evidence implies that the investigation continues." The more I stare at that sentence the less sure am I what it is intended to convey. (I'm attempting to pretend here that it's not just a back-translation of a propositional formula). Perhaps this natural-language use of "implies" is better modeled as something like $\Box(p\to s)$? – Henning Makholm Sep 11 '12 at 9:16
I agree – there's something very unsatisfactory about a proof that starts with ex falso quodlibet, but it was the first one that came to mind after playing with boolean valuations. On the other hand we can instead substitute $(p \to q), (q \to r), (r \to s) \vdash (p \to s)$ and deduce $q \to p$ by modus ponens. The story makes little sense, in any case. – Zhen Lin Sep 11 '12 at 9:23
The implication chain $p\to q\to r\to s$ is doubtlessly what the exercise poser had in mind -- since it is what you get if you strive to conclude something new immediately when you get a new piece of information and use each premise only once. Your solution is better, I think, because it highlights the semantic trouble with the story. – Henning Makholm Sep 11 '12 at 9:35
The text of the problem does not state that you have to use the rules of inference. To show that $\neg q$ holds when $\neg p$ and $(p \rightarrow s)\rightarrow(q \rightarrow p)$ hold, you can also use truth tables. The lines where $\neg p$ and $(p \rightarrow s)\rightarrow(q \rightarrow p)$ hold are lines 1, 2, 3 and 4. For these lines the last column is 1, so $\neg q$ is true.
$$\begin{array}{ccccccccccccc} &n&p&q&r&s&p \rightarrow q&q \rightarrow r&r \rightarrow s&(p \rightarrow s)\rightarrow(q \rightarrow p)& \neg p& \neg q\\\hline &1&0&0&0&0&1&1&1&1&1&1\\ &2&0&0&0&1&1&1&1&1&1&1\\ &3&0&0&1&0&1&1&0&1&1&1\\ &4&0&0&1&1&1&1&1&1&1&1\\ &5&0&1&0&0&1&0&1&0&1&0\\ &6&0&1&0&1&1&0&1&0&1&0\\ &7&0&1&1&0&1&1&0&0&1&0\\ &8&0&1&1&1&1&1&1&0&1&0\\ &9&1&0&0&0&0&1&1&1&0&1\\ &10&1&0&0&1&0&1&1&1&0&1\\ &11&1&0&1&0&0&1&1&1&0&1\\ &12&1&0&1&1&0&1&1&1&0&1\\ &13&1&1&0&0&1&0&1&1&0&0\\ &14&1&1&0&1&1&0&1&1&0&0\\ &15&1&1&1&0&1&1&1&1&0&0\\ &16&1&1&1&1&1&1&1&1&0&0 \end{array}$$
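For readers who prefer to let a machine do the case analysis, the same conclusion can be checked by brute force over all $2^4$ truth assignments. A small sketch in Python, using only the standard library (the helper `implies` is defined here, not a library function):
```python
from itertools import product

def implies(a, b):
    # material conditional: a -> b
    return (not a) or b

counterexamples = []
for p, q, r, s in product([False, True], repeat=4):
    premises = (implies(p, q) and implies(q, r) and implies(r, s)
                and implies(implies(p, s), implies(q, p))
                and not p)
    conclusion = not q
    if premises and not conclusion:
        counterexamples.append((p, q, r, s))

print(counterexamples)   # [] -- no assignment satisfies the premises while
                         # falsifying the conclusion, so the argument is valid
```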
-
http://math.stackexchange.com/questions/208109/proof-of-ramsey-theorem-with-explicit-use-of-ac?answertab=active
# Proof of Ramsey Theorem with explicit use of AC
What are the minimal axiomatic requirements to prove Ramsey's Theorem?
This one:
Let $X$ be some countably infinite set and colour the elements of $X^{(n)}$ (the subsets of $X$ of size $n$) in $c$ different colours. Then there exists some infinite subset $M$ of $X$ such that the size $n$ subsets of $M$ all have the same colour.
How can one prove it explicitly using the Countable Axiom of Choice?
-
Why do you think that the axiom of choice is used in the proof? – Carl Mummert Oct 6 '12 at 8:29
@Carl: Probably because (part of) it is used in both of the most common proofs. The slightly grubby one uses $AC_\omega$; the neat one uses $BPI$. – Brian M. Scott Oct 6 '12 at 8:34
@Carl: For the same reason there are people who think the axiom of choice is involved in choosing a single element from an infinite set, perhaps. – Asaf Karagila Oct 6 '12 at 9:16
– Shahab Oct 7 '12 at 3:31
## 4 Answers
EDIT: The question was about a countably infinite set, not an arbitrary infinite set, so this post does not answer the original question. I've decided to leave this answer here anyway, since it might still be interesting for the OP if he wants to know more about the relation of various versions of the Ramsey Theorem to AC.
You can probably find some results about the strength of the Ramsey Theorem as a choice principle in Halbeisen's book Combinatorial Set Theory, Chapter 5, where several related choice principles are mentioned.
A pdf-file with draft version of this book can be found at the website of the course on set theory he is teaching. (I'd say that version is very close to the final version, which was published.)
I'll quote some relevant results, proofs and more details can be found in the book.
$C(\aleph_0,\infty)$: Every countable family of non-empty sets has a choice function (this choice principle is usually called Countable Axiom of Choice).
RPP: If $X$ is an infinite set and $[X]^2$ is 2-coloured, then there is an infinite subset $Y$ of $X$ such that $[Y]^2$ is monochromatic.
Theorem 5.17. $C(\aleph_0,\infty)$ $\Rightarrow$ RPP $\Rightarrow$ KL $\Rightarrow$ $C(\aleph_0,n)$. (Here KL denotes König's Lemma.)
EDIT 2: After I learned that I had originally written about a different version of the Ramsey theorem, I checked whether the same book mentions this version somewhere and, as expected, it does. Again, I've copied here some relevant parts, starting from p. 11.
Ramsey proved his theorem in order to investigate a problem in formal logic, namely the problem of finding a regular procedure to determine the truth or falsity of a given logical formula in the language of First-Order Logic, which is also the language of Set Theory (cf. Chapter 3). However, Ramsey’s Theorem is a purely combinatorial statement and was the nucleus—but not the earliest result—of a whole combinatorial theory, the so-called Ramsey Theory. We would also like to mention that Ramsey’s original theorem, which will be discussed later, is somewhat stronger than the theorem stated below but is, like König’s Lemma, not provable without assuming some form of the Axiom of Choice (see Proposition 7.8).
Theorem 2.1 (Ramsey's theorem). For any number $n\in\omega$, for any positive number $r\in\omega$, for any $S\in[\omega]^\omega$, and for any colouring $\pi\colon{[S]^n}\to r$, there is always an $H \in [S]^\omega$ such that $H$ is homogeneous for $\pi$, i.e., the set $[H]^n$ is monochromatic.
The proof is done by first proving the case $n=2$:
Proposition 2.2. For any positive number $r\in\omega$, for any $S \in [\omega]^\omega$, and for any colouring $\pi \colon [S]^2 \to r$, there is always an $H \in [S]^\omega$ such that $[H]^2$ is monochromatic.
The proof uses the Infinite Pigeon-Hole Principle, but in this proof it is only needed for countably infinite sets.
Infinite Pigeon-Hole Principle. If infinitely many objects are coloured with finitely many colours, then infinitely many objects have the same colour.
-
@Carl Thanks for notifying me, I've missed that. I've left the answer anyway, since it might still be interesting for the OP. – Martin Sleziak Oct 6 '12 at 8:34
@Martin: Thanks, I do think this is interesting, even though the original question was murky about why choice would be used to prove a result that follows from ZF. (+1) – Carl Mummert Oct 6 '12 at 8:37
Martin gave an excellent answer; I would like to add a bit to it.
Indeed, for a well-ordered set no choice is needed to prove the Ramsey theorem, and countable sets are well-orderable. The proof is even constructive in the sense that we can actually describe the homogeneous set.
Let us see some examples of infinite sets that fail to have this property.
1. Suppose that $S$ is a Russell set, namely the union of a countable collection of pairs without a choice function. Formally we write $S=\bigcup_{n\in\mathbb N}S_n$ where the $S_n$'s are pairwise disjoint sets of size $2$.
Now color $S^{(2)}$ as follows: $c(\{x,y\})=1$ if and only if for some $n$, $S_n=\{x,y\}$. If $Y$ were an infinite homogeneous subset of $S$ then all the pairs from $Y$ would have the same color. It is clearly not $1$, because that would imply all the points in $Y$ are in the same pair. Therefore all the pairs in $Y^{(2)}$ are colored $0$, which means that no two points in $Y$ come from the same pair. However, the defining property of $S$ was that no such $Y$ exists.
2. The trick above can easily be extended for $n>2$ to any set which can be written as a union of a countable collection of pairwise disjoint sets of size $n$.
3. Returning to the set $S$, let us see a non-obvious generalization for $n=3$. We color $S^{(3)}$ as follows: $c(\{a,b,c\})=1$ if and only if there is $n$ such that $S_n\subseteq\{a,b,c\}$.
If $Y$ were an infinite homogeneous subset, it is clear that $Y$ cannot be colored $1$: suppose that $S_1,S_2$ and $S_3$ are all subsets of $Y$, and choose exactly one point from each of the three sets, say $\{a_1,a_2,a_3\}$; it is clear that the color of this triplet is $0$. However, if all triplets from $Y$ are colored $0$ then no two points of $Y$ are in the same pair (since otherwise you could add any third point and get a triplet colored $1$), and therefore $Y$ selects a point from infinitely many pairs, a contradiction again.
This generalization can be carried out further using the same idea for larger $n$'s.
-
This version of Ramsey's theorem is already provable in predicative second-order arithmetic $\mathsf{ACA_0}$. See Simpson's bible Subsystems of Second-Order Arithmetic, starting at p. 46, then in Ch III.7. (You can freely download the first chapter of SoSOA at http://www.math.psu.edu/simpson/sosoa/ which will explain what this means, if you are not familiar with the reverse mathematics programme.)
-
To expand on the meaning of this: because the result is provable in $\mathsf{ACA}_0$, there is no need to use the axiom of choice at all in the proof; it will go through in ZF. – Carl Mummert Oct 6 '12 at 8:27
The proof given here for the case $c=2$ uses only countable choice. The extension to arbitrary $c$ does not require choice.
-
http://mathoverflow.net/questions/85539/conformally-flat/85542
## Conformally-flat
Assume given a smooth manifold $(\mathbb{R}^n, g)$, where the metric is a scaled identity $g = e^{2f}I$. Is there a way to know if this is always a non-positive (sectional) curvature manifold?
Note this is a parametrized manifold that is locally conformally flat. Following Einstein Manifolds [Arthur L. Besse], the Ricci tensor (in coordinates), can be shown to be:
$R = -(n-2)(H_f - \nabla f \cdot \nabla f^T ) - \frac{n-2}{n}(\Delta f + \|\nabla f\|^2)I_{n\times n}$
where $H_f$ is the Hessian of $f$.
Then $(\mathbb{R}^n, g)$ is of non-positive (sectional) curvature if $R$ is negative semi-definite.
-
You might want to check your formula for $R$, since the right hand side clearly vanishes when $n=2$ even though Riemannian surfaces (which are conformally flat) are not all flat. – Robert Bryant Jan 13 2012 at 0:33
## 1 Answer
I'm not quite sure what you mean by always non-positively curved. If you are asking if this metric is non-positively curved for any $f$ then this is false. If you are asking for conditions on $f$ ensuring that the resulting metric is non-positively curved then there is a general formula:
Let $(M,g)$ be a Riemannian manifold and let $\tilde g=e^{2f}g$ be a new metric on $M$. Let $p\in M$, let $u,v\in T_pM$ be orthonormal with respect to $g$, and let $\sigma$ be the 2-plane spanned by them.
Then $e^{2f}\tilde{K}_\sigma = K_\sigma - [\operatorname{Hess}_f(u,u) + \operatorname{Hess}_f(v,v) + |\nabla f|^2 - \langle \nabla f, u \rangle^2 - \langle \nabla f, v \rangle^2]$.
This formula is in Besse btw (Theorem 1.159) but it's written slightly differently there.
In the special case you are interested in $g$ is the canonical metric on $\mathbb R^n$ and hence $\tilde K$ is nonpositive iff $H_f(u,u)+H_f(v,v)+|\nabla f|^2-\langle \nabla f, u \rangle^2-\langle \nabla f, v \rangle^2\ge 0$ for any $p$ and any orthonormal $u$ and $v$ in $T_pM$. Note that for example it's always true if $f$ is convex.
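As a quick numerical illustration of the last remark, one can evaluate the criterion for the convex function $f(x)=|x|^2/2$ on $\mathbb{R}^3$ at a random point and a random orthonormal pair $u,v$; a sketch (the choice of $f$, the dimension and the random seed are arbitrary):
```python
import numpy as np

# Illustration only: f(x) = |x|^2/2 on R^3, so grad f = x and Hess f = I.
# The quantity Hess_f(u,u) + Hess_f(v,v) + |grad f|^2 - <grad f,u>^2 - <grad f,v>^2
# should be >= 0, which by the formula above means the conformal metric e^{2f} g_eucl
# has nonpositive sectional curvature on the 2-plane spanned by u and v.
rng = np.random.default_rng(0)
n = 3
p = rng.normal(size=n)     # a point of R^n
grad = p.copy()            # grad f at p
hess = np.eye(n)           # Hess f at p

u = rng.normal(size=n)
u /= np.linalg.norm(u)
v = rng.normal(size=n)
v -= (v @ u) * u           # Gram-Schmidt: make v orthogonal to u
v /= np.linalg.norm(v)

quantity = (u @ hess @ u + v @ hess @ v
            + grad @ grad - (grad @ u) ** 2 - (grad @ v) ** 2)
print(quantity >= 0)       # True
```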
-
So, more explicitly, what one requires for nonpositive curvature is that $|\nabla f|^2 + \lambda_1 +\lambda_2 \ge 0$, where $\lambda_1$ and $\lambda_2$ are the lowest eigenvalues of the quadratic form $\textrm{Hess}(f) - (d f)^2$. – Robert Bryant Jan 13 2012 at 12:42
@Robert That's a nice invariant way to state it. – Vitali Kapovitch Jan 13 2012 at 14:50
Thank you for your reply. This is very helpful. In the original expression, it is clear that for $n=2$ the Ricci tensor is $0$. I don't see this as clearly in your expression: $H_f(u,u) + H_f(v,v) + |\nabla f|^2 - \langle\nabla f,u\rangle^2 - \langle\nabla f,v\rangle^2 \ge 0$. Clearly, the last three terms will vanish if $n=2$, but it seems the sum of Hessians might not. I wonder if one of the expressions may need some tweaking. – Guillermozo Jan 13 2012 at 16:21
@Guillermozo: The expression you wrote in your question is for the trace free part of the Ricci curvature. It is of course identically zero in dimension 2. The full Ricci curvature tensor after a conformal change is $\tilde Ric=Ric-\Delta f\cdot g -(n-2)[Hess(f)-(df)^2+|\nabla f|^2\cdot g]$ which you can get by taking the trace of the sectional curvature formula above. – Vitali Kapovitch Jan 13 2012 at 17:19
Excellent. Thanks! – Guillermozo Jan 13 2012 at 19:49
http://www.sagemath.org/doc/reference/cryptography/sage/crypto/block_cipher/miniaes.html
# Mini-AES
A simplified variant of the Advanced Encryption Standard (AES). Note that Mini-AES is for educational purposes only. It is a small-scale version of the AES designed to help beginners understand the basic structure of AES.
AUTHORS:
• Minh Van Nguyen (2009-05): initial version
class sage.crypto.block_cipher.miniaes.MiniAES
Bases: sage.structure.sage_object.SageObject
This class implements the Mini Advanced Encryption Standard (Mini-AES) described in [P02]. Note that Phan’s Mini-AES is for educational purposes only and is not secure for practical purposes. Mini-AES is a version of the AES with all parameters significantly reduced, but at the same time preserving the structure of AES. The goal of Mini-AES is to allow a beginner to understand the structure of AES, thus laying a foundation for a thorough study of AES. It is intended as a teaching tool and is different from the SR small-scale variants of the AES. SR defines a family of parameterizable variants of the AES suitable as a framework for comparing different cryptanalytic techniques that can be brought to bear on the AES.
EXAMPLES:
Encrypt a plaintext:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: K = FiniteField(16, "x")
sage: MS = MatrixSpace(K, 2, 2)
sage: P = MS([K("x^3 + x"), K("x^2 + 1"), K("x^2 + x"), K("x^3 + x^2")]); P
[ x^3 + x x^2 + 1]
[ x^2 + x x^3 + x^2]
sage: key = MS([K("x^3 + x^2"), K("x^3 + x"), K("x^3 + x^2 + x"), K("x^2 + x + 1")]); key
[ x^3 + x^2 x^3 + x]
[x^3 + x^2 + x x^2 + x + 1]
sage: C = maes.encrypt(P, key); C
[ x x^2 + x]
[x^3 + x^2 + x x^3 + x]
```
Decrypt the result:
```sage: plaintxt = maes.decrypt(C, key)
sage: plaintxt; P
[ x^3 + x x^2 + 1]
[ x^2 + x x^3 + x^2]
[ x^3 + x x^2 + 1]
[ x^2 + x x^3 + x^2]
sage: plaintxt == P
True
```
We can also work directly with binary strings:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: bin = BinaryStrings()
sage: key = bin.encoding("KE"); key
0100101101000101
sage: P = bin.encoding("Encrypt this secret message!"); P
01000101011011100110001101110010011110010111000001110100001000000111010001101000011010010111001100100000011100110110010101100011011100100110010101110100001000000110110101100101011100110111001101100001011001110110010100100001
sage: C = maes(P, key, algorithm="encrypt"); C
10001000101001101111000001111000010011001110110101000111011011010101001011101111101011001110011100100011101100101010100010100111110110011001010001000111011011010010000011000110001100000111000011100110101111000000001110001001
sage: plaintxt = maes(C, key, algorithm="decrypt")
sage: plaintxt == P
True
```
Now we work with integers $$n$$ such that $$0 \leq n \leq 15$$:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: P = [n for n in xrange(16)]; P
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
sage: key = [2, 3, 11, 0]; key
[2, 3, 11, 0]
sage: P = maes.integer_to_binary(P); P
0000000100100011010001010110011110001001101010111100110111101111
sage: key = maes.integer_to_binary(key); key
0010001110110000
sage: C = maes(P, key, algorithm="encrypt"); C
1100100000100011111001010101010101011011100111110001000011100001
sage: plaintxt = maes(C, key, algorithm="decrypt")
sage: plaintxt == P
True
```
Generate some random plaintext and a random secret key. Encrypt the plaintext using that secret key and decrypt the result. Then compare the decrypted plaintext with the original plaintext:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: MS = MatrixSpace(FiniteField(16, "x"), 2, 2)
sage: P = MS.random_element()
sage: key = maes.random_key()
sage: C = maes.encrypt(P, key)
sage: plaintxt = maes.decrypt(C, key)
sage: plaintxt == P
True
```
REFERENCES:
[P02] R. C.-W. Phan. Mini advanced encryption standard (mini-AES): a testbed for cryptanalysis students. Cryptologia, 26(4):283–306, 2002.
GF_to_binary(G)
Return the binary representation of G. If G is an element of the finite field $$\GF{2^4}$$, then obtain the binary representation of G. If G is a list of elements belonging to $$\GF{2^4}$$, obtain the 4-bit representation of each element of the list, then concatenate the resulting 4-bit strings into a binary string. If G is a matrix with entries over $$\GF{2^4}$$, convert each matrix entry to its 4-bit representation, then concatenate the 4-bit strings. The concatenation is performed starting from the top-left corner of the matrix, working across left to right, top to bottom. Each element of $$\GF{2^4}$$ can be associated with a unique 4-bit string according to the following table:
$\begin{split}\begin{tabular}{ll|ll} \hline 4-bit string & $\GF{2^4}$ & 4-bit string & $\GF{2^4}$ \\\hline 0000 & $0$ & 1000 & $x^3$ \\ 0001 & $1$ & 1001 & $x^3 + 1$ \\ 0010 & $x$ & 1010 & $x^3 + x$ \\ 0011 & $x + 1$ & 1011 & $x^3 + x + 1$ \\ 0100 & $x^2$ & 1100 & $x^3 + x^2$ \\ 0101 & $x^2 + 1$ & 1101 & $x^3 + x^2 + 1$ \\ 0110 & $x^2 + x$ & 1110 & $x^3 + x^2 + x$ \\ 0111 & $x^2 + x + 1$ & 1111 & $x^3 + x^2 + x+ 1$ \\\hline \end{tabular}\end{split}$
INPUT:
• G – an element of $$\GF{2^4}$$, a list of elements of $$\GF{2^4}$$, or a matrix over $$\GF{2^4}$$
OUTPUT:
• A binary string representation of G.
EXAMPLES:
Obtain the binary representation of all elements of $$\GF{2^4}$$:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: K = FiniteField(16, "x")
sage: S = Set(K); len(S) # GF(2^4) has this many elements
16
sage: [maes.GF_to_binary(S[i]) for i in xrange(len(S))]
[0000,
0001,
0010,
0011,
0100,
0101,
0110,
0111,
1000,
1001,
1010,
1011,
1100,
1101,
1110,
1111]
```
The binary representation of a list of elements belonging to $$\GF{2^4}$$:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: K = FiniteField(16, "x")
sage: G = [K("x^2 + x + 1"), K("x^3 + x^2"), K("x"), K("x^3 + x + 1"), K("x^3 + x^2 + x + 1"), K("x^2 + x"), K("1"), K("x^2 + x + 1")]
sage: maes.GF_to_binary(G)
01111100001010111111011000010111
```
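Conversely, feeding such a string back into binary_to_GF (documented below) recovers the original list, since both methods use the same table. A small sketch (not one of the original examples):
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: K = FiniteField(16, "x")
sage: G = [K("x^2 + x + 1"), K("x^3 + x^2"), K("x"), K("x^3 + x + 1")]
sage: maes.binary_to_GF(maes.GF_to_binary(G)) == G
True
```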
The binary representation of a matrix over $$\GF{2^4}$$:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: K = FiniteField(16, "x")
sage: MS = MatrixSpace(K, 2, 2)
sage: G = MS([K("x^3 + x^2"), K("x + 1"), K("x^2 + x + 1"), K("x^3 + x^2 + x")]); G
[ x^3 + x^2 x + 1]
[ x^2 + x + 1 x^3 + x^2 + x]
sage: maes.GF_to_binary(G)
1100001101111110
sage: MS = MatrixSpace(K, 2, 4)
sage: G = MS([K("x^2 + x + 1"), K("x^3 + x^2"), K("x"), K("x^3 + x + 1"), K("x^3 + x^2 + x + 1"), K("x^2 + x"), K("1"), K("x^2 + x + 1")]); G
[ x^2 + x + 1 x^3 + x^2 x x^3 + x + 1]
[x^3 + x^2 + x + 1 x^2 + x 1 x^2 + x + 1]
sage: maes.GF_to_binary(G)
01111100001010111111011000010111
```
TESTS:
Input must be an element of $$\GF{2^4}$$:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: K = FiniteField(8, "x")
sage: G = K.random_element()
sage: maes.GF_to_binary(G)
Traceback (most recent call last):
...
TypeError: input G must be an element of GF(16), a list of elements of GF(16), or a matrix over GF(16)
```
A list of elements belonging to $$\GF{2^4}$$:
```sage: maes.GF_to_binary([])
Traceback (most recent call last):
...
ValueError: input G must be an element of GF(16), a list of elements of GF(16), or a matrix over GF(16)
sage: G = [K.random_element() for i in xrange(5)]
sage: maes.GF_to_binary(G)
Traceback (most recent call last):
...
KeyError:...```
A matrix over $$\GF{2^4}$$:
```sage: MS = MatrixSpace(FiniteField(7, "x"), 4, 5)
sage: maes.GF_to_binary(MS.random_element())
Traceback (most recent call last):
...
TypeError: input G must be an element of GF(16), a list of elements of GF(16), or a matrix over GF(16)
```
GF_to_integer(G)
Return the integer representation of the finite field element G. If G is an element of the finite field $$\GF{2^4}$$, then obtain the integer representation of G. If G is a list of elements belonging to $$\GF{2^4}$$, obtain the integer representation of each element of the list, and return the result as a list of integers. If G is a matrix with entries over $$\GF{2^4}$$, convert each matrix entry to its integer representation, and return the result as a list of integers. The resulting list is obtained by starting from the top-left corner of the matrix, working across left to right, top to bottom. Each element of $$\GF{2^4}$$ can be associated with a unique integer according to the following table:
$\begin{split}\begin{tabular}{ll|ll} \hline integer & $\GF{2^4}$ & integer & $\GF{2^4}$ \\\hline 0 & $0$ & 8 & $x^3$ \\ 1 & $1$ & 9 & $x^3 + 1$ \\ 2 & $x$ & 10 & $x^3 + x$ \\ 3 & $x + 1$ & 11 & $x^3 + x + 1$ \\ 4 & $x^2$ & 12 & $x^3 + x^2$ \\ 5 & $x^2 + 1$ & 13 & $x^3 + x^2 + 1$ \\ 6 & $x^2 + x$ & 14 & $x^3 + x^2 + x$ \\ 7 & $x^2 + x + 1$ & 15 & $x^3 + x^2 + x+ 1$ \\\hline \end{tabular}\end{split}$
INPUT:
• G – an element of $$\GF{2^4}$$, a list of elements belonging to $$\GF{2^4}$$, or a matrix over $$\GF{2^4}$$
OUTPUT:
• The integer representation of G.
EXAMPLES:
Obtain the integer representation of all elements of $$\GF{2^4}$$:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: K = FiniteField(16, "x")
sage: S = Set(K); len(S) # GF(2^4) has this many elements
16
sage: [maes.GF_to_integer(S[i]) for i in xrange(len(S))]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
```
The integer representation of a list of elements belonging to $$\GF{2^4}$$:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: K = FiniteField(16, "x")
sage: G = [K("x^2 + x + 1"), K("x^3 + x^2"), K("x"), K("x^3 + x + 1"), K("x^3 + x^2 + x + 1"), K("x^2 + x"), K("1"), K("x^2 + x + 1")]
sage: maes.GF_to_integer(G)
[7, 12, 2, 11, 15, 6, 1, 7]
```
The integer representation of a matrix over $$\GF{2^4}$$:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: K = FiniteField(16, "x")
sage: MS = MatrixSpace(K, 2, 2)
sage: G = MS([K("x^3 + x^2"), K("x + 1"), K("x^2 + x + 1"), K("x^3 + x^2 + x")]); G
[ x^3 + x^2 x + 1]
[ x^2 + x + 1 x^3 + x^2 + x]
sage: maes.GF_to_integer(G)
[12, 3, 7, 14]
sage: MS = MatrixSpace(K, 2, 4)
sage: G = MS([K("x^2 + x + 1"), K("x^3 + x^2"), K("x"), K("x^3 + x + 1"), K("x^3 + x^2 + x + 1"), K("x^2 + x"), K("1"), K("x^2 + x + 1")]); G
[ x^2 + x + 1 x^3 + x^2 x x^3 + x + 1]
[x^3 + x^2 + x + 1 x^2 + x 1 x^2 + x + 1]
sage: maes.GF_to_integer(G)
[7, 12, 2, 11, 15, 6, 1, 7]
```
TESTS:
Input must be an element of $$\GF{2^4}$$:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: K = FiniteField(7, "x")
sage: G = K.random_element()
sage: maes.GF_to_integer(G)
Traceback (most recent call last):
...
TypeError: input G must be an element of GF(16), a list of elements of GF(16), or a matrix over GF(16)
```
A list of elements belonging to $$\GF{2^4}$$:
```sage: maes.GF_to_integer([])
Traceback (most recent call last):
...
ValueError: input G must be an element of GF(16), a list of elements of GF(16), or a matrix over GF(16)
sage: G = [K.random_element() for i in xrange(5)]
sage: maes.GF_to_integer(G)
Traceback (most recent call last):
...
KeyError:...```
A matrix over $$\GF{2^4}$$:
```sage: MS = MatrixSpace(FiniteField(7, "x"), 4, 5)
sage: maes.GF_to_integer(MS.random_element())
Traceback (most recent call last):
...
TypeError: input G must be an element of GF(16), a list of elements of GF(16), or a matrix over GF(16)
```
add_key(block, rkey)
Return the matrix addition of block and rkey. Both block and rkey are $$2 \times 2$$ matrices over the finite field $$\GF{2^4}$$. This method just returns the matrix addition of these two matrices.
INPUT:
• block – a $$2 \times 2$$ matrix with entries over $$\GF{2^4}$$
• rkey – a round key; a $$2 \times 2$$ matrix with entries over $$\GF{2^4}$$
OUTPUT:
• The matrix addition of block and rkey.
EXAMPLES:
We can work with elements of $$\GF{2^4}$$:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: K = FiniteField(16, "x")
sage: MS = MatrixSpace(K, 2, 2)
sage: D = MS([ [K("x^3 + x^2 + x + 1"), K("x^3 + x")], [K("0"), K("x^3 + x^2")] ]); D
[x^3 + x^2 + x + 1 x^3 + x]
[ 0 x^3 + x^2]
sage: k = MS([ [K("x^2 + 1"), K("x^3 + x^2 + x + 1")], [K("x + 1"), K("0")] ]); k
[ x^2 + 1 x^3 + x^2 + x + 1]
[ x + 1 0]
sage: maes.add_key(D, k)
[ x^3 + x x^2 + 1]
[ x + 1 x^3 + x^2]
```
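Since add_key is documented above as plain matrix addition over $$\GF{2^4}$$, its result can be checked against + directly. The following is a small sketch (not part of the original examples) that redefines the matrices used above:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: K = FiniteField(16, "x")
sage: MS = MatrixSpace(K, 2, 2)
sage: D = MS([ [K("x^3 + x^2 + x + 1"), K("x^3 + x")], [K("0"), K("x^3 + x^2")] ])
sage: k = MS([ [K("x^2 + 1"), K("x^3 + x^2 + x + 1")], [K("x + 1"), K("0")] ])
sage: maes.add_key(D, k) == D + k
True
```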
Or work with binary strings:
```sage: bin = BinaryStrings()
sage: B = bin.encoding("We"); B
0101011101100101
sage: B = MS(maes.binary_to_GF(B)); B
[ x^2 + 1 x^2 + x + 1]
[ x^2 + x x^2 + 1]
sage: key = bin.encoding("KY"); key
0100101101011001
sage: key = MS(maes.binary_to_GF(key)); key
[ x^2 x^3 + x + 1]
[ x^2 + 1 x^3 + 1]
sage: maes.add_key(B, key)
[ 1 x^3 + x^2]
[ x + 1 x^3 + x^2]
```
We can also work with integers $$n$$ such that $$0 \leq n \leq 15$$:
```sage: N = [2, 3, 5, 7]; N
[2, 3, 5, 7]
sage: key = [9, 11, 13, 15]; key
[9, 11, 13, 15]
sage: N = MS(maes.integer_to_GF(N)); N
[ x x + 1]
[ x^2 + 1 x^2 + x + 1]
sage: key = MS(maes.integer_to_GF(key)); key
[ x^3 + 1 x^3 + x + 1]
[ x^3 + x^2 + 1 x^3 + x^2 + x + 1]
sage: maes.add_key(N, key)
[x^3 + x + 1 x^3]
[ x^3 x^3]
```
TESTS:
The input block and key must each be a matrix:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: K = FiniteField(16, "x")
sage: MSB = MatrixSpace(K, 2, 2)
sage: B = MSB([ [K("x^3 + 1"), K("x^2 + x")], [K("x^3 + x^2"), K("x + 1")] ])
sage: maes.add_key(B, "key")
Traceback (most recent call last):
...
TypeError: round key must be a 2 x 2 matrix over GF(16)
sage: maes.add_key("block", "key")
Traceback (most recent call last):
...
TypeError: input block must be a 2 x 2 matrix over GF(16)
```
In addition, the dimensions of the input matrices must each be $$2 \times 2$$:
```sage: MSB = MatrixSpace(K, 1, 2)
sage: B = MSB([ [K("x^3 + 1"), K("x^2 + x")] ])
sage: maes.add_key(B, "key")
Traceback (most recent call last):
...
TypeError: input block must be a 2 x 2 matrix over GF(16)
sage: MSB = MatrixSpace(K, 2, 2)
sage: B = MSB([ [K("x^3 + 1"), K("x^2 + x")], [K("x^3 + x^2"), K("x + 1")] ])
sage: MSK = MatrixSpace(K, 1, 2)
sage: key = MSK([ [K("x^3 + x^2"), K("x^3 + x^2 + x + 1")]])
sage: maes.add_key(B, key)
Traceback (most recent call last):
...
TypeError: round key must be a 2 x 2 matrix over GF(16)
```
binary_to_GF(B)
Return a list of elements of $$\GF{2^4}$$ that represents the binary string B. The number of bits in B must be greater than zero and a multiple of 4. Each nibble (or 4-bit string) is uniquely associated with an element of $$\GF{2^4}$$ as specified by the following table:
$\begin{split}\begin{tabular}{ll|ll} \hline 4-bit string & $\GF{2^4}$ & 4-bit string & $\GF{2^4}$ \\\hline 0000 & $0$ & 1000 & $x^3$ \\ 0001 & $1$ & 1001 & $x^3 + 1$ \\ 0010 & $x$ & 1010 & $x^3 + x$ \\ 0011 & $x + 1$ & 1011 & $x^3 + x + 1$ \\ 0100 & $x^2$ & 1100 & $x^3 + x^2$ \\ 0101 & $x^2 + 1$ & 1101 & $x^3 + x^2 + 1$ \\ 0110 & $x^2 + x$ & 1110 & $x^3 + x^2 + x$ \\ 0111 & $x^2 + x + 1$ & 1111 & $x^3 + x^2 + x+ 1$ \\\hline \end{tabular}\end{split}$
INPUT:
• B – a binary string, where the number of bits is positive and a multiple of 4
OUTPUT:
• A list of elements of the finite field $$\GF{2^4}$$ that represent the binary string B.
EXAMPLES:
Obtain all the elements of the finite field $$\GF{2^4}$$:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: bin = BinaryStrings()
sage: B = bin("0000000100100011010001010110011110001001101010111100110111101111")
sage: maes.binary_to_GF(B)
[0,
1,
x,
x + 1,
x^2,
x^2 + 1,
x^2 + x,
x^2 + x + 1,
x^3,
x^3 + 1,
x^3 + x,
x^3 + x + 1,
x^3 + x^2,
x^3 + x^2 + 1,
x^3 + x^2 + x,
x^3 + x^2 + x + 1]
```
TESTS:
The input B must be a non-empty binary string, where the number of bits is a multiple of 4:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: maes.binary_to_GF("")
Traceback (most recent call last):
...
ValueError: the number of bits in the binary string B must be positive and a multiple of 4
sage: maes.binary_to_GF("101")
Traceback (most recent call last):
...
ValueError: the number of bits in the binary string B must be positive and a multiple of 4
```
binary_to_integer(B)
Return a list of integers representing the binary string B. The number of bits in B must be greater than zero and a multiple of 4. Each nibble (or 4-bit string) is uniquely associated with an integer as specified by the following table:
$\begin{split}\begin{tabular}{ll|ll} \hline 4-bit string & integer & 4-bit string & integer \\\hline 0000 & 0 & 1000 & 8 \\ 0001 & 1 & 1001 & 9 \\ 0010 & 2 & 1010 & 10 \\ 0011 & 3 & 1011 & 11 \\ 0100 & 4 & 1100 & 12 \\ 0101 & 5 & 1101 & 13 \\ 0110 & 6 & 1110 & 14 \\ 0111 & 7 & 1111 & 15 \\\hline \end{tabular}\end{split}$
INPUT:
• B – a binary string, where the number of bits is positive and a multiple of 4
OUTPUT:
• A list of integers that represent the binary string B.
EXAMPLES:
Obtain the integer representation of every 4-bit string:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: bin = BinaryStrings()
sage: B = bin("0000000100100011010001010110011110001001101010111100110111101111")
sage: maes.binary_to_integer(B)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
```
TESTS:
The input B must be a non-empty binary string, where the number of bits is a multiple of 4:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: maes.binary_to_integer("")
Traceback (most recent call last):
...
ValueError: the number of bits in the binary string B must be positive and a multiple of 4
sage: maes.binary_to_integer("101")
Traceback (most recent call last):
...
ValueError: the number of bits in the binary string B must be positive and a multiple of 4
```
block_length()
Return the block length of Phan’s Mini-AES block cipher. A key in Phan’s Mini-AES is a block of 16 bits. Each nibble of a key can be considered as an element of the finite field $$\GF{2^4}$$. Therefore the key consists of four elements from $$\GF{2^4}$$.
OUTPUT:
• The block (or key) length in number of bits.
EXAMPLES:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: maes.block_length()
16
```
decrypt(C, key)
Use Phan’s Mini-AES to decrypt the ciphertext C with the secret key key. Both C and key must be $$2 \times 2$$ matrices over the finite field $$\GF{2^4}$$. Let $$\gamma$$ denote the operation of nibble-sub, $$\pi$$ denote shift-row, $$\theta$$ denote mix-column, and $$\sigma_{K_i}$$ denote add-key with the round key $$K_i$$. Then decryption $$D$$ using Phan’s Mini-AES is the function composition
$D = \sigma_{K_0} \circ \gamma^{-1} \circ \pi \circ \theta \circ \sigma_{K_1} \circ \gamma^{-1} \circ \pi \circ \sigma_{K_2}$
where $$\gamma^{-1}$$ is the nibble-sub operation that uses the S-box for decryption, and the order of execution is from right to left.
INPUT:
• C – a ciphertext block; must be a $$2 \times 2$$ matrix over the finite field $$\GF{2^4}$$
• key – a secret key for this Mini-AES block cipher; must be a $$2 \times 2$$ matrix over the finite field $$\GF{2^4}$$
OUTPUT:
• The plaintext corresponding to C.
EXAMPLES:
We encrypt a plaintext, decrypt the ciphertext, then compare the decrypted plaintext with the original plaintext. Here we work with elements of $$\GF{2^4}$$:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: K = FiniteField(16, "x")
sage: MS = MatrixSpace(K, 2, 2)
sage: P = MS([ [K("x^3 + 1"), K("x^2 + x")], [K("x^3 + x^2"), K("x + 1")] ]); P
[ x^3 + 1 x^2 + x]
[x^3 + x^2 x + 1]
sage: key = MS([ [K("x^3 + x^2"), K("x^3 + x^2 + x + 1")], [K("x + 1"), K("0")] ]); key
[ x^3 + x^2 x^3 + x^2 + x + 1]
[ x + 1 0]
sage: C = maes.encrypt(P, key); C
[x^2 + x + 1 x^3 + x^2]
[ x x^2 + x]
sage: plaintxt = maes.decrypt(C, key)
sage: plaintxt; P
[ x^3 + 1 x^2 + x]
[x^3 + x^2 x + 1]
[ x^3 + 1 x^2 + x]
[x^3 + x^2 x + 1]
sage: plaintxt == P
True
```
But we can also work with binary strings:
```sage: bin = BinaryStrings()
sage: P = bin.encoding("de"); P
0110010001100101
sage: P = MS(maes.binary_to_GF(P)); P
[x^2 + x x^2]
[x^2 + x x^2 + 1]
sage: key = bin.encoding("ke"); key
0110101101100101
sage: key = MS(maes.binary_to_GF(key)); key
[ x^2 + x x^3 + x + 1]
[ x^2 + x x^2 + 1]
sage: C = maes.encrypt(P, key)
sage: plaintxt = maes.decrypt(C, key)
sage: plaintxt == P
True
```
Here we work with integers $$n$$ such that $$0 \leq n \leq 15$$:
```sage: P = [3, 5, 7, 14]; P
[3, 5, 7, 14]
sage: key = [2, 6, 7, 8]; key
[2, 6, 7, 8]
sage: P = MS(maes.integer_to_GF(P)); P
[ x + 1 x^2 + 1]
[ x^2 + x + 1 x^3 + x^2 + x]
sage: key = MS(maes.integer_to_GF(key)); key
[ x x^2 + x]
[x^2 + x + 1 x^3]
sage: C = maes.encrypt(P, key)
sage: plaintxt = maes.decrypt(C, key)
sage: plaintxt == P
True
```
TESTS:
The input block must be a matrix:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: K = FiniteField(16, "x")
sage: MS = MatrixSpace(K, 2, 2)
sage: key = MS([ [K("x^3 + x^2"), K("x^3 + x^2 + x + 1")], [K("x + 1"), K("0")] ])
sage: maes.decrypt("C", key)
Traceback (most recent call last):
...
TypeError: ciphertext block must be a 2 x 2 matrix over GF(16)
sage: C = MS([ [K("x^3 + 1"), K("x^2 + x")], [K("x^3 + x^2"), K("x + 1")] ])
sage: maes.decrypt(C, "key")
Traceback (most recent call last):
...
TypeError: secret key must be a 2 x 2 matrix over GF(16)
```
In addition, the dimensions of the input matrices must be $$2 \times 2$$:
```sage: MS = MatrixSpace(K, 1, 2)
sage: C = MS([ [K("x^3 + 1"), K("x^2 + x")]])
sage: maes.decrypt(C, "key")
Traceback (most recent call last):
...
TypeError: ciphertext block must be a 2 x 2 matrix over GF(16)
sage: MSC = MatrixSpace(K, 2, 2)
sage: C = MSC([ [K("x^3 + 1"), K("x^2 + x")], [K("x^3 + x^2"), K("x + 1")] ])
sage: MSK = MatrixSpace(K, 1, 2)
sage: key = MSK([ [K("x^3 + x^2"), K("x^3 + x^2 + x + 1")]])
sage: maes.decrypt(C, key)
Traceback (most recent call last):
...
TypeError: secret key must be a 2 x 2 matrix over GF(16)
```
encrypt(P, key)
Use Phan’s Mini-AES to encrypt the plaintext P with the secret key key. Both P and key must be $$2 \times 2$$ matrices over the finite field $$\GF{2^4}$$. Let $$\gamma$$ denote the operation of nibble-sub, $$\pi$$ denote shift-row, $$\theta$$ denote mix-column, and $$\sigma_{K_i}$$ denote add-key with the round key $$K_i$$. Then encryption $$E$$ using Phan’s Mini-AES is the function composition
$E = \sigma_{K_2} \circ \pi \circ \gamma \circ \sigma_{K_1} \circ \theta \circ \pi \circ \gamma \circ \sigma_{K_0}$
where the order of execution is from right to left. Note that $$\gamma$$ is the nibble-sub operation that uses the S-box for encryption.
INPUT:
• P – a plaintext block; must be a $$2 \times 2$$ matrix over the finite field $$\GF{2^4}$$
• key – a secret key for this Mini-AES block cipher; must be a $$2 \times 2$$ matrix over the finite field $$\GF{2^4}$$
OUTPUT:
• The ciphertext corresponding to P.
EXAMPLES:
Here we work with elements of $$\GF{2^4}$$:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: K = FiniteField(16, "x")
sage: MS = MatrixSpace(K, 2, 2)
sage: P = MS([ [K("x^3 + 1"), K("x^2 + x")], [K("x^3 + x^2"), K("x + 1")] ]); P
[ x^3 + 1 x^2 + x]
[x^3 + x^2 x + 1]
sage: key = MS([ [K("x^3 + x^2"), K("x^3 + x^2 + x + 1")], [K("x + 1"), K("0")] ]); key
[ x^3 + x^2 x^3 + x^2 + x + 1]
[ x + 1 0]
sage: maes.encrypt(P, key)
[x^2 + x + 1 x^3 + x^2]
[ x x^2 + x]
```
But we can also work with binary strings:
```sage: bin = BinaryStrings()
sage: P = bin.encoding("de"); P
0110010001100101
sage: P = MS(maes.binary_to_GF(P)); P
[x^2 + x x^2]
[x^2 + x x^2 + 1]
sage: key = bin.encoding("ke"); key
0110101101100101
sage: key = MS(maes.binary_to_GF(key)); key
[ x^2 + x x^3 + x + 1]
[ x^2 + x x^2 + 1]
sage: C = maes.encrypt(P, key)
sage: plaintxt = maes.decrypt(C, key)
sage: plaintxt == P
True
```
Now we work with integers $$n$$ such that $$0 \leq n \leq 15$$:
```sage: P = [1, 5, 8, 12]; P
[1, 5, 8, 12]
sage: key = [5, 9, 15, 0]; key
[5, 9, 15, 0]
sage: P = MS(maes.integer_to_GF(P)); P
[ 1 x^2 + 1]
[ x^3 x^3 + x^2]
sage: key = MS(maes.integer_to_GF(key)); key
[ x^2 + 1 x^3 + 1]
[x^3 + x^2 + x + 1 0]
sage: C = maes.encrypt(P, key)
sage: plaintxt = maes.decrypt(C, key)
sage: plaintxt == P
True
```
TESTS:
The input block must be a matrix:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: K = FiniteField(16, "x")
sage: MS = MatrixSpace(K, 2, 2)
sage: key = MS([ [K("x^3 + x^2"), K("x^3 + x^2 + x + 1")], [K("x + 1"), K("0")] ])
sage: maes.encrypt("P", key)
Traceback (most recent call last):
...
TypeError: plaintext block must be a 2 x 2 matrix over GF(16)
sage: P = MS([ [K("x^3 + 1"), K("x^2 + x")], [K("x^3 + x^2"), K("x + 1")] ])
sage: maes.encrypt(P, "key")
Traceback (most recent call last):
...
TypeError: secret key must be a 2 x 2 matrix over GF(16)
```
In addition, the dimensions of the input matrices must be $$2 \times 2$$:
```sage: MS = MatrixSpace(K, 1, 2)
sage: P = MS([ [K("x^3 + 1"), K("x^2 + x")]])
sage: maes.encrypt(P, "key")
Traceback (most recent call last):
...
TypeError: plaintext block must be a 2 x 2 matrix over GF(16)
sage: MSP = MatrixSpace(K, 2, 2)
sage: P = MSP([ [K("x^3 + 1"), K("x^2 + x")], [K("x^3 + x^2"), K("x + 1")] ])
sage: MSK = MatrixSpace(K, 1, 2)
sage: key = MSK([ [K("x^3 + x^2"), K("x^3 + x^2 + x + 1")]])
sage: maes.encrypt(P, key)
Traceback (most recent call last):
...
TypeError: secret key must be a 2 x 2 matrix over GF(16)
```
integer_to_GF(N)
Return the finite field representation of N. If $$N$$ is an integer such that $$0 \leq N \leq 15$$, return the element of $$\GF{2^4}$$ that represents N. If N is a list of integers each of which is $$\geq 0$$ and $$\leq 15$$, then obtain the element of $$\GF{2^4}$$ that represents each such integer, and return a list of such finite field representations. Each integer between 0 and 15, inclusive, can be associated with a unique element of $$\GF{2^4}$$ according to the following table:
$\begin{split}\begin{tabular}{ll|ll} \hline integer & $\GF{2^4}$ & integer & $\GF{2^4}$ \\\hline 0 & $0$ & 8 & $x^3$ \\ 1 & $1$ & 9 & $x^3 + 1$ \\ 2 & $x$ & 10 & $x^3 + x$ \\ 3 & $x + 1$ & 11 & $x^3 + x + 1$ \\ 4 & $x^2$ & 12 & $x^3 + x^2$ \\ 5 & $x^2 + 1$ & 13 & $x^3 + x^2 + 1$ \\ 6 & $x^2 + x$ & 14 & $x^3 + x^2 + x$ \\ 7 & $x^2 + x + 1$ & 15 & $x^3 + x^2 + x+ 1$ \\\hline \end{tabular}\end{split}$
INPUT:
• N – a non-negative integer less than or equal to 15, or a list of such integers
OUTPUT:
• Elements of the finite field $$\GF{2^4}$$.
EXAMPLES:
Obtain the element of $$\GF{2^4}$$ representing an integer $$n$$, where $$0 \leq n \leq 15$$:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: maes.integer_to_GF(0)
0
sage: maes.integer_to_GF(2)
x
sage: maes.integer_to_GF(7)
x^2 + x + 1
```
Obtain the finite field elements corresponding to all non-negative integers less than or equal to 15:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: lst = [n for n in xrange(16)]; lst
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
sage: maes.integer_to_GF(lst)
[0,
1,
x,
x + 1,
x^2,
x^2 + 1,
x^2 + x,
x^2 + x + 1,
x^3,
x^3 + 1,
x^3 + x,
x^3 + x + 1,
x^3 + x^2,
x^3 + x^2 + 1,
x^3 + x^2 + x,
x^3 + x^2 + x + 1]
```
TESTS:
The input N can be an integer, but it must be such that $$0 \leq N \leq 15$$:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: maes.integer_to_GF(-1)
Traceback (most recent call last):
...
KeyError:...
sage: maes.integer_to_GF(16)
Traceback (most recent call last):
...
KeyError:...
sage: maes.integer_to_GF("2")
Traceback (most recent call last):
...
TypeError: N must be an integer 0 <= N <= 15 or a list of such integers```
The input N can be a list of integers, but each integer $$n$$ in the list must be bounded such that $$0 \leq n \leq 15$$:
```sage: maes.integer_to_GF([])
Traceback (most recent call last):
...
ValueError: N must be an integer 0 <= N <= 15 or a list of such integers
sage: maes.integer_to_GF([""])
Traceback (most recent call last):
...
KeyError:...
sage: maes.integer_to_GF([0, 2, 3, "4"])
Traceback (most recent call last):
...
KeyError:...
sage: maes.integer_to_GF([0, 2, 3, 16])
Traceback (most recent call last):
...
KeyError:...```
integer_to_binary(N)
Return the binary representation of N. If $$N$$ is an integer such that $$0 \leq N \leq 15$$, return the binary representation of N. If N is a list of integers each of which is $$\geq 0$$ and $$\leq 15$$, then obtain the binary representation of each integer, and concatenate the individual binary representations into a single binary string. Each integer between 0 and 15, inclusive, can be associated with a unique 4-bit string according to the following table:
$\begin{split}\begin{tabular}{ll|ll} \hline 4-bit string & integer & 4-bit string & integer \\\hline 0000 & 0 & 1000 & 8 \\ 0001 & 1 & 1001 & 9 \\ 0010 & 2 & 1010 & 10 \\ 0011 & 3 & 1011 & 11 \\ 0100 & 4 & 1100 & 12 \\ 0101 & 5 & 1101 & 13 \\ 0110 & 6 & 1110 & 14 \\ 0111 & 7 & 1111 & 15 \\\hline \end{tabular}\end{split}$
INPUT:
• N – a non-negative integer less than or equal to 15, or a list of such integers
OUTPUT:
• A binary string representing N.
EXAMPLES:
The binary representations of all integers between 0 and 15, inclusive:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: lst = [n for n in xrange(16)]; lst
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
sage: maes.integer_to_binary(lst)
0000000100100011010001010110011110001001101010111100110111101111
```
The binary representation of an integer between 0 and 15, inclusive:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: maes.integer_to_binary(3)
0011
sage: maes.integer_to_binary(5)
0101
sage: maes.integer_to_binary(7)
0111
```
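The table above is the same one used by binary_to_integer, documented earlier, so the two conversions round-trip. The following sketch (not one of the original examples) simply combines the two documented examples:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: lst = [n for n in xrange(16)]
sage: maes.binary_to_integer(maes.integer_to_binary(lst)) == lst
True
```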
TESTS:
The input N can be an integer, but must be bounded such that $$0 \leq N \leq 15$$:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: maes.integer_to_binary(-1)
Traceback (most recent call last):
...
KeyError:...
sage: maes.integer_to_binary("1")
Traceback (most recent call last):
...
TypeError: N must be an integer 0 <= N <= 15 or a list of such integers
sage: maes.integer_to_binary("")
Traceback (most recent call last):
...
TypeError: N must be an integer 0 <= N <= 15 or a list of such integers```
The input N can be a list of integers, but each integer $$n$$ of the list must be $$0 \leq n \leq 15$$:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: maes.integer_to_binary([])
Traceback (most recent call last):
...
ValueError: N must be an integer 0 <= N <= 15 or a list of such integers
sage: maes.integer_to_binary([""])
Traceback (most recent call last):
...
KeyError:...
sage: maes.integer_to_binary([0, 1, 2, 16])
Traceback (most recent call last):
...
KeyError:...```
mix_column(block)
Return the matrix multiplication of block with a constant matrix. The constant matrix is
$\begin{split}\begin{bmatrix} x + 1 & x \\ x & x + 1 \end{bmatrix}\end{split}$
If the input block is
$\begin{split}\begin{bmatrix} c_0 & c_2 \\ c_1 & c_3 \end{bmatrix}\end{split}$
then the output block is
$\begin{split}\begin{bmatrix} d_0 & d_2 \\ d_1 & d_3 \end{bmatrix} = \begin{bmatrix} x + 1 & x \\ x & x + 1 \end{bmatrix} \begin{bmatrix} c_0 & c_2 \\ c_1 & c_3 \end{bmatrix}\end{split}$
INPUT:
• block – a $$2 \times 2$$ matrix with entries over $$\GF{2^4}$$
OUTPUT:
• A $$2 \times 2$$ matrix resulting from multiplying the above constant matrix with the input matrix block.
EXAMPLES:
Here we work with elements of $$\GF{2^4}$$:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: K = FiniteField(16, "x")
sage: MS = MatrixSpace(K, 2, 2)
sage: mat = MS([ [K("x^2 + x + 1"), K("x^3 + x^2 + 1")], [K("x^3"), K("x")] ])
sage: maes.mix_column(mat)
[ x^3 + x 0]
[ x^2 + 1 x^3 + x^2 + x + 1]
```
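Since mix_column is documented above as left multiplication by the constant matrix, the same result can be reproduced directly. This is a small sketch (not part of the original examples) that redefines the matrix from the previous example:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: K = FiniteField(16, "x")
sage: MS = MatrixSpace(K, 2, 2)
sage: mat = MS([ [K("x^2 + x + 1"), K("x^3 + x^2 + 1")], [K("x^3"), K("x")] ])
sage: C = MS([ [K("x + 1"), K("x")], [K("x"), K("x + 1")] ])   # the constant matrix
sage: C * mat == maes.mix_column(mat)
True
```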
Multiplying by the identity matrix should leave the constant matrix unchanged:
```sage: eye = MS([ [K("1"), K("0")], [K("0"), K("1")] ])
sage: maes.mix_column(eye)
[x + 1 x]
[ x x + 1]
```
We can also work with binary strings:
```sage: bin = BinaryStrings()
sage: B = bin.encoding("rT"); B
0111001001010100
sage: B = MS(maes.binary_to_GF(B)); B
[x^2 + x + 1 x]
[ x^2 + 1 x^2]
sage: maes.mix_column(B)
[ x + 1 x^3 + x^2 + x]
[ 1 x^3]
```
We can also work with integers $$n$$ such that $$0 \leq n \leq 15$$:
```sage: P = [10, 5, 2, 7]; P
[10, 5, 2, 7]
sage: P = MS(maes.integer_to_GF(P)); P
[ x^3 + x x^2 + 1]
[ x x^2 + x + 1]
sage: maes.mix_column(P)
[x^3 + 1 1]
[ 1 x + 1]
```
TESTS:
The input block must be a matrix:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: maes.mix_column("mat")
Traceback (most recent call last):
...
TypeError: input block must be a 2 x 2 matrix over GF(16)
```
In addition, the dimensions of the input matrix must be $$2 \times 2$$:
```sage: K = FiniteField(16, "x")
sage: MS = MatrixSpace(K, 1, 2)
sage: mat = MS([[K("x^3 + x^2 + x + 1"), K("0")]])
sage: maes.mix_column(mat)
Traceback (most recent call last):
...
TypeError: input block must be a 2 x 2 matrix over GF(16)
```
nibble_sub(block, algorithm='encrypt')
Substitute a nibble (or a block of 4 bits) using the following S-box:
$\begin{split}\begin{tabular}{ll|ll} \hline Input & Output & Input & Output \\\hline 0000 & 1110 & 1000 & 0011 \\ 0001 & 0100 & 1001 & 1010 \\ 0010 & 1101 & 1010 & 0110 \\ 0011 & 0001 & 1011 & 1100 \\ 0100 & 0010 & 1100 & 0101 \\ 0101 & 1111 & 1101 & 1001 \\ 0110 & 1011 & 1110 & 0000 \\ 0111 & 1000 & 1111 & 0111 \\\hline \end{tabular}\end{split}$
The values in the above S-box are taken from the first row of the first S-box of the Data Encryption Standard (DES). Each nibble can be thought of as an element of the finite field $$\GF{2^4}$$ of 16 elements. Thus in terms of $$\GF{2^4}$$, the S-box can also be specified as:
$\begin{split}\begin{tabular}{ll} \hline Input & Output \\\hline $0$ & $x^3 + x^2 + x$ \\ $1$ & $x^2$ \\ $x$ & $x^3 + x^2 + 1$ \\ $x + 1$ & $1$ \\ $x^2$ & $x$ \\ $x^2 + 1$ & $x^3 + x^2 + x + 1$ \\ $x^2 + x$ & $x^3 + x + 1$ \\ $x^2 + x + 1$ & $x^3$ \\ $x^3$ & $x + 1$ \\ $x^3 + 1$ & $x^3 + x$ \\ $x^3 + x$ & $x^2 + x$ \\ $x^3 + x + 1$ & $x^3 + x^2$ \\ $x^3 + x^2$ & $x^2 + 1$ \\ $x^3 + x^2 + 1$ & $x^3 + 1$ \\ $x^3 + x^2 + x$ & $0$ \\ $x^3 + x^2 + x + 1$ & $x^2 + x + 1$ \\\hline \end{tabular}\end{split}$
Note that the above S-box is used for encryption. The S-box for decryption is obtained from the above S-box by reversing the role of the Input and Output columns. Thus the previous Input column for encryption now becomes the Output column for decryption, and the previous Output column for encryption is now the Input column for decryption. The S-box used for decryption can be specified as:
$\begin{split}\begin{tabular}{ll} \hline Input & Output \\\hline $0$ & $x^3 + x^2 + x$ \\ $1$ & $x + 1$ \\ $x$ & $x^2$ \\ $x + 1$ & $x^3$ \\ $x^2$ & $1$ \\ $x^2 + 1$ & $x^3 + x^2$ \\ $x^2 + x$ & $x^3 + x$ \\ $x^2 + x + 1$ & $x^3 + x^2 + x + 1$ \\ $x^3$ & $x^2 + x + 1$ \\ $x^3 + 1$ & $x^3 + x^2 + 1$ \\ $x^3 + x$ & $x^3 + 1$ \\ $x^3 + x + 1$ & $x^2 + x$ \\ $x^3 + x^2$ & $x^3 + x + 1$ \\ $x^3 + x^2 + 1$ & $x$ \\ $x^3 + x^2 + x$ & $0$ \\ $x^3 + x^2 + x + 1$ & $x^2 + 1$ \\\hline \end{tabular}\end{split}$
INPUT:
• block – a $$2 \times 2$$ matrix with entries over $$\GF{2^4}$$
• algorithm – (default: "encrypt") a string; a flag to signify whether this nibble-sub operation is used for encryption or decryption. The encryption flag is "encrypt" and the decryption flag is "decrypt".
OUTPUT:
• A $$2 \times 2$$ matrix resulting from applying an S-box on entries of the $$2 \times 2$$ matrix block.
EXAMPLES:
Here we work with elements of the finite field $$\GF{2^4}$$:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: K = FiniteField(16, "x")
sage: MS = MatrixSpace(K, 2, 2)
sage: mat = MS([[K("x^3 + x^2 + x + 1"), K("0")], [K("x^2 + x + 1"), K("x^3 + x")]])
sage: maes.nibble_sub(mat, algorithm="encrypt")
[ x^2 + x + 1 x^3 + x^2 + x]
[ x^3 x^2 + x]
```
But we can also work with binary strings:
```sage: bin = BinaryStrings()
sage: B = bin.encoding("bi"); B
0110001001101001
sage: B = MS(maes.binary_to_GF(B)); B
[x^2 + x x]
[x^2 + x x^3 + 1]
sage: maes.nibble_sub(B, algorithm="encrypt")
[ x^3 + x + 1 x^3 + x^2 + 1]
[ x^3 + x + 1 x^3 + x]
sage: maes.nibble_sub(B, algorithm="decrypt")
[ x^3 + x x^2]
[ x^3 + x x^3 + x^2 + 1]
```
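Because the decryption S-box is obtained by inverting the encryption S-box, applying nibble_sub with algorithm="encrypt" and then with algorithm="decrypt" recovers the original block. A small sketch (not part of the original examples):
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: K = FiniteField(16, "x")
sage: MS = MatrixSpace(K, 2, 2)
sage: mat = MS([[K("x^3 + x^2 + x + 1"), K("0")], [K("x^2 + x + 1"), K("x^3 + x")]])
sage: maes.nibble_sub(maes.nibble_sub(mat, algorithm="encrypt"), algorithm="decrypt") == mat
True
```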
Here we work with integers $$n$$ such that $$0 \leq n \leq 15$$:
```sage: P = [2, 6, 8, 14]; P
[2, 6, 8, 14]
sage: P = MS(maes.integer_to_GF(P)); P
[ x x^2 + x]
[ x^3 x^3 + x^2 + x]
sage: maes.nibble_sub(P, algorithm="encrypt")
[x^3 + x^2 + 1 x^3 + x + 1]
[ x + 1 0]
sage: maes.nibble_sub(P, algorithm="decrypt")
[ x^2 x^3 + x]
[x^2 + x + 1 0]
```
TESTS:
The input block must be a matrix:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: maes.nibble_sub("mat")
Traceback (most recent call last):
...
TypeError: input block must be a 2 x 2 matrix over GF(16)
```
In addition, the dimensions of the input matrix must be $$2 \times 2$$:
```sage: K = FiniteField(16, "x")
sage: MS = MatrixSpace(K, 1, 2)
sage: mat = MS([[K("x^3 + x^2 + x + 1"), K("0")]])
sage: maes.nibble_sub(mat)
Traceback (most recent call last):
...
TypeError: input block must be a 2 x 2 matrix over GF(16)
```
The value for the option algorithm must be either the string "encrypt" or "decrypt":
```sage: K = FiniteField(16, "x")
sage: MS = MatrixSpace(K, 2, 2)
sage: mat = MS([[K("x^3 + x^2 + x + 1"), K("0")], [K("x^2 + x + 1"), K("x^3 + x")]])
sage: maes.nibble_sub(mat, algorithm="abc")
Traceback (most recent call last):
...
ValueError: the algorithm for nibble-sub must be either 'encrypt' or 'decrypt'
sage: maes.nibble_sub(mat, algorithm="e")
Traceback (most recent call last):
...
ValueError: the algorithm for nibble-sub must be either 'encrypt' or 'decrypt'
sage: maes.nibble_sub(mat, algorithm="d")
Traceback (most recent call last):
...
ValueError: the algorithm for nibble-sub must be either 'encrypt' or 'decrypt'
```
random_key()
A random key within the key space of this Mini-AES block cipher. Like the AES, Phan’s Mini-AES is a symmetric-key block cipher. A Mini-AES key is a block of 16 bits, or a $$2 \times 2$$ matrix with entries over the finite field $$\GF{2^4}$$. Thus the number of possible keys is $$2^{16} = 16^4$$.
OUTPUT:
• A $$2 \times 2$$ matrix over the finite field $$\GF{2^4}$$, used as a secret key for this Mini-AES block cipher.
EXAMPLES:
Each nibble of a key is an element of the finite field $$\GF{2^4}$$:
```sage: K = FiniteField(16, "x")
sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: key = maes.random_key()
sage: [key[i][j] in K for i in xrange(key.nrows()) for j in xrange(key.ncols())]
[True, True, True, True]
```
Generate a random key, then perform encryption and decryption using that key:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: K = FiniteField(16, "x")
sage: MS = MatrixSpace(K, 2, 2)
sage: key = maes.random_key()
sage: P = MS.random_element()
sage: C = maes.encrypt(P, key)
sage: plaintxt = maes.decrypt(C, key)
sage: plaintxt == P
True
```
round_key(key, n)
Return the round key for round n. Phan’s Mini-AES is defined to have two rounds. The round key $$K_0$$ is generated and used prior to the first round, with round keys $$K_1$$ and $$K_2$$ being used in rounds 1 and 2 respectively. In total, there are three round keys, each generated from the secret key key.
INPUT:
• key – the secret key
• n – non-negative integer; the round number
OUTPUT:
• The $$n$$-th round key.
EXAMPLES:
Obtaining the round keys from the secret key:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: K = FiniteField(16, "x")
sage: MS = MatrixSpace(K, 2, 2)
sage: key = MS([ [K("x^3 + x^2"), K("x^3 + x^2 + x + 1")], [K("x + 1"), K("0")] ])
sage: maes.round_key(key, 0)
[ x^3 + x^2 x^3 + x^2 + x + 1]
[ x + 1 0]
sage: key
[ x^3 + x^2 x^3 + x^2 + x + 1]
[ x + 1 0]
sage: maes.round_key(key, 1)
[ x + 1 x^3 + x^2 + x + 1]
[ 0 x^3 + x^2 + x + 1]
sage: maes.round_key(key, 2)
[x^2 + x x^3 + 1]
[x^2 + x x^2 + x]
```
TESTS:
Only two rounds are defined for this AES variant:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: K = FiniteField(16, "x")
sage: MS = MatrixSpace(K, 2, 2)
sage: key = MS([ [K("x^3 + x^2"), K("x^3 + x^2 + x + 1")], [K("x + 1"), K("0")] ])
sage: maes.round_key(key, -1)
Traceback (most recent call last):
...
ValueError: Mini-AES only defines two rounds
sage: maes.round_key(key, 3)
Traceback (most recent call last):
...
ValueError: Mini-AES only defines two rounds
```
The input key must be a matrix:
```sage: maes.round_key("key", 0)
Traceback (most recent call last):
...
TypeError: secret key must be a 2 x 2 matrix over GF(16)
```
In addition, the dimensions of the key matrix must be $$2 \times 2$$:
```sage: K = FiniteField(16, "x")
sage: MS = MatrixSpace(K, 1, 2)
sage: key = MS([[K("x^3 + x^2 + x + 1"), K("0")]])
sage: maes.round_key(key, 2)
Traceback (most recent call last):
...
TypeError: secret key must be a 2 x 2 matrix over GF(16)
```
sbox()
Return the S-box of Mini-AES.
EXAMPLES:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: maes.sbox()
(14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7)
```
shift_row(block)
Rotate each row of block to the left by a different number of nibbles. The first row (row zero) is left unchanged, while the second row (row one) is rotated left by one nibble. This has the effect of simply interchanging the two nibbles in the second row. Let $$b_0, b_1, b_2, b_3$$ be four nibbles arranged as the following $$2 \times 2$$ matrix
$\begin{split}\begin{bmatrix} b_0 & b_2 \\ b_1 & b_3 \end{bmatrix}\end{split}$
Then the operation of shift-row is the mapping
$\begin{split}\begin{bmatrix} b_0 & b_2 \\ b_1 & b_3 \end{bmatrix} \longmapsto \begin{bmatrix} b_0 & b_2 \\ b_3 & b_1 \end{bmatrix}\end{split}$
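In plain Python terms (a minimal sketch, independent of the Sage matrix types used below), the whole operation amounts to swapping the two entries of the second row:

```
# Sketch only: a 2 x 2 block stored as a list of rows [[b0, b2], [b1, b3]].
def shift_row_sketch(block):
    (b0, b2), (b1, b3) = block
    return [[b0, b2], [b3, b1]]

print(shift_row_sketch([["b0", "b2"], ["b1", "b3"]]))   # [['b0', 'b2'], ['b3', 'b1']]
```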
INPUT:
• block – a $$2 \times 2$$ matrix with entries over $$\GF{2^4}$$
OUTPUT:
• A $$2 \times 2$$ matrix resulting from applying shift-row on block.
EXAMPLES:
Here we work with elements of the finite field $$\GF{2^4}$$:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: K = FiniteField(16, "x")
sage: MS = MatrixSpace(K, 2, 2)
sage: mat = MS([[K("x^3 + x^2 + x + 1"), K("0")], [K("x^2 + x + 1"), K("x^3 + x")]])
sage: maes.shift_row(mat)
[x^3 + x^2 + x + 1 0]
[ x^3 + x x^2 + x + 1]
sage: mat
[x^3 + x^2 + x + 1 0]
[ x^2 + x + 1 x^3 + x]
```
But we can also work with binary strings:
```sage: bin = BinaryStrings()
sage: B = bin.encoding("Qt"); B
0101000101110100
sage: B = MS(maes.binary_to_GF(B)); B
[ x^2 + 1 1]
[x^2 + x + 1 x^2]
sage: maes.shift_row(B)
[ x^2 + 1 1]
[ x^2 x^2 + x + 1]
```
Here we work with integers $$n$$ such that $$0 \leq n \leq 15$$:
```sage: P = [3, 6, 9, 12]; P
[3, 6, 9, 12]
sage: P = MS(maes.integer_to_GF(P)); P
[ x + 1 x^2 + x]
[ x^3 + 1 x^3 + x^2]
sage: maes.shift_row(P)
[ x + 1 x^2 + x]
[x^3 + x^2 x^3 + 1]
```
TESTS:
The input block must be a matrix:
```sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: maes.shift_row("block")
Traceback (most recent call last):
...
TypeError: input block must be a 2 x 2 matrix over GF(16)
```
In addition, the dimensions of the input matrix must be $$2 \times 2$$:
```sage: K = FiniteField(16, "x")
sage: MS = MatrixSpace(K, 1, 2)
sage: mat = MS([[K("x^3 + x^2 + x + 1"), K("0")]])
sage: maes.shift_row(mat)
Traceback (most recent call last):
...
TypeError: input block must be a 2 x 2 matrix over GF(16)
```
http://physics.stackexchange.com/questions/26844/intuitive-sketch-of-the-correspondence-of-a-string-theory-to-its-limiting-quantu?answertab=oldest
# Intuitive sketch of the correspondence of a string theory to its limiting quantum field theory
I'm looking for an intuitive sketch of how one shows the correspondence of string theory to a certain QFT. My best guess is that one calculates the scattering amplitudes in the string theory as a series in some parameter (string length?) and shows that the leading order term is equal to the scattering amplitudes in the corresponding QFT.
If this is the case then my hope is that someone can elaborate and perhaps point me to some references. If I'm off base then my hope is that I can get a sketch and not be bogged down in heavy math (at this stage).
-
If you only want an intuitive sketch, this 600-character comment is more than enough. Histories in string theory look like Riemann surfaces. The long-distance limit inevitably makes all the tubes in the diagram much thinner than they're long - because one pays with energy for the spatial circumference of the cross section (i.e. length of the string). So then one has Feynman rules involving the lowest vibration states of strings - and they look like pointlike particles and have a discrete spectrum - and they interact by some vertices (given by the tube junctions), so we get Feynman rules of QFTs. – Luboš Motl Jan 26 '12 at 5:58
Hi, thanks for this. If you don't mind, perhaps you could elaborate (pedagogical references?). In particular, what do you mean by the long-distance limit? Is this equivalent to taking the string length to 0 (in the same way that the classical limit is found by taking $\hbar$ to 0)? – Kyle Jan 27 '12 at 2:08
If this is how string theory corresponds to QFT, then is it a fair assessment to say that the notion of a quantum field is only useful in that in some limit, where one can ignore the spatial extent of a string, the theory makes consistent predictions? That is to say we shouldn't ascribe "reality" to quantum fields in the same way that we presumably do to strings? – Kyle Jan 27 '12 at 2:18
http://programmingpraxis.com/2009/04/21/probabilistic-spell-checking/?like=1&source=post_flair&_wpnonce=27cac980df
# Programming Praxis
A collection of etudes, updated weekly, for the education and enjoyment of the savvy programmer
## Probabilistic Spell Checking
### April 21, 2009
In a previous exercise we built a spell checker based on storing words in a trie. That spell checker was exact: the spell checker reported success if and only if the checked word was in the dictionary. Today we will build a spell checker that is probabilistic: it correctly reports failure if the checked word is not in the dictionary, and correctly reports success if the checked word is in the dictionary, but may also incorrectly report success even if the checked word is not in the dictionary. The probability of error can be made arbitrarily small, as determined by the programmer.
We will use a bloom filter, a data structure invented by Burton Bloom in 1970 to test membership in a set. A bloom filter consists of an array of m bits, plus k different hash functions that map set elements to the range 0 to m-1. All the bits are initially 0. To add an element, calculate the k hash values of the element and set each of the k corresponding bits to 1. To test if an element is in the set, calculate the k hash values of the element and return true if all k corresponding bits are 1, and false if any of them is 0. In this way, it is certain that the element is not in the set if any of its hashes points to a 0 bit, but an element not in the set may be incorrectly reported as present if all k of its bits happen to be 1 because they were set by other elements.
The easiest way to build a large number of hash functions is to use a single hash function and “salt” the dictionary words with an additional letter. For instance, to hash the word “hello” three times, use “ahelloa”, “bhellob”, and “chelloc” and hash with a standard string-hashing function.
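Here is a minimal Python sketch of such a filter (an illustration only, not the suggested solution; it uses SHA-1 from `hashlib` as the single base hash, salted as described above, and the parameters m and k discussed next):

```
import hashlib

class BloomFilter:
    """m-bit bloom filter with k salted hashes, as described above."""
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _hashes(self, word):
        # One base hash function, "salted" with an extra letter per hash,
        # in the spirit of "ahelloa", "bhellob", "chelloc".
        for i in range(self.k):
            salt = chr(ord('a') + i)
            data = (salt + word + salt).encode('utf-8')
            yield int(hashlib.sha1(data).hexdigest(), 16) % self.m

    def add(self, word):
        for h in self._hashes(word):
            self.bits[h] = True

    def __contains__(self, word):
        return all(self.bits[h] for h in self._hashes(word))

bf = BloomFilter(m=1_000_000, k=7)
for w in ["hello", "world"]:
    bf.add(w)
print("hello" in bf, "xyzzy" in bf)   # True False (false positives are possible, but rare)
```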
There is some considerable math involved in determining the appropriate values of m and k. For a set of n elements, the probability p of a false positive is given by the formula:
$\bigg( 1 - \Big( 1 - \frac 1 m \Big) ^ { kn } \bigg) ^ k \approx \Big( 1 - e ^ { -kn/m } \Big) ^ k$
To give this a sense of scale, storing a fifty thousand word dictionary in a bloom filter of a million bits using seven hash functions will result in a false positive every 5102 words, on average.
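A quick check of that figure using the approximation above (again a Python sketch, not part of the original exercise):

```
from math import exp

m, n, k = 1_000_000, 50_000, 7
p = (1 - exp(-k * n / m)) ** k
print(p, 1 / p)   # about 1.96e-4, i.e. roughly one false positive per 5100 words
```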
Your task is to build a probabilistic spell checker as described above. When you are finished, you can read or run a suggested solution, or post your solution or discuss the exercise in the comments below.
Posted by programmingpraxis
Filed in Exercises
1 Comment »
### One Response to “Probabilistic Spell Checking”
1. FalconNL said
April 21, 2009 at 9:53 AM
```
import Data.BloomFilter.Easy
import Data.Char

main = do dict <- fmap (easyList 0.01 . map lowercase . lines) $ readFile "words.txt"
          print $ dict `contains` "ValiD"
          print $ dict `contains` "xyzzy"

contains b s = elemB (lowercase s) b
lowercase = map toLower
```
http://physics.stackexchange.com/questions/12625/how-does-hubbles-constant-affect-the-earths-orbit/22191
# How does Hubble's constant affect the Earth's orbit
If Hubble's constant is $2.33 \times 10^{-18} \text{ s}^{-1}$ and the earth orbits the sun with average distance of 150 million kilometers; Does that mean the earth's orbital radius increases approximately $11\text{ m}/\text{year}$? Does the earth's angular momentum change? If so, where does the torque come from? If the angular momentum doesn't change, does the earth's orbital velocity (length of a year) change? If so, where does the lost kinetic energy go?
Aside: the 11 meters per year figure comes from Hubble expansion of space the distance of the earth's orbital radius integrated over an entire year.
$$(2.33 \times 10^{-18}\text{ s}^{-1}) (1.5 \times 10^{11} \text{ m}) (3.15 \times 10^7 \text{ s}/\text{year}) = 11 \text{ m}/\text{year}$$
-
BTW-- You'll note that Henry and I have made use of the MathJax formatting utility that is active on the site---using LaTeX syntax to typeset mathematics. – dmckee♦ Jul 22 '11 at 23:28
## 3 Answers
No. Hubble's constant roughly says how the distance between two objects at rest with respect to the universe grows. It does not say that the distance between everything is growing - the size of the hydrogen atom is not increasing. (My size is increasing, but from dietary rather than cosmological sources.) The sizes of objects and orbits are maintained by a balance of forces (classically). To whatever extent one can think of the expansion of the universe as pushing the Earth and Sun apart, it is already taken into account in setting the Earth's orbit.
Added
The change in the Hubble constant can affect the orbit; see the paper linked by Ben Crowell. But just taking the Hubble constant and multiplying it by the Earth's orbital radius, as I believe you have done, does not give you anything sensible.
-
looks like you don't believe in the Big Rip. Regardless, there does appear to be some evidence for hubble expansion of the moon's orbit. Though it's a bit more difficult to laser measure the distance from the earth to the sun. – rae Jul 22 '11 at 21:37
@rae: The paper by Dumin was never published, and looks just plain wrong to me. It contradicts the Cooperstock paper that I referenced above, which was published in a peer-reviewed journal. There is not a scrap of GR anywhere in the Dumin paper; to my knowledge, no competent relativist has ever suggested that GR leads to an effect of the order of magnitude of the discrepancy that Dumin attributes to cosmological effects. The Big Rip is not really relevant. We don't know if the laws of physics are such as would cause a Big Rip, and the OP is not asking about the remote future. – Ben Crowell Jul 22 '11 at 21:54
@Ben Crowell: I agree that a changing Hubble constant affects the orbit, which is what that paper derives (see Eqn. 4.2, which depends only on the second derivative of the scale factor). I'm dealing only with the effects of a constant Hubble 'constant', since I believe that is what the original poster was asking about. His number of 11 m/year comes, I believe, from multiplying the Earth's orbital radius by the Hubble constant, which is certainly not correct. – BebopButUnsteady Jul 22 '11 at 22:30
@rae -- The problem is that there's no clearly-defined meaning to be attached to the phrase "the same point in its orbit the following year." If you use comoving coordinates (i.e., coordinates that expand with the Universe), then "the same point" will be further out than before. If you use local Minkowski coordinates, it won't. And of course there are infinitely many other choices. The common mistake people make is to think that comoving coordinates are what space is "really" doing, but the central idea of relativity is that coordinate systems are just conveniences, not "Truth." – Ted Bunn Jul 23 '11 at 17:43
For objects smaller than cosmic scale, such as atoms, planets and solar systems, the electromagnetic and gravitational forces that hold them together are not changing (as far as we know) and so those objects do not change size.
Between galaxies, so widely separated, there's just gravity, and that tends to average out due to every galaxy being surrounded by other galaxies in all directions. On a cosmic scale, galaxies are like a gas, with galaxies being the "molecules", described by the ideal gas equation. To account for gravity and the finite size of the galaxies, we might use the Van der Waals equation or some other variation, but that's beside the point, useful only for increasing accuracy.
Hubble's constant describes the rate at which the "container" of the galactic gas is expanding, the way the density of galaxies decreases over time. In an ordinary gas such as air, when in an expanding chamber, certainly the molecules are not expanding. Likewise, neither are the galaxies changing their sizes, at least not for Hubble-related reasons.
-
The reason the universe expands is gravitation, as described by Einstein's field equation: the evolution of the universe is governed by it. On cosmological scales, the universe can be seen as homogeneous and isotropic, with a very small density of matter and radiation. That density is too small to counteract the expansion, which is an effect of the initial conditions. In local regions, however, the density is many orders of magnitude higher, and the effect of expansion is all but counteracted by the binding gravitational attraction.
-
I also wouldn't agree that "the reason the universe expands is gravitation." The reason it has been expanding, ever since the Big Bang, is inertia, and this would be just as true in a Newtonian model as in one based on GR. – Ben Crowell Mar 11 '12 at 1:49
@BenCrowell: To your second comment: By inertia alone the expansion would slow down, whereas in fact the expansion is speeding up, currently modeled by a non-zero cosmological constant, an effect of gravitation. – C.R. Mar 11 '12 at 1:58
@BenCrowell: To your first comment: I read your paper, and all I see is that according to the authors themselves, this topic, the effect of cosmological expansion on local systems, is highly contentious. I doubt your paper has settled the problem and become the consensus. – C.R. Mar 11 '12 at 2:04
@BenCrowell: Regardless, it is well known that gravitation binds the solar system. Even if cosmological expansion has an effect, it is infinitesimally small. I don't see how that invalidates my phrase "tends to confine" at local scales. – C.R. Mar 11 '12 at 2:08
http://mathoverflow.net/questions/61995?sort=newest
## Kahler differentials of a hypersurface over a non-algebraically closed field
The following was recently on my algebraic geometry homework:
Let $k$ be an algebraically closed field, $f\in B=k[x_1,\ldots,x_n]$, and $A=B/(f)$. Show that $\Omega_{A/k}$ is locally free of rank $n-1$ $\iff$ $\nexists\, p\in k^n$ such that $f(p)=0$ and all $\frac{\partial f}{\partial x_i}(p)=0$.
Here, $\Omega_{A/k}$ is just the module of differentials, not the sheaf of differentials on the corresponding variety (so locally free is meant in the sense of modules). My solution (at least seems to) crucially depend on the Nullstellensatz, so my question is, are there any non-algebraically closed fields $k$ for which this result is still true? If so, is there an argument that treats them simultaneously? Or, if not, is there a good intuition for why algebraically closed is necessary?
-
If $k$ is not algebraically closed then $f(p)=0$ may be empty, so that the Jacobian condition on the right would be vacuous. The condition on the left, called smoothness, would still have content however. Notice that smoothness is stable under field extensions. So it is equivalent to the Jacobian condition over $\bar k$. – Donu Arapura Apr 17 2011 at 11:38
Regarding intuition, the problem is that $k$-points don't tell you everything about a $k$-variety when $k$ isn't algebraically closed, so there's no reason to expect the condition on the right to behave well for general $k$ as Donu says. Looking at $k$-points is a bad choice of "concretization" of $k$-varieties. A much better one is the functor which sends a $k$-variety to its $\bar{k}$-points equipped with the natural action of $\text{Gal}(\bar{k}/k)$. – Qiaochu Yuan Apr 17 2011 at 20:33
## 1 Answer
For $k$ alg. closed you can phrase the statement as $\Omega_{A/k}$ is loc. free iff Spec$(A)$ is smooth. 'Spec$(A)$ smooth iff $\Omega_{A/k}$ is loc free' should be true without requiring $k = \bar{k}$. But if $k \ne \bar{k}$ then the condition on the derivatives is not the same as smoothness. For example if $C$ is a curve defined over $\mathbb{R}$ with smooth $\mathbb{R}$ points but with singular $\mathbb{C}$ points then the condition on $f$ and its derivatives will be satisfied but there will be a maximal ideal of Spec$(A)$ with residue field $\mathbb{C}$ where $\Omega_{A/k}$ will have the wrong rank.
You can try this with $y^2 = (x^2+1)^2$ and the maximal ideal $(y, x^2 + 1)$ in $\mathbb{R}[x,y]$.
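A quick symbolic check (a SymPy sketch, not part of the original answer) confirms that the only points where $f = y^2 - (x^2+1)^2$ and both partial derivatives vanish are the complex conjugate points $(\pm i, 0)$, so every $\mathbb{R}$-point of this curve is smooth while the $\mathbb{C}$-points $(\pm i, 0)$ are singular:

```
import sympy as sp

x, y = sp.symbols('x y')
f = y**2 - (x**2 + 1)**2

# Singular points: f and both partial derivatives vanish simultaneously.
print(sp.solve([f, sp.diff(f, x), sp.diff(f, y)], [x, y]))
# expected output: [(-I, 0), (I, 0)] -- no real solutions
```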
But if $A(k) = A(\bar{k})$ then the original statement should hold over $k$.
-
http://conservapedia.com/Image_(mathematics)
# Image (mathematics)
### From Conservapedia
In mathematics, the image of a linear transformation is its range: the set of all values generated by the transformation. A matrix A, which represents such a transformation, has an image denoted by im(A).
If the columns (or, equivalently, the rows) of a square matrix A are linearly independent, then the image of the corresponding transformation is the entire space on which it acts.
## Examples
### Example 1
Consider all the points (vectors) in the plane, i.e., points (x,y), acted on by the transformation
$\mathbf{A} = \begin{pmatrix} -1 & 1 \\ 1 & 0 \end{pmatrix}$.
We can be sure that the image of this transformation is the entire plane, because for any point
$\begin{pmatrix} x \\ y \end{pmatrix}$
in the plane, there is another vector in the plane
$\begin{pmatrix} y \\ x+y \end{pmatrix}$
such that
$\begin{pmatrix} -1 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix}y \\x+y \end{pmatrix} = \begin{pmatrix}x \\y \end{pmatrix}$.
### Example 2
We continue to work in the plane, but now we examine the matrix
$\mathbf{A} = \begin{pmatrix} 3 & 1 \\ -6 & -2 \end{pmatrix}$.
Now if we examine how this matrix acts on an arbitrary point (a,b), we find that point is carried to
$\begin{pmatrix}3 & 1 \\-6 & -2 \end{pmatrix}\begin{pmatrix}a \\b \end{pmatrix} = \begin{pmatrix}3a+b \\-6a-2b \end{pmatrix} = \begin{pmatrix}1 \\-2 \end{pmatrix}(3a+b)$,
in other words, all points in the plane are carried to the line $y=-2x$.
We write
$im(\mathbf{A}) = \left\{\begin{pmatrix}1 \\-2 \end{pmatrix}t : t\in\mathbb{R}\right\}$.
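A quick numerical sanity check of this example (a NumPy sketch, not part of the original article):

```
import numpy as np

A = np.array([[3, 1], [-6, -2]])

rng = np.random.default_rng(0)
points = rng.normal(size=(2, 1000))             # 1000 random points (a, b) in the plane
images = A @ points                             # their images under A

# Every image (u, v) satisfies v = -2u, i.e. lies on the line spanned by (1, -2).
print(np.allclose(images[1], -2 * images[0]))   # True
print(np.linalg.matrix_rank(A))                 # 1
```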
http://mathhelpforum.com/discrete-math/142970-appealing-geometric-series.html
# Thread:
1. ## Appealing to geometric series
Derive the indicated result by appealing to the geometric series:
I know that there is something in geometric series that deals with a number being greater than or less than 1, such as |x|<1, so I know the series must converge, but to what? I know it will converge to a value less than 1...
2. Originally Posted by WartonMorton
Derive the indicated result by appealing to the geometric series:
I know that there is something in geometric series that deals with a number being greater than or less than 1, such as |x|<1, so I know the series must converge, but to what? I know it will converge to a value less than 1...
Note that the thing you're summing can be written as $(-x^2)^k$.
3. Originally Posted by mr fantastic
Note that the thing you're summing can be written as $(-x^2)^k$.
What does $(-x^2)^k$ give me? I guess I don't quite understand what form my answer should be in when they say derive the indicated result?
4. Originally Posted by WartonMorton
What does $(-x^2)^k$ give me? I guess I don't quite understand what form my answer should be in when they say derive the indicated result?
If $|x|<1$ then $\sum\limits_{k = 0}^\infty {\left( { - x^2 } \right)^k } = \frac{1}{{1 + x^2 }}$
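A quick numerical check of this identity for a sample value of $x$ (a Python sketch, not part of the thread):

```
# Partial sums of sum_{k >= 0} (-x^2)^k approach 1 / (1 + x^2) when |x| < 1.
x = 0.7
partial = sum((-x**2) ** k for k in range(200))
print(partial, 1 / (1 + x**2))   # both approximately 0.671141
```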
http://www.physicsforums.com/showthread.php?p=3379826
Physics Forums
## Easy question on proper time
Hi everyone!!
Let's take two events $P_1,P_2$ in a Minkowski spacetime, and let's choose them both lying on the $\omega$ axis, separated by a certain distance.
Now, I want to calculate the world line of the particle which experiences the least proper time during its trip between the two points.
The proper time, taking the velocity of the particle constant in its modulus (but not in its direction!!), can be written $d\tau=dt\sqrt{1-v^2/c^2}=\sqrt{dt^2-dx^2/c^2}=\frac{1}{c}\sqrt{d\omega^2-dx^2}$; so it's easy to set up a variational calculus, minimizing the integral$$\tau=\frac{1}{c}\int_{P_1}^{P_2}\sqrt{1-\left(\frac{dx}{d\omega}\right)^2}d\omega$$(the quantity under the square root can't be negative, you all know why).
So by taking $L=\sqrt{1-{x'}^2(\omega)}$ with $x'(\omega)=\frac{dx}{d\omega}$ we need to calculate the euler-lagrange equation:$$\frac{\partial L}{\partial x(\omega)}-\frac{d}{d\omega}\frac{\partial L}{\partial x'(\omega)}=0$$but since L doesn't depend on x(w) but only on x'(w) we get to$$\frac{d}{d\omega}\frac{\partial L}{\partial x'(\omega)}=\frac{d}{d\omega}\frac{x'(\omega)}{ \sqrt{1-{x'}^2(\omega)} }=0$$and so $$\frac{x'(\omega)}{ \sqrt{1-{x'}^2(\omega)} }=G\Rightarrow x'=\frac{G}{\sqrt{1+G^2}}$$ where G is a constant, determinable by the position of the two point on the spacetime.
So we end up with$$x(\omega)=\frac{G}{\sqrt{1+G^2}}\omega$$
This is supposed to be the path between the two points in spacetime of the particle which experiences the least proper time.
In the case considered in the beginning I took both points lying on the $\omega$ axis; so the constant G turns out to be 0, and the path I'm looking for is $x=0$, which corresponds to a particle at rest.
But this is impossible, since it's known that this path is the one with the longest proper time!!
I found various possible solutions for this:
1) I haven't understood relativity at all
2) I've made various mistakes
3) The variational method I've used did give me an extremal path, but the longest one, not the shortest. If so, does this mean that there is no minimum path? (I don't think so) OR do I have to introduce some Lagrange-multiplier-style constraint on the calculation (like the fact that I can't go back in time)??
Can you help me to solve this riddle??
Thanks for your availability!!!
Quote by teddd But this is impossible, since it's known that this path is the one with the longest proper time!! I found various possible solutions for this: 1) I haven't understood relativity at all 2) I've made various mistakes 3) The variational method I've used did give me an extremal path, but the longest one, not the shortest. If so, does this mean that there is no minimum path? (I don't think so) OR do I have to introduce some Lagrange-multiplier-style constraint on the calculation (like the fact that I can't go back in time)?? Can you help me to solve this riddle?? Thanks for your availability!!!
The Euler Lagrange equation only guarantees 'stationary' variation. It does not distinguish minimum, maximum, or saddle (neither). There are other conditions you can test for that distinguish these case (these are rarely worth the bother - you can normally figure out which you have). However, in relativity, the answer is simple: there is no extremal minimum. A light like path has zero proper time but is not stationary with respect to variation. A time like path can get arbitrarily close to zero proper time. Bottom line: variation of a Minkowskian metric only gives you curves of maximum proper time (in GR, it doesn't guarantee that - there are often saddle geodesics in GR which have non-extremal paths of both longer and shorter proper time).
Thanks PAllen!! But can you make clear to me why no minimum path can be found for a world line? The proper time can be made arbitrarily small if the world line to which it corresponds can get close enough to the null geodesic, but if the path has to pass between fixed points there are some constraints! I imagine that the path which has the shortest proper time is kind of the "longest" one. So if I take some oscillating function x=Asin(aw) that meets the requirements of a timelike path and whose tangent is the closest possible to the lightlike path, I guess that something has to come out; but again I realize now that the higher the frequency the longer the path, so effectively there is no minimum. Am I right?
Quote by teddd Thanks PAllen!! But can you make clear to me why no minimum path can be found for a world line? The proper time can be made arbitrarily small if the world line to which it corresponds can get close enough to the null geodesic, but if the path has to pass between fixed points there are some constraints! I imagine that the path which has the shortest proper time is kind of the "longest" one. So if I take some oscillating function x=Asin(aw) that meets the requirements of a timelike path and whose tangent is the closest possible to the lightlike path, I guess that something has to come out; but again I realize now that the higher the frequency the longer the path, so effectively there is no minimum. Am I right?
There is no variational minimum. Given any time like path between two events, there is a 'nearby' path that has lower proper time. As a result, the Euler-Lagrange equation will never pick out a minimum for paths between events with time like separation.
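To make the "arbitrarily close to zero" point concrete, here is a small numerical sketch (my addition, not from the thread; units with $c=1$, so $\omega=t$): a zigzag path between the same two events, travelled at constant speed $v$, has proper time $\sqrt{1-v^2}$ times that of the straight path, so it tends to zero as $v \to 1$, while the straight path ($v=0$) gives the maximum.

```
import numpy as np

def proper_time(w, x):
    """Proper time of a timelike path x(w) with c = 1: integral of sqrt(dw^2 - dx^2)."""
    dw, dx = np.diff(w), np.diff(x)
    return np.sum(np.sqrt(dw**2 - dx**2))

w = np.linspace(0.0, 1.0, 200001)
N = 50                                            # number of zigzags
phase = (2 * N * w) % 2.0
tri = np.where(phase < 1.0, phase, 2.0 - phase)   # triangle wave with slope +-2N

for v in [0.0, 0.5, 0.9, 0.99, 0.999]:
    x = (v / (2 * N)) * tri                       # zigzag of constant speed v, back to x = 0 at w = 1
    print(v, proper_time(w, x), np.sqrt(1.0 - v**2))
```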
http://mathhelpforum.com/calculus/138515-spherical-coordinates.html
# Thread:
1. ## Spherical Coordinates
I have to calculate some triple integral $\int\int\int_G f(x,y,z)dV$
with $G= \left\{x^2+y^2+z^2 \leq R^2, x^2+y^2\leq z^2, z\geq 0\right\}$
with spherical coordinates. Thus we substitute:
$x= \rho\sin(\phi)\cos(\theta)$
$y=\rho\sin(\phi)\sin(\theta)$
$z=\rho\cos(\phi)$
and $dV = \rho^2\sin(\phi)d\rho d\phi d\theta$
How do I find the boundaries of integration for $\phi,\rho,\theta$ w.r.t G?
I guess $0\leq \rho\leq R$, but how about $\phi,\theta$
2. Originally Posted by Dinkydoe
I have to calculate some triple integral $\int\int\int_G f(x,y,z)dV$
with $G= \left\{x^2+y^2+z^2 \leq R^2, x^2+y^2\leq z^2, z\geq 0\right\}$
with spherical coordinates. Thus we substitute:
$x= \rho\sin(\phi)\cos(\theta)$
$y=\rho\sin(\phi)\sin(\theta)$
$z=\rho\cos(\phi)$
and $dV = \rho^2\sin(\phi)d\rho d\phi d\theta$
How do I find the boundaries of integration for $\phi,\rho,\theta$ w.r.t G?
I actually have one of my notes online that has pretty much this exact same problem. Attached at the bottom.
Go to the definition of spherical co-ordinates. Theta is the angle of the projection onto the XY plane. So we want to go from 0-->2pi
Phi is the angle from the z-axis to the line of P. So we want to go from the bound of your cylinder --> pi/2
And for P, well I'll let you look that up in my note because it's best explained there. But generally you see that P must be greater than the line from the origin to the cylinder, so find an equation within the cylinder to model radius with height, and that is your min radius. Of course your max P is that of the sphere. Again, more detail in my notes!
Here you go:
Edit - Oops, I noticed that in my final integral of dV there should be a 2 in front of the integral, to get both the upper and lower hemispheres!
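For the record (an addition, not part of the thread): for the region $G$ exactly as written in the first post, the condition $x^2+y^2\leq z^2$ with $z\geq 0$ means $\tan\phi\leq 1$, so the bounds are $0\leq\rho\leq R$, $0\leq\phi\leq\pi/4$ and $0\leq\theta\leq 2\pi$, giving the volume $\frac{2\pi R^3}{3}\left(1-\frac{\sqrt{2}}{2}\right)$. A quick Monte Carlo check with $R=1$ (Python sketch):

```
import numpy as np

rng = np.random.default_rng(1)
R, n = 1.0, 2_000_000

pts = rng.uniform(-R, R, size=(n, 3))
x, y, z = pts.T
inside = (x**2 + y**2 + z**2 <= R**2) & (x**2 + y**2 <= z**2) & (z >= 0)

mc_volume = inside.mean() * (2 * R) ** 3                 # fraction of the bounding cube inside G
exact = 2 * np.pi * R**3 / 3 * (1 - np.sqrt(2) / 2)      # rho in [0,R], phi in [0,pi/4], theta in [0,2pi]
print(mc_volume, exact)                                  # both approximately 0.613
```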
http://mathoverflow.net/questions/25993/sets-with-positive-lebesgue-measure-boundary/26000
## sets with positive Lebesgue measure boundary
Consider a compact subset $K$ of $\mathbb{R}^n$ which is the closure of its interior. Does its boundary $\partial K$ have zero Lebesgue measure?
I guess the answer is no, because the topological assumption is invariant w.r.t. homeomorphism, in contrast to being of zero Lebesgue measure. But I don't see any simple counterexample.
-
## 3 Answers
Construct a Cantor set of positive measure in much the same way as you make the `standard' Cantor set but make sure the lengths of the deleted intervals add up to 1/2, say. Let $U$ be the union of the intervals that are deleted at the even-numbered steps and let $V$ be the union of the intervals deleted at the odd-numbered steps. The Cantor set is the common boundary of $U$ and $V$; their closures are as required.
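One standard way to arrange the deleted lengths (my choice of schedule; the answer above does not fix a particular one) is to delete $2^{n-1}$ middle intervals of length $4^{-n}$ at step $n$, so that the removed lengths sum to $\sum_{n\geq 1} 2^{n-1} 4^{-n} = \tfrac{1}{2}$. A quick numeric check (Python sketch):

```
# Total removed length for a "fat" (Smith-Volterra-Cantor style) construction:
# 2^(n-1) middle intervals of length 4^(-n) at step n.
removed = sum(2 ** (n - 1) / 4 ** n for n in range(1, 60))
print(removed)   # 0.4999999999999999 -> the series converges to 1/2
```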
-
http://www.jstor.org/pss/1986455 Here is constructed a Jordan Curve with positive measure. This gives an example.
-
Ah! You beat me to it. I'll delete my own answer, which is a duplicate of yours (and you need the rep more than I do). Let me just add that this curve (the Osgood curve) has been mentioned here on MO before. The search box will find it for you. – Harald Hanche-Olsen May 26 2010 at 12:21
It does the trick. The Jordan curve in that paper is a "thinned out" variant of the Peano curve. – Xandi Tuni May 26 2010 at 12:24
Let $D_0,D_1,\ldots$ enumerate a sequence of disjoint intervals in the unit interval with $\bigcup_n D_n$ open dense and having measure less than $1$. For example, place a very tiny interval around each rational number, so that the sum of the intervals is less than $1$. Now, let $E=\bigcup_n D_{2n}$ be the union of the even intervals and $O=\bigcup_n D_{2n+1}$, the union of the odd intervals. The entire interval is the union of $E$, $O$ and their boundaries, so one of these boundaries must have positive measure. So we may take $K$ to be the closure of $E$ or $O$.
-
http://physics.stackexchange.com/questions/tagged/mechanics
# Tagged Questions
General questions about the way objects move and interact. This tag should be used when the tags for certain kinds of mechanics (newtonian-mechanics, classical-mechanics, quantum-mechanics, etc.) are too specific.
1answer
41 views
### How large of a solar sail would be needed to travel to mars in under a year?
I'm attempting to approach this using the identity $$F/A = I/c$$ I can solve for Area easily enough $$A = F(c/I)$$ and I know the distance $d$ is $$d=1/2(at^2)$$ But I'm having difficulty trying to ...
0answers
19 views
### Ignoring rotational inertia of a ballistic pendulum
When calculating the initial velocity of a steel ball by using a ballistic pendulum, but by ignoring the rotational inertia of the pendulum rod, a systematic error is introduced. My question is will ...
1answer
62 views
### Calculating how a polygon bounces off a plane
I'd like to calculate how polygons bounce off a plane. In this picture, the square doesn't bounce straight up, but instead it bounces somewhat to the right and starts spinning. But I have no idea ...
0answers
6 views
### Finite Element, NASTRAN, how to print the differential stiffness matrix in .f06 output file [migrated]
We are working on an optimization problem in which we can approximate the eigenvalue calculation by assuming a constant eigenvector, using the formula: ...
0answers
15 views
### Row of pivoted magnets and energy scale
This question is about a system involving a horizontal row of length L of equally spaced pivotable magnets, each with a pole at either end. These magnets will often be referred to as units. So each ...
1answer
34 views
### What lifting mechanism is likely to have the best energy recovery ratio?
Suppose I was designing an apparatus which needed to lift 250kg 5cm high, hold it there for a few seconds, and then lower the object back to the original height. Such a process would need to be ...
2answers
63 views
### Hamiltonian of Harmonic Oscillator with Spin Term
We have the usual Hamiltonian for the 1D Harmonic Oscillator: $\hat{H_{0}}=\frac{\hat{P^2}}{2m} + \frac{1}{2}m \omega \hat{X^2}$ Now a new term has been added to the Hamiltonian, \$\hat{H} = ...
1answer
27 views
### Terminal velocity and force pull [closed]
I can't figure out this problem . Buoyancy force and gravity remain constant and viscous force by is $-kv$. And these forces all balance, but data isn't given according to that, or I am not able to ...
1answer
47 views
### Floating Objects and Weight
The Situation: A ball is placed in a beaker filled with water and floats. It is also attached to the bottom of the beaker via a string. The Question: The ball is attached to the beaker, thus ...
1answer
35 views
### Finding the coefficient of restitution
A ball moving with velocity $1 \hat i \ ms^{-1}$ collides with a frictionless wall; after the collision the velocity of the ball becomes $1/2 \hat j \ ms^{-1}$. Find the coefficient of restitution ...
0answers
72 views
### How to calculate mechanical advantage of a worm gear?
How to calculate mechanical advantage of a worm gear? My textbook simply use the turn ratio as the mechanical advantage, but I'm not sure how that works. My thinking: If the worm has a radius of ...
1answer
86 views
### Confusions about rotational dynamics and centripetal force
I am a high school student. I am having confusions about the centripetal force and rotational motion . I have known that a body will be in rest or in uniform velocity if any force is not applied. But ...
1answer
59 views
### A sphere rolling down a rough wedge which lying on a smooth surface
A sphere of mass $m$ and radius $r$ rolls down from rest on an inclined (making an angle $\phi$ with the horizontal ) and rough surface of a wedge of mass $M$ which stays on a smooth horizontal floor. ...
1answer
135 views
### Mechanics question
The following is a question from a past exam paper that I'm working on, as I have an exam coming soon. I would appreciate any help. A fairground ride takes the form of a hollow, cylinder of radius ...
3answers
148 views
### Does more rain strike a vehicle while moving or while stopped (or neither)? [duplicate]
Assume there is a rainstorm, and the rain falling over the entire subject area is perfectly, uniformly distributed. Now assume there are two identical cars in this area. One is standing still, and ...
4answers
77 views
### Would a phone move upon vibration in a completely uniform situation?
I was sitting down yesterday and saw my phone vibrate on a side, and it moved about a centimetre per vibration. I wondered why it moves, and thought perhaps that the side it was on had a slight ...
0answers
51 views
### Scaling arguments for the Contact mechanics between two elastic spheres
I am studying a bit granular dynamics and I have seen that two spheres of radius $R$ in contact with a contact area of radius $a$ would need an applied force $F$ on this two spheres that is nonlinear ...
3answers
63 views
### Friction on roads
I have a question with which I am having trouble. A 71m radius curve is banked for a design speed of 91km/h. Given a coefficient of static friction of 0.32, what is the range of speeds in which a car ...
2answers
112 views
### Whats the anti-torque mechanism in horizontal take-off aircraft?
In most helicopters there is the anti-torque tail rotor to prevent the body from spinning in the opposite direction to the main rotor. What's the equivalent mechanism in horizontal takeoff single ...
1answer
52 views
### Hollow stone columns provide more support?
In history class in elementary school I remember learning that the Greeks would build their stone columns hollow because they thought this provided more support. Is it true that a hollow column is ...
1answer
44 views
### Measuring vibration and converting to force (N)
The test is: To have a rotating machine, bolted into a factory floor. To measure the vibration on 3 axis (output of accelerometers can be acceleration or velocity in $\mathrm{m/s}$ or ...
1answer
82 views
### Shear Flow corresponding to Eccentric Shear Force of a Closed-section Beam (Structural Analysis - Mechanics)
Been stumped with this question for way too long... its a beam with a thin-walled rectangular cross-section, and a shear force is acting at a distance from the shear center. I know my decomposition of ...
1answer
82 views
### (Re-)use of a space elevator (basic mechanics and potential energy source)
It's said that if a space elevator were made then it would be much more efficient to put objects in orbit. I've always wondered about the durability of a space elevator though. I don't mean the ...
1answer
29 views
### Mechanical shock resistance as a function of shape
I have a system where I'm dropping glass tubes filled with some sample from a certain height, along a track. I can apply a back-pressure of air to push them down faster, and in general the faster they ...
2answers
101 views
### How do you tell what forces do no work?
The total mass of the children and the toboggan is 66 kg. The force the parent exerts is 58 N (18 degrees above the horizontal). What 3 forces/ components do no work on the toboggan? I said the ...
1answer
102 views
### Intuition behind Work
I have a doubt in understanding the intuition behind the concept of work. First of all, I think this isn't duplicate, I've searched on the site, and the closest thing I've found was this post which is ...
2answers
144 views
### Universe Expansion and two tennis balls
Clear the universe of all matter except for two tennis balls. Place the two tennis balls in the same inertial frame 1 Mpc apart. Are the tennis balls getting further apart? Will the tennis balls ...
0answers
119 views
### Maximum Shear on a Beam - beam with fixed support on one end and hinge on other end
A beam $\displaystyle 3m$ long with fixed support on one end and hinge on the other end is subjected to a uniform load of $10\ kN/m$. What is the maximum shear of this beam? The solution is this one: ...
2answers
145 views
### Cantilever Beam - Maximum Shear of the Beam
A cantilever beam $3\ \text{m}$ long is subjected to a moment of $10\ \text{kNm}$ at the free end. Find the maximum shear of the beam. The answer is "There is no vertical load, shear is zero" ...
1answer
79 views
### Finding the acceleration of a cart rolling on a table
The cart is rolls frictionless on the table. It has a mass of $1 kg$. Attached to it are 2 strings, that go through two frictionless sheaves. The weights have masses as in the picture. ...
1answer
65 views
### Why is $dL = L d\epsilon$?
Let's say there's a random elastic material. It's length is $L$ and it's tensile strain $\epsilon= (L-L_0)/L_0$ Now, when one pulls on it the following is true: \$dW_{tot}=FdL =\sigma AdL=\sigma A L ...
1answer
153 views
### Drag on a spinning ball in fluid
I am a physics newbie (high school level) and I am wondering what happens when a spherical object is spinning on the spot in a bunch of gas (no gravity here, just an imaginary physics sandbox). Am I ...
1answer
65 views
### Atomic physics through classical resonance?
I have a rather general question regarding the theory of Quantum Mechanics. To preface this question, consider a violin string. When a violinist exposes the string to a bow, this is exposing the ...
2answers
69 views
### What fraction of peak horsepower do typical 4 door passenger vehicles use?
I was surprised when I looked at the power rating of the engine used on a Humvee. It's only ~190 horsepower, which is exceeded by many sedan engines. So an obvious question is why doesn't my Camry SE ...
1answer
63 views
### Lego Blender and gear ratios
I bought the Lego Kit LEGO Crazy Contraptions. It allows the learner to build a blender. My son, the engineer, said something to our grandson, his son, about a gear ratio. Can someone translate?
4answers
203 views
### Pseudo force in rotating frames
A bug of mass $m$ crawls out along a radial scratch of a phonographic disc rotating at $\omega$ angular velocity. It travels with constant velocity $v$ with respect to the disc. What are the forces ...
2answers
115 views
### distance of electron from proton
An electron is projected, with an initial speed of $1.10 \times 10^5 \text{m/s}$, directly towards a proton that is essentially at rest. If the electron is initially a great distance from the ...
0answers
17 views
### Allowed Quantum States- Filkelstein and Rubinstein constraints
So basically i'm doing a report on Finkelstein and Rubinstein constraints. I have a system where the allowed quantum states satisfy ...
2answers
176 views
### Why is the Lagrangian quadratic in $\dot{q}$? [duplicate]
My teacher said we only consider Lagrangians which are quadratic in $\dot{q}$, and we don't take other Lagrangians. I couldn't understand why. Can anyone please explain this?
0answers
89 views
### Internal moment in the hull of a pressure vessel
This question is related to the course structural analysis. As part of our exam grade every student has been given different multiple homework assignments which we have to solve. One of the problems ...
0answers
59 views
### Limitations on the choice of axis of rotation regarding rolling wheels
Consider a situation where a wheel is rolling without friction on a level surface. Call the center of the wheel $C$, the point where the wheel contacts the ground $G$, and some arbitrary other point ...
4answers
317 views
### Difference between torque and moment
What is the difference between torque and moment? I would like to see mathematical definitions for both quantities. I also do not prefer definitions like "It is the tendancy..../It is a measure of ...
2answers
73 views
### How can I understand work conceptually?
I'm in a mechanical physics class, and I'm having a hard time understanding what the quantity of work represents. How can I understand it conceptually?
0answers
59 views
### Fluid flow in a hollow spring(helix)
Liquid flowing in a long hollow spring(helix). Any effects on the flow rate etc when the spring is stretched or compressed? When the stretching or compressing of spring is done at brisk speeds the ...
2answers
144 views
### Is there any case in physics where the equations of motion depend on high time derivatives of the position?
For example if the force on a particle is of the form $\mathbf F = \mathbf F(\mathbf r, \dot{\mathbf r}, \ddot{\mathbf r}, \dddot{\mathbf r})$, then the equation of motion would be a third order ...
1answer
94 views
### How much (usable) potential energy is stored in a compound bow?
I have done a bit of reading about the energy stored in bows, but I haven't seen anywhere a description of how much energy actually is stored. Clearly there are many factors, bow design being ...
2answers
140 views
### Total Mechanical Advantage
How do you find the net Mechanical Advantage (MA) of two joint machines. Do you add or multiply the individual MA? Suppose I have two sets of wheel and axle connected by a fixed pulley. Each of the ...
1answer
49 views
### In which direction is the acceleration directed in a non uniform circular motion?
Acceleration is directed towards the center of the circle in a uniform circular motion. Is it same for the non-uniform circular motion?
2answers
222 views
### Translation Invariance without Momentum Conservation?
Instead of the actual gravitational force, in which the two masses enter symmetrically, consider something like $$\vec F_{ab} = G\frac{m_a m_b^2}{|\vec r_a - \vec r_b|^2}\hat r_{ab}$$ where \$\vec ...
0answers
47 views
### Fading transition and rotation of and object in 2D
I'm looking for sources about I guess dynamics subject. The model I'd like to solve is reduced to a question of: How does a force applied on a certain point of an object results in both fading ...
http://unapologetic.wordpress.com/2011/05/23/vector-fields/?like=1&source=post_flair&_wpnonce=39170bc4ce
|
# The Unapologetic Mathematician
## Vector Fields
At last, we get back to the differential geometry and topology. Let’s say that we have a manifold $M$ with tangent bundle $\mathcal{T}M$, which of course comes with a projection map $\pi:\mathcal{T}M\to M$. If $U\subseteq M$ is an open submanifold, we can restrict the bundle to the tangent bundle $\pi:\mathcal{T}U\to U$ with no real difficulty.
Now a “vector field” on $U$ is a “section” of this projection map. That is, it’s a function $v:U\to\mathcal{T}U$ so that the composition $\pi\circ v:U\to U$ is the identity map on $U$. In other words, to every point $p\in U$ we get a vector $v(p)\in\mathcal{T}_pU$ at that point.
I should step aside to dissuade people from a common mistake. Back in multivariable calculus, it’s common to say that a vector field in $\mathbb{R}^3$ is a function which assigns “a vector” to every point in some region $U\subseteq\mathbb{R}^3$; that is, a function $U\to\mathbb{R}^3$. The problem here is that it’s assuming that every point gets a vector in the same vector space, when actually each point gets assigned a vector in its own tangent space.
The confusion comes because we know that if $M$ has dimension $n$ then each tangent space $\mathcal{T}_pM$ has dimension $n$, and thus they’re all isomorphic. Worse, when working over Euclidean space there is a canonical identification between a tangent space $\mathcal{T}_pE$ and the space $E$ itself, and thus between any two tangent spaces. But when we’re dealing with an arbitrary manifold there is no such canonical way to compare vectors based at different points; we have to be careful to keep them separate.
For each $U\subseteq M$ we have a collection of vector fields, which we will write $\mathfrak{X}_MU$, or $\mathfrak{X}U$ for short. It should be apparent that if $V\subseteq U$ is an open subspace we can restrict a vector field on $U$ to one on $V$, which means we’re talking about a presheaf. In fact, it’s not hard to see that we can uniquely glue together vector fields which agree on shared domains, meaning we have a sheaf of vector fields.
For any $U$, we can define the sum and scalar multiple of vector fields on $U$ just by defining them pointwise. That is, if $v_1$ and $v_2$ are vector fields on $U$ and $a_1$ and $a_2$ are real scalars, then we define
$\displaystyle\left[a_1v_1+a_2v_2\right](p)=a_1v_1(p)+a_2v_2(p)$
using the addition and scalar multiplication in $\mathcal{T}_pM$. But that’s not all; we can also multiply a vector field $v\in\mathfrak{X}U$ by any function $f\in\mathcal{O}U$:
$\displaystyle\left[fv\right](p)=f(p)v(p)$
using the scalar multiplication in $\mathcal{T}_pM$. This makes $\mathfrak{X}_M$ into a sheaf of modules over the sheaf of rings $\mathcal{O}_M$.
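For readers who like to see this in coordinates, here is a minimal computational sketch (Python; it leans on the canonical identification $\mathcal{T}_p\mathbb{R}^2\cong\mathbb{R}^2$ discussed above, and all function names are purely illustrative):

````
import numpy as np

# Vector fields on an open U in R^2, modelled as functions p -> v(p) in T_p U = R^2.
def v1(p):
    x, y = p
    return np.array([-y, x])        # a "rotational" field

def v2(p):
    return np.array([1.0, 0.0])     # a "constant" field (only meaningful via T_p R^2 = R^2)

def f(p):
    x, y = p
    return x**2 + y**2              # a smooth function in O(U)

# Pointwise module operations: (a1*v1 + a2*v2)(p) and (f*v)(p).
def linear_combination(a1, w1, a2, w2):
    return lambda p: a1 * w1(p) + a2 * w2(p)

def multiply_by_function(g, w):
    return lambda p: g(p) * w(p)

p = np.array([1.0, 2.0])
print(linear_combination(2.0, v1, -1.0, v2)(p))   # expected: [-5.  2.]
print(multiply_by_function(f, v1)(p))             # expected: [-10.  5.]
````

On a general manifold there is no global identification of the tangent spaces with a single vector space, which is exactly the point of the post; the sketch only works because $U\subseteq\mathbb{R}^2$.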
Posted by John Armstrong | Differential Topology, Topology
|
http://math.stackexchange.com/questions/166183/is-this-mathematical-definition-iterative-if-not-what-does-an-iterative-functi/166197
|
# Is this mathematical definition iterative? If not, what does an iterative function look like?
I was debating with someone about iterative vs. recursive approaches in programming. I was defending the iterative side. He then told me that the true definition of the Fibonacci numbers is this: $$f(n) = f(n-1) + f(n-2);\space n > 2$$ with $f(0) = 0$ and $f(1) = 1$.
Then I replied with the factorial example, because I had to admit that for this one he was right; I don't see another definition. Factorial is defined with this recursive mathematical definition:
$$\operatorname{fact}(n)=\begin{cases}1 & \text{if } n=0 \\ n\cdot\operatorname{fact}(n-1) & \text{if } n > 0\end{cases}$$
But on the same page they also say that factorial can be implemented iteratively, and the mathematical definition, with the pseudocode that goes with it, is:
````function factorial is:
input: integer n such that n >= 0
output: [n × (n-1) × (n-2) × … × 1]
1. create new variable called running_total with a value of 1
2. begin loop
1. if n is 0, exit loop
2. set running_total to (running_total × n)
3. decrement n
4. repeat loop
3. return running_total
end factorial
````
$$fact(n) = fact_{acc}(n, 1)$$ $$\operatorname{fact_{acc}}(n, t)=\begin{cases}t & \text{if } n=0 \\ \operatorname{fact_{acc}}(n-1, n\cdot{t}) & \text{if } n > 0\end{cases}$$
Then he replied that it's not an iterative mathematical definition but another recursive one. But if you check the source (I know that Wikipedia is not the best source, but...), it clearly states that this is the representation of the iterative pseudocode.
The way I read it: as long as $n>0$, repeat the body of the function and decrease $n$; $t$ is used as an accumulator, and when $n$ reaches 0, we simply keep the result that $t$ contains.
Is this a mathematical definition of a recursive or an iterative function? If it's not an iterative definition, what would an iterative mathematical definition of the factorial look like? (Not the code to implement it, the mathematical definition.) If there is no way to represent that function with an iterative mathematical definition, can you show some examples of very easy to understand iterative mathematical definitions, and explain how to read such a definition?
Don't point me to this: Factorial Function - Recursive and iterative implementation. It's only about the implementation; it doesn't really talk about the mathematical definition. I'm only interested in the mathematical definition, nothing else.
Edit: I want to thank Shaktal, who edited my post and added some formatting. From the formatting he applied to the first two mathematical definitions of the function, I was able to format the rest.
I also want to thank everyone for the clarification about the mathematical definition versus the implementation. I will only see these things in math next school year, but I already saw recursion and iteration a long time ago in programming.
-
For any recursive program there exists an iterative program and vice versa. In mathematics there are many forms of the same thing, but I haven't come across any iterative definitions; it is just how you implement it in your program. – Saurabh Jul 3 '12 at 17:18
## 4 Answers
The usual mathematical definition of factorial is $n! = \prod_{j=1}^n j$. I would say this is neither "iterative" nor "recursive" in your sense: the distinction between those is a matter of implementation rather than mathematics. You might also like $n! = \int_0^\infty x^n e^{-x}\ dx$. Or an even more purely "declarative", combinatorial definition: $n!$ is the number of permutations of $n$ objects.
As for the Fibonacci numbers, you might like the following declarative definition: $F_n$ is the number of subsets of $\{1,2,\ldots,n-2\}$ that don't contain any two consecutive integers.
There is a notion of "recursive function" in mathematical logic, but that's something quite different.
-
Thanks for the clarification; so both he and I were wrong, because it is the implementation that is recursive or iterative. Also, I now see those mathematical definitions from another angle: there are multiple ways to interpret them. – user1115057 Jul 4 '12 at 17:56
Mathematics is declarative (it tells you what the value of something is) whereas code is procedural (it tells how how to compute the value of something).
Therefore asking for an iterative definition of the factorial function doesn't really make any sense. Any definition you could write down would essentially be pseudocode for computing values of the factorial function.
An implementation of the factorial function can be either iterative or recursive, but the function itself isn't inherently either. Note that an implementation isn't necessarily either iterative or recursive. For example, here are three different definitions of the factorial function in the language Haskell:
````fact 0 = 1
fact n = n * fact (n-1)

fact n = fact' 1 n
  where
    fact' a 0 = a
    fact' a b = fact' (a*b) (b-1)

fact n = product [1..n]
````
You might call the first definition recursive and the second definition iterative - and the final definition is a definition in terms of more primitive functions. But they all define the same function.
-
There is not really such a thing as an iterative definition of a mathematical object (such as a function). You could refer to an iterative process in a definition, but the definition itself must describe the whole object in a finite amount of text. That being said, there is nothing against defining the factorial by $$n! = \prod_{i=1}^ni \qquad\text{for }n\in\mathbb N$$ which has nothing recursive to it.
-
At my school level, we are not supposed to know what factorials, integrals, derivatives, or recursion are... But I am the kind of guy who likes math and loves learning new things by myself, so I know what these are. Still, because we haven't learned them yet, I have some difficulty with notation. Can you explain the big pi symbol you used to define the factorial? The name of that notation or symbol, or a link to Wikipedia explaining it, would be appreciated. By the way, I will learn all those things maybe this year; I will finally be at college. – user1115057 Jul 3 '12 at 16:42
@user1115057 If you're familiar with the big sigma notation, then big pi notation is the analog of that but for products, instead of sums, e.g. $$5!=\prod_{i=1}^{5}{i}=1\times2\times3\times4\times5$$ And: $$n!=\prod_{i=1}^{n}{i}=1\times2\times\cdots\times n$$ – Shaktal Jul 3 '12 at 16:45
Yeah, I am more familiar with the big sigma; we used it once at school last school year, which ended 3 weeks ago, in physics class, but only once. I still remember how to use it, and when I do integrals I usually use that sigma notation. Thanks for the clarification. – user1115057 Jul 3 '12 at 16:49
**At my scholarship level, ** ... probably no sense in arguing about recursive/iterative with your friend, then. – GEdgar Jul 3 '12 at 17:19
The factorial can be given by a simple recurrence relation. The elements of the sequence defined by $$\begin{eqnarray*} x_0 &=& 1 \\ x_{n} &=& n x_{n-1} \end{eqnarray*}$$ are indeed $x_n = n!$. The collection of pairs $\{(n,x_n)\}$ for $n=0,1,2,\ldots$ is a function. It would not, however, be called a recursive function.
An iterated function is of the form $$f^m(x) = \underbrace{(f\circ \cdots \circ f)}_{m\mathrm{\, times}}(x)$$ where $f^0(x) = x$. That is, $f^m(x)$ is $f$ applied to $x$ $m$ times. Try to build factorial out of such a function. Some examples for $m=1,2,\ldots$,
(1) if $f(n) = n$, $f^m(n) = n$,
(2) if $f(n) = n^s$, $f^m(n) = n^{s^m}$,
(3) if $f(n) = n!$, $f^m(n) = n\underbrace{!\cdots!}_{m}$.
The "iterative" definition of factorial given in your pseudocode is
$$\begin{eqnarray*} f(0,m) &=& m \\ f(n,m) &=& f(n-1,n m). \end{eqnarray*}$$ Then $f(n,1) = n!$. This is really a recurrence relation taking two variables.
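To see the correspondence concretely, here is a minimal sketch (Python; the function names are mine) of the same factorial written three ways, including the two-variable recurrence above both as a recursion and unrolled as a loop:

````
def fact_recursive(n):
    """Direct transcription of fact(0) = 1, fact(n) = n * fact(n-1)."""
    return 1 if n == 0 else n * fact_recursive(n - 1)

def fact_acc(n, t=1):
    """The two-variable recurrence f(n, m) = f(n-1, n*m) with f(0, m) = m."""
    return t if n == 0 else fact_acc(n - 1, n * t)

def fact_loop(n):
    """The same two-variable recurrence unrolled as a loop with an accumulator."""
    t = 1
    while n > 0:
        t, n = n * t, n - 1
    return t

assert fact_recursive(10) == fact_acc(10) == fact_loop(10) == 3628800
````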
-
|
http://quant.stackexchange.com/questions/226/whats-the-difference-between-volatility-and-variance/1200
|
# What's the difference between volatility and variance?
How do they differ in what they imply about an underlying's (or any variable's) movement?
-
## 6 Answers
Volatility is typically unobservable, and as such estimated --- for example via the (sample) variance of returns, or more frequently, its square root yielding the standard deviation of returns as a volatility estimate.
There are also countless models for volatility, from old applied models like Garman/Klass to exponential decaying and formal models such as GARCH or Stochastic Volatility.
As for forecasts of the movement: well, that is a different topic as movement is the first moment (mean, location) whereas volatility is a second moment (dispersion, variance, volatility). So in a certain sense, volatility estimates do not give you estimates of future direction but of future ranges of movement.
-
So what is the difference from your pov ? – nicolas Jan 27 '12 at 10:53
Volatility is not ever the "(sample) variance of returns". Volatility is always expressed as standard deviation and hence squared results in variance. – Freddy Dec 30 '12 at 12:58
By volatility people usually refer to the annualized standard deviation of an asset. For an asset it's usually quoted as a percentage of the asset price (i.e. the return volatility). For a portfolio, it is often quoted in currency units. Variance is the square of the standard deviation. It is usually not quoted directly because it doesn't have an intuitive unit of measure. Instead, it is used in variance decomposition, e.g. the idiosyncratic variance of a portfolio is 6% of the total portfolio variance.
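As a small numerical illustration of this convention (a sketch that assumes daily simple returns and the common $\sqrt{252}$ annualization; the numbers are made up):

````
import numpy as np

# Hypothetical daily returns of an asset, in decimal form (0.01 = 1%).
daily_returns = np.array([0.010, -0.004, 0.003, 0.007, -0.012, 0.005, 0.001])

daily_var = daily_returns.var(ddof=1)     # sample variance of the returns
daily_vol = np.sqrt(daily_var)            # daily volatility (standard deviation)
annual_vol = daily_vol * np.sqrt(252)     # annualized under the usual sqrt-time scaling

print(f"daily vol  = {daily_vol:.4%}")
print(f"annual vol = {annual_vol:.2%}")
````

Squaring `annual_vol` gives the corresponding annualized variance, the quantity the other answers contrast with volatility.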
-
• The main underlying difference is in their definition. Variance has a fixed mathematical definition, whereas volatility does not as such. Volatility is said to be a measure of the fluctuations of a process.
• Volatility is a subjective term, whereas variance is an objective term, i.e. given the data you can definitely find the variance, while you can't find the volatility just by having the data. Volatility is associated with the process, not with the data.
• In order to know the volatility you need to have an idea of the process, i.e. you need to have an observation of the dispersion of the process. Different processes will have different methods to compute volatilities, based on the underlying assumptions of the process.
-
Suppose X is a random variable representing the returns of an asset having finite mean $\mu$ and variance $\sigma^2>0$.
• Variance $\sigma^2$ represents the expected squared deviation of $X$ from $\mu$. Intuitively, this is a measure of how dispersed returns are about the mean. If returns are measured in $\%$, then the units of variance are $\%^2$. However, for many people $\%^2$ is difficult to interpret.
• Volatility $\sigma$ is the square root of variance, and has units of $\%$. This change in units makes volatility more interpretable and a better tool for analysis. If we further assume $X$ follows a Gaussian distribution, then $\sigma$ provides many additional insights.
Volatility is a tool commonly used in univariate cases, e.g. when speaking of the returns of one stock, one bond, or one portfolio.
In the multivariate setting, variance is used instead, e.g. a covariance matrix, because taking the square root of a matrix would add an unnecessary layer of complexity.
-
Volatility is essentially quadratic variation. It is a property of sample paths, not probability measures. In other words, it can be calculated given a single historical path and doesn't depend upon the probability you assign to that path.
Variance, and standard deviation, are functions of the probability you assign to events.
-
I think variance is called quadratic measure, or double moment, not volatility. – S_H Feb 8 '11 at 0:48
I think you missed the point, Harpreet. If you take e.g. a standard Brownian motion and an Ornstein-Uhlenbeck (aka Vasicek) process, they both have the same (constant) instantaneous volatility. But their variances are different ; in the BM case the variance grows like time, whereas in the OU case the variance converges rapidly to a finite limit (stationary regime). – egoroff Feb 8 '11 at 10:57
The only difference between volatility and variance is the square. Everything else is beside the point, as any concept that applies to one applies to the other (historical vs. implied, and so on).
-
I don't know who downvoted, but I stand by this answer: there is no difference except the square. If you disagree, please argue. – nicolas May 28 '11 at 16:34
Variance has properties that standard deviation does not. For example, variance is additive with stable distributions while standard deviation is not. I didn't down vote btw. – strimp099 Dec 25 '11 at 20:47
@strimp099 Yes, variance is additive, and it is an easier, less error-prone way of manipulating additive quantities; standard deviation is then "square-root additive", and standard deviation is multiplicative. Bottom line: the difference is technical, not conceptual. – nicolas Jan 27 '12 at 10:17
|
http://mathoverflow.net/revisions/109284/list
|
# Things you can do with the self-writhe
I hope "self-writhe" is the established word. (0 for link-crossing, otherwise identical to writhe +1 or -1) I bet the following is known: Take some crossing of a link with self-writhe $w_a$. Flip it to get a link with $w_b$, call their arithmetic mean $w_{\times}$. Orient the crossing to overpass, split horizontally and vertically, respectively, to get links with self-writhe $w_-,w_|$. The three numbers are linear dependent: $w_1-w_2=w_2-w_3$ (where ${\times,|,-}={1,2,3}$ but which is which depends on the self-writhe of the crossing itself. It's nicely symmetric but I'm too idle to actually list the three subcases :-) Thus I defined $J=w_1-w_2$ (again with proper numbering, and a factor) to be the "angular momentum" of a crossing. E.g. Hopf link $J=-1/2$, positive trefoil $J=+1$. You can do neat things with it: R1 crossings have $J=0$, and R2 pairs $+J,-J$...too bad that one of the three crossings (the one that would make the R3 move pic alternating when flipped) of a R3 move changes J "randomly" or you could simply sum over all J of a link to get an invariant. Blast. (Still, I should go check now if there is a connection to the Thurston-Bennequin number!)
Is there some use of the self-writhe for knot polynomials (beyond the Kauffman bracket) ? (As usual, paper refs are welcome.)
|
http://mathematica.stackexchange.com/questions/8512/revolutionplot3d-axis-of-revolution-that-doesnt-pass-through-the-origin
|
# RevolutionPlot3D: axis of revolution that doesn't pass through the origin
This is a follow-up question to RevolutionPlot3D: but NOT revolving about the z axis, so please check that for the context.
It seems that `RevolutionAxis` requires the axis of revolution to pass through the origin. Suppose I want to use an axis of revolution that does not pass through the origin, e.g., the line $y=-1$ in the example above. What is a good way to accomplish this?
I do realize that `RegionPlot3D` may be appropriate here, but even with seemingly simple examples such as the one above, it can struggle:
````Show[
RegionPlot3D[1 <= Sqrt[(y + 1)^2 + z^2] <= 1 + x^2, {x, 0, 1},
{y, -3, 3}, {z, -3, 3}, PlotRange -> All, PerformanceGoal -> "Quality",
Mesh -> False, AxesLabel -> {x, y, z}, PlotPoints -> 100,
ViewPoint -> {0.87, 0.44, 1.76}, ViewVertical -> {0.32, 0.67, 0.67}],
Graphics3D[{Thickness[.01], Black, Line[{{0, -1, 0}, {1.15, -1, 0}}]}]
]
````
Bumping `PlotPoints` up to 200, 300, 400 doesn't alleviate the problem and gets really slow.
-
## 1 Answer
Generate the plot and then apply a translation:
````RevolutionPlot3D[{Sin[t], t, Cos[t]}, {t, 0, 4 Pi}, RevolutionAxis -> {0, 0, 1},
PlotRange -> All] /.
GraphicsComplex[p_List, rest__] :> GraphicsComplex[TranslationTransform[{5, 5, 0}][p], rest]
````
-
I am not sure I understand this approach. How would I use it to generate the solid of revolution in the question? Thanks. – JohnD Dec 16 '12 at 1:50
|
http://math.stackexchange.com/questions/139673/inductive-proof-of-a-countable-set-cartesian-product
|
# Inductive Proof of a countable set Cartesian product [duplicate]
Possible Duplicate:
Proving $\mathbb{N}^k$ is countable
I would like to prove that if S is countable then for any positive integer n the set $S^n$ (the n-fold Cartesian product of S with itself) is countable using mathematical induction.
I think I should start the induction at $n=0$, but I don't know where to go from there.
Thanks so much for the help
-
The result is trivial for $n=0$ and $n=1$, so your first step should be to prove it for $n=2$. Then you can use that result in your induction step to go from countability of $S^n$ to countability of $S^{n+1}$, since $S^{n+1}$ clearly admits a bijection with $S^n\times S$, a product of two countable sets. – Brian M. Scott May 1 '12 at 23:08
Thanks Brian. How do you prove the countability of a cartesian product of two countable sets ? – fred May 1 '12 at 23:11
– Brian M. Scott May 1 '12 at 23:17
I made it with your help. Many thanks ! – fred May 1 '12 at 23:39
## marked as duplicate by Asaf Karagila, Martin Sleziak, Chris Eagle, t.b., J. M.Aug 18 '12 at 1:29
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
## 1 Answer
Brian gave me some excellent advice and I found a way to do it: first show that the Cartesian product of two countable sets is countable; then the induction step follows because $S^{n+1}$ is in bijection with $S^n \times S$.
-
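For reference, the two-set step can be made explicit with the Cantor pairing function (a minimal sketch): if $S$ is countable, fix an injection $g:S\to\mathbb{N}$; then $(a,b)\mapsto\pi(g(a),g(b))$ is an injection $S\times S\to\mathbb{N}$, where $$\pi(a,b)=\frac{(a+b)(a+b+1)}{2}+b$$ is a bijection $\mathbb{N}\times\mathbb{N}\to\mathbb{N}$ (with $\mathbb{N}$ taken to include $0$). Hence $S\times S$ is countable, and the induction step follows from the bijection between $S^{n+1}$ and $S^n\times S$.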
|
http://terrytao.wordpress.com/tag/pinching-phenomenon/
|
What’s new
Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao
# Tag Archive
You are currently browsing the tag archive for the ‘pinching phenomenon’ tag.
## 285G, Lecture 3: The maximum principle, and the pinching phenomenon
4 April, 2008 in 285G - poincare conjecture, math.AP, math.DG | Tags: convexity, maximum principle, pinching phenomenon, Ricci flow, Riemann curvature, tensor bundles | by Terence Tao | 25 comments
We now begin the study of (smooth) solutions $t \mapsto (M(t),g(t))$ to the Ricci flow equation
$\frac{d}{dt} g_{\alpha \beta} = - 2 \hbox{Ric}_{\alpha \beta}$, (1)
particularly for compact manifolds in three dimensions. Our first basic tool will be the maximum principle for parabolic equations, which we will use to bound (sub-)solutions to nonlinear parabolic PDE by (super-)solutions, and vice versa. Because the various curvatures $\hbox{Riem}_{\alpha \beta \gamma}^\delta$, $\hbox{Ric}_{\alpha \beta}$, R of a manifold undergoing Ricci flow do indeed obey nonlinear parabolic PDE (see equations (31) from Lecture 1), we will be able to obtain some important lower bounds on curvature, and in particular establish that the curvature is either bounded, or else that the positive components of the curvature dominate the negative components. This latter phenomenon, known as the Hamilton-Ivey pinching phenomenon, is particularly important when studying singularities of Ricci flow, as it means that the geometry of such singularities is almost completely dominated by regions of non-negative (and often quite high) curvature.
Read the rest of this entry »
|
http://mathoverflow.net/revisions/68189/list
|
Poincare Recurrence Theorem: http://en.wikipedia.org/wiki/Poincar%C3%A9_recurrence_theorem
Let $(X,\Sigma,m)$ be a finite measure space and let $f:X \to X$ be a measure-preserving map. If $E \in \Sigma$, then almost every point in $E$ returns to $E$ infinitely often; i.e., $m(\{x \in E: \exists N\ \forall n>N \quad f^n(x) \not\in E \})=0$
A proof can be found e.g. in Arnold's "Mechanics"; there are some on PlanetMath, too. All use basically the definition of a measure, and maybe (or not) a necessary condition for convergence of a series of real numbers.
The theorem describes the behavior of certain systems in statistical mechanics or thermodynamics, but it also has many mathematical consequences. It was one of the first results in ergodic theory. It can be used to prove, e.g., that every orbit of an irrational rotation of a circle is dense. Relations with recent developments in ergodic theory and dynamical systems are discussed by Barreira, doi:10.1142/9789812704016_0039
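For completeness, a sketch of the standard proof (it uses only countable additivity, the finiteness of $m$, and the fact that $f$ is measure preserving): for $N\ge 0$ put $A_N=\bigcup_{n\ge N} f^{-n}(E)$. Then $A_N=f^{-N}(A_0)$, so $m(A_N)=m(A_0)$; since $A_0\supseteq A_1\supseteq\cdots$ and $m$ is finite, $m(A_0\setminus A_N)=0$ for every $N$. The set of points of $E$ that visit $E$ only finitely many times is $$\bigcup_{N\ge 0}\{x\in E:\ f^n(x)\notin E\ \text{for all}\ n>N\}\ \subseteq\ \bigcup_{N\ge 0}\big(A_0\setminus A_{N+1}\big),$$ a countable union of null sets, which is exactly the statement above.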
|
http://physics.stackexchange.com/questions/30198/how-to-bend-light?answertab=active
|
# How to bend light?
As we all know, light travels in straight lines. But can we bend light into a parabolic path? If not practically, then is it possible on paper? Has anyone succeeded in doing that practically?
-
## 4 Answers
Light does not, in general circumstances, travel in straight lines (although it does do so in the ones we usually encounter).
For one, light is really a wave and can only approximately be thought of as consisting of independently-propagating rays. This happens when the wavelength of the light is much smaller than the distances it is propagating over, which is usually the case for light (whose wavelength in the visible range is $0.4$ to $0.7\,\mu\textrm{m}$) but is not necessarily the case e.g. for radio waves and when nanoparticles are involved.
In this short-wavelength limit, wave propagation gives way to ray propagation (which is a special, approximate case of the former), and specifically to Fermat's principle for the mathematical description of light. This principle states that light rays starting at $A$ and ending up at $B$ will follow the path that minimizes the optical path length $$S=\int_A^B n(s)\,\textrm{d}s,$$ which is proportional to the travel time, where $n(s)$ is the (possibly spatially dependent) refractive index along the path.
For a homogeneous medium, this does indeed give straight lines for propagation. For a planar interface between two different media it gives Snell's law for refraction, and it also describes reflection. (However, because it does not account for the actual nature of light as an oscillating electric field, this description cannot predict transmission or reflection coefficients.)
However, if the medium is not homogeneous, then light will not travel on a straight line, and for complicated inhomogeneities the path can be correspondingly difficult to calculate. For an example, see the formation of mirages or more generally atmospheric refraction. Conversely, if one has a path one wishes a given light ray to take, then it is possible to engineer a refractive index spatial dependence that will make light bend that way. (Of course, whether such a dependence is physically reasonable is another matter; if the path bends too sharply then it may not be possible to find materials with the correspondingly large index and index gradients necessary.)
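A minimal numerical sketch of this kind of bending (my own illustration, with a made-up index profile): in a medium stratified in $y$, applying Snell's law layer by layer conserves $n(y)\sin\theta$, and a ray launched at an angle follows a curved path instead of a straight line.

````
import numpy as np

def n(y):
    # Hypothetical refractive-index profile: optically denser near y = 0.
    return 1.5 - 0.05 * y

def trace_ray(y0=0.0, theta0_deg=60.0, dy=0.01, steps=300):
    """Trace a ray upward through thin horizontal layers.

    theta is measured from the vertical (the layer normal); Snell's law at each
    interface conserves n(y) * sin(theta), so the ray bends as n(y) changes.
    """
    invariant = n(y0) * np.sin(np.deg2rad(theta0_deg))
    x, y = 0.0, y0
    path = [(x, y)]
    for _ in range(steps):
        sin_theta = invariant / n(y)              # Snell invariant in the current layer
        if sin_theta >= 1.0:                      # would be total internal reflection; stop here
            break
        x += dy * np.tan(np.arcsin(sin_theta))    # horizontal advance across one thin layer
        y += dy
        path.append((x, y))
    return path

path = trace_ray()
print(path[0], path[-1])   # straight start, visibly curved trajectory by the end
````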
-
To generalize all the nice answers here: we can bend light into almost any shape using optical fibres or photonic crystals. Although it may look artificial, it is basically equivalent to all the other methods because it is governed by the same laws of physics.
-
We bend light all the time - using lenses.
Light bends when going from one material to another, due to conservation of momentum.
Snell's law describes how light bends.
Light is also bent when traveling past massive objects - look into "gravitational lensing" if you are interested.
Light can be effectively bent into a parabolic path using materials that have changing index of refraction. This is done in fiber optics using "graded-index fiber."
-
sorry I forgot to add parabolic path to it! I have now done that! – Pranit Bauva Jun 16 '12 at 14:26
I wouldn't say that light is anyhow (effectively or not) bent in the graded-index fibres. It's as misleading as to draw light ray reflection in a step-index fibre. In a straight graded-index single-mode fibre, light propagates along a straight line because it forms a standing wave in the transverse direction, so there is no propagation. – texnic Jun 18 '12 at 14:49
@texnic: but in multimode graded-index fibres the light indeed follows a sinusoidal path. – Frédéric Grosshans Jun 18 '12 at 16:14
I agree with 16BitTons. It's stated that light travels in a straight line, but owing to the huge number of optical instruments we use today, namely lenses, mirrors, prisms, etc., we are able to change the direction of the light's motion, deviate it from its actual path and hence 'bend' it. Here I would also suggest that, just as in geometry a circle can be approximated by many small straight lines joined together (try a triangle, square, pentagon, ..., icosagon, ..., and as you go higher and higher the shape tends more and more towards a circle), a similar experimental setup could be arranged to prepare a part of the curve using a combination of a number of mirrors; if the light is then made incident from one end, we may be able to view the "bending" of the light from the other end.
-
|
http://unapologetic.wordpress.com/2011/03/16/bump-functions-part-2/?like=1&source=post_flair&_wpnonce=8cb43e0c56
|
# The Unapologetic Mathematician
## Bump Functions, part 2
As an immediate application of our partitions of unity, let’s show that we can always get whatever bump functions we need.
Let $U$ be an open subset of $M$, and $V$ be a set whose closure $\bar{V}$ is contained within $U$. I say that there is a nonnegative smooth function $\phi:M\to\mathbb{R}$ which is identically $1$ on $\bar{V}$, and which is supported within $U$.
To find this function, we start with a cover of $M$. Specifically, let $U$ be one set of the cover, and let $M\setminus\bar{V}$ be the other set. Then we know that there is a countable smooth partition of unity subordinate to this cover. That is, for every $k$ we either have $\phi_k$ supported in $U$, or $\phi_k$ supported in $M\setminus\bar{V}$ (or possibly both).
In fact, no matter what countable partition we come up with, we can take all the $\phi_k$ supported within $U$ and add them all up into one function $\phi$, and then take all the remaining functions and add them all up into one function $\psi$. Then $\{\phi,\psi\}$ is a partition of unity subordinate to our cover, and I say that $\phi$ is exactly the function we’re looking for.
Indeed, as a part of a partition of unity, $\phi$ is a nonnegative smooth function, and we know it’s supported in $U$. The only thing we need to determine is if it’s identically $1$ on $\bar{V}$. But for $p\in\bar{V}$ we know that $\phi(p)+\psi(p)=1$, and yet we also know that $\psi(p)=0$, since $\psi$ is supported in $M\setminus\bar{V}$. Thus we must have $\phi(p)=1$, and $\phi$ is indeed our bump function.
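For concreteness, here is the standard explicit example in $\mathbb{R}^n$ (not needed for the argument above, but useful to keep in mind): set $$f(x)=\begin{cases} e^{-1/x}, & x>0\\ 0, & x\le 0,\end{cases}\qquad g(x)=\frac{f(x)}{f(x)+f(1-x)},\qquad \phi(p)=g\bigl(2-\lVert p\rVert\bigr).$$ Here $f$ is smooth, $g$ is a smooth function that is $0$ for $x\le 0$ and $1$ for $x\ge 1$, and $\phi$ is a smooth bump function on $\mathbb{R}^n$ that is identically $1$ on the closed unit ball and supported in the ball of radius $2$. The partition-of-unity argument above produces the analogous function on an arbitrary manifold.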
Posted by John Armstrong | Differential Topology, Topology
|
http://stats.stackexchange.com/questions/33189/how-can-i-represent-r-squared-in-matrix-form
|
# How can I represent R squared in matrix form?
This question is a follow-up to a prior question.
Basically, I wanted to study under what conditions, when we regress the residuals on $x_1$, we will get an $R^2$ of 20%.
As a first step to attack this problem, my question is, how do I express $\small R^2$ in matrix form?
Then I will try to express "$\small R^2$ of regressing residuals to $x_1$" using matrix form.
Also, how can I add regression weights into the expression?
-
Re: Basically I wanted to study under what conditions when we regress the residuals to x1, we will get a R-squared of 20%, if the regression is an ordinary least squares regression, and $x_1$ was included in the model, then the answer is never, and I showed you in my answer here. If you're actually asking something else then please clarify. – Macro Jul 27 '12 at 15:36
Hi Macro, because I have weights in the regression. I wanted to be able to derive something to study the $R^2$. That's the reason for asking for the matrix-form expression. Thank you! – Luna Jul 27 '12 at 19:06
Hi @Luna - OK but there is no mention of weights in this post. What kind of weights? Can you edit the post to clarify? – Macro Jul 27 '12 at 19:10
I had mentioned weights in that thread... here I am looking for a generic form - once I learn the generic form, I can add weights myself... right? – Luna Jul 27 '12 at 21:29
## 1 Answer
We have $$\begin{align*} R^2 = 1 - \frac{\sum{e_i^2}}{\sum{(y_i - \bar{y})^2}} = 1 - \frac{e^\prime e}{\tilde{y}^\prime\tilde{y}}, \end{align*}$$ where $\tilde{y}$ is a vector $y$ demeaned.
Recall that $\hat{\beta} = (X^\prime X)^{-1} X^\prime y$, implying that $e= y - X\hat{\beta} = y - X(X^\prime X)^{-1}X^\prime y$. Regression on a vector of 1s, written as $l$, gives the mean of $y$ as the predicted value and residuals from that model produce demeaned $y$ values; $\tilde{y} = y - \bar{y} = y - l(l^\prime l)^{-1}l^\prime y$.
Let $H = X(X^\prime X)^{-1}X^\prime$ and let $M = l(l^\prime l)^{-1}l^\prime$, where $l$ is a vector of 1's. Also, let $I$ be an identity matrix of the requisite size. Then we have
$$\begin{align*} R^2 &= 1- \frac{e^\prime e}{\tilde{y}^\prime\tilde{y}} \\ &= 1 - \frac{y^\prime(I - H)^\prime(I-H)y}{y^\prime (I - M)^\prime(I-M)y} \\ &= 1 - \frac{y^\prime(I-H)y}{y^\prime (I-M)y}, \end{align*}$$
where the second line comes from the fact that $H$ and $M$ (and $I$) are symmetric and idempotent.
In the weighted case, let $\Omega$ be the weighting matrix used in the weighted least-squares objective function, $e^\prime \Omega e$. Additionally, let $H_w = \Omega^{1/2} X (X^\prime \Omega X)^{-1} X^\prime \Omega^{1/2}$ and $M_w = \Omega^{1/2} l (l^\prime \Omega l)^{-1} l^\prime \Omega^{1/2}$. Then, $$\begin{align*} R^2 &= 1 - \frac{y^\prime \Omega^{1/2} (I-H_w) \Omega^{1/2} y}{y^\prime \Omega^{1/2}(I-M_w) \Omega^{1/2}y}. \end{align*}$$
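A quick numerical check of the unweighted identity (a sketch with simulated data; all variable names are mine):

````
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])   # design matrix with intercept
y = X @ rng.normal(size=k + 1) + rng.normal(size=n)          # simulated response

# Textbook R^2 from residuals.
beta = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ beta
r2_residuals = 1 - (e @ e) / np.sum((y - y.mean())**2)

# Matrix form: R^2 = 1 - y'(I-H)y / y'(I-M)y.
I = np.eye(n)
H = X @ np.linalg.inv(X.T @ X) @ X.T
l = np.ones((n, 1))
M = l @ np.linalg.inv(l.T @ l) @ l.T
r2_matrix = 1 - (y @ (I - H) @ y) / (y @ (I - M) @ y)

print(np.isclose(r2_residuals, r2_matrix))   # True
````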
-
+1 It's nice (and elegant) to see the variance in the denominator pop out from regression against a constant. – whuber♦ Jul 27 '12 at 20:08
thanks a lot! I upvoted for you. Also, how to add regression weights to the expression? Thank you! – Luna Jul 27 '12 at 21:42
@Luna, if this post answers your question & provides the info you need, you should consider accepting it (by clicking the check mark to its left) as well as upvoting it. – gung Aug 11 '12 at 18:13
@Charlie, I think you may have accidentally dropped $y$ from your first equation for $\hat\beta$. Also, I don't quite follow your equation for $R^2$, I'm used to seeing $\sum(\hat y_i-\bar y)^2/\sum(y_i -\bar y)^2$ or $1-(\sum(y_i-\hat y_i)^2/\sum(y_i-\bar y)^2)$. I'm interpreting your $e^2$ as $\sum(y_i-\hat y_i)^2$, so I'm confused; is there a way that can make this clearer for me? – gung Aug 12 '12 at 3:45
@gung, You're right, I had $R^2$ defined incorrectly. I hope that it is correct now. Thanks! – Charlie Aug 13 '12 at 17:59
|
http://en.wikipedia.org/wiki/PageRank
|
# PageRank
Mathematical PageRanks for a simple network, expressed as percentages. (Google uses a logarithmic scale.) Page C has a higher PageRank than Page E, even though there are fewer links to C; the one link to C comes from an important page and hence is of high value. If web surfers who start on a random page have an 85% likelihood of choosing a random link from the page they are currently visiting, and a 15% likelihood of jumping to a page chosen at random from the entire web, they will reach Page E 8.1% of the time. (The 15% likelihood of jumping to an arbitrary page corresponds to a damping factor of 85%.) Without damping, all web surfers would eventually end up on Pages A, B, or C, and all other pages would have PageRank zero. In the presence of damping, Page A effectively links to all pages in the web, even though it has no outgoing links of its own.
PageRank is a link analysis algorithm, named after Larry Page[1] and used by the Google web search engine, that assigns a numerical weighting to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of "measuring" its relative importance within the set. The algorithm may be applied to any collection of entities with reciprocal quotations and references. The numerical weight that it assigns to any given element E is referred to as the PageRank of E and denoted by $PR(E).$
The value of incoming links is colloquially referred to as "Google juice", "link juice" or "Pagerank juice".[citation needed]
## Description
Cartoon illustrating basic principle of PageRank. The size of each face is proportional to the total size of the other faces which are pointing to it.
A PageRank results from a mathematical algorithm based on the webgraph, created by all World Wide Web pages as nodes and hyperlinks as edges, taking into consideration authority hubs such as cnn.com or usa.gov. The rank value indicates an importance of a particular page. A hyperlink to a page counts as a vote of support. The PageRank of a page is defined recursively and depends on the number and PageRank metric of all pages that link to it ("incoming links"). A page that is linked to by many pages with high PageRank receives a high rank itself. If there are no links to a web page, then there is no support for that page.
Numerous academic papers concerning PageRank have been published since Page and Brin's original paper.[2] In practice, the PageRank concept may be vulnerable to manipulation. Research has been conducted into identifying falsely influenced PageRank rankings. The goal is to find an effective means of ignoring links from documents with falsely influenced PageRank.[original research?]
Other link-based ranking algorithms for Web pages include the HITS algorithm invented by Jon Kleinberg (used by Teoma and now Ask.com)[citation needed], the IBM CLEVER project, and the TrustRank algorithm.
## History
PageRank was developed at Stanford University by Larry Page (hence the name Page-Rank[3]) and Sergey Brin in 1996[4] as part of a research project about a new kind of search engine.[5] Sergey Brin had the idea that information on the web could be ordered in a hierarchy by "link popularity": a page is ranked higher as there are more links to it.[6] It was co-authored by Rajeev Motwani and Terry Winograd. The first paper about the project, describing PageRank and the initial prototype of the Google search engine, was published in 1998:[2] shortly after, Page and Brin founded Google Inc., the company behind the Google search engine. While just one of many factors that determine the ranking of Google search results, PageRank continues to provide the basis for all of Google's web search tools.[7]
The name "PageRank" is a trademark of Google, and the PageRank process has been patented (). However, the patent is assigned to Stanford University and not to Google. Google has exclusive license rights on the patent from Stanford University. The university received 1.8 million shares of Google in exchange for use of the patent; the shares were sold in 2005 for \$336 million.[8][9]
PageRank has been influenced by citation analysis, early developed by Eugene Garfield in the 1950s at the University of Pennsylvania, and by Hyper Search, developed by Massimo Marchiori at the University of Padua. In the same year PageRank was introduced (1998), Jon Kleinberg published his important work on HITS. Google's founders cite Garfield, Marchiori, and Kleinberg in their original paper.[2]
A small search engine called "RankDex" from IDD Information Services designed by Robin Li was, since 1996, already exploring a similar strategy for site-scoring and page ranking.[10] The technology in RankDex would be patented by 1999[11] and used later when Li founded Baidu in China.[12][13] Li's work would be referenced by some of Larry Page's U.S. patents for his Google search methods.[14]
## Algorithm
PageRank is a probability distribution used to represent the likelihood that a person randomly clicking on links will arrive at any particular page. PageRank can be calculated for collections of documents of any size. It is assumed in several research papers that the distribution is evenly divided among all documents in the collection at the beginning of the computational process. The PageRank computations require several passes, called "iterations", through the collection to adjust approximate PageRank values to more closely reflect the theoretical true value.
A probability is expressed as a numeric value between 0 and 1. A 0.5 probability is commonly expressed as a "50% chance" of something happening. Hence, a PageRank of 0.5 means there is a 50% chance that a person clicking on a random link will be directed to the document with the 0.5 PageRank.
### Simplified algorithm
Assume a small universe of four web pages: A, B, C and D. Links from a page to itself, or multiple outbound links from one single page to another single page, are ignored. PageRank is initialized to the same value for all pages. In the original form of PageRank, the sum of PageRank over all pages was the total number of pages on the web at that time, so each page in this example would have an initial PageRank of 1. However, later versions of PageRank, and the remainder of this section, assume a probability distribution between 0 and 1. Hence the initial value for each page is 0.25.
The PageRank transferred from a given page to the targets of its outbound links upon the next iteration is divided equally among all outbound links.
If the only links in the system were from pages B, C, and D to A, each link would transfer 0.25 PageRank to A upon the next iteration, for a total of 0.75.
$PR(A)= PR(B) + PR(C) + PR(D).\,$
Suppose instead that page B had a link to pages C and A, while page D had links to all three pages. Thus, upon the next iteration, page B would transfer half of its existing value, or 0.125, to page A and the other half, or 0.125, to page C. Since D had three outbound links, it would transfer one third of its existing value, or approximately 0.083, to A.
$PR(A)= \frac{PR(B)}{2}+ \frac{PR(C)}{1}+ \frac{PR(D)}{3}.\,$
In other words, the PageRank conferred by an outbound link is equal to the document's own PageRank score divided by the number of outbound links L( ).
$PR(A)= \frac{PR(B)}{L(B)}+ \frac{PR(C)}{L(C)}+ \frac{PR(D)}{L(D)}. \,$
In the general case, the PageRank value for any page u can be expressed as:
$PR(u) = \sum_{v \in B_u} \frac{PR(v)}{L(v)}$,
i.e. the PageRank value for a page u is dependent on the PageRank values for each page v contained in the set Bu (the set containing all pages linking to page u), divided by the number L(v) of links from page v.
### Damping factor
The PageRank theory holds that even an imaginary surfer who is randomly clicking on links will eventually stop clicking. The probability, at any step, that the person will continue is a damping factor d. Various studies have tested different damping factors, but it is generally assumed that the damping factor will be set around 0.85.[2]
The damping factor is subtracted from 1 (and in some variations of the algorithm, the result is divided by the number of documents (N) in the collection) and this term is then added to the product of the damping factor and the sum of the incoming PageRank scores. That is,
$PR(A) = {1 - d \over N} + d \left( \frac{PR(B)}{L(B)}+ \frac{PR(C)}{L(C)}+ \frac{PR(D)}{L(D)}+\,\cdots \right).$
So any page's PageRank is derived in large part from the PageRanks of other pages. The damping factor adjusts the derived value downward. The original paper, however, gave the following formula, which has led to some confusion:
$PR(A)= 1 - d + d \left( \frac{PR(B)}{L(B)}+ \frac{PR(C)}{L(C)}+ \frac{PR(D)}{L(D)}+\,\cdots \right).$
The difference between them is that the PageRank values in the first formula sum to one, while in the second formula each PageRank is multiplied by N and the sum becomes N. A statement in Page and Brin's paper that "the sum of all PageRanks is one"[2] and claims by other Google employees[15] support the first variant of the formula above.
Page and Brin confused the two formulas in their most popular paper "The Anatomy of a Large-Scale Hypertextual Web Search Engine", where they mistakenly claimed that the latter formula formed a probability distribution over web pages.[2]
Google recalculates PageRank scores each time it crawls the Web and rebuilds its index. As Google increases the number of documents in its collection, the initial approximation of PageRank decreases for all documents.
The formula uses a model of a random surfer who gets bored after several clicks and switches to a random page. The PageRank value of a page reflects the chance that the random surfer will land on that page by clicking on a link. It can be understood as a Markov chain in which the states are pages, and the transitions, which are all equally probable, are the links between pages.
If a page has no links to other pages, it becomes a sink and therefore terminates the random surfing process. If the random surfer arrives at a sink page, it picks another URL at random and continues surfing again.
When calculating PageRank, pages with no outbound links are assumed to link out to all other pages in the collection. Their PageRank scores are therefore divided evenly among all other pages. In other words, to be fair with pages that are not sinks, these random transitions are added to all nodes in the Web; the damping factor d, usually set to 0.85, is estimated from the frequency with which an average surfer uses his or her browser's bookmark feature.
So, the equation is as follows:
$PR(p_i) = \frac{1-d}{N} + d \sum_{p_j \in M(p_i)} \frac{PR (p_j)}{L(p_j)}$
where $p_1, p_2, ..., p_N$ are the pages under consideration, $M(p_i)$ is the set of pages that link to $p_i$, $L(p_j)$ is the number of outbound links on page $p_j$, and N is the total number of pages.
The PageRank values are the entries of the dominant eigenvector of the modified adjacency matrix. This makes PageRank a particularly elegant metric: the eigenvector is
$\mathbf{R} = \begin{bmatrix} PR(p_1) \\ PR(p_2) \\ \vdots \\ PR(p_N) \end{bmatrix}$
where R is the solution of the equation
$\mathbf{R} = \begin{bmatrix} {(1-d)/ N} \\ {(1-d) / N} \\ \vdots \\ {(1-d) / N} \end{bmatrix} + d \begin{bmatrix} \ell(p_1,p_1) & \ell(p_1,p_2) & \cdots & \ell(p_1,p_N) \\ \ell(p_2,p_1) & \ddots & & \vdots \\ \vdots & & \ell(p_i,p_j) & \\ \ell(p_N,p_1) & \cdots & & \ell(p_N,p_N) \end{bmatrix} \mathbf{R}$
where the adjacency function $\ell(p_i,p_j)$ is 0 if page $p_j$ does not link to $p_i$, and normalized such that, for each j
$\sum_{i = 1}^N \ell(p_i,p_j) = 1$,
i.e. the elements of each column sum up to 1, so the matrix is a stochastic matrix (for more details see the computation section below). Thus this is a variant of the eigenvector centrality measure used commonly in network analysis.
Because of the large eigengap of the modified adjacency matrix above,[16] the values of the PageRank eigenvector can be approximated to within a high degree of accuracy within only a few iterations.
As a result of Markov theory, it can be shown that the PageRank of a page is the probability of arriving at that page after a large number of clicks. This happens to equal $t^{-1}$ where $t$ is the expectation of the number of clicks (or random jumps) required to get from the page back to itself.
One main disadvantage of PageRank is that it favors older pages. A new page, even a very good one, will not have many links unless it is part of an existing site (a site being a densely connected set of pages, such as Wikipedia).
Several strategies have been proposed to accelerate the computation of PageRank.[17]
Various strategies to manipulate PageRank have been employed in concerted efforts to improve search results rankings and monetize advertising links. These strategies have severely impacted the reliability of the PageRank concept, which purports to determine which documents are actually highly valued by the Web community.
Since December 2007, when it started actively penalizing sites selling paid text links, Google has combatted link farms and other schemes designed to artificially inflate PageRank. How Google identifies link farms and other PageRank manipulation tools is among Google's trade secrets.
### Computation
PageRank can be computed either iteratively or algebraically. The iterative method can be viewed as the power iteration method[18][19] or the power method. The basic mathematical operations performed are identical.
#### Iterative
At $t=0$, an initial probability distribution is assumed, usually
$PR(p_i; 0) = \frac{1}{N}$.
At each time step, the computation, as detailed above, yields
$PR(p_i;t+1) = \frac{1-d}{N} + d \sum_{p_j \in M(p_i)} \frac{PR (p_j; t)}{L(p_j)}$,
or in matrix notation
$\mathbf{R}(t+1) = d \mathcal{M}\mathbf{R}(t) + \frac{1-d}{N} \mathbf{1}$, (*)
where $\mathbf{R}_i(t)=PR(p_i; t)$ and $\mathbf{1}$ is the column vector of length $N$ containing only ones.
The matrix $\mathcal{M}$ is defined as
$\mathcal{M}_{ij} = \begin{cases} 1 /L(p_j) , & \mbox{if }j\mbox{ links to }i\ \\ 0, & \mbox{otherwise} \end{cases}$
i.e.,
$\mathcal{M} := (K^{-1} A)^T$,
where $A$ denotes the adjacency matrix of the graph and $K$ is the diagonal matrix with the outdegrees in the diagonal.
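As a minimal Octave/MATLAB sketch of this construction (the 3-page adjacency matrix below is hypothetical, and every page is assumed to have at least one outbound link):
```
% Build M = (K^-1 A)^T from a 0/1 adjacency matrix A,
% where A(i,j) = 1 means page i links to page j.
A = [0 1 1; 0 0 1; 1 0 0];   % hypothetical 3-page example
K = diag(sum(A, 2));         % diagonal matrix of outdegrees
M = (K \ A)';                % column-stochastic link matrix
disp(sum(M, 1))              % each column sums to 1
```
The printed column sums of 1 confirm that $\mathcal{M}$ is column-stochastic.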
The computation ends when for some small $\epsilon$
$|\mathbf{R}(t+1) - \mathbf{R}(t)| < \epsilon$,
i.e., when convergence is assumed.
#### Algebraic
For $t \to \infty$ (i.e., in the steady state), the above equation (*) reads
$\mathbf{R} = d \mathcal{M}\mathbf{R} + \frac{1-d}{N} \mathbf{1}$. (**)
The solution is given by
$\mathbf{R} = (\mathbf{I}-d \mathcal{M})^{-1} \frac{1-d}{N} \mathbf{1}$,
with the identity matrix $\mathbf{I}$.
The solution exists and is unique for $0 < d < 1$. This can be seen by noting that $\mathcal{M}$ is by construction a stochastic matrix and hence has an eigenvalue equal to one as a consequence of the Perron–Frobenius theorem.
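A minimal Octave/MATLAB sketch of this direct solve (the column-stochastic matrix below is hypothetical; d = 0.85):
```
% Algebraic PageRank: R = (I - d*M)^-1 * ((1-d)/N) * 1
d = 0.85;
M = [0 0 1; 0.5 0 0; 0.5 1 0];   % hypothetical column-stochastic link matrix
N = size(M, 2);
R = (eye(N) - d * M) \ ((1 - d) / N * ones(N, 1));
disp(R')                         % the PageRank vector; its entries sum to 1
```
Because $\mathcal{M}$ is column-stochastic, the resulting $\mathbf{R}$ automatically sums to 1, matching the first (probability-distribution) form of the formula.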
#### Power Method
If the matrix $\mathcal{M}$ is a transition probability matrix, i.e., column-stochastic with no columns consisting of just zeros, and $\mathbf{R}$ is a probability distribution (i.e., $|\mathbf{R}|=1$ and $\mathbf{E}\mathbf{R}=\mathbf{1}$, where $\mathbf{E}$ is the matrix of all ones), then Eq. (**) is equivalent to
$\mathbf{R} = \left( d \mathcal{M} + \frac{1-d}{N} \mathbf{E} \right)\mathbf{R} =: \widehat{ \mathcal{M}} \mathbf{R}$. (***)
Hence PageRank $\mathbf{R}$ is the principal eigenvector of $\widehat{\mathcal{M}}$. A fast and easy way to compute this is using the power method: starting with an arbitrary vector $x(0)$, the operator $\widehat{\mathcal{M}}$ is applied in succession, i.e.,
$x(t+1) = \widehat{\mathcal{M}} x(t)$,
until
$|x(t+1) - x(t)| < \epsilon$.
Note that in Eq. (***) the second term of the matrix in parentheses can be interpreted as
$\frac{1-d}{N} \mathbf{E} = (1-d)\mathbf{P} \mathbf{1}^t$,
where $\mathbf{P}$ is an initial probability distribution. In the current case
$\mathbf{P} := \frac{1}{N} \mathbf{1}$.
Finally, if $\mathcal{M}$ has columns with only zero values, they should be replaced with the initial probability vector $\mathbf{P}$. In other words
$\mathcal{M}^\prime := \mathcal{M} + \mathcal{D}$,
where the matrix $\mathcal{D}$ is defined as
$\mathcal{D} := \mathbf{P} \mathbf{D}^t$,
with
$\mathbf{D}_i = \begin{cases} 1, & \mbox{if }L(p_i)=0\ \\ 0, & \mbox{otherwise} \end{cases}$
In this case, the above two computations using $\mathcal{M}$ only give the same PageRank if their results are normalized:
$\mathbf{R}_{\textrm{power}} = \frac{\mathbf{R}_{\textrm{iterative}}}{|\mathbf{R}_{\textrm{iterative}}|} = \frac{\mathbf{R}_{\textrm{algebraic}}}{|\mathbf{R}_{\textrm{algebraic}}|}$.
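Before the full implementation below, a short sketch of building the column-stochastic input matrix with the dangling-node ($\mathcal{D}$-term) fix just described; the adjacency matrix is hypothetical, with page 3 as a sink:
```
% Normalize the link columns and replace sink columns by P = (1/N)*ones(N,1)
A = [0 1 1; 0 0 1; 0 0 0];     % hypothetical adjacency: A(i,j) = 1 if page i links to page j
N = size(A, 1);
M = A';                        % column j holds the links going out of page j
out = sum(A, 2)';              % outdegrees L(p_j)
M(:, out > 0) = M(:, out > 0) ./ out(out > 0);   % normalize non-sink columns (implicit broadcasting)
M(:, out == 0) = 1 / N;        % sink columns get the uniform vector P
disp(sum(M, 1))                % every column now sums to 1
```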
PageRank MATLAB/Octave implementation
```
% Parameter M adjacency matrix where M_i,j represents the link from 'j' to 'i', such that for all 'j' sum(i, M_i,j) = 1
% Parameter d damping factor
% Parameter v_quadratic_error quadratic error for v
% Return v, a vector of ranks such that v_i is the i-th rank from [0, 1]
function [v] = rank(M, d, v_quadratic_error)
N = size(M, 2); % N is the number of columns of M, i.e. the number of pages
v = rand(N, 1);
v = v ./ norm(v, 2);
last_v = ones(N, 1) * inf;
M_hat = (d .* M) + (((1 - d) / N) .* ones(N, N));
while(norm(v - last_v, 2) > v_quadratic_error)
last_v = v;
v = M_hat * v;
v = v ./ norm(v, 2);
end
endfunction
function [v] = rank2(M, d, v_quadratic_error)
N = size(M, 2); % N is the number of columns of M, i.e. the number of pages
v = rand(N, 1);
v = v ./ norm(v, 1); % This is now L1, not L2
last_v = ones(N, 1) * inf;
M_hat = (d .* M) + (((1 - d) / N) .* ones(N, N));
while(norm(v - last_v, 2) > v_quadratic_error)
last_v = v;
v = M_hat * v;
% no re-normalization needed: M_hat is column-stochastic, so the L1 norm of v is preserved
end
endfunction
```
Example of code calling the rank function defined above:
```
M = [0 0 0 0 1 ; 0.5 0 0 0 0 ; 0.5 0 0 0 0 ; 0 1 0.5 0 0 ; 0 0 0.5 1 0];
rank(M, 0.80, 0.001)
```
This example takes 13 iterations to converge.
The following demonstrates that rank.m is incorrect as written. It is based on the first graphic example. The problem is that rank.m applies the wrong (L2) norm to the input and then keeps re-normalizing with L2, which is unnecessary; the output only becomes a valid PageRank vector after L1 normalization.
```
% This represents the example graph, correctly normalized and accounting for the sink (Node A)
% by letting it effectively make a random transition 100% of the time, including to itself.
% rank.m does not mishandle this input, but it also does not show how one should handle
% sink nodes; one tempting fix, a SELF-TRANSITION of 1.0, does not give the correct result.
test_graph = ...
[ 0.09091 0.00000 0.00000 0.50000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000;
0.09091 0.00000 1.00000 0.50000 0.33333 0.50000 0.50000 0.50000 0.50000 0.00000 0.00000;
0.09091 1.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000;
0.09091 0.00000 0.00000 0.00000 0.33333 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000;
0.09091 0.00000 0.00000 0.00000 0.00000 0.50000 0.50000 0.50000 0.50000 1.00000 1.00000;
0.09091 0.00000 0.00000 0.00000 0.33333 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000;
0.09091 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000;
0.09091 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000;
0.09091 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000;
0.09091 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000;
0.09091 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 ]
pr = rank(test_graph, 0.85, 0.001) % INCORRECT: the result is not normalized.
% 0.062247
% 0.730223
% 0.650829
% 0.074220
% 0.153590
% 0.074220
% 0.030703
% 0.030703
% 0.030703
% 0.030703
% 0.030703
pr / norm(pr,1) % CORRECT once normalized. I still don't know why the L2 normalization happens ( v = v/norm(v, 2))
% 0.032781
% 0.384561
% 0.342750
% 0.039087
% 0.080886
% 0.039087
% 0.016170
% 0.016170
% 0.016170
% 0.016170
% 0.016170
pr = rank2(test_graph, 0.85, 0.001) % CORRECT, only requires input PR normalization (make sure it sums to 1.0)
% 0.032781
% 0.384561
% 0.342750
% 0.039087
% 0.080886
% 0.039087
% 0.016170
% 0.016170
% 0.016170
% 0.016170
% 0.016170
```
#### Efficiency
Depending on the framework used to perform the computation, the exact implementation of the methods, and the required accuracy of the result, the computation time of these methods can vary greatly.
## Variations
### PageRank of an undirected graph
The PageRank of an undirected graph G is statistically close to the degree distribution of the graph G,[20] but they are generally not identical: If R is the PageRank vector defined above, and D is the degree distribution vector
$D = {1\over 2|E|} \begin{bmatrix} deg(p_1) \\ deg(p_2) \\ \vdots \\ deg(p_N) \end{bmatrix}$
where $deg(p_i)$ denotes the degree of vertex $p_i$ and E is the edge set of the graph, then, with $Y={1\over N}\mathbf{1}$, the following bounds hold:[21]
${1-d\over1+d}\|Y-D\|_1\leq \|R-D\|_1\leq \|Y-D\|_1,$
that is, the PageRank of an undirected graph equals the degree distribution vector if and only if the graph is regular, i.e., every vertex has the same degree.
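A small numerical sketch of this comparison in Octave/MATLAB (the 4-vertex undirected graph below is hypothetical and not regular, so the two vectors are close but not equal):
```
% Compare PageRank with the degree distribution D = deg(p_i) / (2|E|)
A = [0 1 1 0; 1 0 1 1; 1 1 0 0; 0 1 0 0];   % symmetric 0/1 adjacency matrix
d = 0.85;
N = size(A, 1);
M = A ./ sum(A, 1);                          % column-stochastic (column sums are the degrees)
R = (eye(N) - d * M) \ ((1 - d) / N * ones(N, 1));
D = sum(A, 2) / sum(A(:));                   % degree distribution
disp([R D])                                  % similar, but not identical
```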
### Distributed Algorithm for PageRank Computation
There are simple and fast random walk-based distributed algorithms for computing the PageRank of nodes in a network.[22] The authors present a simple algorithm that takes $O(\log n/\epsilon)$ rounds with high probability on any graph (directed or undirected), where n is the network size and $\epsilon$ is the reset probability ($1-\epsilon$ is also called the damping factor) used in the PageRank computation. They also present a faster algorithm that takes $O(\sqrt{\log n}/\epsilon)$ rounds on undirected graphs. Both algorithms are scalable, as each node processes and sends only a small (polylogarithmic in n, the network size) number of bits per round. For directed graphs, they present an algorithm that has a running time of $O(\sqrt{\log n/\epsilon})$, but it requires a polynomial number of bits to be processed and sent per node in each round.
### Google Toolbar
The Google Toolbar's PageRank feature displays a visited page's PageRank as a whole number between 0 and 10. The most popular websites have a PageRank of 10. The least have a PageRank of 0. Google has not disclosed the specific method for determining a Toolbar PageRank value, which is to be considered only a rough indication of the value of a website.
PageRank measures the number of sites that link to a particular page.[23] The PageRank of a particular page is roughly based upon the quantity of inbound links as well as the PageRank of the pages providing the links. The algorithm also includes other factors, such as the size of a page, the number of changes, the time since the page was updated, the text in headlines and the text in hyperlinked anchor texts.[6]
The Google Toolbar's PageRank is updated infrequently, so the values it shows are often out of date.
### SERP Rank
The search engine results page (SERP) is the actual result returned by a search engine in response to a keyword query. The SERP consists of a list of links to web pages with associated text snippets. The SERP rank of a web page refers to the placement of the corresponding link on the SERP, where higher placement means higher SERP rank. The SERP rank of a web page is a function not only of its PageRank, but of a relatively large and continuously adjusted set of factors (over 200).[24] Search engine optimization (SEO) is aimed at influencing the SERP rank for a website or a set of web pages.
After the introduction of Google Places into the mainstream organic SERP, numerous other factors in addition to PageRank affect ranking a business in Local Business Results.[25]
### Google directory PageRank
The Google Directory PageRank is an 8-unit measurement. Unlike the Google Toolbar, which shows a numeric PageRank value upon mouseover of the green bar, the Google Directory only displays the bar, never the numeric values.
### False or spoofed PageRank
In the past, the PageRank shown in the Toolbar was easily manipulated. Redirection from one page to another, either via a HTTP 302 response or a "Refresh" meta tag, caused the source page to acquire the PageRank of the destination page. Hence, a new page with PR 0 and no incoming links could have acquired PR 10 by redirecting to the Google home page. This spoofing technique, also known as 302 Google Jacking, was a known vulnerability. Spoofing can generally be detected by performing a Google search for a source URL; if the URL of an entirely different site is displayed in the results, the latter URL may represent the destination of a redirection.
### Other uses
A version of PageRank has recently been proposed as a replacement for the traditional Institute for Scientific Information (ISI) impact factor,[29] and implemented at eigenfactor.org. Instead of merely counting total citations to a journal, the "importance" of each citation is determined in a PageRank fashion.
A similar new use of PageRank is to rank academic doctoral programs based on their records of placing their graduates in faculty positions. In PageRank terms, academic departments link to each other by hiring their faculty from each other (and from themselves).[30]
PageRank has been used to rank spaces or streets to predict how many people (pedestrians or vehicles) come to the individual spaces or streets.[31][32] In lexical semantics it has been used to perform Word Sense Disambiguation[33] and to automatically rank WordNet synsets according to how strongly they possess a given semantic property, such as positivity or negativity.[34]
A dynamic weighting method similar to PageRank has been used to generate customized reading lists based on the link structure of Wikipedia.[35]
A Web crawler may use PageRank as one of a number of importance metrics it uses to determine which URL to visit during a crawl of the web. One of the early working papers[36] that was used in the creation of Google is Efficient crawling through URL ordering,[37] which discusses the use of a number of different importance metrics to determine how deeply, and how much of, a site Google will crawl. PageRank is presented as one of a number of these importance metrics, though there are others listed such as the number of inbound and outbound links for a URL, and the distance from the root directory on a site to the URL.
PageRank may also be used as a methodology to measure the apparent impact of a community like the blogosphere on the overall Web itself. This approach therefore uses PageRank to measure the distribution of attention, reflecting the scale-free network paradigm.
In any ecosystem, a modified version of PageRank may be used to determine species that are essential to the continuing health of the environment.[38]
An application of PageRank to the analysis of protein networks in biology has recently been reported.[39]
## nofollow
In early 2005, Google implemented a new value, "nofollow",[40] for the rel attribute of HTML link and anchor elements, so that website developers and bloggers can make links that Google will not consider for the purposes of PageRank—they are links that no longer constitute a "vote" in the PageRank system. The nofollow relationship was added in an attempt to help combat spamdexing.
As an example, people could previously create many message-board posts with links to their website to artificially inflate their PageRank. With the nofollow value, message-board administrators can modify their code to automatically insert "rel='nofollow'" to all hyperlinks in posts, thus preventing PageRank from being affected by those particular posts. This method of avoidance, however, also has various drawbacks, such as reducing the link value of legitimate comments. (See: Spam in blogs#nofollow)
In an effort to manually control the flow of PageRank among pages within a website, many webmasters practice what is known as PageRank Sculpting[41]—which is the act of strategically placing the nofollow attribute on certain internal links of a website in order to funnel PageRank towards those pages the webmaster deemed most important. This tactic has been used since the inception of the nofollow attribute, but may no longer be effective since Google announced that blocking PageRank transfer with nofollow does not redirect that PageRank to other links.[42]
## Deprecation
PageRank was once available for the verified site maintainers through the Google Webmaster Tools interface. However, on October 15, 2009, a Google employee confirmed[43] that the company had removed PageRank from its Webmaster Tools section, explaining that "We've been telling people for a long time that they shouldn't focus on PageRank so much. Many site owners seem to think it's the most important metric for them to track, which is simply not true."[43] In addition, the PageRank indicator is not available in Google's own Chrome browser.
The visible PageRank is updated very infrequently.
On 6 October 2011, many users mistakenly thought Google PageRank was gone. As it turned out, Google had simply updated the URL used to query PageRank.[44]
Google now also relies on other strategies as well as PageRank, such as Google Panda.[45]
## See also
• EigenTrust — a decentralized PageRank algorithm
• Hilltop algorithm
• PigeonRank
• Power method — the iterative eigenvector algorithm used to calculate PageRank
• Search engine optimization
• SimRank — a measure of object-to-object similarity based on random-surfer model
• Topic-Sensitive PageRank
• TrustRank
• Webgraph
• CheiRank
## Notes
1. "Google Press Center: Fun Facts". www.google.com. Archived from the original on 2009-04-24.
2. Brin, S.; Page, L. (1998). "The anatomy of a large-scale hypertextual Web search engine". Computer Networks and ISDN Systems 30: 107–117. doi:10.1016/S0169-7552(98)00110-X. ISSN 0169-7552.
3. David Vise and Mark Malseed (2005). The Google Story. p. 37. ISBN 0-553-80457-X.
4. Raphael Phan Chung Wei (2002-05-16). Computimes (2 ed.).
5. ^ a b A 187-page study from Graz University, Austria, which includes the note that human brains are also used when determining the page rank in Google.
7. Lisa M. Krieger (1 December 2005). "Stanford Earns \$336 Million Off Google Stock". San Jose Mercury News, cited by redOrbit. Retrieved 2009-02-25.
8. Richard Brandt. "Starting Up. How Google got its groove". Stanford magazine. Retrieved 2009-02-25.
9. Li, Yanhong (August 6, 2002). "Toward a qualitative search engine". Internet Computing, IEEE (IEEE Computer Society) 2 (4): 24–29. doi:10.1109/4236.707687.
10. Greenberg, Andy, "The Man Who's Beating Google", Forbes magazine, October 05, 2009
12. Cf. especially Lawrence Page, U.S. patents 6,799,176 (2004) "Method for scoring documents in a linked database", 7,058,628 (2006) "Method for node ranking in a linked database", and 7,269,587 (2007) "Scoring documents in a linked database".
13. Matt Cutts's blog: Straight from Google: What You Need to Know, see page 15 of his slides.
14. Taher Haveliwala and Sepandar Kamvar (March 2003). "The Second Eigenvalue of the Google Matrix" (PDF). Stanford University Technical Report: 7056. arXiv:math/0307056. Bibcode:2003math......7056N.
15. Gianna M. Del Corso, Antonio Gullí, Francesco Romani (2005). "Fast PageRank Computation via a Sparse Linear System". Internet Mathematics. Lecture Notes in Computer Science 2 (3): 118. doi:10.1007/978-3-540-30216-2_10. ISBN 978-3-540-23427-2.
16. Arasu, A. and Novak, J. and Tomkins, A. and Tomlin, J. (2002). "PageRank computation and the structure of the web: Experiments and algorithms". Proceedings of the Eleventh International World Wide Web Conference, Poster Track. Brisbane, Australia. pp. 107–117.
17. Massimo Franceschet (2010). "PageRank: Standing on the shoulders of giants". arXiv:1002.2858 [cs.IR].
18. Nicola Perra and Santo Fortunato.; Fortunato (September 2008). "Spectral centrality measures in complex networks". Phys. Rev. E, 78 (3): 36107. arXiv:0805.3322. Bibcode:2008PhRvE..78c6107P. doi:10.1103/PhysRevE.78.036107.
19. Vince Grolmusz (2012). "A Note on the PageRank of Undirected Graphs". ArXiv 1205 (1960): 1960. arXiv:1205.1960. Bibcode:2012arXiv1205.1960G.
20. Atish Das Sarma, Anisur Rahaman Molla, Gopal Pandurangan, Eli Upfal (2012). "Fast Distributed PageRank Computation". arXiv:1208.3071 [cs.DC,cs.DS].
21. Fishkin, Rand; Jeff Pollard (April 2, 2007). "Search Engine Ranking Factors - Version 2". seomoz.org. Retrieved May 11, 2009.
22. "Ranking of listings : Ranking - Google Places Help". Google.com. Retrieved 2011-05-27.
23. ^ a b "How to report paid links". mattcutts.com/blog. April 14, 2007. Retrieved 2007-05-28.
24. Jøsang, A. (2007). "Trust and Reputation Systems" (PDF). In Aldini, A. Foundations of Security Analysis and Design IV, FOSAD 2006/2007 Tutorial Lectures. 4677. Springer LNCS 4677. pp. 209–245. doi:10.1007/978-3-540-74810-6.
25.
26. Johan Bollen, Marko A. Rodriguez, and Herbert Van de Sompel.; Rodriguez; Van De Sompel (December 2006). "Journal Status". Scientometrics 69 (3): 1030. arXiv:cs.GL/0601030. Bibcode:2006cs........1030B.
27. Benjamin M. Schmidt and Matthew M. Chingos (2007). "Ranking Doctoral Programs by Placement: A New Method" (PDF). PS: Political Science and Politics 40 (July): 523–529.
28. B. Jiang (2006). "Ranking spaces for predicting human movement in an urban environment". International Journal of Geographical Information Science 23 (7): 823–837. arXiv:physics/0612011. doi:10.1080/13658810802022822.
29. Jiang B., Zhao S., and Yin J. (2008). "Self-organized natural roads for predicting traffic flow: a sensitivity study". Journal of Statistical Mechanics: Theory and Experiment. P07008 (7): 008. arXiv:0804.1630. Bibcode:2008JSMTE..07..008J. doi:10.1088/1742-5468/2008/07/P07008.
30. Andrea Esuli and Fabrizio Sebastiani. "PageRanking WordNet synsets: An Application to Opinion-Related Properties" (PDF). In Proceedings of the 35th Meeting of the Association for Computational Linguistics, Prague, CZ, 2007, pp. 424–431. Retrieved June 30, 2007.
31. Wissner-Gross, A. D. (2006). "Preparation of topical readings lists from the link structure of Wikipedia". Proceedings of the IEEE International Conference on Advanced Learning Technology (Rolduc, Netherlands): 825. doi:10.1109/ICALT.2006.1652568. ISBN 0-7695-2632-2.
32. "Working Papers Concerning the Creation of Google". Google. Retrieved November 29, 2006.
33. Cho, J., Garcia-Molina, H., and Page, L. (1998). "Efficient crawling through URL ordering". Proceedings of the seventh conference on World Wide Web (Brisbane, Australia).
34. Burns, Judith (2009-09-04). "Google trick tracks extinctions". BBC News. Retrieved 2011-05-27.
35. G. Ivan and V. Grolmusz (2011). "When the Web meets the cell: using personalized PageRank for analyzing protein interaction networks". Bioinformatics (Vol. 27, No. 3. pp. 405-407) 27 (3): 405–7. doi:10.1093/bioinformatics/btq680. PMID 21149343.
36. "Preventing Comment Spam". Google. Retrieved January 1, 2005.
37. "PageRank Sculpting: Parsing the Value and Potential Benefits of Sculpting PR with Nofollow". SEOmoz. Retrieved 2011-05-27.
38. "PageRank sculpting". Mattcutts.com. 2009-06-15. Retrieved 2011-05-27.
39. ^ a b Susan Moskwa. "PageRank Distribution Removed From WMT". Retrieved October 16, 2009.
40. WhatCulture! (6 October 2011). http://whatculture.com/technology/google-pagerank-is-not-dead.php. Retrieved 7 October 2011.
41.
## References
• Altman, Alon; Moshe Tennenholtz (2005). "Ranking Systems: The PageRank Axioms" (PDF). Proceedings of the 6th ACM conference on Electronic commerce (EC-05). Vancouver, BC. Retrieved 2008-02-05.
• Cheng, Alice; Eric J. Friedman (2006-06-11). "Manipulability of PageRank under Sybil Strategies" (PDF). Proceedings of the First Workshop on the Economics of Networked Systems (NetEcon06). Ann Arbor, Michigan. Retrieved 2008-01-22.
• Farahat, Ayman; LoFaro, Thomas; Miller, Joel C.; Rae, Gregory and Ward, Lesley A. (2006). "Authority Rankings from HITS, PageRank, and SALSA: Existence, Uniqueness, and Effect of Initialization". SIAM Journal on Scientific Computing 27 (4): 1181–1201. doi:10.1137/S1064827502412875.
• Haveliwala, Taher; Jeh, Glen and Kamvar, Sepandar (2003). "An Analytical Comparison of Approaches to Personalizing PageRank" (PDF). Stanford University Technical Report.
• Langville, Amy N.; Meyer, Carl D. (2003). "Survey: Deeper Inside PageRank". Internet Mathematics 1 (3).
• Langville, Amy N.; Meyer, Carl D. (2006). Google's PageRank and Beyond: The Science of Search Engine Rankings. Princeton University Press. ISBN 0-691-12202-4.
• Richardson, Matthew; Domingos, Pedro (2002). "The intelligent surfer: Probabilistic combination of link and content information in PageRank" (PDF). Proceedings of Advances in Neural Information Processing Systems. 14.
## Relevant patents
• Original PageRank U.S. Patent—Method for node ranking in a linked database—Patent number 6,285,999—September 4, 2001
• PageRank U.S. Patent—Method for scoring documents in a linked database—Patent number 6,799,176—September 28, 2004
• PageRank U.S. Patent—Method for node ranking in a linked database—Patent number 7,058,628—June 6, 2006
• PageRank U.S. Patent—Scoring documents in a linked database—Patent number 7,269,587—September 11, 2007
http://mathhelpforum.com/algebra/137095-example-irrational-irrational-rational.html
# Thread:
1. ## example of irrational + irrational = rational
Please give me examples where two DIFFERENT irrational numbers add up to form a rational number.
a + b = c where a and b are two different irrational numbers and c is a rational number.
It would be very helpful if I could get more than one example. Please help; thanks in advance.
2. Originally Posted by saha.subham
Please give me examples where two DIFFERENT irrational numbers add up to form a rational number.
a + b = c where a and b are two different irrational numbers and c is a rational number.
It would be very helpful if I could get more than one example. Please help; thanks in advance.
Dear saha,
Take any irrational number and its negative,
e.g; $a=\sqrt{2}~and~b=-\sqrt{2}$
Then, $\sqrt{2}-\sqrt{2}=0\in{Q}$
3. One more example:
$a=\sqrt{2}~and~b=1-\sqrt{2}$
$\sqrt{2}+1-\sqrt{2}=1\in{Q}$
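More generally (a standard construction, not specific to $\sqrt{2}$): for any rational $r$ and any irrational $x$ with $r\neq 2x$, the numbers $x$ and $r-x$ are two different irrational numbers whose sum is the rational $r$. For example, $\pi+(4-\pi)=4\in{Q}$.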
4. Originally Posted by MJ*
hi earboth... I don't think that's true. 22/7 is an irrational number; it is in fact pi (3.1411592...).
And if you say that irrational numbers can't be expressed in the form of fractions, then sqrt(2) would not have been an irrational number, because it can be expressed in the fractional form sqrt(2)/1.
for reference see:
Pi - Wikipedia, the free encyclopedia
Normally I wouldn't get too rough with people who respond to questions they have no clue about but posting a reference to Wikipedia that you didn't even bother to read carefully yourself enrages me.
Wikipedia does NOT say that $\pi$ is equal to "22/7". It lists that as one of many rational approximations to $\pi$.
A number is rational if and only if it can be written as a fraction with integer numerator and denominator - that is often used as the definition of "rational number". Saying that $\sqrt{2}= \frac{\sqrt{2}}{1}$ does NOT make it a rational number because the numerator is not an integer.
$\pi$ is irrational, 22/7 is rational. They are NOT equal, they are "close" - and not even that close. 22/7 = 3.142857142857... and differs from $\pi$ already in the third decimal place.
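For concreteness: $\pi = 3.14159265\ldots$ while $22/7 = 3.14285714\ldots$, so the two differ by about $0.00126$, i.e. already at the third decimal place.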
http://physics.stackexchange.com/questions/35276/scattering-on-delta-function-potential
# Scattering on delta function potential
Suppose a particle has energy $E>V(+/-\infty)=0$, then the solutions to the Schrodinger equation outside of the potential will be $\psi(x)=Ae^{i k x}+Be^{-i k x}$. How can one show or explain that $|B|^2/|A|^2$ gives the probability that a particle scattering off the potential is reflected?
-
You can argue with probability currents (though this was always confusing to me). The nicer way is to construct wave-packets from the scattering solutions over a small range of momenta and then calculate the probability for the particle to be scattered. – Fabian Aug 31 '12 at 5:32
Can you explain more? If I start with a Gaussian wavepacket, how do I get it to travel? If I apply the Schrodinger equation to find its time evolution, it just stays at the same point but flattens out; how can I give it an initial speed? – Hobo Sep 2 '12 at 15:55
@Hobo: work with the momentum first. Try something like $\int dk A\,e^{-\alpha k^{2}}e^{-i\beta kx}$, with $A, \alpha,$ and $\beta$ constant. – Jerry Schirmer Sep 2 '12 at 16:18
@jjcale: voted to close as duplicate, I can't see how to answer without replicating the answer to the previous question verbatim. – Ron Maimon Sep 2 '12 at 21:48
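For reference, a minimal sketch of the probability-current argument mentioned in the first comment: for $\psi(x)=Ae^{ikx}+Be^{-ikx}$ the probability current is $j=\frac{\hbar}{m}\,\mathrm{Im}\left(\psi^*\partial_x\psi\right)=\frac{\hbar k}{m}\left(|A|^2-|B|^2\right)$, the oscillating cross terms cancelling. The incident flux is $\frac{\hbar k}{m}|A|^2$ and the reflected flux is $\frac{\hbar k}{m}|B|^2$, so the reflection probability is $R=|B|^2/|A|^2$.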
http://mathhelpforum.com/math-challenge-problems/121040-show-all-these-numbers-zero.html
# Thread:
1. ## Show that all of these numbers are zero
Suppose you have $2n$ numbers $(n > 1)$ which have the following property : whenever you remove one of them (any one), you can split the remaining ones into two sets having equal sum.
Show that all of these numbers are zero.
2. Originally Posted by Bruno J.
Suppose you have $2n-1$ numbers $(n > 1)$ which have the following property : whenever you remove one of them (any one), you can split the remaining ones into two sets having equal sum.
Show that all of these numbers are zero.
by "zero" did you mean "equal"? also, each of those two sets with equal sum must have exactly $n-1$ elements. otherwise, the claim would be false.
3. Oh, yeah. I'm sorry! I made a mistake in the statement of the problem! I have fixed my post.
4. Originally Posted by Bruno J.
Suppose you have $2n$ numbers $(n > 1)$ which have the following property : whenever you remove one of them (any one), you can split the remaining ones into two sets having equal sum.
Show that all of these numbers are zero.
in fact, a weaker property suffices: whenever you remove one of them (any one), either the sum of the remaining ones is zero or you can split the remaining ones into two sets having equal sum.
this problem is equivalent to the claim that the $2n \times 2n$ matrix $A=[a_{ij}]$ with $a_{ii}=0, \ a_{ij}=\pm 1, \ \forall i \neq j,$ is invertible. to prove this claim we'll show that $\det A \neq 0$:
$\det A = \sum_{\sigma \in S_{2n}} \text{sign}(\sigma) \prod_{i=1}^{2n} a_{i \sigma(i)}=\sum_{\sigma \in D} \text{sign}(\sigma) \prod_{i=1}^{2n} a_{i \sigma(i)},$ where $D = \{\sigma \in S_{2n}: \ \ \sigma(i) \neq i, \ \forall i \},$ because we're given that $a_{ii}=0.$ so $D$ is the set of derangements of $\{1,2, \cdots , 2n \}.$
but we know that the number of derangements of a set with an even number of elements is odd. so $\det A$ is a sum of an odd number of terms, each of which is $\pm 1.$ clearly this sum can never be zero and hence $A$ is invertible. Q.E.D.
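A short justification of the parity fact used above (not in the original post): the derangement numbers satisfy $D_n = nD_{n-1}+(-1)^n$, so for even $n$ we get $D_n = nD_{n-1}+1$, which is odd; e.g. $D_2=1$, $D_4=9$, $D_6=265$.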
http://mathoverflow.net/revisions/16763/list
Revision 2: I inserted comments on the moduli stack over Z to stress the canonicity of the trivialization.
I will give an intrinsic characterization below for what this unit class modulo 12th powers means, which may be viewed as an answer of sorts: it expresses the obstruction to extracting the 12th root of a certain canonical isomorphism between 12th powers of line bundles (and so one could shift the answer to: where does the need to extract such a 12th root come up?)
For any ring $R$, the group $R^{\times}/(R^{\times})^{12}$ naturally maps into the degree-1 fppf cohomology of $\mu_{12}$ over ${\rm{Spec}}(R)$, so it classifies isomorphism classes of certain $\mu_{12}$-torsors for the fppf topology over this base. (Namely, those $\mu_{12}$-torsors whose pushout to a $\mathbf{G} _m$-torsor is trivial.)
It is the same to use the etale topology when $12$ is a unit in $R$ (as then $\mu_{12}$ is etale over $R$). So the issue is to associate to any elliptic curve $f:E \rightarrow {\rm{Spec}}(R)$ over a ring a canonical $\mu_{12}$-torsor (with the extra property that its pushout to a $\mathbf{G} _m$-torsor is trivial).
In the theory of Weierstrass planar models for elliptic curves $E$ over a base scheme $S$ (this includes the condition "good reduction") there is an obstruction to the existence of such a model, namely whether or not the line bundle $\omega_{E/S} = f_{\ast}(\Omega^1_{E/S})$ on $S$ admits a global trivialization. The necessity of such triviality is due to the fact that a Weierstrass model produces a trivialization (the ${\rm{dx}}/(2y+\dots)$ thing), and the sufficiency is explained in Chapter 2 of Katz-Mazur (where they use a choice of trivializing section to distinguish some formal parameters along the origin and pass from this to a Weierstrass model via the relationship between global 1-forms, the relative cotangent space ${\rm{Cot}}_e(E)$ along the identity section $e$, and $\mathcal{O}(ne)/\mathcal{O}((n-1)e) \simeq {\rm{Cot}}_e(E)^{ \otimes -n}$ for $n = 2, 3$).
That being said, regardless of whether or not the line bundle $\omega_{E/S}$ is trivial (though it always is when $S$ is local), the line bundle $\omega_{E/S}^{\otimes 12}$ is canonically trivial (in a manner that is compatible with base change and functorial in isomorphisms of elliptic curves): that is the meaning of the classical fact that the product of $\Delta$ with the 12th power of the section ${\rm{d}}x/(2y+\dots)$ is invariant under choice of Weierstrass model. This also underlies Mumford's calculation (recently revisited by Fulton-Olsson) of the Picard group of the moduli stack of elliptic curves as $\mathbf{Z}/12\mathbf{Z}$, which one could regard as providing a distinguished role to that trivialization. Working with the compactified moduli stack over $\mathbf{Z}$ (so allowing generalized elliptic curves with geometrically irreducible but possibly non-smooth fibers, and hence working with relative dualizing sheaf to generalize $\omega_{E/S}$ when allowing non-smooth fibers), the trivialization (which we could generously attribute to Ramanujan) is unique up to a sign, which in turn is nailed down by the Tate curve over $\mathbf{Z}[[q]]$ and the isomorphism of its formal group with $\widehat{\mathbf{G}}_m$. So this trivialization is really a canonical thing, independent of any theory of Weierstrass models.
Letting $\theta_{E/S}$ denote this intrinsic trivializing section of $\omega _{E/S}^{\otimes 12}$ as just defined, it is natural to ask if $\theta _{E/S}$ is the 12th power of a trivializing section of $\omega _{E/S}$. Note that this is a nontrivial condition even when $\omega _{E/S}$ is trivial (such as when $S$ is local). Anyway, the functor of such 12th roots is a $\mu _{12}$-torsor over $S$ for the fppf topology (and etale if 12 is a unit on the base), and as such it corresponds to the inverse of the class of $\Delta$ in the question (for which the base was local). So that is an answer of sorts: it describes the obstruction to extracting a 12th root of the canonical trivialization of $\omega^{\otimes 12}$ obtained by pullback from the trivialization over the moduli space of elliptic curves (up to an issue of signs in the exponent). Now does one ever care to extract such a 12th root? That's another matter...
Revision 1:
I will give an intrinsic characterization below for what this unit class modulo 12th powers means, which may be viewed as an answer of sorts: it expresses the obstruction to extracting the 12th root of a certain canonical isomorphism between 12 powers of line bundles (and so one could shift the answer to: where does the need to extract such a 12th root come up?)
For any ring $R$, the group $R^{\times}/(R^{\times})^{12}$ naturally maps into the fppf cohomology of $\mu_{12}$ over ${\rm{Spec}}(R)$, so it classifies isomorphism classes of certain $\mu_{12}$-torsors for the fppf topology over this base. (Namely, those $\mu_{12}$-torsors whose pushout to a $\mathbf{G} _m$-torsor is trivial.)
It is the same to use the etale topology when $12$ is a unit in $R$ (as then $\mu_{12}$ is etale over $R$). So the issue is to associate to any elliptic curve $f:E \rightarrow {\rm{Spec}}(R)$ over a ring a canonical $\mu_{12}$-torsor (with the extra property that its pushout to a $\mathbf{G} _m$-torsor is trivial).
In the theory of Weierstrass planar models for elliptic curves $E$ over a base scheme $S$ (this includes the condition "good reduction") there is an obstruction to the existence of such a model, namely whether or not the line bundle $\omega_{E/S} = f_{\ast}(\Omega^1_{E/S})$ on $S$ admits a global trivialization. The necessity of such triviality is due to the fact that a Weierstrass model produces a trivialization (the ${\rm{dx}}/(2y+\dots)$ thing), and the sufficiency is explained in Chapter 2 of Katz-Mazur (where they use a choice of trivializing section to distinguish some formal parameters along the origin and pass from this to a Weierstrass model via the relationship between global 1-forms, the relative cotangent space ${\rm{Cot}}_e(E)$ along the identity section $e$, and $\mathcal{O}(ne)/\mathcal{O}((n-1)e) \simeq {\rm{Cot}}_e(E)^{ \otimes -n}$ for $n = 2, 3$).
That being said, regardless of whether or not the line bundle $\omega_{E/S}$ is trivial (though it always is when $S$ is local), the line bundle $\omega_{E/S}^{\otimes 12}$ is canonically trivial (in a manner that is compatible with base change and functorial in isomorphisms of elliptic curves): that is the meaning of the classical fact that the product of $\Delta$ with the 12th power of the section ${\rm{d}}x/(2y+\dots)$ is invariant under choice of Weierstrass model. This also underlies Mumford's calculation (recently revisited by Fulton-Olsson) of the Picard group of the moduli stack of elliptic curves as $\mathbf{Z}/12\mathbf{Z}$, which one could regard as providing a distinguished role to that trivialization.
Letting $\theta_{E/S}$ denote this intrinsic trivializing section of $\omega _{E/S}^{\otimes 12}$ as just defined, it is natural to ask if $\theta _{E/S}$ is the 12th power of a trivializing section of $\omega _{E/S}$. Note that this is a nontrivial condition even when $\omega _{E/S}$ is trivial (such as when $S$ is local). Anyway, the functor of such 12th roots is a $\mu _{12}$-torsor over $S$ for the fppf topology (and etale if 12 is a unit on the base), and as such it corresponds to the inverse of the class of $\Delta$ in the question (for which the base was local). So that is an answer of sorts: it describes the obstruction to extracting a 12th root of the canonical trivialization of $\omega^{\otimes 12}$ obtained by pullback from the trivialization over the moduli space of elliptic curves (up to an issue of signs in the exponent). Now does one ever care to extract such a 12th root? That's another matter...
http://mathhelpforum.com/calculus/98757-slope-tangent-line-curve.html
# Thread:
1. ## slope of tangent line on a curve?
Find the slope of the tangent to the curve $f(x) = x^3-4x+1$ at the point where x=a. This is what I get
$\lim_{x\to a}\frac{f(x)-f(a)}{x-a}$
$\lim_{x\to a}\frac{(x^3-4x+1)-(a^3-4a+1)}{x-a}$
$\lim_{x\to a}\frac{(x^3-4x+1)+(-a^3+4a-1)}{x-a}$
$\lim_{x\to a}\frac{x^3-4x-a^3+4a}{x-a}$
Then I'm stuck from there. Oh and don't use derivatives or that $\frac{dy}{dx}$ stuff because we haven't learned that yet.
2. Originally Posted by yoman360
Find the slope of the tangent to the curve $f(x) = x^3-4x+1$ at the point where x=a. This is what I get
$\lim_{x\to a}\frac{f(x)-f(a)}{x-a}$
$\lim_{x\to a}\frac{(x^3-4x+1)-(a^3-4a+1)}{x-a}$
$\lim_{x\to a}\frac{(x^3-4x+1)+(-a^3+4a-1)}{x-a}$
$\lim_{x\to a}\frac{x^3-4x-a^3+4a}{x-a}$
Then I'm stuck from there. Oh and don't use derivatives or that $\frac{dy}{dx}$ stuff because we haven't learned that yet.
$\lim_{x\to a}\frac{x^3-4x-a^3+4a}{x-a} = \lim_{x\to a}\frac{x^3-a^3 - 4x+4a}{x-a}$
$= \lim_{x\to a}\frac{(x^3-a^3) - 4(x-a)}{x-a}$
$= \lim_{x\to a}\frac{((x-a)(x^2+ax+a^2)) - 4(x-a)}{x-a}$
Now cancel (x-a):
$= \lim_{x\to a}(x^2+ax+a^2 - 4) = \boxed{3a^2-4}$
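A quick numerical check (not from the original post): for $a=1$ the formula gives slope $3(1)^2-4=-1$, while the secant slope $\frac{f(1.01)-f(1)}{0.01}=\frac{-2.009699-(-2)}{0.01}\approx -0.97$, consistent with $-1$.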