Research on the effects of positive reinforcement, negative reinforcement, and punishment continues today, as those concepts are fundamental to learning theory and apply to many practical applications of that theory.
## Operant conditioning
The term operant conditioning was introduced by Skinner to indicate that, in his experimental paradigm, the organism is free to operate on the environment. In this paradigm the experimenter cannot trigger the desired response; the experimenter waits for the response to occur (to be emitted by the organism), and then a potential reinforcer is delivered. In the classical conditioning paradigm, by contrast, the experimenter triggers (elicits) the desired response by presenting a reflex-eliciting stimulus, the unconditional stimulus (UCS), which is paired with (preceded by) a neutral stimulus, the conditional stimulus (CS).
Reinforcement is a basic term in operant conditioning. For the punishment aspect of operant conditioning, see punishment (psychology).
### Positive reinforcement
Positive reinforcement occurs when a desirable event or stimulus is presented as a consequence of a behavior and the chance that this behavior will manifest in similar environments increases. For example, if reading a book is fun, then experiencing the fun positively reinforces the behavior of reading fun books. The person who receives the positive reinforcement (i.e., who has fun reading the book) will read more books to have more fun.
The high probability instruction (HPI) treatment is a behaviorist treatment based on the idea of positive reinforcement.
### Negative reinforcement
Negative reinforcement increases the rate of a behavior that avoids or escapes an aversive situation or stimulus. That is, something unpleasant is already happening, and the behavior helps the person avoid or escape the unpleasantness. In contrast to positive reinforcement, which involves adding a pleasant stimulus, in negative reinforcement, the focus is on the removal of an unpleasant situation or stimulus. For example, if someone feels unhappy, then they might engage in a behavior (e.g., reading books) to escape from the aversive situation (e.g., their unhappy feelings). The success of that avoidant or escapist behavior in removing the unpleasant situation or stimulus reinforces the behavior.
Doing something unpleasant to people to prevent or remove a behavior from happening again is punishment, not negative reinforcement. The main difference is that reinforcement always increases the likelihood of a behavior (e.g., channel surfing while bored temporarily alleviated boredom; therefore, there will be more channel surfing while bored), whereas punishment decreases it (e.g., hangovers are an unpleasant stimulus, so people learn to avoid the behavior that led to that unpleasant stimulus).
### Extinction
Extinction occurs when a given behavior is ignored (i.e. followed up with no consequence). Behaviors disappear over time when they continuously receive no reinforcement. During a deliberate extinction, the targeted behavior spikes first (in an attempt to produce the expected, previously reinforced effects), and then declines over time. Neither reinforcement nor extinction need to be deliberate in order to have an effect on a subject's behavior. For example, if a child reads books because they are fun, then the parents' decision to ignore the book reading will not remove the positive reinforcement (i.e., fun) the child receives from reading books.
However, if a child engages in a behavior to get attention from the parents, then the parents' decision to ignore the behavior will cause the behavior to go extinct, and the child will find a different behavior to get their parents' attention.
### Reinforcement versus punishment
Reinforcers serve to increase behaviors whereas punishers serve to decrease behaviors; thus, positive reinforcers are stimuli that the subject will work to attain, and negative reinforcers are stimuli that the subject will work to be rid of or to end. The table below illustrates the adding and subtracting of stimuli (pleasant or aversive) in relation to reinforcement vs. punishment.
Comparison chart:

| | Rewarding (pleasant) stimulus | Aversive (unpleasant) stimulus |
|---|---|---|
| Positive (adding a stimulus) | Positive reinforcement. Example: reading a book because it is fun and interesting | Positive punishment. Example: telling someone that their actions are inconsiderate |
| Negative (taking a stimulus away) | Negative punishment. Example: loss of privileges (e.g., screen time or permission to attend a desired event) if a rule is broken | Negative reinforcement. Example: reading a book because it allows the reader to escape feelings of boredom or unhappiness |
### Further ideas and concepts
- Distinguishing between positive and negative reinforcement can be difficult and may not always be necessary. Focusing on what is being removed or added and how it affects behavior can be more helpful.
- An event that punishes behavior for some may reinforce behavior for others.
- Some reinforcement can include both positive and negative features, such as a drug addict taking drugs for the added euphoria (positive reinforcement) and also to eliminate withdrawal symptoms (negative reinforcement).
- Reinforcement in the business world is essential in driving productivity. Employees are constantly motivated by the ability to receive a positive stimulus, such as a promotion or a bonus. Employees are also driven by negative reinforcement, such as by eliminating unpleasant tasks.
- Though negative reinforcement can have a positive short-term effect in a workplace (i.e., it encourages a financially beneficial action), over-reliance on negative reinforcement hinders the ability of workers to act in the creative, engaged way that creates growth in the long term.
### Primary and secondary reinforcers
A primary reinforcer, sometimes called an unconditioned reinforcer, is a stimulus that does not require pairing with a different stimulus in order to function as a reinforcer and most likely has obtained this function through evolution and its role in species' survival. Examples of primary reinforcers include food, water, and sex. Some primary reinforcers, such as certain drugs, may mimic the effects of other primary reinforcers. While these primary reinforcers are fairly stable through life and across individuals, the reinforcing value of different primary reinforcers varies due to multiple factors (e.g., genetics, experience).
Thus, one person may prefer one type of food while another avoids it. Or one person may eat much food while another eats very little. So even though food is a primary reinforcer for both individuals, the value of food as a reinforcer differs between them.
A secondary reinforcer, sometimes called a conditioned reinforcer, is a stimulus or situation that has acquired its function as a reinforcer after pairing with a stimulus that functions as a reinforcer. This stimulus may be a primary reinforcer or another conditioned reinforcer (such as money).
When trying to distinguish primary and secondary reinforcers in human examples, use the "caveman test." If the stimulus is something that a caveman would naturally find desirable (e.g. candy) then it is a primary reinforcer. If, on the other hand, the caveman would not react to it (e.g. a dollar bill), it is a secondary reinforcer. As with primary reinforcers, an organism can experience satisfaction and deprivation with secondary reinforcers.
### Other reinforcement terms
- A generalized reinforcer is a conditioned reinforcer that has obtained its reinforcing function by pairing with many other reinforcers and functions as a reinforcer under a wide variety of motivating operations. (One example is money, because it is paired with many other reinforcers.)
- In reinforcer sampling, a potentially reinforcing but unfamiliar stimulus is presented to an organism without regard to any prior behavior.
- Socially-mediated reinforcement involves the delivery of reinforcement that requires the behavior of another organism. For example, another person is providing the reinforcement.
- The Premack principle is a special case of reinforcement elaborated by David Premack, which states that a highly preferred activity can be used effectively as a reinforcer for a less-preferred activity.
- Reinforcement hierarchy is a list of actions, rank-ordering the most desirable to least desirable consequences that may serve as a reinforcer. A reinforcement hierarchy can be used to determine the relative frequency and desirability of different activities, and is often employed when applying the Premack principle.
- Contingent outcomes are more likely to reinforce behavior than non-contingent responses. Contingent outcomes are those directly linked to a causal behavior, such as a light turning on being contingent on flipping a switch. Note that contingent outcomes are not necessary to demonstrate reinforcement, but perceived contingency may increase learning.
- Contiguous stimuli are stimuli closely associated by time and space with specific behaviors. They reduce the amount of time needed to learn a behavior while increasing its resistance to extinction. Giving a dog a piece of food immediately after sitting is more contiguous with (and therefore more likely to reinforce) the behavior than a several minute delay in food delivery following the behavior.
- Noncontingent reinforcement refers to response-independent delivery of stimuli identified as reinforcers for some behaviors of that organism. However, this typically entails time-based delivery of stimuli identified as maintaining aberrant behavior, which decreases the rate of the target behavior. As no measured behavior is identified as being strengthened, there is controversy surrounding the use of the term noncontingent "reinforcement".
## Natural and artificial reinforcement
In his 1967 paper, Arbitrary and Natural Reinforcement, Charles Ferster proposed classifying reinforcement into events that increase the frequency of an operant behavior as a natural consequence of the behavior itself, and events that affect frequency by their requirement of human mediation, such as in a token economy where subjects are rewarded for certain behavior by the therapist.
In 1970, Baer and Wolf developed the concept of "behavioral traps." A behavioral trap requires only a simple response to enter the trap, yet once entered, the trap cannot be resisted in creating general behavior change. It is the use of a behavioral trap that increases a person's repertoire, by exposing them to the naturally occurring reinforcement of that behavior. Behavioral traps have four characteristics:
- They are "baited" with desirable reinforcers that "lure" the student into the trap.
- Only a low-effort response already in the repertoire is necessary to enter the trap.
- Interrelated contingencies of reinforcement inside the trap motivate the person to acquire, extend, and maintain targeted skills.
- They can remain effective for long periods of time because the person shows few, if any, satiation effects.
Thus, artificial reinforcement can be used to build or develop generalizable skills, eventually transitioning to naturally occurring reinforcement to maintain or increase the behavior. Another example is a social situation that will generally result from a specific behavior once it has met a certain criterion.
## Intermittent reinforcement schedules
Behavior is not always reinforced every time it is emitted, and the pattern of reinforcement strongly affects how fast an operant response is learned, what its rate is at any given time, and how long it continues when reinforcement ceases. The simplest rules controlling reinforcement are continuous reinforcement, where every response is reinforced, and extinction, where no response is reinforced. Between these extremes, more complex schedules of reinforcement specify the rules that determine how and when a response will be followed by a reinforcer.
Specific schedules of reinforcement reliably induce specific patterns of response, and these rules apply across many different species. The varying consistency and predictability of reinforcement is an important influence on how the different schedules operate. Many simple and complex schedules were investigated at great length by B.F. Skinner using pigeons.
### Simple schedules
Simple schedules have a single rule to determine when a single type of reinforcer is delivered for a specific response; a minimal simulation of these schedule rules appears after this list.
- Ratio schedule – reinforcement depends only on the number of responses the organism has performed.
- Continuous reinforcement (CRF) – a schedule of reinforcement in which every occurrence of the instrumental (desired) response is followed by the reinforcer.
- Fixed ratio (FR) – reinforcement is delivered after every nth response. An FR 1 schedule is synonymous with a CRF schedule.
  - (e.g., every third time a rat presses a button, it receives a slice of cheese)
- Variable ratio schedule (VR) – reinforced on average every nth response, but not always on the nth response.
  - (e.g., gamblers win on average 1 out of every 10 turns on a slot machine, but this is only an average and they could hypothetically win on any given turn)
- Fixed interval (FI) – reinforced after a fixed amount of time n has elapsed.
  - (e.g., a rat receives a slice of cheese for the first button press after each 10-minute interval; eventually, the rat learns to ignore the button until the 10-minute interval has nearly elapsed)
- Variable interval (VI) – reinforced after an amount of time that averages n but varies from one reinforcer to the next.
  - (e.g., a radio host gives away concert tickets approximately every hour, but the exact minute varies)
- Fixed time (FT) – Provides a reinforcing stimulus at a fixed time since the last reinforcement delivery, regardless of whether the subject has responded or not. In other words, it is a non-contingent schedule.
- Variable time (VT) – Provides reinforcement at an average variable time since last reinforcement, regardless of whether the subject has responded or not.
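To make the schedule rules above concrete, here is a minimal Python sketch of how a delivery decision could be made on each response under FR, VR, and FI schedules. The function names, the closure-based bookkeeping, and the use of the `random` and `time` modules are illustrative assumptions, not a standard behavior-analysis API.

```python
import random
import time

def fixed_ratio(n):
    """FR n: reinforce every nth response (FR 1 is continuous reinforcement)."""
    count = 0
    def on_response():
        nonlocal count
        count += 1
        if count >= n:
            count = 0
            return True          # deliver the reinforcer
        return False
    return on_response

def variable_ratio(mean_n):
    """VR mean_n: reinforce after a varying number of responses averaging mean_n."""
    count, target = 0, random.randint(1, 2 * mean_n - 1)
    def on_response():
        nonlocal count, target
        count += 1
        if count >= target:
            count, target = 0, random.randint(1, 2 * mean_n - 1)
            return True
        return False
    return on_response

def fixed_interval(seconds):
    """FI: reinforce the first response after `seconds` have elapsed since the
    last reinforcer; a VI schedule would instead draw each wait at random."""
    last = time.monotonic()
    def on_response():
        nonlocal last
        if time.monotonic() - last >= seconds:
            last = time.monotonic()
            return True
        return False
    return on_response

# Example: ten lever presses on an FR 3 schedule are reinforced on presses 3, 6, 9.
schedule = fixed_ratio(3)
print([schedule() for _ in range(10)])
```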
Simple schedules are utilized in many differential reinforcement procedures:
- Differential reinforcement of alternative behavior (DRA) - A conditioning procedure in which an undesired response is decreased by placing it on extinction or, less commonly, providing contingent punishment, while simultaneously providing reinforcement contingent on a desirable response. An example would be a teacher attending to a student only when they raise their hand, while ignoring the student when he or she calls out.
- Differential reinforcement of other behavior (DRO) – Also known as omission training procedures, an instrumental conditioning procedure in which a positive reinforcer is periodically delivered only if the participant does something other than the target response. An example would be reinforcing any hand action other than nose picking.
- Differential reinforcement of incompatible behavior (DRI) – Used to reduce a frequent behavior without punishing it by reinforcing an incompatible response. An example would be reinforcing clapping to reduce nose picking.
- Differential reinforcement of low response rate (DRL) – Used to encourage low rates of responding. It is like an interval schedule, except that premature responses reset the time required between behaviors.
- Differential reinforcement of high rate (DRH) – Used to increase high rates of responding. It is like an interval schedule, except that a minimum number of responses are required in the interval in order to receive reinforcement.
#### Effects of different types of simple schedules
- Fixed ratio: activity slows after reinforcer is delivered, then response rates increase until the next reinforcer delivery (post-reinforcement pause).
- Variable ratio: rapid, steady rate of responding; most resistant to extinction.
- Fixed interval: responding increases towards the end of the interval; poor resistance to extinction.
- Variable interval: steady activity results, good resistance to extinction.
- Ratio schedules produce higher rates of responding than interval schedules, when the rates of reinforcement are otherwise similar.
- Variable schedules produce higher rates and greater resistance to extinction than most fixed schedules. This is also known as the Partial Reinforcement Extinction Effect (PREE).
- The variable ratio schedule produces both the highest rate of responding and the greatest resistance to extinction (for example, the behavior of gamblers at slot machines).
- Fixed schedules produce "post-reinforcement pauses" (PRP), where responses will briefly cease immediately following reinforcement, though the pause is a function of the upcoming response requirement rather than the prior reinforcement.
- The PRP of a fixed interval schedule is frequently followed by a "scallop-shaped" accelerating rate of response, while fixed ratio schedules produce a more "angular" response.
- Fixed interval scallop: the pattern of responding that develops under a fixed interval reinforcement schedule; performance on a fixed interval schedule reflects the subject's accuracy in telling time.
- Organisms whose schedules of reinforcement are "thinned" (that is, requiring more responses or a greater wait before reinforcement) may experience "ratio strain" if thinned too quickly. This produces behavior similar to that seen during extinction.
- Ratio strain: the disruption of responding that occurs when a fixed ratio response requirement is increased too rapidly.
- Ratio run: a high and steady rate of responding that completes each ratio requirement. Usually, a higher ratio requirement causes longer post-reinforcement pauses.
- Partial reinforcement schedules are more resistant to extinction than continuous reinforcement schedules.
- Ratio schedules are more resistant than interval schedules and variable schedules more resistant than fixed ones.
- Momentary changes in reinforcement value lead to dynamic changes in behavior.
### Compound schedules
Compound schedules combine two or more different simple schedules in some way using the same reinforcer for the same behavior. There are many possibilities; among those most often used are:
- Alternative schedules – A type of compound schedule where two or more simple schedules are in effect and whichever schedule is completed first results in reinforcement (see the sketch after this list).
- Conjunctive schedules – A complex schedule of reinforcement where two or more simple schedules are in effect independently of each other, and requirements on all of the simple schedules must be met for reinforcement.
- Multiple schedules – Two or more schedules alternate over time, with a stimulus indicating which is in force. Reinforcement is delivered if the response requirement is met while a schedule is in effect.
- Mixed schedules – Either of two, or more, schedules may occur with no stimulus indicating which is in force. Reinforcement is delivered if the response requirement is met while a schedule is in effect.
- Concurrent schedules – A complex reinforcement procedure in which the participant can choose any one of two or more simple reinforcement schedules that are available simultaneously. Organisms are free to change back and forth between the response alternatives at any time.
- Concurrent-chain schedule of reinforcement – A complex reinforcement procedure in which the participant is permitted to choose during the first link which of several simple reinforcement schedules will be in effect in the second link. Once a choice has been made, the rejected alternatives become unavailable until the start of the next trial.
- Interlocking schedules – A single schedule with two components where progress in one component affects progress in the other component. In an interlocking FR 60 FI 120-s schedule, for example, each response subtracts time from the interval component such that each response is "equal" to removing two seconds from the FI schedule.
- Chained schedules – Reinforcement occurs after two or more successive schedules have been completed, with a stimulus indicating when one schedule has been completed and the next has started.
- Tandem schedules – Reinforcement occurs when two or more successive schedule requirements have been completed, with no stimulus indicating when a schedule has been completed and the next has started.
- Higher-order schedules – completion of one schedule is reinforced according to a second schedule; e.g. in FR2 (FI10 secs), two successive fixed interval schedules require completion before a response is reinforced.
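As a rough illustration of how compound schedules combine simple ones, the sketch below composes the schedule helpers from the earlier simple-schedule sketch: an alternative schedule reinforces when any component requirement is met, while a conjunctive schedule waits until all components have been satisfied. The latching logic is an assumption made for illustration, not a standard formulation.

```python
def alternative(*schedules):
    """Alternative schedule: reinforce as soon as ANY component schedule pays off."""
    def on_response():
        # Use a list so every component sees the response (any() would short-circuit).
        return any([s() for s in schedules])
    return on_response

def conjunctive(*schedules):
    """Conjunctive schedule: reinforce only once ALL component requirements are met."""
    met = [False] * len(schedules)
    def on_response():
        for i, s in enumerate(schedules):
            met[i] = met[i] or s()
        if all(met):
            met[:] = [False] * len(met)   # start the next cycle
            return True
        return False
    return on_response

# Example: reinforcement when either an FR 5 or an FI 60 s requirement is completed.
either = alternative(fixed_ratio(5), fixed_interval(60))
```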
### Superimposed schedules
The psychology term superimposed schedules of reinforcement refers to a structure of rewards where two or more simple schedules of reinforcement operate simultaneously. Reinforcers can be positive, negative, or both. An example is a person who comes home after a long day at work. The behavior of opening the front door is rewarded by a big kiss on the lips by the person's spouse and a rip in the pants from the family dog jumping enthusiastically.
Another example of superimposed schedules of reinforcement is a pigeon in an experimental cage pecking at a button. The pecks deliver a hopper of grain every 20th peck, and access to water after every 200 pecks.
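A small sketch of this pigeon example, assuming a simple peck counter: grain is scheduled on every 20th peck and water on every 200th, so a single peck can produce more than one consequence.

```python
def superimposed_pecking(total_pecks=400):
    """Superimposed ratio schedules on the same response: grain every 20th peck,
    water every 200th peck (both consequences can follow the same peck)."""
    events = []
    for peck in range(1, total_pecks + 1):
        consequences = []
        if peck % 20 == 0:
            consequences.append("grain")
        if peck % 200 == 0:
            consequences.append("water")
        if consequences:
            events.append((peck, consequences))
    return events

# Pecks 200 and 400 deliver grain and water together -- the "and" character of
# superimposed schedules, in contrast to concurrent ("or") schedules.
print(superimposed_pecking()[-1])   # (400, ['grain', 'water'])
```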
Superimposed schedules of reinforcement are a type of compound schedule that evolved from the initial work on simple schedules of reinforcement by B.F. Skinner and his colleagues (Skinner and Ferster, 1957). They demonstrated that reinforcers could be delivered on schedules, and further that organisms behaved differently under different schedules. Rather than a reinforcer, such as food or water, being delivered every time as a consequence of some behavior, a reinforcer could be delivered after more than one instance of the behavior. For example, a pigeon may be required to peck a button switch ten times before food appears. This is a "ratio schedule". Also, a reinforcer could be delivered after an interval of time passed following a target behavior. An example is a rat that is given a food pellet immediately following the first response that occurs after two minutes has elapsed since the last lever press.
This is called an "interval schedule".
In addition, ratio schedules can deliver reinforcement following fixed or variable number of behaviors by the individual organism. Likewise, interval schedules can deliver reinforcement following fixed or variable intervals of time following a single response by the organism. Individual behaviors tend to generate response rates that differ based upon how the reinforcement schedule is created. Much subsequent research in many labs examined the effects on behaviors of scheduling reinforcers.
If an organism is offered the opportunity to choose between or among two or more simple schedules of reinforcement at the same time, the reinforcement structure is called a "concurrent schedule of reinforcement". Brechner (1974, 1977) introduced the concept of superimposed schedules of reinforcement in an attempt to create a laboratory analogy of social traps, such as when humans overharvest their fisheries or tear down their rainforests. Brechner created a situation where simple reinforcement schedules were superimposed upon each other. In other words, a single response or group of responses by an organism led to multiple consequences.
Concurrent schedules of reinforcement can be thought of as "or" schedules, and superimposed schedules of reinforcement can be thought of as "and" schedules. Brechner and Linder (1981) and Brechner (1987) expanded the concept to describe how superimposed schedules and the social trap analogy could be used to analyze the way energy flows through systems.
Superimposed schedules of reinforcement have many real-world applications in addition to generating social traps. Many different human individual and social situations can be created by superimposing simple reinforcement schedules. For example, a human being could have simultaneous tobacco and alcohol addictions. Even more complex situations can be created or simulated by superimposing two or more concurrent schedules. For example, a high school senior could have a choice between going to Stanford University or UCLA, and at the same time have the choice of going into the Army or the Air Force, and simultaneously the choice of taking a job with an internet company or a job with a software company. That is a reinforcement structure of three superimposed concurrent schedules of reinforcement.
Superimposed schedules of reinforcement can create the three classic conflict situations (approach–approach conflict, approach–avoidance conflict, and avoidance–avoidance conflict) described by Kurt Lewin (1935) and can operationalize other Lewinian situations analyzed by his force field analysis. Other examples of the use of superimposed schedules of reinforcement as an analytical tool are its application to the contingencies of rent control (Brechner, 2003) and problem of toxic waste dumping in the Los Angeles County storm drain system (Brechner, 2010).
### Concurrent schedules
In operant conditioning, concurrent schedules of reinforcement are schedules of reinforcement that are simultaneously available to an animal subject or human participant, so that the subject or participant can respond on either schedule. For example, in a two-alternative forced choice task, a pigeon in a Skinner box is faced with two pecking keys; pecking responses can be made on either, and food reinforcement might follow a peck on either.
The schedules of reinforcement arranged for pecks on the two keys can be different. They may be independent, or they may be linked so that behavior on one key affects the likelihood of reinforcement on the other.
It is not necessary for responses on the two schedules to be physically distinct. In an alternate way of arranging concurrent schedules, introduced by Findley in 1958, both schedules are arranged on a single key or other response device, and the subject can respond on a second key to change between the schedules. In such a "Findley concurrent" procedure, a stimulus (e.g., the color of the main key) signals which schedule is in effect.
Concurrent schedules often induce rapid alternation between the keys. To prevent this, a "changeover delay" is commonly introduced: each schedule is inactivated for a brief period after the subject switches to it.
When both of the concurrent schedules are variable intervals, a quantitative relationship known as the matching law is found between relative response rates on the two schedules and the relative reinforcement rates they deliver; this was first observed by R.J. Herrnstein in 1961. The matching law is a rule for instrumental behavior which states that the relative rate of responding on a particular response alternative equals the relative rate of reinforcement for that response (rate of behavior = rate of reinforcement). Animals and humans have a tendency to prefer choice in schedules.
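In its standard strict form (not spelled out explicitly in the text above), the matching law relates the response rates B1 and B2 on the two variable-interval schedules to the reinforcement rates R1 and R2 they deliver:

$$
\frac{B_1}{B_1 + B_2} = \frac{R_1}{R_1 + R_2}, \qquad \text{equivalently} \qquad \frac{B_1}{B_2} = \frac{R_1}{R_2}.
$$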
## Shaping
Shaping is the reinforcement of successive approximations to a desired instrumental response. In training a rat to press a lever, for example, simply turning toward the lever is reinforced at first. Then, only turning and stepping toward it is reinforced. Eventually the rat will be reinforced for pressing the lever. The successful attainment of one behavior starts the shaping process for the next. As training progresses, the response becomes progressively more like the desired behavior, with each subsequent behavior becoming a closer approximation of the final behavior.
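A minimal sketch of shaping as a rising criterion: any response that meets the current approximation is reinforced, and the criterion then moves one step closer to the target. The numeric coding of the rat's behavior is an illustrative assumption, not part of the source.

```python
def shaping_session(responses, criteria):
    """Reinforce responses meeting the current criterion; after each success,
    require the next (closer) approximation of the target behavior."""
    step, log = 0, []
    for r in responses:
        if step < len(criteria) and r >= criteria[step]:
            log.append((r, "reinforced"))
            step += 1                     # tighten the criterion
        else:
            log.append((r, "not reinforced"))
    return log

# Lever-press example: 1 = turns toward lever, 2 = steps toward it,
# 3 = touches it, 4 = presses it.
print(shaping_session(responses=[1, 1, 2, 1, 3, 4], criteria=[1, 2, 3, 4]))
```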
The intervention of shaping is used in many training situations, and also for individuals with autism as well as other developmental disabilities. When shaping is combined with other evidence-based practices such as Functional Communication Training (FCT), it can yield positive outcomes for human behavior. Shaping typically uses continuous reinforcement, but the response can later be shifted to an intermittent reinforcement schedule.
Shaping is also used to treat food refusal, in which an individual has a partial or total aversion to food items. Food refusal can range from mild pickiness to aversions severe enough to affect an individual's health. Shaping has achieved a high rate of success in promoting food acceptance.
## Chaining
Chaining involves linking discrete behaviors together in a series, such that the consequence of each behavior is both the reinforcement for the previous behavior, and the antecedent stimulus for the next behavior. There are many ways to teach chaining, such as forward chaining (starting from the first behavior in the chain), backwards chaining (starting from the last behavior) and total task chaining (teaching each behavior in the chain simultaneously). People's morning routines are a typical chain, with a series of behaviors (e.g. showering, drying off, getting dressed) occurring in sequence as a well learned habit.
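The chaining idea can be sketched as a sequence in which completing each behavior produces the cue (and conditioned reinforcement) for the next, with the terminal reinforcer following the last link. The `perform` callback below is a hypothetical stand-in for the learner actually emitting each behavior.

```python
def run_chain(links, perform):
    """Run a behavior chain: each completed link cues the next link, and the
    terminal reinforcer is delivered after the final link."""
    for i, behavior in enumerate(links):
        if not perform(behavior):
            return f"chain broken at '{behavior}'"
        if i + 1 < len(links):
            # The consequence of this link doubles as the antecedent for the next.
            print(f"'{behavior}' completed -> cue for '{links[i + 1]}'")
    return "terminal reinforcer delivered"

# Morning-routine example from the text.
print(run_chain(["shower", "dry off", "get dressed"], perform=lambda b: True))
```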
Challenging behaviors seen in individuals with autism and other related disabilities have been successfully managed and maintained in studies using a schedule of chained reinforcement. Functional communication training is an intervention that often uses chained schedules of reinforcement to effectively promote the appropriate and desired functional communication response.
## Mathematical models
There has been research on building a mathematical model of reinforcement. This model is known as MPR, which is short for mathematical principles of reinforcement. Peter Killeen has made key discoveries in the field with his research on pigeons.
## Applications
Reinforcement and punishment are ubiquitous in human social interactions, and a great many applications of operant principles have been suggested and implemented. Following are a few examples.
### Addiction and dependence
Positive and negative reinforcement play central roles in the development and maintenance of addiction and drug dependence. An addictive drug is intrinsically rewarding; that is, it functions as a primary positive reinforcer of drug use. The brain's reward system assigns it incentive salience (i.e., it is "wanted" or "desired"), so as an addiction develops, deprivation of the drug leads to craving. In addition, stimuli associated with drug use – e.g., the sight of a syringe, and the location of use – become associated with the intense reinforcement induced by the drug. These previously neutral stimuli acquire several properties: their appearance can induce craving, and they can become conditioned positive reinforcers of continued use. Thus, if an addicted individual encounters one of these drug cues, a craving for the associated drug may reappear. For example, anti-drug agencies previously used posters with images of drug paraphernalia as an attempt to show the dangers of drug use. However, such posters are no longer used because of the effects of incentive salience in causing relapse upon sight of the stimuli illustrated in the posters.
In drug dependent individuals, negative reinforcement occurs when a drug is self-administered in order to alleviate or "escape" the symptoms of physical dependence (e.g., tremors and sweating) and/or psychological dependence (e.g., anhedonia, restlessness, irritability, and anxiety) that arise during the state of drug withdrawal.
### Animal training
Animal trainers and pet owners were applying the principles and practices of operant conditioning long before these ideas were named and studied, and animal training still provides one of the clearest and most convincing examples of operant control. Of the concepts and procedures described in this article, a few of the most salient are: availability of immediate reinforcement (e.g. the ever-present bag of dog yummies); contingency, assuring that reinforcement follows the desired behavior and not something else; the use of secondary reinforcement, as in sounding a clicker immediately after a desired response; shaping, as in gradually getting a dog to jump higher and higher; intermittent reinforcement, reducing the frequency of those yummies to induce persistent behavior without satiation; chaining, where a complex behavior is gradually put together.
### Child behavior – parent management training
Providing positive reinforcement for appropriate child behaviors is a major focus of parent management training. Typically, parents learn to reward appropriate behavior through social rewards (such as praise, smiles, and hugs) as well as concrete rewards (such as stickers or points towards a larger reward as part of an incentive system created collaboratively with the child) (Kazdin AE (2010). Problem-solving skills training and parent management training for oppositional defiant disorder and conduct disorder. In Evidence-based psychotherapies for children and adolescents, 2nd ed., 211–226. New York: Guilford Press).
In addition, parents learn to select simple behaviors as an initial focus and reward each of the small steps that their child achieves towards reaching a larger goal (this concept is called "successive approximations") (Forgatch MS, Patterson GR (2010). Parent management training – Oregon model: An intervention for antisocial behavior in children and adolescents. In Evidence-based psychotherapies for children and adolescents, 2nd ed., 159–78. New York: Guilford Press). They may also use indirect rewards, such as progress charts. Providing positive reinforcement in the classroom can be beneficial to student success. When applying positive reinforcement to students, it is crucial to individualize it to the student's needs. This way, the student understands why they are receiving the praise, can accept it, and eventually learns to continue the action that earned the positive reinforcement. For example, rewards or extra recess time might work better for some students, whereas others might respond to reinforcement in the form of stickers or check marks indicating praise.
### Economics
Both psychologists and economists have become interested in applying operant concepts and findings to the behavior of humans in the marketplace. An example
is the analysis of consumer demand, as indexed by the amount of a commodity that is purchased.
In economics, the degree to which price influences consumption is called the "price elasticity of demand." Certain commodities are more elastic than others; for example, a change in price of certain foods may have a large effect on the amount bought, while gasoline and other essentials may be less affected by price changes. In terms of operant analysis, such effects may be interpreted in terms of the motivations of consumers and the relative value of the commodities as reinforcers (Domjan, M. (2009). The Principles of Learning and Behavior, 6th ed. Wadsworth Publishing Company, pp. 244–249).
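For reference, the price elasticity of demand mentioned above is conventionally defined as the percentage change in quantity purchased per percentage change in price:

$$
E_d = \frac{\%\,\Delta Q}{\%\,\Delta P} = \frac{\Delta Q / Q}{\Delta P / P}.
$$

As a purely illustrative calculation, if a 10% price increase reduces purchases of a snack food by 20%, the elasticity is −20%/10% = −2 (elastic demand); if the same increase reduces gasoline purchases by only 3%, the elasticity is −0.3 (inelastic demand).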
### Gambling – variable ratio scheduling
As stated earlier in this article, a variable ratio schedule yields reinforcement after the emission of an unpredictable number of responses. This schedule typically generates rapid, persistent responding. Slot machines pay off on a variable ratio schedule, and they produce just this sort of persistent lever-pulling behavior in gamblers. Because the machines are programmed to pay out less money than they take in, the persistent slot-machine user invariably loses in the long run.
Slot machines, and thus variable ratio reinforcement, have often been blamed as a factor underlying gambling addiction.
### Praise
The concept of praise as a means of behavioral reinforcement in humans is rooted in B.F. Skinner's model of operant conditioning. Through this lens, praise has been viewed as a means of positive reinforcement, wherein an observed behavior is made more likely to occur by contingently praising said behavior. Hundreds of studies have demonstrated the effectiveness of praise in promoting positive behaviors, notably in the study of teacher and parent use of praise on children in promoting improved behavior and academic performance, but also in the study of work performance. Praise has also been demonstrated to reinforce positive behaviors in non-praised adjacent individuals (such as a classmate of the praise recipient) through vicarious reinforcement. Praise may be more or less effective in changing behavior depending on its form, content and delivery.
In order for praise to effect positive behavior change, it must be contingent on the positive behavior (i.e., only administered after the targeted behavior is enacted), must specify the particulars of the behavior that is to be reinforced, and must be delivered sincerely and credibly.
Acknowledging the effect of praise as a positive reinforcement strategy, numerous behavioral and cognitive behavioral interventions have incorporated the use of praise in their protocols. The strategic use of praise is recognized as an evidence-based practice in both classroom management and parenting training interventions, though praise is often subsumed in intervention research into a larger category of positive reinforcement, which includes strategies such as strategic attention and behavioral rewards.
### Traumatic bonding
Traumatic bonding occurs as the result of ongoing cycles of abuse in which the intermittent reinforcement of reward and punishment creates powerful emotional bonds that are resistant to change (Chrissie Sanderson, Counselling Survivors of Domestic Abuse, Jessica Kingsley Publishers, 15 June 2008, p. 84).
Another source indicated that:
"The necessary conditions for traumatic bonding are that one person must dominate the other and that the level of abuse chronically spikes and then subsides. The relationship is characterized by periods of permissive, compassionate, and even affectionate behavior from the dominant person, punctuated by intermittent episodes of intense abuse. To maintain the upper hand, the victimizer manipulates the behavior of the victim and limits the victim's options so as to perpetuate the power imbalance. Any threat to the balance of dominance and submission may be met with an escalating cycle of punishment ranging from seething intimidation to intensely violent outbursts. The victimizer also isolates the victim from other sources of support, which reduces the likelihood of detection and intervention, impairs the victim's ability to receive countervailing self-referent feedback, and strengthens the sense of unilateral dependency ... The traumatic effects of these abusive relationships may include the impairment of the victim's capacity for accurate self-appraisal, leading to a sense of personal inadequacy and a subordinate sense of dependence upon the dominating person. Victims also may encounter a variety of unpleasant social and legal consequences of their emotional and behavioral affiliation with someone who perpetrated aggressive acts, even if they themselves were the recipients of the aggression."
### Video games
Most video games are designed around some type of compulsion loop, adding a type of positive reinforcement through a variable rate schedule to keep the player playing the game, though this can also lead to video game addiction.
As part of a trend in the monetization of video games in the 2010s, some games offered "loot boxes" as rewards or as items purchasable with real-world funds; a loot box offers a random selection of in-game items, distributed by rarity. The practice has been tied to the same methods that slot machines and other gambling devices use to dole out rewards, as it follows a variable rate schedule. While the general perception is that loot boxes are a form of gambling, the practice is classified as gambling in only a few countries and is otherwise legal. However, methods of using those items as virtual currency for online gambling, or of trading them for real-world money, have created a skin gambling market that is under legal evaluation.
## Criticisms
The standard definition of behavioral reinforcement has been criticized as circular, since it appears to argue that response strength is increased by reinforcement while defining reinforcement as something that increases response strength (i.e., response strength is increased by things that increase response strength). However, the correct usage is that something is a reinforcer because of its effect on behavior, and not the other way around. The definition becomes circular if one says that a particular stimulus strengthens behavior because it is a reinforcer, without explaining why the stimulus produces that effect on the behavior. Other definitions have been proposed, such as F.D. Sheffield's "consummatory behavior contingent on a response", but these are not broadly used in psychology.
Increasingly, the understanding of the role reinforcers play is moving away from a "strengthening" effect toward a "signalling" effect: that is, the view that reinforcers increase responding because they signal the behaviors that are likely to result in reinforcement.
While in most practical applications, the effect of any given reinforcer will be the same regardless of whether the reinforcer is signalling or strengthening, this approach helps to explain a number of behavioral phenomena including patterns of responding on intermittent reinforcement schedules (fixed interval scallops) and the differential outcomes effect.
In linear algebra, a generalized eigenvector of an $n \times n$ matrix $A$ is a vector which satisfies certain criteria which are more relaxed than those for an (ordinary) eigenvector.
Let $V$ be an $n$-dimensional vector space and let $A$ be the matrix representation of a linear map from $V$ to $V$ with respect to some ordered basis.
There may not always exist a full set of $n$ linearly independent eigenvectors of $A$ that form a complete basis for $V$. That is, the matrix $A$ may not be diagonalizable. This happens when the algebraic multiplicity of at least one eigenvalue $\lambda_i$ is greater than its geometric multiplicity (the nullity of the matrix $(A - \lambda_i I)$, or the dimension of its nullspace). In this case, $\lambda_i$ is called a defective eigenvalue and $A$ is called a defective matrix.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
This happens when the algebraic multiplicity of at least one eigenvalue
$$
\lambda_i
$$
is greater than its geometric multiplicity (the nullity of the matrix
$$
(A-\lambda_i I)
$$
, or the dimension of its nullspace). In this case,
$$
\lambda_i
$$
is called a defective eigenvalue and
$$
A
$$
is called a defective matrix.
A generalized eigenvector
$$
x_i
$$
corresponding to
$$
\lambda_i
$$
, together with the matrix
$$
(A-\lambda_i I)
$$
generate a Jordan chain of linearly independent generalized eigenvectors which form a basis for an invariant subspace of
$$
V
$$
.
Using generalized eigenvectors, a set of linearly independent eigenvectors of
$$
A
$$
can be extended, if necessary, to a complete basis for
$$
V
$$
. This basis can be used to determine an "almost diagonal matrix"
$$
J
$$
in Jordan normal form, similar to
$$
A
$$
, which is useful in computing certain matrix functions of
$$
A
$$
.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
This basis can be used to determine an "almost diagonal matrix"
$$
J
$$
in Jordan normal form, similar to
$$
A
$$
, which is useful in computing certain matrix functions of
$$
A
$$
. The matrix
$$
J
$$
is also useful in solving the system of linear differential equations
$$
\mathbf x' = A \mathbf x,
$$
where
$$
A
$$
need not be diagonalizable.
The dimension of the generalized eigenspace corresponding to a given eigenvalue
$$
\lambda
$$
is the algebraic multiplicity of
$$
\lambda
$$
.
## Overview and definition
There are several equivalent ways to define an ordinary eigenvector. For our purposes, an eigenvector
$$
\mathbf u
$$
associated with an eigenvalue
$$
\lambda
$$
of an
$$
n
$$
×
$$
n
$$
matrix
$$
A
$$
is a nonzero vector for which
$$
(A - \lambda I) \mathbf u = \mathbf 0
$$
, where
$$
I
$$
is the
$$
n
$$
×
$$
n
$$
identity matrix and
$$
\mathbf 0
$$
is the zero vector of length
$$
n
$$
.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
For our purposes, an eigenvector
$$
\mathbf u
$$
associated with an eigenvalue
$$
\lambda
$$
of an
$$
n
$$
×
$$
n
$$
matrix
$$
A
$$
is a nonzero vector for which
$$
(A - \lambda I) \mathbf u = \mathbf 0
$$
, where
$$
I
$$
is the
$$
n
$$
×
$$
n
$$
identity matrix and
$$
\mathbf 0
$$
is the zero vector of length
$$
n
$$
. That is,
$$
\mathbf u
$$
is in the kernel of the transformation
$$
(A - \lambda I)
$$
. If
$$
A
$$
has
$$
n
$$
linearly independent eigenvectors, then
$$
A
$$
is similar to a diagonal matrix
$$
D
$$
. That is, there exists an invertible matrix
$$
M
$$
such that
$$
A
$$
is diagonalizable through the similarity transformation
$$
D = M^{-1}AM
$$
. The matrix
$$
D
$$
is called a spectral matrix for
$$
A
$$
. The matrix
$$
M
$$
is called a modal matrix for
$$
A
$$
. Diagonalizable matrices are of particular interest since matrix functions of them can be computed easily.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
The matrix
$$
M
$$
is called a modal matrix for
$$
A
$$
. Diagonalizable matrices are of particular interest since matrix functions of them can be computed easily.
On the other hand, if
$$
A
$$
does not have
$$
n
$$
linearly independent eigenvectors associated with it, then
$$
A
$$
is not diagonalizable.
Definition: A vector
$$
\mathbf x_m
$$
is a generalized eigenvector of rank m of the matrix
$$
A
$$
and corresponding to the eigenvalue
$$
\lambda
$$
if
$$
(A - \lambda I)^m \mathbf x_m = \mathbf 0
$$
but
$$
(A - \lambda I)^{m-1} \mathbf x_m \ne \mathbf 0.
$$
Clearly, a generalized eigenvector of rank 1 is an ordinary eigenvector. Every
$$
n
$$
×
$$
n
$$
matrix
$$
A
$$
has
$$
n
$$
linearly independent generalized eigenvectors associated with it and can be shown to be similar to an "almost diagonal" matrix
$$
J
$$
in Jordan normal form. That is, there exists an invertible matrix
$$
M
$$
such that
$$
J = M^{-1}AM
$$
.
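As a quick illustration of the rank-m condition (not part of the source), the following NumPy sketch checks whether a vector is a generalized eigenvector of rank m; the helper name is illustrative, and the test matrix is the 2 × 2 Jordan block used in Example 1 below.
```python
import numpy as np

def is_generalized_eigenvector(A, x, lam, m, tol=1e-10):
    """Check (A - lam*I)^m x = 0 while (A - lam*I)^(m-1) x != 0."""
    B = A - lam * np.eye(A.shape[0])
    hits_zero = np.allclose(np.linalg.matrix_power(B, m) @ x, 0, atol=tol)
    too_early = np.allclose(np.linalg.matrix_power(B, m - 1) @ x, 0, atol=tol)
    return hits_zero and not too_early

# The 2x2 Jordan block with eigenvalue 1 (see Example 1): (0, 1) has rank 2.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
print(is_generalized_eigenvector(A, np.array([0.0, 1.0]), lam=1.0, m=2))  # True
```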
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
Every
$$
n
$$
×
$$
n
$$
matrix
$$
A
$$
has
$$
n
$$
linearly independent generalized eigenvectors associated with it and can be shown to be similar to an "almost diagonal" matrix
$$
J
$$
in Jordan normal form. That is, there exists an invertible matrix
$$
M
$$
such that
$$
J = M^{-1}AM
$$
. The matrix
$$
M
$$
in this case is called a generalized modal matrix for
$$
A
$$
. If
$$
\lambda
$$
is an eigenvalue of algebraic multiplicity
$$
\mu
$$
, then
$$
A
$$
will have
$$
\mu
$$
linearly independent generalized eigenvectors corresponding to
$$
\lambda
$$
. These results, in turn, provide a straightforward method for computing certain matrix functions of
$$
A
$$
.
Note: For an
$$
n \times n
$$
matrix
$$
A
$$
over a field
$$
F
$$
to be expressed in Jordan normal form, all eigenvalues of
$$
A
$$
must be in
$$
F
$$
. That is, the characteristic polynomial
$$
f(x)
$$
must factor completely into linear factors;
$$
F
$$
must be an algebraically closed field.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
Note: For an
$$
n \times n
$$
matrix
$$
A
$$
over a field
$$
F
$$
to be expressed in Jordan normal form, all eigenvalues of
$$
A
$$
must be in
$$
F
$$
. That is, the characteristic polynomial
$$
f(x)
$$
must factor completely into linear factors;
$$
F
$$
must be an algebraically closed field. For example, if
$$
A
$$
has real-valued elements, then it may be necessary for the eigenvalues and the components of the eigenvectors to have complex values.
The set spanned by all generalized eigenvectors for a given
$$
\lambda
$$
forms the generalized eigenspace for
$$
\lambda
$$
.
## Examples
Here are some examples to illustrate the concept of generalized eigenvectors. Some of the details will be described later.
### Example 1
This example is simple but clearly illustrates the point. This type of matrix is used frequently in textbooks.
Suppose
$$
A = \begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix}.
$$
Then there is only one eigenvalue,
$$
\lambda = 1
$$
, and its algebraic multiplicity is
$$
m=2
$$
.
Notice that this matrix is in Jordan normal form but is not diagonal.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
Suppose
$$
A = \begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix}.
$$
Then there is only one eigenvalue,
$$
\lambda = 1
$$
, and its algebraic multiplicity is
$$
m=2
$$
.
Notice that this matrix is in Jordan normal form but is not diagonal. Hence, this matrix is not diagonalizable. Since there is one superdiagonal entry, there will be one generalized eigenvector of rank greater than 1 (or one could note that the vector space
$$
V
$$
is of dimension 2, so there can be at most one generalized eigenvector of rank greater than 1). Alternatively, one could compute the dimension of the nullspace of
$$
A - \lambda I
$$
to be
$$
p=1
$$
, and thus there are
$$
m-p=1
$$
generalized eigenvectors of rank greater than 1.
The ordinary eigenvector
$$
\mathbf v_1=\begin{pmatrix}1 \\0 \end{pmatrix}
$$
is computed as usual (see the eigenvector page for examples).
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
Alternatively, one could compute the dimension of the nullspace of
$$
A - \lambda I
$$
to be
$$
p=1
$$
, and thus there are
$$
m-p=1
$$
generalized eigenvectors of rank greater than 1.
The ordinary eigenvector
$$
\mathbf v_1=\begin{pmatrix}1 \\0 \end{pmatrix}
$$
is computed as usual (see the eigenvector page for examples). Using this eigenvector, we compute the generalized eigenvector
$$
\mathbf v_2
$$
by solving
$$
(A-\lambda I) \mathbf v_2 = \mathbf v_1.
$$
Writing out the values:
$$
\left(\begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix} - 1 \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix}\right)\begin{pmatrix}v_{21} \\v_{22} \end{pmatrix} = \begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix} \begin{pmatrix}v_{21} \\v_{22} \end{pmatrix} =
\begin{pmatrix}1 \\0 \end{pmatrix}.
$$
This simplifies to
$$
v_{22}= 1.
$$
The element
$$
v_{21}
$$
has no restrictions.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
The ordinary eigenvector
$$
\mathbf v_1=\begin{pmatrix}1 \\0 \end{pmatrix}
$$
is computed as usual (see the eigenvector page for examples). Using this eigenvector, we compute the generalized eigenvector
$$
\mathbf v_2
$$
by solving
$$
(A-\lambda I) \mathbf v_2 = \mathbf v_1.
$$
Writing out the values:
$$
\left(\begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix} - 1 \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix}\right)\begin{pmatrix}v_{21} \\v_{22} \end{pmatrix} = \begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix} \begin{pmatrix}v_{21} \\v_{22} \end{pmatrix} =
\begin{pmatrix}1 \\0 \end{pmatrix}.
$$
This simplifies to
$$
v_{22}= 1.
$$
The element
$$
v_{21}
$$
has no restrictions. The generalized eigenvector of rank 2 is then
$$
\mathbf v_2=\begin{pmatrix}a \\1 \end{pmatrix}
$$
, where a can have any scalar value.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
Using this eigenvector, we compute the generalized eigenvector
$$
\mathbf v_2
$$
by solving
$$
(A-\lambda I) \mathbf v_2 = \mathbf v_1.
$$
Writing out the values:
$$
\left(\begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix} - 1 \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix}\right)\begin{pmatrix}v_{21} \\v_{22} \end{pmatrix} = \begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix} \begin{pmatrix}v_{21} \\v_{22} \end{pmatrix} =
\begin{pmatrix}1 \\0 \end{pmatrix}.
$$
This simplifies to
$$
v_{22}= 1.
$$
The element
$$
v_{21}
$$
has no restrictions. The generalized eigenvector of rank 2 is then
$$
\mathbf v_2=\begin{pmatrix}a \\1 \end{pmatrix}
$$
, where a can have any scalar value. The choice of a = 0 is usually the simplest.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
The generalized eigenvector of rank 2 is then
$$
\mathbf v_2=\begin{pmatrix}a \\1 \end{pmatrix}
$$
, where a can have any scalar value. The choice of a = 0 is usually the simplest.
Note that
$$
(A-\lambda I) \mathbf v_2 = \begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix} \begin{pmatrix}a \\1 \end{pmatrix} =
\begin{pmatrix}1 \\0 \end{pmatrix} = \mathbf v_1,
$$
so that
$$
\mathbf v_2
$$
is a generalized eigenvector, because
$$
(A-\lambda I)^2 \mathbf v_2 = (A-\lambda I) [(A-\lambda I)\mathbf v_2] =(A-\lambda I) \mathbf v_1 = \begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix} \begin{pmatrix}1 \\0 \end{pmatrix} =
\begin{pmatrix}0 \\0 \end{pmatrix} = \mathbf 0,
$$
so that
$$
\mathbf v_1
$$
is an ordinary eigenvector, and that
$$
\mathbf v_1
$$
and
$$
\mathbf v_2
$$
are linearly independent and hence constitute a basis for the vector space
$$
V
$$
.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
The choice of a = 0 is usually the simplest.
Note that
$$
(A-\lambda I) \mathbf v_2 = \begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix} \begin{pmatrix}a \\1 \end{pmatrix} =
\begin{pmatrix}1 \\0 \end{pmatrix} = \mathbf v_1,
$$
so that
$$
\mathbf v_2
$$
is a generalized eigenvector, because
$$
(A-\lambda I)^2 \mathbf v_2 = (A-\lambda I) [(A-\lambda I)\mathbf v_2] =(A-\lambda I) \mathbf v_1 = \begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix} \begin{pmatrix}1 \\0 \end{pmatrix} =
\begin{pmatrix}0 \\0 \end{pmatrix} = \mathbf 0,
$$
so that
$$
\mathbf v_1
$$
is an ordinary eigenvector, and that
$$
\mathbf v_1
$$
and
$$
\mathbf v_2
$$
are linearly independent and hence constitute a basis for the vector space
$$
V
$$
.
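The same chain can be reproduced numerically. A minimal NumPy sketch (not part of the source): since A − λI is singular, a least-squares solve returns one particular solution of (A − λI)v₂ = v₁, i.e. one valid choice of the free parameter a.
```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
lam = 1.0
B = A - lam * np.eye(2)

v1 = np.array([1.0, 0.0])                    # the ordinary eigenvector found above
v2, *_ = np.linalg.lstsq(B, v1, rcond=None)  # one particular solution of (A - I) v2 = v1

print(np.allclose(B @ v2, v1))                            # True: v2 has rank 2
print(np.allclose(np.linalg.matrix_power(B, 2) @ v2, 0))  # True: (A - I)^2 v2 = 0
```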
### Example 2
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
Note that
$$
(A-\lambda I) \mathbf v_2 = \begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix} \begin{pmatrix}a \\1 \end{pmatrix} =
\begin{pmatrix}1 \\0 \end{pmatrix} = \mathbf v_1,
$$
so that
$$
\mathbf v_2
$$
is a generalized eigenvector, because
$$
(A-\lambda I)^2 \mathbf v_2 = (A-\lambda I) [(A-\lambda I)\mathbf v_2] =(A-\lambda I) \mathbf v_1 = \begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix} \begin{pmatrix}1 \\0 \end{pmatrix} =
\begin{pmatrix}0 \\0 \end{pmatrix} = \mathbf 0,
$$
so that
$$
\mathbf v_1
$$
is an ordinary eigenvector, and that
$$
\mathbf v_1
$$
and
$$
\mathbf v_2
$$
are linearly independent and hence constitute a basis for the vector space
$$
V
$$
.
### Example 2
This example is more complex than Example 1.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
### Example 2
This example is more complex than Example 1. Unfortunately, it is a little difficult to construct an interesting example of low order.
The matrix
$$
A = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
3 & 1 & 0 & 0 & 0 \\
6 & 3 & 2 & 0 & 0 \\
10 & 6 & 3 & 2 & 0 \\
15 & 10 & 6 & 3 & 2
\end{pmatrix}
$$
has eigenvalues
$$
\lambda_1 = 1
$$
and
$$
\lambda_2 = 2
$$
with algebraic multiplicities
$$
\mu_1 = 2
$$
and
$$
\mu_2 = 3
$$
, but geometric multiplicities
$$
\gamma_1 = 1
$$
and
$$
\gamma_2 = 1
$$
.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
The generalized eigenspaces of
$$
A
$$
are calculated below.
$$
\mathbf x_1
$$
is the ordinary eigenvector associated with
$$
\lambda_1
$$
.
$$
\mathbf x_2
$$
is a generalized eigenvector associated with
$$
\lambda_1
$$
.
$$
\mathbf y_1
$$
is the ordinary eigenvector associated with
$$
\lambda_2
$$
.
$$
\mathbf y_2
$$
and
$$
\mathbf y_3
$$
are generalized eigenvectors associated with
$$
\lambda_2
$$
.
$$
(A-1 I) \mathbf x_1
= \begin{pmatrix}
0 & 0 & 0 & 0 & 0 \\
3 & 0 & 0 & 0 & 0 \\
6 & 3 & 1 & 0 & 0 \\
10 & 6 & 3 & 1 & 0 \\
15 & 10 & 6 & 3 & 1
\end{pmatrix}\begin{pmatrix}
0 \\ 3 \\ -9 \\ 9 \\ -3
\end{pmatrix} = \begin{pmatrix}
0 \\ 0 \\ 0 \\ 0 \\ 0
\end{pmatrix} = \mathbf 0 ,
$$
$$
(A - 1 I) \mathbf x_2
= \begin{pmatrix}
0 & 0 & 0 & 0 & 0 \\
3 & 0 & 0 & 0 & 0 \\
6 & 3 & 1 & 0 & 0 \\
10 & 6 & 3 & 1 & 0 \\
15 & 10 & 6 & 3 & 1
\end{pmatrix} \begin{pmatrix}
1 \\ -15 \\ 30 \\ -1 \\ -45
\end{pmatrix} = \begin{pmatrix}
0 \\ 3 \\ -9 \\ 9 \\ -3
\end{pmatrix} = \mathbf x_1 ,
$$
$$
(A - 2 I) \mathbf y_1
= \begin{pmatrix}
-1 & 0 & 0 & 0 & 0 \\
3 & -1 & 0 & 0 & 0 \\
6 & 3 & 0 & 0 & 0 \\
10 & 6 & 3 & 0 & 0 \\
15 & 10 & 6 & 3 & 0
\end{pmatrix} \begin{pmatrix}
0 \\ 0 \\ 0 \\ 0 \\ 9
\end{pmatrix} = \begin{pmatrix}
0 \\ 0 \\ 0 \\ 0 \\ 0
\end{pmatrix} = \mathbf 0 ,
$$
$$
(A - 2 I) \mathbf y_2 = \begin{pmatrix}
-1 & 0 & 0 & 0 & 0 \\
3 & -1 & 0 & 0 & 0 \\
6 & 3 & 0 & 0 & 0 \\
10 & 6 & 3 & 0 & 0 \\
15 & 10 & 6 & 3 & 0
\end{pmatrix} \begin{pmatrix}
0 \\ 0 \\ 0 \\ 3 \\ 0
\end{pmatrix} = \begin{pmatrix}
0 \\ 0 \\ 0 \\ 0 \\ 9
\end{pmatrix} = \mathbf y_1 ,
$$
$$
(A - 2 I) \mathbf y_3 = \begin{pmatrix}
-1 & 0 & 0 & 0 & 0 \\
3 & -1 & 0 & 0 & 0 \\
6 & 3 & 0 & 0 & 0 \\
10 & 6 & 3 & 0 & 0 \\
15 & 10 & 6 & 3 & 0
\end{pmatrix} \begin{pmatrix}
0 \\ 0 \\ 1 \\ -2 \\ 0
\end{pmatrix} = \begin{pmatrix}
0 \\ 0 \\ 0 \\ 3 \\ 0
\end{pmatrix} = \mathbf y_2 .
$$
This results in a basis for each of the generalized eigenspaces of
$$
A
$$
.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
Together the two chains of generalized eigenvectors span the space of all 5-dimensional column vectors.
$$
\left\{ \mathbf x_1, \mathbf x_2 \right\} =
\left\{
\begin{pmatrix} 0 \\ 3 \\ -9 \\ 9 \\ -3 \end{pmatrix},
\begin{pmatrix} 1 \\ -15 \\ 30 \\ -1 \\ -45 \end{pmatrix}
\right\},
\left\{ \mathbf y_1, \mathbf y_2, \mathbf y_3 \right\} =
\left\{
\begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 9 \end{pmatrix},
\begin{pmatrix} 0 \\ 0 \\ 0 \\ 3 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ 0 \\ 1 \\ -2 \\ 0 \end{pmatrix}
\right\}.
$$
An "almost diagonal" matrix
$$
J
$$
in Jordan normal form, similar to
$$
A
$$
is obtained as follows:
$$
M =
\begin{pmatrix} \mathbf x_1 & \mathbf x_2 & \mathbf y_1 & \mathbf y_2 & \mathbf y_3 \end{pmatrix} =
\begin{pmatrix}
0 & 1 & 0 &0& 0 \\
3 & -15 & 0 &0& 0 \\
-9 & 30 & 0 &0& 1 \\
9 & -1 & 0 &3& -2 \\
-3 & -45 & 9 &0& 0
\end{pmatrix},
$$
$$
J = \begin{pmatrix}
1 & 1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 2 & 1 & 0 \\
0 & 0 & 0 & 2 & 1 \\
0 & 0 & 0 & 0 & 2
\end{pmatrix},
$$
where
$$
M
$$
is a generalized modal matrix for
$$
A
$$
, the columns of
$$
M
$$
are a canonical basis for
$$
A
$$
, and
$$
AM = MJ
$$
.
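A quick numerical check of this decomposition (a sketch, not part of the source) verifies AM = MJ for the matrices above.
```python
import numpy as np

A = np.array([[ 1,  0, 0, 0, 0],
              [ 3,  1, 0, 0, 0],
              [ 6,  3, 2, 0, 0],
              [10,  6, 3, 2, 0],
              [15, 10, 6, 3, 2]], dtype=float)

M = np.array([[ 0,   1, 0, 0,  0],
              [ 3, -15, 0, 0,  0],
              [-9,  30, 0, 0,  1],
              [ 9,  -1, 0, 3, -2],
              [-3, -45, 9, 0,  0]], dtype=float)

J = np.array([[1, 1, 0, 0, 0],
              [0, 1, 0, 0, 0],
              [0, 0, 2, 1, 0],
              [0, 0, 0, 2, 1],
              [0, 0, 0, 0, 2]], dtype=float)

print(np.allclose(A @ M, M @ J))  # True: the columns of M form a canonical basis
```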
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
## Jordan chains
Definition: Let
$$
\mathbf x_m
$$
be a generalized eigenvector of rank m corresponding to the matrix
$$
A
$$
and the eigenvalue
$$
\lambda
$$
. The chain generated by
$$
\mathbf x_m
$$
is a set of vectors
$$
\left\{ \mathbf x_m, \mathbf x_{m-1}, \dots , \mathbf x_1 \right\}
$$
given by
where
$$
\mathbf x_1
$$
is always an ordinary eigenvector with a given eigenvalue
$$
\lambda
$$
. Thus, in general,
The vector
$$
\mathbf x_j
$$
, given by (), is a generalized eigenvector of rank j corresponding to the eigenvalue
$$
\lambda
$$
. A chain is a linearly independent set of vectors.
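A minimal sketch (not part of the source) of how a chain is generated in practice: starting from the lead vector x_m, repeatedly apply (A − λI) until the ordinary eigenvector is reached. The helper name is illustrative, and it assumes its input really is a generalized eigenvector for λ.
```python
import numpy as np

def jordan_chain(A, lam, x_m):
    """Return [x_m, x_{m-1}, ..., x_1]; assumes x_m is a generalized eigenvector for lam."""
    B = A - lam * np.eye(A.shape[0])
    chain = [np.asarray(x_m, dtype=float)]
    while not np.allclose(B @ chain[-1], 0) and len(chain) < A.shape[0]:
        chain.append(B @ chain[-1])          # x_{j-1} = (A - lam*I) x_j
    return chain

# The length-2 chain of Example 1, generated from the rank-2 vector (0, 1).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
for v in jordan_chain(A, 1.0, [0.0, 1.0]):
    print(v)    # [0. 1.] then [1. 0.]
```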
## Canonical basis
Definition: A set of n linearly independent generalized eigenvectors is a canonical basis if it is composed entirely of Jordan chains.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
A chain is a linearly independent set of vectors.
## Canonical basis
Definition: A set of n linearly independent generalized eigenvectors is a canonical basis if it is composed entirely of Jordan chains.
Thus, once we have determined that a generalized eigenvector of rank m is in a canonical basis, it follows that the m − 1 vectors
$$
\mathbf x_{m-1}, \mathbf x_{m-2}, \ldots , \mathbf x_1
$$
that are in the Jordan chain generated by
$$
\mathbf x_m
$$
are also in the canonical basis.
Let
$$
\lambda_i
$$
be an eigenvalue of
$$
A
$$
of algebraic multiplicity
$$
\mu_i
$$
. First, find the ranks (matrix ranks) of the matrices
$$
(A - \lambda_i I), (A - \lambda_i I)^2, \ldots , (A - \lambda_i I)^{m_i}
$$
.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
Let
$$
\lambda_i
$$
be an eigenvalue of
$$
A
$$
of algebraic multiplicity
$$
\mu_i
$$
. First, find the ranks (matrix ranks) of the matrices
$$
(A - \lambda_i I), (A - \lambda_i I)^2, \ldots , (A - \lambda_i I)^{m_i}
$$
. The integer
$$
m_i
$$
is determined to be the first integer for which
$$
(A - \lambda_i I)^{m_i}
$$
has rank
$$
n - \mu_i
$$
(n being the number of rows or columns of
$$
A
$$
, that is,
$$
A
$$
is n × n).
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
First, find the ranks (matrix ranks) of the matrices
$$
(A - \lambda_i I), (A - \lambda_i I)^2, \ldots , (A - \lambda_i I)^{m_i}
$$
. The integer
$$
m_i
$$
is determined to be the first integer for which
$$
(A - \lambda_i I)^{m_i}
$$
has rank
$$
n - \mu_i
$$
(n being the number of rows or columns of
$$
A
$$
, that is,
$$
A
$$
is n × n).
Now define
$$
\rho_k = \operatorname{rank}(A - \lambda_i I)^{k-1} - \operatorname{rank}(A - \lambda_i I)^k \qquad (k = 1, 2, \ldots , m_i).
$$
The variable
$$
\rho_k
$$
designates the number of linearly independent generalized eigenvectors of rank k corresponding to the eigenvalue
$$
\lambda_i
$$
that will appear in a canonical basis for
$$
A
$$
.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
The integer
$$
m_i
$$
is determined to be the first integer for which
$$
(A - \lambda_i I)^{m_i}
$$
has rank
$$
n - \mu_i
$$
(n being the number of rows or columns of
$$
A
$$
, that is,
$$
A
$$
is n × n).
Now define
$$
\rho_k = \operatorname{rank}(A - \lambda_i I)^{k-1} - \operatorname{rank}(A - \lambda_i I)^k \qquad (k = 1, 2, \ldots , m_i).
$$
The variable
$$
\rho_k
$$
designates the number of linearly independent generalized eigenvectors of rank k corresponding to the eigenvalue
$$
\lambda_i
$$
that will appear in a canonical basis for
$$
A
$$
. Note that
$$
\operatorname{rank}(A - \lambda_i I)^0 = \operatorname{rank}(I) = n
$$
.
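These rank computations are easy to automate. A minimal NumPy sketch (not part of the source; the function name is illustrative) that returns ρ₁, …, ρ_{m_i} for a given eigenvalue:
```python
import numpy as np

def rho(A, lam, m):
    """rho_k = rank((A - lam*I)^(k-1)) - rank((A - lam*I)^k) for k = 1, ..., m."""
    B = A - lam * np.eye(A.shape[0])
    ranks = [np.linalg.matrix_rank(np.linalg.matrix_power(B, k)) for k in range(m + 1)]
    return [ranks[k - 1] - ranks[k] for k in range(1, m + 1)]

# Example 1's matrix, lambda = 1: one generalized eigenvector each of rank 1 and rank 2.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
print(rho(A, 1.0, 2))   # [1, 1]
```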
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
Now define
$$
\rho_k = \operatorname{rank}(A - \lambda_i I)^{k-1} - \operatorname{rank}(A - \lambda_i I)^k \qquad (k = 1, 2, \ldots , m_i).
$$
The variable
$$
\rho_k
$$
designates the number of linearly independent generalized eigenvectors of rank k corresponding to the eigenvalue
$$
\lambda_i
$$
that will appear in a canonical basis for
$$
A
$$
. Note that
$$
\operatorname{rank}(A - \lambda_i I)^0 = \operatorname{rank}(I) = n
$$
.
## Computation of generalized eigenvectors
In the preceding sections we have seen techniques for obtaining the
$$
n
$$
linearly independent generalized eigenvectors of a canonical basis for the vector space
$$
V
$$
associated with an
$$
n \times n
$$
matrix
$$
A
$$
. These techniques can be combined into a procedure:
Solve the characteristic equation of
$$
A
$$
for eigenvalues
$$
\lambda_i
$$
and their algebraic multiplicities
$$
\mu_i
$$
;
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
## Computation of generalized eigenvectors
In the preceding sections we have seen techniques for obtaining the
$$
n
$$
linearly independent generalized eigenvectors of a canonical basis for the vector space
$$
V
$$
associated with an
$$
n \times n
$$
matrix
$$
A
$$
. These techniques can be combined into a procedure:
Solve the characteristic equation of
$$
A
$$
for eigenvalues
$$
\lambda_i
$$
and their algebraic multiplicities
$$
\mu_i
$$
;
For each
$$
\lambda_i :
$$
Determine
$$
n - \mu_i
$$
;
Determine
$$
m_i
$$
;
Determine
$$
\rho_k
$$
for
$$
(k = 1, \ldots , m_i)
$$
;
Determine each Jordan chain for
$$
\lambda_i
$$
;
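In practice this whole procedure is carried out by a computer algebra system. A minimal SymPy sketch (an illustration, not part of the source) that returns a generalized modal matrix and the Jordan form in one call:
```python
from sympy import Matrix

A = Matrix([[1, 1],
            [0, 1]])        # the matrix of Example 1

M, J = A.jordan_form()      # columns of M form a canonical basis; J is in Jordan normal form
print(J)                    # Matrix([[1, 1], [0, 1]])
print(A * M == M * J)       # True, i.e. J = M**-1 * A * M
```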
### Example 3
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
Determine each Jordan chain for
$$
\lambda_i
$$
;
### Example 3
The matrix
$$
A =
\begin{pmatrix}
5 & 1 & -2 & 4 \\
0 & 5 & 2 & 2 \\
0 & 0 & 5 & 3 \\
0 & 0 & 0 & 4
\end{pmatrix}
$$
has an eigenvalue
$$
\lambda_1 = 5
$$
of algebraic multiplicity
$$
\mu_1 = 3
$$
and an eigenvalue
$$
\lambda_2 = 4
$$
of algebraic multiplicity
$$
\mu_2 = 1
$$
. We also have
$$
n=4
$$
.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
The matrix
$$
A =
\begin{pmatrix}
5 & 1 & -2 & 4 \\
0 & 5 & 2 & 2 \\
0 & 0 & 5 & 3 \\
0 & 0 & 0 & 4
\end{pmatrix}
$$
has an eigenvalue
$$
\lambda_1 = 5
$$
of algebraic multiplicity
$$
\mu_1 = 3
$$
and an eigenvalue
$$
\lambda_2 = 4
$$
of algebraic multiplicity
$$
\mu_2 = 1
$$
. We also have
$$
n=4
$$
. For
$$
\lambda_1
$$
we have
$$
n - \mu_1 = 4 - 3 = 1
$$
.
$$
(A - 5I) =
\begin{pmatrix}
0 & 1 & -2 & 4 \\
0 & 0 & 2 & 2 \\
0 & 0 & 0 & 3 \\
0 & 0 & 0 & -1
\end{pmatrix},
\qquad \operatorname{rank}(A - 5I) = 3.
$$
$$
(A - 5I)^2 =
\begin{pmatrix}
0 & 0 & 2 & -8 \\
0 & 0 & 0 & 4 \\
0 & 0 & 0 & -3 \\
0 & 0 & 0 & 1
\end{pmatrix},
\qquad \operatorname{rank}(A - 5I)^2 = 2.
$$
$$
(A - 5I)^3 =
\begin{pmatrix}
0 & 0 & 0 & 14 \\
0 & 0 & 0 & -4 \\
0 & 0 & 0 & 3 \\
0 & 0 & 0 & -1
\end{pmatrix},
\qquad \operatorname{rank}(A - 5I)^3 = 1.
$$
The first integer
$$
m_1
$$
for which
$$
(A - 5I)^{m_1}
$$
has rank
$$
n - \mu_1 = 1
$$
is
$$
m_1 = 3
$$
.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
We also have
$$
n=4
$$
. For
$$
\lambda_1
$$
we have
$$
n - \mu_1 = 4 - 3 = 1
$$
.
$$
(A - 5I) =
\begin{pmatrix}
0 & 1 & -2 & 4 \\
0 & 0 & 2 & 2 \\
0 & 0 & 0 & 3 \\
0 & 0 & 0 & -1
\end{pmatrix},
\qquad \operatorname{rank}(A - 5I) = 3.
$$
$$
(A - 5I)^2 =
\begin{pmatrix}
0 & 0 & 2 & -8 \\
0 & 0 & 0 & 4 \\
0 & 0 & 0 & -3 \\
0 & 0 & 0 & 1
\end{pmatrix},
\qquad \operatorname{rank}(A - 5I)^2 = 2.
$$
$$
(A - 5I)^3 =
\begin{pmatrix}
0 & 0 & 0 & 14 \\
0 & 0 & 0 & -4 \\
0 & 0 & 0 & 3 \\
0 & 0 & 0 & -1
\end{pmatrix},
\qquad \operatorname{rank}(A - 5I)^3 = 1.
$$
The first integer
$$
m_1
$$
for which
$$
(A - 5I)^{m_1}
$$
has rank
$$
n - \mu_1 = 1
$$
is
$$
m_1 = 3
$$
.
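A quick numerical check of these ranks (a sketch, not part of the source):
```python
import numpy as np

A = np.array([[5, 1, -2, 4],
              [0, 5,  2, 2],
              [0, 0,  5, 3],
              [0, 0,  0, 4]], dtype=float)

B = A - 5 * np.eye(4)
for k in (1, 2, 3):
    print(k, np.linalg.matrix_rank(np.linalg.matrix_power(B, k)))
# Expected: 1 -> 3, 2 -> 2, 3 -> 1, so m_1 = 3 for lambda_1 = 5.
```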
We now define
$$
\rho_3 = \operatorname{rank}(A - 5I)^2 - \operatorname{rank}(A - 5I)^3 = 2 - 1 = 1 ,
$$
$$
\rho_2 = \operatorname{rank}(A - 5I)^1 - \operatorname{rank}(A - 5I)^2 = 3 - 2 = 1 ,
$$
$$
\rho_1 = \operatorname{rank}(A - 5I)^0 - \operatorname{rank}(A - 5I)^1 = 4 - 3 = 1 .
$$
Consequently, there will be three linearly independent generalized eigenvectors; one each of ranks 3, 2 and 1. Since
$$
\lambda_1
$$
corresponds to a single chain of three linearly independent generalized eigenvectors, we know that there is a generalized eigenvector
$$
\mathbf x_3
$$
of rank 3 corresponding to
$$
\lambda_1
$$
such that
but
Equations () and () represent linear systems that can be solved for
$$
\mathbf x_3
$$
.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
Let
$$
\mathbf x_3 =
\begin{pmatrix}
x_{31} \\
x_{32} \\
x_{33} \\
x_{34}
\end{pmatrix}.
$$
Then
$$
(A - 5I)^3 \mathbf x_3 =
\begin{pmatrix}
0 & 0 & 0 & 14 \\
0 & 0 & 0 & -4 \\
0 & 0 & 0 & 3 \\
0 & 0 & 0 & -1
\end{pmatrix}
\begin{pmatrix}
x_{31} \\
x_{32} \\
x_{33} \\
x_{34}
\end{pmatrix} =
\begin{pmatrix}
14 x_{34} \\
-4 x_{34} \\
3 x_{34} \\
- x_{34}
\end{pmatrix} =
\begin{pmatrix}
0 \\
0 \\
0 \\
0
\end{pmatrix}
$$
and
$$
(A - 5I)^2 \mathbf x_3 =
\begin{pmatrix}
0 & 0 & 2 & -8 \\
0 & 0 & 0 & 4 \\
0 & 0 & 0 & -3 \\
0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix}
x_{31} \\
x_{32} \\
x_{33} \\
x_{34}
\end{pmatrix} =
\begin{pmatrix}
2 x_{33} - 8 x_{34} \\
4 x_{34} \\
-3 x_{34} \\
x_{34}
\end{pmatrix} \ne
\begin{pmatrix}
0 \\
0 \\
0 \\
0
\end{pmatrix}.
$$
Thus, in order to satisfy the conditions () and (), we must have
$$
x_{34} = 0
$$
and
$$
x_{33} \ne 0
$$
.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
No restrictions are placed on
$$
x_{31}
$$
and
$$
x_{32}
$$
. By choosing
$$
x_{31} = x_{32} = x_{34} = 0, x_{33} = 1
$$
, we obtain
$$
\mathbf x_3 =
\begin{pmatrix}
0 \\
0 \\
1 \\
0
\end{pmatrix}
$$
as a generalized eigenvector of rank 3 corresponding to
$$
\lambda_1 = 5
$$
. Note that it is possible to obtain infinitely many other generalized eigenvectors of rank 3 by choosing different values of
$$
x_{31}
$$
,
$$
x_{32}
$$
and
$$
x_{33}
$$
, with
$$
x_{33} \ne 0
$$
. Our first choice, however, is the simplest.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
Note that it is possible to obtain infinitely many other generalized eigenvectors of rank 3 by choosing different values of
$$
x_{31}
$$
,
$$
x_{32}
$$
and
$$
x_{33}
$$
, with
$$
x_{33} \ne 0
$$
. Our first choice, however, is the simplest.
Now using equations (), we obtain
$$
\mathbf x_2
$$
and
$$
\mathbf x_1
$$
as generalized eigenvectors of rank 2 and 1, respectively, where
$$
\mathbf x_2 = (A - 5I) \mathbf x_3 =
\begin{pmatrix}
-2 \\
2 \\
0 \\
0
\end{pmatrix},
$$
and
$$
\mathbf x_1 = (A - 5I) \mathbf x_2 =
\begin{pmatrix}
2 \\
0 \\
0 \\
0
\end{pmatrix}.
$$
The simple eigenvalue
$$
\lambda_2 = 4
$$
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
Our first choice, however, is the simplest.
Now using equations (), we obtain
$$
\mathbf x_2
$$
and
$$
\mathbf x_1
$$
as generalized eigenvectors of rank 2 and 1, respectively, where
$$
\mathbf x_2 = (A - 5I) \mathbf x_3 =
\begin{pmatrix}
-2 \\
2 \\
0 \\
0
\end{pmatrix},
$$
and
$$
\mathbf x_1 = (A - 5I) \mathbf x_2 =
\begin{pmatrix}
2 \\
0 \\
0 \\
0
\end{pmatrix}.
$$
The simple eigenvalue
$$
\lambda_2 = 4
$$
can be dealt with using standard techniques and has an ordinary eigenvector
$$
\mathbf y_1 =
\begin{pmatrix}
-14 \\
4 \\
-3 \\
1
\end{pmatrix}.
$$
A canonical basis for
$$
A
$$
is
$$
\left\{ \mathbf x_3, \mathbf x_2, \mathbf x_1, \mathbf y_1 \right\} =
\left\{
\begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}
\begin{pmatrix} -2 \\ 2 \\ 0 \\ 0 \end{pmatrix}
\begin{pmatrix} 2 \\ 0 \\ 0 \\ 0 \end{pmatrix}
\begin{pmatrix} -14 \\ 4 \\ -3 \\ 1 \end{pmatrix}
\right\}.
$$
$$
\mathbf x_1, \mathbf x_2
$$
and
$$
\mathbf x_3
$$
are generalized eigenvectors associated with
$$
\lambda_1
$$
, while
$$
\mathbf y_1
$$
is the ordinary eigenvector associated with
$$
\lambda_2
$$
.
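The chain relations behind this canonical basis can be verified numerically; a minimal NumPy sketch (not part of the source):
```python
import numpy as np

A = np.array([[5, 1, -2, 4],
              [0, 5,  2, 2],
              [0, 0,  5, 3],
              [0, 0,  0, 4]], dtype=float)
B = A - 5 * np.eye(4)

x3 = np.array([0, 0, 1, 0], dtype=float)
x2 = B @ x3                              # (-2, 2, 0, 0)
x1 = B @ x2                              # ( 2, 0, 0, 0)
y1 = np.array([-14, 4, -3, 1], dtype=float)

print(np.allclose(B @ x1, 0))            # x1 is an ordinary eigenvector for lambda = 5
print(np.allclose(A @ y1, 4 * y1))       # y1 is an ordinary eigenvector for lambda = 4
```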
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
Now using equations (), we obtain
$$
\mathbf x_2
$$
and
$$
\mathbf x_1
$$
as generalized eigenvectors of rank 2 and 1, respectively, where
$$
\mathbf x_2 = (A - 5I) \mathbf x_3 =
\begin{pmatrix}
-2 \\
2 \\
0 \\
0
\end{pmatrix},
$$
and
$$
\mathbf x_1 = (A - 5I) \mathbf x_2 =
\begin{pmatrix}
2 \\
0 \\
0 \\
0
\end{pmatrix}.
$$
The simple eigenvalue
$$
\lambda_2 = 4
$$
can be dealt with using standard techniques and has an ordinary eigenvector
$$
\mathbf y_1 =
\begin{pmatrix}
-14 \\
4 \\
-3 \\
1
\end{pmatrix}.
$$
A canonical basis for
$$
A
$$
is
$$
\left\{ \mathbf x_3, \mathbf x_2, \mathbf x_1, \mathbf y_1 \right\} =
\left\{
\begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}
\begin{pmatrix} -2 \\ 2 \\ 0 \\ 0 \end{pmatrix}
\begin{pmatrix} 2 \\ 0 \\ 0 \\ 0 \end{pmatrix}
\begin{pmatrix} -14 \\ 4 \\ -3 \\ 1 \end{pmatrix}
\right\}.
$$
$$
\mathbf x_1, \mathbf x_2
$$
and
$$
\mathbf x_3
$$
are generalized eigenvectors associated with
$$
\lambda_1
$$
, while
$$
\mathbf y_1
$$
is the ordinary eigenvector associated with
$$
\lambda_2
$$
.
This is a fairly simple example.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
can be dealt with using standard techniques and has an ordinary eigenvector
$$
\mathbf y_1 =
\begin{pmatrix}
-14 \\
4 \\
-3 \\
1
\end{pmatrix}.
$$
A canonical basis for
$$
A
$$
is
$$
\left\{ \mathbf x_3, \mathbf x_2, \mathbf x_1, \mathbf y_1 \right\} =
\left\{
\begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}
\begin{pmatrix} -2 \\ 2 \\ 0 \\ 0 \end{pmatrix}
\begin{pmatrix} 2 \\ 0 \\ 0 \\ 0 \end{pmatrix}
\begin{pmatrix} -14 \\ 4 \\ -3 \\ 1 \end{pmatrix}
\right\}.
$$
$$
\mathbf x_1, \mathbf x_2
$$
and
$$
\mathbf x_3
$$
are generalized eigenvectors associated with
$$
\lambda_1
$$
, while
$$
\mathbf y_1
$$
is the ordinary eigenvector associated with
$$
\lambda_2
$$
.
This is a fairly simple example. In general, the numbers
$$
\rho_k
$$
of linearly independent generalized eigenvectors of rank
$$
k
$$
will not always be equal.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
This is a fairly simple example. In general, the numbers
$$
\rho_k
$$
of linearly independent generalized eigenvectors of rank
$$
k
$$
will not always be equal. That is, there may be several chains of different lengths corresponding to a particular eigenvalue.
## Generalized modal matrix
Let
$$
A
$$
be an n × n matrix. A generalized modal matrix
$$
M
$$
for
$$
A
$$
is an n × n matrix whose columns, considered as vectors, form a canonical basis for
$$
A
$$
and appear in
$$
M
$$
according to the following rules:
- All Jordan chains consisting of one vector (that is, one vector in length) appear in the first columns of
$$
M
$$
.
- All vectors of one chain appear together in adjacent columns of
$$
M
$$
.
- Each chain appears in
$$
M
$$
in order of increasing rank (that is, the generalized eigenvector of rank 1 appears before the generalized eigenvector of rank 2 of the same chain, which appears before the generalized eigenvector of rank 3 of the same chain, etc.).
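These ordering rules translate directly into code. A minimal sketch (not part of the source; the helper name is illustrative): sorting chains by length keeps each chain's vectors adjacent in increasing rank and, in particular, places all length-one chains first. The example vectors are the columns of the generalized modal matrix found in Example 4 further down.
```python
import numpy as np

def generalized_modal_matrix(chains):
    """Assemble M from Jordan chains, each given as [x_1, ..., x_m] in increasing rank."""
    ordered = sorted(chains, key=len)                   # length-one chains come first
    columns = [v for chain in ordered for v in chain]   # vectors of a chain stay adjacent
    return np.column_stack(columns)

y1 = np.array([2.0, 1.0, 0.0])          # length-one chain (ordinary eigenvector)
x1 = np.array([2.0, 3.0, -4.0])         # rank-1 vector of the length-two chain
x2 = np.array([0.0, 0.0, 1.0])          # rank-2 vector of the same chain
print(generalized_modal_matrix([[x1, x2], [y1]]))
```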
## Jordan normal form
Figure: An example of a matrix in Jordan normal form. The red blocks are called Jordan blocks.
Let
$$
V
$$
be an n-dimensional vector space; let
$$
\phi
$$
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
The red blocks are called Jordan blocks.
Let
$$
V
$$
be an n-dimensional vector space; let
$$
\phi
$$
be a linear map in L(V), the set of all linear maps from
$$
V
$$
into itself; and let
$$
A
$$
be the matrix representation of
$$
\phi
$$
with respect to some ordered basis.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
Let
$$
V
$$
be an n-dimensional vector space; let
$$
\phi
$$
be a linear map in L(V), the set of all linear maps from
$$
V
$$
into itself; and let
$$
A
$$
be the matrix representation of
$$
\phi
$$
with respect to some ordered basis. It can be shown that if the characteristic polynomial
$$
f(\lambda)
$$
of
$$
A
$$
factors into linear factors, so that
$$
f(\lambda)
$$
has the form
$$
f(\lambda) = \pm (\lambda - \lambda_1)^{\mu_1}(\lambda - \lambda_2)^{\mu_2} \cdots (\lambda - \lambda_r)^{\mu_r} ,
$$
where
$$
\lambda_1, \lambda_2, \ldots , \lambda_r
$$
are the distinct eigenvalues of
$$
A
$$
, then each
$$
\mu_i
$$
is the algebraic multiplicity of its corresponding eigenvalue
$$
\lambda_i
$$
and
$$
A
$$
is similar to a matrix
$$
J
$$
in Jordan normal form, where each
$$
\lambda_i
$$
appears
$$
\mu_i
$$
consecutive times on the diagonal, and the entry directly above each
$$
\lambda_i
$$
(that is, on the superdiagonal) is either 0 or 1: in each block the entry above the first occurrence of each
$$
\lambda_i
$$
is always 0 (except in the first block); all other entries on the superdiagonal are 1.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
be a linear map in L(V), the set of all linear maps from
$$
V
$$
into itself; and let
$$
A
$$
be the matrix representation of
$$
\phi
$$
with respect to some ordered basis. It can be shown that if the characteristic polynomial
$$
f(\lambda)
$$
of
$$
A
$$
factors into linear factors, so that
$$
f(\lambda)
$$
has the form
$$
f(\lambda) = \pm (\lambda - \lambda_1)^{\mu_1}(\lambda - \lambda_2)^{\mu_2} \cdots (\lambda - \lambda_r)^{\mu_r} ,
$$
where
$$
\lambda_1, \lambda_2, \ldots , \lambda_r
$$
are the distinct eigenvalues of
$$
A
$$
, then each
$$
\mu_i
$$
is the algebraic multiplicity of its corresponding eigenvalue
$$
\lambda_i
$$
and
$$
A
$$
is similar to a matrix
$$
J
$$
in Jordan normal form, where each
$$
\lambda_i
$$
appears
$$
\mu_i
$$
consecutive times on the diagonal, and the entry directly above each
$$
\lambda_i
$$
(that is, on the superdiagonal) is either 0 or 1: in each block the entry above the first occurrence of each
$$
\lambda_i
$$
is always 0 (except in the first block); all other entries on the superdiagonal are 1. All other entries (that is, off the diagonal and superdiagonal) are 0.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
It can be shown that if the characteristic polynomial
$$
f(\lambda)
$$
of
$$
A
$$
factors into linear factors, so that
$$
f(\lambda)
$$
has the form
$$
f(\lambda) = \pm (\lambda - \lambda_1)^{\mu_1}(\lambda - \lambda_2)^{\mu_2} \cdots (\lambda - \lambda_r)^{\mu_r} ,
$$
where
$$
\lambda_1, \lambda_2, \ldots , \lambda_r
$$
are the distinct eigenvalues of
$$
A
$$
, then each
$$
\mu_i
$$
is the algebraic multiplicity of its corresponding eigenvalue
$$
\lambda_i
$$
and
$$
A
$$
is similar to a matrix
$$
J
$$
in Jordan normal form, where each
$$
\lambda_i
$$
appears
$$
\mu_i
$$
consecutive times on the diagonal, and the entry directly above each
$$
\lambda_i
$$
(that is, on the superdiagonal) is either 0 or 1: in each block the entry above the first occurrence of each
$$
\lambda_i
$$
is always 0 (except in the first block); all other entries on the superdiagonal are 1. All other entries (that is, off the diagonal and superdiagonal) are 0. (But no ordering is imposed among the eigenvalues, or among the blocks for a given eigenvalue.)
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
All other entries (that is, off the diagonal and superdiagonal) are 0. (But no ordering is imposed among the eigenvalues, or among the blocks for a given eigenvalue.) The matrix
$$
J
$$
is as close as one can come to a diagonalization of
$$
A
$$
. If
$$
A
$$
is diagonalizable, then all entries above the diagonal are zero. Note that some textbooks have the ones on the subdiagonal, that is, immediately below the main diagonal instead of on the superdiagonal. The eigenvalues are still on the main diagonal.
Every n × n matrix
$$
A
$$
is similar to a matrix
$$
J
$$
in Jordan normal form, obtained through the similarity transformation
$$
J = M^{-1}AM
$$
, where
$$
M
$$
is a generalized modal matrix for
$$
A
$$
. (See Note above.)
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
Every n × n matrix
$$
A
$$
is similar to a matrix
$$
J
$$
in Jordan normal form, obtained through the similarity transformation
$$
J = M^{-1}AM
$$
, where
$$
M
$$
is a generalized modal matrix for
$$
A
$$
. (See Note above.)
### Example 4
Find a matrix in Jordan normal form that is similar to
$$
A =
\begin{pmatrix}
0 & 4 & 2 \\
-3 & 8 & 3 \\
4 & -8 & -2
\end{pmatrix}.
$$
Solution: The characteristic equation of
$$
A
$$
is
$$
(\lambda - 2)^3 = 0
$$
, hence,
$$
\lambda = 2
$$
is an eigenvalue of algebraic multiplicity three.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
(See Note above.)
### Example 4
Find a matrix in Jordan normal form that is similar to
$$
A =
\begin{pmatrix}
0 & 4 & 2 \\
-3 & 8 & 3 \\
4 & -8 & -2
\end{pmatrix}.
$$
Solution: The characteristic equation of
$$
A
$$
is
$$
(\lambda - 2)^3 = 0
$$
, hence,
$$
\lambda = 2
$$
is an eigenvalue of algebraic multiplicity three. Following the procedures of the previous sections, we find that
$$
\operatorname{rank}(A - 2I) = 1
$$
and
$$
\operatorname{rank}(A - 2I)^2 = 0 = n - \mu .
$$
Thus,
$$
\rho_2 = 1
$$
and
$$
\rho_1 = 2
$$
, which implies that a canonical basis for
$$
A
$$
will contain one linearly independent generalized eigenvector of rank 2 and two linearly independent generalized eigenvectors of rank 1, or equivalently, one chain of two vectors
$$
\left\{ \mathbf x_2, \mathbf x_1 \right\}
$$
and one chain of one vector
$$
\left\{ \mathbf y_1 \right\}
$$
.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
### Example 4
Find a matrix in Jordan normal form that is similar to
$$
A =
\begin{pmatrix}
0 & 4 & 2 \\
-3 & 8 & 3 \\
4 & -8 & -2
\end{pmatrix}.
$$
Solution: The characteristic equation of
$$
A
$$
is
$$
(\lambda - 2)^3 = 0
$$
, hence,
$$
\lambda = 2
$$
is an eigenvalue of algebraic multiplicity three. Following the procedures of the previous sections, we find that
$$
\operatorname{rank}(A - 2I) = 1
$$
and
$$
\operatorname{rank}(A - 2I)^2 = 0 = n - \mu .
$$
Thus,
$$
\rho_2 = 1
$$
and
$$
\rho_1 = 2
$$
, which implies that a canonical basis for
$$
A
$$
will contain one linearly independent generalized eigenvector of rank 2 and two linearly independent generalized eigenvectors of rank 1, or equivalently, one chain of two vectors
$$
\left\{ \mathbf x_2, \mathbf x_1 \right\}
$$
and one chain of one vector
$$
\left\{ \mathbf y_1 \right\}
$$
. Designating
$$
M = \begin{pmatrix} \mathbf y_1 & \mathbf x_1 & \mathbf x_2 \end{pmatrix}
$$
, we find that
$$
M =
\begin{pmatrix}
2 & 2 & 0 \\
1 & 3 & 0 \\
0 & -4 & 1
\end{pmatrix},
$$
and
$$
J =
\begin{pmatrix}
2 & 0 & 0 \\
0 & 2 & 1 \\
0 & 0 & 2
\end{pmatrix},
$$
where
$$
M
$$
is a generalized modal matrix for
$$
A
$$
, the columns of
$$
M
$$
are a canonical basis for
$$
A
$$
, and
$$
AM = MJ
$$
.
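A numerical check of this result (a sketch, not part of the source):
```python
import numpy as np

A = np.array([[ 0,  4,  2],
              [-3,  8,  3],
              [ 4, -8, -2]], dtype=float)

M = np.array([[2,  2, 0],
              [1,  3, 0],
              [0, -4, 1]], dtype=float)

J = np.array([[2, 0, 0],
              [0, 2, 1],
              [0, 0, 2]], dtype=float)

print(np.allclose(A @ M, M @ J))                    # True
print(np.allclose(M @ J @ np.linalg.inv(M), A))     # equivalently, A = M J M^-1
```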
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
Following the procedures of the previous sections, we find that
$$
\operatorname{rank}(A - 2I) = 1
$$
and
$$
\operatorname{rank}(A - 2I)^2 = 0 = n - \mu .
$$
Thus,
$$
\rho_2 = 1
$$
and
$$
\rho_1 = 2
$$
, which implies that a canonical basis for
$$
A
$$
will contain one linearly independent generalized eigenvector of rank 2 and two linearly independent generalized eigenvectors of rank 1, or equivalently, one chain of two vectors
$$
\left\{ \mathbf x_2, \mathbf x_1 \right\}
$$
and one chain of one vector
$$
\left\{ \mathbf y_1 \right\}
$$
. Designating
$$
M = \begin{pmatrix} \mathbf y_1 & \mathbf x_1 & \mathbf x_2 \end{pmatrix}
$$
, we find that
$$
M =
\begin{pmatrix}
2 & 2 & 0 \\
1 & 3 & 0 \\
0 & -4 & 1
\end{pmatrix},
$$
and
$$
J =
\begin{pmatrix}
2 & 0 & 0 \\
0 & 2 & 1 \\
0 & 0 & 2
\end{pmatrix},
$$
where
$$
M
$$
is a generalized modal matrix for
$$
A
$$
, the columns of
$$
M
$$
are a canonical basis for
$$
A
$$
, and
$$
AM = MJ
$$
. Note that since generalized eigenvectors themselves are not unique, and since some of the columns of both
$$
M
$$
and
$$
J
$$
may be interchanged, it follows that both
$$
M
$$
and
$$
J
$$
are not unique.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
Designating
$$
M = \begin{pmatrix} \mathbf y_1 & \mathbf x_1 & \mathbf x_2 \end{pmatrix}
$$
, we find that
$$
M =
\begin{pmatrix}
2 & 2 & 0 \\
1 & 3 & 0 \\
0 & -4 & 1
\end{pmatrix},
$$
and
$$
J =
\begin{pmatrix}
2 & 0 & 0 \\
0 & 2 & 1 \\
0 & 0 & 2
\end{pmatrix},
$$
where
$$
M
$$
is a generalized modal matrix for
$$
A
$$
, the columns of
$$
M
$$
are a canonical basis for
$$
A
$$
, and
$$
AM = MJ
$$
. Note that since generalized eigenvectors themselves are not unique, and since some of the columns of both
$$
M
$$
and
$$
J
$$
may be interchanged, it follows that both
$$
M
$$
and
$$
J
$$
are not unique.
### Example 5
In Example 3, we found a canonical basis of linearly independent generalized eigenvectors for a matrix
$$
A
$$
.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
Note that since generalized eigenvectors themselves are not unique, and since some of the columns of both
$$
M
$$
and
$$
J
$$
may be interchanged, it follows that both
$$
M
$$
and
$$
J
$$
are not unique.
### Example 5
In Example 3, we found a canonical basis of linearly independent generalized eigenvectors for a matrix
$$
A
$$
. A generalized modal matrix for
$$
A
$$
is
$$
M =
\begin{pmatrix} \mathbf y_1 & \mathbf x_1 & \mathbf x_2 & \mathbf x_3 \end{pmatrix} =
\begin{pmatrix}
-14 & 2 & -2 & 0 \\
4 & 0 & 2 & 0 \\
-3 & 0 & 0 & 1 \\
1 & 0 & 0 & 0
\end{pmatrix}.
$$
A matrix in Jordan normal form, similar to
$$
A
$$
is
$$
J = \begin{pmatrix}
4 & 0 & 0 & 0 \\
0 & 5 & 1 & 0 \\
0 & 0 & 5 & 1 \\
0 & 0 & 0 & 5
\end{pmatrix},
$$
so that
$$
AM = MJ
$$
.
## Applications
### Matrix functions
Three of the most fundamental operations which can be performed on square matrices are matrix addition, multiplication by a scalar, and matrix multiplication.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
## Applications
### Matrix functions
Three of the most fundamental operations which can be performed on square matrices are matrix addition, multiplication by a scalar, and matrix multiplication. These are exactly those operations necessary for defining a polynomial function of an n × n matrix
$$
A
$$
. If we recall from basic calculus that many functions can be written as a Maclaurin series, then we can define more general functions of matrices quite easily. If
$$
A
$$
is diagonalizable, that is
$$
D = M^{-1}AM ,
$$
with
$$
D =
\begin{pmatrix}
\lambda_1 & 0 & \cdots & 0 \\
0 & \lambda_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \lambda_n
\end{pmatrix},
$$
then
$$
D^k =
\begin{pmatrix}
\lambda_1^k & 0 & \cdots & 0 \\
0 & \lambda_2^k & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \lambda_n^k
\end{pmatrix}
$$
and the evaluation of the Maclaurin series for functions of
$$
A
$$
is greatly simplified.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
If we recall from basic calculus that many functions can be written as a Maclaurin series, then we can define more general functions of matrices quite easily. If
$$
A
$$
is diagonalizable, that is
$$
D = M^{-1}AM ,
$$
with
$$
D =
\begin{pmatrix}
\lambda_1 & 0 & \cdots & 0 \\
0 & \lambda_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \lambda_n
\end{pmatrix},
$$
then
$$
D^k =
\begin{pmatrix}
\lambda_1^k & 0 & \cdots & 0 \\
0 & \lambda_2^k & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \lambda_n^k
\end{pmatrix}
$$
and the evaluation of the Maclaurin series for functions of
$$
A
$$
is greatly simplified. For example, to obtain any power k of
$$
A
$$
, we need only compute
$$
D^k
$$
, premultiply
$$
D^k
$$
by
$$
M
$$
, and postmultiply the result by
$$
M^{-1}
$$
.
Using generalized eigenvectors, we can obtain the Jordan normal form for
$$
A
$$
and these results can be generalized to a straightforward method for computing functions of nondiagonalizable matrices. (See Matrix function#Jordan decomposition.)
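As a small concrete instance (not from the source), the power of a single 2 × 2 Jordan block has the closed form J^k = [[λ^k, kλ^(k−1)], [0, λ^k]], which is exactly what the Jordan-decomposition approach produces blockwise; a quick NumPy check:
```python
import numpy as np

# Power of a single 2x2 Jordan block: J^k = [[l^k, k*l^(k-1)], [0, l^k]].
lam, k = 3.0, 5
J = np.array([[lam, 1.0],
              [0.0, lam]])
closed_form = np.array([[lam**k, k * lam**(k - 1)],
                        [0.0,    lam**k]])
print(np.allclose(np.linalg.matrix_power(J, k), closed_form))   # True
```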
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
For example, to obtain any power k of
$$
A
$$
, we need only compute
$$
D^k
$$
, premultiply
$$
D^k
$$
by
$$
M
$$
, and postmultiply the result by
$$
M^{-1}
$$
.
Using generalized eigenvectors, we can obtain the Jordan normal form for
$$
A
$$
and these results can be generalized to a straightforward method for computing functions of nondiagonalizable matrices. (See Matrix function#Jordan decomposition.)
### Differential equations
Consider the problem of solving the system of linear ordinary differential equations
where
$$
\mathbf x =
\begin{pmatrix}
x_1(t) \\
x_2(t) \\
\vdots \\
x_n(t)
\end{pmatrix}, \quad
\mathbf x' =
\begin{pmatrix}
x_1'(t) \\
x_2'(t) \\
\vdots \\
x_n'(t)
\end{pmatrix},
$$
and
$$
A = (a_{ij}) .
$$
If the matrix
$$
A
$$
is a diagonal matrix so that
$$
a_{ij} = 0
$$
for
$$
i \ne j
$$
, then the system () reduces to a system of n equations which take the form
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
### Differential equations
Consider the problem of solving the system of linear ordinary differential equations
where
$$
\mathbf x =
\begin{pmatrix}
x_1(t) \\
x_2(t) \\
\vdots \\
x_n(t)
\end{pmatrix}, \quad
\mathbf x' =
\begin{pmatrix}
x_1'(t) \\
x_2'(t) \\
\vdots \\
x_n'(t)
\end{pmatrix},
$$
and
$$
A = (a_{ij}) .
$$
If the matrix
$$
A
$$
is a diagonal matrix so that
$$
a_{ij} = 0
$$
for
$$
i \ne j
$$
, then the system () reduces to a system of n equations which take the form
In this case, the general solution is given by
$$
x_1 = k_1 e^{a_{11}t}
$$
$$
x_2 = k_2 e^{a_{22}t}
$$
$$
\vdots
$$
$$
x_n = k_n e^{a_{nn}t} .
$$
In the general case, we try to diagonalize
$$
A
$$
and reduce the system () to a system like () as follows.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
$$
a_{ij} = 0
$$
for
$$
i \ne j
$$
, then the system () reduces to a system of n equations which take the form
In this case, the general solution is given by
$$
x_1 = k_1 e^{a_{11}t}
$$
$$
x_2 = k_2 e^{a_{22}t}
$$
$$
\vdots
$$
$$
x_n = k_n e^{a_{nn}t} .
$$
In the general case, we try to diagonalize
$$
A
$$
and reduce the system () to a system like () as follows. If
$$
A
$$
is diagonalizable, we have
$$
D = M^{-1}AM
$$
, where
$$
M
$$
is a modal matrix for
$$
A
$$
.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
In this case, the general solution is given by
$$
x_1 = k_1 e^{a_{11}t}
$$
$$
x_2 = k_2 e^{a_{22}t}
$$
$$
\vdots
$$
$$
x_n = k_n e^{a_{nn}t} .
$$
In the general case, we try to diagonalize
$$
A
$$
and reduce the system () to a system like () as follows. If
$$
A
$$
is diagonalizable, we have
$$
D = M^{-1}AM
$$
, where
$$
M
$$
is a modal matrix for
$$
A
$$
. Substituting
$$
A = MDM^{-1}
$$
, equation () takes the form
$$
M^{-1} \mathbf x' = D(M^{-1} \mathbf x)
$$
, or
where
The solution of () is
$$
y_1 = k_1 e^{\lambda_1 t}
$$
$$
y_2 = k_2 e^{\lambda_2 t}
$$
$$
\vdots
$$
$$
y_n = k_n e^{\lambda_n t} .
$$
The solution
$$
\mathbf x
$$
of () is then obtained using the relation ().
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
If
$$
A
$$
is diagonalizable, we have
$$
D = M^{-1}AM
$$
, where
$$
M
$$
is a modal matrix for
$$
A
$$
. Substituting
$$
A = MDM^{-1}
$$
, equation () takes the form
$$
M^{-1} \mathbf x' = D(M^{-1} \mathbf x)
$$
, or
where
The solution of () is
$$
y_1 = k_1 e^{\lambda_1 t}
$$
$$
y_2 = k_2 e^{\lambda_2 t}
$$
$$
\vdots
$$
$$
y_n = k_n e^{\lambda_n t} .
$$
The solution
$$
\mathbf x
$$
of () is then obtained using the relation ().
On the other hand, if
$$
A
$$
is not diagonalizable, we choose
$$
M
$$
to be a generalized modal matrix for
$$
A
$$
, such that
$$
J = M^{-1}AM
$$
is the Jordan normal form of
$$
A
$$
.
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
Substituting
$$
A = MDM^{-1}
$$
, equation () takes the form
$$
M^{-1} \mathbf x' = D(M^{-1} \mathbf x)
$$
, or
where
The solution of () is
$$
y_1 = k_1 e^{\lambda_1 t}
$$
$$
y_2 = k_2 e^{\lambda_2 t}
$$
$$
\vdots
$$
$$
y_n = k_n e^{\lambda_n t} .
$$
The solution
$$
\mathbf x
$$
of () is then obtained using the relation ().
On the other hand, if
$$
A
$$
is not diagonalizable, we choose
$$
M
$$
to be a generalized modal matrix for
$$
A
$$
, such that
$$
J = M^{-1}AM
$$
is the Jordan normal form of
$$
A
$$
. The system
$$
\mathbf y' = J \mathbf y
$$
has the form
where the
$$
\lambda_i
$$
are the eigenvalues from the main diagonal of
$$
J
$$
and the
$$
\epsilon_i
$$
are the ones and zeros from the superdiagonal of
$$
J
$$
. The system () is often more easily solved than ().
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
The system
$$
\mathbf y' = J \mathbf y
$$
has the form
where the
$$
\lambda_i
$$
are the eigenvalues from the main diagonal of
$$
J
$$
and the
$$
\epsilon_i
$$
are the ones and zeros from the superdiagonal of
$$
J
$$
. The system () is often more easily solved than (). We may solve the last equation in () for
$$
y_n
$$
, obtaining
$$
y_n = k_n e^{\lambda_n t}
$$
. We then substitute this solution for
$$
y_n
$$
into the next to last equation in () and solve for
$$
y_{n-1}
$$
. Continuing this procedure, we work through () from the last equation to the first, solving the entire system for
$$
\mathbf y
$$
. The solution
$$
\mathbf x
$$
is then obtained using the relation ().
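For a single 2 × 2 Jordan block, the back-substitution described above gives y₂ = k₂e^{λt} and y₁ = (k₁ + k₂t)e^{λt}; a quick numerical check against the matrix exponential (a sketch, not part of the source):
```python
import numpy as np
from scipy.linalg import expm

lam, t = 2.0, 0.3
J = np.array([[lam, 1.0],
              [0.0, lam]])
k1, k2 = 1.5, -0.7                       # constants fixed by the initial condition y(0) = (k1, k2)

y_closed = np.array([(k1 + k2 * t) * np.exp(lam * t),
                     k2 * np.exp(lam * t)])
y_expm = expm(J * t) @ np.array([k1, k2])
print(np.allclose(y_closed, y_expm))     # True
```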
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
Continuing this procedure, we work through () from the last equation to the first, solving the entire system for
$$
\mathbf y
$$
. The solution
$$
\mathbf x
$$
is then obtained using the relation ().
Lemma:
Given the following chain of generalized eigenvectors of length
$$
r,
$$
$$
X_1 = v_1e^{\lambda t}
$$
$$
X_2 = (tv_1+v_2)e^{\lambda t}
$$
$$
X_3 = \left(\frac{t^2}{2}v_1+tv_2+v_3\right)e^{\lambda t}
$$
$$
\vdots
$$
$$
X_r = \left(\frac{t^{r-1}}{(r-1)!}v_1+...+\frac{t^2}{2}v_{r-2}+tv_{r-1}+v_r\right)e^{\lambda t}
$$
,
these functions solve the system of equations,
$$
X' = AX.
$$
Proof:
Define
$$
v_0=0
$$
$$
X_j(t)= e^{\lambda t}\sum_{i = 1}^j\frac{t^{j-i}}{(j-i)!} v_i.
$$
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
Lemma:
Given the following chain of generalized eigenvectors of length
$$
r,
$$
$$
X_1 = v_1e^{\lambda t}
$$
$$
X_2 = (tv_1+v_2)e^{\lambda t}
$$
$$
X_3 = \left(\frac{t^2}{2}v_1+tv_2+v_3\right)e^{\lambda t}
$$
$$
\vdots
$$
$$
X_r = \left(\frac{t^{r-1}}{(r-1)!}v_1+...+\frac{t^2}{2}v_{r-2}+tv_{r-1}+v_r\right)e^{\lambda t}
$$
,
these functions solve the system of equations,
$$
X' = AX.
$$
Proof:
Define
$$
v_0=0
$$
$$
X_j(t)= e^{\lambda t}\sum_{i = 1}^j\frac{t^{j-i}}{(j-i)!} v_i.
$$
Then, as
$$
{t^{0}}=1
$$
and
$$
1'=0
$$
,
$$
X'_j(t)=e^{\lambda t}\sum_{i = 1}^{j-1}\frac{t^{j-i-1}}{(j-i-1)!}v_i+e^{\lambda t}\lambda\sum_{i = 1}^j\frac{t^{j-i}}{(j-i)!}v_i
$$
.
On the other hand we have,
$$
v_0=0
$$
and so
$$
AX_j(t)=e^{\lambda t}\sum_{i = 1}^j\frac{t^{j-i}}{(j-i)!}Av_i
$$
$$
= e^{\lambda t}\sum_{i = 1}^j\frac{t^{j-i}}{(j-i)!}(v_{i-1}+\lambda v_i)
$$
$$
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
e^{\lambda t}\sum_{i = 1}^j\frac{t^{j-i}}{(j-i)!} v_i.
$$
Then, as
$$
{t^{0}}=1
$$
and
$$
1'=0
$$
,
$$
X'_j(t)=e^{\lambda t}\sum_{i = 1}^{j-1}\frac{t^{j-i-1}}{(j-i-1)!}v_i+e^{\lambda t}\lambda\sum_{i = 1}^j\frac{t^{j-i}}{(j-i)!}v_i
$$
.
On the other hand we have,
$$
v_0=0
$$
and so
$$
AX_j(t)=e^{\lambda t}\sum_{i = 1}^j\frac{t^{j-i}}{(j-i)!}Av_i
$$
$$
= e^{\lambda t}\sum_{i = 1}^j\frac{t^{j-i}}{(j-i)!}(v_{i-1}+\lambda v_i)
$$
$$
= e^{\lambda t}\sum_{i = 2}^j\frac{t^{j-i}}{(j-i)!}v_{i-1}+e^{\lambda t}\lambda\sum_{i = 1}^j\frac{t^{j-i}}{(j-i)!}v_i
$$
$$
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
v_i.
$$
Then, as
$$
{t^{0}}=1
$$
and
$$
1'=0
$$
,
$$
X'_j(t)=e^{\lambda t}\sum_{i = 1}^{j-1}\frac{t^{j-i-1}}{(j-i-1)!}v_i+e^{\lambda t}\lambda\sum_{i = 1}^j\frac{t^{j-i}}{(j-i)!}v_i
$$
.
On the other hand we have,
$$
v_0=0
$$
and so
$$
AX_j(t)=e^{\lambda t}\sum_{i = 1}^j\frac{t^{j-i}}{(j-i)!}Av_i
$$
$$
= e^{\lambda t}\sum_{i = 1}^j\frac{t^{j-i}}{(j-i)!}(v_{i-1}+\lambda v_i)
$$
$$
= e^{\lambda t}\sum_{i = 2}^j\frac{t^{j-i}}{(j-i)!}v_{i-1}+e^{\lambda t}\lambda\sum_{i = 1}^j\frac{t^{j-i}}{(j-i)!}v_i
$$
$$
= e^{\lambda t}\sum_{i = 1}^{j-1}\frac{t^{j-i-1}}{(j-i-1)!}v_{i}+e^{\lambda t}\lambda\sum_{i = 1}^j\frac{t^{j-i}}{(j-i)!}v_i
$$
$$
=X'_j(t)
$$
as required.
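A symbolic spot-check of the lemma (not part of the source) for a chain of length two, using the matrix of Example 1:
```python
import sympy as sp

t = sp.symbols('t')
lam = 1
A = sp.Matrix([[1, 1],
               [0, 1]])        # the matrix of Example 1
v1 = sp.Matrix([1, 0])         # ordinary eigenvector
v2 = sp.Matrix([0, 1])         # rank-2 generalized eigenvector, (A - I) v2 = v1

X2 = (t * v1 + v2) * sp.exp(lam * t)
print(sp.simplify(sp.diff(X2, t) - A * X2))   # Matrix([[0], [0]]): X2' = A X2
```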
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
= e^{\lambda t}\sum_{i = 2}^j\frac{t^{j-i}}{(j-i)!}v_{i-1}+e^{\lambda t}\lambda\sum_{i = 1}^j\frac{t^{j-i}}{(j-i)!}v_i
$$
$$
= e^{\lambda t}\sum_{i = 1}^{j-1}\frac{t^{j-i-1}}{(j-i-1)!}v_{i}+e^{\lambda t}\lambda\sum_{i = 1}^j\frac{t^{j-i}}{(j-i)!}v_i
$$
$$
=X'_j(t)
$$
as required.
## Notes
## References
-
-
-
-
-
-
-
-
-
-
-
-
Category:Linear algebra
Category:Matrix theory
|
https://en.wikipedia.org/wiki/Generalized_eigenvector
|
In computer programming, an inline assembler is a feature of some compilers that allows low-level code written in assembly language to be embedded within a program, among code that otherwise has been compiled from a higher-level language such as C or Ada.
## Motivation and alternatives
The embedding of assembly language code is usually done for one of these reasons:
- Optimization: Programmers can use assembly language code to implement the most performance-sensitive parts of their program's algorithms, code that is apt to be more efficient than what might otherwise be generated by the compiler.
- Access to processor-specific instructions: Most processors offer special instructions, such as Compare and Swap and Test and Set instructions which may be used to construct semaphores or other synchronization and locking primitives. Nearly every modern processor has these or similar instructions, as they are necessary to implement multitasking.
Examples of specialized instructions are found in the SPARC VIS, Intel MMX and SSE, and Motorola Altivec instruction sets.
- Access to special calling conventions not yet supported by the compiler.
- System calls and interrupts: High-level languages rarely have a direct facility to make arbitrary system calls, so assembly code is used. Direct interrupts are even more rarely supplied.
- To emit special directives for the linker or assembler, for example to change sectioning, macros, or to make symbol aliases.
|
https://en.wikipedia.org/wiki/Inline_assembler
|