In 1963, Maria Goeppert-Mayer became the first woman to receive the Nobel Prize in physics for theoretical work. She earned the prize for her work on the structure of the atomic nucleus.
Maria Goeppert-Mayer worked within the inner circle of nuclear physicists who developed the atomic fission bomb during World War II, contributing to the secret uranium-separation research in New York and visiting the laboratory at Los Alamos, New Mexico. Through her theoretical research with nuclear physicists Enrico Fermi and Edward Teller, Goeppert-Mayer developed a model for the structure of atomic nuclei. In 1963, for her work on nuclear structure, she became the first woman awarded the Nobel Prize for theoretical physics, sharing the prize with J. Hans D. Jensen, a German physicist. The two scientists, who had reached the same conclusions independently, later collaborated on a book explaining their model.
An only child, Goeppert-Mayer was born Maria Göppert on July 28, 1906, in the German city of Kattowitz in Upper Silesia (now Katowice, Poland). When she was four, her father, Dr. Friedrich Göppert, was appointed professor of pediatrics at the University of Göttingen, Germany. Situated in an old medieval town, the university had historically been respected for its mathematics department, but was on its way to becoming the European center for yet another discipline—theoretical physics. Maria's mother, Maria Wolff Göppert, was a former teacher of piano and French who delighted in entertaining faculty members with lavish dinner parties and providing a home filled with flowers and music for her only daughter.
Dr. Göppert was a most progressive pediatrician for the times, as he started a well-baby clinic and believed that all children, male or female, should be adventuresome risk-takers. His philosophy on child rearing had a profound effect on his daughter, who idolized her father and treasured her long country walks with him, collecting fossils and learning the names of plants. Because the Göpperts came from several generations of university professors, it was unstated but expected that Maria would continue the family tradition.
When Maria was just eight, World War I interrupted the family's rather idyllic university life with harsh wartime deprivation. After the war, life was still hard because of postwar inflation and food shortages. Maria Göppert attended a small private school run by female suffragists to ready young girls for university studies. The school went bankrupt when Göppert had completed only two of the customary three years of preparatory school. Nonetheless, she took and passed her university entrance exam.
The University of Göttingen that Göppert entered in 1924 was in the process of becoming a center for the study of quantum mechanics—the mathematical study of the behavior of atomic particles. Many well-known physicists visited Göttingen, including Niels Bohr, a Danish physicist who developed a model of the atom. Noted physicist Max Born joined the Göttingen faculty and became a close friend of Göppert's family. Göppert, now enrolled as a student, began attending Max Born's physics seminars and decided to study physics instead of mathematics, with an eye toward teaching. Her prospects of being taken seriously were slim: there was only one female professor at Göttingen, and she taught for "love," receiving no salary.
In 1927 Göppert's father died. She continued her studies, determined to finish her doctorate in physics. She spent a semester in Cambridge, England, where she learned English and met Ernest Rutherford, the discoverer of the atomic nucleus. Upon her return to Göttingen, her mother began taking student boarders into their grand house. One was an American physical chemistry student from California, Joseph E. Mayer, studying in Göttingen on a grant. Over the next several years, Maria and Joe became close, going hiking, skiing, swimming and playing tennis. When they married, in 1930, Maria adopted the hyphenated form of their names. (When they later moved to the United States, the spelling of her family name was anglicized to "Goeppert.") Soon after her marriage she completed her doctorate with a thesis entitled "On Elemental Processes with Two Quantum Jumps."
After Joseph Mayer finished his studies, the young scientists moved to the United States, where he had been offered a job at Johns Hopkins University in Baltimore, Maryland. Goeppert-Mayer found it difficult to adjust. She was not considered eligible for an appointment at the same university as her husband, but rather was considered a volunteer associate, what her biographer Joan Dash calls a "fringe benefit" wife. She had a tiny office, little pay, and no significant official responsibilities. Nonetheless, her position did allow her to conduct research on energy transfer on solid surfaces with physicist Karl Herzfeld, and she collaborated with him and her husband on several papers. Later, she turned her attention to the quantum mechanical electronic levels of benzene and of some dyes. During summers she returned to Göttingen, where she wrote several papers with Max Born on beta ray decay—the emissions of high-speed electrons that are given off by radioactive nuclei.
These summers of physics research were cut off as Germany was again preparing for war. Max Born left Germany for the safety of England. Returning to the states, Goeppert-Mayer applied for her American citizenship and she and Joe started a family. They would have two children, Marianne and Peter. Soon she became friends with Edward Teller, a Hungarian refugee who would play a key role in the development of the hydrogen bomb.
When Joe unexpectedly lost his position at Johns Hopkins, he and Goeppert-Mayer left for Columbia University in New York. There they wrote a book together, Statistical Mechanics, which became a classic in the field. As Goeppert-Mayer had no teaching credentials to place on the title page, their friend Harold Urey, a Nobel Prize-winning chemist, arranged for her to give some lectures so that she could be listed as "lecturer in chemistry at Columbia."
In New York, Goeppert-Mayer made the acquaintance of Enrico Fermi, winner of the Nobel Prize for physics for his work on radioactivity. Fermi had recently emigrated from Italy and was at Columbia on a grant researching nuclear fission. Nuclear fission—splitting an atom in a way that released energy—had been discovered by the German chemists Otto Hahn and Fritz Strassmann, working with the Austrian-born physicist Lise Meitner. The scientists had bombarded uranium nuclei with neutrons, resulting in the release of energy. Because Germany was building its arsenal for war, Fermi had joined other scientists in convincing the United States government that it must institute a nuclear program of its own so as not to be at Hitler's mercy should Germany develop a nuclear weapon. Goeppert-Mayer joined Fermi's team of researchers, although once again the arrangement was informal and without pay.
In 1941, the United States formally entered World War II. Goeppert-Mayer was offered her first real teaching job, a half-time position at Sarah Lawrence College in Bronxville, New York. A few months later she was invited by Harold Urey to join a research group he was assembling at Columbia University to separate uranium-235, which is capable of nuclear fission, from the more abundant isotope uranium-238, which is not. The group, which worked in secret, was given the code name SAM—Substitute Alloy Metals. The uranium was to be the fuel for a nuclear fission bomb.
Like many scientists, Goeppert-Mayer had mixed feelings about working on the development of an atomic bomb. (Her friend Max Born, for instance, had refused to work on the project.) She had to keep her work a secret from her husband, even though he himself was working on defense-related work, often in the Pacific. Moreover, while she loved her adopted country, she had many friends and relatives in Germany. To her relief, the war in Europe was over early in 1945, before the bomb was ready. However, at Los Alamos Laboratory in New Mexico the bomb was still being developed. At Edward Teller's request, Goeppert-Mayer made several visits to Los Alamos to meet with other physicists, including Niels Bohr and Enrico Fermi, who were working on uranium fission. In August of 1945 atomic bombs were dropped on the Japanese cities of Hiroshima and Nagasaki with a destructive ferocity never before seen. According to biographer Joan Dash, by this time Goeppert-Mayer's ambivalence about the nuclear weapons program had turned to distaste, and she was glad she had played a relatively small part in the development of such a deadly weapon.
After the war, Goeppert-Mayer returned to teach at Sarah Lawrence. Then, in 1946, her husband was offered a full professorship at the University of Chicago's newly established Institute of Nuclear Studies, where Fermi, Teller, and Urey were also working. Goeppert-Mayer was offered an unpaid position as voluntary associate professor; the university had a rule, common at the time, against hiring both a husband and wife as professors. However, soon afterwards, Goeppert-Mayer was asked to become a senior physicist at the Argonne National Laboratory, where a nuclear reactor was under construction. It was the first time she had been offered a position and salary that put her on an even footing with her colleagues.
Again her association with Edward Teller was valuable. He asked her to work on his theory about the origin of the elements. They found that some elements, such as tin and lead, were more abundant than could be predicted by current theories. The same elements were also unusually stable. When Goeppert-Mayer charted the number of protons and neutrons in the nuclei of these elements, she noticed that the same few numbers recurred over and over again. Eventually she began to call these her "magic numbers." When Teller began focusing his attention on nuclear weapons and lost interest in the project, Goeppert-Mayer began discussing her ideas with Enrico Fermi.
Goeppert-Mayer had identified seven "magic numbers": 2, 8, 20, 28, 50, 82, and 126. Any element that had one of these numbers of protons or neutrons was very stable, and she wondered why. She began to think of a shell model for the nucleus, similar to the orbital model of electrons spinning around the nucleus. Perhaps the nucleus of an atom was something like an onion, with layers of protons and neutrons revolving around each other. Her "magic numbers" would represent the points at which the various layers, or "shells," would be complete. Goeppert-Mayer's likening of the nucleus to an onion led fellow physicist Wolfgang Pauli to dub her the "Madonna of the Onion." Further calculations suggested the presence of "spin-orbit coupling": the particles in the nucleus, she hypothesized, were both spinning on their axes and orbiting a central point—like spinning dancers, in her analogy, some moving clockwise and others counter-clockwise.
Goeppert-Mayer published her hypothesis in Physical Review in 1949. A month before her work appeared, a similar paper was published by J. Hans D. Jensen of Heidelberg, Germany. Goeppert-Mayer and Jensen began corresponding and eventually decided to write a book together. During the four years that it took to complete the book, Jensen stayed with the Goeppert-Mayers in Chicago. Elementary Theory of Nuclear Shell Structure gained widespread acceptance on both sides of the Atlantic for the theory they had discovered independently.
In 1959, Goeppert-Mayer and her husband were both offered positions at the University of California's new San Diego campus. Unfortunately, soon after settling into a new home in La Jolla, California, Goeppert-Mayer suffered a stroke which left an arm paralyzed. Some years earlier she had also lost the hearing in one ear. Slowed but not defeated, Goeppert-Mayer continued her work.
In November of 1963 Goeppert-Mayer received word that she and Jensen were to share the Nobel Prize for physics with Eugene Paul Wigner, a colleague studying quantum mechanics who had once been skeptical of her magic numbers. Goeppert-Mayer had finally been accepted as a serious scientist. According to biographer Olga Opfell, she would later comment that the work itself had been more exciting than winning the prize.
Goeppert-Mayer continued to teach and do research in San Diego, as well as grow orchids and give parties at her house in La Jolla. She enjoyed visits with her granddaughter, whose parents were daughter Marianne, an astronomer, and son-in-law Donat Wentzel, an astrophysicist. Her son Peter was now an assistant professor of economics, keeping up Goeppert-Mayer's family tradition of university teaching.
Goeppert-Mayer was made a member of the National Academy of Sciences and received several honorary doctorates. Her health, however, began to fail. A lifelong smoker debilitated by her stroke, she began to have heart problems. She had a pacemaker inserted in 1968. Late in 1971, Goeppert-Mayer suffered a heart attack that left her in a coma. She died on February 20, 1972.
Further Reading on Maria Goeppert-Mayer
Dash, Joan, The Triumph of Discovery: Women Scientists Who Won the Nobel Prize, Messner, 1991.
Opfell, Olga S., The Lady Laureates: Women Who Have Won the Nobel Prize, Scarecrow, 1978, pp. 194-208.
Sachs, Robert G., Maria Goeppert-Mayer, 1906-1972: A Biographical Memoir, National Academy of Sciences of the United States, 1979.
Our direct relationship with the wild kingdom most likely began with harvesting for food, tools, clothing, and shelter. But while our early ancestors munched on mammoth meat in a tent made from ox hides, all warm and cozy under a sloth fur blanket while they whittled away at spear points using their whale bone knives, perhaps they admired how animals survived using their amazing skills and physical attributes. Maybe these early humans wondered how they could employ some of those amazing talents, or even mimic them.
That’s exactly what happened. Nature inspired us to figure out ways to solve problems we faced. Coyotes taught us hunting skills, birds inspired us to fly, beavers gave us ideas on how to build dams, even tiny termites demonstrated building ideas that we could use. By studying and implementing the workings of wildlife, we’ve come up with some pretty awesome inventions. Here are some examples to inspire your students to come up with ideas of their own:
Have you noticed how a cat will retract its claws when they aren’t being used to stretch, scratch up the furniture or snag a mouse? You may also have noticed that stepping or sitting on a thumbtack is no fun. A designer came up with a new improved thumbtack based on the way a cat retracts its claws when not in use. He enclosed a pin inside a capsule made of soft plastic material. If you push the capsule against a hard surface, the capsule collapses so that the pin can penetrate the surface. If you remove it, the pin is once again safely inside the capsule.
Bats, the only mammals capable of true flight, have a talent that may be even cooler than retractable claws. Bats emit high-frequency sounds and use their ears to detect the returning echoes, a process called echolocation, which allows them to swoop around obstacles and snag insects, their favorite food, right out of the air. From the echoes, a bat gathers detailed information that allows it to locate and even identify what's in the area. It's no big surprise that researchers are working on ways that sight-impaired people can use a system like echolocation. The UltraCane is one invention that uses ultrasonic waves to reveal the location of obstacles. The information is passed through the cane as vibrations, allowing the person to “see” where they are going.
Bombardier beetles from Africa and Asia have a pretty powerful defense mechanism that researchers find awesome and inspiring. If anything threatens one of these guys, it will quickly be sorry. A bombardier beetle can blast a powerful spray of hot, toxic stuff with enough force to cause serious injury to the hapless predator. These creatures have a built-in combustion chamber where chemicals mix up and boil. Gases react and create pressure that helps eject the fluid at the right moment. Researchers have managed to mimic the action and hope to use the nature-inspired eco-friendly technology for fuel injection in automotive and other transportation industries as well as drug delivery systems, such as inhalers for people with breathing problems.
If you got extremely close to a shark, you might notice that its skin consists of millions of tiny sleek tooth-like scales. These scales are made of a very tough material called dentin. Scientists noted that the scales actually reduce drag by creating tiny vortices in the water as the shark moves. The scales also resist barnacles because they are constantly moving. This was interesting to the shipping industry because barnacles create extra weight and cause drag on a ship. Scientists are attempting to create a synthetic shark skin with the same anti-fouling attributes. Using something like shark skin could save the high costs of removing barnacles from ship hulls and ultimately save on fuel.
Most of us have encountered a spider web and noticed how strong and sticky it was. Spider silk is one of the strongest materials in nature, estimated to be five times stronger than steel by weight. But unlike steel, spider silk is both flexible and lightweight. Spiders can even spin silk that is either sticky, to catch prey, or non-sticky, so that they can use it for pathways. Scientists recently created a new type of medical tape based on the attributes of spider silk.
Even plants can inspire invention. An electrical engineer named George de Mestral noticed that burrs from burdock plants stuck fast to his unfortunate hunting dog. From that bit of inspiration he invented Velcro, one of the most widely used fastening systems in human history.
Have your students study the traits of a plant or animal and see what inventions or ideas they come up with based on its attributes and abilities. See my previous blog about writing a patent and perhaps combine the projects!
Here is a list of common printing and graphic design terms.
AAs (author’s alterations)
Client’s changes and/or additions to copy after it has been typeset.
Colour with no saturation, such as white, black, or gray.
Additive colour primaries
Red, green, and blue (RGB). These colours are used to create all other colours with direct (or transmitted) light (for example, on a computer or television screen). They are called additive primaries because when pure red, green, and blue are superimposed on each other, they create white. Refer to subtractive colour primaries.
Visibly jagged steps along angles or object edges due to sharp tonal contrasts between pixels.
Placement and shape of text relative to the margins. Alignment settings can be centered, flush left, flush right, justified, ragged right, etc.
An 8-bit, gray scale representation of a Photoshop image, often used for creating masks that isolate part of an image.
Amplitude-modulated screening. Same as traditional halftone screening. Compare with FM screening.
Non-printing layers in some page layout programs used to provide written instructions on certain aspects of an electronic file.
In computer graphics, the smoothing of the jagged, “stairstep” appearance of graphical elements. See also jaggies.
Any analog or digital image, text or graphics used for printing reproduction.
The part of a letter that rises above the main body, such as the rising strokes of the letters “d” and “k.” Refer to descender.
In computer graphics, ratio of width to height of a screen or image frame.
Automatic Text Flow
Used in desktop publishing, it allows text matter to flow from one column to the next on each page and from one page to the next in a document automatically. It eases the pain of making significant copy changes to a long document.
Automatic Picture Replacement
A linking process in which a low resolution image or low resolution placeholder (FPO) is automatically replaced by a high resolution image just before a document is sent to the imagesetter. This allows page layout operators to work with smaller files without overloading their systems. Also known as APR or OPI.
White bands which can be produced if data is sent too slowly to recorders that cannot stop/start successfully. The media continues to feed even though no image is available to print, resulting in white bands in the output. Stripes of colour that occur when too few colours are available to achieve a smooth colour blend. A visible stair-stepping of shades in a gradient.
The invisible line which all characters in a line of type rest upon.
The sequential scanning of multiple originals using previously defined and unique settings for each.
In computer graphics, a bezier is a curved line described by two end points and two or four control points. The end points are the ends of the curve itself. The control points (or levers) determine the shape of the curve, but are not on the curve itself.
A form of image containing only black and white pixels; a 1-bit image.
Press Sheet markings that indicate how the sheet should be cropped, folded, collated, or bound.
The number of bits in each pixel of an image. Also refers to the amount of data per pixel for displaying on a computer monitor. Bit depth sets the maximum number of discrete colours or shades in each pixel; hence, the greater the bit depth, the more vivid and realistic colour and greyscale images will appear.
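As a quick, hedged illustration of the relationship described above (the number of possible values per pixel is 2 raised to the bit depth), a few lines of Python:

```python
# Number of discrete values a pixel can hold at a given bit depth (2 ** bits).
for bits in (1, 8, 16, 24):
    print(f"{bits}-bit pixel -> {2 ** bits:,} possible values")
# 1-bit  -> 2          (bilevel/bitmap)
# 8-bit  -> 256        (greyscale or indexed colour)
# 24-bit -> 16,777,216 (typical RGB "true colour" display)
```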
An image formed by a matrix of visible or invisible dots (bits). On a computer screen, the dots are formed by pixels. Unlike vector objects or Bezier curves, bitmaps are resolution dependent. See also raster image.
A movable reference point that defines the darkest area in an image, causing all other areas to be adjusted accordingly.
Text or art that extends beyond the trim page boundaries, or the crop marks, on one or more sides of a page. Part of a printed image that extends beyond the page boundary. When the page is trimmed to size, the “bleed” extends to the absolute edge of the paper, preventing any show-through of the paper colour.
See Graduated fill.
The technique of creating an image on paper by stamping the paper with a die, creating a visible raised effect, without applying ink to the image (hence, the designation of “blind”).
Also known as blueprint. A one-off print made from stripped-up film or mechanicals, used to confirm position of image elements. Bluelines are often used as final proofs for single and spot colour jobs, or to show how the final job will fold or bind.
Refer to Blueline.
Text matter that comprises the major content of an article or publication other than mastheads, headlines, sub-heads, call-outs, charts and graphs.
This technique is used to highlight or isolate important words or graphs from the secondary copy surrounding them. Boxes also create interest and give the reader’s eye a break from long passages and monotonous amounts of text.
The relative lightness of an image. Computers represent brightness using a value between 0 (dimmest) and 255 (brightest).
A contact print made by exposing a photographic paper (originally bromide paper) to a light source through a film negative. Before the advent of computer generated artwork, bromides were commonly used as camera-ready artwork.
Bullets can be solid dots or squares, open dots, or another tiny iconic symbol that is used to enhance a list. Bullets are normally set in a slightly larger point size than the text they accompany and should always be used in a list of no less than five items. Bullets are visually most effective when used with hanging indents.
With regard to recorders and imagesetters, the process of adjusting the device so it correctly reproduces the desired halftones, tints, and so on. See also Linearisation.
A strip of varying shades usually ranging from 0% to 100% (in 10% increments) on film, proofs, and press sheets. Prepress service providers use calibration bars to measure and control screen percentages for printing and proofing.
A call-out is a short phrase or line of type that helps identify important elements of a graphic or illustration. A connecting line or arrow is often used with a call-out.
Black and white artwork that is meant to be processed by shooting it on a process camera, or scanned and converted to negatives, and then used to make printing plates. On a direct-to-plate system, the black and white artwork would be converted directly from the art to the printing plate. Used as a generic term for a mechanical, film negative or positive, or any material that is ready to be photographed for the purpose of generating printing plates.
A caption is a sentence or more used to summarize the importance of charts, graphs, illustrations, photographs, or tables. Captions identify the people in photographs and relate the photo or graphic item to the surrounding body copy. A photograph should always have a caption.
An unwanted tinge or shade of colour present in an image.
Press marks that appear on the center of all sides of a press sheet to aid in positioning the print area on the paper.
Information about a single process or spot colour contained within an image file. An image may have up to 16 channels.
Refer to Trap
A synonym for colour or hue.
Perceived as having a hue, not white, gray or black.
A measure of the combination of both hue and saturation in colour produced by lights.
A unit of measurement in the Didot system, commonly used in Europe. A cicero is equal to 12 Didot points, is slightly larger than a pica and is approximately 4.5 millimeters.
Commission Internationale de l’Éclairage, the international body that sets standards for illumination and developed the CIE colour models.
CIE colour models
A family of mathematical models that describe colour in terms of hue, lightness, and saturation. The CIE colour models include CIE XYZ, CIELAB, CIELUV, and CIE xyY.
Illustrations and designs collected and usually sold commercially.
Electronic art files that are already on a disk are called click art.
The conversion of all tones lighter than a specified gray level to white, or darker than a specified grey level to black, causing loss of detail. This also applies to individual channels in a colour image.
The boundary of a graphical mask (created by points and straight or curved lines) used to screen out parts of an image and expose or print other parts. Only what is inside the clipping path is displayed or printed. A series of Bezier curves drawn around a particular area of an image to isolate it from its background, so that it appears to be masked or silhouetted when placed in a page-layout program. Typically performed in Adobe Photoshop.
Colour management system. This ensures colour uniformity across input and output devices so that final printed results match originals. The characteristics or profiles of devices are normally established by reference to standard colour targets.
A colour mixing model consisting of the four process colours used in printing: cyan, magenta, yellow, and black. A comprehensive array of colours can be achieved by combining certain percentages of these four primaries. White is achieved by letting the paper show through where white is required.
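For illustration only, the naive sketch below converts an RGB value to CMYK components to show how the four primaries combine and how a grey component can be carried by the black ink; real conversions go through a colour management system and ICC profiles rather than this formula.

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB (0-255) to CMYK (0-1) conversion, for illustration only."""
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0              # pure black: black ink only
    c, m, y = (1 - r / 255, 1 - g / 255, 1 - b / 255)
    k = min(c, m, y)                           # grey component moved to black ink
    return tuple(round((x - k) / (1 - k), 3) for x in (c, m, y)) + (round(k, 3),)

print(rgb_to_cmyk(255, 0, 0))      # a saturated red -> (0.0, 1.0, 1.0, 0.0)
print(rgb_to_cmyk(128, 128, 128))  # a mid grey -> mostly black ink
```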
Light waves that reach the viewer’s eye by transmission (through an object between the viewer and the light source) or by reflection (when light waves bounce off an object). All substances, whether transparent or opaque, absorb some wavelengths while letting others pass through or bounce off. A red apple looks red because it absorbs all colours in white light except red, which it reflects. White objects reflect all and black objects absorb all light waves (at least in theory).
Colour produced by mixing coloured lights. In projected light, each colour is created by adding one colour of light to another. All colours can be made by a mixture of red, green and blue light.
Rectangles of colour printed on colour proofs to check the ink densities and other technical factors required to conform to quality standards.
Artwork prepared so as to indicate which elements print in which ink colour. Copy and art for each colour may be pasted on separate boards, pasted on overlays, or indicated in pencil on an overlay sheet of tissue paper.
An overall colour imbalance in an image, as if viewed through a coloured filter.
Black-and-white version made from a colour photograph or other original.
The adjustment of colours in any photographic, electronic, or manual process to obtain a correct image by compensating for the deficiencies of process inks, colour separation, or undesired balance of the original image.
A set of four acetate overlays, each utilizing a halftone consisting of one of the four process colours used for proofing colour separations.
Colour lookup table
A table of values, each of which corresponds to a different colour that can be displayed on a computer monitor or used in an image. For example, an indexed colour image uses a colour lookup table of up to 256 colours.
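A minimal sketch of the idea, assuming a hypothetical three-entry palette: the image stores small index values, and the lookup table resolves each index to a displayable colour.

```python
# Hypothetical colour lookup table (index -> RGB triplet).
palette = {
    0: (255, 255, 255),   # white
    1: (0, 0, 0),         # black
    2: (204, 36, 29),     # a red
}
indexed_pixels = [0, 2, 2, 1, 0]                   # what an indexed-colour file stores
rgb_pixels = [palette[i] for i in indexed_pixels]  # what the monitor displays
print(rgb_pixels)
```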
Specifying Pantone or process colours to produce a desired colour from a previously printed piece or other colour original.
A sheet of film or paper whose text and art correspond to one spot colour or process colour. Each colour overlay becomes the basis for a single printing plate that will apply that colour to paper.
A representation matching the appearance of the final printed piece. Includes colour laser proofs, colour overlay proofs, and laminate proofs. A representation of what the printed composition will look like. The resolution and quality will vary greatly depending on the proofing device. These can be provided during the various stages of page construction.
The amount of a hue contained in a colour; the greater the saturation, the more intense the colour.
In four colour process printing, the process of transforming colour artwork into four components corresponding to the four process colours. In spot colour printing, the process of transforming artwork into components corresponding to each spot colour required in the printed piece. Each component used in preparation for making printing plates that correspond to the specific ink colour.
A scheme of representation for colour images, such as CMYK or RGB. Colours are represented as a combination of a small set of other colours or by other parameters (like hue, saturation, and brightness).
Colour produced by mixing pigments (such as inks or paints). Pigments absorb (or subtract) all the colours from the reflected light except for the colour we see.
A circle that displays the spectrum of visible colours. It provides a graphic representation of the relationship between primary and secondary colours with successive colour mixtures and tonal values.
A light-sensitive device for measuring colours by filtering their red, green and blue components, as in the human eye. See also spectrophotometer.
Signatures of different sizes inserted at any position in a layout.
Comprehensive artwork used to represent the general colour and layout of a page.
The inverted hue of a colour (the one that is diametrically opposite on a colour wheel). For example, yellow is the complementary colour of blue.
A version of an illustration or page in which the process colours appear together to represent full colour. When produced on a monochrome output device, colours are represented as shades of gray.
Composition is the process of keyboarding and combining typographic elements into pleasing page layouts for print production.
The use of various software designed to reduce the size of a digital file. See also lossy and non-lossy.
Type in which the individual characters are narrower than normal so that more characters can fit on a single line. When the set width of a font has been shortened, the font will be more narrow — allowing more characters to fit on any given line length. Fonts should be condensed by using a true “condensed” version of a typeface. Condensing type by using the “attributes” selection screen of a page layout program increases the risk that the outputting or dtp equipment will not recognize the font or ignore it completely.
Continuous-tone (CT also contone) image
Any colour or greyscale image which has not been converted to halftone dots for reproduction. Photographs, paintings and charcoal drawings are prime examples of continuous tone images.
A proof created by the printer to be shown to the customer as a representation of the final colours of a printed piece and subsequently signed by the customer to indicate that the printed piece will be acceptable if it matches the signed proof.
The difference between the dark and light values in an image. Images with a great deal of contrast contain mostly very dark and very light values, while low-contrast images contain mostly medium gray values.
Copyfitting is the process of writing or editing articles to fit into a predetermined space allowance. Good copyfitting results in evenly filled columns and pages with the proper amount of white space.
Creep, Creep Allowance
Adjusting the page layout of inner spreads to maintain a constant outer margin when a saddle-stitched booklet is trimmed. If there is no creep allowance, when pages are trimmed, the outer margins become narrower toward the center of the booklet and there is the possibility that the text or images may be cut off. The amount of creep allowance needed depends on the size of the margins, number of pages, and the thickness of the paper. Sometimes associated with shingling.
Short, fine lines used as guides for final trimming of the pages within a press sheet. Short, fine lines that mark where a printed page should be trimmed when it is printed on paper that is larger than the image area.
Cropping is the process of eliminating irrelevant or excessive background content of photographs. Cropping enhances the focus of photographs and allows the designer to change the shape of the original photo.
Custom printer description file
A file containing information specific to a type of output device; used in conjunction with a standard PPD file to customize the printing process.
Cylinder on a paper machine for creating finishes, such as wove, laid, or linen, and for adding watermarks.
The result of a raster image processor (RIP) failing to supply data to an output device quickly enough. If the output device cannot stop/start successfully, banding or other negative effects may occur.
Acronym for Desktop Colour Separation, a version of the EPS file format. DCS 1.0 files are composed of five PostScript files for each colour image: a cyan, magenta, yellow, and black file, plus a separate low-resolution FPO image to place in a digital file. In contrast, DCS 2.0 files can have a single file that stores process colour and spot colour information. A preseparated image file format, developed by Quark, Inc., consisting of five parts: four process colour separation files (containing the cyan, magenta, yellow, and black components of the image) and a composite EPS placeholder file. A variation of the EPS file format for CMYK images where the process colour information is stored in four separate files. A fifth “master” file is used for placement in a page layout. This format is sometimes used by prepress vendors instead of OPI, or APR images. The master file can be sent to the designer for placement in the layout.
The expansion of compressed image files. See also lossy and non-lossy.
A device used throughout the printing process to measure the amount of light passing through or reflecting from a given medium.
In a digital file, the outline used to create a device for cutting, stamping, or embossing the finished printed piece into a particular shape, such as a rolodex card.
The ability of an object to stop or absorb light. The less the light is reflected or transmitted by an object, the higher the density.
The range from the smallest highlight dot the press can print to the largest shadow dot it can print. Density range also describes the capacity of a scanner to read detail in the shadow and highlight areas of a continuous tone image. The greater the scanner range, the more detail at either end of the scale. Scanners with a short density range characteristically produce noise in the darkest shadow areas. Refer to noise.
That part of a letter that drops below the baseline, such as the lower strokes of the letters “g” and “p.” Refer to ascender.
Removal of halftone dot patterns during or after scanning printed matter by defocusing the image. This avoids moiré patterning and colour shifts during subsequent halftone reprinting.
A special type of interference filter, which reflects a specific part of the spectrum, whilst transmitting the rest. Used in scanners to split a beam of light into RGB components.
The Didot point system was created by François-Ambroise Didot in 1783. 1 didot point is about 0.376mm. See also cicero, pica and points.
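The conversions implied by this entry can be written out directly; the figures below follow the approximate values quoted above rather than any exact legal definition.

```python
DIDOT_POINT_MM = 0.376                 # approximate size of one Didot point

print(12 * DIDOT_POINT_MM)             # one cicero: about 4.51 mm
print(round(10 / DIDOT_POINT_MM, 1))   # about 26.6 Didot points in 10 mm
```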
A format which is recognizable and readable by a computer system.
Digital cameras use a CCD sensor that captures light and converts it into electrical signals, which are then converted into digital data. Images may be temporarily stored in random access memory (RAM), in-camera storage media, or moved directly to a computer’s drive.
Convert information to computer-readable form. Digitized typesetting is the creation of typographic characters by the arrangement of black and white pixels.
Degree to which paper maintains its size and shape in the printing process and when subjected to changes in moisture content or relative humidity.
A decorative character.
Direct exposure of image data onto printing plates, without the intermediate use of film.
Elimination of intermediate film and printing plates by the direct transfer of image data to printing cylinders in the press.
Type set larger than the text to attract attention.
A technique used in computer graphics to create the illusion of varying shades of gray or additional colours by distributing the screen pixels or imagesetter dots of an image. Dithering relies on the eye’s tendency to blur spots of different colours by averaging their effects and merging them into a single perceived shade or colour.
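As a rough sketch of the idea, not any particular program's algorithm, the example below applies a tiny 2 x 2 ordered-dither threshold matrix to one row of 8-bit greyscale values; the threshold values are illustrative.

```python
# Illustrative 2x2 ordered-dither thresholds spread across the 0-255 range.
THRESHOLDS = [[32, 159],
              [223, 96]]

def dither_row(row, y=0):
    """Turn one row of 0-255 greyscale values into 0/1 dots."""
    return [1 if value >= THRESHOLDS[y % 2][x % 2] else 0
            for x, value in enumerate(row)]

print(dither_row([30, 100, 160, 230]))   # -> [0, 0, 1, 1] for this row of thresholds
```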
The point of maximum density in an image or original.
The point of minimum density in an image or original.
Computer file created with an application program.
In printing, a small spot which combines with other dots in a matrix of rows and columns to form characters or graphic elements. Refer to halftone.
In halftone printing, the tendency of the ink used to create halftone dots to flow outward as it is absorbed by the paper. Too much dot gain can create a cloudy or dark image. A phenomenon that results due to the tendency of wet ink to spread when it contacts paper. This results in a slightly larger dot than appears on the printing plate itself, and in some cases may cause images to darken or appear “muddy.” Image files should be prepared in such a manner so as to allow for dot gain, a process known as gain compensation.
The shape of the dots that make up a halftone. Dot shapes can be round, square, elliptical, linear, etc.
A font not resident in a printer’s memory that must be sent to the printer in order to print a document containing that font.
The process of acquiring a low-resolution copy of a high-resolution image for layout purposes only. The reduction in resolution of an image, necessitating a loss in detail.
Dots per inch. A measure of screen and printer resolution that is expressed as the number of dots that a device can print or display per linear inch. The number of dots in a linear inch (as opposed to a square inch). Therefore, a 600-dpi laser printer prints 600 dots per linear inch, or 360,000 (600 x 600) dots per square inch. DPI is also known as resolution, as in “What is the resolution of your printer/scanner/TIFF file/monitor?” Printer, scanner, image and monitor resolution are all typically expressed in terms of DPI. However, since image files are made up of pixels, and since monitors display pixels rather than dots, image resolution and monitor resolution should technically be expressed in PPI (pixels per inch). Refer to LPI (lines per inch).
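The arithmetic in this entry is easy to make concrete; the page size below is a hypothetical example.

```python
dpi = 600
print(dpi * dpi)                        # 360000 dots per square inch

page_w_in, page_h_in = 8.5, 11.0        # hypothetical US Letter page
total_dots = int(dpi * page_w_in) * int(dpi * page_h_in)
print(f"{total_dots:,} dots to cover the page at {dpi} dpi")
```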
A decorative capital letter at the beginning of a paragraph that hangs below the top line of the paragraph and occupies space of more than one line.
A coloured or shaded box or character offset and placed behind an identical box or character to give a shadow effect.
Early drum scanners separated scans into CMYK data, recording these directly onto film held on a second rotating drum.
Acronym for Document Structuring Conventions, a set of organizational and commenting conventions for PostScript files designed to provide a standard order and format for information so applications that process PostScript can easily find information about a document’s structure and imaging requirements. These conventions allow specially formatted PostScript comments to be added to the page description; applications can search for these comments, but PostScript interpreters usually ignore them.
A halftone greyscale image rendered in two colours, one of which is usually black. This process uses the same image on both plates with the exception of setting the screen angles differently to avoid moiré patterns and the density range is shortened on the darker colour to allow the lighter colour to show in the highlight and midtone areas. This gives the image tonal and colour interest and gives the illusion of added depth. This is a very useful design alternative for two-colour print jobs containing grayscale images.
A printing process using small heating elements to evaporate pigments from a carrier film, depositing these smoothly onto a substrate. Used primarily for colour proofs and comps.
Electrostatic printing or copying
Printing process in which an image is created by applying an electric charge to a carrier, attracting magnetic ink (toner) to the image, and transferring it from the carrier to paper with heat and pressure. See also Xerography.
The ellipsis is a set of three dots which look like a series of three periods (…). Ellipses are used to indicate missing copy when placed between two sentences or phrases. They are commonly used when paraphrasing long quotations. They can also be used in pairs as a “continuation technique” when you want to lead the reader into other copy. But don’t forget to place the second ellipsis before the final connecting copy so the reader knows where to go.
Unit of space (width) equal to the point size of the type.
An em dash is used to abruptly change a thought within a sentence or to connect two different thoughts within a sentence. The actual length of an em dash is approximately four times the length of a hyphen and is relative to the set width of the font which you are using. Em dashes received their name due to the fact that they are equivalent to the width of the capital letter em (M).
An Em space is a fixed amount of blank space equivalent to the width of a capital letter em (M). Em spaces are frequently used for paragraph indents and bullet item indents because they are fixed units. Em spaces are relative to the set width of the font being used.
Half an em.
An en dash is used to denote continuation, as in “pages 4–5” and “1966–1995.” The actual length of an en dash is approximately two times the length of a hyphen and is relative to the set width of the font which you are using. En dashes received their name due to the fact that they are equivalent to the width of the capital letter en (N). An en dash is one-half the width of an em dash.
An En space is a fixed amount of blank space equivalent to the width of a capital letter en (N). En spaces are frequently used when a fixed amount of space is needed, but less space than the more commonly used em space. En spaces are relative to the set width of the font being used. An en space is one-half the width of an em space.
To stamp a raised area or image into paper with metal dies, usually combined with a printed image. (To stamp an indentation with metal dies is to deboss the image).
A thin photo-sensitive coating (usually containing silver halides) that is applied to a base substrate to produce photographic film or paper. When the emulsion is exposed to the appropriate light source and developed in the appropriate chemical developer, an image is produced. Developing removes the unexposed silver halides from the emulsion.
Encapsulated PostScript (EPS)
A file format that stores PostScript information in an image file so that it may be transferred as a unit to a suitable page layout or drawing program or used for preparing images for later typesetting. An encapsulated PostScript file has two parts: a low resolution bitmap picture of the screen and a full PostScript description to pass on to a suitable printer.
See Encapsulated PostScript.
Another term used for DCS. Refer to DCS
A list of errors in a book which are of sufficient importance to be called to the attention of the reader.
Euclidean dot shapes
Round, elliptical, square, or linear halftone dots that invert with their cell after 50% intensity. This strategy helps reduce dot gain problems sometimes experienced with elliptical, square, and linear dots.
A small block of print used to trigger packaging equipment.
When the set width of a font has been lengthened, the font will be wider, allowing fewer characters to fit on any given line length. Fonts should be expanded by using a true “expanded” version of a typeface. Expanding type by using the “attributes” selection screen of a page layout program increases the risk that the outputting or dtp equipment will not recognize the font or ignore it completely.
A type of illustration that shows a structure with its parts separated but drawn in relation to each other.
To transfer information from the current program to another location or program. See import.
The product of the intensity of a light source and the length of time photo-sensitive material is subjected to the light source. The measurement of exposure is a prominent factor in controlling the lasers that are at the heart of imagesetters, platesetters and computer-to-press imaging devices.
Two pages that face each other in a printed publication, consisting of the verso (left) and recto (right) pages of an opened book.
A family of type is the complete font set with all its related attributes. One family can include: roman, italic, bold, bold italic, black, black italic, light, light italic, thin, thin italic, plus all the condensed and expanded versions of the previously listed.
The progressive bleed-off at the soft edge of an image so that it blends with the underlying image or background colour.
Film containing an image in which the values of the original image are reversed. Film negatives are typically output from imagesetters and are used to create printing plates. Refer to Film Positive.
Same as film negative, except that the image is not reversed. Usually used when the film is to be duplicated, rather than directly photographed to create printing plates.
Used in reference to colour transparency recording devices, and sometimes also to imagesetters.
Individual film assembled onto a film carrier readied for contacting or Platemaking.
Any scanning device that incorporates a flat transparent plate, on which original images are placed for scanning. The scanning process is linear rather than rotational.
Printing on a press using a rubber plate that stretches around a cylinder, making it necessary to compensate by distorting the plate image. Flexography is used most often in label printing, often on metal or other non-paper material.
To rotate an image along either its horizontal or vertical axis.
To completely coat a press sheet with ink or varnish (as opposed to pattern or spot varnish, which is a defined image).
Aligned or even with (in reference to type alignment).
Aligned along the left edge or margin.
Aligned along the right edge or margin.
Frequency-modulated screening. A type of screening that employs irregular clusters of equally sized CMYK pixels to represent continuous-tone images. The placement of these pixels, although seemingly random, is precisely calculated to produce the desired hue and intensity. This process differs from traditional halftoning in which the distance between CMYK dots remains constant while dot size varies to create the desired hue and intensity. Compare AM screening. See also Stochastic Screening.
Dotted or dashed lines on camera-ready art that indicate where to fold the printed piece.
A template used for determining the page arrangement on a form to meet folding and binding requirements.
A printed page number.
A set of letters, numbers, punctuation marks, and symbols that share a unified design. The design is called a typeface. A group of related typefaces is a type family. A font is the specific name applied to a particular typeface style. Examples of font names are Helvetica, Times, Americana, and Zapf Chancery.
The information about a publication, such as its title, date, issue or page number is a footer when it consistently appears at the bottom of each page of the document.
A footnote is a numbered passage which amplifies specific information on the page and provides direction about how to find sources or related reading.
The front or back of a signature.
The overall appearance of a publication, including page size, paper, binding, length, and page-design elements such as margins, number of columns, treatment of headlines, and so on.
For position only (FPO)
A photocopy, photostat, or low-resolution electronic copy of an image or piece of art positioned on the camera-ready page to indicate the position of the actual art to be stripped in by the printer or inserted by the system during prepress processing. A term applied to low-quality art reproductions used to indicate placement and scaling of an art element on mechanicals or camera-ready artwork. In digital publishing, FPO can be low-resolution TIFF files that are later replaced with high-resolution versions. An FPO is not intended for reproduction but only as a guide and placeholder for the prepress service provider.
A solution of water, gum arabic, and other chemicals used to repel ink from non-printing areas of the lithographic plate.
The most common full-colour printing process which uses colour separation to produce one image for each of the four process colours (cyan, magenta, yellow, and black). Each colour is then overprinted to reproduce the full colour of the image.
An outline between abutting colour areas.
A combination of hardware and software, designed to capture individual frames from video clips for further digital manipulation, or consecutive replay on computer platforms.
Frequency (Halftone Screen Frequency)
The spacing of the dot matrix in a halftone image, usually expressed as lines per inch (lpi). Optimum halftone screen frequency is dependent on the type of imaging device used to reproduce the halftone. In practical terms it varies from 55 lpi for silkscreen printing to 200 lpi for high-end offset printing.
The process of preparing an image file to compensate for the increase in dot size, known as dot gain, that occurs when wet ink spreads on paper. Adjustments to compensate for dot gain are typically performed in Adobe Photoshop or a similar image editing program.
A measure of contrast that affects the mid-range grays (midtones) of an image. Gamma is often expressed as a curve. Technically, a numerical representation of contrast in an image. Adjusting the gamma, which is what you are doing when you move the middle slider in Photoshop’s “Levels” dialog, allows you to correct midtones without noticeable changes in the highlight and shadow areas.
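A minimal sketch of the kind of midtone adjustment described here, assuming 8-bit values and the common power-law formulation; it is not a reproduction of Photoshop's exact Levels math.

```python
def apply_gamma(value, gamma):
    """Power-law midtone adjustment on an 8-bit value (0-255)."""
    return round(255 * (value / 255) ** (1 / gamma))

for v in (0, 64, 128, 192, 255):
    print(v, "->", apply_gamma(v, 1.8))   # gamma > 1 lightens midtones; 0 and 255 stay put
```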
The correction of tonal ranges in an image, normally by the adjustment of tone curves.
The range of colours that a device can reproduce. The eye, the camera, the computer monitor, the toner-based colour printer, the inkjet printer and the four-colour printing press all have different colour gamuts. The human eye has the widest gamut and the printing press has the narrowest.
The process of colour matching in which differences in colour gamuts between the source device and the target device are taken into consideration.
The process by which a device with a wider gamut is programmed to emulate behavior of a device with a narrower gamut.
True gang scanning means mounting a number of originals on the scanner and scanning all at the same exposure. Advances in scan software now allow multiple images to be mounted on the scan bed and exposure, cropping, colour mode and other scan parameters to be applied to each individual image.
A four-page insert or cover with foldouts on either side, making the equivalent of 8 pages.
Gray component replacement. A colour separation technique that uses black instead of combinations of cyan, magenta, and yellow in reproducing the gray components of colours. This provides a more economical use of inks and improved ink application. A technique for minimizing ink coverage.
When an image is screened back or shaded down in intensity, it is called a ghosted image. Both full-colour and black and white images can be ghosted.
Light marks in a printed image caused by an adjacent heavy image depleting the ink on the inking roller. Smaller presses with few rollers in the ink train are most subject to this problem, although some designs will cause ghosting even on the largest press.
Graphic Interchange Format. A platform-independent image file developed by CompuServe that is commonly used to display and distribute images on the Internet. A compressed digital image format widely used for electronically published images on the Internet. Any single image may only contain a maximum of 256 different colours, generally considered inadequate to represent photos. Pronounced Jiff.
Sans serif typefaces.
See graduated fill.
A graded series of colours that changes progressively from one colour to another or from light to dark or dark to light within the same colour. See also Vignette.
An area in which two colours (or shades of gray) are blended so as to create a gradual change from one to the other. Graduated fills are also known as blends, gradations, gradient fills, and vignettes.
In paper, the direction in which fibers line up. Paper grain is a significant factor in a variety of operations such as folding, scoring and paper handling in printing and finishing equipment.
Graphic Accents emphasize and organize words, illustrations and photographs. Boxes, drop shadows, indents, lines, rules, screens and icons are considered graphic accents.
A page layout term that refers to the way placed graphics files are managed by software. When a graphic is placed on a page, it appears there but does not become part of the page layout file. The page layout software keeps track of the location of the graphics file (the link) and will download that file when the page layout is printed.
The balance between CMY colourants required to produce neutral greys without a colour cast.
Grey component replacement
Discrete tonal steps in a continuous tone image, inherent to digital data. Most digital continuous tone images will contain 256 grey levels per colour.
The representation of colours in varying shades of gray – usually 256 shades in digital artwork.
A grid is the defining of headline positions, column length and width, placement of headers and footers, and any other predetermined placement of photographs or graphic elements on a page. A series of nonprinting horizontal and vertical rules assists in creating and maintaining a grid for page layout.
The trim at the back (or spine) of a signature, or of two or more gathered signatures, in preparation for perfect binding.
The part of the press or printer that holds the paper and guides it through the press. Also, the edge of the paper so held.
Extra space between pages in a layout. Gutters can appear either between the top and bottom of two adjacent pages or between two sides of adjacent pages. Gutters are often used because of the binding or layout requirements of a job: for example, to add space at the top or bottom of each page or to allow for the grind-off taken when a book is perfect bound. Gutters are the white spaces which appear between columns of type. Gutter widths should be wide enough to clearly define columns and narrow enough to not lose the reader. Gutters are also placed between multiple images on a press sheet for a variety of finishing processes.
A very thin typographic rule.
The rendering of an image in a series of dots whose size differs relative to the tonal density of the image. When seen from a normal viewing distance, and without magnification, the dots are seen as areas of differing tonal values. See also AM screening, FM screening, screening and frequency.
The spacing of the dot matrix in a halftone image, usually measured in lines per inch (lpi). More correctly referred to as the frequency or halftone frequency.
The formula used to determine the optimum graphic resolution (pixels per inch) of an image based on the screen frequency used on the reproduction device. The image resolution in pixels per inch should be twice the screen frequency of the reproduction device at actual reproduction size. For example, a photo to be reproduced at 150 lpi on the press should be scanned at 300 ppi at reproduction size.
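As a rough illustration of the arithmetic described above, the following Python sketch computes the scanning resolution for a given screen frequency. The quality factor of 2 comes from the entry itself; the scaling-factor parameter is an added assumption for handling enlargements and reductions.

```python
def required_scan_ppi(screen_lpi, scaling_factor=1.0, quality_factor=2.0):
    """Scanning resolution needed, following the 2x rule described above.

    screen_lpi: halftone screen frequency of the output device (lpi)
    scaling_factor: reproduction size divided by original size
    quality_factor: commonly 2.0; some shops relax this to 1.5
    """
    return screen_lpi * quality_factor * scaling_factor

print(required_scan_ppi(150))                       # 300.0 ppi at 100% size
print(required_scan_ppi(150, scaling_factor=2.0))   # 600.0 ppi for a 200% enlargement
```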
A light line around object edges in an image, produced by the Unsharp Masking (sharpening) technique. This technique uses the contrast between the edges of tonal areas in a continuous tone image as a basis for applying the halo. The halo creates a visual separation between the tonal areas and makes the image look sharper or more in focus.
To place characters outside the left margin.
A paragraph style in which the left margin of the first line extends beyond the left margin of subsequent lines, or put another way, all subsequent lines are indented more than the first line of the paragraph. Bulleted and numbered items are visually most effective when they use hanging indents.
Images intended for reproduction and which have been supplied as prints rather than digital files. See also soft copy.
Harlequin Precision Screening (HPS)
A set of screening algorithms developed by Harlequin Incorporated that precisely controls the accuracy of screen angles and frequency to reduce moiré patterns.
A head or headline is an enlarged phrase which gives the reader a preview of the content to follow. Heads are very important elements because they motivate the reader to continue reading the associated material.
The information about a publication, such as its title, date, issue, or page number, is a header when it consistently appears at the top of each page of the document.
The title of an article or story.
Heavy Ink Coverage
When over 30% of a sheet has ink coverage on it, the order is considered to have heavy ink coverage.
A light image that is intentionally lacking in shadow detail.
The lightest (brightest) areas of an image; usually refers to areas with less than or equal to a 10% halftone dot. Areas with no visible halftone dot (like sunlight reflecting off a chrome bumper) are known as specular highlights (specular light being the opposite of diffuse light). Refer to shadows, midtones.
A chart displaying the tonal ranges present in an image as a series of vertical bars. A graphic representation of the distribution of light and dark pixels in an image, which provides the information necessary to make tonal adjustments. In Adobe Photoshop, it’s accessed via the “Image” menu.
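Although histograms are normally viewed in an image editor, the underlying count is simple; the sketch below (plain Python, assuming a flat list of 8-bit greyscale values) tallies how many pixels fall at each of the 256 levels.

```python
def histogram(pixels, levels=256):
    """Count how many pixels sit at each tonal level of an 8-bit greyscale image."""
    counts = [0] * levels
    for value in pixels:
        counts[value] += 1
    return counts

# A tiny sample "image": mostly dark values with one bright highlight
sample = [12, 12, 30, 30, 30, 200, 12, 45, 45]
h = histogram(sample)
print(h[12], h[30], h[200])   # 3 3 1
```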
The visual attribute of a colour that allows it to be classified as red, blue, yellow or any intermediate between contiguous pairs of these colours.
In typographical usage, a hyphen is placed at the end of the syllable that remains on the first line when words are too long to fit on a single line and are broken between two lines. Hyphenation can be automatic in page layout programs but this hyphenation is based on a rudimentary hyphenation dictionary contained in the layout software. For quality work, hyphenation should be corrected manually to repair bad word breaks and enhance copyfitting. Hyphenation can also be turned off if no hyphens are preferred.
The hyphenation zone is the space near the column’s right edge which will allow hyphenation. Long hyphenation zones result in fewer word splits than short hyphenation zones.
In graphical environments, a small graphic image displayed on the screen to represent an object that can be manipulated by the user.
The area of a printed piece wherein an image can be placed without danger of being marred by accidental cropping, folding or other finishing processes. Sometimes referred to as a safety area.
A digital recording device that uses a laser to image photosensitive film or paper. Imagesetters are used for creating artwork for reproduction. Most imagesetters are PostScript-compatible and use a dedicated raster image processor (RIP) to process the digital information into code to drive the laser.
The process of producing a film or paper copy of a digital file from an output device (such as an imagesetter or printer).
To bring information into the current program from another location or program. See export.
The process of arranging individual pages on a digital or analog form to construct a signature so the pages will be in proper sequence after printing, folding, and binding. Also the placement of multiple images on a form or press sheet to produce the job in the most economical manner.
The transfer of an individual image to an individual side of a sheet of paper. Each colour represents a separate impression, so printing one side of one sheet in four colour process equals four impressions.
Cylinder of an offset press that squeezes the paper against the blanket cylinder carrying the image. (In toner devices, this is accomplished by a transfer cylinder or belt).
The most common indent is at the beginning of a paragraph when the first line is “set-in” from the left edge of the column. An indent can be placed on the left side only (as in paragraph beginnings) or on the left and right sides of copy (when a block of text needs to be set apart from the rest of the paragraph).
Indexed colour image
A single-channel image, with 8 bits of colour information per pixel. The index is a colour lookup table containing up to 256 colours. A pixel colour system that uses a limited number of distinct colours (usually 256 colours) to represent a digital image, rather than describing a colour using bit depth.
A non-impact printer that fires tiny drops of ink at the paper to create characters or graphics.
The space between the binding edge of the page and the text.
See optical resolution.
In the image manipulation context, this is the increase of image resolution by the addition of new pixels throughout the image, the colours of which are based on neighbouring pixels.
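The sketch below is an illustration only, not how any particular program implements it: it doubles the width of one row of greyscale pixels by inserting new pixels averaged from their neighbours, which is the basic idea behind linear interpolation.

```python
def interpolate_row(row):
    """Double a row of greyscale values by inserting neighbour averages.

    No new detail is created, only intermediate tones between existing pixels.
    """
    out = []
    for i, value in enumerate(row):
        out.append(value)
        if i + 1 < len(row):
            out.append((value + row[i + 1]) / 2)  # new pixel between neighbours
    return out

print(interpolate_row([0, 100, 200]))  # [0, 50.0, 100, 150.0, 200]
```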
A display method that shows three-dimensional objects with height and width but without the change in perspective that would be added by depth.
Industry standard colour reference target used to calibrate input and output devices.
A type style in which the characters are slanted upward to the right. Usually, italic characters have different shapes than their Roman counterparts.
This is the abbreviation for International Typeface Corporation, which licenses many of the typefaces used in computerized graphic design. ITC fonts are identical to the typefaces used on phototypesetting equipment and based on the original “hot type” font designs. They are considered higher quality typographic forms because they have retained their letterform integrity through the years and are more reliable when transferred from computer to outputting devices.
The stair-stepped effect of bit-mapped type and graphics caused when square pixels represent diagonal or curved lines. See anti-aliasing.
Small vibrations or fluctuations in a displayed image caused by irregularities in the display signal.
In prepress and printing, the collection of files associated with a single project including page layout files, image files, etc.
Joint Photographic Experts Group. An organization that has defined various file compression techniques. An image compression standard describing a type of compression used on photographic images. JPEG compression discards data, but does it in an intelligent fashion that results in a much smaller file size with very little loss in quality. JPEG is also used to refer to files that have been compressed using the JPEG standard.
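Most imaging tools expose the quality/size trade-off directly. The sketch below assumes the Pillow library and hypothetical file names, neither of which is specified by this entry; a lower quality setting discards more data and yields a smaller file.

```python
from PIL import Image

img = Image.open("photo.tif").convert("RGB")    # hypothetical source image
img.save("photo_q85.jpg", "JPEG", quality=85)   # mild compression, larger file
img.save("photo_q30.jpg", "JPEG", quality=30)   # heavier compression, smaller file
```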
A line of text that indicates the page where an article or story continues, or the carryover line on the subsequent page that identifies the article or story being continued. When an article is continued from one page to another, a jumpline is placed at the end of the first page to identify where the article continues. A jumpline should also appear at the beginning of the continuation page to tell the reader where the article started.
Lines of type in a column that are flush with both the right and left column margins. If only one side of the text column is flush, it is said to be right-justified or left-justified. Left-justified columns are also called ragged-right because the right side, which is not justified, tends to be uneven. When type is justified to both the right and left margins, wordspacing and letterspacing must be varied to allow such alignment.
To selectively adjust the space between character pairs to improve readability or to achieve balanced, proportional type. Refer to kerning.
The number of pixels sampled as a unit during image manipulation and sharpening processes.
Kerning refers to improving the appearance of type by adjusting the spacing between selected pairs of letters. The most problematic pairs of letters are AV, AY, FA, AW, PA, and AT. Kerning is more important in large type and all uppercase text.
A pair of characters for which correct kerning is automatically applied. Kerning pairs are defined in kerning tables built into most fonts.
A table built into most fonts containing kerning pairs.
Keyboarding is the process of typing in the raw text (headlines, subheads and body copy) for a publication in preparation for turning it over to a graphic designer. The most commonly used program for keyboarding is MS Word.
A thin border or frame surrounding a colour area. A thin border around a picture or a box indicating where to place pictures. In digital files, the keylines are often vector objects while photographs are usually bitmap images.
A kicker is a short phrase or key word which introduces a headline. Kickers can also be used to relate a headline to a particular portion of a publication.
A condition that exists when abutting colours in a knockout come together with no trapping, framing, or keylines, also known as a net fit.
The process of removing the portion of a background colour that lies underneath an object so that the object colour will not mix with the background colour during printing. A printing technique that prints overlapping objects without mixing inks. The ink for the underlying element does not print (knocks out) in the area where the objects overlap. Opposite of overprinting.
Extremely strong paper used when durability is important. May be unbleached and brown like a grocery bag, bleached, or bleached and dyed.
Although a number of devices employ laser technology to print images, this normally refers to printers which use dry toner and the xerographic printing process.
The position of print on a sheet of paper.
The overall plan or design of a document or document page.
A leader is a repeating symbol used in combination with tab stops to draw a reader from one part of a line of text to another. Dotted and dashed lines are the most common leader elements.
Leading is the vertical space relationship between one line of type and the next. Computer graphics programs normally default to +2 points of leading for any given point size selected (e.g. 10 point type uses 12 points of leading and 14 point type uses 16 points of leading). In general, the larger a point size gets, the better it will look with reduced leading. Increased and decreased leading can also be used for copyfitting purposes. This spacing was originally achieved with lead type by placing slugs of lead between lines of type. Pronounced Ledding.
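The default described above (point size plus two points) can be expressed as a trivial rule of thumb; the helper below is only an illustration of that convention.

```python
def default_leading(point_size, extra=2):
    """Leading as point size plus a fixed increment, the common software default."""
    return point_size + extra

print(default_leading(10))  # 12, i.e. "10 on 12"
print(default_leading(14))  # 16
```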
Page with printing on both sides.
A paragraph of text in which the left edge is flush and the right edge is ragged. Also called ragged right.
The space between characters.
The words of a language and their definitions.
In typography, two or more letters merged or tied into one.
An attribute of object colour where the object reflects or transmits more of the incident light.
A gradient fill that is projected from one point to another in a straight line (as opposed to a radial fill, where the gradient is projected from the center outward in a concentric manner).
An image that contains only black and white with no shades of gray. Some examples of line art are type matter, solid black and white logos and pen-and-ink drawings.
Typographic term for the distance from baseline to baseline of lines of text.
Particles of paper dust which can degrade print quality.
Printing process which originally utilized the oil repellant properties of water and the water repellant properties of oil to separate the printing and non-printing areas of an image. Subsequently, waterless lithography has been developed, where physical properties of the printing plates are used to repel ink in the non-printing areas and attract ink in the printing areas.
A page’s live area is the part between borders and margins where most text and graphics will appear.
Live Art Files
The original electronic file used to create and identify an EPS or TIFF image. This can be an original drawing that has been created in FreeHand, Illustrator or CorelDraw or a scanned image. Live art files are necessary inclusions in processing electronic documents because they are the links needed to produce high resolution output.
A logo is a stylized name of a company or organization set in a unique way and often accompanied by an illustration or icon. A successful logo should be reproducible both in its original colour design and in a black and white version.
Image compression that functions by removing minor tonal and/or colour variations, causing visible loss of detail at high compression ratios.
Noncapital letters, such as a, b, c, and so on. The name is derived from the practice of placing these letters in the bottom (lower) case of a pair of type cases. Compare uppercase.
A dark image that is intentionally lacking in highlight detail.
An image or screen in relatively coarse detail. In raster-oriented printing or displays, low resolution has to do with the number of pixels or dots used to reproduce the image. The fewer the pixels per inch, the lower the resolution.
Lines per inch – the imperial unit in which halftone frequency is measured. Lpcm (lines per centimetre) is the metric equivalent.
The apparent brightness/darkness of an image adjusted to account for the inherent tendencies of the human eye to perceive some colour values as being brighter than others despite their similar rates of light transmission and reflection.
The Lempel-Ziv-Welch image compression technique.
The distance from the edge of the paper to the image area occupied by text and/or graphics.
An analog or digital image used to eliminate unwanted portions of an image. An analog mask could consist of a negative film, hand cut ruby or amber film or simply photographically opaque paper. Digital mask files utilize a clipping path and are superimposed over an image to define which portions of the image should print and which should not. Image pixels inside the clipping path print; pixels outside the clipping path do not. A similar mask can be used to control the area of a graphic affected by such operations as colour correction, filters, tool effects, etc.
Artwork used as an original from which subsequent reproductions are made.
A nonprinting page in certain page layout programs that helps to define the basic layout and format of subsequent document pages. A master page can contain headers, footers, page numbers, graphic elements, etc.
The list of staff, owners, and subscription information for a periodical.
Not glossy, such as a matte varnish or a matte laminate.
In traditional publishing, one or more artboards with type galleys, line art, “for-position-only” photostats, and tissue overlays to indicate colour. In electronic publishing, the final camera-ready page with position-only stats keyed to flat art to be stripped in by the printer.
Those parts of an image with colours of intermediate value–that is, in the 25% to 75% value range.
The unwanted result of incorrectly aligned colours on a finished printed piece. Misregistration can be caused by many factors, including paper stretch and improper plate alignment. Registration tolerances vary according to the device on which an image is printed, but no device is capable of consistently producing perfect registration. Trapping (choking or spreading images that require tight registration) can compensate for misregistration. Imaging plates directly on the press eliminates the problem of plate misalignment.
A proof used to ensure the correct page numbers, orientation, and dimensions are used in the final printout of an imposition layout. A good mockup is as close as possible to the finished product, and is of great value in showing trimming, folding and assembling in pieces involving complicated finishing operations, such as boxes.
Interference caused by incorrect halftone screen angles, which results in an undesirable pattern in multi-colour printing. Images such as plaid or checkered fabrics can also interfere with the angles of the halftone screens. One advantage of stochastic screening is the reduction of obvious and subtle moiré patterns.
The colour attributes of an image made up of one or more tones of one hue.
Character spacing that is the same for all characters regardless of their shape or width (such as typewriter spacing).
The process of making a composite picture by bringing together into a single composition a number of different pictures or parts of pictures and arranging these to form a blended whole.
A texture similar to orange peel sometimes caused by sharpening. It is particularly visible in flat areas such as sky or skin.
A digital image where the components of the image are spread over more than one digital channel. Refer to alpha channel.
A polyester film product developed by DuPont often used as the base for magnetically coated storage media.
The typographic design of a publication’s name as it appears on the cover of a publication (also called a masthead). A nameplate is the distinctive portion of the front of any publication which usually contains the “name” of the publication, a logo, date and volume information, and remains consistent in style from one issue to the next.
No Carbon Required paper. A special type of paper used for multi-copy forms. NCR paper has special coatings that combine under the pressure of a writing implement to produce an image where the pressure is applied (the first sheet of a 3 part NCR form would be coated on the back, the second sheet coated front and back, and the third sheet coated on the front). This paper comes in standard colour sequences for 2 part, 3 part, 4 part and 5 part forms. Forms are commonly padded at the head (glued at the top) in sets.
A type specification in which there is less space from baseline to baseline than the size of the type itself (for example 40-point type with 38-point leading).
A type specification in which the space between characters is reduced beyond the default setting either by kerning or tracking.
A measurement of the lightness or darkness of a colour without reference to its hue or chroma. A neutral density of zero (0.00) is the lightest value possible and is equivalent to pure white; 3.294 is roughly equivalent to 100% of each of the CMYK components.
Any hue dulled by the addition of white, gray, black, or some of the complementary colour.
A coarse, absorbent, low-grade paper used for printing newspapers.
In the scanning context, this refers to random, incorrectly read pixel values, normally due to electrical interference or device instability. In an image, pixels with randomly distributed colour values. Adobe Photoshop provides filters to apply noise to an image.
In typesetting, a special space character placed between two words to keep the words from being separated by a line break.
Any printer that makes marks on the paper without striking it mechanically. The most common non-impact printers are ink-jet, thermal, and laser.
Image compression without loss of quality.
In graphics, a distinct entity. In many object-oriented applications, objects are framed by tiny square handles that enable manipulation.
Computer vector graphics based on the use of construction elements (graphic primitives) such as lines, curves, circles, and squares. These construction elements are defined mathematically rather than specifying the colour and position of each pixel and are combined to form complex images and text. This contrasts to bitmap images, which are composed of individual pixels.
A text style created by slanting a roman font to simulate italics.
Optical Character Recognition. The analysis of scanned data to recognize characters so that these can be converted into editable text. This is a method used to convert hard copy into text files. Because OCR depends upon comparing scanned images to a library of defined images, the attempt to scan faxes and other low grade documents can result in garbled text. OCR scans are far from perfect and require subsequent editing and clean-up.
The distance of an object from some (usually standard) reference point. Also, the transfer of ink from one surface to another: the undesirable effect produced when the pressure from cutting a printed job before the ink is dry causes the ink from the front of one sheet to transfer to the back of the sheet above it.
The most common commercial, high-volume, ink-based printing process, in which ink adhering to image areas of a lithographic plate is transferred (offset) to a blanket cylinder before being applied to paper or other substrate.
Numerals positioned so that the body sits on the baseline, creating ascenders and descenders.
One-up, two-up, etc.
The number of identical images on a press sheet. Multiples of the same image are often run on the same press sheet to shorten the press time required to produce a job. This results in savings to the customer. When a job is quoted, it is planned to print on the most economical cut of the paper with the most economical number of multiple images possible on that cut. Multiples of different images placed on a press sheet are described as x-on. For example, two business cards, each having a different name, but running on the same press sheet, would be described as one-up, two-on, while two business cards, each having the same name, would be described as two-up.
A material characteristic that prevents or restricts the transmission of light. Opacity also refers to the apparent transparency (or lack thereof) of a digital image (such as a layer) in a graphics program.
Open Prepress Interface. A set of PostScript language comments defining and specifying the placement of high-resolution images in PostScript files on an electronic page layout. A process used on desktop prepress systems where high resolution scans are made and specially linked low resolution images are used for placement in the layout. The linked low-res images are automatically swapped with the high-res images when the file is processed by the raster image processor. (Similar to APR.)
In a scanner, a measurement of the amount of data captured for a given area of the scanned image, typically expressed in dots per inch (DPI). It’s important to note that the optical resolution refers to the true resolution of the scanner (usually 300 or 600 dpi for desktop models). When a scanner claims to be able to scan “up to” 2400 or 3600 dpi, this additional resolution is accomplished via software calculations, and is known as interpolated resolution. Refer to Resolution.
The first line of a paragraph that falls at the bottom of a text column and is separated from the remainder of the paragraph by a page or column break. Also, a word or very short line that appears by itself at the end of a paragraph. Compare widow. When a single word or line of type at the end of a paragraph falls at the beginning of the next page or column, and is thus separated from the rest of the text, it is a widow.
The vector information that describes the shape of a letter. Converting fonts to outlines in Adobe Illustrator converts the letters to vector outlines and preserves the letter shapes even if the computer opening the file does not have the originating fonts installed. In the field of graphic design, this is standard practice in preparing a file to be transferred from one computer to another.
A font stored in a computer or printer as a set of templates from which the font characters, at various sizes, can be drawn.
Refer to misregistration.
Any hardware equipment, such as a laser printer or imagesetter, that images computer-generated text or graphics onto a substrate, such as film or paper.
In a book or magazine, the space between the fore-edge (the edge opposite the spine) trim and the text.
A font style in which a horizontal line appears above characters. Also, a brief tag line over a headline that categorizes the story.
The opposite of knockout. A printing technique in which all overlapping inks print on top of each other, and transparent inks blend to form a new colour. See also knockout, trapping.
The point at which the flow of text in a document moves to the top of a new page.
A programming language, such as PostScript, that is used to describe output to a printer or display device.
Pages per minute (PPM)
A rating of printer output, especially used with laser printers.
The process of dividing a document into pages for printing and/or of adding page numbers to the header or footer of each page.
A subset of the colour lookup table which establishes the number of colours that can be displayed on the screen at a particular time. Also a user-defined set of colours used in a graphics program.
To scan horizontally or vertically to bring off-screen portions of a display or image into view.
Pantone Matching System (PMS)
The most common colour matching system used by commercial printers for spot colour printing. The Pantone Matching System is composed of ink formulas for mixing colours by weight using a standard set of basic colours. Swatch books with over 500 numbered colours allow the customer to specify the desired colour exactly, and allow the printer to maintain the same colour from press run to press run.
A thin, flexible material made from a pulp prepared from rags, wood, or other fibrous material, and used for writing or printing on, for packaging, as structural material, and so on.
Colours resulting from adding white pigment to neutralized colours.
Traditionally, the process of assembling mechanicals by pasting galleys and line art in place. In desktop publishing, traditional pasteup has largely been replaced by electronic page assembly.
In file storage, the route followed by the operating system to find, store, or retrieve files on a disk. In graphics, the vector description of an accumulation of line segments or curves. Vector paths can be filled, stroked, used as masks and clipping paths and type may be made to follow or fit within such paths.
A display method that shows objects in three dimensions with the depth aspect rendered according to its perceived relative distance or position.
A method created by Kodak for scanning and storing photographic images on CD ROM.
A font made up of non-standard characters such as arrows, map symbols, bullets, and dingbats.
In typography, a unit of measurement equal to 12 points or approximately one-sixth inch.
The pulling off of particles from a paper’s surface during printing. Particles accumulate on the plate or blanket, causing printing defects.
A file-format for encoding both bitmapped and object-oriented graphical images. A format for defining images and drawings on the Macintosh platform. PICT 2 supports 24-bit colour. Pict files are not universally supported by printing devices, and are not recommended for graphic reproduction.
A clear rectangle in a negative into which a halftone negative would be stripped by the printer. This has become unnecessary because of the function of page layout programs in combining photos and text prior to the creation of film, plates or direct to press imaging.
A substance, usually a powder, added to a liquid binder to give colour to paints or inks. Some properties of pigments include lightfastness (non-permanent pigments are known as fugitive), transparency and hue.
A measure of fixed-width fonts that describes the number of characters that fit in a horizontal inch. Refer to Monospacing.
Picture element. A tiny rectangular element in the rectilinear grid of the computer screen that is either “painted” on or off to form an image or character. If a pixel is black-and-white, it can be encoded with only 1 bit of information. If the pixel must represent a larger range of colours or shades of gray, the pixel must be encoded with more bits of information as follows: 2 bits for four colours or shades of gray, 4 bits for sixteen colours or shades of gray, and so on. An image of 2 colours is called a bitmap; an image of more than 2 colours is called a pixel map.
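The relationship between bits per pixel and the number of representable colours or grey shades is a power of two, as this small sketch shows.

```python
def colours_for_bit_depth(bits):
    """Number of distinct values a pixel of the given bit depth can hold."""
    return 2 ** bits

for bits in (1, 2, 4, 8, 24):
    print(bits, "bits:", colours_for_bit_depth(bits), "colours or shades of gray")
# 1 bit gives 2 (a bitmap), 8 bits gives 256, 24 bits gives 16,777,216
```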
The data structure of a colour graphic which includes the colour, resolution, dimensions, storage information, and number of bits used to describe each pixel. When only 1 bit per pixel is used, the data structure is called a bitmap.
A means of reducing image resolution by simply deleting pixels throughout the image.
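Taken literally, pixel skipping amounts to keeping every nth pixel in each direction and discarding the rest, with no averaging or interpolation; the sketch below shows the idea on a toy image.

```python
def skip_pixels(image, step):
    """Reduce resolution by keeping every `step`-th pixel in each direction.

    image: a list of rows, each row a list of pixel values.
    """
    return [row[::step] for row in image[::step]]

art = [
    [10, 20, 30, 40],
    [50, 60, 70, 80],
    [90, 91, 92, 93],
    [94, 95, 96, 97],
]
print(skip_pixels(art, 2))  # [[10, 30], [90, 92]]
```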
Cylinder on a rotary press to which the metal printing plate is attached.
An image carrier made of polyester or aluminum used on a press. Plates are coated with a photochemical emulsion which is exposed by a light source in a vacuum contact frame, or by a laser in the case of computer-to-plate or direct-to-press applications. The exposed image becomes ink receptive and the unexposed area repels ink. During offset printing the image is transferred from the plate to a rubber blanket which then transfers the image to the paper (the soft blanket more easily conforms to the surface of the paper).
The process of exposing and developing the photochemical plate used to transfer the image on an offset press.
The cylinder used in most impact printers and typewriters around which the paper wraps and against which the print mechanism strikes the paper. In letterpress printing and foiling, the surface of the press against which the type or die presses, and which allows the transfer of the image.
A type of removable hard disk drive. Also commonly referred to as a Syquest Drive. No longer in common usage.
To create a graphic or a diagram by connecting points representing values defined by their positions in relation to the x (horizontal) axis, y (vertical) axis, and z (depth) axis.
Any device used to draw charts, diagrams, and other line-based graphics.
(Pantone Matching System) A commonly used system for identifying specific ink colours. The North American printing industry standard for defining non-process colours. Refer to Pantone Matching System.
Acronym for photomechanical transfer. PMTs are created by exposing a photosensitive paper image carrier through a negative, positive or on a camera; sandwiching this carrier with a receptor paper and processing the sandwich through a special processor. The latent, unstable image on the carrier is transferred to the receiver paper which results in a high quality, stable line or halftone image. Prior to the advent of page layout programs and imagesetters, images were commonly made into PMTs for paste-up purposes.
A typographical unit of measure equal to approximately 1/72 inch, often used to indicate the height of type characters or the amount of space between lines of text (leading). There are 12 points in a pica.
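The relationships above (12 points to a pica, roughly 72 points to an inch) lend themselves to simple conversions. The sketch below assumes the desktop publishing convention of exactly 72 points per inch.

```python
POINTS_PER_PICA = 12
POINTS_PER_INCH = 72   # the desktop publishing point; traditional points are slightly smaller

def picas_to_points(picas):
    return picas * POINTS_PER_PICA

def inches_to_points(inches):
    return inches * POINTS_PER_INCH

print(picas_to_points(1))      # 12
print(inches_to_points(8.5))   # 612.0, the width of a letter-size page in points
```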
Any two-dimensional closed shape consisting of three or more sides. Triangles, rectangles, hexagons, octagons, etc. are all polygons.
A line consisting of two or more connected segments.
A page orientation in which the horizontal dimension of the image follows the narrower dimension of a rectangular page.
A computer display with a shape higher than it is wide, used to display an 8.5-by-11-inch page at full size in portrait mode.
To limit all the values in a continuous tone image to some smaller number, resulting in the conversion of continuous tone data into a series of visible tonal steps or bands.
A page-description language from Adobe Systems that offers flexible font capability and high-quality graphics. PostScript is the copyrighted term for the Page Description Language owned by the Adobe Corporation. PostScript defines images as vector (outline) information permitting extreme flexibility in scaling, colour, shading, position, rotation, etc.
PostScript Printer Description file
Acronym for PostScript Printer Description, a file format developed by Adobe Systems, Inc., that contains information enabling application software to optimize PostScript printing by utilizing the printer properties described for each type of designated printer.
Pixels per Inch. Units of measurement for bitmapped or pixel mapped images. The number of pixels in a linear inch (as opposed to a square inch). Therefore, a 72-ppi monitor displays 72 pixels per linear inch, or 5,184 (72 x 72) pixels per square inch. PPI is technically the correct terminology for describing image and monitor resolution, as opposed to DPI.
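The linear-versus-square distinction made above works out as simple multiplication, sketched here.

```python
def pixels_per_square_inch(ppi):
    """Pixels contained in one square inch at the given linear resolution."""
    return ppi * ppi

print(pixels_per_square_inch(72))    # 5184, matching the monitor example above
print(pixels_per_square_inch(300))   # 90000
```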
See Pages per minute.
Evaluating an electronic file before sending it to an imagesetter, platesetter, large format printer, direct-to-press or any other reproduction device, specifically for the purpose of detecting and correcting any problems that would render the resulting film, plates, or large format prints or press sheets unusable. A typical preflight check would involve ensuring that all linked images and fonts are supplied, that all information needed for output is available, that images destined for process colour printing are CMYK, not RGB, that image file formats are supported and are high resolution, that trapping has been applied where necessary and only the desired colours have been applied to images. Preflighting a job is a chargeable service designed to save the customer money by preventing wasted materials and labour due to deficiencies in customer supplied files.
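A fragment of what an automated check might look like is sketched below. It assumes the Pillow library and a hypothetical file name, and flags only two of the problems named above (RGB mode and low resolution); a real preflight covers far more, including fonts, links, trapping and colour usage.

```python
from PIL import Image

def preflight_image(path, min_ppi=300):
    """Flag two common problems: RGB mode and low embedded resolution.

    Returns a list of warning strings; an empty list means these checks passed.
    The resolution check relies on the file's embedded dpi tag, if present.
    """
    warnings = []
    img = Image.open(path)
    if img.mode == "RGB":
        warnings.append(path + ": RGB image, convert to CMYK for process printing")
    dpi = img.info.get("dpi", (0, 0))[0]
    if dpi and dpi < min_ppi:
        warnings.append(path + ": resolution below " + str(min_ppi) + " ppi")
    return warnings

for warning in preflight_image("cover_photo.tif"):   # hypothetical file name
    print(warning)
```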
Any of the operations required to prepare a digital file, or mechanical artwork for printing, including the production of plates or the transfer of files to the imaging device in computer-to-plate or direct-to-press processes.
Prepress service provider
In the publishing industry, the generic term for colour separation houses, commercial printers, electronic prepress houses, service bureaus, and in-plant printers or any company that provides prepress operations. Refer to prepress.
A proof actually run on a press, using the printing inks and substrates as specified for the finished job. Because the printer must keep the job on the press while the customer examines the press proof for approval, customers are generally charged press time until the approval is received. This can be a very expensive proofing method, and is discouraged, except when the cost is justified in the case of an extremely prestigious job.
In sheet-fed printing, the printed sheet of paper that comes off the press.
The hues from which other colours can be mixed. The additive primaries (for projected light) are red, green, and blue; when added together, these hues form white light. The standard pigment primaries are red, yellow and blue. The subtractive primaries used in four colour process printing are cyan, magenta, yellow and black.
A shape, such as a line, curve, circle, or polygon, that can be drawn, stored, or manipulated as a discrete entity by a graphics program.
An area of memory to which print output can be sent for temporary storage until the printer is ready to handle it. A print buffer can be located in a computer’s random-access memory, in the printer, on a disk, or in a special memory unit between the computer and the printer. Refer to Print Spooler.
A computer peripheral that puts computer-generated text and images on paper or other medium.
The processing hardware in a printer, including the raster image processor, the memory, and the microprocessor.
The portion of a printer that performs the actual printing. Many printer engines are self-contained units that are easily replaced.
A font residing in or intended for a printer. Printer fonts differ from screen fonts, which are intended for displaying characters on a computer screen. Also known as Type 1 Fonts or Outline Fonts, printer fonts are PostScript language programs that mathematically describe the appearance of each character in a font, using lines and curves. Printer fonts generate smooth output on-screen and on a PostScript printer at any size. You must have a printer font installed for any typeface you print. This category can include TrueType fonts, as well as PostScript fonts.
The marks printed on a press sheet or film to aid in positioning the print area on the press sheet, checking the quality of the printed image, and trimming the final pages. Printer’s marks may include calibration bars, crop marks, and registration marks.
A pair of pages positioned across a fold from each other on the press sheet. The pages in a printer spread are positioned so that when the final press sheets are collated and folded, the pages will be in the proper order, as opposed to reader spreads, which have consecutive pages on the same sheet to simplify proofreading. Beware of setting up a saddle-stitched document in reader spreads, unless you plan to convert it to printer spreads prior to submitting to a printer. Some programs do not allow linked documents to easily be converted to printer spreads so it can be expensive for the printer to spend the extra time to impose the pages properly for printing.
The four transparent inks (cyan, magenta, yellow, and black) used in four-colour process printing. See also Colour separation.
The part of a printer that mechanically controls the imprinting of characters on paper. The print head can consist of pins that strike a ribbon, ink jets, or pins that pass an electrostatic charge to the paper.
The metallic or polyester sheets, usually imaged from film negatives or directly from a digital file, used to transfer ink to paper (actually, to the printing blanket) on a commercial printing press. One plate is required for each colour being printed.
Any number of characters, images, pages, or documents sent to a printer as a single unit.
The clarity of printer output, partly determined by resolution, but subject to many factors such as moisture content of the paper, screen frequency designated in the print file, etc.
A computer dedicated to managing a network printer.
Software that intercepts a print job on its way to the printer and sends it to a disk or memory where it can be held until the printer is ready to process it.
Any one of the subtractive primary colours (cyan, magenta, yellow or black). Process colour may also refer to the technique of creating full colour images by blending percentages of cyan, magenta, yellow, and black inks. See also spot colour.
Manipulating data within a computer system. In a RIP system, processing means rasterizing the data and outputting it to film or other medium.
The colour characteristics of an input or output device, used by a colour management system to enhance colour fidelity.
A reasonably accurate representation of how a finished page is intended to look. Proofs can be in black and white or colour.
One side of a leaf of a book, magazine, newspaper, letter, and so on.
To check typeset material for spelling, punctuation, and basic document layout (alignment of elements, etc.).
A set of characters with a variable amount of horizontal space allotted to each. For example, the letter i has less space allotted to it than the letter w.
A type of character spacing in which the horizontal space each character occupies is proportional to the width of the character.
A sentence or phrase excerpted from the body copy and set in large type, used to break up running text and draw the reader’s attention to the page. Also known as a blurb, breakout, or callout. Pull-quotes (also called out-quotes) are short phrases or sentences taken from body copy and emphasized by enlargement, boxing, or colour background to highlight surrounding content.
Beaten and refined vegetable fibers (cellulose) to which chemicals and fillers are added, used to make paper.
A multiplication factor applied to output screen ruling to calculate scanning resolution for optimum output quality. Refer to halftoning factor.
Tonal value of dot, located approximately halfway between highlight and midtone. Tones between shadow and midtones are known as 3/4 tones and those between highlight and midtones are known as 1/4 tones.
Any number (positive or negative, whole or fractional) used to indicate a value.
A stored arrangement of computer data or programs, waiting to be processed (usually in the order in which they were received).
The irregularity along the left or right edge of a column of text.
Type which is set with an uneven alignment of characters on the left or right side has been set ragged. A common alignment choice is “flush left/ragged right” type. Because of its poor legibility, “flush right/ragged left” type alignment is rarely used.
Text alignment that is flush on the left margin and uneven on the right.
Paper containing at least 25% rag or cotton fiber pulp.
A fill that is projected from a center point outward in all directions.
The illusion of a gradual change from one colour to another, like the effect of an airbrush, created in the software by a series of discrete steps.
A rectangular pattern of lines. A synonym for grid. Sometimes used to refer to the grid of addressable positions in an output device.
A method of generating graphics in which images are stored as multitudes of small, independently controlled dots (pixels) arranged in rows and columns.
An image formed by patterns of light and dark pixels in a rectangular array. See also bitmap.
Raster image processor (RIP)
A hardware and/or software device that converts vector graphics and/or text to raster images.
The process of converting digital information into pixels. Also the process used by an imagesetter to translate PostScript files before they are imaged to film or paper. See also RIP.
A layout made in two-page spreads as readers would see them. For example, an 11 by 17 reader’s spread of a 16-page manual would have pages 2 and 3 next to each other and so on. Refer to Printer Spread.
500 sheets of paper.
The right-hand page of an open book or spread. Opposite of verso page.
The change in direction of particular wavelengths of light by bending or throwing back off a surface.
Artwork prepared so that it may be photographed or input into a computer by scanning or digital capture.
In desktop publishing, to change the way a page looks by altering its layout, fonts, etc. In data storage, to prepare a disk that already contains data for reuse, effectively destroying the existing data.
The bending of light as it passes from one medium to another, separating the light into different wavelengths that show their hues.
The process of precisely aligning elements or superimposing layers in a document or a graphic so that everything will print in the correct relative position. The precise alignment of the various colour plates used when printing. In order to properly register plates on press, printers rely on the placement of registration marks on film or mechanicals. Refer to Misregistration.
Marks placed on a page so that in printing, the elements or layers in a document can be arranged correctly with respect to each other. Figures (usually crossed lines and a circle) placed outside the trim page boundaries in colour separation overlays to provide a common element for proper alignment.
Recorder element. The minimum distance between two recorded points (spots) in an imagesetter.
The creation of an image containing geometric models, using colour and shading to give the image a realistic look.
To recalculate the page numbering in a document.
A new impression or a second or subsequent edition of a printed work. Also, the publication in one country of a work previously published in another.
A shortening of the term Graphic Reproduction.
A term used to define image resolution instead of ppi. Res 12 indicates 12 pixels per millimetre.
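Converting between Res notation and pixels per inch is a matter of multiplying by 25.4 millimetres per inch, as sketched below.

```python
MM_PER_INCH = 25.4

def res_to_ppi(res):
    """Convert a Res value (pixels per millimetre) to pixels per inch."""
    return res * MM_PER_INCH

print(res_to_ppi(12))  # 304.8, so Res 12 is roughly a 300 ppi scan
```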
An increase or reduction in the number of pixels in an image, required to change its resolution without altering its size. See also down-sampling and interpolation. Altering the amount of data in an image by changing its size and/or resolution. Reducing the file size of an image containing more information than is necessary for final output is an example of resampling, and is done in Photoshop by choosing “Image Size” from the “Image” menu. Attempting to increase the resolution of an image by resampling up will still produce an inferior image, despite interpolation. See also resolution.
To change the size of an image while maintaining its resolution.
The clarity or fineness of detail attained by a printer or monitor in producing an image. Resolution is often expressed as the number of pixels per inch in a displayed image, the number of dots per inch in printer output, or the number of bits per pixel. The amount of data available to represent graphic detail in a given area. In an image file or on a computer monitor, resolution refers to the number of pixels in a linear inch (PPI); on a printer, to the number of dots printed in a linear inch (DPI); on a scanner, to the number of samples saved for a given area of the scanned image; and in a halftone, to the number of lines of halftone dots per inch (LPI). See also optical resolution, screen resolution.
To edit an image to eliminate imperfections or simply to alter it. Retouching can be done by hand, by airbrush, or electronically by prepress software.
Type appearing in white or other light colour on a black or dark background. Sometimes called a knockout if the type is the colour of the paper.
Mirrored type or image.
To return to the last saved version of a document.
Acronym for red, green, blue. The colours of projected light which, when combined, simulate a subset of the visual spectrum. See also CMYK.
A colour monitor that receives red, green, and blue levels over separate lines.
Rich Text Format (RTF)
A Microsoft adaptation of Document Content Architecture (DCA) that is used for transferring formatted text documents between applications or platforms.
Right-reading, emulsion-side-down, page art
Art printed as a negative on film in such a way that the type reads correctly (left to right) when the emulsion side of the film is facing down.
Acronym for raster image processor, the part of an output device or imagesetter that converts digital information into dots on film or paper. See also Rasterize. A software program (usually resident on a laser printer or imagesetter) that interprets PostScript page information and translates it into data needed by the printing engine to produce printed dots.
Unsightly white space that seems to run through a text column when word spacing is too loose.
A type face or type style in which the characters are upright. Compare italic and oblique.
To change the angle of a graphic image or type. It is better to rotate an image in the native graphic program instead of the page layout program because this reduces the time it takes to rasterize the image in an output device.
A loosely sketched graphic design idea, usually in felt markers on tracing paper.
A series of items arranged horizontally within some type of framework or matrix. Compare column.
Rels (recorder elements) per inch. A measurement of the number of discrete steps that exposure units in imagesetting devices can make per inch.
See Rich Text Format.
In computer graphics, changing the shape of an object made up of connected lines by selecting a point on an anchored line and dragging it to a new location.
Two-layer acetate film of red or amber (Amberlith) emulsion on a clear base, used for hand cutting masks for platemaking.
A line printed above, below, or to the side of text or some other element. Rules can be created in a variety of thicknesses (referred to as rule weight), although the rule described as a hairline in many page layout programs is too thin to properly reproduce and should be avoided. Rules can be solid, screened, or vignetted in black or coloured ink.
In some application programs, a non-printing screen measuring tool extending horizontally across the page, marked off in inches or some other unit of measure, used to show line and column widths, tab settings, paragraph indents, and so on. Some programs may also employ a ruler along the vertical edge of the document.
In page composition, to position text so that it flows around an illustration or other items. Also called text wrap.
A subhead, usually in bold or italic type, which is part of a paragraph.
Paper qualities (strength, dimensional stability, cleanliness, and surface integrity) that determine how well a sheet performs on the press.
Running head, running foot
One or more lines of text in the top (head) or bottom (foot) margin area of a page, composed of one or more elements such as the page number, the name of the chapter, the date, and so on. See also Header, Footer.
Sans Serif Type
Sans serif typefaces have straight stems and cross-bars with no tiny extensions or decorations at the end of any letter part. Examples of common sans serif types are Helvetica, Franklin Gothic and Univers. A typeface in which the characters have no serifs (short lines or ornaments at the upper or lower end of character strokes). A sans serif typeface usually has a straightforward, geometric appearance. See also serif.
Purity or intensity of hue. The less saturated a colour is, the less visible the hue is. Desaturated colours are often described as washed-out.
Any font that can be scaled to produce characters of varying sizes.
The act of reducing or enlarging an image, while leaving the amount of image data intact.
Image created by the process of placing hardcopy in a device that projects light onto (reflective copy) or through (transmissive copy) the original and records that light as a bitmapped or pixel mapped image.
An electronic device that digitizes and converts photographs, slides, paper images, or other two-dimensional images into bitmapped or pixel mapped images. Scanners project light onto, or through, an original and record that light as electronic data.
Another name for clipping.
To crease with a dull rule in preparation for folding. Cover stock, card stock and heavy papers require scores to prevent cracking.
A pattern of dots used to reproduce colour or grayscale continuous-tone images. The fineness of the screen can vary from 55 lines to 200 or more lines per inch. Eighty-five-line screens are commonly used for printing on newsprint. Better paper can accommodate finer line screens. See also AM screening, FM screening, and halftone. Screens are also the “tinting” or “shading” of a solid image area. Tint screens are defined in percentages from 99% to 1% of solid (which is 100%). Tint screens can be applied to practically any graphic image including type.
The degree of rotation at which a halftone screen is printed. In process colour and duotone printing, the halftone screens for each colour must be printed at a different angle to avoid moiré (interference) patterns when the colours are superimposed. In process colour printing black is normally angled at 45 degrees, magenta at 75 degrees, cyan at 105 degrees, and yellow at 90 degrees. These angles are chosen for elimination of the screen interference and assigned to particular colours to avoid the screen becoming too apparent. A 45 degree angle is the least noticeable, so it is assigned to the darkest colour (black), while 90 degrees is the most visible and is assigned to the lightest colour (yellow). In the case of duotones, the darker colour is assigned a 45 degree screen angle and the lighter colour is assigned a 75 degree angle (the same as magenta in process work). When precisely registered, the combined angles of the screens form a well distributed pattern known as a rosette. Because of the balanced nature of this rosette, the halftone screen is not obvious on the printed piece.
A copy of the computer screen, taken by copying video memory or main memory and then converting it to an image file. Also called a snapshot and screen shot.
Also called bitmapped fonts, screen fonts are maps of dots (used by computer monitors) that represent specific fonts at specific sizes. You must install one bitmapped font for each PostScript printer font you have installed. TrueType fonts do not require screen fonts.
Screen frequency/screen resolution/screen ruling
Refer to Halftone screen frequency.
A colour created from mixing two primary colours. See primary colour.
A PostScript file format created from PageMaker that can contain multiple pages as well as links in the form of OPI comments to high-resolution images, in colour or in black and white.
Artwork for commercial printing that has been separated into individual pages containing components of each colour. Process colour separations are represented on four pages (usually film) consisting of combinations of the four process colours. Spot colour separations consist of one page, or piece of film, for each spot colour used. See Colour separation.
Serifs are the tiny decorative extensions applied to the ends of a type font’s character. Serifs enhance reading flow and reduce eye strain in long, text-heavy documents and books. Examples of common serif types are Palatino, Times, Garamond, and Bodoni.
A company that specializes in providing prepress services such as film output from electronic files.
The darkest part of an image, represented in the halftone by the largest dots.
A printing press that is fed by individual sheets of paper, rather than a roll, or “web”. See also web press.
A layout in which different plates are used to print the front and back of a press sheet. Sheetwise impositions are commonly used for creating book signatures for multiple-page documents among other uses.
An adjustment for the way page images in a folded signature tend to move toward the outer or facing edge of a book (an occurrence called creep). The amount of shingling needed steadily increases as you move toward the center signatures in the book.
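One deliberately simplified way to express the adjustment is sketched below, on the assumption that each enclosing folded sheet pushes the pages inside it outward by roughly one paper thickness; real imposition software uses the shop's own measured values.

```python
def creep_allowance(sheets_outside, caliper):
    """Approximate inward shift for a page, in the same units as the caliper.

    sheets_outside: folded sheets wrapped around this one in a saddle-stitched
    book; the centre signature has the most sheets outside it, so it needs the
    largest adjustment, as the entry above notes.
    """
    return sheets_outside * caliper

# A 32-page saddle-stitched booklet has 8 folded sheets; the innermost sheet
# has 7 sheets wrapped around it. On 0.004 inch stock:
print(creep_allowance(7, 0.004))  # 0.028 inch shift toward the spine
```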
A sidebar is a short article that accompanies a longer, feature article. Sidebars can amplify content or tie related information to the feature.
In bookwork, a completed press sheet containing page impositions in multiples of four, before folding, collating, binding, and trimming.
A silhouette is created when a photograph or illustration’s background is dropped away. Silhouetting is also referred to as “outlining” or “close-cropping.”
To slant an object by a prescribed degree.
Slab Serif Type
When a type font’s serifs are squared off, rather than tapered to a point, they are referred to as slab serif types. Examples of common slab serif types are Courier, Lubalin and Egyptiennes.
A font of capital letters which are slightly smaller than the standard capital letters in that typeface. A true small cap font has been specially designed so the capital letters that make up the so-called lowercase have the same weight strokes to balance with the actual capitals. Using a smaller type size for the small cap letters results in the small caps looking weaker than the true capitals.
The adjusting of a bitmap image by rounding the jagged edges to give them a more uniform look, usually by using the appropriate filters in Adobe Photoshop. Careless use of the smoothing filter can result in a blurred image.
Flatness of the surface of a sheet of paper, a factor in the printability of the sheet. Smoothness is a requisite for achieving crisp images in laser printing.
A drawing feature that causes objects to align with an invisible grid when created, moved, resized, or rotated.
An invisible grid to which an object snaps when you create, move, resize, or rotate it.
A screen dump. A copy of the video screen, taken by copying video memory or main memory and then converting to an image file.
The temporary images presented on a computer display screen. See also hard copy.
A downloadable font.
A line break inserted in a document that only takes effect when the word following the soft return would extend into the page margin.
A photographic effect in which the image combines positive and negative areas. Solarization can be accomplished by exposing the film to light during processing or by using retouching software.
Isolated light pixels in predominantly dark image areas, sometimes caused by incorrect readings or noise in the scanning device.
In a halftone, a bright reflection containing no halftone highlight dot. See highlights.
An extremely accurate colour measurement device using a diffraction grating to split light into its component wavelengths, which are then measured by numerous light sensors.
Mechanical binding using a plastic coil passing through pre-drilled holes.
A composite dot created through the halftoning process. A spot is composed of a group of dots arranged in a pattern reflecting the gray level of a pixel to be drawn at a particular location.
Any premixed ink that is not one of or a combination of the four process colour inks. A colour that is produced by printing an ink of that specific colour, rather than creating the colour by combining CMYK inks (e.g., printing green ink as opposed to printing cyan ink on top of yellow ink). See also process colour.
(1) A spread is the relative viewing position of a pair of left and right-hand pages in a book or publication. A “reader’s” spread is the consecutive placement of pages by page numbers. A “printer’s” spread is the imposed position of pages based on how many pages are in the publication. (2) A colour trapping option in which a colour object overlaps into the knockout to allow some overprinting to occur where the colours meet. In trapping, the lighter colour is spread so the overprinting is less visible and the darker colour defines the shape of the image. Opposite of choke. Also refer to knockout.
The order in which text and graphics overlap in a page layout program.
The jagged appearance of a graphic line or curve when reproduced using pixels. Also called jaggies. See aliasing.
A layout in which two or more copies of the same piece are placed on a single plate. This is useful for printing several copies of a small layout, such as a business card, on a single sheet. Also called a multiple-up layout.
A relatively new technology for reproducing continuous tone images through the printing process, involving the placement of miniscule, random spots of process colours on paper rather than the repeating pattern of uniform dots used in standard halftone screening. It eliminates colour shifts caused by slight misregistration, and is used widely (and very effectively) in large-format digital colour printing devices.
One or more lines drawn through a range of text. This is often used in electronic text editing to indicate text that is to be deleted at some future time.
The act of assembling individual film negatives into flats for printing. Also referred to as film assembly. The preparation and assembling of film prior to platemaking.
(1) A line representing part of a letter or other type character. (2) The colour applied to the perimeter of an object in a graphics program (as opposed to fill which is the overall colour of the object). Most graphics programs allow the user to specify the thickness of the stroke.
The part of a perforated form, ticket, cheque, etc, usually numbered, that is retained after the other part is distributed.
A variation of a typeface, such as bold or italic.
A file of instructions used to apply character, paragraph, and page formats to a document.
A pen-like pointing device, usually attached to a graphics tablet.
A character printed smaller than standard text and positioned slightly below the baseline; commonly used in mathematical and chemical notation.
A subhead is smaller than a headline and larger than body copy. Subheads are useful for breaking up long articles, identifying specific content for the reader, and giving the reader a break from long passages of copy.
The basis weight of certain grades of paper. For example, 20 lb bond is also called substance 20 or sub 20.
The base material used to carry or support an image, for example paper or film.
Cyan, magenta and yellow are the subtractive primaries. Refer to Colour, subtractive.
The capture of more grey levels per colour than is required for image manipulation or output. This additional data allows shadow details to be heightened, for example.
A character printed smaller than standard text and positioned slightly above the baseline of the surrounding text; commonly used in reference citation and mathematical and technical notation.
Wrong reading plate or Wrong reading emulsion down film.
To print one image or colour over another. Refer to Overprint.
Acronym for Specifications for Web Offset Publications. A standard set of specifications for separations, proofs, and colour printing established to ensure consistent quality in magazine and other web printing applications.
To align the baselines of body paragraphs along a page grid.
A character used to vertically align text or figures on the screen or page. Tab alignment is particularly important in the typesetting of tables. Tab commands can be used inside columns in most page layout programs.
A data structure characterized by rows and columns with data potentially occupying each cell formed by the intersection of row and column.
The process of using a known value to search for data in a previously constructed table of values.
(1) A standard sheet size of 11″ x 17″. (2) A large-format publication, usually half the size of a standard newspaper.
An identifier used to categorize or locate data. See profile.
Tag Image File Format
Teasers are short phrases placed on the outside front cover which are meant to increase the reader’s interest in the publication’s inside contents.
An electronic prototype of a publication that provides the layout grid and style sheet necessary to create documents. Templates are predetermined and saved formats for page layouts. They are designed to be used as a starting point for each successive page or issue. Templates can also be used as guides to the imposition of multiple-up documents. The use of templates saves time and reduces errors in layout formats.
A fine-paper weight designed specifically for use for internal pages of books, or as letterheads and other documents requiring a high-quality medium weight paper. Many text stocks have matching cover weights that, aside from the obvious use as covers, can be used for business cards which match the letterhead. Some text stocks also have matching envelopes.
(1) In computer graphics, shading and other attributes applied to a graphical image to give it the appearance of a physical substance. (2) In reference to paper, the tactile properties of the paper surface such as writing laid, linen finish, felt, smooth, etc. Paper texture can have a marked effect on the type of image which can be printed or the process which can be used for printing on a particular paper.
A non-impact printer that uses heat to generate an image on specially treated paper.
Thermal wax-transfer printer
A special type of non-impact printer that uses heat to melt coloured wax onto paper to create an image. A printing process using small heating elements to melt dots of wax pigment on a carrier film, which are then transferred to paper or transparent film by contact. This differs from the dye sublimation process in that individual dots do not fuse together, so thermal wax transfer appears to be of a lower resolution.
Printing process in which an engraved or embossed look is simulated by adhering a clear powder to wet ink and applying heat to fuse the powder and produce a raised surface on the image. Thermography is not suitable for letterheads or other printed products that will subsequently be subjected to heat, such as the heat present in the fuser assembly of a laser printer.
A thin space is rarely used today. It was originally developed when hot metal was the popular form of typesetting and situations often arose where a minute amount of space was needed to center or justify a line of type. The only common use for thin spaces is placing them before and after an em or an en dash. A thin space is approximately one-third the width of an en space. Thin spaces are achieved in most layout programs by applying custom kerning to the space in question.
The point at which an action begins or changes. The tonal threshold setting used in scanning in a bitmap mode determines which values are converted to black and which will become white. The tonal threshold defined in the unsharp masking process determines how large a contrast between adjacent colour values must be before sharpening will be applied.
Rough sketch, display, or printout of a page layout. Thumbnails are reduced in size to fit several on a single page.
Marks on a ruler showing the increments of measure.
Acronym for Tagged Image File Format, a file format developed by Aldus, Microsoft, and leading scanner vendors for bitmap images. A TIFF image can be monochrome (black and white), grayscale, or colour, with a bit depth ranging from 1-bit to 32-bit. TIFF is a lossless file format commonly used for scanning, storage, and interchange of bitmapped and pixel mapped images.
Tiling is the process of joining sheets containing partial images together to create oversized sheets. This process is used when an output unit does not have the size capabilities available to produce the image in one piece at 100% size. Printing portions of a document at 100%, aligning them with each other and taping them together is a common form of tiling.
A value of any colour ranging from 1% to 99%. Tints are produced in offset printing by using a halftone screen or combination of halftone screens. Refer to Halftone Screen.
Sheet of tissue paper on a piece of artwork or mechanical with instructions to the printer, including colour indications.
A value of a colour, ranging from 0% to 100%. Refer to Tint.
The two dimensional representation of the tonal values of an image as a curve on x and y axes. This curve can be used to manipulate particular points in the tonal range of an image, or to affect the overall tonal range. Graphic programs such as Adobe Photoshop provide access to tone curves for each separate colour channel for multicolour images.
Plastic magnetic ink used in the Xerographic method of printing.
A disposable container that holds toner for a laser printer or other similar Xerographic imaging device.
Total ink coverage
In process colour printing, the maximum total amount of cyan, magenta, yellow, and black ink that will be printed to create the darkest shadows in an image. Although 400% is theoretically possible (100% for each of the CMYK inks), a maximum of 280-300% is recommended to reduce printing problems. Attempting to print 100% of each colour can result in paper sticking to the blanket cylinder of a press, or in one ink layer not adhering properly to the layer below. Heavy ink coverage is reduced by the processes of GCR (gray component replacement) and UCR (under colour removal), in which the overlapping values of cyan, magenta and yellow are removed and replaced with values of black. This is accomplished by the settings applied either at the scan level or in the look-up tables of colour separation software.
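As a rough illustration of the arithmetic in this entry (a simplified sketch, not the algorithm used by any particular separation package), the following JavaScript totals the four ink percentages for a single colour value and applies a naive under colour removal step, shifting part of the grey component shared by cyan, magenta and yellow into black until the total falls to a chosen limit. The 300% limit and the one-percent step are assumptions for illustration only.

// Total ink coverage: the sum of the four ink percentages (each 0-100).
function totalInkCoverage(c, m, y, k) {
  return c + m + y + k;
}

// Naive UCR: remove equal amounts of C, M and Y (the shared grey component)
// and add the same amount to K until the coverage limit is met.
function naiveUCR(c, m, y, k, maxTAC) {
  const grey = Math.min(c, m, y);
  let removed = 0;
  while (totalInkCoverage(c - removed, m - removed, y - removed, k + removed) > maxTAC && removed < grey) {
    removed += 1;
  }
  return { c: c - removed, m: m - removed, y: y - removed, k: k + removed };
}

console.log(totalInkCoverage(90, 85, 80, 70)); // 325, over a 300% limit
console.log(naiveUCR(90, 85, 80, 70, 300));    // { c: 77, m: 72, y: 67, k: 83 }, a 299% total

Each unit removed from C, M and Y and added to K lowers the total by two percentage points, which is why only a modest amount of black substitution is needed in this example.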
The overall adjustment of the amount of space between letters and words is tracking. Tracking increases and decreases word density and can be used for copyfitting purposes. Adjustment of tracking is often needed with “justified” type to even out the rivers of white space within body copy. Creative tracking can also remove widows, orphans, bad word-breaks, and undesirable hyphenation. Tracking is different from kerning, in that tracking is applied to words, lines, paragraphs or pages, and kerning is applied specifically to pairs of letters to compensate for unpleasant spacing caused by the particular letter combinations.
(1) Film colour positive. Common transparency sizes are 35mm, 4″ x 5″ and 8″ x 10″ (2) Any image on a transparent carrier, such as presentation transparencies used for overhead projection.
(1) The property of a material that allows virtually all of the visible light spectrum to pass through it (glass is transparent). (2) Operation that is either automatic or so easy or intuitive as to be “invisible” to the user.
The process of creating an overlap between abutting colours to compensate for imprecision or misregistration in the printing process, which otherwise would cause the paper colour to show through in certain areas. (Sometimes called “chokes and spreads.”) See also knockout, overprint.
Three colours taken at approximately equal distances apart on the colour wheel.
To cut away folded or uneven edges. Also, the final size of a printed page.
Trim page size
Area of the finished page after the job is printed, bound, and trimmed.
A halftone image made up of three spot colours (usually two colours plus black).
To cut off the beginning or end of a series of characters or numbers or part of an image.
The time from the submission of a job to the completion of that job.
The characters that make up printed text. As a verb, to enter information by means of a keyboard.
A specific, named design of a set of printed characters, such as Helvetica or Courier, that has a specified obliqueness and stroke weight. A typeface is not the same as a font, which is a typeface in a specific size (for example, 10-point Helvetica).
The collection of all related typefaces, such as Helvetica, Helvetica Bold, Helvetica Oblique, and Helvetica Bold Oblique.
The size of printed characters, usually measured in points.
The specific obliqueness or stroke weight of a typeface. In some page layout programs, type style may also refer to special type effects, such as outline, shadow, or strikethrough.
The choice and arrangement of type. Good typography requires a thorough understanding of communication, and the role of letter shapes, size, spacing and style in successfully achieving that communication.
Abbreviation for under colour removal, a technique for minimizing ink coverage. In this printing process black ink is substituted where there are equal percentages of cyan, magenta, and yellow inks. UCR is identical to GCR (Gray Component Replacement), except that it is applied only to neutral or shadow areas of the printed image. UCR uses less ink and can eliminate ink coverage problems in dark areas without altering colour saturation or hue. See GCR.
Uncoated offset paper
A good quality, general-purpose printing paper that has not been coated with the clay covering used on gloss and matte coated stocks. Offset stocks are also referred to as book stocks.
A line set at or slightly below the baseline of one or more letters of text.
Capital letters, such as A, B, C. The name is derived from the practice of placing capitalized letters in the top (upper) case of a pair of type cases. See lowercase.
A computer product, especially software, designed to perform adequately with computer products expected to become widely used in the foreseeable future.
Unsharp masking. A process used to sharpen images.
The lightness or darkness or shade of a colour.
In computer graphics, a path drawn from a starting point to an ending point, both of which are coordinates in a rectangular grid with horizontal (x) and vertical (y) axes. Vectors are used in draw programs to create graphical images which are composed of keypoints and paths, as well as fill and stroke instructions.
The process of turning a bitmap into a Vector.
Artwork or text characters constructed from mathematical statements instead of individual pixels. Vector objects usually take less disk space than bitmap images and can be scaled to virtually any size without losing visual quality. Fonts (such as PostScript and TrueType), illustrations from drawing applications, and files from page-layout applications are common examples of vector objects.
The left-hand page of an open book or spread. Opposite of recto page.
In general context, an image in which the colours or tones gradually bleed out into the background. In prepress, often used to refer to a continuous gradation of colours. See graduated fill.
Diagonal slash or solidus (/).
Wavelengths perceived by the human eye as colours.
Visual neutral density
The degree to which a colour is perceived to be light or dark. Prepress service providers measure visual neutral density using a densitometer with no process filters.
A subset of colours roughly including yellow and the range extending to the adjacent secondaries in both directions on the colour spectrum.
An identifying design impressed on paper during manufacture. Custom watermarks were formerly the indication of wealth and prestige.
A printing press that is fed by a roll, or “web” of paper, rather than individual sheets. See also sheet-fed press.
(1) The density of letters, traditionally described as light, regular, bold, extra bold, etc. (2) A measurement of the thickness of paper, based on the weight of a ream of paper at the parent size, also known as basis weight or substance.
A movable reference point that defines the lightest area in an image, causing all other areas to be adjusted accordingly.
The areas of the page without text or graphics, used as a deliberate element in good graphic design.
A single word, portion of a word, or a few short words left on a line by themselves at the end of a paragraph or column of type. Usually considered undesirable on the printed page, a widow can usually be eliminated through editing or rewording. See orphan.
The horizontal measure of letters, described as condensed, normal, and expanded.
Elegant mechanical binding using double series of wire loops through slots rather than holes.
The amount of space between words.
The function of a word processor that breaks lines of text automatically so that they stay within the page or column margins. Line breaks created by wordwrap are called soft returns.
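For readers curious how such a routine behaves, here is a minimal greedy line-breaking sketch in JavaScript; the fixed character width and simple space splitting are assumptions for illustration, since real word processors also account for proportional type, hyphenation and kerning.

// Greedy wordwrap: pack words onto a line until the next word would exceed
// the column width, then start a new line (the equivalent of a soft return).
function wordwrap(text, width) {
  const words = text.split(/\s+/);
  const lines = [];
  let line = "";
  for (const word of words) {
    if (line === "") {
      line = word;
    } else if (line.length + 1 + word.length <= width) {
      line = line + " " + word;
    } else {
      lines.push(line);
      line = word;
    }
  }
  if (line !== "") lines.push(line);
  return lines.join("\n");
}

console.log(wordwrap("Line breaks created by wordwrap are called soft returns.", 20));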
A layout in which a single plate is used to print both sides of a two-sided job. The paper is run through once, then flipped over, top to bottom, to run on the opposite side. The gripper edge changes from the edge that was the head on the first pass to the edge that was the tail.
A layout in which a single plate is used to print both sides of a two-sided job. The paper is run through once, then flipped over, side to side, to run on the opposite side. Both sides use the same gripper edge to hold the paper for positioning, and repeat the same sequence of pages on both sides.
Smooth paper finish.
Text that wraps around a graphic. Also called runaround text. When type is shortened or follows around an illustration, graphic, or photograph, it is called a wrap-around type.
(pronounced “wizzywig”) An acronym for “What You See Is What You Get.” A display method that shows document layout and images on the screen as they will appear on the printed page.
Electrostatic printing using magnetic ink particles (toner) where the image is fused to the paper by heat and pressure.
The height of the main body of a lowercase letter, excluding ascenders or descenders.
Fan fold, as in a map or brochure.
To magnify or reduce your view of the current document.
|
Newswise — PHILADELPHIA (January 22, 2014) – New research from the Monell Center reveals humans can use the sense of smell to detect dietary fat in food. As food smell almost always is detected before taste, the findings identify one of the first sensory qualities that signals whether a food contains fat. Innovative methods using odor to make low-fat foods more palatable could someday aid public health efforts to reduce dietary fat intake.
“The human sense of smell is far better at guiding us through our everyday lives than we give it credit for,” said senior author Johan Lundström, PhD, a cognitive neuroscientist at Monell. “That we have the ability to detect and discriminate minute differences in the fat content of our food suggests that this ability must have had considerable evolutionary importance.”
As the most calorically dense nutrient, fat has been a desired energy source across much of human evolution. As such, it would have been advantageous to be able to detect sources of fat in food, just as sweet taste is thought to signal a source of carbohydrate energy.
Although scientists know that humans use sensory cues to detect fat, it still remains unclear which sensory systems contribute to this ability. The Monell researchers reasoned that fat detection via smell would have the advantage of identifying food sources from a distance.
While previous research had determined that humans could use the sense of smell to detect high levels of pure fat in the form of fatty acids, it was not known whether it was possible to detect fat in a more realistic setting, such as food.
In the current study, reported in the open access journal PLOS ONE, the researchers asked whether people could detect and differentiate the amount of fat in a commonly consumed food product, milk.
To do this, they asked healthy subjects to smell milk containing an amount of fat that might be encountered in a typical milk product: either 0.125 percent, 1.4 percent or 2.7 percent fat.
The milk samples were presented to blindfolded subjects in three vials. Two of the vials contained milk with the same percent of fat, while the third contained milk with a different fat concentration. The subjects’ task was to smell the three vials and identify which of the samples was different.
The same experiment was conducted three times using different sets of subjects. The first used healthy normal-weight people from the Philadelphia area. The second experiment repeated the first study in a different cultural setting, the Wageningen area of the Netherlands. The third study, also conducted in Philadelphia, examined olfactory fat detection both in normal-weight and overweight subjects.
In all three experiments, participants could use the sense of smell to discriminate different levels of fat in the milk. This ability did not differ in the two cultures tested, even though people in the Netherlands on average consume more milk on a daily basis than do Americans. There also was no relation between weight status and the ability to discriminate fat.
“We now need to identify the odor molecules that allow people to detect and differentiate levels of fat. Fat molecules typically are not airborne, meaning that they are unlikely to be sensed by sniffing food samples,” said lead author Sanne Boesveldt, PhD, a sensory neuroscientist. “We will need sophisticated chemical analyses to sniff out the signal.”
The paper can be accessed at http://dx.plos.org/10.1371/journal.pone.0085977. Boesveldt currently is at the Division of Human Nutrition, Wageningen University, the Netherlands. Support was provided in part by the Knut and Alice Wallenberg Foundation and a grant from the Netherlands Organization for Scientific Research.
The Monell Chemical Senses Center is an independent nonprofit basic research institute based in Philadelphia, Pennsylvania. For 45 years, Monell has advanced scientific understanding of the mechanisms and functions of taste and smell to benefit human health and well-being. Using an interdisciplinary approach, scientists collaborate in the programmatic areas of sensation and perception; neuroscience and molecular biology; environmental and occupational health; nutrition and appetite; health and well-being; development, aging and regeneration; and chemical ecology and communication. For more information about Monell, visit www.monell.org. |
Elephants strip off pieces of it for lunch, popping it into their mouths like giant potato chips. The hornbill makes its house inside of it, turning it into a cozy living room. Humans use the fiber from its bark as a material for making ropes, baskets, and many other items.
Welcome to the baobab tree, Africa's "green giant."
Baobabs are some of the oldest and largest trees on Earth. There are nine species of them – six in mainland Africa, two in Madagascar, and one in Australia.
The most famous baobab species is called Adansonia digitata. It is the baobab tree that grows across the savannas of Africa. It is named after the French botanist Michel Adanson, the first scientist to describe it.
This baobab can grow to be more than 25 meters (80 feet) tall and live at least 1,500 years.
The growth form of the baobab (the way the tree grows) is called a pachycaul. A pachycaul is a tall tree with a thick trunk and relatively few branches. The baobab grows thicker as it gets older, and a mature baobab can be as much as 10-14 meters (30-45 feet) in diameter.
The baobab is deciduous, meaning that it loses its leaves for some of the year. The baobab is actually leafless for as much as 9 months a year, during the dry season. The branches of the baobab cluster at the top of the tree and look like roots when they are in their leafless state. This gives the baobab the nickname “the upside-down tree”.
From October to December the baobab produces large white flowers. These flowers attract fruit bats, which sip up the nectar the flowers produce and get pollen stuck to them. When a pollen-covered bat flies to the next baobab flower for its nectar snack, it pollinates that flower.
When the flower is pollinated it develops a large fruit around the seeds. The fruit drops to the ground where it becomes a menu item for elephants, black rhinos, and antelope like elands. When the baobab seeds are eaten they pass through the digestive system of their eaters. In the time it takes to be pooped out, the animal has moved some distance away from the parent baobab of the seeds.
The seed eater helps the seed become a new plant by giving it a home away from other trees, where it hopefully won’t have to compete with them for water, sunlight, and growing space. If the seed successfully germinates and becomes established, then after a few centuries it may become a grand baobab like its parents.
Baobabs are very important parts of their savanna neighborhoods. They provide homes and food for many species of animals. The interior of the baobab stores water, a very valuable commodity in hot and dry savanna ecosystems. Elephants strip the bark off of the baobabs and eat it for its moisture. The baobab can usually regrow the bark, but the gashes of the elephants can leave scars.
Humans have a very long history with baobabs. Sometimes the massive trunk of a thousand-year-old baobab tree has a large hollow in it. Humans have used these hollows for livestock corrals, shelter from the sun, and sometimes even permanent homes. Many other animals also use these hollows to nest and live in.
Scientists study the largest and oldest baobabs across Africa. They have found that many of the largest baobabs appear to be dying off in the early years of the 21st century. Nobody is sure why this baobab die-off is happening. Could it be global warming causing increasing droughts, pushing the oldest and largest baobabs beyond what they can tolerate? The baobab scientists across Africa are trying to figure this out so that these essential ‘green giants’ of the African savanna ecosystems can survive the 21st century and beyond.
Story by David Brown. Photos by Wild Nature Institute. |
The lead coffin archaeologists found in the abandoned ancient city of Gabii, Italy could contain a gladiator or bishop.
Credit: University of Michigan
Archaeologists found a 1,000-pound lead coffin while digging in the ruins of an ancient city near Rome last summer. The mission now is to determine who or what is buried inside.
The project – which is headed by Nicola Terrenato, a professor of classical studies at the University of Michigan – is the largest American-led dig in Italy in the past 50 years.
"We're very excited about this find," Terrenato said. “Romans as a rule were not buried in coffins to begin with and when they did use coffins, they were mostly wooden. There are only a handful of other examples from Italy of lead coffins from this age – the second, third or fourth century A.D. We know of virtually no others in this region.”
This coffin is of particular interest to Terrenato and his team of archaeologists due to its size.
“It’s a sheet of lead folded onto itself an inch thick,” Terrenato said. “A thousand pounds of metal is an enormous amount of wealth in this era. To waste so much of it in a burial is pretty unusual.”
The coffin is set to be transported to the American Academy in Rome, where engineers will closely examine the bones and any symbols or gifts left in the coffin in order to determine who is buried inside.
Lead coffins tend to keep human remains well-preserved. In fact, an ancient Greek mummy of a middle-aged woman was discovered in a lead coffin and reported a couple years ago.
For the new finding, the researchers want to avoid breaking into the coffin, which could damage the contents. Instead, the team plans to use less invasive techniques, such as thermography and endoscopy (essentially tiny cameras) to examine the sarcophagus.
The process of thermography involves heating the coffin slowly and recording the thermal responses of the various contents. Bones would have different thermal responses than other artifacts that could be inside, Terrenato said. The researchers would then turn to endoscopy, which involves inserting tiny cameras inside the coffin. Still, the effectiveness of endoscopy is contingent on how much dirt has built up inside the coffin over the centuries.
If these techniques fail, researchers could perform an MRI scan on the container, but this option is expensive and would involve transporting the half-ton coffin to a hospital.
The dig that unearthed this mysterious casket began in the summer of 2009 and will continue through 2013. Approximately 75 researchers from across the U.S., including a dozen undergraduate students from the University of Michigan, spend two months working on the project on location in the ruins of the ancient city of Gabii (pronounced “gabby”).
|
When the Apollo astronauts returned to Earth, they brought with them some souvenirs: rocks, pebbles, and dust from the moon’s surface. These lunar samples have since been analyzed for clues to the moon’s past. One outstanding question has been whether the moon was once a complex, layered, and differentiated body, like the Earth is today, or an unheated relic of the early solar system, like most asteroids.
Ben Weiss, a professor of planetary sciences in MIT’s Department of Earth, Atmospheric and Planetary Sciences, and members of his laboratory have found remnants of magnetization in some lunar rocks, suggesting that the moon once emitted a substantial magnetic field, much like the Earth does today. The discovery has opened a new set of questions: How long did this magnetic field last? How strong was its pull? And what sparked and sustained it?
Weiss and former MIT student Sonia Tikoo have written a review, published today in Science, in which they explore the possibility of a lunar dynamo — a molten, churning core at the center of the moon that may have powered an intense magnetic field for at least 1 billion years. Weiss spoke with MIT News about the moon’s hidden history.
Q. How would a lunar dynamo have worked? What might have been going on in the moon, and in the solar system, to sustain this dynamo for a billion years?
A. Planetary dynamos are generated by the process of induction, in which the energy of turbulent, conducting fluids is transformed into a magnetic field. Magnetic fields are one of the very few outward manifestations of the extremely energetic fluid motions that can occur in advecting planetary cores.
The motion of Earth’s liquid core is powered by the cooling of the planet, which stirs up buoyant fluid from the surrounding liquid — similar to what happens in a lava lamp. We have recently argued from magnetic studies of Apollo samples that the moon also generated a dynamo in its molten metal core.
Our data suggest that, despite the moon’s tiny size — only 1 percent of the Earth’s mass — its dynamo was surprisingly intense (stronger than Earth’s field today) and long-lived, persisting from at least 4.2 billion years ago until at least 3.56 billion years ago. This period, which overlaps the early epoch of intense solar system-wide meteoroid bombardment and coincides with the oldest known records of life on Earth, comes just before our earliest evidence of the Earth’s dynamo.
Q. Why is it so surprising that a lunar dynamo may have been so intense and long-lived?
A. Both the strong intensity and long duration of lunar fields are surprising because of the moon’s small size. Convection, which is thought to power all known dynamos in the solar system today, is predicted to produce surface magnetic fields on the moon at least 10 times weaker than what we observe recorded in ancient lunar rocks.
Nevertheless, a convective dynamo powered by crystallization of an inner core could potentially sustain a lunar magnetic field for billions of years. An exotic dynamo mechanism that could explain the moon’s strong field intensity is that the core was stirred by motion of the solid overlying mantle, analogous to a blender. The moon’s mantle was moving because its spin axis is precessing, or wobbling, and such motion was more vigorous billions of years ago, when the moon was closer to the Earth. Such mechanical dynamos are not known for any other planetary body, making the moon a fascinating natural physics laboratory.
Q. What questions will the next phase of lunar dynamo research seek to address?
A. We know that the moon’s field declined precipitously between 3.56 billion years ago and 3.3 billion years ago, but we still do not know when the dynamo actually ceased. Establishing this will be a key goal of the next phase of lunar magnetic studies.
We also do not know the absolute direction of the lunar field, since all of our samples were unoriented rocks from the regolith — the fragmental layer produced by impacts on the lunar surface. If we could find a sample whose original orientation is known, we could determine the absolute direction of the lunar field relative to the planetary surface. This transformative measurement would then allow us to test ideas that the moon’s spin pole wandered in time across the planetary surface, possibly due to large impacts. |
What is evolution?
In a Nutshell
Evolution is the biological model for the history of life on Earth. While some consider evolution to be equivalent to atheism, BioLogos sees evolution as a description of how God created all life. Evolution refers to descent with modification. Small modifications occur at the genetic level (in DNA) with each generation, and these genetic changes can affect how the creature interacts with its environment. Over time, accumulation of these genetic changes can alter the characteristics of the whole population, and a new species appears. Major changes in life forms take place by the same mechanism but over even longer periods of time. All life today can be traced back to a common ancestor some 3.85 billion years ago.
The word evolution can be used in many ways, but in biology, it means descent with modification. In other words, small modifications occur at the genetic level (i.e. in DNA) when a new generation descends from an ancestral population of individuals within a given species. Over time the modifications fundamentally alter the characteristics of the whole population. When the population accumulates a substantial number of changes and conditions are right, a new species may appear.
Universal Common Descent
A cardinal principle of evolutionary theory is that all living things—including humans—are related to one another through common descent from the earliest form of life, which first appeared on earth about 3.85 billion years ago. How the first simple organisms arose is still a scientific mystery, but we know that they carried hereditary information and were capable of self-replication. Over eons, successive generations led to the marvelous diversity of living things that exist today. Common descent is supported by multiple independent lines of evidence, most notably the fossil record and the comparison of many species’ genomes.
Mechanisms of Evolution
When Charles Darwin published The Origin of Species in 1859, descent with modification was not a particularly new or controversial idea. Darwin’s intellectual leap was to propose the mechanism by which evolution occurred. That mechanism, called natural selection, is a description of what happens when variations occur in a population where resources are limited. When more individuals are born than the environment can support, those with advantageous variations are more likely to survive than those without them. This differential reproduction leads to overall changes in the traits of a population over time.
Natural selection is called “natural” not because it occurs apart from God’s activity (after all, many believe natural laws and processes are a reflection of God’s activity), but because it is the usual pattern one observes in nature, in contrast to the “artificial” selection practiced for centuries by farmers and animal breeders.
Other mechanisms of evolution besides natural selection include sexual selection and genetic drift. Sexual selection occurs when individuals of one sex are attracted to mates which manifest certain traits (the peacock’s tail arose this way, for example). Genetic drift is the random (i.e. unpredictable) fluctuations that naturally occur in a population’s gene pool when the population is small. The best adapted individuals do not always survive to reproduce, while poorly adapted individuals don’t always die before passing on their genes. Over time, in small populations, genetic drift can lead to noticeable change.
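A small simulation can make the effect of population size on drift concrete. This sketch is purely illustrative and not part of the original article; the population sizes and generation count are arbitrary assumptions. Each generation, the new gene pool is simply a random sample of the old one, with no selection at all.

// Toy model of genetic drift in JavaScript: track the frequency of one allele.
function simulateDrift(populationSize, generations, startFrequency) {
  let freq = startFrequency;
  for (let g = 0; g < generations; g++) {
    let carriers = 0;
    for (let i = 0; i < populationSize; i++) {
      if (Math.random() < freq) carriers++;  // each individual drawn at random
    }
    freq = carriers / populationSize;
    if (freq === 0 || freq === 1) break;     // allele lost or fixed by chance alone
  }
  return freq;
}

console.log(simulateDrift(20, 100, 0.5));    // small population: often drifts to 0 or 1
console.log(simulateDrift(5000, 100, 0.5));  // large population: stays near 0.5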
More recently, it has been proposed that a group of organisms could sometimes benefit from its members behaving in ways that would otherwise be detrimental to an individual organism. This so-called group selection takes into account the survival needs of an entire community of a given species.
Genetic Mutations as the Source of Variation
Darwin recognized from his years of study that when any organism reproduces, new variants sometimes arise. Although he didn’t know it at the time, these differences were a consequence of mutations. Mutations are changes in DNA that occur due to errors in DNA replication or exposures to radiation or certain chemicals. The vast majority of mutations are neutral or harmful and are not preserved, but occasionally beneficial mutations occur that are preferentially passed down through the generations.
A number of common misconceptions have led to confusion or suspicion about evolution over the years. One common argument is that despite hundreds of years of observation, there has been no experimental proof of one species evolving from another, such as a cat turning into a dog. The truth is, such a drastic transition is not predicted by the theory of evolution. In some cases, scientists have observed speciation, but it is true that we have not observed major changes in form. The reason is that we simply haven’t been watching long enough.1 Evolution of new forms—what some people call “macroevolution”—takes a very, very long time.
Next, the claim that humans share common ancestry with other species should not be misunderstood to mean that humans have evolved from any other presently existing species. Humans do share close common ancestry with other living primates, but rather than being direct descendants, we are more like cousins. Other primates have been changing as well over the past 5-6 million years since humans and chimpanzees diverged from a common ancestor.
A third misconception is that evolution is a random, purposeless process. It is true that individual mutations are random, in the sense that they are unpredictable, but natural selection is decidedly non-random. Whether there is any purpose behind the evolutionary process is not a scientific question, and the answer depends greatly on one’s worldview. For believers in the God of the Bible who created and sustains the whole universe, evolution is simply the means by which he accomplishes his praiseworthy purposes of bringing forth life.
- The American Scientific Affiliation. “Creation and Evolution.”
- The Natural History Museum Board of Trustees. “What is Evolution?”
- University of California, Museum of Paleontology. "Understanding Evolution."
- The Smithsonian Human Origins Program
- University of California Museum of Paleontology. “Discrete Genes Are Inherited: Gregor Mendel.”
- Alexander, Denis. Creation or Evolution: Do We Have to Choose? Oxford: Monarch Books, 2008.
- Collins, Francis S. The Language of God: A Scientist Presents Evidence for Belief. New York: Free Press, 2006.
- Darwin, C. R. On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life. London: John Murray, 1859.
- Falk, Darrel R. Coming to Peace with Science: Bridging the Worlds between Faith and Biology. Downers Grove, IL: InterVarsity Press, 2004.
- Giberson, Karl. Saving Darwin: How to be a Christian and Believe in Evolution. New York: HarperOne, 2008.
- Miller, Kenneth. Finding Darwin’s God: A Scientist’s Search for Common Ground Between God and Evolution. New York: Cliff Street Books, 1999.
- Weiner, Jonathan. The Beak of the Finch: A Story of Evolution In Our Time. New York: Knopf, 1994.
- Darrel Falk, Coming to Peace with Science, 131. |
Wikipedia—The Free Encyclopedia
Aspergillosis is the name given to a wide variety of diseases caused by fungi of the genus Aspergillus. The most common forms are allergic bronchopulmonary aspergillosis, pulmonary aspergilloma and invasive aspergillosis. Most humans inhale Aspergillus spores every day. Aspergillosis develops mainly in individuals who are immunocompromised, either from disease or from immunosuppressive drugs, and is a leading cause of death in acute leukemia and hematopoietic stem cell transplantation. Conversely, it may also develop as an allergic response. The most common cause is Aspergillus fumigatus.
Symptoms
A fungus ball in the lungs may cause no symptoms and may be discovered only with a chest x-ray. Or it may cause repeated coughing up of blood and occasionally severe, even fatal, bleeding. A rapidly invasive Aspergillus infection in the lungs often causes cough, fever, chest pain, and difficulty breathing.
Aspergillosis affecting the deeper tissues makes a person very ill. Symptoms include fever, chills, shock, delirium, and blood clots. The person may develop kidney failure, liver failure (causing jaundice), and breathing difficulties. Death can occur quickly.
Aspergillosis of the ear canal causes itching and occasionally pain. Fluid draining overnight from the ear may leave a stain on the pillow. Aspergillosis of the sinuses causes a feeling of congestion and sometimes pain or discharge.
In addition to the symptoms, an x-ray or computerised tomography (CT) scan of the infected area provides clues for making the diagnosis. Whenever possible, a doctor sends a sample of infected material to a laboratory to confirm identification of the fungus.
Diagnosis
On chest X-ray and computed tomography pulmonary aspergillosis classically manifests as an air crescent sign. In hematologic patients with invasive aspergillosis the galactomannan test can make the diagnosis in a noninvasive way. |
This movie records an eruptive event in the southern hemisphere of Jupiter over a period of 8 Jupiter days. Prior to the event, an undistinguished oval cloud mass cruised through the turbulent atmosphere. The eruption occurs over a very short time at the very center of the cloud. The white eruptive material is swirled about by the internal wind patterns of the cloud. As a result of the eruption, the cloud then becomes a type of feature seen elsewhere on Jupiter known as "spaghetti bowls."
As Voyager 2 approached Jupiter in 1979, it took images of the planet at regular intervals. This sequence is made from 8 images taken once every Jupiter rotation period (about 10 hours). These images were acquired in the Violet filter around May 6, 1979. The spacecraft was about 50 million kilometers from Jupiter at that time.
This time-lapse movie was produced at JPL by the Image Processing Laboratory in 1979. |
There are 78 different types of soil identified within the Jericho borders, a product of the area’s dynamic glacial past. Distinguishing one soil from another is the product of several factors, including the type of bedrock underlying all other deposits, the surficial material coating the bedrock, the steepness of slope on which the soil lies, the climate or microclimate that determines the rate of decomposition, the size of soil particles, and the mineral composition of the soil itself.
Why do we bother dividing soils into different types? Soil structure often dictates what types of vegetation will grow naturally on a landscape, and human activities and land use practices are often more suitable on one type of soil than another. Farming, for example, is most productive on soil with relatively high clay content that has mixed with sand for better drainage; such soil also tends to be level and free of many rocks. Although most of Vermont was cleared for farming at some point in the state’s history, we can actually see a relationship between soil type and date of farm abandonment, where the prime soils are most likely to remain farms even today. Most of the prime soils are located along river floodplains, where clay was once deposited when the land was covered by a lake, but where sand has more recently been mixed with the clays. Upland farms, with soils derived from rocky glacial till, were often the first to be abandoned.
Certain trees and vegetation may also grow better in one soil than another, which eventually leads to animal habitat also partially corresponding to soil type.
Jericho Agricultural Soils
Classic rocky Vermont soils |
But, before we begin, it’s important that you understand the terms associated with the anatomy of the goat’s mammary system:
Udder- The intact mammary system of a goat. Goats have only ONE ‘udder’; “udders” is an incorrect and confusing term. Cattle also have only one udder. A goat’s udder is composed of two halves, and each half has its own mammary duct, aka teat. A cow’s udder, however, is divided into four parts called quarters, and each quarter also has a teat.
Teat(s)- A teat is the ‘nipple’ on a goat. Goats have two mammary ducts, aka teats, one for each half of the udder, while cattle have four (one for each quarter).
Orifice(s)- The opening of the duct in the teat that allows milk to pass out of the body of the animal.
Mastitis- An infection of the mammary system, affecting one or both halves of the udder. If the infection is left untreated, not treated for the full recommended length of time, not treated soon enough, or if the bacteria present have become resistant to the medication being used (usually due to not treating for the full recommended time), the bacteria may permanently damage or destroy the mammary tissue. In short, your dairy goat will no longer produce milk. Mastitis is caused by different types of bacteria; the most common are those in the E. coli family.
*KEEP YOUR BEDDING/ ENCLOSURES CLEAN:
Keeping your shelters, barns, and paddocks clean will go a long way toward keeping your dairy goats from contracting mastitis. Goats have a tendency to urinate and defecate in their favorite sleeping spots and especially in their shelters. Urine-soaked feces and bedding not only attract flies, but are also hotbeds for the growth of bacteria.
When your dairy goat goes to lay down, the weight of the animal can press on the udder causing the teat orifices to open; sometimes releasing the pressure of the milk. Open orifices, in contact with fecal debris, are the primary route of infection by harmful bacteria.
*ON THE MILK STAND:
Proper preparation and cleaning prior to and after milking is just as important as providing clean housing in order to prevent infection. Before you begin milking, wash your hands. Then, using a soft clean cloth and warm water with a mild soap, wash the goat’s udder and teats. Be firm enough to wipe off any debris, and move in downward strokes over the teats to prevent the introduction of bacteria or dirt into the orifices.
*NEVER wipe upward, or rub the whole udder in a circular motion.
You may also use rubbing alcohol or commercially prepared disinfectants/ udder washes available at your local feed store/ on-line livestock warehouse.
After milking, the orifices remain open for some time. So again, using the method prescribed above, re-wash the udder and teats to remove any milk, sweat, or oil from your own hands. Then apply bag balm or any other udder moisturizer/conditioning lotion to finish. It is important that as a dairy goat owner you take this extra step in your doe’s care. Keeping the skin clean and moisturized keeps your doe’s skin supple and helps prevent painful chafing.
*During Milking/ Drying-Off:
Milking your goat also provides ample opportunity for mastitis to occur. During milking, any residual oil or dirt harboring bacteria from either your hands or the animal’s own skin may inadvertently be introduced into the teat orifices. So it is very important that the milker takes care NOT to come into contact with the orifice of the teat, to help prevent the accidental infusion of bacteria.
Mastitis may also be caused by damage to the mammary tissue from improper milking technique. The most common mistake is allowing milk to flow back up into the mammary, either by a) not squeezing out all the milk in the duct/teat, or b) squeezing the duct while milking so that the milk forcefully 'shoots' back up into the mammary; this is a very common mistake made by beginners, but no less harmful!
Damage may also be caused by:
~'Bumping' the doe too hard, or too vigorously.
~PULLING down on the teat to extract milk rather than using the proper method that should be used when milking a goat that utilizes a technique to squeeze the milk out of the teat/mammary duct.
~Allowing large hoof stock to nurse directly from a goat rather than use a bottle.
~ Milk Congestion (during the drying-off process milk may 'congeal' and form blockages in the mammary tissue leading to inflammation, fluid build-up, & infection.)
~General Injury (any injury sustained either in the paddock, or pasture that was not obtained during the actual milking process.)
When you are first learning to milk a goat (or even if you are experienced) it is important to take your time. The animals may not wish to stand for you as you learn, so be sure to give them their breakfast/dinner on the stand while you are learning. Do not rush. Everyone makes mistakes, but mastitis is a mistake that can permanently put your doe out of commission. The health and safety of your animals should be the FOCUS of everything you do with your dairy goats in order to ensure you both will enjoy a long-lasting, productive, working relationship.
|
Forerunner of the Reformation
by Burk Parsons
John Wycliffe was the morning star of the Reformation. He was a protestant and a reformer more than a century before Martin Luther ignited the Protestant Reformation in 1517. Through Wycliffe, God planted the seeds of the Reformation, He watered the seeds through John Hus, and He brought the flower of the Reformation to bloom through Martin Luther. The seed that would flower in the German Augustinian monk Luther’s 95 theses was planted by the English scholar and churchman John Wycliffe.
Wycliffe died on New Year’s Eve, 1384. Three decades later, he was condemned as a heretic. In 1415, the Council of Constance condemned the Bohemian reformer John Hus (c. 1370-1415) and burned him at the stake, and it condemned Wycliffe on 260 counts of heresy. The council ordered that Wycliffe’s bones be exhumed, removed from the honored burial grounds of the church, and burned, and his ashes scattered. More than a decade later, the Roman Catholic Church sought to counteract the spreading heresies of Wycliffe and his followers, the Lollards, by establishing Lincoln College, Oxford, under the leadership of Bishop Richard Fleming. Although the pope could condemn Wycliffe’s teachings and scatter his bones, he was unable to stamp out his influence. Wycliffe’s ashes were scattered into the River Swift in England’s Midlands, and as one journalist later observed: “They burnt his bones to ashes and cast them into the Swift, a neighboring brook running hard by. Thus the brook hath conveyed his ashes into Avon; Avon into Severn; Severn into the narrow seas; and they into the main ocean. And thus the ashes of Wycliffe are the emblem of his doctrine which now is dispersed the world over.”
Wycliffe was committed to the authority and inspiration of Holy Scripture, declaring, “Holy Scripture is the highest authority for every believer, the standard of faith and the foundation for reform in religious, political and social life … in itself it is perfectly sufficient for salvation, without the addition of customs or traditions.” As such, Wycliffe oversaw the translation of the Bible from Latin into the English vernacular. This was a radical undertaking, and it was against the express mandate of the papacy. His understanding of Scripture naturally led to his understanding of justification by faith alone, as he declared, “Trust wholly in Christ. Rely altogether on his sufferings. Beware of seeking to be justified in any other way than by his righteousness. Faith in our Lord Jesus Christ is sufficient for salvation.”
In the fourteenth century, at the dawn of the Reformation, Wycliffe shone as a burning and shining light of gospel truth, and his doctrine mirrored his life as one who lived by God’s grace and before God’s face, coram Deo, and for God’s glory. Soli Deo gloria. |
Finding a Place for Monstrous Nature in the Environmental Movement
In Sanitation in Daily Life, Richards demonstrates that humans are part of nature, generating the basis for the human ecology movement. For Richards, urban problems like air and water pollution were products of human activity imposed on the environment and, subsequently, best resolved by humans. The human ecology movement evolved into home economics, but its grounding in conservation had lasting effects, including the environmental justice movements, health ecology, and urban planning.
In Sand County Almanac (1949), Aldo Leopold advocates for the good of all life as part of an ecosystem that includes humans, nonhuman animals, and plant life, not just those animals seen as sentient. For Leopold, “the individual is a member of a community of interdependent parts,” and those parts include all elements of the natural environment, from soil and plants to Bambi. A graduate of the Yale forestry school, Leopold promoted game management, evolutionary biology, and ecology, rather than sentimental anthropomorphism.
These two books helped ground our readings of a perhaps pristine natural world exploited and decimated by humanity. What was missing for us, though, were explorations that address monstrous nature like the cockroach, parasite, cyborg, and cannibal. Four books helped us turn these “monsters” into part of the land ethic Aldo Leopold proposes: Cockroach, The Art of Being a Parasite, Simians, Cyborgs, and Women, and Dinner with a Cannibal.
Marion Copeland’s Cockroach (2004) provides a complex perspective on the cockroach and its strengths. Copeland notes multiple positive associations with cockroaches. Although their nocturnal nature has connected them with a Freudian unconscious and id, in Thailand, Australia, South America, and French Guiana, cockroaches serve as food, medicine, and folk tale source. Copeland also notes that cockroaches contribute to cancer research and emphasizes their physical and intellectual strengths by making explicit connections between cockroaches and humans. As with humans, female cockroaches have stronger immune responses than males. And cockroaches can learn new tricks, overcoming their aversion to light. They also can learn to run a maze, even without their heads!
To illustrate the interdependent relationships hosts and parasites may share, Claude Combes’ The Art of Being a Parasite (2005) defines and illustrates the multiple levels of parasitism. Combes differentiates those parasites that feed off a host without benefiting it from two other types: commensals and mutualists. Commensals live on or within another organism without harming or benefiting the host. Mutualists, on the other hand, do help their hosts. According to Combes, orchids are an apt example of mutualism, because to extract pollen from orchids, moths must have a long proboscis. As with some parasites and their hosts, orchids and moths have evolved mutually, deriving benefits interdependently. Although he emphasizes the interdependent relationships shared by parasites and their partner hosts, Combes debunks notions of mutualism that romanticize nature. Instead, parasites are part of a biotic community in which producers and consumers interact interdependently, surviving in relation to a food web that includes both life and death, not in a Disneyfied harmony like that found in Bambi (1942).
Donna Haraway’s (1991) Simians, Cyborgs, and Women explains how the cyborg combines elements of technology and machines with organic physical (probably human and female) bodies. For Haraway, though, cyborg fiction and film also offer a space in which women can deconstruct binaries that construct nature and the feminine as inferior to their binary opposites, culture and the masculine. We see such an exploration in contemporary Japanese body modification horror like Machine Girl, RoboGeisha, and Tokyo Gore Police. In these films, women, nature, and the machine merge, creating new organisms with the ability to modify themselves from within.
In her Dinner with a Cannibal: The Complete History of Mankind’s Oldest Taboo (2008), Travis-Henikoff provides evidence for multiple types of cannibalism, from the survival cannibalism noted in Jamestown to the medicinal cannibalism of the Inquisition. She notes, for example, that cannibalism is celebrated in at least one book and film, Alive (1993). Her work builds on the research of scientists and scholars from multiple fields, substantiating the existence of cannibalism without condemning its practice.
These books opened up avenues for aligning our early experiences with animals and the land ethic with less pleasant elements of the natural world.
Robin L. Murray is a professor of English at Eastern Illinois University. Joseph K. Heumann is professor emeritus from the Department of Communication Studies at Eastern Illinois University. Murray and Heumann are coauthors of That’s All Folks?: Ecocritical Readings of American Animated Features (Nebraska, 2011), Film and Everyday Eco-disasters (Nebraska, 2014), and Monstrous Nature (Nebraska, 2016). |
This is a short section that adds the idea of variables to the code.
x = 7;
Variables work as a shorthand -- we assign a value to a variable, and then use that variable on later lines to retrieve that value. In the simplest case, this just works to avoid repeating a value: we store the value once, and then can use it many times. All computer languages have some form of variable like this -- storing and retrieving values.
Change the code below so it produces the following output. Store the string "Alice" in a variable on the first line, like
x = "Alice";, then use the variable x on the later lines. In this way, changing just the first line to use the value "Bob" or "Zoe" or whatever changes the output of the whole program.
Alice Alice Alice Alice
In high school I had a crush on Alice
Now the Alice curse is lifted |
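One way to produce the output above is sketched here in the same simple syntax used in this section; since the original starter code is not shown, the print() call and the use of + to join strings are assumptions rather than the exercise's actual code.

x = "Alice";  // store the name once, then reuse it on each later line
print(x + " " + x + " " + x + " " + x);
print("In high school I had a crush on " + x);
print("Now the " + x + " curse is lifted");

Changing only the first line to x = "Bob"; (or any other name) changes every line of the output at once.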
Human behavior can negatively or positively affect the environment. Environmental conditions such as pollution, crowding, heat, or noise may be sources of stress that degrade environmental quality, while the environment can be positively affected by well-designed structures, green areas, or health facilities. There are simple steps that can help in getting started with these efforts.

Explain how environmental cues shape behavior and provide at least one example

Environmental cues are the everyday elements of a setting that the general public does not control, so individuals tend to respond to those cues rather than the other way around. For example, environmental cues such as food availability and large temperature fluctuations commonly upset the feeding routines of wildlife. A grocery store, as another example, has been carefully designed to maximize the amount of money you will have spent by the time you walk out. This includes fundamentals like placing necessities such as milk and eggs on the side farthest from the entry so you have to walk through additional aisles to get there, placing foods with kid appeal on lower shelves so children can see and request them, and placing impulse items by the cash registers to catch your attention while you wait in line. Even the smell drifting from the bakery is intended to increase the number of items in your shopping cart. The human mind typically acts on familiar environmental cues and patterns. If people gather in an environment where drug use is rampant, the majority of that population is likely to take on the behavior without considering the harmful effects their acts could have in the long run. It follows that human beings can also plant cues in the environment that generate change and reduce the negative effects currently experienced. A good model would be adopting the practice of using biodegradable bags for grocery shopping as a replacement for disposable plastics. Plastics ordinarily harm the environment in several ways: people typically do not dispose of them correctly, and they pose a health risk to animals that swallow them while eating. Implementing this practice will influence the environment positively in the long run because people's behavior will change accordingly.

Evaluate how behavior can be modified to support sustainability and how this can limit a negative impact on the environment
Behavior can be modified, for example, in our daily activities. Most people wake up in the morning, brush their teeth, and shower, and both of these activities require water. Instead of letting the water run constantly, a person can turn it off while brushing and use it only as needed, or, when showering, rinse to get wet, turn the water off while lathering up, and turn it back on to rinse off. This lessens the time the water runs and reduces waste. When grocery shopping, a person can elect to use either paper bags or their own reusable, environmentally safe bags. Sometimes a person uses their car out of habit and convenience; instead of driving to the corner store, a person may elect to walk or ride a bicycle. This in turn reduces the amount of pollutants released into the air while also affording exercise for the individual.

Describe how social norms influence behavior and beliefs about the environment

Social norms affect the way people conduct themselves, depending on their communal experiences and what society expects of them. With the current generation, nonetheless, these social norms have been washed away in many communities, and this has had. |
In the early 19th century, most Henricoans made their living by farming and related industries, such as milling. Coal mining was also important, especially in northern and western Henrico. The principal source of labor for these industries was slavery.
In 1800, a slave named Gabriel, owned by Thomas Henry Prosser of Brookfield plantation in Henrico County, conceived and organized a widespread slave uprising. Involving several Virginia localities, it was possibly the most far-reaching slave uprising planned in the history of the South.
The plan might have succeeded had it not been for a sudden, severe downpour and the disclosure of the plot by several slaves, including Tom and Pharoah, who belonged to Mosby Sheppard of Meadow Farm, in Henrico. The alarm went out and the rebellion was thwarted. The effects of the conspiracy were profound, and as a result, county and state leaders instituted legislation to regulate the movement of slaves and free blacks.
Image credit: “An Escaped Slave,” an engraving from a photograph, published in Harper’s Weekly, 2 July 1864. Engraving from the collection of the Library of Virginia. |
This image of the Ring Nebula, NGC 6720, shows the distinctive shape of the glowing remains of a Sun-like star.
Combining the visible-light views from the Hubble Space Telescope with infrared data from the ground-based Large Binocular Telescope, this composite image reveals that the nebula is more complex than astronomers previously thought.
Included is an inquiry-based classroom activity that focuses on the image and text.
This image from the Hubble Space Telescope showcases the distinctive shape of the Ring Nebula and offers the best view yet of the nebula. The observations reveal a more complex structure than previously thought. A captioned inset on the back shows the variety of shapes of planetary nebulae. Includes a classroom activity.
Teachers can use this lithograph as:
An example of a planetary nebula. Use the inquiry-based classroom activity called In Search of Planetary Nebulae Shapes that is included with the PDF lithograph.
An engagement tool in an inquiry-based lesson. Have students study the images on the lithograph. Ask them to write down any questions they have about the images. When the students are finished, their questions can be used in a variety of ways:
A content reading tool. Have students read the text on the lithograph and then write a quiz for the class.
HubbleSite press release: "NASA's Hubble Space Telescope Reveals the Ring Nebula's True Shape"
Amazing Space resources by topic: Stars and Stellar Evolution |
The cold is an infection of viral origin that affects the respiratory tract, primarily the nose and throat, sometimes reaching the trachea and bronchi. A person can contract the disease several times a year, most often in autumn, in spring, and towards the middle of winter.
In principle, it is a disease that runs a mild course, and within three to seven days the symptoms begin to disappear. However, more attention should be given to children, the elderly, pregnant women and debilitated people, who may be most affected.
The most common symptoms of a cold are nasal congestion and excessive mucus, accompanied by coughing, sneezing, sore throat and slight malaise. A cold usually heals without the onset of fever, although in some cases the temperature can reach 38-39 degrees. Although these symptoms are similar to the flu, there are some differences between them, especially the presence of fever for several days, a more severe cough, headache, joint and muscle pain, and a sharper feeling of weakness.
Better to prevent
The cold is a disease that spreads easily, and transmission occurs through contact with secretions carrying the virus. There is not yet an effective vaccine for its prevention, as the many types of virus that cause the disease mutate every year. But there are hygiene measures recommended to avoid it:
- Avoid contact with people who have it, especially during the first few days.
- Wash hands thoroughly and dry them with a towel that is not shared with the sick person.
- Protect yourself from cold environments and avoid crowded, stuffy places.
It is important that the body's defense system function properly to protect us in case of infection. Diet plays a vital role here, since proper feeding of the individual can stimulate immune function. One of the most important nutrients involved in this function is vitamin C, which is notably present in fruits and vegetables. Foods with a higher content of this vitamin are citrus fruits like oranges, tangerines and grapefruit, other fruits such as strawberries, kiwis and mangoes, and vegetables such as peppers and the cabbage family. A deficit of other nutrients can weaken the immune system and increase susceptibility to infections. These include minerals such as selenium and zinc, omega-6 fatty acids, and amino acids such as arginine and glutamine. Lactic acid bacteria, which are present mainly in fermented milk products, also seem to have a beneficial effect on the immune system. These microorganisms are able to cross the gastrointestinal barrier and act on the intestinal flora and mucosa, where they express their beneficial qualities.
Once you have contracted a cold, treatment is aimed at improving symptoms, as there is no medicine that can cure a disease caused by a virus. Antibiotics are not effective, except in cases of complications caused by bacteria.
The role of food during the cold
- A soft diet based on comforting and nutritious foods can help relieve the general malaise and mitigate the loss of appetite that may occur during convalescence. Food should be varied, simply prepared so that it is easy to digest, and rich in foods that stimulate immune function.
- An adequate fluid intake is essential to maintain good hydration in the case of a cold. This prevents the mucous membranes from drying out and helps liquefy secretions.
- Alcoholic beverages and those containing caffeine can cause dehydration, so it is recommended to avoid them in this case.
- Hot drinks such as broths, soups and teas can be an appealing and comforting way to take in the necessary amount of liquid.
It is advisable to prepare food simply to make it easier to digest. Avoid using an excessive amount of fat, as it can make dishes harder to digest.
Vitamin C helps prevent colds and reduce their duration, since it is a nutrient that stimulates the body's defenses. The best sources of this vitamin are vegetables and fruit, whose presence is essential in the diet to combat the cold.
Other nutrients that stimulate immune function are selenium, which is present in foods like eggs, whole-grain cereals, vegetables, meat and fish, and zinc, whose main dietary sources are liver, cheese, seafood, vegetables, eggs and nuts.
In case of fever, emphasis should be placed on an adequate intake of fluids to avoid dehydration and to help eliminate toxins from the body. A soft, easily digestible diet is also recommended.
Phytotherapy can also help improve the general condition and reduce the symptoms of a cold. Herbal teas that act on the airways are best suited, such as eucalyptus, echinacea, elder and verbena, among others.
Breakfast: Orange juice, milk and biscuits with honey.
Lunch: Mashed vegetables, grilled chicken breast with rice, applesauce.
Snack: Yoghurt with nuts and an herbal infusion.
Dinner: Noodle soup, omelette with tomato sauce, kiwi.
THE SUPPLY OF RAW MATERIALS
It was pointed out that growth cannot take place without favorable conditions in respect to a number of factors, including natural resources, labor power, capital, entrepreneurship, and advancing technology. These particular factors are basic for development. Other factors may prevent growth from reaching its potential; for example, insufficient purchasing power, economic instability, or unstable prices. Even if these other conditions are favorable, an unfavorable condition with respect to the basic factors would militate against growth. For example, at any stage of technological advance, growth would be limited by the supply of natural resources, taken to include the metals and minerals, energy sources, water, agricultural products, and forests. It is important then to examine our prospects in relation to future supplies of raw materials. Later chapters will deal with the other factors. Conversely, plentiful resources are encouraging. A new good source of raw materials can be counted on to increase investment in order to exploit it. This increased investment results in further growth of other industries as purchasing power expands.
Early economists pictured a dismal future for mankind because they thought that population increase would outrun the potential increase in food supply. This was based on the notion that little could be expected of the people in relation to restraining births whereas food supply could not be increased indefinitely because of the law of diminishing returns. Furthermore, relatively little progress was made in agricultural techniques so that it was hard to conceive of an improving technology that would be sufficient to overcome this tendency to diminishing returns. Although this early discussion was always in terms of the food supply, the same sort of reasoning could be applied to any resource and probably would have been if manufacturing had been more important. Thus man was thought to |
An interference microscope is used to measure the precise optical properties of materials (e.g., birefringence) and to study minute variations in surface morphology. The interference microscope is based on a Michelson interferometer, which splits a monochromatic beam of light (a single wavelength of light from the visible spectrum) into two beams: a beam that passes through the sample and a reference beam. As the sample beam passes through the sample, it is slowed down relative to the reference beam. After the sample, the light waves in the two beams recombine; but in recombining, the waves interfere, either constructively or destructively, creating light regions in the former case and dark regions in the latter. From the resulting bands of light and dark that make up an interferogram, we can quantitatively and qualitatively characterize optical properties and surface characteristics. |
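As a brief illustration of how the fringes encode optical properties (standard interferometry relations, not stated in the original text): for a sample of thickness $t$ and refractive index $n_s$ in a surrounding medium of index $n_0$, the optical path difference between the sample and reference beams, and the fringe conditions (up to any fixed phase offset introduced by the optics), are

$$\Delta = (n_s - n_0)\,t, \qquad \Delta = m\lambda \ \text{(bright)}, \qquad \Delta = \left(m + \tfrac{1}{2}\right)\lambda \ \text{(dark)}, \quad m = 0, 1, 2, \ldots$$

so counting fringes across the interferogram measures thickness or refractive-index variations in units of the wavelength $\lambda$.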
A best-selling introduction to Shakespeare, his world and his plays. Stimulating ideas and resources enable students to get to grips with the plays in an enjoyable way.
Key Assessment Objectives are covered through a range of activities. Topics include: study of historical context, plot, sub-plot, genre, character, themes, language, and staging
Includes advice on how to prepare for and approach the exam
Features sample exam questions
Typhoon Nabi was a Category 2 typhoon in the western Pacific when the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite captured this image on September 6, 2005 at 11:05 a.m. Tokyo time. It had sustained winds of around 160 kilometers an hour (100 miles per hour), and it was heading north across the southern end of Japan. The eye of the storm is roughly centered in the image, and the thick storm clouds completely hide the island of Kyushu. To the northeast of the eye, the smaller island of Shikoku and the largest Japanese island, Honshu, are also under the clouds.
These clouds brought a deluge to the southern islands and caused dangerous landslides in the region’s mountainous terrain. The landslides killed several people on Kyushu. As high waves pounded the coast, as much as 51 inches of rain may have fallen in 24 hours as the storm moved slowly northward into the Sea of Japan. The Japanese government had ordered evacuations for more than 100,000 people in the southern islands, according to reports from BBC News. Flights, road traffic, and ferry services were disrupted, and hundreds of thousands of people and businesses lost power.
NASA image created by Jesse Allen, Earth Observatory, using data obtained from the MODIS Rapid Response team.
- Terra - MODIS |
1 secesión (feminine), separación (feminine); secession from sth — secesión de algo
- In 1861, southern secession freed Republicans from the pressure to compromise to preserve the Union.
- A few traders advocated secession, but most were unionists.
- All opposed secession but in the end backed the Confederacy.
- He rejected the radical branch of the party that advocated secession in defense of states' rights and slavery.
- Through a moral equivalent of Civil War, we must prevent this secession from taking place.
- There can be no such thing as a peaceable secession.
- They are likely to fear that federalism might lead to secession.
- When the Civil War came along, this area of the South opposed secession.
- None of the candidates questioned Georgia's secession from the former Soviet Union.
- Thus, the actual reason for the South's secession was racism.
- He talks of other theories proposed by historians to explain Southern secession.
- It could also spark further claims for secession from other ethnic groups.
- In addition, perhaps as high as 40 percent of white Southerners had opposed secession.
- Thus some nationalism has involved movements that aim to break up existing states, through secession or fragmentation of various forms.
- They threatened secession if the colony did not join the Commonwealth.
- A first modification allows for unilateral secession of border regions.
- Khartoum has argued that the clause paves the way for the south's immediate secession.
- Every Indian leader has feared that if Kashmir breaks away then it could set off other movements for secession from the Indian state.
- However, the concern of aboriginal peoples is precipitated by the asserted right of Quebec to unilateral secession.
- Texas secessionists organized lynch mobs across the state to murder anyone who opposed secession.
Aphelion is the point at which a celestial body orbiting the Sun is at its farthest distance from it. Perihelion is the opposite of aphelion: the point at which the orbiting object is closest to the Sun. Any celestial object that orbits the Sun passes through both aphelion and perihelion in the course of each orbit. The difference between the two distances varies depending on the eccentricity of the object's orbit.
It is a false notion that the Earth revolves around the Sun in a perfect circle. The Earth follows its own elliptical orbit, though its eccentricity is very low, about 0.0167, compared with Mercury's 0.2056.
Aphelion currently occurs during the Northern Hemisphere's summer, in early July. Its date shifts by around 30 minutes every year as part of a roughly 21,000-year cycle, which means that eventually Earth's aphelion will fall during the Southern Hemisphere's summer.
Earth's distance from the Sun at perihelion is approximately 91 million miles (147 million kilometers). At aphelion, Earth's distance is about 95 million miles (152 million kilometers). These distances vary slightly from orbit to orbit, as they do for many objects that orbit the Sun.
Astronomers express such distances in astronomical units (AU); each AU represents the mean distance between the Earth and the Sun. Astronomers map the paths that objects trace as they orbit the Sun. These charts, found in atlases and science textbooks, depict the orbit of each planet and its elliptical shape, and they are also used to predict future occurrences of aphelion and perihelion. |
ACCUPLACER reading comprehension, sentence skills, writePlacer ESL guide and practice questions.
What is ACCUPLACER?
ACCUPLACER is a suite of tests that determines your knowledge in math, reading and writing as you prepare to enroll in college-level courses. ACCUPLACER is used to identify your strengths and weaknesses in each subject area. The results of the assessment, in conjunction with your academic background, goals and interests, are used by academic advisors and counselors to place you in the appropriate college courses that meet your skill level.
You cannot “pass” or “fail” ACCUPLACER tests, but it is very important that you do your very best on these tests so that you will have an accurate measure of your academic skills.
ACCUPLACER assessments are delivered in multiple-choice format with the exception of the WritePlacer®, a written essay assessment. All tests are untimed to allow you to focus and comfortably demonstrate your skills while answering questions.
ACCUPLACER assessments are computer-adaptive. Questions are selected based on your skill level; in other words, your response to one question determines the difficulty level of the following question. You are encouraged to give each question as much thought as you wish before selecting your final answer.
The number of questions varies depending on which ACCUPLACER assessment you take. There could be as few as 12 questions or as many as 40.
What is on an ACCUPLACER Test?
The Arithmetic test, comprised of 17 questions, measures your ability to perform basic arithmetic operations and to solve problems that involve fundamental arithmetic concepts. There are three types of Arithmetic questions:
- Operations with whole numbers and fractions: topics included in this category are addition, subtraction, multiplication, division, recognizing equivalent fractions and mixed numbers, and estimating.
- Operations with decimals and percents: topics include addition, subtraction, multiplication, and division with decimals. Percent problems, recognition of decimals, fraction and percent equivalencies, and problems involving estimation are also given.
- Applications and problem solving: topics include rate, percent, and measurement problems, simple geometry problems, and distribution of a quantity into its fractional parts.
WritePlacer (Written Essay)
ESL – Language Use
ESL – Listening
ESL – Reading Skills
ESL – Sentence Meaning |
An elaborate electronic helmet that allows the wearer to control a robot by thought alone has been unveiled by researchers in Japan.
Scientists at the Honda Research Institute demonstrated the invention today by using it to move the arms and legs of an Asimo humanoid robot.
To control the robot, the person wearing the helmet only had to think about making the movement. Its inventors hope that one day the mind-control technology will allow people to do things like turn air conditioning on or off and open their car boot without putting their shopping down.
The helmet is the first “brain-machine interface” to combine two different techniques for picking up activity in the brain. Sensors in the helmet detect electrical signals through the scalp in the same way as a standard EEG (electroencephalogram). The scientists combined this with another technique called near-infrared spectroscopy, which can be used to monitor changes in blood flow in the brain.
Brain activity picked up by the helmet is sent to a computer, which uses software to work out which movement the person is thinking about. It then sends a signal to the robot commanding it to perform the move. Typically, it takes a few seconds for the thought to be turned into a robotic action.
Honda said the technology was not ready for general use because of potential distractions in the person’s thinking. Another problem is that brain patterns differ greatly between individuals, and so for the technology to work brain activity must first be analysed for up to three hours. |
Career Opportunities in Geology
What is Geology?
Geology is the scientific study of the Earth, including the materials that it is made of, the physical and chemical processes that occur on its surface and in its interior, and the history of the planet and its life forms.
What do geologists do?
Geology is a multi-faceted field with many different areas of specialization. Listed below are some of the more common ones.
Earth Science Teachers: teach 'earth science' (a mixture of geology, oceanography and climatology) in junior and senior high schools. A teaching certificate from a professional education program is also normally required.
Economic Geologists: explore for and help produce metallic (iron, copper, gold, etc.) and non-metallic (coal, granite dimension stone, limestone aggregate, sand and gravel, etc.) rock and mineral resources of economic value.
Engineering Geologists: investigate the engineering properties of rock, sediment and soil below man-made structures such as roads, bridges, high-rise buildings, dams, airports, etc.
Environmental Geologists: study the environmental effects of pollution on ground and surface waters and surficial materials (rock, sediment and soil), and also recommend solutions to environmental problems. They are also interested in understanding, predicting and mitigating the effects of natural hazards, such as flooding, erosion, landslides, volcanic eruptions, earthquakes, etc.
Geochemists: investigate the chemical composition and properties of earth materials, especially polluted ground and surface waters, fossil fuels (such as petroleum and coal) and other resources of economic value.
Geology Professors: teach geology courses and conduct research in colleges and universities.
Geomorphologists: study the origin and evolution of landscapes on the continental surfaces.
Geophysicists: use the principles of physics to investigate the structure of the Earth's deep interior, explore for economic resources in the subsurface, and monitor pollution in ground water.
Glacial or Quaternary Geologists: study the history of geologically recent (Quaternary period) glaciers as well as the sediment deposits and landforms they produced.
Hydrogeologists: are concerned with water in the Earth's subsurface, including its sources, quality, abundance and movement.
Hydrologists: are concerned with water on the Earth's surface, including its precipitation, evaporation and runoff, and its abundance and quality in streams and lakes.
Marine Geologists: study the physical, chemical and biological characteristics of the sediments deposited on the ocean floors and the rocks that underlie them.
Mineralogists: investigate the origins, properties and uses of the minerals occurring within the Earth's rocks.
Paleontologists: study the remains of ancient animals and plants (fossils) in order to understand their behaviors, environmental circumstances, and evolutionary history.
Petroleum Geologists: explore for and help produce petroleum and natural gas from sedimentary rocks.
Petrologists: study the origins and characteristics of igneous, metamorphic and sedimentary rocks.
Sedimentologists: investigate the origins and characteristics of sediment deposits and the sedimentary rocks that form from them.
Seismologists: are geophysicists who study earthquakes, both to better understand the physical processes involved and to interpret the deep internal structure of the Earth.
Stratigraphers: investigate the time and space relationships among sedimentary and other rocks on local to global scales, and are also interested in the geochronology (absolute dating by radiometric methods) and fossil content of rock layers.
Structural Geologists: study the folding, fracturing, faulting and other forms of deformation experienced by rocks below the Earth's surface, and are also interested in how these processes relate to global Plate Tectonics.
Volcanologists: investigate volcanoes, especially their eruptions and deposits, in order to better understand physical processes involved and to predict volcanic eruptions.
Where do geologists work and what do they get paid?
The principal employers of geologists are, in order of decreasing numbers of jobs:
Annual salaries for geologists with a baccalaureate degree generally range between $35,000 and $55,000. Most of the better-paying jobs for geologists require a master's degree and offer annual salaries in the $45,000 to $75,000 range. A doctoral degree is required for university professorships and other research-intensive positions, and these jobs pay salaries in the $50,000 to $70,000 range.
According to the U. S. Department of Labor's Occupational Outlook Handbook (2002-2003 Edition), "employment of environmental scientists and hydrologists [including environmental geologists and hydrogeologists] is expected to grow faster than the average for all occupations through 2010. The need for companies to comply with environmental laws and regulations is expected to contribute to the demand for environmental scientists and some geoscientists, especially hydrologists and engineering geologists."
How do I become a geologist at UT?
Students with a broad interest in geology should pursue the Bachelor of Science degree in Geology, whereas those with interests in both environmental issues and geology may work toward a Bachelor of Science degree in Environmental Science following the "Geology Track". Students with both kinds of baccalaureate degrees are encouraged to pursue the Master of Science degree in Geology in order to acquire expertise in an area of specialization and so prepare themselves for a specific geological discipline and a better-paying job. |
Lesson 1 (from Chapter 1)
Genre is a type of literary work. Often a book's genre is identified on the cover or binding. Other times, such as with "The Neon Rain," the back cover copy establishes the book as being from the mystery genre. Literary elements such as character, setting, plot, and tone can all contribute to creating a mystery. The objective of this lesson is to learn about the mystery genre.
1) Class Discussion: What is genre? Identify the genre of "The Neon Rain." Discuss how you know the genre of this book. List elements that establish the genre. Be specific. Are these elements in other books of the same genre?
2) Small Group Discussion: Divide the students into groups of three or four. What other genre or category descriptions might be used to classify "The Neon Rain"? Discuss whether mystery, thriller, suspense, and adventure mean the same thing or are...
Question: What is tonsillitis, what are the symptoms, and how is it treated?
Answer: Tonsillitis refers to inflammation of the glands in the back of the throat. Tonsillitis typically is caused by a viral infection and occasionally is caused by a bacterium. Typical symptoms of tonsillitis include sore throat, swelling of the glands, runny nose, decreased appetite, fever -- basically a lot of the common symptoms of a common cold.
Typically tonsillitis can be treated with over-the-counter remedies such as acetaminophen (Tylenol) or ibuprofen (Advil). And on the rare occasions when it's caused by a bacterium, we'll treat it with an antibiotic. And the most common antibiotic that we use is penicillin. |
final electron acceptor of photosynthesis
An electron acceptor is a chemical entity that accepts electrons transferred to it from another compound. It is an oxidizing agent that, by virtue of its accepting electrons, is itself reduced in the process.
Typical oxidizing agents undergo permanent chemical alteration through covalent or ionic reaction chemistry, resulting in the complete and irreversible transfer of one or more electrons. In many chemical circumstances, however, the transfer of electronic charge from an electron donor may be only fractional, meaning an electron is not completely transferred, but results in an electron resonance between the donor and acceptor. This leads to the formation of charge transfer complexes in which the components largely retain their chemical identities.
The overall energy balance (ΔE), i.e., energy gained or lost, in an electron donor-acceptor transfer is determined by the difference between the acceptor's electron affinity (A) and the ionization potential (I) of the electron donor:
- ΔE = A − I
In chemistry, a class of electron acceptors that acquire not just one, but a set of two paired electrons that form a covalent bond with an electron donor molecule, is known as a Lewis acid. This phenomenon gives rise to the wide field of Lewis acid-base chemistry. The driving forces for electron donor and acceptor behavior in chemistry are based on the concepts of electropositivity (for donors) and electronegativity (for acceptors) of atomic or molecular entities.
Examples of electron acceptors include oxygen, nitrate, iron (III), manganese (IV), sulfate, carbon dioxide, or in some microorganisms the chlorinated solvents such as tetrachloroethylene (PCE), trichloroethylene (TCE), dichloroethene (DCE), and vinyl chloride (VC). These reactions are of interest not only because they allow organisms to obtain energy, but also because they are involved in the natural biodegradation of organic contaminants. When clean-up professionals use monitored natural attenuation to clean up contaminated sites, biodegradation is one of the major contributing processes.
In biology, a terminal electron acceptor is a compound that receives or accepts an electron during cellular respiration or photosynthesis. All organisms obtain energy by transferring electrons from an electron donor to an electron acceptor. During this process (electron transport chain) the electron acceptor is reduced and the electron donor is oxidized.
In chemistry, the electron affinity (Eea) of an atom or molecule X is the amount of energy released when an electron attaches to the neutral species to form a negative ion:
- X + e− → X−
The electron affinity, Eea, is defined as positive when the resulting ion has a lower energy, i.e. the attachment is an exothermic process that releases energy:
- Eea = Einitial − Efinal
Equivalently, Eea is the energy required to detach the electron again in the reverse process:
- X− → X + e−
Electron affinities of the elements
Although Eea varies greatly across the periodic table, some patterns emerge. Generally, nonmetals have more positive Eea than metals. Atoms whose anions are more stable than neutral atoms have a greater Eea. Chlorine most strongly attracts extra electrons; mercury most weakly attracts an extra electron. The electron affinities of the noble gases have not been conclusively measured, so they may or may not have slightly negative values.
Eea generally increases across a period (row) in the periodic table. This is caused by the filling of the valence shell of the atom; a group 7A atom releases more energy than a group 1A atom on gaining an electron because it obtains a filled valence shell and therefore is more stable.
A trend of decreasing Eea going down the groups in the periodic table would be expected. The additional electron will be entering an orbital farther away from the nucleus, and thus would experience a lesser effective nuclear charge. However, a clear counterexample to this trend can be found in group 2A, and this trend only applies to group 1A atoms. Electron affinity follows the trend of electronegativity. Fluorine (F) has a higher electron affinity than oxygen and so on.
The following data are quoted in kJ/mol. Elements marked with an asterisk are expected to have electron affinities close to zero on quantum mechanical grounds. Elements marked with a dotted box are synthetically made elements—elements not found naturally in the environment.
Molecular electron affinities
The electron affinity of molecules is a complicated function of their electronic structure. For instance the electron affinity for benzene is negative, as is that of naphthalene, while those of anthracene, phenanthrene and pyrene are positive. In silico experiments show that the electron affinity of hexacyanobenzene surpasses that of fullerene.
Electron affinity of Surfaces
The electron affinity measured from a material's surface is a function of the bulk material as well as the surface condition. Often negative electron affinity is desired to obtain efficient cathodes that can supply electrons to the vacuum with little energy loss. The observed electron yield as a function of various parameters such as bias voltage or illumination conditions can be used to describe these structures with band diagrams in which the electron affinity is one parameter. For one illustration of the apparent effect of surface termination on electron emission, see Figure 3 in Marchywka Effect.
From Yahoo Answers
Answers:1. oxygen - hence u work out, produce water (sweat) 2. reflected, if absorbed, then u can't see it. 3. carbon dioxide and water 4. this is tricky, my guess is source of electron for photosynthesis (usually it's involved in the energy production pathways of an organism) 5. aerobic respiration
Answers:By accepting electrons, DPIP will speed up electron flow, water splitting and oxygen production. DPIP ain't NADPH so I would imagine the dark rxn would probably be inhibited.
Answers:Pyruvate! If it is anaerobic, no oxygen would be available.
Answers:I can only find sulfate reduction in the metabolism of archea and bacteria in my bio book. It's biological science by scott freeman. Under sulfate reducers, it lists H2 or organic compounds as the e- donor, (SO4)2- as the e- acceptor. The by products are H2O or CO from the donor and H2S from the acceptor. Hope this helps. |
How Can Virtual Reality Be Used to Promote Retention?
Corporate America invests about $60 billion a year in training and workforce development, but research shows that only a small number of employees who receive training can remember enough of it to apply to their jobs. So why does this enormous investment not pay off?
Usually students, trainers, or even the lessons themselves are blamed for low retention, but I believe the biggest problem is rooted in a lack of understanding about how our brain learns and retains information. In other words, we have to pay attention to the limitations of the brain when designing or teaching a course or delivering training.
How do we process information?
Any information that needs to be processed by our brain has to be picked up by our ears or eyes, and this information is held by our sensory memory for only a split second. Yet, just a small amount of information goes on to the next stage of memory: working memory. Working memory is what allows you to hold onto multiple pieces of information in order to complete cognitive tasks, such as comprehending this article. Once the information is processed, working memory sends it to long-term memory. Working memory is manageable, but it has space and duration limitations when dealing with new, unfamiliar information because it can only hold 5 to 9 elements for about 20 seconds and process only two to four elements at a time. In contrast, long-term memory keeps permanent information to be retrieved when necessary. The processed information in working memory is stored in long-term memory. Even though working memory has limits for the amount of information to process and duration of information to keep, working memory is not subject to these limitations when information is retrieved from long-term memory because stored information is treated as one element in working memory, no matter how complex it is. In other words, we do not process or store any information if information is not engaging, meaningful, and emotionally compelling enough to go through our sensory memory. Yet, not all educational materials are designed to enable learners to store and retrieve all essential information, and this causes enormous losses.
Forgetting: a human feature
Chances are you think you are good at remembering. People’s appraisal of their memory is much like their assessment of their driving skills, in that the great bulk of adults think they’re above average. However, neuroscientists believe that forgetting is an essential part of memory. From the evolutionary perspective, our brain can’t handle all sensory information, so it has to flush nonessential information to focus on necessary information. This process is called “desirable forgetting.” However, our brain is not capable of determining which information is essential, and it may flush out important information resulting in “undesirable forgetting.” One way to help our brain retain information is to explore ways to decrease undesirable forgetting. Henry Roediger, a professor at Washington University in St Louis, has studied possible ways to minimize undesirable forgetting for almost forty years. According to him, we can stimulate our brain when a particular piece of information needs to be stored in our long-term memory, which will increase our retention rate.
So, how is this related to Virtual Reality?
Learners of all ages typically retain between 10% and 30% of what they read and see. Traditional educational approaches usually fail to help us retain information because they do not stimulate our brain as much as virtual reality does.
Aukstakalnis and Blatner define virtual reality as “a way for humans to visualize, manipulate, and interact with computers.” Careful research shows that the use of virtual reality in education increases learners’ performance and retention by 35%, because virtual reality ignites our brain as visual stimuli impinge on our eyes and spoken words or music impinge on our ears. When we attend to the incoming sounds and images, both will pass through our sensory memory to be processed in our working memory, and therefore the material is more likely to be stored in our long-term memory. Every time we are exposed to the same sounds and images, not only will we be able to retrieve that information easily, but we will also remember the details of that lesson. Moreover, VR technology enables the delivery of specific, consistent, and relevant context that allows learners to immerse themselves in a virtual world. This immersion tricks our senses, increasing electrical activity in our brain. Electrical activity increases as VR enables a learner to look all around, internalize the environment, and move through scenes, which makes the learning experience incredibly realistic and very memorable.
Long story short, virtual reality has the potential to create training that increases retention and performance; hence, VR may make money spent in training worthwhile.
Author: Filiz Aktan, PhD Candidate |
Middle-Dnipro culture [Середньодніпровська культура; Seredniodniprovska kultura]. A Bronze Age archeological culture of the late 3rd to mid-2nd millennium BC found along the tributaries of the middle and upper Dnipro River and the Desna River. It was first identified by Vasilii Gorodtsov in the 1920s. An offshoot of the Corded-Ware culture in Western Ukraine and other parts of Europe, its trademark was pottery with imprints of rope or small etched lines forming horizontal layers of ornamentation. On the basis of changes in pottery style, scholars have divided the culture into early (24th–22nd century BC), middle (22nd–17th century BC), and late (17th–15th century BC) periods of development. Its major economic activities were agriculture and animal husbandry, although it also traded with tribes in the northern Caucasus Mountains, the Carpatho-Danube region, and the Baltic region. Houses consisted of surface dwellings with stone hearths. The culture practiced both full body and cremation burials in either dugout graves or kurhans. Material culture remains found at excavation sites included stone querns, flint chisels, arrowheads, copper and amber adornments, stone and bronze tools, and (corded-ware) pottery made from clay with an admixture of fine sand. Notable Middle-Dnipro culture archeological sites include Iskivshchyna (near Kaniv), Pekari, and Mys Ochkynskyi.
[This article originally appeared in the Encyclopedia of Ukraine, vol. 3 (1993).] |
In this topic I give an introduction to SAS, explaining the basics of SAS in brief. This is for beginners who are just getting started with learning SAS. You will also find the learning path for beginners and advanced users.
Introduction to SAS – What is SAS?
SAS stands for Statistical Analysis System (or Software), a powerful statistical package. It includes many modules for data management, data mining and statistical data analysis. SAS is available for both Windows and UNIX platforms.
Introduction to SAS – What we can do with SAS?
SAS is an integrated system of software products provided by the SAS Institute that enables a user to perform:
- Data Management: Data entry, retrieval, Data Cleaning, and Data mining
- Report Generation: We can generate different reports including graphs
- Data Analysis: We can perform everything from simple descriptive data analysis to advanced statistical data analysis and operations research
- Predictive Modelling: SAS has powerful modules for forecasting and decision support
- Data Warehousing: SAS is a BI tool and can perform all ETL operations (Extract, Transform, Load)
Introduction to SAS – What is a SAS program?
SAS is driven by SAS programs, or procedures, which we can use to perform various operations on data stored as tables called datasets. SAS also provides menu-driven graphical user interfaces (such as SAS Enterprise Guide (EG) and SAS Enterprise Miner (EMiner)), which help non-programmers.
However, most of the interaction with the SAS system to perform analytical operations is done by writing SAS programs. SAS programs provide a higher level of flexibility compared to the menu-driven interface. Also, a menu-driven interface for SAS is not provided on platforms like UNIX and mainframe.
SAS programs are composed of two fundamental components: the Data step and the Proc step (procedures).
Data steps are used to create or modify data sets. We can use Data steps for:
- Defining the structure of the data: We can define the variables and assign the data
- Creating the data: We can input the data or read it from files, take subsets of existing data, merge more than one data set, or update the data
- Modifying the data: We can modify the existing data, create new data sets and update the existing data
- Checking for correctness: We can check if there are errors in the data
An Example Data Step:
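The original example does not appear here, so the following is a minimal illustrative sketch; the data set name, variables and values are made up for illustration, not taken from the original post.

DATA work.class;                     /* create a temporary SAS data set */
   INPUT name $ age height weight;   /* $ marks name as a character variable */
   DATALINES;
Alice 14 62 102
Bob 15 67 133
Zoe 13 60 95
;
RUN;

Running this step creates a data set called work.class with one character variable and three numeric variables, which later Proc steps can reference through their DATA= option.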
Proc steps are pre-written procedures in SAS; each Proc step is created for a particular form of data manipulation or statistical analysis to be performed on data sets created in the Data step. We can use Proc steps for:
- Printing the contents of a data set and create reports (Example, PROC PRINT)
- Producing frequency counts and cross tabulations (Example, PROC FREQ)
- Generating summaries and aggregates (Example, PROC MEANS, PROC SUMMARY)
- Applying statistical techniques and analyzing the data (Example, PROC TTEST, PROC REG)
- Generating the Charts (Example, PROC GPLOT)
- Sorting, listing and exporting the results and creating data sets
An Example Proc Step:
PROC PRINT DATA=summarytables.categores_copy;   /* list the observations in the data set */
RUN;
SAS programs are written using the SAS language to manipulate, clean, describe and analyze data. So it is important to understand and learn the SAS language in order to use SAS.
A typical SAS program consists of one or more Data steps to get and define the data in a format that SAS can understand, and one or more Proc steps to analyze the data. All SAS statements must end with a semicolon.
A beginner-level SAS user should at least know how to create simple data sets and perform basic operations to analyze the data. We will start with the following Data and Proc statements to do basic tasks using the SAS language.
DATA; INFILE; INPUT; SET; CARDS; DATALINES; TITLE; LABEL; FORMAT; IF/THEN; ELSE; WHERE; SORT; MERGE; PROC PRINT; PROC FREQ; PROC MEANS; PROC GPLOT; PROC SQL;
We will discuss these statements in the next few topics, beginning with the statements used in the Data step and the Proc step.
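As a preview of how the pieces fit together, here is a minimal sketch of a complete program that combines a Data step and a Proc step; the file path, data set name and variable names are hypothetical.

DATA work.sales;                                /* read raw data from an external file */
   INFILE 'C:\data\sales.csv' DSD FIRSTOBS=2;   /* hypothetical path; DSD reads comma-separated values */
   INPUT region $ month $ revenue;
   IF revenue > 0;                              /* keep only rows with positive revenue */
RUN;

PROC MEANS DATA=work.sales MEAN SUM;            /* summary statistics of revenue by region */
   CLASS region;
   VAR revenue;
RUN;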
Gender in English
A system of grammatical gender, whereby every noun was treated as either masculine, feminine or neuter, existed in Old English, but fell out of use during the Middle English period. Modern English retains features relating to natural gender, namely the use of certain nouns and pronouns (such as he and she) to refer specifically to persons or animals of one or other genders and certain others (such as it) for sexless objects – although feminine pronouns are sometimes used when referring to ships (and more uncommonly some airplanes and analogous machinery) and nation states.
Some aspects of gender usage in English have been influenced by the movement towards a preference for gender-neutral language. This applies in particular to avoidance of the default use of the masculine he when referring to a person of unspecified gender, usually by using they as a gender-neutral third-person singular, and to avoidance of certain feminine forms of nouns (such as authoress and poetess).
Gender in Old English
Old English had a system of grammatical gender similar to that of modern German, with three genders: masculine, feminine, neuter. Determiners and attributive adjectives showed gender inflection in agreement with the noun they modified. Also the nouns themselves followed different declension patterns depending on their gender. Moreover the third-person personal pronouns, as well as interrogative and relative pronouns, were chosen according to the grammatical gender of their antecedent.
For details of the declension patterns and pronoun systems, see Old English grammar.
Decline of grammatical gender
By the 11th century, the role of grammatical gender in Old English was beginning to decline. The Middle English of the 13th century was in transition to the loss of a gender system. One element of this process was the change in the functions of the words the and that (then spelt þe and þat; see also Old English determiners): previously these had been non-neuter and neuter forms respectively of a single determiner, but in this period the came to be used generally as a definite article and that as a demonstrative; both thus ceased to manifest any gender differentiation. The loss of gender classes was part of a general decay of inflectional endings and declensional classes by the end of the 14th century. While inflectional reduction seems to have been incipient in the English language itself, some theories suggest that it was accelerated by contact with Old Norse, especially in midland and northern dialects.
Gender loss began in the north of England; the south-east and the south-west Midlands were the most linguistically conservative regions, and Kent retained traces of gender in the 1340s. Late 14th-century London English had almost completed the shift away from grammatical gender, and Modern English retains no morphological agreement of words with grammatical gender.
Gender is no longer an inflectional category in Modern English. The only traces of the Old English gender system are found in the system of pronoun–antecedent agreement, although this is now generally based on natural gender – the sex, gender identity, or perceived sexual characteristics (or asexual nature), of the pronoun's referent. Another manifestation of natural gender that continues to function in English is the use of certain nouns to refer specifically to persons or animals of a particular sex: widow/widower, actor/actress, etc.
Benjamin Whorf described grammatical gender in English as a covert grammatical category. He noted that gender as a property inherent in nouns (rather than in their referents) is not entirely absent from modern English: different pronouns may be appropriate for the same referent depending on what noun has been used. For example, one might say this child is eating its dinner, but my daughter is eating her (not its) dinner, even though child and daughter in the respective sentences might refer to the same person.
- he (and its related forms him, himself, his) is used when the referent is a male person, and sometimes when it is a male animal (or something else to which male characteristics are attributed);
- she (and her, herself, hers) is used when the referent is a female person, sometimes when it is a female animal, and sometimes when female characteristics are attributed to something inanimate – this is common especially with vessels such as ships and airplanes, and sometimes with countries. An Example is in God Bless America, where one lyric is "Stand beside her, and guide her through the night with a light from above."
- it (and itself, its) is used when the referent is something inanimate, often when it is an animal, and sometimes for a child when the sex is unspecified.
Pronoun agreement is often with the natural gender of the referent (the person or thing denoted) rather than simply the antecedent (a noun or noun phrase which the pronoun replaces). For example, one might say either the doctor and his patients or the doctor and her patients, depending on one's knowledge or assumptions about the sex of the doctor in question, as the phrase the doctor (the antecedent) does not itself have any specific natural gender. Also, pronouns are sometimes used without any explicit antecedent. However, as noted above (the example with child and daughter), the choice of pronoun may also be affected by the particular noun used in the antecedent.
(When the antecedent is a collective noun, such as family or team, and the pronoun refers to the members of the group denoted rather than the group as a single entity, a plural pronoun may be chosen: compare the family and its origins; the family and their breakfast-time arguments. See also synesis.)
Because there is no gender-neutral pronoun, problems arise when the referent is a person of unknown or unspecified sex. Traditionally the male forms he etc. have been used in such situations, but in contemporary English (partly because of the movement towards gender-neutral language) this is often avoided. Possible alternatives include:
- use of he or she, he/she, s/he, etc.
- alternation or random mixture of use of she and he
- use of singular they (common especially in informal language)
- use of it (normally only considered when the antecedent is a word like child, baby, infant)
Although the use of she and he for inanimate objects is not very frequent in Standard Modern English, it is in fact fairly widespread in some varieties of English. Gender assignment to inanimate nouns in these dialects is sometimes fairly systematic. For example, in some dialects of southwest England, masculine pronouns are used for individuated or countable matter, such as iron tools, while the neuter form is used for non-individuated matter, such as liquids, fire and other substances.
In principle, animals are triple-gender nouns, being able to take masculine, feminine and neuter pronouns. However, animals viewed as less important to humans, also known as ‘lower animals’, are generally referred to using it; higher (domestic) animals may more often be referred to using he and she, when their sex is known. If the sex of the animal is not known, the masculine pronoun is often used with a sex-neutral meaning. For example,
Person A: Ah there’s an ant
Person B: Well put him outside
Animate pronouns he and she are usually applied to animals when personification and/or individuation occurs. Personification occurs whenever human attributes are applied to the noun. For example:
A widow bird sat mourning for her love.
Specifically named animals are an example of individuation, such as Peter Rabbit or Blob the Whale. In these instances, it is more likely that animate pronouns he or she will be used to represent them.
These rules also apply to other triple-gender nouns, including ideas, inanimate objects, and words like infant and child.
Traditionally, ships (even ships named after men, such as USS Barry), countries, and oceans have been referred to using feminine pronouns. The origins of this practice are not certain; it is not, as is sometimes postulated, a remnant of Old English's grammatical gender (in Old English, a ship, or "scip", was neuter, and a boat, or "bāt", was masculine). The practice is currently in decline, though still more common for ships, particularly in nautical usage, than for countries. In Modern English, calling objects "she" is an optional figure of speech, and in American English it is advised against by The Chicago Manual of Style.
In general, transgender individuals prefer to be referred to by the gender pronoun appropriate to the gender with which they identify. Some genderqueer or similarly-identified people prefer not to use either he or she, but a different pronoun such as they, zie, or so forth. Drag performers, when in costume, are usually referred to by the gender pronouns for the gender they are performing (for example, drag queens are usually called "she" when in drag).
Other English pronouns are not subject to male/female distinctions, although in some cases a distinction between animate (or rather human) and inanimate (non-human) referents is made. For example, the word who (as an interrogative or relative pronoun) refers to a person or persons, and rarely to animals (although the possessive form whose can be used as a relative pronoun even when the antecedent is inanimate), while which and what refer to inanimate things (and non-human animals). Because these pronouns distinguish only between animate and inanimate referents, they suggest that English has a second gender system which contrasts with the primary gender system. Relative and interrogative pronouns also do not encode number, as the following example shows:
The man who lost his head vs. the men who lost their heads
Other pronouns which show a similar distinction include everyone/everybody vs. everything, no one/nobody vs. nothing, etc.
Many words in modern English refer specifically to people or animals of a particular sex, although sometimes the specificity is being lost (for example, duck need not refer exclusively to a female bird; cf. Donald Duck). As part of the movement towards gender-neutral language, the use of many specifically female forms, such as poetess, authoress, is increasingly discouraged.
An example of an English word that has retained gender-specific spellings is the noun-form of blond/blonde, with the former being masculine and the latter being feminine.
Gender neutrality in English
Gender neutrality in English became a growing area of interest among academics during Second Wave Feminism, when the work of structuralist linguist Ferdinand de Saussure, and his theories on semiotics, became better known in academic circles. By the 1960s and 1970s, post-structuralist theorists, particularly in France, brought wider attention to gender-neutrality theory, and to the concept of supporting gender equality through conscious changes to language. Feminists analyzing the English language put forward their own theories about the power of language to create and enforce gender determinism and the marginalization of the feminine. Debates touched on such issues as changing the term "stewardess" to the gender-neutral "flight attendant", "fireman" to "fire fighter", "mailman" to "mail carrier", and so on. At the root of this contentiousness may have been a feminist reaction against the English language's shift from "grammatical gender" to "natural gender" during the early Modern era, which coincided with the spread of institutional prescriptive grammar rules in English schools. These theories have been challenged by some researchers, with attention given to additional possible social, ethnic, economic, and cultural influences on language and gender. The impact on mainstream language has been limited, yet has led to lasting changes in practice.
Features of gender-neutral language in English may include:
- Avoidance of gender-specific job titles, or caution in their use;
- Avoidance of the use of man and mankind to refer to humans in general;
- Avoidance of the use of he, him and his when referring to a person of unspecified sex (see under Personal pronouns above).
Certain naming practices (such as the use of Mrs and Miss to distinguish married and unmarried women) may also be discouraged on similar grounds. For more details and examples, see Gender neutrality in English.
- Curzan 2003, pp. 84, 86: "[T]he major gender shift for inanimate nouns in written texts occurs in late Old English/early Middle English, but [. . .] the seeds of change are already present in Old English before 1000 AD."
- Lass, Roger (2006). "Phonology and morphology". In Richard M. Hogg, David Denison. A history of the English language. Cambridge University Press. p. 70. ISBN 0-521-66227-3.
- Curzan 2003, p. 86: "[G]rammatical gender remained healthy in the personal pronouns through late Old English; it is not until early Middle English that the balance of gender concord in the pronouns tips towards natural gender, at least in the written language."
- Shinkawa, Seiji (2012). Unhistorical Gender Assignment in Laʒamon's Brut. Switzerland: Peter Lang.
- Hellinger, Marlis; Bussmann, Hadumod (2001). "English — Gender in a global language". Gender across languages: the linguistic representation of women and men 1. John Benjamins Publishing Company. p. 107. ISBN 90-272-1841-2.
- Curzan 2003, p. 53.
- Rodney Huddleston and Geoffrey K. Pullum, The Cambridge Grammar of the English Language (2002).
- 'English Language', Encarta, (Microsoft Corporation, 2007). "The distinctions of grammatical gender in English were replaced by those of natural gender.". Archived 2009-10-31.
- Benjamin Lee Whorf, 'Grammatical Categories', Language 21 (1945): 1–11. See also Robert A. Hall Jr, 'Sex Reference and Grammatical Gender in English', American Speech 26 (1951): 170–172.
- Siemund, Peter (2008). Pronominal Gender in English: A Study of English Varieties from a Cross-Linguistic Perspective. New York: Routledge.
- Compare the similar Early Modern English formation which is typified in the prose of the King James Bible (or Authorized Version), here shewed in the Gospel of St Matthew, v,13: Ye are the salt of the earth: but if the salt have lost his savour, wherewith shall it be salted? it is thenceforth good for nothing, but to be cast out, and to be trodden under foot of men.
- The Chicago Manual of Style, 15th edition, p. 356. 2003. ISBN 0-226-10403-6.
- Meyer, Charles F. (2010). Introducing English Linguistics International Student Edition. Cambridge University Press. p. 14. ISBN 9780521152211.
- Curzan 2003, pp. 39, 151, 156.
- Cameron 1992, p. 29.
Iron exhibits two different oxidation numbers in its compounds, +2 and +3, seen for instance in the two chlorides FeCl2 and FeCl3. Stable compounds form because the products are at a lower total energy than the reactants, and that is the case for both FeCl2 and FeCl3. But it is possible to oxidise the iron in FeCl2 further, to the +3 oxidation state, as in this reaction:

2FeCl2 + Cl2 --> 2FeCl3

Transition metals show variable valency because these elements have a d-orbital as the penultimate orbital and an s-orbital as the outermost orbital.
Now, the atomic number of iron (Fe) is 26.
Its electronic configuration, filled according to the Aufbau principle, is
(1s)2 (2s)2 (2p)6 (3s)2 (3p)6 (4s)2 (3d)6.
Written in shell order, the same arrangement is (1s)2 (2s)2 (2p)6 (3s)2 (3p)6 (3d)6 (4s)2; the 4s and 3d orbitals lie very close in energy.
Therefore, because electrons can be removed from both the 4s and the 3d orbitals, the element shows variable valency.
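As a brief illustrative sketch (an addition, not part of the original note), the two common iron ions can be written in noble-gas shorthand, which shows where the lost electrons come from:

\begin{align*}
\mathrm{Fe}\ (Z=26):\ & [\mathrm{Ar}]\,3d^{6}\,4s^{2}\\
\mathrm{Fe}^{2+}:\ & [\mathrm{Ar}]\,3d^{6}\quad\text{(both 4s electrons removed)}\\
\mathrm{Fe}^{3+}:\ & [\mathrm{Ar}]\,3d^{5}\quad\text{(one further 3d electron removed, leaving a half-filled d subshell)}
\end{align*}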
Another reason: an atom tends towards 2 or 8 electrons in its outermost shell; therefore, when Fe or any other transition element reacts with another element, the transition atom gives up or shares electrons according to the requirements of that reaction.
E.g.:
1. Cu 1+ & Cu 2+
2. Ag 1+ & Ag 2+
3. Hg 1+ & Hg 2+
4. Au 1+ & Au 3+
5. Fe 2+ & Fe 3+
6. Pb 2+ & Pb 4+
7. Sn 2+ & Sn 4+
8. Pt 2+ & Pt 4+
9. Mn 2+ & Mn 4+
Winter cover crops provide important ecological functions that include nutrient cycling and soil cover. Although cover crop benefits to agroecosystems are well documented, cover crop use in agronomic farming systems remains low. Winter cover crops are usually planted in the fall after cash crop harvest and killed the following spring before planting the next cash crop. Recent research has identified time and money as major impediments to farmer adoption of winter cover crops. Developing innovative cover crop management systems could increase the use of winter cover crops.
A scientist with the USDA Agricultural Research Service National Soil Tilth Lab and colleagues at Iowa State University investigated the potential for winter cereal cover crops to perpetuate themselves through self-seeding, thereby eliminating the cost of planting a cover crop each fall and time constraints between cash crop harvest and the onset of winter. Results from the study were published in the March-April 2008 issue of Agronomy Journal.
In the research investigation, winter rye, triticale, and wheat were planted and managed chemically and mechanically in varying configurations to facilitate self-seeding. After soybean harvest in the fall of 2004 and 2005, establishment and green ground cover of self-seeded winter cover crops were measured because of their important relationships with nutrient uptake capacity and soil erosion protection. The study revealed that plant establishment through self-seeding was generally accomplished within one week after soybean harvest. Green ground cover and self-seeding were consistently higher with wheat.
“The significance of this research, in addition to lowering the cost and risk of establishing cover crops, is to extend the ecological functions that cover crops perform beyond the normal cover crop termination dates between mid-April and early May,” says Dr. Jeremy Singer of the National Soil Tilth Lab. “Furthermore, producers using organic crop production techniques could adopt these systems because of the potential for enhanced weed suppression without soil disturbance.”
According to Singer, increasing the presence of cover crops on the landscape can increase nutrient capture and lower soil erosion, both of which can improve water quality.
Research is ongoing at the National Soil Tilth Lab to identify self-seeded cover crop systems that minimize competition with cash crops and maximize the effectiveness of self-propagation. The impacts of cover crops on soil quality in systems with biomass removal are also being investigated because cover crops can help offset the carbon and nutrient losses that occur when biomass is harvested in row crop production systems.
Source: American Society of Agronomy
A team of researchers at Harvard University has created a novel hydrogen fuel cell that keeps running even after its fuel supply is exhausted. The solid-oxide fuel cell (SOFC) converts hydrogen into electricity, and it can also store electrochemical energy like a battery. This allows the fuel cell to produce power for a short time after it has run out of hydrogen.
In a press statement, associate professor of materials science at the Harvard School of Engineering and Applied Sciences (SEAS) Shriram Ramanathan said: “This thin-film SOFC takes advantage of recent advances in low-temperature operation to incorporate a new and more versatile material. Vanadium oxide (VOx) at the anode behaves as a multifunctional material, allowing the fuel cell to both generate and store energy.”
The team’s findings were published in the journal Nano Letters in June, where they theorized that their hydrogen fuel cell would be useful in small-scale, portable energy applications, where a very compact and lightweight power supply is essential. “Unmanned aerial vehicles, for instance, would really benefit from this,” says lead author Quentin Van Overmeere, a postdoctoral fellow at SEAS. “When it’s impossible to refuel in the field, an extra boost of stored energy could extend the device’s life span significantly.”
Thin-film SOFCs traditionally use platinum for the electrodes (the two "poles" known as the anode and the cathode), which allows the cell to generate power for only about 15 seconds before the electrochemical reaction peters out. The Harvard team used a bilayer of platinum and VOx for their anode, which can continue operating without fuel for up to 14 times longer.
“There are three reactions that potentially take place within the cell due to this vanadium oxide anode,” says Ramanathan. “The first is the oxidation of vanadium ions, which we verified through XPS [X-ray photoelectron spectroscopy]. The second is the storage of hydrogen within the VOx crystal lattice, which is gradually released and oxidized at the anode. And the third phenomenon we might see is that the concentration of oxygen ions differs from the anode to the cathode, so we may also have oxygen anions being oxidized, as in a concentration cell.”
Ramanathan and his colleagues estimate that a more advanced fuel cell of this type, capable of producing power without fuel for a longer period of time, will be available for applications testing within two years.
In our world, each country follows a certain time-zone. These time-zones are crucial for expressing time conveniently and effectively. However, time-zones can sometimes be inexplicit due to variables such as daylight saving time, coming into the picture.
Moreover, representing these time-zones in our code can get confusing. In the past, Java provided multiple classes such as Date, Time and DateTime that also took care of time-zones.
However, new Java versions have come up with more useful and expressive classes such as ZoneId and ZoneOffset, for managing time-zones.
In this article, we'll discuss ZoneId and ZoneOffset as well as related DateTime classes.
We can also read about the new set of DateTime classes introduced in Java 8 in our previous post.
2. ZoneId and ZoneOffset
With the advent of JSR-310, some useful APIs were added for managing date, time and time-zones. ZoneId and ZoneOffset classes were also added as a part of this update.
As stated above, ZoneId is a representation of the time-zone such as ‘Europe/Paris‘.
There are two kinds of ZoneId implementation: first, a fixed offset relative to GMT/UTC; and second, a geographical region, which carries a set of rules for calculating the offset from GMT/UTC.
Let's create a ZoneId for Berlin, Germany:
ZoneId zone = ZoneId.of("Europe/Berlin");
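As a small side note, we can also ask the JDK for the default zone of the running JVM and for the full set of region IDs it knows about. The following is an illustrative, self-contained sketch (the class name is an arbitrary choice):

import java.time.ZoneId;
import java.util.Set;

public class ZoneIdExploration {
    public static void main(String[] args) {
        // The JVM's current default time-zone
        ZoneId systemZone = ZoneId.systemDefault();
        System.out.println("Default zone: " + systemZone);

        // All region IDs known to the JDK's time-zone data, e.g. "Europe/Berlin"
        Set<String> allZones = ZoneId.getAvailableZoneIds();
        System.out.println("Number of available zones: " + allZones.size());
        System.out.println("Berlin present: " + allZones.contains("Europe/Berlin"));
    }
}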
ZoneOffset extends ZoneId and defines the fixed offset of the current time-zone with GMT/UTC, such as +02:00.
This means that this number represents fixed hours and minutes, representing the difference between the time in current time-zone and GMT/UTC:
LocalDateTime now = LocalDateTime.now();
ZoneId zone = ZoneId.of("Europe/Berlin");
ZoneOffset zoneOffSet = zone.getRules().getOffset(now);
In case a country has 2 different offsets – in summer and winter, there will be 2 different ZoneOffset implementations for the same region, hence the need to specify a LocalDateTime.
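To make that concrete, here is a minimal runnable sketch (the class name and the two sample dates are arbitrary, illustrative choices) that asks the Europe/Berlin rules for the offset on a winter date and on a summer date:

import java.time.LocalDateTime;
import java.time.Month;
import java.time.ZoneId;
import java.time.ZoneOffset;

public class SummerWinterOffset {
    public static void main(String[] args) {
        ZoneId berlin = ZoneId.of("Europe/Berlin");

        // One date in winter (standard time) and one in summer (daylight saving time)
        LocalDateTime winter = LocalDateTime.of(2021, Month.JANUARY, 15, 12, 0);
        LocalDateTime summer = LocalDateTime.of(2021, Month.JULY, 15, 12, 0);

        // The same ZoneId yields two different offsets depending on the local date-time
        ZoneOffset winterOffset = berlin.getRules().getOffset(winter); // +01:00
        ZoneOffset summerOffset = berlin.getRules().getOffset(summer); // +02:00

        System.out.println("Winter offset: " + winterOffset);
        System.out.println("Summer offset: " + summerOffset);
    }
}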
3. DateTime Classes
Next let's discuss some DateTime classes, that actually take advantage of ZoneId and ZoneOffset.
ZonedDateTime is an immutable representation of a date-time with a time-zone in the ISO-8601 calendar system, such as 2007-12-03T10:15:30+01:00 Europe/Paris. A ZonedDateTime holds state equivalent to three separate objects, a LocalDateTime, a ZoneId and the resolved ZoneOffset.
This class stores all date and time fields, to a precision of nanoseconds, and a time-zone, with a ZoneOffset, to handle ambiguous local date-times. For example, ZonedDateTime can store the value “2nd October 2007 at 13:45.30.123456789 +02:00 in the Europe/Paris time-zone”.
Let's get the current ZonedDateTime for the previous region:
ZoneId zone = ZoneId.of("Europe/Berlin");
ZonedDateTime date = ZonedDateTime.now(zone);
ZonedDateTime also provides inbuilt functions, to convert a given date from one time-zone to another:
ZonedDateTime destDate = sourceDate.withZoneSameInstant(destZoneId);
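Putting this together, a short self-contained sketch (America/New_York is just an example destination zone, and the class name is arbitrary) shows that the converted value still represents the same instant:

import java.time.ZoneId;
import java.time.ZonedDateTime;

public class ZoneConversion {
    public static void main(String[] args) {
        ZoneId berlin = ZoneId.of("Europe/Berlin");
        ZoneId newYork = ZoneId.of("America/New_York");

        // The current moment as seen from Berlin
        ZonedDateTime sourceDate = ZonedDateTime.now(berlin);

        // The same instant, re-expressed in the New York time-zone
        ZonedDateTime destDate = sourceDate.withZoneSameInstant(newYork);

        System.out.println("Berlin:   " + sourceDate);
        System.out.println("New York: " + destDate);

        // Both values point at the same instant on the time-line
        System.out.println("Same instant: " + sourceDate.toInstant().equals(destDate.toInstant()));
    }
}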
OffsetDateTime is an immutable representation of a date-time with an offset in the ISO-8601 calendar system, such as 2007-12-03T10:15:30+01:00.
This class stores all date and time fields, to a precision of nanoseconds, as well as the offset from GMT/UTC. For example, OffsetDateTime can store the value "2nd October 2007 at 13:45.30.123456789 +02:00".
Let's get the current OffsetDateTime with 2 hours of offset from GMT/UTC:
ZoneOffset zoneOffSet = ZoneOffset.of("+02:00");
OffsetDateTime date = OffsetDateTime.now(zoneOffSet);
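It may also help to see how the two classes relate. The following small sketch (added for illustration; the class name is arbitrary) reduces a ZonedDateTime to an OffsetDateTime, dropping the region and its rules and keeping only the resolved offset:

import java.time.OffsetDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class OffsetVersusZoned {
    public static void main(String[] args) {
        ZoneId berlin = ZoneId.of("Europe/Berlin");

        // A ZonedDateTime carries the region and its full set of rules...
        ZonedDateTime zoned = ZonedDateTime.now(berlin);

        // ...while the corresponding OffsetDateTime keeps only the resolved offset
        OffsetDateTime offsetOnly = zoned.toOffsetDateTime();

        System.out.println("Zoned:  " + zoned);      // e.g. 2021-07-15T12:00+02:00[Europe/Berlin]
        System.out.println("Offset: " + offsetOnly); // e.g. 2021-07-15T12:00+02:00
    }
}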
OffsetTime is an immutable date-time object that represents a time, often viewed as hour-minute-second-offset, in the ISO-8601 calendar system, such as 10:15:30+01:00.
This class stores all time fields, to a precision of nanoseconds, as well as a zone offset. For example, OffsetTime can store the value “13:45.30.123456789+02:00”.
Let's get the current OffsetTime with 2 hours of offset:
ZoneOffset zoneOffSet = ZoneOffset.of("+02:00");
OffsetTime time = OffsetTime.now(zoneOffSet);
Getting back to the focal point, ZoneOffset is a representation of a time-zone in terms of its difference from GMT/UTC at a given time. This is a handy way of representing a time-zone, although other representations are also available.
Moreover, ZoneId and ZoneOffset are not only used independently but also by certain DateTime Java classes such as ZonedDateTime, OffsetDateTime, and OffsetTime.
As usual, the code is available in our GitHub repository.
Tear Catchers – For Those In Mourning
According to legend, tear bottles were prevalent in ancient Roman times, when mourners filled small glass vials or cups with tears and placed them in burial tombs as symbols of love and respect. Sometimes women were even paid to cry into "cups" as they walked along the mourning procession. Those crying the loudest and producing the most tears received the most compensation, or so the legend goes. The more anguish and tears produced, the more important and valued the deceased person was perceived to be. The bottles used during the Roman era were lavishly decorated and some measured up to four inches in height.
In ancient Persia it is said that when a sultan returned from battle, he checked his wives’ tear catchers to see who among them had wept in his absence and missed him the most.
Tear bottles reappeared during the Victorian period of the 19th century, when those mourning the loss of loved ones would collect their tears in bottles ornately decorated with silver and pewter. Special stoppers allowed the tears to evaporate. When the tears were gone, the mourning period would end.
However, the truth is perhaps somewhat different as most historians and archaeologists believe that these so-called "tear bottles" contained oily substances, perhaps fragrant ointments used as libations or to anoint the dead. Oddly enough, this theory was known well before modern chemical analyses, but so ingrained was the idea that these ancient bottles were "tear catchers," that people simply chose to ignore the facts and believe the romantic Victorian idea that they were tear catchers. The myth likely began with archaeologists and an oddly chosen term. Small glass bottles were often found in Greek and Roman tombs, and early scholars romantically dubbed them lachrymatories or tear bottles.
Preparing the Next Generation
With the U.S. Bureau of Labor Statistics predicting a sharp increase of jobs in science, technology, engineering, and math, there is a critical need to get the next generation of students and workers ready to step into those roles. Play is the work of childhood, and toys have the power to shape valuable life skills to prepare kids for these jobs that may not even exist yet. Problem-based play and spatial learning inspire kids to learn and grow to become whatever they want.
What is STEAM?
Often when people think of STEM, they only think of Science, Technology, Engineering, and Math, but engineers have to get creative in their projects along with the math and applied science they use. Art and creativity (the "A" in STEAM) is a huge part of what engineers do every day. Engineers take a problem, visualize it, and then build it. This is the same process that children use when they are building with Brackitz. Brackitz helps kids create, learn, and grow by facilitating their creativity through building.
Looking for a STEAM Gift For Girls and Boys?
Here are some more resources to find the perfect gift for your loved one this year:
What would happen if the Earth temperature increase by 1 degree?
People will also die in greater numbers as they struggle with the increasing heat. The ecosystem will collapse and a third of all life on earth will face extinction. Plant growth will slow, then stop. The world’s food centres will become barren and, within 85 years, one third of the planet will be without fresh water.
What would happen if the Earth’s temperature increased by 2 degree Celsius?
If warming reaches 2 degrees Celsius, more than 70 percent of Earth’s coastlines will see sea-level rise greater than 0.66 feet (0.2 meters), resulting in increased coastal flooding, beach erosion, salinization of water supplies and other impacts on humans and ecological systems.
What is the 2 degree Celsius limit and why is it important?
The 2-degree scenario is widely seen as the global community’s accepted limitation of temperature growth to avoid significant and potentially catastrophic changes to the planet.
Why a half degree temperature rise is a big deal?
Now, a major new United Nations report has looked at the consequences of jumping to 1.5 or 2 degrees Celsius. Half a degree may not sound like much. But as the report details, even that much warming could expose tens of millions more people worldwide to life-threatening heat waves, water shortages and coastal flooding.
Does 1 degree make a difference?
It turns out, that just a degree or two difference in temperature makes a big difference in your heating or cooling bill. That means you can save between $1.80 and $2.70 a month by raising the temperature one degree and as much as $9.00 to $13.50 by raising the temperature five degrees.
Where have some of the strongest and earliest impacts of global warming occurred?
Some of the strongest and earliest impacts have occurred at high latitudes; the impacts of global warming are not distributed equally over the planet. Some of the fastest-warming regions on the planet include Alaska, Greenland and Siberia. These Arctic environments are highly sensitive to even small temperature increases, which can melt sea ice, ice sheets and glaciers.
What is the most important effect of climate change?
Increased heat, drought and insect outbreaks, all linked to climate change, have increased wildfires. Declining water supplies, reduced agricultural yields, health impacts in cities due to heat, and flooding and erosion in coastal areas are additional concerns.
Which is the most widely discussed impact of climate change?
Ongoing effects include rising sea levels due to thermal expansion and melting of glaciers and ice sheets, and warming of the ocean surface, leading to increased temperature stratification. Other possible effects include large-scale changes in ocean circulation.
Which of the following greenhouse gases has the biggest impact on climate change?
Carbon dioxide accounts for most of the nation's emissions and most of the increase since 1990. Electricity generation is the largest source of greenhouse gas emissions in the United States, followed by transportation.
What does carbon footprint cause?
It includes carbon dioxide — the gas most commonly emitted by humans — and others, including methane, nitrous oxide, and fluorinated gases, which trap heat in the atmosphere, causing global warming.
How are greenhouse gases reduced?
There are many ways to reduce greenhouse gas emissions from the industrial sector, including energy efficiency, fuel switching, combined heat and power, use of renewable energy, and the more efficient use and recycling of materials.
What is the biggest contributor to global warming?
Why should we reduce greenhouse gas emissions?
Reducing Greenhouse Gas Emissions Could Prevent Premature Deaths. Reducing the flow of the greenhouse gases that spur global warming could prevent up to 3 million premature deaths annually by the year 2100, a new study suggests. Greenhouse gases such as carbon dioxide trap heat, helping warm the globe.
What are the benefits of reducing greenhouse gas emission?
One of the benefits associated with reducing greenhouse gas emissions is an improvement in air quality. Air quality is improved by electrifying transportation because emissions from vehicles are a significant part of smog and urban air pollution 30, 32, 38.
How much do we need to reduce greenhouse gases?
Stabilizing Earth’s temperature to significantly reduce risks to societies now requires greenhouse gas emissions to reach net zero by 2050. This translates to cutting greenhouse gases by about 50% by 2030 alongside significant removal of carbon dioxide from the atmosphere.
What are the main causes of the greenhouse effect?
Human activities are responsible for almost all of the increase in greenhouse gases in the atmosphere over the last 150 years. The largest source of greenhouse gas emissions from human activities in the United States is from burning fossil fuels for electricity, heat, and transportation.
What are the benefits of reducing pollution?
Benefits of reducing and reusing:
- Prevents pollution caused by reducing the need to harvest new raw materials
- Saves energy
- Reduces greenhouse gas emissions that contribute to global climate change
- Helps sustain the environment for future generations
- Saves money
Why is it important to reduce waste?
Waste Reduction. Although recycling items instead of throwing them away allows the material to be turned into something else, recycling everything isn’t the end goal for our waste. Reducing the amount of waste you produce overall – whether trash, recycling, or compost – will make the most impact for the planet.
What are advantages of pollution?
Key Takeaways. Pollution is related to the concept of scarcity. The existence of pollution implies that an environmental resource has alternative uses and is thus scarce. Pollution has benefits as well as costs; the emission of pollutants benefits people by allowing other activities to be pursued at lower costs.
Our cardiovascular system is composed of the heart (the pump); blood vessels consisting of arteries, veins, and capillaries (the pipes); and blood containing red and white blood cells, and plasma (the fluid). Cardiovascular disease (CVD) relates to diseases that affect the heart and blood vessels. These can include stroke, heart attack, coronary heart disease (CHD), angina, congenital heart disease etc. According to the British Heart Foundation there are around 7.4 million people living with heart and circulatory disease in the UK, with CHD being the most common. Heart and circulatory diseases cause more than a quarter of all deaths in the UK.1
Perhaps of particular relevance today is that pre-existing cardiovascular diseases and their risk factors, such as diabetes, obesity, and high blood pressure, have emerged as some of the most important reasons for severe complications from Covid-19.2
Our cardiovascular system carries out many functions. It transports blood around the body, delivering oxygen, nutrients, and hormones to our cells. It also carries waste products out of the cell for detoxification and excretion and is a crucial part of our defence against infection and illness, as it carries immune system components to where they are needed. It also helps to regulate our body temperature and stabilise pH and ionic concentration of our bodily fluids. Damage to our CV system can subsequently have an immense impact on our health.
Factors that can play a part in the development of CVD include inflammation, obesity, lack of exercise, insulin resistance, oxidative stress, smoking and chronic stress. The likelihood of developing CVD is therefore largely linked to our diet and lifestyle choices. The World Health Organization, for instance, has estimated that most heart attacks and strokes are preventable, and that a healthy diet and other lifestyle factors are central to this prevention. Poor diet combined with a sedentary lifestyle and chronic stress is the perfect recipe for developing CVD. However, it also means that we have great capacity to safeguard our CV health by optimising our nutrient intake, exercising, and taking steps to manage our stress. Furthermore, there are certain nutrients, botanicals and natural phyto-antioxidants that can specifically help to support our CV health. In this blog we will be discussing some of these specific nutrients.
Nitric oxide (NO) plays a crucial role in the protection against the development and progression of CVD. L-citrulline is an amino acid and the natural precursor to L-arginine. While l-arginine is the precursor for the synthesis of NO, supplementation is mostly ineffective at increasing NO synthesis. Unlike l-arginine, l-citrulline is not quantitatively extracted from the gastrointestinal tract or liver and its supplementation is therefore more effective at increasing l-arginine levels and NO synthesis.3
The cardioprotective roles of NO include regulation of blood pressure and vascular tone, (helps blood vessels dilate to promote proper blood flow), inhibition of platelet aggregation and leukocyte adhesion, and prevention of smooth muscle cell proliferation.4
Beets and leafy green vegetables are a good source of dietary nitrates, which the body can convert to NO. Including vitamin C-rich foods in the diet such as citrus fruits and vegetables can also be beneficial. Vitamin C can help to increase levels of NO by increasing its bioavailability and absorption in the body.
Horse Chestnut Extract
In traditional medicine, Horse chestnut (Aesculus hippocastanum) has a long history of use in addressing symptoms such as swelling and inflammation, and in supporting the strength of blood vessel walls. Most of its beneficial effects come from its biologically active compound, aescin.
Chronic venous insufficiency (CVI) is a condition where veins do not effectively return blood from the legs to the heart. It is associated with problems such as varicose veins, ankle swelling, and leg cramping. Research suggests that horse chestnut seed extract may be useful for CVI and can help to support the health of capillaries and veins. For instance, several randomised controlled trials have shown an improvement in CVI related signs and symptoms, such as leg pain, with horse chestnut extract compared with placebo.5
There is an increasing body of evidence that demonstrates that oxidative stress plays a fundamental role in the development of many diseases, including CVD. Oxidative damage to heart tissue can lead to heart disease, stroke, heart attack and other CV conditions. N-Acetyl-l-Cysteine (NAC) comes from the amino acid L-cysteine. Through its capacity to synthesize glutathione, a potent antioxidant, NAC has displayed many health-promoting properties with regards to CV health. Research also demonstrates it can help to increase NO production, which supports the dilation of veins, improving blood flow. This speeds up blood transportation back to the heart.6
Eating a diet rich in cysteine foods, including chicken, turkey, garlic, yoghurt, and eggs, has shown a decreased risk for strokes.7
Epidemiological research indicates that a diet abundant in fruit and vegetables offers protection against CVD, and this may be attributed, in part, to the flavonoid content.8 Studies have shown that flavonoids have several anti-atherosclerotic activities including anti-inflammatory, antioxidant, antiproliferative, antiplatelet, and provessel function activities9 and have been shown to be active against hypertension, inflammation, diabetes and vascular diseases.10
Quercetin is a flavonoid found in many plants and foods, such as onions, green tea, apples, berries, and red wine. In research, it has displayed considerable benefits in relation to the health of the heart, such as the inhibition of LDL oxidation; vasodilator effects; reduction of inflammatory markers, the protective effect on NO and endothelial function; prevention of oxidative and inflammatory damage; and helps to prevent the aggregation of platelets.10
Rutin is a flavonol, found abundantly in many plants including buckwheat, asparagus, apples, figs, and tea. It has also exhibited several pharmacological activities, including antioxidant, cytoprotective, vasoprotective, and cardioprotective activities.11
Grape Seed Extract
The oxidation of LDL cholesterol considerably increases the risk of atherosclerosis (accumulation of fatty plaque in the arteries), which is a well-known risk factor for CVD. Grape seeds are high in antioxidants, including phenolic acids, anthocyanins, and flavonoids. In human studies, grape seed extract has been found to exert reducing effects on oxidized LDL.12 Furthermore, a systematic review showed that grape seed extract appeared to significantly lower systolic blood pressure and heart rate.13
Acerola is a shrub or tree that is native to Central America, Mexico, and the Caribbean. The fruit is similar to a cherry and is a rich source of vitamin C. Vitamin C is a powerful antioxidant that has received considerable interest for its possible role in heart health. There are many observational studies that show an inverse relationship between vitamin C intake and major CV events and CVD risk factors. Hypertension is regarded as a major and independent risk factor of CVD. A systematic review last year showed that people with hypertension have a relatively low serum vitamin C, and vitamin C is inversely associated with both systolic blood pressure and diastolic blood pressure.14
Maritime Pine Bark Extract
Maritime pine trees grow in countries on the Mediterranean Sea and contain several bioflavonoids that exert anti-inflammatory and antioxidant effects.
Maritime pine bark extract has been used for respiratory conditions such as asthma, poor circulation and CVI.
Elevated total homocysteine has been identified as a risk factor for CVD. Homocysteine is an intermediary in the methylation cycle. Throughout the cycle it is recycled back to methionine if there is adequate B6, B12 and folate. Without sufficient B vitamins, homocysteine can build up and bring about damage to arteries. Over time, this can lead to atherosclerosis and CVD. Good sources of B vitamins include dark green leafy vegetables, nuts/seeds, eggs, meat/fish, and wholegrains.
Physical exercise is linked to longevity and can significantly reduce the risk of CVD, diabetes, high blood pressure, and obesity. It can also help to reduce stress, anxiety, and depression, and improve lipid profile.15 It is estimated that physical inactivity increases the risk of CVD by more than 20%.16
We are all currently experiencing unprecedented times, and stress levels are perhaps heightened for many of us. Whether it be loss of employment, adapting to a new way of working, or feeling socially isolated, stress can be detrimental to our CV health. The ‘fight or flight’ response is a primal evolutionary response to help us to survive. It is, however, largely unsuitable in today’s world of chronic stress. Modern stressors can be frequent and long-lasting, meaning the survival traits of the stress response (i.e., increased heart rate) are not effective in resolving the threat, and overtime can affect health. Chronic stress can contribute to high blood sugar, high blood pressure (which can damage artery walls), insulin resistance, decreased absorption of nutrients etc, all of which are bad news for our CV health.
Exercising, and enjoying a nutrient dense diet is a good place to start when dealing with stress. When we are stressed, the metabolism of our cells increases, and we burn through a lot more of the nutrients than we normally need. The adrenal glands will be working hard to produce stress hormones, so including plenty of foods to support the adrenal glands can be beneficial. These include vitamin C, magnesium, and the B vitamins. Limiting alcohol and caffeine and maintaining a healthy weight can also be beneficial.
- CVD relates to diseases that affect the heart and blood vessels
- Pre-existing cardiovascular diseases have emerged as some of the most important reasons for severe complications from Covid-19
- CV system transports blood around the body, delivering oxygen, nutrients, and hormones to our cells and is a crucial part of our defence against infection and illness
- Factors that can play a part in the development of CVD include inflammation, obesity, lack of exercise, insulin resistance, oxidative stress, smoking and chronic stress
- NO plays a crucial role in the protection against CVD. L-citrulline is an amino acid and precursor to L-arginine. l-arginine is the precursor for the synthesis of NO
- Horse chestnut has a long history of use in addressing symptoms such as swelling, inflammation, and in supporting blood vessel walls
- Through its capacity to synthesize glutathione, NAC has displayed many health-promoting properties with regards to CV health
- Epidemiological research indicates that a diet abundant in fruit and vegetables offers protection against CVD, and this may be attributed, in part, to the flavonoid content
- Grape seeds are high in antioxidants, including phenolic acids, anthocyanins, and flavonoids
- Vitamin C is a powerful antioxidant that has received considerable interest for its possible role in heart health
- Maritime pine bark extract has been used for respiratory conditions such as asthma, poor circulation and CVI
- Without sufficient B vitamins, homocysteine can build up and bring about damage to arteries
- Physical exercise is linked to longevity and can significantly reduce the risk of CVD, diabetes, high blood pressure, and obesity
- Chronic stress can contribute to high blood sugar, high blood pressure, insulin resistance, decreased absorption of nutrients, all of which are bad news for our CV health
If you have questions regarding the topics that have been raised, or any other health matters, please do contact me (Amanda) by phone or email at any time.
Amanda Williams and the Cytoplan Editorial Team
- British Heart Foundation. [online] Available at: https://www.bhf.org.uk/what-we-do/news-from-the-bhf/contact-the-press-office/facts-and-figures#:~:text=There%20are%20around%207.4%20million,the%20single%20biggest%20killer%20worldwide. [Accessed 17th December 2020]
- Covid-19 and cardiovascular health | BHF (2020). Available at: https://www.bhf.org.uk/for-professionals/information-for-researchers/covid-19-and-cardiovascular-health (Accessed: January 5, 2021).
- Allerton, T. D. et al. (2018) “L-citrulline supplementation: Impact on cardiometabolic health,” Nutrients. MDPI AG.
- Naseem, K. M. (2005) “The role of nitric oxide in cardiovascular diseases,” Molecular Aspects of Medicine. Elsevier Ltd, pp. 33–65.
- Bielanski, T. E. and Piotrowski, Z. H. (1999) “Horse-chestnut seed extract for chronic venous insufficiency.,” The Journal of family practice. Cochrane Database Syst Rev, 48(3), pp. 171–172.
- Anfossi, G. et al. (2001) “N-acetyl-L-cysteine exerts direct anti-aggregating effect on human platelets,” European Journal of Clinical Investigation. Eur J Clin Invest, 31(5), pp. 452–461.
- Larsson, S. C., Håkansson, N. and Wolk, A. (2015) “Dietary Cysteine and Other Amino Acids and Stroke Incidence in Women,” Stroke. Lippincott Williams and Wilkins, 46(4), pp. 922–926.
- Bondonno, N. P. et al. (2015) “The Efficacy of Quercetin in Cardiovascular Health,” Current Nutrition Reports. Current Science Inc., pp. 290–303.
- Ciumărnean, L. et al. (2020) “The effects of flavonoids in cardiovascular diseases,” Molecules.
- Patel, R. v. et al. (2018) “Therapeutic potential of quercetin as a cardiovascular agent,” European Journal of Medicinal Chemistry. Elsevier Masson SAS, pp. 889–904.
- Ganeshpurkar, A. and Saluja, A. K. (2017) “The Pharmacological Potential of Rutin,” Saudi Pharmaceutical Journal. Elsevier B.V., pp. 149–164.
- Sano, A. et al. (2007) “Beneficial effects of grape seed extract on malondialdehyde-modified LDL,” Journal of Nutritional Science and Vitaminology. J Nutr Sci Vitaminol (Tokyo), 53(2), pp. 174–182.
- Feringa, H. H. H., Laskey, D. A., Dickson, J. E., & Coleman, C. I. (2011). The Effect of Grape Seed Extract on Cardiovascular Risk Markers: A Meta-Analysis of Randomized Controlled Trials. Journal of the American Dietetic Association, 111(8), 1173–1181.
- Ran, L. et al. (2020) “Association between Serum Vitamin C and the Blood Pressure: A Systematic Review and Meta-Analysis of Observational Studies,” Cardiovascular Therapeutics. Hindawi Limited, 2020.
- World Health Organisation. [online] Available at: https://www.who.int/cardiovascular_diseases/en/cvd_atlas_08_physical_inactivity.pdf?ua=1#:~:text=Doing%20more%20than%20150%20minutes,heart%20disease%20by%20approximately%2030%25.[Accessed 27th December, 2020]
- Physical Activity Policies for Cardiovascular Health (2020). Available at: http://www.ehnheart.org/publications-and-papers/publications/1243:physical-activity-policies-for-cardiovascular-health.html (Accessed: December 27, 2020).
In April 2020, as children in China started returning to school after COVID-19 closures, an ancient hat from the Song Dynasty came back into fashion. At a primary school in Hangzhou, pupils donned handmade headgear that they fashioned out of paper, balloons, and other crafts, with protruding arms that spanned one meter. These eccentric hats were intended to help them adjust to social distancing measures, the South China Morning Post reported, and they were modeled after hats once worn by Chinese officials. Photographs of the students have been circulating on the internet, and so has a popular legend: that the hats were designed to keep officials away from each other, so that they couldn’t whisper and scheme with one another.
According to a scholar of art history and Asian studies, however, the “social distancing” function of the Chinese hats is rooted in “an unfounded speculation.” Jin Xu, an assistant professor at Vassar College, writes in an email, “Modern scholars trace the origin of the rumor to a 13th-century Chinese scholar who’s known for his shoddy scholarship.”
The original headwear was made of somber black cloth and was called futou, or more specifically zhanjiao futou—zhanjiao meaning “spreading feet or wings.” Early futou were simple cloths wrapped around the head, and wearers eventually padded them with wood, silk, grass, or leather, writes the scholar Mei Hua in Chinese Clothing. In the Tang Dynasty (618-907), as futou gradually took on the appearance of a more fitted, structured cap, officials began adopting them, adding two long wings made of stiffened ribbons.
Futou became a common accessory during the Song Dynasty (960-1279), although most of them had less cumbersome extensions—more feet than wings. People of all classes wore these hats, and there were five styles that individuals donned, according to their status or for various occasions, notes Liu Fusheng in A Social History of Medieval China.
“Some warriors wore the ‘curved-feet futou’ or ‘flower-like futou with feet curved backward,’ or the ‘cross-feet futou,’ and musical instruments players in the imperial music office liked to wear the ‘long-feet futou,’” Liu writes. “On some special occasions, such as longevity ceremonies held for the royal family or imperial court banquets, officials would pin flowers on futou … And there were lustreless futou and white crêpe futou worn at funerals.”
The shape of the wings indicated the wearer’s rank, and the longest were reserved for the emperor and other high-ranking officials. Yu Yan—the hapless scholar who first speculated about the “social distancing” function of these hats—made his dubious claim in his four-volume text, The Pedantic Remarks of the Confucians. The purpose of futou, he scribbled, was “perhaps to avoid [the officials] whispering to each other when having an audience with the Emperor,” Liu writes.
The futou that Yu Yan described were likely made of plain or lacquered muslin, with extended ribbons that were reinforced with iron wires or bamboo strips. One style of wings was particularly narrow and long, “projecting as much as two feet from either side of the wearer’s head,” Alexandra B. Bonds, a professor of costume design at the University of Oregon, writes in Beijing Opera Costumes: The Visual Communication of Character and Culture.
Variations on futou cropped up during the Ming Dynasty (1368-1644), but new headwear entered the court after the Manchus took power and established the Qing Dynasty (1644-1912). Since that time, the hat has resurfaced in paintings and theatrical costumes, and different versions of it can still be spotted on stage during Beijing Opera performances. “The wings are mounted on springs, and the actor can make them quiver to expand their expression,” Bonds writes in an email. “Each style represents rank or sometimes personal characteristics.”
Individually crafted and embellished, the hats of students in Hangzhou are similar expressions of personality and style. Bonds adds that she’s delighted by how the pupils have repurposed this object from their cultural heritage. “Whether the headdress was originally intended to prevent courtiers from plotting sedition or not, the wings certainly would have precluded private conversation,” she says. “They can serve the same purpose today by reminding the students of the space required for social distancing, while also teaching them about their history, and giving them an art project. What more could a teacher want from an assignment?”
There has been quite a lot in the media lately about GEOTHERMAL ENERGY. The US government has a programme that allocates hundreds of millions of dollars for the development of Geothermal Energy. The BBC recently has aired articles about the development of Geothermal Energy in the UK explaining that there is enough geothermal energy available in the UK to replace all coal, gas, wind and nuclear that are currently used to power all the UK's Electrical Generating Stations. Also there have been several news reports about the development of Geothermal Energy in Australia and in several other countries.
Knowing that I have been following the development of Geothermal Energy for some time, I was asked to give a talk on the subject to residents and guests on The World. Based on that talk, attached is a paper in the 'As I See It…' series.
As usual, comments and brickbats welcome!
What is it? How does it work?
by Leonard Berney
What does the word 'Geothermal' in 'Geothermal Energy' mean? Its origin is Greek: 'Geo' means 'The Earth' and 'Thermal' means 'Heat'. 'Geothermal Energy' means 'Heat from the Earth'.
OUR PLANET HAS A VERY HOT CENTRE.
Our planet has a very hot centre. The temperature at the core is 6,000ºC (11,000ºF). Compare the temperature at which rock melts: 1,000ºC (1,800ºF). Iron melts at 1,500ºC (2,700ºF).
How do we know that the centre of the Earth is very hot? In some parts of the world molten rock breaks through the crust, the planet's outer layer, and forms a Volcano where some of the magma (as molten rock is named) spills out as lava.
We are all familiar with the existence of Hot Springs found at various places round the world. A natural Hot Spring occurs when rain water percolates down from the surface and comes in contact with hot rocks below the surface and then emerges at a spring as warm or hot water. The Romans made good use of Hot Springs and constructed many Roman Baths. Probably the most famous of these is in the city of Bath in England.
Many of the Health Spas of today are built over natural Hot Springs.
Iceland is fortunate enough to have many Hot Springs: 87% of Iceland's houses are heated by piped natural hot water.
Another manifestation of subterranean heat is a Geyser. A geyser occurs where surface water has percolated down to some 2,000m (6,000 ft) below the surface, where the rocks are much hotter than the temperature at which water boils. As the pressure builds up, boiling water and steam gush up to the surface as a spray that can be many feet high.
A geyser at Yellowstone Park, USA
GEOTHERMAL ENERGY POWER PLANT
In the 20th century, demand for electricity led to the use of geothermal energy as a heat source to generate electricity.
But first, how is electricity generated? Here is a diagram of a standard Electrical Power Plant.
Water in a boiler is heated to produce high pressure steam. The Steam drives a Steam Turbine that in turn drives an Electricity Generator; the electricity generated flows out on the Grid. The fuel used to heat the water is normally coal, oil, gas or nuclear. The spent steam is condensed back to water and returned to the boiler.
In certain places in the world there are subterranean reservoirs, formed many millions of years ago, containing very hot water. If a well is sunk into such a reservoir, superheated steam will emerge. In a Geothermal Energy Power Plant that steam is piped directly to the turbine, thus eliminating the need for the boiler and its heat source. The spent steam is condensed to water and injected back down to the reservoir.
Prince Piero Ginori Conti tested the first geothermal power generator in 1904 in Larderello, Italy. Later, in 1911, the world's first commercial Geothermal Power plant was built there. In 1958, New Zealand became the second major industrial producer of geothermally generated electricity. 1960 saw the launching of the first successful geothermal electric power plant in the United States, at The Geysers in California. Today 24 countries operate Geothermal Energy Power Plants.
A modern Geothermal Power Plant looks like this:
AS A SOURCE OF ENERGY, HOW DOES GEOTHERMAL ENERGY RATE?
Here is a list of the qualities that, ideally, a source of energy should have and how Geothermal Energy scores.
- CONTINUOUS OUTPUT √
- VARIABLE OUTPUT √
- RENEWABLE/SUSTAINABLE √
- NO CO2 √
- NO OTHER EMISSIONS √
- NO HAZARDOUS WASTE √
- LOW RUNNING COST √
- SAFETY √
- SMALL LAND AREA √
- ENVIRONMENTALLY FRIENDLY √
As a source of energy, as desirable as Geothermal Energy may be, the amount of energy currently produced from Geothermal Energy is only about 1% of the world's total energy consumption.
GLOBAL FUTURE ENERGY NEEDS
I would now like to turn our attention to the world's future energy picture.
Energy is now vital to our civilization. We need energy for 1) electricity, 2) road transport, aircraft and ships, 3) heating and cooling
The world uses more energy every year. Over the last 50 years the increase in energy consumption has been about 2% a year. There is no reason to suppose that, assuming energy will be available at or near the same price as it is now, the annual increase will not continue at 2% a year. It must be remembered that some 25% of the world's population is living in poverty and without electricity. Also that the global population is increasing at over 1% a year. It is simple arithmetic that an increase of 2% a year means doubling the consumption in about 35 years, that is by the mid-2040s.
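As a quick check of that arithmetic (an illustrative calculation added here, not part of the original talk), the doubling time T implied by 2% annual growth is:

(1.02)^{T} = 2 \quad\Longrightarrow\quad T = \frac{\ln 2}{\ln 1.02} \approx \frac{0.693}{0.0198} \approx 35\ \text{years}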
Where will energy come from? Let us look at where energy comes from now.
- OIL 36%, COAL 27%, NATURAL GAS 23% (fossil fuels combined: 86%)
- HYDROELECTRIC 6%
- NUCLEAR 6%
- WIND, SOLAR AND BIOMASS (combined: 2%)
What is the world's potential not only for continuing to generate energy at the present level, but for doubling it over the next few decades?
Currently the fossil fuels, oil, coal and natural gas, generate 86% of the total consumption. We know that the remaining stocks of these three sources of energy are diminishing, and that what remains is getting more difficult and more expensive to extract. Moreover, their use releases CO2 and other gases into the atmosphere, which many believe are causing climate change that is going to have disastrous consequences for mankind. Clearly for the future we need to wean ourselves off fossil fuels and switch to another source of energy.
Hydroelectric (dams), now produces 6% of the total. Most of the sites where hydroelectric power is practical have already been developed. Replacing fossil fuel with hydroelectric is out of the question.
Nuclear, now accounting for 6% of the total, could, in theory, be extended to replace fossil fuel. However, nuclear has several very serious disadvantages: popular protest (Japan 2011); toxic waste disposal; high cost of construction; the need for over 7,000 new plants to replace fossil fuel; decommissioning after 25-50 years; need for imported uranium, to mention but a few.
With energy consumption increasing, fossil fuel supply reducing, and with nuclear being unacceptable, the human race is facing a major problem.
WHAT IS THE SOLUTION? IS THERE A SOLUTION?
I think there is a solution -- Engineered Geothermal System or EGS
An Engineered Geothermal System is similar to a normal Geothermal Energy system except that the source of the steam is a subterranean hot water reservoir that has been artificially created. How can that be achieved? As explained earlier in this paper, at any point on the planet, the temperature of the rock in the crust increases with the depth from the surface. If a well (an Injection well) is bored deep into the earth to a depth where the rock is very hot, and water is pumped down that well at very high pressure, the rock surrounding the end of that well will fracture allowing water to penetrate into the fractures. That water will take up the temperature of the hot rocks through which it is percolating. If then other wells (Production wells) are bored to the same depth the very hot water will rise up those wells as steam.
This steam is used to power a standard turbine and electric generator. As with a 'natural' Geothermal Energy plant, the spent steam is condensed back to water and pumped down the Injection well, thus forming a looped circuit. The process (pump water down; the water is heated by hot rock; the heated water emerges at the power plant as steam; the steam drives the turbine and generator; the condensed steam is pumped down again) will generate electrical power continuously. Since the heat from the centre of the earth is inexhaustible, electricity generation by EGS is likewise inexhaustible.
No new technology is involved. The process of drilling geothermal wells is identical to drilling for oil or gas. The process of fracturing subterranean rock is in general use in the oil/gas industry. Many existing oil and gas wells are bored down to the same depth as would be required for EGS wells. The surface plant (turbine, generator, condensers) is the same as for a existing 'natural' Geothermal plants.
In 2006 the US government commissioned the Massachusetts Institute of Technology (MIT) to research "THE FUTURE OF GEOTHERMAL ENERGY IN THE USA". The findings of that report were:
"We estimate the extractable portion of the geothermal resource to exceed 2,000 times the annual consumption in the United States in 2005. With technological improvements the economically extractable amount of useful energy could increase by a factor of 10 or more, thus making geothermal energy sustainable for centuries".
Professor J. Tester, who headed the MIT report, said, "The numbers (the amounts of geothermal energy that could be harnessed) are staggering. Developing the technology to enable world-wide deployment of EGS could be accomplished for about the price of a new coal-fired power plant... It’s not as if we don’t know how to drill holes and fracture rocks."
THE FACT IS that heat from the centre of the earth exists everywhere, in every country.
The Geothermal Energy available is unlimited and inexhaustible.
So, how deep would EGS wells have to be? The temperature of the rock surrounding the artificial reservoir needed to work an EGS system is around 200ºC (about 400ºF). It is not a question of 'if' rock at 200ºC exists; it does, below every place on Earth. It is only a question of 'how deep'. The depth at which that temperature is found varies from one part of the world to another, but on average it is 3-5 miles (5-8 km).
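As a rough worked example (assuming an average geothermal gradient of about 25ºC per kilometre and a surface temperature of about 15ºC; these are typical textbook values, not measurements quoted in this paper), the depth needed to reach 200ºC would be about

\[ d \approx \frac{200 - 15}{25\ \text{ºC/km}} \approx 7.4\ \text{km}, \]

which falls within the 5-8 km range given above.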
Information on the world's subterranean rock temperatures already exists. Here are the charts for the US and for Europe.
SUBTERRANEAN ROCK TEMPERATURES
WHAT WOULD AN EGS POWER PLANT LOOK LIKE?
What would an EGS Power plant look like? According to Geodynamics Ltd of Australia, an EGS Power Plant would consist of four Injection wells surrounded by five Production wells, with an output of 50 MW. EGS plants would be low and unobtrusive, occupying a relatively small surface area. They would look something like this.
EGS plants could be sited one kilometre apart, near where their output is required e.g. around cities and industrial areas. However, as has been explained, forming the underground reservoir entails fracturing subterranean rock. This fracturing process can in some cases cause a minor earthquake. For this reason, EGS plants must not be sited within or very close to built-up areas.
Is generation of electrical power by EGS practical, or is it merely a theory? Does an EGS plant exist, and if so, does it work? Yes, there is a working EGS plant at Soultz in France. It was constructed as a demonstration plant by the European Union and has been generating electricity continuously for several years. Here is a photo of the Soultz plant.
The Engineered Geothermal System of generating electricity is eminently practical.
WHAT IS THE POTENTIAL OF EGS?
What is the potential of EGS? What if EGS were to be deployed on a world-wide scale? What if EGS were to replace the use of coal, oil and gas? If by the use of EGS we could generate unlimited cheap electricity, we could solve most of the World’s existing problems! We would:
- END CO2 AND OTHER EMISSIONS
- END WORLD CONFLICT OVER ACCESS TO OIL AND GAS
- END WORLD CONFLICT OVER ACCESS TO WATER
- END WORLD FOOD SHORTAGES
- END WORLD POVERTY
- IF THE SEA LEVEL RISES, FACILITATE RELOCATING AFFECTED POPULATIONS
IS EGS PRACTICAL?
To replace oil, gas, coal and nuclear with EGS would be a vast engineering task. Is the concept practical?
Let us consider what engineering feats the world has achieved.
§ In World War 2, Germany, Japan, Russia, the US and the UK built many hundreds of thousands of warplanes, ships, tanks, artillery pieces and trucks. All in the short space of just six years.
§ In the last 50 years 153,000 miles (about 246,000 km) of motorway (freeways) have been built, enough to go round the world six times.
§ There are some 800 million cars on the road, most of which have been manufactured in the last ten years.
§ There are some 3,000 million more people on the Earth today than there were 50 years ago. Housing, work-places and infrastructure have been built to cater for this vast population increase.
§ World-wide there are no fewer than 880,000 oil and gas wells in daily operation. In addition there are 1 million wells that have been worked out and abandoned. The great majority of these wells were drilled in the last 50 years.
(Image captions: mothballed WW2 US warplanes; 153,000 miles of motorways; 800 million cars on the road; 6,000 in the Gulf of Mexico; 1,800,000 wells since 1950.)
The conversion from coal, oil, gas and nuclear to EGS would be phased over four or five decades. On that time-scale, in my opinion, the change-over is well within the engineering capability of governments and industry.
ROAD VEHICLES, PLANES AND SHIPS
In the change-over from fossil fuels and nuclear to EGS, the first priority would be the generation of electricity. As liquid fuel becomes scarcer and the price per gallon rises, road vehicles will turn to electric propulsion, either with on-board batteries, with hydrogen-powered fuel cells, or with hydrogen-fuelled internal combustion engines. The hydrogen they use would be manufactured using EGS-generated electricity.
Fuelling airplanes with anything other than oil-based fuel is still a long way off. Research is currently being conducted into hydrogen-fuelled aero engines. Again, the hydrogen they would use would be manufactured using EGS-generated electricity.
As with airplanes, fuelling ships with anything other than oil-based fuel is a long way off. Hydrogen-powered fuel cells driving electric motors are probably the best solution.
WHAT ARE THE OBSTACLES TO EGS?
The cost of drilling the many thousands of wells that would be necessary is the main obstacle. However, if the establishment of EGS plants were undertaken on a massive scale, the rigs and the drilling process would lend themselves to automation and standardisation, and the cost of drilling would come down dramatically.
The greatest driver will be economic; the cost of fossil fuels and nuclear versus the cost of EGS. As fossil fuels become more difficult to extract and the price increases, market forces may well effect the change.
Traditionally, governments have sponsored major power projects such as hydroelectric dams, the first nuclear power plants, wind turbines and large solar installations. Governments would need to encourage the change from fossil fuels and nuclear to EGS by means of legislation, guarantees, grants and penalties.
I believe that the solution to mankind's future energy supply problem is the generation of electrical power by the Engineered Geothermal System.
I believe EGS is the ONLY practical solution.
The report says, "Market changes and an investment of $800 million to $1 billion over 15 years could bring more than 100 GW of geothermal energy to the US grid by 2050, according to a study recently released by a multi-disciplinary research group at MIT. That investment is less than the cost of a single “new generation” coal-powered plant, and the amount of energy is equivalent to 200 coal-fired power plants or 100 new nuclear power plants." |
Given the example code below, suppose the Java classes are actually in different files:
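The example files themselves are not reproduced in this copy of the article, so here is a minimal sketch reconstructed from the description that follows; the field values, the method bodies and the placement of class C in a separate package are illustrative assumptions, not the article's original code.

// A.java
package randomPackage;

public class A {
    int f1 = 5;            // no access modifier: default (package) access
    protected int i1 = 7;  // protected access
}

// B.java
package randomPackage;

public class B {
    void someMethod() {
        A a = new A();
        System.out.println(a.f1); // allowed: class B is in the same package as A
        System.out.println(a.i1); // allowed: protected members are visible within the package
    }
}

// C.java (assumed to live in a different package, for illustration)
package someOtherPackage;

import randomPackage.A;

public class C extends A {
    void printI1() {
        System.out.println(i1); // allowed: protected member inherited from A
    }
}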
Is there anything wrong with the code in our example above?
Before we start, we want to point out that both classes A and B belong to the same package called ‘randomPackage’.
No access modifier means it’s package access
Looking at the declaration of ‘f1’ in class A, you’ll notice that it has no access modifier (like public or private). When an access modifier is omitted from a variable, it means that the variable has default, or package, access. When a variable has package access, it means that any other class defined in the same package will have access to that variable by name. And, any class not defined in the same package will not be able to access it by name. Because class B is in the same package as class A, it can access the f1 variable directly by name. So, nothing is wrong with the code inside the someMethod method in class B, which accesses the f1 variable.
Now, notice that class B accesses the variable ‘i1’, which is declared in class A. Looking at the declaration of the variable in class A, we notice that it uses the protected access modifier. So, the question is whether or not ‘i1’ is accessible by name in class B? The answer is yes, because in Java any method or instance variable that is declared as protected can be accessed by name in its own class definition, in any class derived from it, and by any class that’s in the same package. And because class B is in the same package as class A, class B can access any variable that’s declared as protected in class A, which in this case is ‘i1’. Note that this is basically the same thing as class B accessing the f1 variable, which also has package access, but it’s just not explicitly declared to be package access.
Derived classes also have access to protected variables
Finally, we can see that ‘i1’ is also being accessed in class C, which derives from class A. This is a legitimate access of ‘i1’, since it is declared as protected in class A. And, as the rule above states, any instance variable declared as protected can be accessed in a derived class.
So, we can conclude that there is nothing wrong with the code above. |
We use electrical energy to power our homes, schools and places of work. Generating electricity by burning coal or gas in a power station is not sustainable, and produces carbon dioxide. Using sunlight to make electricity is sustainable. A solar panel is a grid of photovoltaic cells that convert sunlight into electric power. Power is measured in watts. One kilowatt (kW) = 1000 watts (W). |
BY: SHAWNTAE HARRIS
A perfectly sunny climate would certainly brighten everyone's mood while simultaneously helping to create fuels that help, rather than hinder, the environment. That is the thinking behind an artificial sun coming soon to Germany, where researchers hope to create a larger-than-life sun to make climate-friendly fuel.
The sphere-like fake sun, known as Synlight, will be built in the German town of Juelich. It is made up of 149 lamps that mimic natural light, the same kind of lamps used in film projectors.
“Researchers hope to bypass the electricity stage by tapping into the enormous amount of energy that reaches Earth in the form of light from the sun,” according to the Associated Press.
The lights heat up to about 3,000 degrees Celsius. This temperature is the key to creating hydrogen, which German researchers hope will help solve their climate problems. The aim of the experiment is to come up with the optimal setup for concentrating natural sunlight to power a reaction that produces hydrogen fuel.
Hydrogen produces no carbon emissions when burned and therefore doesn't contribute to global warming. "It's an environmentally-friendly alternative to fossil and biofuels because when you burn it, the only waste product is water," according to JDR energy. "And it will dramatically reduce our dependence on imported fuels, therefore improving our energy security."
Hydrogen is created by splitting water through a process called electrolysis, which uses electricity to separate water into hydrogen and oxygen gas. Synlight will be housed in a large building in Juelich.
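For reference, the overall electrolysis reaction is the standard one, with an electric current splitting two water molecules into two molecules of hydrogen gas and one of oxygen gas:

\[ 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2} \]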
Unfortunately, this artificial sun uses an enormous amount of energy. In four hours it requires as much electricity as a family of four would use in an entire year. But scientists hope that in the future natural sunlight could be used instead to produce hydrogen in a carbon-neutral way.
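To put that in perspective (assuming roughly 4,000 kWh per year for a four-person household, a typical German figure that the article does not state), the implied average electrical draw while the lamps are running would be on the order of

\[ \frac{4000\ \text{kWh}}{4\ \text{h}} = 1000\ \text{kW} = 1\ \text{MW}. \]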
German water is affected by climate change
A new report by the Climate Service Center Germany (GERICS) examines how climate change affects the country's hydrological balance, that is, how a changing climate alters the water balance.
Currently, Germany experiences very wet and rainy winters while suffering through dry summers.
In August 2003, temperatures in Germany reached 40 degrees Celsius, causing health problems for residents and killing crops. "Precipitation in Germany has increased by 11 per cent since 1881 – and according to the forecasts, this trend is set to continue," according to PhyOrg. The report states that low-water periods will arrive earlier in the year and that water levels will fall unusually low.
The German government is currently working with the United States and other governments under the Paris agreement to curb climate change. Each country is aiming to reduce carbon dioxide emissions by 2020.
But this “could require $145 trillion in investment in low-carbon technologies by mid-century,” according to Mashable.
Under this worldwide agreement, nations agreed to keep global warming below two degrees Celsius. To make this a reality, countries need to switch from oil and coal to natural gas, and renewable energy should make up 65 per cent of power worldwide by 2050. |
This course will examine the history of Latin America from the first peoples who crossed the Bering Strait until the beginnings of the independence movements during the first decade of the nineteenth century. The class is divided into three sections. The first third of the class will examine the encounter between Spaniards and Native Americans and the subsequent conquest of the Americas. The second third of the class will examine colonial society. In the final section, we will discuss enslavement and eighteenth-century movements towards independence. We will spend a considerable amount of time analyzing the role of race and gender in Latin American society. |
A survey of the seas by the King Abdullah University of Science and Technology (KAUST), Saudi Arabia, has enabled scientists to come up with a predictive model of how planktonic heterotrophic prokaryotes – simple marine organisms that process most organic matter in the ocean – are affected by global warming.
Although small, plankton populations make up the largest living biomass in the ocean.
During an expedition in 2010, the scientists looked at plankton across subtropical and tropical waters of the Atlantic, Indian and Pacific Oceans. They scrutinized three factors: resource availability, mortality rates and temperature. They also looked at the viruses and microbes that either live off or kill off plankton.
Team leader Xose Anxelu G. Moran, associate professor of marine science at KAUST, and his peers from Saudi Arabia, Spain and Sweden, wanted to know what influenced plankton abundance and metabolism, and how this can help researchers predict the future role of the microbial populations in a changing ocean plagued by warmer temperatures and diminishing nutrients – thanks to climate change.
They found out that the effect of rising temperature on plankton is not uniform – populations living near the equator, for instance, are not as affected as those near the poles. The impact of global warming on marine microbes is more intense at higher latitudes, according to the study.
When there’s an abundance of viruses that eventually diminish the organism’s populations, temperature’s role becomes limited, the study adds. The same happens when there’s a decrease in nutrients; the water’s rising temperature almost becomes irrelevant. It’s why the scientists conclude that temperature only becomes a dominant factor when plankton are neither controlled by poor resources nor viral attacks.
The study also notes that a 1°C ocean warming will increase the biomass of plankton only in waters with a mean annual surface temperature above 26°C. |
Staying positive during uncertain times can be incredibly difficult. The pandemic has managed to leave a “lingering grey cloud” above all of our heads. In light of this, our team at WHS has decided to use both National Hope Month and Find-A-Rainbow Day to reflect on what hope represents, and how we can preserve this in the minds of our youth.
As parents, we wish for our children to grow up joyful and resilient. To successfully promote the concept of hope, we must first understand what it is and why it's so important. "The Psychology of Hope", written by Charles R. Snyder, is an excellent source of reference for anyone interested in this topic. Snyder was an American psychologist specializing in positive psychology, who based a majority of his life's work around the concept of hope and its effect on humans. During his research, he established something called the "Hope Theory". According to this notion, kids who are hopeful are happier. They are more satisfied with life. They even tend to do better with academic and athletic achievements. Hopeful kids are also more apt to create stronger relationships.
Whether you’re an adoptive, foster, or biological parent, we all share one common goal: keeping our children hopeful. To achieve this, we’ve come up with a few interactive ways for you to help drive the hope within their hearts.
- Create a future focus: Start by talking to your child about what it is that they see when they think about their future. For instance, have them close their eyes and imagine their potential best self, then write down what that entails. Find out what they’re looking forward to. Ask them what they want to do, have, and be. After all, thinking about the future and making plans is central to fostering hope.
- Hope centered crafts: Plan a day to sit down and create a “Hope Kit”. This could be a box in which each member of the family puts hopeful stories they’ve heard, pictures that bring them hope, or hopeful experiences they’ve had in the past. To build on this, come up with a plan for ways to celebrate the smaller accomplishments together. Focus on the positives and how lucky you are to just be together. Because after all, it’s the simple things, really.
- Develop a sturdy support system: Never underestimate the power of unconditional love/support and what this means to your child. Merely knowing that you’re there to pick them up when they fall will keep them pushing along the pathway of resiliency.
- Keep things open-ended: If they’re ever seeking an answer to something, instead of just giving them the answer, teach them to take a moment to think. Ask them questions like “What do you think is the next best thing to do?” or “When have you overcome something like this in the past?”. This strategy will motivate your child to rely on their own resourcefulness and initiative. They will be more likely to recall the times they’ve succeeded in the past and use that knowledge to propel forward with the hope that they are capable of doing so again.
Now more than ever, young people are more at risk for falling spirits and lost hope. Let’s take these strides to illuminate their worlds and give hope a fighting chance.
We leave you with this: “Life throws challenges, and every challenge comes with a rainbow and light to conquer it.” -Amit Ray
To learn more and partner with Wolverine to bring hope into a child’s life, please complete the Care Provider Form to start your foster care and adoption parent journey today. |
Like any apex predator, wolves have a significant impact on their environment. Throughout the nineteenth and twentieth centuries, wolves were hunted to near extinction on the Great Plains. This led to several significant environmental changes as the delicate balance between predators and prey was disrupted.
Without the presence of wolves to keep populations in check, deer, antelope, and even elk proliferated. Without the pressure put on these grazing animals by predators, they gathered at water sources in large numbers. When grazing animals consumed too much erosion-preventing vegetation along the banks of rivers, these waterways changed. They became wider and shallower. This increased the temperature of the water, which led to the decline of many aquatic animals, and particularly trout. Also, wolves have the effect of strengthening herds of deer since they typically target the old and sick. Without wolves, the deer population became less robust, even though it increased in size overall.
Furthermore, overgrazing by animals that wolves traditionally prey upon led to the destruction of smaller animals' habitats. This, in turn, reduced the presence of small prey for other predators, like foxes and raptors. Additionally, without the presence of large carcasses left behind by wolves, the population of many scavenging creatures declined. |
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2000 August 30
Explanation: The bright stars above are well known as heart of the Trapezium, an open cluster of stars in the center of the Orion Nebula. The many dim objects, however, are not well known, and have come to attention only on recent images in infrared light. These dim objects are thought to be brown dwarfs and free-floating planets. Brown dwarfs are stars too puny to create energy in their core by fusing hydrogen into helium. Although many more brown dwarfs than hot stars have now been found in Orion, their very low masses make them inadequate to compose much of the dark matter expected in galaxies and the Universe. The above false-color mosaic combines infrared and visible light images of the Trapezium from the Hubble Space Telescope. Faint brown dwarfs with masses as small as about one percent the mass of the sun are seen in the infrared data. Also visible are complex lanes of hot gas (appearing in blue) and cooler fine dust that blocks, glows and reflects nearby starlight.
Authors & editors: Jerry Bonnell (USRA). NASA Technical Rep.: Jay Norris. Specific rights apply. A service of: LHEA at NASA/GSFC & Michigan Tech. U. |
Flight shaming is a form of climate activism in which people are encouraged to avoid air transportation because of the large environmental footprint of aviation compared to other means of transportation. While this is by no means a new phenomenon, it has recently gained a lot of media coverage thanks in part to Greta Thunberg’s School Strike for Climate Movement. When Greta decided to tour the world to spread her message about climate change, she refused to fly across the Atlantic, opting to sail instead to reduce the CO2 emissions produced during her trip. This young activist’s decision reflects the changing global attitudes about transportation and climate change. People are no longer just concerned with getting to their destination as quickly and cheaply as possible, they are now also considering the environmental impact of such a decision.
The Intergovernmental Panel on Climate Change stated that we have 12 years to reduce carbon emissions or face imminent and irreversible damage to the environment and human life. Record-breaking wildfires, the increased intensity and frequency of hurricanes, loss of biodiversity, and sea-level rise are examples of the effects of climate change that are already occurring and should be a stark warning to all high-polluting industries. Pollution comes at a price, and because carbon taxes have failed to gain widespread traction, it is still ordinary people, in particular poor and minority communities, who pay the price for this environmental negligence.
Demands are growing across the world for high-polluting companies and industries to be held accountable for their contribution to the climate crisis. While a carbon tax or government emissions reduction incentives would be the most efficient way to reduce pollution on a wide scale, for now it is up to individual companies to commit to emissions reduction on an ethical basis. The US government has yet to implement any meaningful climate change policy, so it is no surprise that many people feel a sense of hopelessness and despondency towards the existential crisis that is climate change. Flight shaming is a way that average citizens can voice this discontent, and while it poses a significant challenge to the aviation industry, it is also a wakeup call for them to change their business practices before it’s too late. So, what can airlines do about flight shaming? The answer is quite simply, reduce emissions.
The International Civil Aviation Organization (ICAO) suggests a three-pronged approach to sustainable aviation. Aviation companies must invest in green technology and sustainable alternative fuels, and take advantage of market-based or government incentives, if they want to meaningfully reduce emissions. KLM Royal Dutch Airlines is one aviation company leading the way in green aircraft technology.
KLM is developing a new type of airplane called the Flying-V. Due to its unique and aerodynamic V-shape, it is expected to reduce fuel consumption by 20% compared to a regular aircraft. While these types of aircraft may one day be widely available, we are still likely decades away from that. For smaller aviation companies that don't have the means to invest in this expensive green technology, biofuels will play the most important role in their emissions-reduction strategies. Alternative fuel sources can provide aviation companies with the most straightforward means of emissions reduction, as they can simply drop biofuel into existing combustion jet engines without having to alter the aircraft. Biofuels are not a perfect solution either, but they will play a vital role in achieving the industry's pledge to reduce emissions. Lastly, aviation companies must take advantage of opportunities like market-based incentives for carbon offsetting. While such incentives are still few and mostly voluntary, they will likely become more commonplace as the urgency to mitigate climate change grows.
While flight shaming may have a small impact on the industry's revenue, the cost of inaction on reducing emissions will be far greater in the long run. By prioritizing social and environmental responsibility, airlines can not only combat climate change but also reduce the effect of flight shaming on their operations. |
Have you ever had a “gut feeling” about something? Have you ever considered that children have these as well? Of course they do! The problem is, they don’t know how to express this to us. They know something doesn’t feel right, so they act out or exhibit a grumpy mood. The good news is, there is a way to boost their mood and reinforce positive behavior choices.
Serotonin, also known as the “feel good” neurotransmitter, plays a part in our wellbeing and is important in balancing mood. Ninety percent of the serotonin in our bodies is produced in the gut. This is because the gut and the brain were developed from the same embryo cell line and continue to communicate through the vagus nerve. This explains why the gut is often referred to as the “second brain” and where the phrase “gut feeling” comes from.
In recent studies serotonin levels have been found to also affect memory and learning. In addition to this, it helps build new neuropathways in the brain which supports the ability to learn new information more quickly. When there are higher levels of serotonin, moods are better and, therefore, cognitive functioning is improved. The problem is found when serotonin levels are too low. In children, this can manifest in behaviors such as poor impulse control and inattention.
Now that we understand the neuroscience surrounding serotonin, how can we, as parents, teachers, coaches, and anyone who works with children, use this information? We must create a learning environment that is inviting and form bonds with the children by setting an enthusiastic and positive mood.
The SKILLZ martial arts program does this by teaching with the brain in mind and utilizing game-based learning. Along with this, two of the Teaching SKILLZ that are used in class are specifically designed to increase the students’ serotonin levels.
The use of “choices” as a teaching skill in class helps the students build satisfaction because they have a say in what they are doing and, therefore, their excitement to do things increases. For example, when working on forms in class, if the instructor tells the students to do their forms for 15 minutes, they probably aren’t going to be that excited. But, if the instructor tells the students they can choose from doing their forms with weights, slow motion, backwards, or progressively, then the student will be more excited about getting to make their decision regarding this. And, they will then be more satisfied with the overall experience.
The use of “redirection” in class helps the students feel more accomplished and, therefore, happier. For example, if you have a student in class that doesn’t always sit the best during mat chats, the instructor can say “When I count to three, let’s see who can sit faster than Johnny.” This student will be prompted to sit correctly and then he will feel more accomplished by showing how quickly he can sit correctly.
By utilizing these techniques, the instructors are increasing the students’ serotonin, which helps them become more satisfied in their accomplishments and it reinforces their good behaviors.
Guest post by Jennifer Salama of Skillz Worldwide. Jennifer is a 4th-degree black belt and has been training in martial arts since 2001. She has a Masters Degree in Child Psychology and has embraced the SKILLZ curriculum because of its focus on child development and using martial arts as a vehicle to develop the child as a whole. |
Researchers from the Moscow Institute of Physics and Technology and Tohoku University (Japan) have explained the puzzling phenomenon of particle-antiparticle annihilation in graphene, recognized by specialists as Auger recombination. Although persistently observed in experiments, it was for a long time thought to be prohibited by the fundamental physical laws of energy and momentum conservation. The theoretical explanation of this process has until recently remained one of the greatest puzzles of solid-state physics. The theory explaining the phenomenon was published in Physical Review B.
In 1928, Paul Dirac predicted that an electron has a twin particle, which is identical in all respects but for its opposite electric charge. This particle, called the positron, was soon discovered experimentally. Several years later, scientists realized that the charge carriers in semiconductors — silicon, germanium, gallium arsenide, etc. — behave like electrons and positrons. These two kinds of charge carriers in semiconductors were called electrons and holes. Their respective charges are negative and positive, and they can recombine, or annihilate each other, releasing energy. Electron-hole recombination accompanied by the emission of light provides the operating principle of semiconductor lasers, which are devices crucial for optoelectronics.
The emission of light is not the only possible outcome of an electron coming in contact with a hole in a semiconductor. The liberated energy is often lost to thermal vibrations of the neighboring atoms or picked up by other electrons (figure 1). The latter process is referred to as Auger recombination and is the main “killer” of active electron-hole pairs in lasers. It bears the name of French physicist Pierre Auger, who studied these processes. Laser engineers strive to maximize the probability of light emission upon electron-hole recombination and to suppress all the other processes.
That is why the optoelectronics community greeted with enthusiasm the proposal for graphene-based semiconductor lasers formulated by MIPT graduate Victor Ryzhii. The initial theoretical concept said that Auger recombination in graphene should be prohibited by the energy and momentum conservation laws. These laws are mathematically similar for electron-hole pairs in graphene and for electron-positron pairs in Dirac’s original theory, and the impossibility of electron-positron recombination with energy transfer to a third particle has been known for a long time.
Figure 1. Two scenarios of electron-hole recombination in graphene. In radiative recombination (left), the mutual annihilation of an electron and a hole, shown as blue and red spheres respectively, frees energy in the form of a photon, a portion of light. In Auger recombination (right), this energy is picked up by an electron passing by. The Auger process is harmful for semiconductor lasers, because it consumes the energy that could be used to produce laser light. For a long time, the Auger process was considered to be impossible in graphene due to the energy and momentum conservation laws. Credit: Elena Khavina/MIPT Press Office
However, experiments with hot charge carriers in graphene consistently returned the unfavorable result: Electrons and holes in graphene do recombine with a relatively high rate, and the phenomenon appeared attributable to the Auger effect. Moreover, it took an electron-hole pair less than a picosecond, or one-trillionth of a second, to disappear, which is hundreds of times faster than in contemporary optoelectronic materials. The experiments suggested a tough obstacle for the implementation of a graphene-based laser.
The researchers from MIPT and Tohoku University found that the recombination of electrons and holes in graphene, prohibited by the classical conservation laws, is made possible in the quantum world by the energy-time uncertainty principle. It states that conservation laws may be violated to the extent inversely proportional to the mean free time of the particle. The mean free time of an electron in graphene is quite short, as the dense carriers form a strongly-interacting “mash.” To systematically account for the uncertainty of particle energy, the so-called nonequilibrium Green’s functions technique was developed in modern quantum mechanics. This approach was employed by the authors of the paper to calculate Auger recombination probability in graphene. The obtained predictions are in good agreement with the experimental data.
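The energy-time uncertainty relation invoked here is the standard one: if \( \tau \) denotes the carrier's mean free time, its energy is only defined up to a broadening of roughly

\[ \Delta E \sim \frac{\hbar}{\tau}, \]

so the shorter the mean free time, the larger the apparent violation of energy conservation that a recombination event can tolerate.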
“At first, it looked like a mathematical brain teaser, rather than an ordinary physical problem,” says Dmitry Svintsov, the head of the Laboratory of 2D Materials for Optoelectronics at MIPT. “The commonly accepted conservation laws permit recombination only if all three particles involved are moving precisely in the same direction. The probability of this event is like the ratio between the volume of a point and the volume of a cube — it approaches zero. Luckily, we soon decided to reject abstract mathematics in favor of quantum physics, which says that a particle cannot have a well-defined energy. This means that the probability in question is finite, and even sufficiently high to be experimentally observed.”
The study does not merely offer an explanation for why the “prohibited” Auger process is actually possible. Importantly, it specifies the conditions when this probability is low enough to make graphene-based lasers viable. As particles and antiparticles rapidly vanish in experiments with hot carriers in graphene, the lasers can exploit the low-energy carriers, which should have longer lifetimes, according to the calculations. Meanwhile, the first experimental evidence of laser generation in graphene has been obtained at Tohoku University in Japan.
Notably, the method for calculation of the electron-hole lifetimes developed in the paper is not limited to graphene. It is applicable to a large class of so-called Dirac materials, in which charge carriers behave similarly to the electrons and positrons in Dirac’s original theory. According to preliminary calculations, the mercury cadmium telluride quantum wells could enable much longer carrier lifetimes, and therefore more effective laser generation, as the conservation laws for Auger recombinations in this case are more stringent.
The study was supported by the Russian Science Foundation. |
A hundred was the division of a shire for administrative, military and judicial purposes under the common law. Originally, when introduced by the Saxons between 613 and 1017, a hundred had enough land to sustain approximately one hundred households headed by a hundred-man or hundred eolder. He was responsible for administration, justice, and supplying military troops, as well as leading its forces. The office was not hereditary, but by the 10th century the office was selected from among a few outstanding families. Within each hundred there was a meeting place where the men of the hundred discussed local issues, and judicial trials were enacted. The role of the hundred court was described in the Dooms (laws) of King Edgar. The name of the hundred was normally that of its meeting-place. In England, specifically, it has been suggested that 'hundred' referred to the amount of land sufficient to sustain one hundred families, defined as the land covered by one hundred "hides".
Hundreds were further divided. Larger or more populous hundreds were split into divisions (or in Sussex, half hundreds). All hundreds were divided into tithings, which contained ten households. Below that, the basic unit of land was called the hide, which was enough land to support one family and varied in size from 60 to 120 old acres, or 15 to 30 modern acres (6 to 12 ha) depending on the quality and fertility of the land. Compare with township.
Above the hundred was the shire under the control of a shire-reeve (or sheriff). Hundred boundaries were independent of both parish and county boundaries, although often aligned, meaning that a hundred could be split between counties (usually only a fraction), or a parish could be split between hundreds. The system of hundreds was not as stable as the system of counties being established at the time, and lists frequently differ on how many hundreds a county has. The Domesday Book contained a radically different set of hundreds than that which would later become established, in many parts of the country. The number of hundreds in each county varied wildly. Leicestershire had six (up from four at Domesday), whereas Devon, nearly three times larger, had thirty-two. |
Document Versus Data
XML is commonly used for representing data structures. A data structure is simply a way to represent data that obeys some well-defined structure. The Water language, using ConciseXML, can formally describe the structure of data by using Water Type and Water Contract. Using Water, you also can unambiguously represent static data.
Representing static data might seem straightforward, but XML 1.0 has design constraints carried over from the document markup world that can make representing data in XML quite confusing. The quandary between elements and attributes is a common example of this confusion.
Most programming languages and other technologies for representing data employ the concept of a data structure or object. This article, by convention, uses the term object. The word object is similar to other terms such as a record, structure, or tuple from other technologies.
In most programming languages, an object has fields, and those fields hold values that are also objects. Water objects have this property as well: An object is a collection of fields; each field has a key and a value; and the value can be any object.
The following ConciseXML is an example of an item object:
<item id="XL283" color="blue" size=10/>
The preceding ConciseXML could be described as creating an instance of an item object. The instance has three fields: id, color, and size. The value of the id field is the string "XL283", the value of the color field is "blue", and the value of the size field is the number 10.
The type or class of the object appears as the element's name, immediately following
the opening angle bracket (<). The fields of the object are represented as key-value pairs within the element's opening area. An opening angle bracket syntactically is the start of an XML element, but it has the semantic meaning of performing a call. The call is either the calling of a method or the calling of a constructor method of an object. Fields of an object have a clear and unambiguous key and value:
<item id="xx283" color="blue" size=10/>
In the preceding line, the instance of item has three fields. "id" is the key of the first field, and "xx283" is the value of the field. "color" is the key of the second field and "blue" is its value. "size" is the key of the third field and the integer 10 is its value.
It is very common, though, to see the following XML used to represent the instance of item:
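The example itself is missing from this copy of the article; the following is a sketch of the kind of element-per-field markup the text goes on to criticize (a reconstruction based on the discussion below, not the article's original listing):

<item>
    <id>xx283</id>
    <color>blue</color>
    <size>10</size>
</item>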
To the vast majority of people, the above XML is normal and easily understood, but this is an example of XML in the flat-world model. The round-world model sees this as an ambiguous, poorly constructed XML data object. One problem (which is described in detail later in this article) is that the syntax of an XML element is used to represent two very different things: an object and a field of an object. Having one syntax to represent two different concepts presents a serious ambiguity problem. This ambiguity leads to a serious problem when a machine tries to interpret the meaning of the XML data.
For a data structure to be useful, the distinction between objects and fields is extremely important. How, for example, do you know that <color>blue</color> represents a field of item and not an instance of type color? As humans, we use our gift of pattern recognition to deduce that color must be a field of item because it occurs within the content of item and it has blue in the content of the element.
To emphasize the ambiguity, what if you wrapped the item within another color element? Is item now a field of color? Did the meaning of item radically change because it moved to a different level in the structure? Consider the following example:
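The original listing is likewise missing here; a sketch of the kind of wrapping the text describes might look like this (a reconstruction for illustration only):

<color>
    <item id="xx283" color="blue" size=10/>
</color>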
If a serious ambiguity appears in such a small example, imagine the scope of the problem when objects and data structures get more complex. At a minimum, data structures need to be unambiguous and not depend on any other knowledge for interpreting a data structure.
Water's use of XML makes a clear separation between objects and fields. An XML element represents an object. XML attributes represent fields of an object. The ConciseXML syntax allows any type of object as the value of an attribute; therefore, Water supports fields that can store any type of object, not just strings. |
For example, more than 90% of North America’s threatened bird species depend on forest habitat (American Forest Foundation, 2006) and 12 % of threatened or vulnerable plant species in Quebec find refuge here.
Forests provide other indirect benefits. They play a key role in maintaining water quality and containing carbon dioxide, a real issue in times of climate change.
Appalachian Corridor believes that forest conservation must take into account not only the ecological context, but the social and economic implications as well.
Wetlands provide many benefits:
- they improve water quality by retaining sediments and nutritive elements
- they regulate water levels and diminish the risk of flooding
- they provide habitat for numerous species of invertebrates, fish, amphibians, reptiles, birds and mammals
- they support activities like hunting, fishing, hiking or birdwatching
The threat to wetlands situated on private lands is extensive. Given that the real estate market is expanding and that rural areas are becoming more and more attractive to city dwellers looking for second homes, real estate development is a major threat to wetlands.
Appalachian Corridor believes that protecting wetlands is a collaborative effort not only with landowners, but with municipalities. The latter can contribute directly to the preservation of wetlands by collaborating with local conservation organisations.
Forest cover on the lower slopes of the Appalachians is dominated by Sugar Maple, White Ash, Basswood, American Beech, Red Maple, Yellow Birch, Black Cherry, Ironwood and Butternut. Hemlock and White Pine are present only locally.
At altitudes over 400 meters, several species are no longer found, and only the maples, Yellow Birch and American Beech persist. These species, mostly deciduous hardwoods, are replaced at about 700 meters by White Birch and Balsam Fir. Over 800 meters, the forest assumes a more boreal character and becomes dominated by Balsam Fir and Red Spruce.
Appalachian Corridor’s territory of action is a very suitable habitat for Wide-Ranging Mammals. These species need large unfragmented forest blocks linked together by natural corridors in order to roam and complete their life cycle (including breeding and feeding activities, seeking refuge, etc).
- Fisher: 600 to 4,000 ha
- Cougar: 4,000 to 9,000 ha
- Moose: 6,000 to 10,000 ha
- Black Bear: 6,000 to more than 10,000 ha
INTERIOR FOREST BIRDS
Owing to its vast areas of unfragmented forest blocks, our region plays a key role in maintaining viable populations of interior forest birds such as the Pileated Woodpecker, American Redstart, Eurasian Wren, Barred Owl and several birds of prey.
The Appalachian region is also known for its abundance of reptiles and amphibians. There are about twenty species in the region, including several listed as species at risk in Canada or Quebec, such as the Wood Turtle, Snapping Turtle, Pickerel Frog, Four-toed Salamander and two species of lungless salamanders associated with pristine mountain streams, i.e. the Spring Salamander and Northern Dusky Salamander.
The region is also known for its outcroppings of serpentine, a rock type uncommon on a worldwide scale. These rare outcroppings can reveal the presence of Rand's Goldenrod and Green Mountain Maidenhair Fern, two species likely to be designated as Threatened or Vulnerable in Quebec. |
Ancient Greece
- See also: European history
Ancient Greece, or Classical Greece, was a civilization which emerged around the 8th century BC and was annexed by Rome in the second century BC. Ancient Greece is remembered for its architecture, philosophy and other ideas, which became the foundation of modern Europe. The Olympic Games were originally an ancient Greek tradition.
- See Prehistoric Europe for background.
Classical Greece was not the first civilization around the Aegean Sea. Since the 27th century BC, the Minoan culture had flourished on Crete, until displaced by the Mycenaeans around the 16th century BC. However, there are no surviving historical records from these societies.
The first written records from the Greek city-states, or poleis, date to the 9th century BC. The period of the 5th and 4th centuries BC is today known as Classical Greece. During this period, the Greeks defended themselves against the mighty Persian Empire in a series of wars which became legendary in Western culture. Greece later entered a golden age of philosophy, drama and science. Through colonization and conquest, Greek language and culture came to stretch far beyond the territory of modern Greece, with especially strong footprints in Sicily and across Asia Minor (today, the Asian part of Turkey). At Ancient Greece's apogee, the dominant cities were Athens and Sparta, which were often at war with each other.
Starting with the conquests of Alexander the Great in the 4th century BC, Greek culture spread as far east as modern-day Afghanistan, and Egypt (see Ancient Egypt) was ruled for three centuries by the Greek Ptolemaic dynasty, which was founded by one of Alexander's generals. This late bloom of Greek culture, which was later partially supplanted by the Roman Empire, is known as the Hellenistic era.
According to the Biblical book of Acts, the Apostle Paul traveled to the region in the 1st century AD and brought Christianity to the area.
Some elements of Greek culture endured for centuries after the last Greek polity had disappeared. For instance Coptic, the language that Ancient Egyptian evolved into, was written in Greek-derived letters until it died out in the 17th century. Other examples include Greek authors and philosophers, such as Homer and Socrates, that were and are still widely read among a certain subset of Europeans. Greek terms have entered the general lexicon of many European languages including English, mostly relating to things the Greeks were known for (theatre, politics, democracy) or scientific terms. Sometimes Greek and Latin terms have been mixed, such as in the case of "automobile" which derives from Greek "autos" (~self) and Latin "mobilis" (~movable, moving). For these reasons and the fact that the Christian New Testament was written in Ancient Greek, Ancient Greek is still taught in many secondary schools and universities throughout Europe.
Although in modern times the Greek alphabet itself is only used to write Greek (and the individual letters as symbols in maths and science), the Latin and Cyrillic alphabets that are used by many other European languages were originally derived from the Greek alphabet. The very word "alphabet" is also derived from the first two letters of the Greek alphabet (alpha and beta) and its importance in being the first known phonetic script, a script to encode all vowel and consonant sounds (as opposed to other scripts that only encoded consonants or had ideographic and/or syllabic aspects), cannot be overstated.
The Byzantine Empire survived as a bastion of Greek heritage until it fell in 1453. Some Byzantine scholars moved west, and contributed to the Italian Renaissance. From the 17th century, the Grand Tour became a customary voyage where north Europeans visited the Greek ruins in southern Italy. Over time, tourism expanded to Greece proper.
Greece became independent from the Ottoman Empire in the 1820s, adopting a monarchical constitution largely on the urging of the Great Powers of Europe, and initially enthroning a Bavarian Wittelsbach prince, hence the – still used – blue and white colors of the Greek flag.
- 1 Athens (Attica). One of the most important poleis in Ancient Greece, Athens was a naval power and a center of learning and philosophy. While it was eventually surpassed militarily by Sparta and Thebes, its immense wealth meant that some of its classical architecture is still standing. Due in part to its history Athens later became the capital of modern Greece.
- 2 Argos (Peloponnese). A major stronghold during the Mycenaean era, this city may be older than Mycenae itself. In classical times it was a powerful rival of Sparta for dominance over the Peloponnese. Nowadays, there are still several interesting remains, among them a ruined temple to the goddess Hera.
- 3 Arta (Epirus). Historic capital of Epirus, famously associated with King Pyrrhus, opponent of the Roman Republic, after whom the phrase "Pyrrhic victory" was coined. There's an extensive archeological site, with ancient walls, the ruins of a temple of Apollo and a small theatre, among other things.
- 4 Corinth (Peloponnese). One of the largest and most important cities of Classical Greece, with a population of 90,000 in 400 BC. In classical times and earlier, Corinth had a temple of Aphrodite and rivaled Athens and Thebes in wealth.
- 5 Delphi (Sterea Hellada). Famously nested on a shoulder of Mount Parnassus, Delphi was believed to be determined by Zeus when he sought to find the omphalos (navel) of his "Grandmother Earth" (Ge, Gaea, or Gaia). Site of the Apollo cult, oracle, and eternal flame.
- 6 Dodona (about 6 km southwest of Ioannina, Epirus). The oldest recorded Hellenic oracle. There's a well preserved theater, built by King Pyrrhus, a must-see, which hosts theatrical performances.
- 7 Larissa (Thessaly). Historic Thessalian capital; the name means "stronghold" in ancient Greek. One of the oldest settlements in Greece, with artifacts uncovered dating to at least the Neolithic period (6000 BC) and two ancient theaters, one Greek, the other Roman.
- 8 Mount Olympos (Thessaly). The highest mountain in Greece (2917 m), the abode of the Gods.
- 9 Marathon (Attica). Site of the famous battle against the Persians, 490 BC, and starting point of the First modern Olympiad's eponymous foot race, 1896.
- 10 Mycenae (Peloponnese). Royal seat of Agamemnon, High King of the Greeks and undisputed leader of the anti-Trojan coalition, according to the Iliad. Its prominence from about 1600 BC to about 1100 BC was such that it lends its name to this period of Greek history, habitually referred to as "Mycenaean". Its acropolis, continuously inhabited from the Early Neolithic onwards, in Roman times had already become a tourist attraction.
- 11 Nafplio (Peloponnese). Said to have been founded by and named after the Argonaut Nauplios, father of Palamidis who fought in the Trojan War, this town is a good base to head out to the numerous archeological sites surrounding it. UNESCO World Heritage sites Epidaurus with its gorgeous theater, Tiryns the Mighty-Walled (Homer's words), and Mycenae are just some of them.
- 12 Olympia (Peloponnese). Site of the original Olympic Games and the Temple of Zeus. Hosted the shot put event in the 2004 Olympic Games - the very first time women athletes competed in the venue.
- 13 Piraeus (Attica). Athenian harbor from time immemorial, still is the Greek capital's chief point of entry and exit by sea. There's a nice archeological museum here.
- 14 Pella (Central Macedonia). Alexander the Great's Macedonian capital and birthplace. In 168 BC, it was sacked by the Romans, and its treasury transported to Rome. Nowadays it's a rich archeological site.
- 15 Pylos (Peloponnese). The "Sandy Pylos" mentioned very often in both the Iliad and the Odyssey, home to King Nestor, eldest of Agamemnon's advisers. The remains of the so-called "Palace of Nestor" have been excavated nearby.
- 16 Sparta (Peloponnese). Even contemporaries agreed that Athens would be perceived as having been much more important than Sparta. This is mostly because Spartan society was very militaristic and invested in war rather than in monuments or temples. A famous quote sums up the Spartan attitude towards building, even for war: "Sparta has no walls. The Spartans are the wall of Sparta."
- 17 Thebes (Central Greece). From time immemorial, this city has featured in an abundant mass of legends which rival the myths of Troy. In Classical times, it was the largest city of the ancient region of Boeotia, the leader of the Boeotian confederacy, and a major rival of Athens. It sided with the Persians during the 480 BC invasion, and formed a firm alliance with Sparta during the Peloponnesian War (431–404 BC). The modern city contains an archaeological museum, the remains of the Cadmea pre-Mycenaean citadel, and scattered ancient remains.
- 18 Thermopylae (Central Greece). The battlefield where King Leonidas and his 300 Lacedaemonians made their stand against the Persian army, immortalized in song, prose, comics and movies, in 480 BC. Today it's bisected by a highway, and right beside it, are the Spartans' burial mound, with a plaque containing the famous epitaph by Simonides: Ὦ ξεῖν', ἀγγέλλειν Λακεδαιμονίοις ὅτι τῇδε κείμεθα, τοῖς κείνων ῥήμασι πειθόμενοι. ("Go tell the Spartans, passerby, that here, obedient to their laws, we lie.") and a statue of Leonidas, under which an inscription reads laconically: Μολὼν λαβέ ("Come and take them!" — his answer to Xerxes' demand that the Greeks give up their weapons).
- 19 Volos (Thessaly). Identified with Iolkos, the alleged birthplace of mythical hero Jason, leader of the Argonauts. Features several archeological sites nearby.
- 20 Aegina. The famous Aegina Treasure (between 1700 and 1500 BC), now in the British Museum, came from this island. There stand the remains of three Greek temples.
- 21 Corfu (Corcyra, Korkyra). An island bound up with the history of Greece from the beginnings of Greek mythology. Famous sights, like the cave where Jason and Medea were married (Argonautica), or the beach where Ulysses met Nausicaa (Odyssey), remain very popular tourist attractions.
- 22 Delos. This island, the alleged birthplace of Apollo and Artemis, was already a holy sanctuary for a millennium before the establishment of this piece of Olympian Greek mythology; a very significant archaeological site.
- 23 Heraklion (Crete). Known in ancient times as Knossos; the ceremonial and political centre of the Minoan civilization and culture (3650 to 1400 BC).
- 24 Kos. Famously associated with native-born physician Hippocrates of Kos, the "Father of Western Medicine". Major historic attractions include the Asklepeion sanctuary, where he most probably studied, and the Platanus tree under which he taught his pupils the art of medicine.
- 25 Lindos (Rhodes). Beautiful hilltop town with a nice acropolis archeological site.
- 26 Mytilene (Lesbos). The historic capital of Lesbos island was briefly the home of master philosopher Aristotle. The island was also the home of Sappho, who is famous for her poetry with homoerotic features, which gave rise to the term 'lesbian' after the island's name. Nowadays, there is more than one archeological museum worth visiting.
- 27 Naxos. Herodotus describes Naxos circa 500 BC as the most prosperous of all the Greek islands. According to Greek mythology, the young Zeus was raised in a cave on Mt. Zas. Besides some nice ruined temples to Apollo and Demeter, the island is considered perfect for windsurfing as well as kitesurfing.
- 28 Samos. Birthplace of Pythagoras, the famous mathematician. Features the remains of a once-famous sanctuary to goddess Hera.
- 29 Samothrace. Site of the Sanctuary of the Great Gods, the centre of a mystery cult that rivaled Delos and Delphi. Here was unearthed the Victory of Samothrace statue, a highlight of the Louvre.
- 30 Agrigento (Sicily). Site of the ancient Greek city of Akragas (Ἀκράγας), famous for its seven monumental Greek temples in the Doric style, constructed during the 6th and 5th centuries BC. Now excavated and partially restored, they constitute some of the largest and best-preserved ancient Greek buildings outside of Greece itself.
- 31 Brindisi (Apulia). Allegedly founded by King Diomedes of Argos after he lost his way home from the siege of Troy. Its name comes from the Greek Brentesion (Βρεντήσιον), meaning "deer's head", which refers to the shape of its natural harbor. Some columns, most likely from the Roman period, still stand.
- 32 Cumae (Campania). Kumai (Κύμαι) was the first Greek colony on the mainland of Italy, founded by settlers from Euboea, allegedly led by the legendary gadget-maker Daedalus, in the 8th century BC. It's most famous as the seat of the Cumaean Sibyl, a priestess of Apollo with prophetic powers, very respected and consulted among the Romans. Her sanctuary is open to visitors.
- 33 Erice (Sicily). Ancient Eryx (Eρυξ) is today a gorgeous hilltop destination, where less than 500 people live close to a mediaeval fortification ("Venus Castle", built on the foundations of a temple to Aphrodite) on top of the 715 m high Mount Eryx. Local tradition places the lair of cyclops Polyphemus, Ulysses' foe in the Odyssey, on the side of this mountain. The town itself has wonderful views. There's a cable car that comes up from Trapani to the hilltop.
- 34 Gela (Sicily). Founded around 688 BC by colonists from Rhodes and Crete; the playwright Aeschylus, the "father of tragedy", died in this city in 456 BC.
- 35 Paestum (Campania). Widely considered to have the best and most extensive ancient Greek relics in the former Magna Graecia.
- 36 Reggio di Calabria (Calabria). A Greek colony at first, under the name Rhégion (Ῥήγιον, "Cape of the King"), Reggio is home to the National Archaeological Museum of Magna Græcia, one of the most important archaeological museums of Italy.
- 37 Segesta (Sicily). Said to have been founded by Trojan refugees, welcomed by the Elymians, right after the end of the Trojan War, Segesta is home to a beautiful Greek theater and an unusually well preserved Doric temple.
- 38 Selinunte (Sicily). Its Greek name was Selinous (Σελινοῦς). Features an extensive acropolis archeological site with several temples, one of which has been reconstructed.
- 39 Syracuse (Sicily). Famously besieged by an Athenian expedition (415-413 BC) during the Peloponnesian War. The siege was a failure and spelled the end of Athenian hegemony over the Greek world. It's also the birthplace of Archimedes, the famous mathematician and inventor.
- 40 Taranto (Apulia). Taras (Τάρας) was founded as a Spartan colony. The modern city has been built over the Greek city; a few ruins remain, including part of the city wall, two temple columns dating to the 6th century BC, and tombs.
- 41 Trapani (Sicily). Founded as early as the 13th century BC, as Drepanon (Δρέπανον), by the Elymians, the same people who also founded Erice and Segesta. A speculative literary hypothesis holds that princess Nausicaa, a prominent character of the Odyssey, was the real author of the epic poem and was born and raised in Drepanon — refer to Homeric translator Samuel Butler's The Authoress of the Odyssey and novelist Robert Graves' Homer's Daughter for further details.
- 42 Aphrodisias (Southern Aegean). Site of the Temple of Aphrodite. Now it's one of the best preserved ancient cities in Turkey, and without the usual crowds of Ephesus.
- 43 Assos (Northern Aegean). The Doric columns of the hilltop Temple of Athena here are the only ones of this order on the Asian mainland. Assos was also the site of the academy established by the philosopher Aristotle.
- 44 Bergama (Northern Aegean). The UNESCO-listed Pergamon was once the capital of the Kingdom of Pergamon, which was ruled by a Hellenistic dynasty and held sway over most of western Anatolia. The ruins of Pergamon are among the most popular archaeological sites in Turkey, and there is much to see in two separate areas — although the impressive altar was taken to Germany in the late 19th century and is now on display in the Pergamon Museum in Berlin.
- 45 Çavdarhisar (Central Anatolia). Features the impressive ruins of Aizanoi, site of the awesome Temple of Zeus.
- 46 Didyma (Southern Aegean). The sanctuary of the then great city of Miletus was once the site of an oracle that was as renowned as that of Delphi. Go there to see the ruins of the colossal Temple of Apollon, adorned with much ancient Greek art.
- 47 Ephesus (Central Aegean). A famous and prosperous polis in Classical times, birthplace of philosopher Heraclitus, now a large world heritage-listed archeological site and one of Turkey's major tourist attractions.
- 48 Foça (Central Aegean). Phocaea was the home of the sailors who ploughed the waves of the far-flung western Mediterranean, founding a number of colonies along the coasts of Iberia, Italy, and France, Marseille being one of them. Some believe the offshore islands were the domain of the Sirens of Homer's Odyssey and other Greek stories, beautiful sea fairies who lured sailors to their doom. Nowadays only scant ruins of Phocaea survive on a hillside some distance from the modern town, but the cobbled streets of Foça are lined throughout with Greek civic architecture of the 19th century.
- 49 Gülpınar (north of Babakale, Northern Aegean). The site of the lonely ruins of the Temple of Apollon Smintheion, the major sacred site of the Troad Peninsula extending south of Troy.
- 50 Izmir (Central Aegean). Ancient Smyrna has always been famous as the birthplace of Homer, thought to have lived here around the 8th century BC. Its agora (central market place) is now an open-air museum.
- 51 Knidos (Southern Aegean). This was the site of the Aphrodite of Knidos, a statue depicting a nude goddess of love created in the 4th century BC, which became so famous that it sparked one of the earliest forms of tourism in the classical world. Nowadays Knidos doesn't have as many visitors, as it lies at the end of a remote peninsula and its statue has long since been lost.
- 52 Miletus (Southern Aegean). Considered to be the largest and the wealthiest of the Greek cities prior to the Persian invasion of the 6th century BC, Miletus is also the birthplace of mathematician and philosopher Thales.
- 53 Phaselis (south of Kemer, Lycia). Once the major harbor of the region, the ruins of Phaselis overgrown by a pine forest are now the destination of many daily cruises departing from the nearby resort towns.
- 54 Priene (Southern Aegean). The earliest city built on a grid plan, Priene was once a major harbor on the Ionian coast. Its hillside ruins now overlook a fertile plain, formed as the Meander River gradually silted up its harbor.
- 55 Sinop (Black Sea Turkey). Σινώπη (Sinōpē), an important stopover on the Argonauts' journey to Colchis, is also the birthplace of the seminal philosopher Diogenes the Cynic.
- 56 Trabzon (Black Sea Turkey). Τραπεζοῦς (Trapezous) was the first Greek city reached by Xenophon and the Ten Thousand mercenaries, when fighting their way out of Persia, as described in the Anabasis.
- 57 Troy (Southern Marmara). The setting of all the action of Homer's Iliad.
- 58 Beglik Tash (7 km north of Primorsko). A Thracian megalithic sanctuary used for more than a millennium, from the 14th century BC to the 4th century AD.
- 59 Burgas (Bulgarian Black Sea Coast). The present city's territory features the Aquae Calidae hot springs, already used in the Neolithic between the 6th and 5th millennium BC. In the 4th century BC, Philip II of Macedon conquered the region and, according to legend, he was a frequent guest here.
- 60 Nesebar (Bulgarian Black Sea Coast). Founded as a Greek colony, the ancient city of Mesembria stood on what was once an island, part of which has since sunk beneath the sea. Some remains from the Hellenistic period are extant, however, including the acropolis, a temple of Apollo, a market place, and a fortification wall, which can still be seen on the north side of the peninsula.
- 61 Plovdiv (Upper Thracian Plain). Ancient Philippopolis was the historic capital of Thracia. Several ruins can be seen in or near the downtown area, including an aqueduct and a very well preserved theater.
- 62 Sozopol (Bulgarian Black Sea Coast). Anciently known as Apollonia Pontica (that is, "Apollonia on the Black Sea", the ancient Pontus Euxinus) and Apollonia Magna ("Great Apollonia"), founded in the 7th century BC by colonists from Miletus. A part of the ancient seaside fortifications, including a gate, have been preserved, along with an amphitheater.
- 63 Varna (Bulgarian Black Sea Coast). Began as a Greek colony named Odessos (Ὀδησσός). Home to the remains of a large bathing complex, and an archeological museum.
- 64 Constanța (Northern Dobruja). Originally a Greek colony, named Tomis.
- 65 Mangalia (Northern Dobruja). Began as a Greek colony named Callatis in the 6th century BC. Today, it's a rich archeological site, with ruins of the original Callatis citadel and an archeological museum.
- 66 Chersonesus Taurica ("Taurica" stands for the Crimean Peninsula) (Sevastopol, about 3 km from the city centre). Χερσόνησος was founded by settlers from Heraclea Pontica in Bithynia in the 6th century BC. On the site are various Byzantine basilicas, including a famous one with marble columns. It's listed as a UNESCO World Heritage Site.
- 67 Feodosiya. Founded as Theodosia (Θεοδοσία) by Greek colonists from Miletos in the 6th century BC. It was destroyed by the Huns in the 4th century AD. In the late 13th century, the city was purchased from the ruling Golden Horde by the Republic of Genoa; the present city's main historic attractions date from this period.
- 68 Kerch. Greek colonists from Miletos founded Panticapaeum (Παντικάπαιον) in the 7th century BC. Panticapaeum subdued nearby cities and by 480 BC became a capital of the Kingdom of Bosporus. Later, during the rule of Mithradates VI Eupator, Panticapaeum for a short period of time became the capital of the much more powerful and extensive Kingdom of Pontus. Its archeological site features ruins from the 5th century BC up to the 3rd century AD.
- 69 Yevpatoria. An ancient city with more than 2,500 years of history, named after King Mithradates VI Eupator of Pontus; the first recorded settlement in the area, called Kerkinitis (Κερκινίτις), was built by Greek colonists around 500 BC.
- 70 Batumi. This was the Greek colony of Bathys in the land of Colchis, the final destination of Jason and his Argonauts in their pursuit of the "Golden Fleece" around Pontos Axeinos, "the inhospitable sea". While not much remains of Bathys, in 2007 the city erected a large statue in honour of Medea, the mythical Colchian princess and wife of Jason, depicting her holding what appears to be the Golden Fleece.
- 71 Kutaisi. Identified as Aea, King Aeëtes' capital in Colchis, from whence the Golden Fleece was seized. Nearby, the so-called Prometheus's Cave is reported to have amazing stalactites.
- 72 Paphos. Renowned in antiquity as the birthplace of Aphrodite, the goddess of love. A few miles outside the city, the rock of Aphrodite (Petra tou Romiou, "Stone of the Greek") emerges from the sea. According to legend, Aphrodite rose from the waves in this strikingly beautiful spot.
- 73 Alexandria. Egyptian capital until the Islamic conquest, the best known of several towns founded by and named for Alexander the Great, nicknamed by him "my window on Greece". A center of learning in antiquity as well as the seat of the Ptolemaic dynasty.
- 74 Cyrene. Ancient Cyrene was the oldest, largest and most important of the five Greek cities ("pentapolis") of the greater Cyrenaica region. Prospering on the trade in its rich agricultural products, the city became one of the most influential centres of ancient Greek culture and art, gave rise to the hedonistic "Cyrenaics" movement, and was nicknamed the "Athens of Africa". Ruins of several temples dedicated to the Greek gods dot the site.
By the late 19th century, America's Industrial Revolution had a full head of steam. The Hall of Power Machinery holds examples of the machines that helped make the United States a world leader in industrial production. With models and machines—pumps, boilers, turbines, waterwheels, and engines—the hall follows the development of increasingly efficient power machinery.
The hall features:
- early developments in steam engines, illustrated with several models, including the first commercially useful steam engine, designed by Englishman Thomas Newcomen in 1712. Also on display is one of the oldest surviving relics of an American-built stationary steam engine, a Holloway 10-horsepower engine of 1819
- steam turbines, which replaced traditional steam engines, including a Curtis-General Electric steam turbine of 1927, cut away to reveal its blading
- internal combustion engines, from Nikolaus Otto and Eugen Langen's 1872 gas engine to examples of early engines based on the designs of German engineer Rudolf Diesel.
This is our third post in the VoiceThread A to Z series. In the first post, we discussed ways to use VoiceThread for early semester activities and in the second post we discussed creating presentations. This post will focus on incorporating storytelling into your curriculum. Upcoming posts will focus on other innovative lesson design and assessment ideas. Stay tuned!
Which types of courses can use storytelling as a lesson design framework? People typically assume storytelling is confined to creative writing or literature courses, but stories can be a part of any course. Let’s take a look at a few creative VoiceThread storytelling examples:
In this thread, the instructor is working with students who are learning to speak English. He begins a story and asks his students to make predictions and co-write the ending of the story in their new language. This design not only assesses whether they understood the vocabulary used and the overall story concept, but also helps him evaluate the students’ pronunciation.
Speaking and Writing Skills
In this thread, a 1st grade student wrote an original story and then uploaded screenshots so he could narrate the story for his class. The student not only learned how to write a story in 3 acts, but learned new technology skills and got to practice speaking via a read-aloud.
Turning any Lesson into a Story
This is a lesson comparing databases and search engines for research purposes. That content doesn’t seem like a natural fit for storytelling, but the instructor framed the lesson as a battle between the two approaches. He created a question in the minds of his students and that question helps create engagement.
These are just a few lesson ideas that you can bring into your class this year. If you have other ways to incorporate storytelling into your VoiceThread lessons, let us know in the comments below! |
1. What is Bipolar Disorder?
Bipolar disorder, also known as manic depression, is a mental illness that causes extreme changes in mood, energy, and activity levels that affect someone’s ability to carry out daily tasks.
It most often develops in older teenagers or young adults, with about 50 percent of all cases starting before age 25. Some people with bipolar disorder may display symptoms as children, while others only show symptoms later in life.
The main symptoms of this condition are intense emotional phases called “mood episodes.” These episodes can swing from extreme happiness or joy (mania) to deep sadness or hopelessness (depression).
Sometimes, people with bipolar disorder experience both happiness and sadness at the same time (mixed state).
Symptoms of a manic episode include:
- Overt happiness and sociable mood for a long period of time;
- High energy levels;
- Extreme irritability or restlessness;
- Talking quickly, changing ideas midconversation, or having racing thoughts;
- Short attention span;
- Sudden desire to take on new activities or projects;
- Sleeping too little;
- Impulsive, risky behavior.
Symptoms of a depressive episode include:
- Overt sadness or hopelessness for a long period of time;
- Low energy levels;
- Lack of interest in doing pleasurable activities;
- Difficulty concentrating, remembering things, and making choices;
- Restlessness or irritability;
- Extreme changes in eating or sleeping;
- Talk or threats of suicide;
- Suicide attempt.
People with bipolar disorder need to understand how to manage their condition, but it is equally important that the people in their lives, such as friends, family members, employers, coworkers, and teachers, know how to help them when they are going through a manic or depressive episode.
3. How can you help someone with bipolar disorder?
1. Learn more about this disorder.
If you want to be able to help, you need to research and learn more about bipolar disorder. You don’t need to be an expert, but the more you know about the disorder, the more understanding and patience you will have with the person.
2. Be an active listener.
Listen to what they say. Don’t assume that you know what they are going through. Stay calm during your conversations, and avoid arguments and any topics that seem to irritate or frustrate them. Pay attention to their emotions and feelings as signs of their illness.
3. Be understanding.
It’s not easy to understand what the people with bipolar disorder are experiencing. But you can try to understand what the person is going through. Your support can make a big difference in how they feel.
4. Be patient.
Bipolar disorder is usually a long-term condition. Don’t expect a quick recovery or a permanent cure. Be patient with the pace of recovery and prepare for setbacks and challenges. Managing bipolar disorder is a lifelong process. Stay optimistic and encourage them to keep going!
5. Make a plan.
Because bipolar disorder is an unpredictable illness, you should plan for bad times. Be clear. Agree with your loved one about what to do if their symptoms get worse. Have a plan for emergencies. If you both know what to do and what to expect of each other, you’ll feel more confident about the future.
6. Encourage activity.
People who are depressed often pull away from others. So encourage your friend or loved one to get out and do things he or she enjoys. Ask him to join you for a walk or a dinner out. If he says no, let it go. Ask again a few days later.
7. Take care of yourself.
As intense as your loved one’s needs may be, you count too. It’s important for you to stay healthy emotionally and physically.
Do things that you enjoy. Stay involved with other people you’re close to — social support from those relationships means a lot. Think about seeing a therapist on your own or joining a support group for other people who are close to someone who has bipolar disorder.
What have you already done to help someone with bipolar disorder?
Share your experience and thoughts with us in the comments below. Thanks!
Mountaintops, like Mount Dana in Yosemite National Park, have extreme conditions of cold and wind. The mountaintops are isolated from one another and so create a small island of favorable conditions for some plants.
Why It Matters
- Yosemite has high flat plateaus at around 12,000 feet elevation. The plateaus were left standing as glaciers carved the valleys around them.
- On these rocky plateaus are plants that grow nowhere else. They are endemic to these high altitude environments.
- Animals can be found up this high too, but they are rare. Some, like the marmot, hibernate during the long winters.
Explore More/Show What You Know/Can You Apply It?
With the link below, learn more about sky islands. Then answer the following questions.
- Yosemite National Park, Yosemite Nature Notes – 16 – Sky Islands (video): http://www.youtube.com/watch?v=yneADYBWRvs&list=PL890957589F8403A4&index=9
- Why are the locations discussed in the video called sky islands?
- As you move up in altitude along the Tioga Road in Yosemite, what happens to the vegetation?
- What characteristics of the plants in the sky islands are the scientists putting in their inventory?
- What is the long-term goal of this type of study? Why are scientists interested in doing studies like this long-term?
- As climate warms, what will happen to these plants that live at the tops of mountains?
- How much biodiversity is found in sky islands in Yosemite National Park? |
What is Elbow Dislocation?
The arm in the human body is made up of three bones that join to form a hinge joint called the elbow. The upper arm bone or humerus connects from the shoulder to the elbow to form the top of the hinge joint. The lower arm or forearm consists of two bones, the radius and the ulna. These bones connect the wrist to the elbow forming the bottom portion of the hinge joint.
The bones are held together by ligaments to provide stability to the joint. Muscles and tendons move the bones around each other and help to position the hand in space to perform various activities. An elbow dislocation occurs when the bones that make up the joint are forced out of alignment.
Causes of Elbow Dislocation
Elbow dislocations usually occur when you fall onto an outstretched hand. It can also occur from a traumatic injury such as a motor vehicle accident.
Symptoms of Elbow Dislocation
When the elbow is dislocated you may experience severe pain and swelling, and you may be unable to bend your arm. You may feel a pop or clunk when the elbow slips out of place.
Diagnosis of Elbow Dislocation
To diagnose an elbow dislocation, your doctor will examine your arm. Your doctor will also check the pulses at the wrist and evaluate the circulation to the hand. An X-ray is necessary to confirm the dislocation and determine if there is a break in the bone.
Treatment Options for Elbow Dislocation
An elbow dislocation is a serious injury and requires immediate medical attention.
What your Doctor Does to Treat an Elbow Dislocation
Your doctor will put your dislocated elbow back in place by pulling on your wrist and levering your elbow. This procedure is known as a reduction. You may be given medication to relieve your pain before the procedure. After the reduction, you may have to wear a splint to immobilize your arm at the elbow. After a few days, you may also need to perform gentle motion exercises to improve your range of motion and strength.
Speech-language pathologists, or SLPs as they are called for short, are the specialists who help your child with speech, talking and communication. However, you may be surprised at how broad the field of speech-language pathology really is and just how many skill areas SLPs are trained to build and expand in young children.
Speech Language therapy can help your child with:
Articulation is the physical ability to move the tongue, lips, jaw and palate (known as the articulators) to produce individual speech sounds, which we call phonemes. Intelligibility refers to how well people can understand your child’s speech. If a child’s articulation skills are compromised for any reason, his intelligibility will be decreased compared to other children his age. Speech therapy can teach your child how to produce the specific speech sounds or sound patterns that he is having difficulty with, thus increasing his overall speech intelligibility. You can read more about articulation development and delays here.
While speech involves the physical motor ability to talk, language is a symbolic, rule-governed system used to convey a message. Symbols can be words, either spoken or written. We also have gestural symbols, like shrugging our shoulders to indicate “I don’t know”, waving to indicate “bye-bye”, or raising our eyebrows to indicate that we are surprised by something.
Expressive language then, refers to what your child says. Speech therapy can help your child learn new words and how to put them together to form phrases and sentences (semantics and syntax) so that your child can communicate to you and others. You can read more about the difference between speech and language here.
Receptive language refers to your child’s ability to listen and understand language. Most often, young children have stronger receptive language skills (what they understand) than expressive language skills (what they can say). Speech therapy can help teach your child new vocabulary and how to use that knowledge to follow directions, answer questions, and participate in simple conversations with others.
Stuttering is a communication disorder that affects speech fluency. It is characterized by breaks in the flow of speech referred to as disfluencies and typically begins in childhood. Everyone experiences disfluencies in their speech. Some disfluencies are totally normal but having too many can actually significantly affect one’s ability to communicate.
Speech therapy can teach your child strategies on how to control this behavior and thus increasing his speech fluency and intelligibility.
Voice disorders refer to disorders that affect the vocal folds that allow us to have a voice. These can include vocal cord paralysis, nodules or polyps on the vocal folds, and other disorders that can cause hoarseness or aphonia (loss of voice). Resonance refers to “the quality of the voice that is determined by the balance of sound vibration in the oral, nasal, and pharyngeal cavities during speech. Abnormal resonance can occur if there is obstruction in one of the cavities, causing hyponasality or cul-de-sac resonance, or if there is velopharyngeal dysfunction (VPD), causing hypernasality and/or nasal emission.” A common voice disorder in young children is hoarseness caused by vocal abuse. Voice therapy can help in these conditions.
Social/ pragmatic language refers to the way an individual uses language to communicate and involves three major communication skills: using language to communicate in different ways (like greeting others, requesting, protesting, asking questions to gain information, etc), changing language according to the people or place it is being used (i.e. we speak differently to a child than we do to an adult; we speak differently inside vs. outside), and following the rules for conversation (taking turns in conversation, staying on topic, using and understanding verbal and nonverbal cues, etc).
Speech language therapy can help your child learn these social language skills so that they can participate appropriately in conversations with others.
Cognitive-communication disorders refer to the impairment of cognitive processes including attention, memory, abstract reasoning, awareness, and executive functions (self-monitoring, planning and problem solving). These can be developmental in nature (meaning the child is born with these deficits) or can be acquired due to a head injury, stroke, or degenerative diseases. Speech therapy can help your child to help build these skills and/or help your child learn compensatory methods to assist them with their deficits.
Augmentative and Alternative Communication, also known simply as AAC, refers to “…all forms of communication (other than oral speech) that are used to express thoughts, needs, wants, and ideas.
We all use AAC when we make facial expressions or gestures, use symbols or pictures, or write.” When speech therapists are working with children, our number one goal is always communication. Sometimes, a child may have such a severe delay/disorder that traditional oral speech is not possible or practical. In these circumstances, a speech therapist may work with a child and his family to come up with an AAC system to use instead of, or alongside, speech.
It is very important to note, that these AAC methods are not always used to replace speech. In many circumstances, AAC is used as a bridge to speech. Children can use the AAC methods to communicate while still working on developing speech skills (when appropriate).
Hands down, the best thing an SLP can do for your child, is to educate you and empower you on how to best help your child. A speech-language pathologist may spend an hour or so a week with your child, but you spend hours and hours a week interacting with your child. You wake your child, get him ready for his day, read to him, talk to him, bathe him, and put him down to sleep at night. It is during these everyday routines that your child is learning the most and is given the most opportunities to communicate.
When you are equipped with the knowledge, skills, and confidence YOU can be the best “speech therapist” your child will ever have. So ask questions, take notes, do the homework, and work closely with your child’s SLP. Together you can make an amazing team and change your child’s life, one word at a time. |
A trait is any aspect of an organism that defines it with respect to a concept. A trait may be color, when examining heat absorption; accent, in the case of humans when attempting to approximate nationality; specific leaf area (i.e., area to dry mass ratio of a leaf) when interested in investment in photosynthetic machinery over carbohydrate revenue; metabolic rate, when attempting to quantify the pace of living of an organism.
The concept trait, which was already used by Darwin (1859), was initially used almost exclusively as a proxy to individual performance. Examples include the matching between seed size and the size and shape of beaks in Galapagos finches. However, the advent and impressive development of community ecology has expanded the term trait away from its original usage. And it must be emphasized, there is nothing wrong with that a priori; word meanings evolve all the time.
Certainly, a trait-based approach in this discipline has resulted in the discovery of important findings such as the detection of niche-based community assembly processes (Kraft et al. 2015) or the identification of key plant traits driving ecosystem productivity (Garnier et al. 2004). In summary, a key advance of trait-based ecology is the ability to scale up processes, and one may not always need demography for that…
One can, however, find oneself (we do!!) attending conferences where the adjective “functional” is prefixed to the noun “trait” perhaps a bit too freely. Why is this an issue, potentially? Well, functional means that the trait itself either has a specific function on shaping organismal fitness (e.g., McGill et al. 2006; Violle et al. 2007), or it can be used as a proxy to it, perhaps through their modulation of fitness components.
What if one is measuring a trait that simply does not approximate a fitness component or the performance of an organism? It should not be a big deal to examine traits for ecosystem functioning among other questions…though we would argue that a system functions in a specific manner due to the emergent properties (e.g., turnover) of the demographic dynamics of the species that compose the community.
We contend that in most cases, only a careful and quantitative examination of the relationships between traits and fitness components can help scientists benefit from a “trait approach” to their specific questions. Equally important is the question of how the enormous interspecific variation in plant traits that has been found relates to key processes at the organismal level, such as competitive ability (Albert et al. 2011; Violle et al. 2012).
To contribute to such an urgent and timely need, at the 2017 annual meeting of the Ecological Society of America in Portland, we are organizing a symposium titled “Towards a unified framework for functional traits and life history strategies in plants”.
Due to the acquisition of large volumes of data on traits and vital rates (e.g., survival and growth), during the last decade researchers have implemented global analyses on the covariation of traits on the one hand, and vital rates, on the other. As a result of such efforts, important discoveries have been made, including the leaf-economics spectrum (Wright et al. 2004), or the wood-economics spectrum (Chave 2009), among others. Despite the independence in approaches and data used in these global analyses of functional traits and of vital rates, remarkable similarities in how plant life is structured are starting to emerge.
Recent findings (see fig. 1) report a couple of axes explaining most of the variation in the vast plant biodiversity repertoire, one related to plant size, and the other to individual/organ turnover (Díaz et al. 2016; Salguero-Gómez et al. 2017). Moreover, global patterns for the unification of traits and vital rates are starting to pop up too (Salguero-Gómez 2015). The invited speakers in this symposium, leaders in the fields of plant traits and demography, will provide a synthetic framework to integrate plant shape, function and strategies using big data.
Our own contributions to this field have led us to build bridges between demographers (R. Salguero-Gómez) and functional ecologists (C. Violle), and to start evaluating single-trait to single-vital rate correlations in the plant kingdom. For instance, using data from TRY (Kattge et al. 2011) and COMPADRE (Salguero-Gómez et al. 2015), it was shown that not all traits commonly regarded as functional in the plant kingdom are actually so (Fig. 2), and that their functionality is fitness-component specific (Adler et al. 2014).
Also, leaf nitrogen content, the main driver of the leaf-economics spectrum (Wright et al. 2004), turns out not to have much predictive power for the relative importance of survival or growth of individuals in a population across over 200 plant species worldwide, but it was positively correlated with the effect of fecundity. The opposite was true for seed mass, which was not correlated with the elasticity of population growth rate to growth or fecundity, but was positively correlated with the survival elasticity.
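To make the shape of such a single-trait to single-vital-rate test concrete, here is a minimal, purely illustrative sketch in Python. The trait values and elasticities are simulated stand-ins rather than actual TRY or COMPADRE records, and the real analyses involve far more species plus corrections (for phylogeny, sampling and measurement error) that are omitted here.

```python
# Illustrative only: correlate a species-level trait with a vital-rate
# elasticity, using made-up data in place of the TRY / COMPADRE records.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_species = 50

# Hypothetical species means: leaf nitrogen content (mg/g) and the
# elasticity of population growth rate to fecundity.
leaf_n = rng.normal(loc=20, scale=5, size=n_species)
fecundity_elasticity = 0.01 * leaf_n + rng.normal(scale=0.1, size=n_species)

r, p = pearsonr(leaf_n, fecundity_elasticity)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```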
Even in the models where the strongest correlations were found, the coefficients of correlation were low. We have suspected for quite some time now that this is because plant population ecologists and functional ecologists have not historically worked together in the field. Consequently, big-data correlative exercises like this may inevitably blur existing underlying controls of anatomy on demography, owing to the great geographic distance (and thus difference in microhabitat conditions) between the collection points of traits and vital rates. In an effort to build predictive models of life history strategies and population performance using anatomical and physiological traits as proxies, the StrateGo Network (Fig 3) was recently launched.
StrateGo is liaising with field researchers worldwide to test the following hypotheses: (1) shifts in functional trait values precede changes in population dynamics, and thus traits are good proxies for demographic projections; (2) not all traits are functional: trait functionality depends on the life history of the species along the fast-slow continuum and the reproductive strategies continuum (Salguero-Gómez et al. 2016); and (3) interactions between traits on orthogonal axes, such as wood density and leaf nitrogen, describe emergent properties resulting from trade-offs in resource allocation that single traits cannot capture.
StrateGo, a globally distributed network, sits on top of a much older network: COMPADRE. The COMPADRE Network is constituted by all the researchers who have contributed plant population dynamics data in the shape of matrix population models (Caswell 2001) or integral projection models (Easterling et al. 2000).
Researchers who have collected or are currently collecting demographic data are being encouraged by StrateGo to collect a set of traits in the same, comparable manner. However, StrateGo is also open to researchers who have collected, are collecting or plan to collect functional trait data and would like to evaluate plant population dynamics too at the same study site(s). Incidentally, wanna join? Contact here.
Also cooking backstage: we are planning a special feature, together with Olivier Gimenez and Dylan Childs, for the British Ecological Society. Manuscripts will be submitted to Journal of Ecology, Functional Ecology and Journal of Animal Ecology. The special feature, inspired by the aforementioned ESA symposium, will aim to provide novel frameworks to link traits and life history strategies within plants, animals and microbes.
Dr Rob Salguero-Gómez (University of Oxford, UK, and Associate Editor of Journal of Ecology)
Dr Cyrille Violle (CEFE CNRS Senior Researcher, France) |
The idea of capital has long had a strong materialistic bent that is evident in the dominance of material capital in economic thinking. The logical basis of an all-inclusive concept of capital, which includes human capital, was established by Irving Fisher (1906). This concept treats all sources of income streams as forms of capital. These sources include not only such material forms as natural resources and reproducible producer and consumer goods and commodities but also such human forms as the inherited and acquired abilities of producers and consumers. Yet the core of economics with respect to this matter concentrates on producer goods, particularly on structures, equipment, and inventories, with little or no attention to the abilities of human beings, even though human resources are much the larger source of income streams.
An approach to capital that includes human capital has two major advantages. The first arises out of the fact that by taking both human capital and material capital into account, a number of biases in economics would be corrected. The overemphasis of material sources of income streams is one of them. Closely related are the imbalances in investment programs of countries where investment in human capital is not an integral part of such programs. Another is the mistaken inference that the real capital–income ratio is necessarily declining over time when the observed ratio of material capital to income falls. Still another is the belief that the productivity of the economy as a whole increases as rapidly as total output rises, relative to measured inputs, although the estimates of inputs fail to include many improvements in the quality of both material factors and human agents. These improvements in quality are the product of investment and thus are forms of capital. There are strong reasons, both theoretical and empirical, to support the inference that in value terms the productivity of the U.S. economy, for example, has remained approximately constant for many decades. [See Productivity.] An all-inclusive concept of capital also provides a framework for determining how closely the private and public sectors of an economy come to an optimum in investing in each of the sources of income streams.
The other major advantage of the concept of human capital is in analyzing the various organized activities that augment those human abilities which raise real income prospects. People acquire both producer and consumer abilities. Many of these abilities are clearly the product of investment. There are important unsettled questions about economic growth, changes in the pattern of wages and salaries and the personal distribution of income, that can be resolved once investment in human capital is taken into account. There are also biases in the way the labor force is measured, and in the treatment of public expenditure for education and medical care that can be corrected by using the concept of human capital.
Inherited and acquired abilities. The philosopher–economist Adam Smith boldly included all useful abilities of the inhabitants of a country, whether inherited or acquired, as part of capital. These two sorts of abilities, however, differ importantly in the formation of human capital.
Migration and population growth aside, inherited abilities of a population are akin to the original properties of land in the sense that they are “given by nature” in any time period that is meaningful for economic analysis. Any genetic drift that affects the distribution and level of these abilities occurs so slowly that it is of no relevance in economic analysis. It seems to be true also that the distribution of inherited abilities within any large population remains, for all practical purposes, constant over time and that the distribution of these abilities is approximately the same whether a country is poor or rich, backward or modern, provided the population is large.
But the picture is quite otherwise in the case of acquired abilities having economic value. The formation and maintenance of these abilities are analogous to the formation and maintenance of reproducible material capital. These abilities are obviously subject to depreciation and obsolescence. The distribution and level of acquired abilities can be altered importantly during a time span that matters in economic analysis. Historically they have been altered vastly in countries that have developed a modern economy. In this respect the difference between poor and rich, backward and modern, countries is indeed great. The level of acquired abilities that have economic value is very high in a few countries while it is still exceedingly low in most countries. The truth is that the amount of human capital per worker, or per million inhabitants, varies greatly among countries.
The acquired abilities that raise income prospects are of many types and differ from country to country, depending upon differences in the demand for these abilities and upon differences in the opportunities to supply them. They also are augmented in different ways, depending in part on the type of abilities and in part on the process of investing in them. Some abilities are acquired through informal and essentially unorganized activities, which is the case with most learning in the home and learning from informal community experiences. Others are acquired through organized activities that are, as a rule, also specialized; these include schooling, most on-the-job training, and many adult programs to improve the skills and knowledge of those participating [see Adult education; Labor force, article on Participation]. People also improve their future earning abilities through medical care, by acquiring job and other types of information about the economic system, and by migrating to take better jobs.
The formation of human capital, especially through those activities which have become organized and specialized in a modern economy, is of a magnitude to alter radically the conventional estimates of savings and capital formation. These forms of human capital are the source of many additional income streams contributing to economic growth. They also alter wages and salaries, in both absolute and relative terms, and the share of the national income from earnings relative to that from property over time.
Instead of developing and using a general concept that includes human capital, economists have predominantly used a concept restricted to classes of wealth that are bought and sold. Irving Fisher, in a series of papers published just before the turn of the century and then in his excellent but neglected book, The Nature of Capital and Income (1906), clearly and cogently established the economic basis for an all-inclusive concept of capital. But the prestige of Alfred Marshall was too great; his ideas on this matter prevailed. Marshall dismissed Fisher’s approach in these words: “Regarded from the abstract and mathematical point of view, his position is incontestable. But he seems to take too little account of the necessity for keeping realistic discussions in touch with the language of the market-place” (Marshall, 1916, pp. 787–788). Marshall concluded his appendix “Definitions of Capital” by stating, “… we are seeking a definition that will keep realistic economics in touch with the market-place …” (ibid., p. 790). Marshall’s market-place restriction had the effect of excluding all capital that becomes an integral part of a people.
In this respect five of the more serious biases that thwart economic analysis and limit its usefulness require a brief comment.
It has been said that the economists have had a rather unfavorable image in the public mind. Whether true or not, there have been many protests contending that the policy implications of economics are primarily concerned with the value of material things. Unquestionably, economics has been strongly and persistently biased in favor of producer and consumer goods and commodities. For this reason it is justly charged as having a materialistic orientation. This orientation is all too evident in the treatment of capital, where producer goods are treated as if they were the sum and substance of capital and as if economic growth were dependent wholly on investment in such goods. Accordingly, there is much merit in the protest that the rise of human capital in capitalism is not seen or that increases in human capital, which have become a crucial feature of the economic system, are neglected.
Labor inputs inadequately specified
Economists have found it all too convenient to think of labor as a homogeneous input free of any capital components. Much theory rests on a presumed dichotomy between labor and capital. But it is a treacherous dichotomy when analyzing economic growth, for the reason that the acquired abilities of labor that contribute to growth are as much a product of investment in man as growth is a product of investment in material forms of capital. The bias here is also clearly evident in the conventional approach to the measurement of labor as a factor of production.
In this approach it suffices to count the number of workers in the labor force or the number of man-hours worked. Differences in the acquired abilities of a labor force that occur over time are not reckoned. This particular bias has fostered the retention of the classical notion of labor as a capacity to do manual work requiring little skill and knowledge, a capacity with which, according to this notion, laborers are endowed about equally. But this notion of labor is patently wrong. The size of the labor force or the number of man-hours worked is not a satisfactory measure of increases in the productive services rendered by labor over time because of changes in the human capital component.
Misinterpretation of declines in capital–income ratios
The empirical foundation of economics has been much strengthened by studies of wealth and income [see National income and product accounts; National wealth]. One of the uses made of these studies has been to show that the capital–income ratio has been declining in countries with a modern economy. The decline in this ratio is then frequently viewed with apprehension because of the inferences that are drawn with respect to savings and investment and with respect to economic growth. Here, too, there is obviously a bias arising out of the restricted concept of capital on which these estimates are based.
There are no compelling reasons why the stock of any particular class of capital should not fall (or rise) relative to national income over time. Producer goods—structures, equipment, and inventories—are such a class. It is this particular class of capital that has been declining relative to income in the case of these estimates. Leaving aside the fact that the estimates of producer goods omit many improvements, they are at best only a part of all capital. The most serious omission in them is human capital, which has been increasing at a much higher rate than that of material reproducible capital. In the United States between 1929 and 1957, for example, while national income was increasing at about 3 per cent per annum, the stock of reproducible tangible capital rose only 2 per cent per annum. But the stock of educational and of training-on-the-job capital in the labor force rose between 4 and 5 per cent per annum. It turns out that the sum of this class of material capital and of the human capital just mentioned rose about 3 per cent per annum, that is, at the same rate as national income. The ratio of this more nearly all-inclusive concept of capital to national income was about the same in 1929 and in 1957; in both of these years it was about six. Thus the apparent substantial declines in capital relative to income, which are based on estimates covering a number of modern countries, are an illusion in the sense that they are not valid indications of what has been happening to the ratio of all capital to income.
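As a rough, back-of-the-envelope check of this comparison, the sketch below projects the two capital stocks and national income forward from 1929 to 1957 at the growth rates just cited. The 60/40 split between material and human capital is an assumption made purely for illustration; only the growth rates and the initial capital–income ratio of about six come from the figures above.

```python
# Back-of-the-envelope check, 1929-1957. The 60/40 split of the initial
# capital stock between material and human capital is assumed, not sourced.
years = 28                # 1929 to 1957
material = 0.6 * 6.0      # material capital, as a multiple of 1929 income
human = 0.4 * 6.0         # education plus on-the-job-training capital
income = 1.0              # national income indexed to 1.0 in 1929

for _ in range(years):
    material *= 1.02      # about 2 per cent per annum
    human *= 1.045        # between 4 and 5 per cent per annum
    income *= 1.03        # about 3 per cent per annum

print(f"capital-income ratio in 1957: {(material + human) / income:.1f}")
# Prints roughly 6.3 -- essentially unchanged from about six in 1929, even
# though material capital alone fell relative to income.
```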
Savings and investment–income relation
Another closely related issue has been the concern about the amount of savings and investment relative to income, the concern being that as national income rises, savings and investment decline relatively [see Consumption function]. Here, too, conventional estimates are very misleading because they omit investment in human capital. They understate the amount of savings and investment that occurs in any given year, and they show a decline in such savings and investment over time relative to income, when in fact there may have been no decline in all savings and investment in relation to income. Again, an appeal to more all-inclusive estimates for the economy of the United States is instructive. Based on the sum of the investment in reproducible material capital and in educational and on-the-job-training capital in the labor force, the amount of capital thus formed was equal to about 26 per cent of net national product in both 1929 and 1957.
Seeming rise in aggregate productivity
Another bias that has come to thwart economic analysis is the belief that the productivity of capital and labor has been rising very substantially over time, especially in countries that have developed a modern economy. There are estimates to support the belief that output has been rising not only relative to capital and to labor, respectively, but also relative to all inputs of capital and labor treated as an aggregate. The output of a particular industry or sector may, of course, rise in relation to material capital or to the size of the labor force. Nor is it implausible, under the circumstances that characterize economic growth, for an entire industry or sector to lag in its adjustments and thus to operate for a considerable period at a disequilibrium. This would cause the value of its output to decline relative to all inputs valued at equilibrium prices as the disequilibrium becomes established, and then to rise as such an industry or sector reattains an equilibrium.
But there is no strong theoretical or empirical basis for believing that the productivity of all factors of production treated as an aggregate, where the economy grows at an even pace, should either rise or fall. A much more plausible hypothesis is that it remains approximately constant over time. Why are there so many estimates that seemingly show total national output rising relative to total inputs? In a real sense it is because more of the additional capital that is formed over time is concealed than is income. This explanation is undoubtedly too cryptic, and therefore some elaboration is in order.
The analytical game that most economists have been playing in studying economic growth has been to take an index of reproducible material capital that omits changes in quality, in the sense that it abstracts from improvements in material capital. The next move is to take the size of the labor force, or man-hours worked, which also omits changes in quality, in the sense that improvements in the capabilities of labor are not fully reckoned. These two measures of inputs are then aggregated and related to total output. This game always shows the total input of such capital and labor as falling relative to total output over time. The inference is then drawn that the productivity of the economy as a whole rises over time, and this rise in productivity is generously attributed to “technological change,” which according to this game would appear to account for most of the observed economic growth.
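A small numerical sketch may make the mechanics of this “game” plainer. With assumed figures (none taken from the text), output growing faster than a share-weighted index of unadjusted capital and labor leaves a residual that is then labeled technological change.

```python
# Illustrative growth accounting with assumed numbers.
output_growth = 0.03    # total output grows 3% per year
capital_growth = 0.02   # capital index, quality improvements omitted
labor_growth = 0.01     # man-hours, quality improvements omitted
capital_share = 0.3     # factor income shares used as weights
labor_share = 0.7

input_growth = capital_share * capital_growth + labor_share * labor_growth
residual = output_growth - input_growth
print(f"aggregate input growth: {input_growth:.1%}")          # 1.3%
print(f"residual ('technological change'): {residual:.1%}")   # 1.7%
# Every omitted quality improvement in capital and labor ends up in the
# residual, which is why it seems to account for most of observed growth.
```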
The finding that the over-all productivity of the economy increases in this manner is due to two types of illusions. The first is simply a consequence of the fact that many factors of production which are added to the resources of an economy over time are not included among the inputs; they are frequently and conveniently swept under the rug of “technological change.” Here, basically, the analytical problem is one of specifying and identifying the improvements in human and material resources that occur over time. Undoubtedly most of the seeming rise in over-all productivity, which is based on estimates of the conventional measures of material capital and labor, is a result of omission of a large array of these quality components [see Agriculture, article on Productivity and technology].
The second type of illusion is based on an apparent change in the capital–income ratio of a country. The observed ratio declines for reasons already noted, namely, because only a part of all capital is reckoned and because this part of the stock of capital is not increasing at as high a rate as either all capital or income. To avoid this type of illusion, an all-inclusive concept of capital is necessary. The crucial question is as follows: When all the sources of income streams are treated as capital, is the rate of return on capital, so conceived, rising persistently over time? There is no theoretical basis for an affirmative answer, even though economic instability or forms of disequilibrium are postulated. Nor is there any empirical evidence that would support an affirmative answer to this question. The rate of return to investment that entails a “standard” component of risk and uncertainty is probably no higher presently in the United States than it was, say, during the 1920s. Nor should it come as a surprise that this rate of return has not been rising secularly.
With regard to motives and preferences of people for holding and acquiring sources of income streams, the most plausible assumption is that they have remained essentially constant. With respect to the behavior of suppliers of the sources of income streams, the equally plausible assumption is that these suppliers have been successful in providing enough new sources to increase national income, as it is now measured, at a rate of, say, between 3 and 4 per cent per year; however, they have not succeeded in increasing the supply at so fast a rate as to cause the price of these sources per dollar of income per year to decline, given the growth in demand consistent with the underlying preferences of the demanders. From these two very plausible assumptions it follows that the rate of return to investment would tend to remain approximately constant over time; and in this critical sense, the value of capital in relation to income has not been rising.
Both of these types of productivity illusions are in large part a consequence of the neglect of human capital and of its contribution to production, although improvements in the quality of other forms of capital are also a part.
But there is a sense in which real income can rise in a way that would alter the productivity of an economy that is beyond the two illusions already examined, although it is closely related analytically. There are consumer satisfactions that people derive from better health, from more education, and from more leisure time. These satisfactions also increase in most countries as economic growth occurs. Are they to be treated as a consumer surplus? [See Consumer’s surplus.] Or are they concealed income from particular forms of capital that have been augmented over time? Surely a part of education has the attribute of an enduring consumer component that renders a stream of consumer satisfactions. These satisfactions from education, at least in principle, can be treated as a product of investment in schooling, akin to the satisfactions derived from investment in conventional consumer durables. But none of them appear in national income as it is presently measured. If they were included and if the sources were omitted from capital, it would tend once again to reduce the capital–income ratio. Contrariwise, if the stock of additional education capital represented by these enduring consumer abilities were to be identified and measured and thus made a part of the total stock of capital, and if the stream of income from this part of educational capital were omitted from national income, it would tend to increase the apparent capital–income ratio over time. Obviously the reason for the apparent rise of such a ratio would be the use of a partial concept of income. Unquestionably the same reasoning is applicable to consumer satisfactions from better health.
It is not obvious, however, that the additional satisfactions that come from more leisure time, from decreases in hours of work per week or in days worked per week and during a year, can be treated in the same way. But since more time for leisure has a value, the value of it can be included in income. The source of this leisure is an integral part of total production that provides enough income so that people can afford the leisure time. Thus, when all sources of income are treated as capital, the source of leisure is already reckoned. Accordingly, if the income component represented by leisure were to be omitted, and if all capital were reckoned, it would tend to increase the apparent capital–income ratio over time.
Public “welfare” expenditures
A long-standing bias permeates the treatment of public expenditures for health facilities and services and for education, treating them as if they were wholly for consumption, as welfare measures that in no way enhance the abilities of people as producers. Even vocational training and public funds to retrain workers in depressed areas for new jobs are often treated as welfare programs, although they are predominantly an investment in the productive abilities of the recipients. Conversely, most of the notions for attaining an optimum rate of economic growth in poor countries are seriously biased because of their strong emphasis on investment in new steel mills and in other modern industrial structures and equipment, with no comparable emphasis on providing for the complementary investment in human agents to administer and to do the skilled work which these installations require.
Human beings are both consumers and producers. To production they contribute either entrepreneurship or work. In classical economics the producer attribute of human agents is that of a factor of production, referred to simply as labor. In modern analysis it is that of an input, or of a coefficient of production [see Production]. In accounting for increases in national income over time, labor is treated as one of the sources of economic growth. As a source of income streams, the acquired abilities of human agents have, as we have argued, the attribute of an investment. All of these attributes of human agents are in one way or another a form of capital. Viewed as capital, they are a stock that renders services of economic value. The services are either consumer or producer services. Human capital is not, of course, bought or sold where men are free and none are slaves, as are the material forms of capital; but its producer services are generally for hire, the price being a wage or a salary. These producer services of human agents can be augmented by means of investment, which increases their income prospects. The additional income that is realized from an investment in human capital implies some rate of return.
The logical approach to determining the economic value of any of these attributes of human agents will differ depending upon the aim of the analysis, the theory and estimating technique that are used, and the limitations of the data. Much, of course, depends upon the aim. To see this, three different aims will be considered briefly.
What are the inputs in an economy, sector, or industry? A basic difficulty arises at once out of the fact that an input has two economic faces; one is the income value of its productive services, and the other is its capital value. In the case of a parcel of land, for example, there is the rent paid for its use and the price at which the land sells.
The aim of many studies is to determine the economic value of the productive services of the inputs. Suppose we begin with a net national product for a given year and ignore all increases or decreases in inputs that occur during the year. Suppose also that the inputs are of two sorts, namely, labor and producer goods. Such a gross dichotomy may show that a fourth of the net national product is functionally contributed by producer goods and three-fourths by human agents. There is then the temptation to transform the productive services of the inputs into stocks of capital by using a naive approach that treats their services as permanent income streams and that capitalizes each of these income streams at the same rate of return. This approach would, of course, imply that the stock of human capital is three times as large as that of producer goods.
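A minimal numerical sketch of this naive transformation follows; only the one-fourth/three-fourths split comes from the text, while the net national product figure and the 10 per cent rate of return are hypothetical:

```python
# A minimal sketch of the "naive" capitalization described above: treat each
# input's income share as a permanent stream and capitalize both streams at
# the same assumed rate of return. NNP level and rate are purely illustrative.
net_national_product = 400.0   # hypothetical NNP, in $ billions
producer_goods_share = 0.25    # one-fourth attributed to producer goods
human_agents_share = 0.75      # three-fourths attributed to human agents
rate_of_return = 0.10          # same assumed rate applied to both streams

producer_goods_stock = producer_goods_share * net_national_product / rate_of_return
human_capital_stock = human_agents_share * net_national_product / rate_of_return

# Because both streams are capitalized at the same rate, the implied ratio of
# human capital to producer goods equals the ratio of the income shares: 3 to 1.
print(producer_goods_stock, human_capital_stock,
      human_capital_stock / producer_goods_stock)
```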
But this is obviously a naive way of transforming income streams into capital stocks, since estimates of net national product, as presently determined, are far from net (especially so for human capital), and thus in this important respect these estimates do not represent permanent income streams. It is also true that it would be a rare coincidence in a dynamic, growing economy to find an equality in the real rates of return to investment.
Nevertheless, in pursuing this aim there may be analytical reasons for concentrating on the magnitude and value of the productive services of these inputs while leaving aside the problem of determining their capital value. To do this, what is required is the price and the amount of the respective input services employed. With regard to these requirements, economists have treated human agents more adequately than producer goods. The difference is a consequence partly of inadequacies of the theory used and partly of limitations of the data. In aggregating producer goods, the differences in the quasi rents, or the relative prices of the services of these goods, are not as a rule reckoned; nor are the improvements in quality of new producer goods generally taken into account [see Rent].
In this sense it is true that most estimates of material capital conceal a part of the additional capital that is formed over time. The problem of aggregation in this connection is not only conceptual but is also confounded by the lack of price data of the productive services of different classes of producer goods [see Aggregation]. Fortunately these analytical inadequacies are not nearly so pronounced in the case of the productive services of human agents. Wages and salaries provide price data; and human agents can be classified into fairly homogeneous groups by occupations, or levels of skills, and by age, sex, and schooling. Nevertheless, this picture of the productive services of inputs will not reveal the differences in investment opportunities among the reproducible inputs.
Sources of economic growth
National income increases at, say, a rate of 3 per cent per year between two dates. What are the sources of that growth? The matter of investment, rates of return to investment, and whether net savings are allocated optimally among investment opportunities can be put aside in determining the part of growth associated with each source. If the income value of the productive services of human agents and of producer goods were known, it would entail only a little simple arithmetic. But such is not the case, mainly for the reasons already considered. For human agents it is fairly straightforward to the extent that there is a linkage between what they contribute to production and what they earn in wages and salaries, although to determine what part of the increase in wages and salaries over time comes from more schooling, on-the-job training, better health, and from still other sources is far from easy. But not all of the additional income from these sources accrues to the individuals who have acquired these abilities. Some of it accrues to their co-workers, employers, and neighbors. In addition, there is an array of consumer satisfactions from these sources that accrues partly to the individuals who have acquired the relevant abilities and partly to others in the community. In general these consumer satisfactions are omitted from national income as presently measured.
For producer goods, it is as yet most difficult to ascertain this linkage because of the manner in which these forms of capital are identified and measured and because of the lack of price information on the services of producer goods. Improvements in the quality of such capital are largely omitted, and they become a major part of a “source” that appears as a residual, an increase in “output per unit of input.” Thus, until this residual is properly allocated, it is obvious that producer goods (material capital) are underrated as a source of growth. Capital embodied in human agents is in this respect on a much stronger footing.
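The residual referred to here is commonly written as a growth-accounting decomposition; the notation below is a conventional illustration rather than the article's own formulation:

```latex
% A conventional growth-accounting identity (notation assumed here, not taken
% from the text): output growth is split into input contributions plus a
% residual, the growth in "output per unit of input".
\[
\frac{\dot{Y}}{Y}
  = s_L \frac{\dot{L}}{L} + s_K \frac{\dot{K}}{K}
    + \underbrace{\frac{\dot{A}}{A}}_{\text{residual}},
\]
% where s_L and s_K are the income shares of labor and of producer goods.
% Unmeasured quality improvements in K are swept into the residual, which is
% why producer goods appear underrated as a source of growth.
```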
While this knowledge of the sources of economic growth is indeed useful in serving the aim of Edward F. Denison’s comprehensive study (1962), it is not an approach to determining the underlying costs and returns to the investment that produced the additional sources that account for this part of economic growth. The important matter of an optimum allocation of total net savings among investment opportunities is not a part of the aim of this approach.
Acquired abilities that have economic value usually entail identifiable costs. Each process of acquiring abilities that enhance income prospects has the attributes of an investment. Viewed as an investment, what is the rate of return? The aim implied by this question cannot be realized by obtaining a picture of inputs or by ascertaining the sources of economic growth as they are treated above. An investment approach is required to attain this aim, which is important because knowledge about investment and the rate of return in this connection is essential in making the economic decisions necessary to achieve an optimum allocation of savings among investment opportunities. The relevance of this approach for a large array of economic problems is set forth by this writer in “Investment in Human Capital” (Schultz 1961b). The theoretical “relations between earnings, rates of return, and the amount invested” and “how the latter two can be indirectly inferred from earnings” are investigated by Gary S. Becker (1962) in a paper that appears in the supplement referred to below.
The investment approach is central in a number of recent studies, the results of which are presented in a supplement, “Investment in Human Beings,” to the Journal of Political Economy, October 1962. It includes the theoretical analysis by Becker just mentioned and the findings of several major empirical studies. These studies pertain to education, on-the-job training, health, information about the labor market, and migration when migration is treated as investment in human beings. Only two of these forms of investment, on-the-job training and education, will be considered here.
Investment in job training
By treating “training” as an “investment in acquisition of skill or in improvement of worker productivity” and by using a procedure akin to that used in determining investment in education, Jacob Mincer (1962) has identified and measured what appear to be costs of on-the-job training. His study, which also considers returns to this training, is restricted to males in the United States.
The investment in this training during 1939 was $3,000 million and during 1958, $13,500 million. In constant 1954 dollars, it was $5,700 million and $12,500 million, respectively. Mincer’s study reveals two major shifts. One is toward higher skill levels; for example, males who already had a college level of education by 1939 accounted for one-third of all this training acquired that year; during 1958 they received nearly two-thirds of it. The other shift is toward formal schooling relative to on-the-job training; the investment in this training declined from about four-fifths to three-fifths of that in schooling between 1939 and 1958.
Estimates of the rate of return to investment in on-the-job training are very fragmentary. Those reported range from 9.0 to 12.7 per cent per year. The apparent reasons for the two shifts referred to above and the implications of on-the-job training as a factor in income and with respect to employment behavior are also examined by Mincer.
Investment in education
Education is unquestionably the largest source of human capital consisting of acquired abilities. But the road to an analysis of the economic value of education is not paved. The costs of education are surprisingly well concealed. Not all of the benefits accrue to students; they are frequently widely dispersed. The rates of return depend on earning profiles of many different shapes, extending over many years. The responses to new, profitable investment in education are subject to some long lags; they are blunted in the case of public decisions by other matters and in the area of private decisions by incomplete information and by uncertainties that are inherent in a long future. There is also the uncertainty inherent in the fact that no student knows his abilities for schooling prior to putting himself to the test. In addition, the capital market is not well organized when it comes to lending funds for schooling.
Seemingly the task is simplified when it comes to formal education, since it is organized and presumably can be viewed as an industry that produces schooling. The difficulty with this simplification is that the functions of the educational establishment include activities other than schooling. One of the important functions of higher education is research. On-campus research has been increasing rapidly, and much of it is an integral part of graduate instruction. Another function consists of extension activities, notably, in the United States, the far-flung state agricultural extension services. There is also activity akin to an advisory service, especially to public agencies. Then, too, universities have been entering upon programs of instruction and research abroad with so-called sister universities in the cooperating country. And not least of these other functions is that of discovering and cultivating talent, which is quite distinct from formal schooling.
Costs. Much has been done recently to clarify the cost components of education. Opportunity costs are large, especially the earnings foregone by mature students, which were concealed in the way costs of schooling were formerly estimated. In the United States, for example, earnings foregone account for fully half to three-fifths of the total costs of high school and higher education. Because of the importance of earnings foregone, education beyond the elementary level is far from free to students. In poor countries and also in some low-income communities in the United States, for instance, in some agricultural areas and in city slums, earnings foregone have been and still are a factor even for children during the latter years in the elementary grades. When earnings foregone are brought into the picture, a part of the educational scene that always appeared blurred becomes clear. The distinction between private costs incurred by the student or his family and total costs to the economy is important analytically in explaining differences in incentives to invest in schooling, in explaining the shift in favor of formal schooling relative to on-the-job training over time, and in ascertaining the rates of return that matter in determining optimum investment decisions.
Total costs also provide clues to the amounts invested and changes in stocks. Leaving aside the education of persons who are not in the labor force, in the United States, as already noted, the amount spent on the schooling of persons who are 14 years and older rose at a rate of 4 per cent per annum between 1929 and 1957, measured in constant 1956 dollars. Investment in reproducible tangible wealth rose at 2 per cent per annum. When these rates of growth are applied to the respective stocks of 1957, the net annual investment implied is $21,900 million for this schooling and $25,500 million for this form of material capital.
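As a back-of-envelope check on this arithmetic, the sketch below applies each growth rate to its 1957 stock; the stock values are back-calculated from the reported net investment figures purely for illustration and are not quoted in the text:

```python
# Back-of-envelope sketch of the implied relation: net annual investment is
# approximated as the growth rate applied to the 1957 stock. The stock values
# below are back-calculated from the reported figures to make the arithmetic
# concrete; they are not figures from the text.
schooling_growth_rate = 0.04          # 4 per cent per annum, 1929-1957
tangible_growth_rate = 0.02           # 2 per cent per annum

schooling_stock_1957 = 21_900 / schooling_growth_rate   # implied, $ millions
tangible_stock_1957 = 25_500 / tangible_growth_rate     # implied, $ millions

# Applying each growth rate to its stock recovers the reported net investment
# of $21,900 million for schooling and $25,500 million for tangible wealth.
print(schooling_growth_rate * schooling_stock_1957,
      tangible_growth_rate * tangible_stock_1957)
```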
Benefits. The future benefits from schooling accrue in part to the student and in part to others in society. Burton A. Weisbrod (1962) has substantially clarified the distinction between these two parts, although other investigators are fully aware that there are these two classes of benefits from schooling. As yet there has been little empirical success in determining the value of the benefits from schooling that accrue to co-workers and employers of the students and to the students’ neighbors. There is a strong presumption that universal literacy of a population in a modern economy has large external economies [see Capital, social overhead; External economies and diseconomies].
Furthermore, not all the benefits from schooling that accrue to the student are revealed in his future earnings, in wages, salaries, and entrepreneurial income from work. The benefits that accrue to the student are of three sorts. One consists of current consumption; the other two are an investment. That which is current consumption consists of satisfactions that the student obtains from schooling while in attendance. This benefit is undoubtedly small, for school days entail much hard work and long hours. There is next a class of enduring consumer abilities acquired through schooling; from these abilities the student derives satisfactions throughout his remaining life, for example, the ability to appreciate and enjoy the fine arts, the masterpieces of literature, science, and logical discourse. The source of these satisfactions is an investment in particular consumer abilities; but the value of the stream of satisfactions from this source, although substantial, is not a part of future wages and salaries. The third set of benefits consists of increases in the student’s productivity, the source being the producer abilities acquired from schooling. While most of these appear in future earnings, there are nevertheless some that are derived from production activities that the student does for himself over the years, like preparing his income tax returns, which do not enter into his earnings or into national income as it is presently measured.
Earnings. Investigations of investment in education have concentrated on earnings while leaving aside the other benefits from schooling. Even so, it is no simple matter to identify and measure these earnings. They are beset by the effects of differences in the inherited abilities of workers, of race, sex, and age; by unemployment; by the content and quality of schooling; and by the effects upon earnings of job training, health, and other forms of investment in human beings. Difficult as it is to isolate and adjust for these effects, some fairly satisfactory estimates have been obtained. These estimates make it possible to determine the rates of return to schooling, which are considered briefly below. They show also that at least one-fifth of the economic growth of the United States between 1929 and 1957 came from additional earnings connected with schooling.
Rates of return. Available estimates on rates of return are limited to money returns, that is, to money earnings from schooling that accrue to the student. Accordingly, all other benefits from schooling are omitted in these estimates; and to this extent the real rates of return to education are underestimated. For a more complete review and appraisal, see The Economic Value of Education (Schultz 1963).
Rates of return to total costs of schooling support several important generalizations. Costs here consist of all direct and indirect costs, including earnings foregone, whether borne privately or publicly; returns are restricted to monetary earnings from schooling. For males in the United States the following generalizations emerge: (1) the rate of return to elementary schooling is higher than to high school education, and in turn the rate of return to high school education is higher than to college education; (2) the rate of return to high school education (completing the twelfth year) rose persistently and very substantially between 1939 and 1958, while that to college education (completing at least the sixteenth year) declined somewhat between 1939 and 1956 and then began to rise; and (3) the lowest of these rates of return has been about 12 per cent per annum.
New knowledge pertaining to investment in human capital is already quite satisfactory with regard to the behavior of the supply and the rates of return to on-the-job training and to education. Little is known, however, about the factors that have been increasing the demand for these acquired abilities—an integral part of economic growth.
Theodore W. Schultz
Anderson, C. Arnold; Brown, James C.; and Bowman, M. J. 1952 Intelligence and Occupational Mobility. Journal of Political Economy 60:218−239.
Ashby, Eric 1960 Investment in Education: The Report of the Commissions on Post-school Certificate and Higher Education in Nigeria. Lagos (Nigeria): Federal Ministry of Education.
Becker, Gary S. 1960 Underinvestment in College Education? American Economic Review 50:346−354.
Becker, Gary S. 1962 Investment in Human Capital: A Theoretical Analysis. Journal of Political Economy 70, no. 5 (Supplement):9−49.
Becker, Gary S. 1964 Human Capital: A Theoretical and Empirical Analysis, With Special Reference to Education. New York: National Bureau of Economic Research.
Benson, Charles S.; and Lohnes, Paul R. 1959 Skill Requirements and Industrial Training in Durable Goods Manufacturing. Industrial and Labor Relations Review 12:540−553.
Blank, David; and Stigler, George J. 1957 The Demand and Supply of Scientific Personnel. National Bureau of Economic Research General Series, No. 62. New York: The Bureau.
Bonner, J.; and Lees, D. S. 1963 Consumption and Investment. Journal of Political Economy 71:64−75.
Bowen, William G. 1964 Economic Aspects of Education: Three Essays. Princeton Univ., Department of Economics, Industrial Relations Section.
Bowman, Mary Jean 1962 Human Capital: Concepts and Measures. U.S. Office of Education, Bulletin no. 5:69−92.
Colberg, Marshall R. 1965 Human Capital in Southern Development: 1939−1963. Chapel Hill: Univ. of North Carolina Press.
Commission on Human Resources and Advanced Training 1954 America’s Resources of Specialized Talent: A Current Appraisal and a Look Ahead. New York: Harper.
Denison, Edward F. 1962 Education, Economic Growth, and Gaps in Information. Journal of Political Economy 70, no. 5 (Supplement): 124−128.
DeWitt, Nicholas 1955 Soviet Professional Manpower, Its Education, Training, and Supply. Prepared in co-operation with the National Academy of Sciences–National Research Council for the National Science Foundation. Washington: Government Printing Office.
Edding, Friedrich 1958 Internationale Tendenzen in der Entwicklung der Ausgaben für Schulen und Hochschulen; International Trends in Educational Expenditures. Kiel (Germany): Institut für Weltwirtschaft. → Contains a summary in English.
Education and the Southern Economy. 1965 Southern Economic Journal 32, part 2:1−128.
The Falk Project for Economic Research in Israel 1961 Report: 1959 and 1960. Jerusalem (Israel): The Project. → See especially pages 138−146 on the profitability of investment in education and pages 146−150 on the measurement of educational capital in Israel.
Fisher, Irving (1906) 1927 The Nature of Capital and Income. New York and London: Macmillan.
Friedman, Milton; and Kuznets, Simon 1945 Income from Independent Professional Practice. National Bureau of Economic Research General Series, No. 45. New York: The Bureau.
Hansen, W. Lee 1963 Total and Private Rates of Return to Investment in Schooling. Journal of Political Economy 71:128−140.
Harbison, Frederick; and Myers, Charles A. 1964 Education, Manpower, and Economic Growth: Strategies of Human Resource Development. New York: McGraw-Hill.
Investment in Human Beings: Papers Presented at a Conference Called by the Universities–National Bureau Committee for Economic Research. 1962 Journal of Political Economy 70, no. 5: Supplement. → The entire supplement is devoted to the problem of human capital.
Kellogg, Charles E. 1960 Transfer of Basic Skills of Food Production. American Academy of Political and Social Science, Annals 331:32−38.
Kenen, Peter B. 1965 Nature, Capital and Trade. Journal of Political Economy 73:437−460.
Machlup, Fritz 1962 The Production and Distribution of Knowledge in the United States. Princeton Univ. Press.
Marshall, Alfred (1890) 1916 Principles of Economics. 7th ed. New York: Macmillan; London: St. Martins.
Miller, Herman P. 1960 Annual and Lifetime Income in Relation to Education: 1939−1959. American Economic Review 50:962−986.
Mincer, Jacob 1958 Investment in Human Capital and Personal Income Distribution. Journal of Political Economy 66:281−302.
Mincer, Jacob 1962 On-the-job Training: Costs, Returns, and Some Implications. Journal of Political Economy 70, no. 5 (Supplement): 50−79.
Mushkin, Selma J. (editor) 1962 The Economics of Higher Education. U.S. Office of Education, Bulletin, no. 5. → The entire issue is devoted to the topic. See especially pages 69−92 and pages 281−304.
Nicholson, Joseph S. 1891 The Living Capital of the United Kingdom. Economic Journal 1:95−107.
Organization for Economic Cooperation and Development 1962 Policy Conference on Economic Growth and Investment in Education. Paris: The Organization. → Also published in Volume 36 of the Bulletin of the International Bureau of Education.
Organization for Economic Cooperation and Development 1964 The Residual Factor and Economic Growth. London: H.M. Stationery Office.
Princeton University, Industrial Relations Section 1957 High-talent Manpower for Science and Industry: An Appraisal of Policy at Home and Abroad, by J. Douglas Brown and Frederick Harbison. Research Report Series, No. 95. Princeton, N.J.: The Section.
Schultz, Theodore W. 1960 Capital Formation by Education. Journal of Political Economy 68:571−583.
Schultz, Theodore W. 1961a Education and Economic Growth. Volume 60, pages 46−88 in National Society for the Study of Education, Yearbook. Part 2: Social Forces Influencing American Education. Univ. of Chicago Press.
Schultz, Theodore W. 1961b Investment in Human Capital. American Economic Review 51:1−17.
Schultz, Theodore W. 1963 The Economic Value of Education. New York: Columbia Univ. Press.
Stigler, George J. 1962 Information in the Labor Market. Journal of Political Economy 70, no. 5 (Supplement): 94−105.
Stigler, George J. 1963 Capital and Rates of Return in Manufacturing Industries. A Study of the National Bureau of Economic Research. Princeton Univ. Press.
Tawney, Richard H. 1938 Some Thoughts on the Economics of Public Education. Oxford Univ. Press.
Vaizey, John 1958 The Costs of Education. London: Allen & Unwin.
Weisbrod, Burton A. 1961 The Valuation of Human Capital. Journal of Political Economy 69:425−436.
Weisbrod, Burton A. 1962 Education and Investment in Human Capital. Journal of Political Economy 70, no. 5 (Supplement): 106−123.
Wiles, P. J. D. 1956 The Nation’s Intellectual Investment. Oxford University, Institute of Statistics, Bulletin 18, no. 3:279−290.
According to marginal analysis, each factor of production is paid according to its contribution to production, and the market constitutes a mechanism that establishes a moral rule of distributive justice in the society. This theory of marginal productivity was put forth around the turn of the twentieth century by the American economist John Bates Clark (1847–1938). In the neoclassical analysis of the labor market, the equilibrium wage is set at the point of intersection of the demand and supply curves of labor. The level of the equilibrium wage set by the intersection of the two curves guides the allocation of workers across firms in such a way that an efficient allocation of resources is achieved. The implication is that wages are equal to the value of the marginal product of labor, which is the same for all workers and every firm. To achieve this condition, firms and workers should operate in a perfectly competitive environment, anonymity for both sides should be present, and the allocation of labor to firms should be random. However, wage differentials are present in the labor market, and several attempts have been made to explain the reasons for these wage differentials.
Adam Smith (1723–1790) was the first economist to introduce the idea of wage differentials and attempt to offer reasons for their existence. According to Smith, one reason for wage differentials is the common idea that wages vary positively, everything else being held constant, with the disutility of labor. This argument was later formalized by William Stanley Jevons (1835–1882) and states that an individual is willing to offer more labor input only if the wage is higher; this is because leisure time gets scarcer and thus becomes more valuable to an individual. Another reason for relative wage differentials put forward by Smith is related to the concept of human capital, which has been introduced only recently into economic analysis. According to Smith, the cost of a person’s education or training can be viewed as an investment in the individual’s future earnings capacity, analogous to an investment in physical capital. To be economically justified, this investment must be recuperated over the lifetime of the student or trainee. Thus, those with education or training will generally earn more than those without. In recent years, this interpretation of wage differentials has given rise to numerous attempts to measure the rate of return on investment in education and training. In general, these efforts have sought to check whether such investments in education and training do, in fact, earn a normal return comparable to that on capital of equal value.
Neoclassical theory has never abandoned the idea that wages tend to equal the net product of labor; in other words, the marginal productivity of labor rules the determination of wages. However, it has also been recognized that wages tend to retain an intricate, although indirect, relationship with the cost of rearing, training, and sustaining the energy of efficient labor. All these factors cause wage differentials, and they make up what is known as human capital in the economic literature. Human capital, in mainstream economics, is similar to the physical means of production—for example, factories and machines—and is defined as the knowledge and skills possessed by the workforce that can be accumulated. Human capital, therefore, is a stock of assets that one owns and that allows one to receive a flow of income, similar to interest earned. In modern times, human capital is largely a product of education, training, and experience. Within this approach, investment in human capital is treated similarly to investment in physical capital and is considered, in many cases, a principal factor in the observed differences in levels of development between nations. It is argued that investment in human capital can take various forms, including improvements in health services, formal education, and training of employees, and that it can help remove bottlenecks to productivity growth. It is worth clarifying that the acquisition of more human capital is not a matter of the number of workers in the workforce but of the improvement of labor services and the concomitant increase in labor productivity.
Today, new growth theorists put a lot of emphasis on human capital: “The main engine of growth is the accumulation of human capital—or knowledge—and the main source of differences in living standards among nations is a difference in human capital. Physical capital plays an essential but decidedly subsidiary role. Human capital takes place in schools, in research organizations, and in the course of producing goods and engaging in trade” (Lucas 1993, p. 270). Within this framework, much emphasis is given to policies promoting human capital, such as investment in education or health. Moreover, economists such as Theodore W. Schultz (1902–1998) and Anthony P. Thirlwall have argued that differences in the quality of human capital are the cause of the different levels of development among nations and that the improvement of human capital might reduce the need for physical capital.
In addition, literature about organizational behavior and human resource management has emphasized the role of human capital in promoting the development and implementation of national and corporate strategies. Hence, human capital is a very important asset and may serve as a useful link in the chain between employee and business performance; moreover, human capital, human-resources practices, and firm performance are areas that need to be explored together to design better corporate strategies.
Modern human capital theory was initiated by the work of Jacob Mincer, in which investment in human resources is considered similarly to other types of investments. In Mincer’s first model (1958), individuals are assumed to be identical prior to any training, so that a compensating wage differential is required for employment that involves a longer expected training period. The size of the compensating differential is determined by equating the present values of the future net earnings streams (gross earnings minus costs) associated with the different levels of investment. The formal presentation of Mincer’s first simple model is:
ln w(s) = ln w(0) + rs
where ln w(s) represents the log of the annual earnings of an individual with s years of education, ln w(0) represents the log of the annual earnings of an individual with the basic years of education, and r is the internal rate of return to the s years of schooling. According to the above equation, individuals with more training receive higher earnings. The difference between the earnings levels of individuals with different years of schooling is determined by the second term on the right-hand side of the equation. If one defines the internal rate of return to schooling as the discount rate that equates the lifetime earnings streams for different educational choices, then the internal rate of return to schooling can be estimated by the coefficient on years of schooling. This simple framework offers a number of interesting implications. However, the whole analysis relies on some unrealistic assumptions, which can be summarized as follows (a derivation sketch under these assumptions appears after the list):
the individuals are all identical prior to making choices;
the age of retirement does not depend on years of schooling;
the only cost of schooling is foregone earnings;
earnings do not vary over the life cycle; and
there is no post-school on-the-job investment.
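Under the assumptions just listed, with the added simplification of an infinite working life and continuous discounting (a standard textbook treatment rather than Mincer's own exposition), the schooling equation can be obtained by equating present values:

```latex
% Equate the present value of earnings with s years of schooling (work starts
% at time s) to that with the basic schooling level (work starts at time 0),
% discounting continuously at rate r over an infinite horizon:
\[
\int_{s}^{\infty} w(s)\,e^{-rt}\,dt \;=\; \int_{0}^{\infty} w(0)\,e^{-rt}\,dt
\quad\Longrightarrow\quad
\frac{w(s)\,e^{-rs}}{r} \;=\; \frac{w(0)}{r}
\quad\Longrightarrow\quad
w(s) = w(0)\,e^{rs}.
\]
% Taking logarithms gives ln w(s) = ln w(0) + rs, the equation above.
```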
Mincer’s second model (1974) allows for on-the-job investment and yields an earnings specification that is similar to the first. To establish the relationship between potential earnings and years of labor-market experience, and assuming that observed earnings are equal to potential earnings less investment costs, the following relationship for observed earnings is obtained:
ln w(s, x) = a₀ + ρₛ·s + β₀·x + β₁·x²
where ln w(s, x) stands for the log of observed earnings, x is the amount of work experience, ρₛ is the rate of return on formal schooling, and s represents the years of education. The intercept term is the product of the log skill price and the initial ability of the individual. The coefficients β₀ and β₁ stand for the return to experience.
This second expression is called Mincer’s standard wage equation, which regresses the log earnings on a constant term, on a linear term of the years of schooling, and on a quadratic term of the labor-market experience. Mincer’s standard wage equation transforms a normal distribution of years of schooling into a log-normal distribution of earnings. Under the assumption that post-school investment patterns are identical across individuals and do not depend on the schooling level, Mincer shows that there is an important distinction between age-earnings profiles and experience-earnings profiles, where experience means years since leaving school. More specifically, he shows that the log-earnings-experience profiles are parallel across schooling levels and that log-earnings-age profiles diverge with age across schooling levels.
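As a minimal illustration of how this equation is taken to data, the sketch below estimates it by ordinary least squares on simulated observations; the coefficient values and sample design are invented for illustration and this is not Mincer's own estimation:

```python
# A minimal sketch of estimating the standard wage equation
# ln w = a0 + rho_s*s + b0*x + b1*x^2 by OLS on simulated data.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
s = rng.integers(8, 21, size=n).astype(float)   # years of schooling
x = rng.uniform(0, 40, size=n)                  # years of experience

a0, rho_s, b0, b1 = 1.0, 0.08, 0.04, -0.0007    # "true" illustrative values
log_w = a0 + rho_s * s + b0 * x + b1 * x**2 + rng.normal(0, 0.3, size=n)

X = np.column_stack([np.ones(n), s, x, x**2])
coef, *_ = np.linalg.lstsq(X, log_w, rcond=None)

# coef[1] is the estimated return to a year of schooling (about 0.08 here);
# coef[2] and coef[3] trace out the concave experience profile.
print(coef)
```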
By the early 1970s, the estimation of the returns on schooling using Mincerian wage regressions had become one of the most widely analyzed topics in applied econometrics. The reason is that the human-capital earnings function has several distinct characteristics that make it particularly attractive:
the functional form is not arbitrary: it is grounded in the optimizing behavior of individuals as reflected in the outcome of the labor-market process;
it converts immeasurables (the dollar cost of investment in human capital) into measurables (years of schooling and years of labor-market experience);
it can include dummy (indicator) variables to capture dichotomous characteristics such as race or sex; and
the coefficients of the regression equation can be given economic interpretations.
While the human-capital literature has now been generalized to incorporate on-the-job training, it should be noted that the Mincerian wage regression equation is a representation of the statistical relationship between wages and experience (given schooling) for an exogenously determined rate of on-the-job training. The Mincerian wage regression disregards the endogeneity of post-schooling human-capital accumulation and treats schooling and training symmetrically. More precisely, Mincer’s approach ignores the possibility that schooling may change the human-capital accumulation process that takes place on the job.
Lester C. Thurow’s job-competition model serves as an interesting counterpoint to the traditional models in explaining the distribution of earnings. In his book Generating Inequality: Mechanisms of Distribution in the U.S. Economy (1975), Thurow shows that for the period 1950–1970, changes in the educational attainments of white males 25 to 64 years old did not affect their earnings. He finds that the distribution of education became more equal over those years while the distribution of income did not, and he notes that the results expected from marginal-productivity reasoning are not borne out. In explaining the distribution of wage earnings, Thurow rejects market imperfections as an explanation of these unexpected observations in labor markets. Instead, he argues that individuals compete against one another for job opportunities on the basis of their relative costs of being trained to fill a job position, rather than on the basis of the wages they are willing to accept. In his view, wages are based on the marginal productivity of the job, not of the worker: workers competing for jobs offered at fixed wages constitute the labor supply, and they compete for relative positions according to their training costs to employers. Hence, job competition prompts people to overinvest in formal education for purely defensive reasons. Moreover, technology, the sociology of wage determination, and the distribution of training costs determine the distribution of job opportunities.
Even though the advantages of education are frequently exaggerated in terms of their strictly economic results, there is no doubt that education is advantageous in earning a higher income and also teaches general skills that improve the quality of labor and, by extension, human capital. However, recent literature on issues related to human capital stresses the need to consider other dimensions of human identity that contribute to the formation of human capital. In fact, modern labor economics has criticized the simple approach that tries to explain all differences in wages and salaries in terms of human capital as a function of knowledge, skills, and education. The reason is that the concept of human capital can be infinitely elastic, including unmeasurable variables such as culture, personal character, or family ties. Many other factors, therefore, may contribute to wage differentials, such as gender or race. Imperfections in labor markets often imply segmentation of the labor market, which causes the return on human capital to differ between labor-market segments. Similarly, discrimination against minority or female employees implies different rates of return on human capital. Most of these studies include dummy (indicator) variables in their wage equations to capture specific characteristics of the labor-market segmentation that takes place. The statistical significance of these variables indicates their importance in forming human capital and the concomitant wage differentials. In addition, over the past few decades several writers, including Rhonda Williams (1991) and Howard Botwinick (1993), have begun to develop alternative approaches to the analysis of discrimination and wage differentials that are based on a more classical analysis of capitalist competition and accumulation.
The theoretical discussion about the meaning of human capital is vast and extends into the fields of human development and human resource management. For instance, human-development literature often distinguishes between specific and general human capital, where the first refers to specific skills or knowledge that is useful only to a single employer, while the second refers to general skills, such as literacy, that are useful to all employers. Human-development theories also differentiate social trust (social capital), sharable knowledge (instructional capital), and individual leadership and creativity (individual capital) as three distinct forms of human participation in economic activity.
Indisputably, the term human capital is used everywhere in economic and business analysis in which labor has to count as an input in the production process. The term has gradually replaced terms such as laborer, labor force, and labor power in the relevant analysis, giving the impression that these traditional terms have become socially degraded. Moreover, in the strict sense of the term, human capital is not really capital at all. The term was originally used as an illustrative analogy between, on the one hand, investing resources to increase the stock of ordinary physical capital (such as tools, machines, or buildings) in order to increase the productivity of labor and, on the other hand, investing in educating or training the labor force as an alternative way of accomplishing the same general economic objective. In both sorts of investment, investors incur costs in the present with the expectation of deriving future benefits over time. However, the analogy between human capital and physical capital breaks down in one crucial respect. Property rights over ordinary physical capital are readily transferable by sale, whereas human capital itself cannot be directly bought and sold on the market. Human capital is inseparably embedded in the nervous system of a specific individual and thus cannot be separately owned. Hence, at least in regimes that ban slavery and indentured servitude, the analogy between human capital and physical capital is imperfect.
Botwinick, Howard. 1993. Persistent Inequalities: Wage Disparity under Capitalist Competition. Princeton, NJ: Princeton University Press.
Gintis, Herbert, Samuel Bowles, and Melissa Osborne. 2001. The Determinants of Individual Earnings: Skills, Preferences, and Schooling. Journal of Economic Literature 39 (4): 1137–1176.
Lucas, Robert E., Jr. 1993. Making a Miracle. Econometrica 61: 251–272.
Mincer, Jacob. 1958. Investment in Human Capital and Personal Income Distribution. Journal of Political Economy 66 (4): 281–302.
Mincer, Jacob. 1974. Schooling, Experience and Earnings. New York: National Bureau of Economic Research.
Schultz, Theodore W. 1962. Reflections on Investment in Man. Journal of Political Economy 70: 1–8.
Thirlwall, Anthony P. 1999. Growth and Development. London: Macmillan.
Thurow, Lester C. 1975. Generating Inequality: Mechanisms of Distribution in the U.S. Economy. New York: Basic Books.
Williams, Rhonda. 1991. Competition, Discrimination and Differential Wage Rates: On the Continued Relevance of Marxian Theory to the Analysis of Earnings and Employment Inequality. In New Approaches to Economic and Social Analyses of Discrimination, eds. Richard R. Cornwall and Phanindra V. Wunnava. New York: Praeger.
Human capital refers to the knowledge, skills, and capabilities of individuals that generate economic output. Human capital averages about two-thirds of the total value of the capital of most economies, which includes land, machinery, and other physical assets as well as the skills and talents of people. The value of human capital is often apparent after physical destruction, as during World War II—many of the German and Japanese cities that were bombed intensely were able to recover 80 to 90 percent of their previous levels of production within months.
More than two centuries ago, Adam Smith observed in Wealth of Nations (1776) that an educated man must earn more than "common labor" to "replace to him the whole expense of his education." Human capital was first discussed extensively by two Nobel Prize–winning economists, Theodore Schultz (1979) and Gary Becker (1992), to explain how any personal decision to sacrifice today for a return tomorrow can be analyzed in the same way that a business considers an investment decision, such as whether to buy new machinery. A nation's stock of human capital, and thus its economic growth potential, could be increased, they reasoned, if governments reduced the costs and increased the benefits of schooling, which raises the human capital embodied in individuals.
Human capital is rented in the labor market rather than "sold." Individuals exchange effort for reward, and acquire human capital in the expectation that their incomes will be higher. Human capital takes time to acquire, and the basic model of human capital acquisition compares the income stream from going to school with that of going to work immediately. The costs of going to school include the direct costs, in the form of tuition and books, as well as the indirect costs, the forgone earnings that could have been received from working. The benefits of more schooling are higher earnings in the future.
Costs and Benefits
These costs and benefits must be brought to a single point in time so that the present value of the higher average lifetime earnings with more schooling and the lower average earnings with less schooling can be compared. A rational individual chooses the education and work profile that maximizes the present value of lifetime earnings.
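A minimal sketch of this present-value comparison follows; the dollar figures, working-life span, and 5 percent discount rate are assumptions for illustration, not figures from the text:

```python
# A minimal sketch of comparing lifetime earnings paths in present value.
def present_value(cash_flows, rate):
    """Discount a list of yearly cash flows (year 0 first) back to year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

discount_rate = 0.05
working_years = 47                      # e.g., ages 18 through 64

# Path 1: start work immediately at an assumed high school wage.
high_school = [35_000] * working_years

# Path 2: four years of direct schooling costs and no earnings (the forgone
# high school wage is the indirect cost), then an assumed higher wage.
college = [-15_000] * 4 + [60_000] * (working_years - 4)

pv_hs = present_value(high_school, discount_rate)
pv_college = present_value(college, discount_rate)

# In this framework a rational individual picks the path with the larger
# present value; a sufficiently high discount rate eventually favors
# going to work immediately.
print(round(pv_hs), round(pv_college))
```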
This investment approach to education yields several important predictions about behavior. First, most people will get some education, and most students will be young, because the early years of education have few direct costs, and there are no forgone earnings because most societies prohibit children from working; younger people also have a longer period over which they can recoup their educational investment in the form of higher earnings. The analysis gets more interesting when young people reach the age at which they can work, sixteen or eighteen in most industrial countries, and twelve to fifteen in many developing countries. Youth in industrial countries tend to stay in school because the direct costs are often low and the forgone earnings may be low (working as a teen is often at the minimum wage), while earnings with a college education can be significantly higher. The interest rate used to compare future and present earnings is also important; if this interest rate is low because of subsidized loans, more people will choose to get more education.
Second, government policies shape individual decisions about how much human capital to acquire by affecting the cost of schooling and the payoff from work. Social attitudes also play a role, encouraging young people to stay in school, or to go to college with friends in industrial countries, but in developing countries often encouraging children to help support the family as soon as possible. More forward-looking people, those most able to sacrifice now for future returns, are likely to get the most education, such as those willing to undergo rigorous and time-consuming medical education.
In the United States a combination of higher lifetime earnings, government policies, and social attitudes has increased the percentage of high school graduates who go to college. In 1960, about 45 percent of all high school graduates enrolled in college in the following twelve months, 54 percent of men and 38 percent of women. By 1980, 49 percent of high school graduates enrolled in college: men dropped to 47 percent, and women rose to 52 percent. In 1999, the most recent data available, 63 percent of high school graduates enrolled in college, 61 percent of men and 64 percent of women. Most studies find that men are more oriented toward immediate earnings opportunities than women, which helps explain why more women than men now enroll in college.
The average earnings of college-educated persons are higher than the earnings of high school graduates, and rose in the late twentieth century. In 1979, male college graduates earned 33 percent more than high school graduates, the so-called college earnings premium, and female college graduates earned 41 percent more. By 2000, these college earnings premiums had risen to 84 percent for men and 67 percent for women, in part because the real earnings of those with less education fell as a result of globalization, which reduced the wages of many high school graduates employed in manufacturing.
Education is generally a good investment: the private rate of return to a college education ranges from 12 to 40 percent in most countries, more than the return on investments in stocks and bonds. Around the world, the rate of return to primary school education was estimated to be 29 percent in the early 1990s, 18 percent for secondary schooling, and 20 percent for higher education. In developing African countries, these rates were 39, 19, and 20 percent, respectively, while in the industrial countries that are members of the Organisation for Economic Co-Operation and Development (OECD), the rates were 22, 12, and 12 percent, respectively. The high rate of return on primary education suggests that countries should subsidize it most.
It is very hard to compute average rates of return for higher levels of education because it is likely that the most capable individuals go to college, so that colleges transmit knowledge that increases productivity and also serve as screening institutions for employers seeking the best workers; some economists argue that the screening role is more important than the productivity-increasing role. On the other hand, the private rate of return to a college education may be higher if more highly educated individuals have both higher incomes as well as more fringe benefits and better or more prestigious jobs. The social rate of return may be even higher than the private rate if highly educated individuals provide more leadership and commit fewer crimes.
Any personal investment that raises productivity in the future can add to an individual's human capital, including on-the-job training and migration. A job that offers training, for example, may have lower earnings but still be attractive because a person knows that, after completing the military, aviation, or stockbroker course, earnings will rise. On-the-job training that makes the person more useful to the employer who provided it is called job-specific training, while training that transmits general skills that make the trained individual useful to many employers is called general training. Employers are more likely to pay for specific than general training. Employers do a great deal of training; by some estimates, U.S. employers spend almost as much on training as is spent at colleges and universities.
Migration of Human Capital
If people truly are the "wealth of a nation," should developing countries worry about a "brain drain" if their doctors, nurses, and scientists migrate to richer countries? The answer depends on the 3 R's of recruitment, remittances, and returns. Recruitment deals with who migrates abroad: Are the emigrants employed managers whose exit leads to layoffs, or unemployed workers who have jobs and earnings abroad but would have been unemployed at home? Remittances are the monies sent home by migrants abroad: Are they significant, and fuel for investment and job creation, or small and not used to speed development? Returns refers to whether migrants return to their countries of origin: Do they return after education or a period of employment abroad with enhanced skills, or do they return only to visit and retire?
During the 1990s the migration of highly skilled workers from developing to more developed countries increased, reflecting more foreign students as well as aging populations able to pay for more doctors and nurses and the internet-related economic boom. International organizations are exploring ways in which the more developed countries could replenish the human capital they take from the developing world via migration, perhaps by contributing to or backing loans to improve their educational systems. If the human capital that migrates is not replenished, global inequalities may increase.
See also Education; Globalization; Social Capital.
Becker, Gary S. Human Capital: A Theoretical and Empirical Analysis, with Special Reference to Education. New York: National Bureau of Economic Research, 1964.
Blaug, Mark. "The Empirical Status of Human Capital Theory: A Slightly Jaundiced Survey." Journal of Economic Literature 14 (September 1976): 827–855.
Organisation for Economic Co-Operation and Development. Human Capital Investment: An International Comparison. Paris: OECD, 1998.
Psacharopoulos, George. "Returns to Investment in Education: A Global Update." World Development 22, no. 9 (1994): 1325–1343.
Smith, Adam. An Inquiry into the Nature and Causes of the Wealth of Nations. Dublin: Whitestone, 1776.
Philip L. Martin
The quality of labor in a country's workforce can directly influence a nation's economic growth. Investment in vocational training and education, which improves the quality of labor, is called investment in human capital. As an individual becomes more skilled and educated, productivity or output of work may increase, along with income. The concept of human capital can provide justification for wage and salary differentials by age and occupation. Education and training in skill development can create human capital just as construction of a building creates physical capital.
Some economists assert that a society should allocate resources to educational and training services similar to the allocation of resources for physical capital. Costs would be incurred in expectation of future benefits. However, unlike physical capital, human capital carries no guarantee of a return and cannot be repossessed in settlement of a debt. The key question has been whether or not benefits exceed expenditures by a sufficient amount.
Until the mid-nineteenth century, education expenditures were primarily generated by the private sector. By the 1850s all states had developed programs for funding public schools. As late as the early twentieth century, most people considered education beyond the primary grades to be a luxury, particularly among low-income groups. However, literacy rates continued to move upward, and since 1940 education levels have consistently climbed. In 1940, 24 percent of the U.S. population had high school diplomas and 4.6 percent had earned college degrees. By 1996, almost 82 percent had completed four years of high school and almost 24 percent had completed four or more years of college. By attending college or vocational training programs, individuals were able to invest in themselves. Firms invested in human capital with on-the-job training. Government invested in human capital by offering programs to improve health, by providing quality free schooling, including vocational and on-the-job training, and by providing student loans.
See also: Physical Capital
The current commercially available nuclear reactors all have cores with solid fuel rods and are cooled either by light water or heavy water. These reactors are collectively known as Generation 3 reactors.
A series of major issues is associated with the current 3rd-generation nuclear power plants being deployed in a number of countries. Many of these issues stem from the adoption, for commercial power-producing reactors, of the Uranium/Plutonium (U/P) fuel cycle and, in particular, from the use of solid fuel contained within fuel rods.
This choice was driven some 50 to 60 years ago by the needs of the military, both for plutonium for bombs and for high-powered, compact reactors for nuclear submarines.
The use of a solid form of fuel imposes limits on the degree of burn-up that can be achieved, owing to the progressive build-up of fission products within the fuel rod, which in turn dictates the frequency of re-fuelling shutdowns. It also has an adverse effect on fuel costs and on the quantity of highly active waste created. The use of water as the coolant requires very high pressure coolant systems and highly engineered safety measures to eliminate the risk of a Loss of Coolant Accident (LOCA).
A number of challenges arose when these technologies were adopted for civilian use. For example:
– Very expensive and complex engineered safety systems, as borne out by the UK licensing experience with the EPR safety control systems;
– Complex and regular re-fuelling arrangements, requiring plant shutdowns for PWR and BWR, and on-line refuelling machines for HWR’s due to the limits on the achievable burn-up;
– Low thermal efficiencies caused by the low steam temperatures, which result from the temperature limits placed on the fuel-rod canning material and the very high coolant pressures required for higher temperatures;
– The need for large quantities of cooling water in order to maximise the poor thermal efficiency;
– Sophisticated fuel fabrication facilities for new fuel;
– Construction of the necessary large primary pressure vessels and forgings, to the highest engineering standards;
– Complex and costly secondary containment systems, due to the large amount of stored energy within the primary loop and core;
– Long-term storage of irradiated fuel: The US has some 70,000 Te of irradiated waste fuel in surface storage (both wet and dry);
– Long-term storage of the actinide wastes if re-processing is undertaken to recover the uranium and plutonium. The US has recently abandoned development of its Yucca Mountain long-term repository;
– Diversion of technology and fissile material for illicit weapons manufacture;
– Transport of high active waste for re-processing and storage;
– Questionable economics due to construction cost escalation and program delays, open-ended decommissioning costs, and unknown long-term storage costs for the large quantities of highly active waste produced;
– Increasingly complex and arduous consenting of new plants and regulation of operating plants, and;
– The unit sizes have increased to 1600 MW in the case of the AREVA EPR, and 1000 MW in the case of the Westinghouse/Toshiba AP1000, based on claimed, but not yet delivered, improved economics.
The adoption of what have been called 3rd generation nuclear reactors has made the scale of most of these problems even greater than before. AREVA’s recent experience with their first three EPR type reactors under construction bears testament to this, with large cost escalation and major construction delays.
In recognition of these fundamental constraints and problems associated with the current reactor systems, 12 countries are participating in the Generation IV International Forum (GIF), to further research theoretical reactor designs that offer improvements in nuclear safety, proliferation resistance, waste production, resource utilisation, and overall economics. The designs under consideration are grouped as either Thermal Reactors or Fast Reactors.
The Thermal reactor systems under investigation are:
– Very High Temperature Reactor (VHTR)
– Supercritical Water Cooled Reactor (SCWR)
– Molten Salt Reactor (MSR)
The Fast Reactors under investigation are:
– Gas Cooled Fast Reactor (GFR)
– Sodium Cooled Fast Reactor (SFR)
– Lead Cooled Fast Reactor (LFR)
All these reactor types seek to achieve much higher burn-up of the fuel, thereby reducing the magnitude of the waste disposal and re-processing challenges, along with better resource utilisation. However all but the MSR rely on solid fuel, either in fuel rods or in the form of ceramic fuel (usually the carbide forms of uranium or plutonium) dispersed in a graphite matrix. Once again the use of solid fuel introduces limits and constraints on both the reactor designs and the maximum achievable burn-up.
In the case of the MSR, the coolant is a mixture of molten salts and the fuel is dispersed homogeneously throughout the coolant. This removes all the major limits and constraints on the maximum burn-up that can be achieved. It also offers the best way of using thorium as the main fuel component for the reactor, without the need for the very complex fuel reprocessing that the use of solid thorium requires. The MSR studies show that it is capable of breeding more fissile fuel from thorium than it consumes, even though it is a thermal reactor.
The origins of the thorium fuelled molten salt reactor (TFMSR) date back to a very successful development program at the Oak Ridge National Laboratory (ORNL) in the USA, culminating in almost five years of successful operation of an 8 MW thermal MSR during the period 1965 to 1969. This program demonstrated the many advantages of the MSR, such as very high temperature operation (650 degrees centigrade), high power densities, the online removal of fission products, and online re-fuelling. This work has been extended in the intervening years to develop designs that breed the fuel from naturally occurring thorium.
Some of the many advantages of a TFMSR are:
– Thorium is three times more abundant than uranium and much cheaper, with Australia having potentially the world’s largest reserves;
– No fuel fabrication facilities are required, since the fuel is not fabricated but dispersed in the liquid salts;
– It can breed its own fissile material within the reactor, thereby obviating the need for enrichment plants as required for uranium, or re-processing plants, with all the nuclear proliferation risks that uranium enrichment and re-processing entails;
– The thorium fuel cycle is very resistant to diversion of fissile material for weapons manufacture;
– It has a very high degree of inherent safety, such that the reactor shuts itself down in the event of any accident or loss of power supplies, without the need for highly engineered safety systems or operator intervention, and is capable of load following;
– The reactor designs have excellent passive safety features thus further obviating the need for complex and expensive back-up diesel generators pumps etc;
– The reactor burn-up of fissile material is not constrained by the lifetime of solid-fuel cladding, and the resulting higher burn-up means that less fuel is required per GW (one fifth of that in a Generation 3 type reactor);
– The quantity of high active actinide waste is reduced by a factor of between 1000 and 10,000 times that of the current nuclear reactors, due to the very much higher fuel burn-up achieved as compared to solid fuel types of design;
– The small quantity of highly active waste allows cost-effective on-site storage to be considered, thereby removing the need to transport fissile and high active waste materials around the country;
– There are no major stored energy sources within a MSR and hence the secondary containment requirements are reduced both in complexity and cost as compared to a Generation 3 reactor type;
– The fissile inventory required for start-up is at most about half of that required for a Generation 3 reactor and about a tenth of that required for a fast reactor. Hence the overall fissile inventory for a fleet of MSRs will be much less than for other reactor types;
– No need for very high pressure reactor vessels and coolant loops, since the reactor core and primary coolant loops operate at very low pressures (typically less than 5 atmospheres);
– The molten salt coolant pumping power is much reduced as compared to gas cooled reactors that offer the same high outlet temperatures. This results in a 4-8 per cent gain in thermal efficiency for the same temperatures;
– The high reactor power density results in much smaller and cheaper plants, which can be designed for economic operation at smaller sizes than the current 3rd generation of water-cooled reactors;
– The high temperatures achieved allows the MSR to be considered as the heat source for zero-carbon hydrogen production along with much improved thermal efficiency for electrical power generation;
– The high temperatures achieved make possible the use of dry cooling, and hence allows plants to be sited away from the coast or rivers and lakes;
– Offers a real opportunity to effect a conservative 30-40 per cent capital cost reduction, due to the ability to manufacture the much smaller reactor core and heat exchangers entirely under factory conditions, and shorter construction times;
– The reactor and core can be designed to allow it to burn up actinide waste, thereby reducing the worldwide high active waste disposal problem created by other reactor systems. For instance, the UK has 102 Te of plutonium in storage from its current and past reactor programs. This flexibility to handle different fuels allows a variety of fuel cycles to be used whilst retaining the same basic reactor engineering design.
The successful results of the ORNL experimental MSR form the basis of much of the renewed interest in this type of reactor and establish a foundation of already proven science and technology for the proposed development of a TFMSR reactor system.
The next step is to build a demonstration TFMSR to establish and provide all the scientific information and data required for the design and construction of commercial-sized plants. The results of the ORNL work would form the basis for such design, which would then be revised to incorporate all the current knowledge, and most importantly, the current requirements for improvements in nuclear safety, proliferation resistance, waste production, resource utilisation, and overall economics. A demonstration plant based on such a design would provide the necessary data, construction and operational experience, and scientific knowledge necessary to design and build a commercial-sized power plant.
The costs for such a program have been estimated by a team in Japan as being some $US300 million. Thus with, say, six participating member countries or companies, the individual cost might be circa $US50 million in total over a six year period.
The adoption of the TFMSR as the reactor system of choice for Australia, along with hosting a demonstration TFMSR, offers many benefits that are not available if the existing PWR/BWR systems were selected. Many of these derive directly from the inherent technical characteristics of a TFMSR, but others would derive from the leading role that Australia would play in developing a reactor system that will, in the opinion of many world experts, be the future of nuclear power, due to its ability to overcome the deficiencies of the current reactor systems being deployed.
Set out below are some of the major benefits that Australia would derive from such an approach:
– Positioning Australia as an international leader in nuclear engineering and science for the future;
– Developing a reactor system that delivers large improvements in nuclear safety, proliferation resistance, waste reduction/ production, resource utilisation, and overall economics;
– Establishing a national nuclear design and construction capability, including advanced manufacturing facilities and competencies;
– Deploying a reactor system with minimal cooling water requirements, which can hence be located away from the coast, rivers, etc.;
– Reactor unit sizes that are appropriate for the NEM system;
– Opening up the use of nuclear power for direct high temperature chemical processing, such as the production of hydrogen for transportation use, with no consequential CO2 emissions;
– Developing an economic use for Australia's very extensive reserves of thorium, and breaking the 'razor blade' business model used by the current reactor vendors in relation to their supply of new fuel elements;
– Export opportunities in many areas.
In order to progress this proposal the Federal Government will need to develop a detailed policy position, and gain both parliamentary (bipartisan) support and approval, along with community support for such an approach, including the intention to host the DTFMSR.
A number of countries are working in varying degrees on aspects of the TFMSR technologies already, such as the USA, Japan, China, France, India, the Czech Republic, Singapore, Korea, Russia, and Canada. There is currently a “window of opportunity” for Australia to participate with these countries.
In summary, it is time for the federal government to take the lead in initiating a meaningful debate on the commercial nuclear power options open to Australia. The possible adoption and development of TFMSR technology should be part of that debate. It offers Australia the opportunity to establish a leading position in a technology for the future with worldwide export opportunities.
TFMSR – Thorium fuelled molten salt Reactor
DTFMSR - Demonstration Thorium Fuelled Molten Salt Reactor
MSR - Molten Salt Reactor
CCS - Carbon Capture and Storage
U/Pu - Uranium/Plutonium
PWR - Pressurised Water Reactor
BWR - Boiling Water Reactor
HWR - Heavy Water Reactor
LOCA - Loss of coolant accident
EPR - The 1600 MW PWR 3rd Generation reactor being built by AREVA
NEM - The Australian National Electricity Market |
Easily Solve Hard Math Equations & Problems
The idea of taking on math problems often leaves a bad taste in students’ mouths, regardless of what level they are at. More often than not, students tend to get easily overwhelmed when faced with hard math questions, especially those that come up in tests and final exams. Just ask them what their least favorite subject is, and a good percentage will answer “Math” without hesitation.
The thing is, understanding mathematics requires a very different kind of discipline compared to studying for English, history, or even some aspects of science. It takes a certain kind of approach to get a good grasp of how to deal with hard math problems. It’s definitely difficult, but it’s not at all impossible to enjoy answering hard math word problems and coming up with the corresponding solutions and answers easily – so long as you know the tips and tricks behind it.
Active Participation In Learning
You can’t simply sit in class and watch how teachers or other people solve hard math problems and expect to learn it as well. Math requires active participation in the whole learning process – from taking down notes on sample problems and guides to figuring out hard math equations, to putting in the effort to solve problems by yourself.
Students who are already experiencing difficulty in math, or who have math as their weakest subject, should really try their best to give 100%, particularly during math class. A teacher can only do so much in showing how it’s done; without active participation from the student, hard math problems will remain just that: hard.
It’s not enough to just memorize the equation and expect to hit the right answer. Solving hard math problems isn’t done by just remembering the exact formula used for a certain word problem and applying it to another. It’s about understanding how to use the formula with a certain problem that calls for it. Analyzing the principles behind these formulas can help in recognizing patterns for when a specific formula and manner of solving is required.
For example, hard problems in math for 8th graders involve algebraic formulas. It’s pretty easy to memorize that ‘x + y = z’, but if you can’t identify what the variables stand for in a word problem in the first place, then the formula becomes worthless. Even more so if a word problem requires several steps before you even get to deducing the values behind x, y, and z.
Math Requires Cumulative Knowledge
Math necessitates a progressive method of learning. What is being taught at a certain level builds on what has already been established in the previous grade levels. It certainly isn’t like taking on calculus when you haven’t even touched upon algebra and trigonometry just yet.
The key principles you have learned will be the basis on which new knowledge will be built on. This is why it’s very important to not just memorize the formulas and equations but have a deeper understanding of the rationale and system behind it. These systems and principles are what students will bring on as they level up from 8th grade algebra to hard math problems for 9th graders.
Is It Time to Seek Out Help & Guidance?
Given these key ideas on how students can better solve hard math problems and figure out equations and answers, most of them may find it hard to just get all the guidance they need from one teacher or professor alone.
If you are looking for “Math Help”, then our specialized one-on-one math tutoring service can help improve your ability as a student to answer mathematical problems with ease.
These private and focused sessions make it easier for students to raise difficult problems and mathematical concepts that they need further clarification on. Learning is done at their own pace, and coming to class more prepared can also make them feel more confident.
YYC Tutoring for math provides expert math tutoring for 8th and 9th grade students, as well as high school students in Calgary. We believe that confidence is key to success and developing this among students entails making sure that they’re well prepared and well versed particularly in subjects that they have most difficulty with. |
It’s important to know how cold weather can affect your heart, especially if you have cardiovascular disease.
Many people don’t realize how much they exert themselves, when they are not conditioned for it, simply by walking through snow. Even those who are accustomed to being outdoors in winter can accidentally suffer hypothermia if certain precautions are not taken.
Hypothermia means the body temperature has fallen below 95 degrees Fahrenheit. It occurs when your body can’t produce enough energy to keep the internal body temperature warm enough. It can kill you. Heart failure causes most deaths in hypothermia. Symptoms include lack of coordination, mental confusion, slowed reactions, shivering and sleepiness.
Children, the elderly and those with heart disease are at higher risk. As we age, we seem to become less sensitive to moderately cold conditions, so we can suffer hypothermia without realizing the danger.
People with heart disease often suffer chest pain or discomfort when they’re in cold weather. Some studies suggest that harsh winter weather may increase a person’s risk of heart attack due to overexertion.
It’s not just cold temperatures: high winds, snow and dampness can also cause the body to lose warmth. Wind is especially dangerous, because it removes the layer of heated air from around your body. Similarly, dampness causes the body to lose heat faster than it would at the same temperature in drier conditions.
To keep warm, wear layers of clothing. This traps air between the layers, forming a protective insulation. Also, cover your head: heat is lost through your head, and ears are especially prone to frostbite. Keep your hands and feet warm, too, as they lose heat quickly.
Don’t drink alcoholic beverages before going outdoors or when outside. Alcohol gives an initial feeling of warmth, because blood vessels in the skin expand. Heat is then drawn away from the body’s vital organs. |
Sixty-five million years ago, many different kinds of animals on Earth died out. It is believed that the larger land creatures suffered the most: most animals bigger than a dog disappeared, including most dinosaurs!
Something extraordinary had happened to destroy so much life. Scientists call this event the Cretaceous-Tertiary event (K-T for short). Nobody is sure what the K-T event was, but scientists have some ideas about what might have happened.
An asteroid is a giant piece of rock moving through space. Very rarely, these asteroids fall on Earth. Some scientists think that an asteroid 10 kilometres wide fell on Earth 65 million years ago. A large asteroid hitting the Earth would cause a huge fire storm, and cause a lot of dust to fly into the air. This dust would block out the sun for several months, or even for several years.
With most sunlight blocked by dust, the ground would get colder, and a long winter would come. To make things worse, the large amount of dust in the air would cause acid rain, rain so poisonous that it can harm or even kill plants.
The combination of the cold, little sun and poisonous rain would kill many plants. Herbivorous dinosaurs would not have enough left to eat and would starve, and without herbivores to eat, the carnivores would starve too.
When scientists examine layers of very old rocks from 65 million years ago, they find a lot of one type of rare metal called iridium. Because iridium is so rare on earth, it must have come from somewhere else, like an asteroid. More evidence of this is an enormous 180 kilometre (111 miles) wide crater in Mexico, called the Chicxulub Crater. An asteroid impact could have made such a crater.
A few scientists disagree with this explanation. They say that all this would have caused the dinosaurs to die much faster than they actually did, and they point to similar big craters around the world and their effects on ancient life.
Some scientists think that volcano eruptions could have caused the K-T event. These volcanoes would release huge amounts of ash and poisonous gases into the air. Like the dust in the asteroid explanation, this ash would stay in the air for a long time and cause a winter and acid rain.
These gases would also destroy the protective ozone layer in the sky. The ozone layer protects us from the Sun's harmful rays.
Scientists have found out that there had been some huge eruptions. These eruptions could have covered both Alaska and Texas in a kilometer of ash and lava!
/bi:t/ (B) A component in the machine data hierarchy larger than a bit and usually smaller than a word; now nearly always eight bits and the smallest addressable unit of storage. A byte typically holds one character.
A byte may be 9 bits on 36-bit computers. Some older architectures used "byte" for quantities of 6 or 7 bits, and the PDP-10 and IBM 7030 supported "bytes" that were actually bit-fields of 1 to 36 (or 64) bits! These usages are now obsolete, and even 9-bit bytes have become rare in the general trend toward power-of-2 word sizes.
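In C, the byte is the unit in which sizeof measures storage, and <limits.h> exposes the number of bits per byte; a minimal illustration (output varies by machine):

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_BIT is the number of bits in a byte: 8 on virtually all
       modern machines, though the C standard only requires at least 8. */
    printf("bits per byte: %d\n", CHAR_BIT);

    /* sizeof is measured in bytes; sizeof(char) is 1 by definition. */
    printf("sizeof(char) = %zu\n", sizeof(char));
    printf("sizeof(int)  = %zu bytes\n", sizeof(int));
    return 0;
}
```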
The term was coined by Werner Buchholz in 1956 during the early design phase for the IBM Stretch computer. It was a mutation of the word "bite" intended to avoid confusion with "bit". In 1962 he described it as "a group of bits used to encode a character, or the number of bits transmitted in parallel to and from input-output units". The move to an 8-bit byte happened in late 1956, and this size was later adopted and promulgated as a standard by the IBM System/360 family of computers (announced April 1964).
James S. Jones <firstname.lastname@example.org> adds:
I am sure I read in a mid-1970's brochure by IBM that outlined the history of computers that BYTE was an acronym that stood for "Bit asYnchronous Transmission E..?" which related to width of the bus between the Stretch CPU and its CRT-memory (prior to Core).
Terry Carr <email@example.com> says:
In the early days IBM taught that a series of bits transferred together (like so many yoked oxen) formed a Binary Yoked Transfer Element (BYTE).
[True origin? First 8-bit byte architecture?]
See also nibble, octet. |
| Black-tailed Jackrabbits are tremendous leapers, able to jump more than 6 m horizontally. They live in some of the hottest and driest regions of the continent, can survive on poor-quality foods, and get most or all of the water they need from their food. Where they can, they eat green vegetation, but they can survive in parts of the Southwest where creosote-bush forms a large part of their diet. They cope with extreme heat by lowering their metabolism and resting in the shade during the day, which conserves water. They get rid of extra salt through their urine, and blood flowing close to the skin in their enormous ears acts as a cooling mechanism. Although mostly nocturnal and solitary, large groups sometimes form near a good food supply. With their typically high reproductive output, Black-tails can be agricultural pests, and there were periods in the 1800s and 1900s when aggressive rabbit drives herded and destroyed 5,000-6,000 animals in a single day. In spite of this, they are quite common and widespread.
Also known as:
Gray, J.E., 1837. Description of some new or little known Mammalia, principally in the British Museum Collection, p. 586. The Magazine of Natural History, and Journal of Zoology, Botany, Mineralogy, Geology, and Meteorology, New Series, 1:577-587.
Mammal Species of the World.
Mammalian Species, American Society of Mammalogists' species account. |
Written by an outstanding scholar, Phonics They Use seamlessly weaves together the complex and varied strategic approaches needed to help students develop reading and spelling skills.
Long-positioned and long-respected as a bestseller by both pre-service and practicing teachers of reading, this affordable text offers a coherent collection of practical, hands-on activities that provide a framework for teaching phonics. The Fourth Edition continues to emphasize that what matters is not how much phonics students know but what they actually use when they need phonics for decoding a new word, for reading and spelling a new word, and for writing. Rather than subscribe to a single theory, Pat Cunningham stresses a balanced reading program, incorporating a variety of strategic approaches tied to the individual needs of children. Packed with new activities and strategies for teaching reading, this book is an invaluable resource for any new or veteran teacher.
Now teachers have access to a new grade-level series Making Words that offers fresh multi-level activities and lessons for the kindergarten through fifth grade classroom. Based on the active and innovative approach to making words that teachers and their students have grown to love in Phonics They Use, this new series is the best resource you can have on hand for motivating your students to learn words!
Take a Peek at What's New to the Edition!
New Chapter on Making Words in Kindergarten (Ch. 4) describes and provides sample lesson plans on how teachers can make each kindergarten student a letter of the alphabet, using a big letter card, to teach them how to begin to form words.
Data Structures and Algorithms
9.3 Optimal Binary Search Trees
Up to this point, we have assumed that an optimal search tree is one in which the probability of occurrence of all keys is equal (or is unknown, in which case we assume it to be equal). Thus we concentrated on balancing the tree so as to make the cost of finding any key at most log n.
However, consider a dictionary of words used by a spelling checker for English language documents. It will be searched many more times for 'a', 'the', 'and', etc than for the thousands of uncommon words which are in the dictionary just in case someone happens to use one of them. Such a dictionary needs to be large: the average educated person has a vocabulary of 30 000 words, so it needs ~100 000 words in it to be effective. It is also reasonably easy to produce a table of the frequency of occurrence of words: words are simply counted in any suitable collection of documents considered to be representative of those for which the spelling checker will be used. A balanced binary tree is likely to end up with a word such as 'miasma' at its root, guaranteeing that in 99.99+% of searches, at least one comparison is wasted!
If key k has relative frequency r_k, then in an optimal tree the weighted search cost, i.e. the sum over all keys k of r_k multiplied by the number of comparisons needed to find k, is minimised.
We make use of the following property:
Lemma: Sub-trees of optimal trees are themselves optimal trees.
Proof: If a sub-tree of a search tree is not an optimal tree, then a better search tree will be produced if the sub-tree is replaced by an optimal tree.
Thus the problem is to determine which key should be placed at the root of the tree. Then the process can be repeated for the left- and right-sub-trees. However, a divide-and-conquer approach would choose each key as a candidate root and repeat the process for each sub-tree. Since there are n choices for the root and 2^O(n) choices for the roots of the two sub-trees, this leads to an O(n^n) algorithm.
An efficient algorithm can be generated by the dynamic programming approach. We calculate the O(n) best trees consisting of just two elements (the neighbours in the sorted list of keys).
In the figure (not reproduced here), there are two possible arrangements for the tree containing the neighbouring keys f and g: (a) with f at the root and g below it, and (b) with g at the root and f below it. The cost of (a) is r_f + 2r_g and the cost of (b) is r_g + 2r_f; in the example shown, (b) is the optimum tree and its cost is saved as c(f,g). We also store g as the root of the best f-g sub-tree in best(f,g).
Similarly, we calculate the best cost for all n-1 sub-trees with two elements, c(g,h), c(h,i), etc.
There are O(n^2) such sub-tree costs. Each one requires O(n) operations to determine, if the costs of the smaller sub-trees are known.
Thus the overall algorithm is O(n^3).
Code for optimal binary search trees
Note some C 'tricks' to handle dynamically-allocated two-dimensional arrays using pre-processor macros for C and BEST!
This Java code may be easier to comprehend for some! It uses this class for integer matrices.
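The linked C and Java sources are not reproduced here, so what follows is a minimal C sketch of the same O(n^3) dynamic programming. The key frequencies are illustrative placeholders, and the arrays c, best and w mirror the c(f,g) and best(f,g) notation used above rather than the macros in the original code.

```c
#include <stdio.h>

#define N   5               /* number of keys (illustrative)                */
#define INF 1000000000

/* Illustrative relative frequencies for keys k0 < k1 < ... < k4; real data
   would come from word-frequency counts as described above.                */
static const int r[N] = {34, 8, 50, 3, 5};

/* c[i][j]    = cost of the best tree built from keys i..j
   best[i][j] = root of that best tree
   w[i][j]    = total frequency of keys i..j                                 */
static int c[N][N], best[N][N], w[N][N];

int main(void) {
    /* single-element trees: the cost is just the key's own frequency */
    for (int i = 0; i < N; i++) {
        w[i][i] = r[i];
        c[i][i] = r[i];
        best[i][i] = i;
    }

    /* build larger trees from smaller ones */
    for (int len = 1; len < N; len++) {
        for (int i = 0; i + len < N; i++) {
            int j = i + len;
            w[i][j] = w[i][j - 1] + r[j];
            c[i][j] = INF;
            /* try every key k as the root of the tree over keys i..j */
            for (int k = i; k <= j; k++) {
                int left  = (k > i) ? c[i][k - 1] : 0;
                int right = (k < j) ? c[k + 1][j] : 0;
                int cost  = left + right + w[i][j];
                if (cost < c[i][j]) {
                    c[i][j] = cost;
                    best[i][j] = k;
                }
            }
        }
    }

    printf("optimal cost = %d, root = key %d\n", c[0][N - 1], best[0][N - 1]);
    return 0;
}
```

Adding w(i,j), the total frequency of the keys from i to j, each time a new root is placed above two optimal sub-trees accounts for every key in the combined tree moving one level deeper.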
The data structures used may be represented as matrices. After the initialisation steps, they contain the frequencies rf_i in c(i,i) (the costs of single-element trees), 'max' everywhere below the diagonal, and zeroes in the positions just above the diagonal (to allow for the trees which don't have a left or right branch).
In the first iteration, all the positions below the diagonal (c(i,i+1)) will be filled in with the optimal costs of two-element trees from i to i+1.
In subsequent iterations, the optimal costs of k-1 element trees (c(i,i+k)) are filled in using the previously calculated costs of smaller trees. |
Dynamic Programming. In this handout: a shortest path example; deterministic dynamic programming; an inventory example; a resource allocation example.
Dynamic Programming. Dynamic programming is a widely-used mathematical technique for solving problems that can be divided into stages and where decisions are required in each stage. The goal of dynamic programming is to find a combination of decisions that optimizes a certain quantity associated with the system.
A typical example: Shortest Path. Ben plans to drive from NY to LA and has friends in several cities. After 1 day’s driving he can reach Columbus, Nashville, or Louisville. After 2 days of driving he can reach Kansas City, Omaha, or Dallas. After 3 days of driving he can reach Denver or San Antonio. After 4 days of driving he can reach Los Angeles. The actual mileages between cities are given in the figure (next slide). Where should Ben spend each night of the trip to minimize the number of miles traveled?
Shortest Path: network figure (arc mileages omitted here). The cities are numbered New York (1) in stage 1; Columbus (2), Nashville (3) and Louisville (4) in stage 2; Kansas City (5), Omaha (6) and Dallas (7) in stage 3; Denver (8) and San Antonio (9) in stage 4; and Los Angeles (10) in stage 5.
Shortest Path problem: Solution. The problem is solved recursively by working backward in the network. Let c_ij be the mileage between cities i and j, and let f_t(i) be the length of the shortest path from city i to LA (city i being in stage t). Stage 4 computations are obvious: f_4(8) = 1030, f_4(9) = 1390.
Stage 3 computations. Work backward one stage (to stage 3 cities) and find the shortest path to LA from each stage 3 city. To determine f_3(5), note that the shortest path from city 5 to LA must be one of the following: Path 1: go from city 5 to city 8 and then take the shortest path from city 8 to city 10. Path 2: go from city 5 to city 9 and then take the shortest path from city 9 to city 10. Similarly, f_3(6) and f_3(7) are found by comparing the corresponding paths through cities 8 and 9.
Stage 2 computations. Work backward one stage (to stage 2 cities) and find the shortest path to LA from each stage 2 city.
Stage 1 computations. Now we can find f_1(1), and hence the shortest path from NY to LA. Checking back through our calculations, the shortest path is 1 – 2 – 5 – 8 – 10, that is, NY – Columbus – Kansas City – Denver – LA, with total mileage 2870.
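A short C sketch of the backward recursion follows. The network figure is not reproduced in the text, so the arc mileages below are the ones conventionally used with this textbook example; they agree with the values the slides do quote (f_4(8) = 1030, f_4(9) = 1390, and the optimal total of 2870 via 1 – 2 – 5 – 8 – 10), but any mileage not quoted above should be treated as an assumption.

```c
#include <stdio.h>

#define NCITY 11            /* cities are numbered 1..10 as on the slide */
#define INF   1000000

/* Arc mileages c[i][j]. The slide's figure is not reproduced in the text, so
   these are the values conventionally used with this example; the ones the
   slides quote (8->10 = 1030, 9->10 = 1390, optimal total 2870) agree.      */
static int c[NCITY][NCITY];

static void arc(int i, int j, int miles) { c[i][j] = miles; }

int main(void) {
    int f[NCITY];    /* f[i]   = length of shortest path from city i to LA */
    int nxt[NCITY];  /* nxt[i] = best city to drive to next from city i    */

    for (int i = 0; i < NCITY; i++)
        for (int j = 0; j < NCITY; j++)
            c[i][j] = INF;

    arc(1, 2, 550);   arc(1, 3, 900);   arc(1, 4, 770);    /* stage 1 -> 2 */
    arc(2, 5, 680);   arc(2, 6, 790);   arc(2, 7, 1050);   /* stage 2 -> 3 */
    arc(3, 5, 580);   arc(3, 6, 760);   arc(3, 7, 660);
    arc(4, 5, 510);   arc(4, 6, 700);   arc(4, 7, 830);
    arc(5, 8, 610);   arc(5, 9, 790);                      /* stage 3 -> 4 */
    arc(6, 8, 540);   arc(6, 9, 940);
    arc(7, 8, 790);   arc(7, 9, 270);
    arc(8, 10, 1030); arc(9, 10, 1390);                    /* stage 4 -> 5 */

    /* backward recursion: start at LA (city 10) and work back to NY (city 1) */
    f[10] = 0;
    for (int i = 9; i >= 1; i--) {
        f[i] = INF;
        nxt[i] = 10;
        for (int j = i + 1; j <= 10; j++) {
            if (c[i][j] < INF && c[i][j] + f[j] < f[i]) {
                f[i] = c[i][j] + f[j];
                nxt[i] = j;
            }
        }
    }

    printf("shortest NY-LA mileage: %d\npath:", f[1]);
    for (int i = 1; i != 10; i = nxt[i]) printf(" %d ->", i);
    printf(" 10\n");
    return 0;
}
```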
General characteristics of Dynamic Programming The problem structure is divided into stages Each stage has a number of states associated with it Making decisions at one stage transforms one state of the current stage into a state in the next stage. Given the current state, the optimal decision for each of the remaining states does not depend on the previous states or decisions. This is known as the principle of optimality for dynamic programming. The principle of optimality allows to solve the problem stage by stage recursively.
Division into stages The problem is divided into smaller subproblems, each of them represented by a stage. The stages are defined in many different ways depending on the context of the problem. If the problem is about long-term development of a system then the stages naturally correspond to time periods. If the goal of the problem is to move some objects from one location to another on a map then partitioning the map into several geographical regions might be the natural division into stages. Generally, if the accomplishment of a certain task can be considered as a multi-step process then each stage can be defined as a step in the process.
States Each stage has a number of states associated with it. Depending what decisions are made in one stage, the system might end up in different states in the next stage. If a geographical region corresponds to a stage then the states associated with it could be some particular locations (cities, warehouses, etc.) in that region. In other situations a state might correspond to amounts of certain resources which are essential for optimizing the system.
Decisions Making decisions at one stage transforms one state of the current stage into a state in the next stage. In a geographical example, it could be a decision to go from one city to another. In resource allocation problems, it might be a decision to create or spend a certain amount of a resource. For example, in the shortest path problem three different decisions are possible to make at the state corresponding to Columbus; these decisions correspond to the three arrows going from Columbus to the three states (cities) of the next stage: Kansas City, Omaha, and Dallas.
Optimal Policy and Principle of Optimality The goal of the solution procedure is to find an optimal policy for the overall problem, i.e., an optimal policy decision at each stage for each of the possible states. Given the current state, the optimal decision for each of the remaining states does not depend on the previous states or decisions. This is known as the principle of optimality for dynamic programming. For example, in the geographical setting the principle works as follows: the optimal route from a current city to the final destination does not depend on the way we got to the city. A system can be formulated as a dynamic programming problem only if the principle of optimality holds for it.
Recursive solution to the problem The principle of optimality allows to solve the problem stage by stage recursively. The solution procedure first finds the optimal policy for the last stage. The solution for the last stage is normally trivial. Then a recursive relationship is established which identifies the optimal policy for stage t, given that stage t+1 has already been solved. When the recursive relationship is used, the solution procedure starts at the end and moves backward stage by stage until it finds the optimal policy starting at the initial stage.
Solving Inventory Problems by DP. Main characteristics: 1. Time is broken up into periods; the demands for all periods are known in advance. 2. At the beginning of each period, the firm must determine how many units should be produced. 3. Production and storage capacities are limited. 4. Each period’s demand must be met on time from inventory or current production. 5. During any period in which production takes place, a fixed cost of production as well as a variable per-unit cost is incurred. 6. The firm’s goal is to minimize the total cost of meeting the demands on time.
Inventory Problems: Example. Producing airplanes over 3 production periods, with no inventory at the beginning. At most 3 airplanes can be produced in each period, and at most 2 airplanes can be kept in inventory. The set-up cost for each period is 10. Determine a production schedule to minimize the total cost (the DP solution on the board; a code sketch follows below).
Period:    1  2  3
Demand:    1  2  1
Unit cost: 3  5  4
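All of the data for this example are on the slide, so the backward recursion can be sketched directly in C. The state is the inventory carried into each period; no holding cost is given on the slide, so it is assumed to be zero here.

```c
#include <stdio.h>

#define NPERIODS 3     /* production periods                      */
#define MAX_PROD 3     /* at most 3 airplanes made per period     */
#define MAX_INV  2     /* at most 2 airplanes kept in inventory   */
#define SETUP    10    /* fixed cost whenever production happens  */
#define INF      1000000

static const int demand[NPERIODS]    = {1, 2, 1};
static const int unit_cost[NPERIODS] = {3, 5, 4};

/* f[t][s] = minimum cost of meeting demand in periods t..NPERIODS-1 when the
   period starts with s airplanes in inventory (holding cost assumed zero).  */
static int f[NPERIODS + 1][MAX_INV + 1];
static int decision[NPERIODS][MAX_INV + 1];

int main(void) {
    for (int s = 0; s <= MAX_INV; s++)
        f[NPERIODS][s] = 0;                       /* nothing left to produce */

    /* backward recursion over the periods */
    for (int t = NPERIODS - 1; t >= 0; t--) {
        for (int s = 0; s <= MAX_INV; s++) {
            f[t][s] = INF;
            for (int x = 0; x <= MAX_PROD; x++) {        /* units produced  */
                int end_inv = s + x - demand[t];
                if (end_inv < 0 || end_inv > MAX_INV) continue;
                int cost = (x > 0 ? SETUP : 0) + unit_cost[t] * x
                           + f[t + 1][end_inv];
                if (cost < f[t][s]) { f[t][s] = cost; decision[t][s] = x; }
            }
        }
    }

    /* recover an optimal schedule starting from an empty inventory */
    printf("minimum total cost: %d\n", f[0][0]);
    for (int t = 0, s = 0; t < NPERIODS; t++) {
        int x = decision[t][s];
        printf("period %d: produce %d\n", t + 1, x);
        s = s + x - demand[t];
    }
    return 0;
}
```

The program prints the minimum total cost and one production schedule that achieves it.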
Resource Allocation Problems Limited resources must be allocated to different activities Each activity has a benefit value which is variable and depends on the amount of the resource assigned to the activity The goal is to determine how to allocate the resources to the activities such that the total benefit is maximized
Resource Allocation Problems: Example. A college student has 6 days remaining before final exams begin in his 4 courses. He wants to allocate the study time as effectively as possible. He needs at least 1 day for each course and wants to concentrate on just one course each day, so 1, 2, or 3 days should be allocated to each course. He estimates that the alternative allocations for each course would yield the numbers of grade points shown in a table (not reproduced here). How many days should be allocated to each course? (The DP solution on the board; a sketch of the recursion follows below.)
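The recursion has the same backward structure as the inventory example, with the remaining study days as the state. Because the slide’s grade-point table is not reproduced in the text, the numbers in the sketch below are purely hypothetical placeholders; only the structure (4 courses, 6 days, 1 to 3 days per course) comes from the slide.

```c
#include <stdio.h>

#define COURSES 4
#define DAYS    6
#define NEG_INF (-1000000)

/* grade[c][d] = grade points earned in course c if d days are spent on it.
   The slide's own table is not reproduced in the text, so these numbers are
   purely hypothetical placeholders.                                         */
static const int grade[COURSES][4] = {
    /* d =  0  1  2  3   (0 days is never chosen; column kept for indexing) */
    {       0, 3, 5, 6 },
    {       0, 5, 6, 7 },
    {       0, 2, 4, 7 },
    {       0, 6, 7, 9 },
};

/* f[c][r] = best total grade points from courses c..COURSES-1 with r days left */
static int f[COURSES + 1][DAYS + 1];
static int choice[COURSES][DAYS + 1];

int main(void) {
    for (int r = 0; r <= DAYS; r++)
        f[COURSES][r] = 0;                   /* leftover days earn nothing */

    /* backward recursion over the courses */
    for (int c = COURSES - 1; c >= 0; c--) {
        for (int r = 0; r <= DAYS; r++) {
            f[c][r] = NEG_INF;               /* marks an infeasible state  */
            for (int d = 1; d <= 3 && d <= r; d++) {   /* 1-3 days allowed */
                if (f[c + 1][r - d] == NEG_INF) continue;
                int v = grade[c][d] + f[c + 1][r - d];
                if (v > f[c][r]) { f[c][r] = v; choice[c][r] = d; }
            }
        }
    }

    printf("best total grade points: %d\n", f[0][DAYS]);
    for (int c = 0, r = DAYS; c < COURSES; c++) {
        printf("course %d: %d day(s)\n", c + 1, choice[c][r]);
        r -= choice[c][r];
    }
    return 0;
}
```

With the real grade-point table substituted in, the same recursion answers the slide’s question of how many days to give each course. |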
Objective: Decide how you create a “natural critical learning environment” in your online courses.
What is a “natural critical learning environment”?
According to Ken Bain, “natural” means answering questions and completing tasks that naturally matter to learners’ interests. Learners make decisions, defend their choices, receive feedback, and try again when their answers are incomplete.
“Critical” means thinking critically. Students learn to reason from evidence, examine the quality of their reasoning, make improvements, and ask probing and insightful questions. There are five essential elements that make up a natural critical learning environment.
- A “natural critical learning environment” begins with an intriguing question or problem. Often the most successful questions are highly provocative. Many teachers give students answers and never ask questions.
- Students are provided guidance to understand the significance of the question. Many teachers present intellectual problems but often focus only on the course subject and issues. In contrast, the best teachers tend to take the subjects and issues from the course and integrate them with broader concerns and issues, creating an interdisciplinary approach. They remind students how the current question relates to some larger issue that already interests them.
- Students are engaged in some higher-order intellectual activity where they are encouraged to compare, apply, evaluate, analyze, and synthesize. They do not just listen and remember!
- Students answer the question. The best teachers raise important questions but challenge students to develop their own answers and defend them.
- Finally, a good learning environment leaves students wondering: “What’s the next question?” and “What can we ask now?”
When instructors prioritize “covering content” over “engaged thinking”, they do not fully appreciate or understand the role of questions in teaching content. There is a deep misunderstanding about the significance of questions in the learning (and thinking) process. In fact, every textbook could be rewritten by translating statements into questions. Most instruction ignores questions by spoon-feeding learners “answers.” When we teach by giving answers, we are not teaching learners how to think critically. Critical thinking is not driven by answers but by questions.
Questions define tasks, express problems and describe issues. Answers, on the other hand, often signal a full stop in thought. Only when an answer generates a further question does thought continue its life as such. This is why it is true that only students who have questions are really thinking and learning. Moreover, the quality of the questions students ask determines the quality of the thinking they are doing. It is possible to give students an examination on any subject by just asking them to list all of the questions that they have about a subject, including all questions generated by their first list of questions. That we do not test students by asking them to list questions and explain their significance is again evidence of the privileged status we give to answers isolated from questions. That is, we ask questions only to get thought-stopping answers, not to generate further questions.
Effective questioning strategies guide discussions and promote critical interaction. Learners need time to process questions and develop responses that match the cognitive level of the questions asked. Higher-level cognitive and affective questions encourage learners to interpret, analyze, evaluate, infer, explain and self-regulate. According to Wilson (2002), there are three types of questions that encourage learners to use higher levels of cognitive, or affective, processes for critical thinking: convergent, divergent, and evaluative questions.
Convergent Questioning Strategies
Convergent questions normally ask learners to analyze issues and their personal awareness of issues. Learners often become more conscious of the learning process when convergent questions are framed around relationships between concepts, ideas, and information. Key words used in convergent questions are support, translate, judge, classify, select, match, explain, represent, and demonstrate. Convergent questions ask learners to analyze information by breaking down parts, recognizing patterns, forming assumptions and identifying relationships (Wilson, 2002). Convergent questions are used to check for understanding by asking learners to identify content information or interpret information in a new way. Convergent questions do not generate a great deal of interaction.
Divergent Questioning Strategies
Divergent questions explore different possibilities, variations, and alternative answers or scenarios, and require learners to analyze, synthesize or evaluate knowledge and to project or predict different outcomes (Wilson, 2002). Divergent questions generally stimulate creativity and are used to investigate cause and effect relationships. Wilson points out that answers to divergent questions often have a wide range of acceptability, since they are subjective and based on possibility or probability. Divergent questions often challenge learners to synthesize information through creative and original thinking. Divergent questions are used to provide opportunities to expose learners to alternative possibilities and to new solutions presented by different learners.
Evaluative Questioning Strategies
Evaluative questions require comparative analysis from different perspectives before learners can synthesize information and reach conclusions. Evaluative questions usually require higher levels of cognitive and emotional judgment (Wilson, 2002). Evaluative questions promote critical thinking by providing reflective opportunities. Learners evaluate issues by assessing, appraising, and defending information according to a set of criteria, justify their beliefs, and then reflect and gather resources to support their opinion. Discussions can often become intense and emotional, and facilitation is critical to prevent argumentative interactions.
According to The Foundation for Critical Thinking (n.d.), the best-known teaching strategy for promoting critical thinking is Socratic questioning, since it highlights the need for clarity and logical consistency. Socratic questions encourage critical thinking when learners look deeply into assumptions, points of view, perspectives, and evidence to analyze assumptions and examine reasons, concepts and consequences. They help learners to understand the implications of what they discuss online. Socratic questions ask learners to identify cause and effect relationships, probe by asking “so what”, and look for relevant responses (Stepien, 1999). They ask learners to clarify, look for meaning, and provide justification and evidence. Socratic questions ask learners to consider and evaluate different paths.
- Critical thinking in the Online Classroom?
- Critical Thinking Community
- Critical Thinking in Asynchronous Discussions
- Facilitating Students’ Critical Thinking in Online Discussion:
- Using Discussions to Promote Critical Thinking in an Online Environment
- Using Online Reflection and Conversation to Build Community.
- Evaluation of the Effectiveness of Online Resources in Developing Student Critical Thinking: Review of Literature and Case Study of a Critical Thinking Online Site |
The gravitational pull of the moon and sun along with the rotation of the earth cause the tides. In some places, tides cause water levels near the shore to vary up to 40 feet. People harnessed this movement of water to operate grain mills more than a 1,000 years ago in Europe. Today, tidal energy systems generate electricity. Producing tidal energy economically requires a tidal range of at least 10 feet.
One type of tidal energy system uses a structure similar to a dam called a barrage. The barrage is installed across an inlet of an ocean bay or lagoon that forms a tidal basin. Sluice gates on the barrage control water levels and flow rates to allow the tidal basin to fill on the incoming high tides and to empty through an electricity turbine system on the outgoing ebb tide. A two-way tidal power system generates electricity from both the incoming and outgoing tides.
A potential disadvantage of tidal power is the effect a tidal station can have on plants and animals in estuaries of the tidal basin. Tidal barrages can change the tidal level in the basin and increase turbidity (the amount of matter in suspension in the water). They can also affect navigation and recreation.
Several tidal power barrages operate around the world. The Sihwa Lake Tidal Power Station in South Korea has the largest electricity generation capacity at 254 Megawatts (MW). The oldest operating tidal power plant is in La Rance, France, with 240 MW of electricity generation capacity. The next largest tidal power plant is in Annapolis Royal in Nova Scotia, Canada, with 20 MW of electricity generation capacity. China, Russia, and South Korea all have smaller tidal power plants.
The United States does not have any tidal power plants, and it only has a few sites where tidal energy could be economical to produce. France, England, Canada, and Russia have much more potential to use tidal power.
Tidal turbines look similar to wind turbines. They can be placed on the sea bed where there is strong tidal flow. Because water is about 800 times denser than air, tidal turbines have to be much sturdier and heavier than wind turbines. Tidal turbines are more expensive to build than wind turbines but capture more energy with the same size blades. Strangford Lough, Northern Ireland, and Uldolmok, South Korea, both have 1.5 MW tidal turbines. A tidal turbine project is under development in the East River of New York. Another project with up to 400 MW of electricity generation capacity is under development in northern Scotland.
A tidal fence is a type of tidal power system that has vertical-axis turbines mounted in a fence or row placed on the sea bed, similar to tidal turbines. Water passing through the turbines generates electricity. As of the end of 2016, no tidal fence projects were operating. |
Saturday, January 29, 2011
As a way to help your child with comprehension, you can use this story elements chart to help them understand the key characteristics of a fiction story: characters, setting, problem, and solution. Students can draw pictures and/or write notes about each part. As they work on these, encourage them to look back at the text to find answers. Also encourage them to use specific details, such as the proper names of the characters rather than "the boy" or "the girl," and the specific setting, such as "a farm in the winter" rather than "outside." |
For 300 years we have known that the Earth’s magnetic field moves gradually westward. Computer simulations on the CSCS super-computer “Monte Rosa” by researchers at ETH Zurich and the University of Leeds explain why this happens.
by Simone Ulmer, CSCS
The Earth’s magnetic field surrounding our globe protects the Earth from harmful radiation and helps animals like birds or bats to get their bearings. The Earth’s magnetic field is mainly generated by the so-called geodynamo processes in the liquid outer core and in the solid inner core. Philip Livermore from the University of Leeds along with Rainer Hollerbach and Andrew Jackson from ETH Zurich have now demonstrated for the first time, using computer simulations on the CSCS super-computer “Monte Rosa”, that the magnetic field in turn influences these dynamic processes in the Earth’s core. Hence, the magnetic field leads to the solid inner core – which is about the size of the moon – rotating in an easterly direction and the outer core fluid and magnetic field being pushed westward. The latter was already observed back in 1692 by the discoverer of Halley’s comet, the natural scientist Edmund Halley, but could not be explained up to now.
Thrust to the east and west
The scientists used new methods for their simulation and worked in particular with a viscosity that was two orders of magnitude lower and thus 100 times closer to reality than in previous models. The scientists said that this was how they had succeeded in achieving correspondingly higher resolutions of certain physical processes in the Earth’s core. The simulations show that the force of the Earth’s magnetic field in the outermost region of the liquid core drives the magnetic field westwards. At the same time, these very forces give the solid inner core a thrust towards the east. This leads to Earth’s inner core having a higher rotation speed than the Earth.
Based on their study, the researchers came to the conclusion that even subtle changes in the Earth’s inner magnetic field can lead to the respective directions of movement being reversed. This, in turn, explains the observation that over the last 3,000 years the Earth’s magnetic field has on several occasions shifted eastward instead of westward. It is likely that the Earth’s inner core then rotated in a westerly direction instead of eastwards. According to the researchers, even the smallest changes in the magnetic field could have led to different rotation speeds of the inner core. The research team of Hrvoje Tkalcic at the Australian National University recently made just such an observation of spin-rate fluctuations over the last 50 years.
Sole opportunity for reconstruction
Numerical models of the Earth’s magnetic field rank among the most computer-intensive simulations in high-performance computing. In the simulations, systems of equations governing fluid dynamics, classical mechanics and thermodynamics have to be solved. Together with seismic measurements, such simulations are the only tool for researching the Earth’s interior from depths of 2,900 kilometres all the way to the Earth’s centre at a depth of 6,378 kilometres. In recent decades studies of this kind have made enormous contributions to understanding what happens in the Earth’s interior.
Livermore PW, Hollerbach R & Jackson A: Electromagnetically driven westward drift and inner-core superrotation in Earth’s core, PNAS 2013, 110, 15914-15918; doi:10.1073/pnas.1307825110 |
Elyse Warren is in her second year of a Masters of Arts in Education and Human Development at George Washington University. She also works at PBS Learning Media, where she focuses on educational technology and providing resources for blended learning in the classroom.
Narrowing the gap in the United States between male and female students is a shared goal amongst education policy makers. However, the conversation has been dominated in recent years by the move to address what was a major disparity between female students and their male peers in the subjects of science, technology, engineering, and mathematics (STEM). Now that this gender gap is narrowing significantly, thanks to strong federal programming and better ways of engaging females in STEM topics in the classroom, we need to take stock of where we stand as a nation in addressing our gender literacy deficit.
Similar to the STEM imbalance faced by females, male students face a global, long-term trend of falling behind when it comes to literacy. The National Assessment of Educational Progress (NAEP), produced by the National Center for Education Statistics, demonstrated this disparity with a staggering thirteen-point gap between male and female students in the United States in its first annual Nation’s Report Card in 1971.
On the cusp of the publication of that report, the US women’s rights movement refocused their mission to revolutionizing education reform for girls after the first gender discrimination case was filed in 1969. The case concerned discrimination against a part-time faculty member at the University of Maryland during her application process for a full-time position and prompted a wave of inspiration for women and girls to bring their discrimination cases to justice. The buzz from this case was so influential that it caused the House Committee on Education and Labor to address these new claims and establish the foundation for amendment Title IX of the Education Amendments of 1972.
Needless to say, the case for reinvigorating male literacy was not on the horizon at the dawn of the first NAEP study.
In 1992, the American Association of University Women, attempted to further cement the vision for education that the women’s rights movement had initiated in the 1970s with a study that was published in the book, How Schools Shortchange Girls. The study depicts a granular portrait of the treatment of girls in the classroom and their performance in comparison to their male peers. In her book, The Trouble With Boys, journalist Peg Tyre highlights the study’s claim “boys are more likely to feel mastery and control over academic challenges, while girls are more likely to feel powerless in academic situations,” in her investigation on male literacy deficits to showcase the general stereotyping of how male students learn.
The study did not go unnoticed and after heavy lobbying, it led to the codification of the Gender Equity in Education Act of 1994, giving schools agency to help girls succeed with funding for new initiatives.
These doctrines tackled the education injustices facing girls and revolutionized the space that teachers and policy makers operate in today.
There is speculation as to whether the reverential mission to help girls has led to apathy when it comes to male deficits, but the correlation and causation are unclear. The male-to-female ratio of teachers is skewed heavily towards females in the United States, and education and the classroom are dominated by female activity. In turn, young male students have been shown to respond by not actively participating in reading activities when the reading topics are perceived as “girly” or otherwise gendered. Research illustrates that a lack of male role models reading, or being seen reading, at home has a direct impact on literacy and the desire to read among young male students.
The old adage, “boys will be boys,” applies here as well. Biologically, boys are prone to slower language development than girls and face a higher risk of reading problems or stuttering. Research indicates that during infancy the left hemisphere, which is responsible for language comprehension, develops after the right in males. To complicate this even further, Broca’s area, the brain region that helps with language, has 20 percent more neurons in the female brain than in the male brain.
Boys are also subject to more behavioral issues in the classroom. Diagnoses of attention disorders such as ADD/ADHD, which affect reading comprehension, are increasing at a rate of 5% per year, unprecedented in the 21st century, and they are significantly skewed towards boys, with a diagnosis rate of 13.2% compared to 5.6% for girls.
This does not mean that females are better readers than males. The data indicate that cognitively there are differences in how we process what we learn. This, however, does not mean that we cannot try to address these differences in the classroom and provide a just education for all our students.
So what can we do about this? Serendipitously, the NAEP published their annual Nation’s Report Card last week. Testing eight subjects across grades four, eight, and twelve, we see that, despite new initiatives such as the Common Core and more testing, the US has declined in overall math and reading.
Nestled in this data, within the achievement gap section, we find the gender deficits. In 2012, the gap between male and female students at age nine had closed to about five points, and since then it has remained the same. The picture is less encouraging for older students: in the most recent assessment the gap at age 17 has been stagnant at eight points since 1992, while the gap at age 13 has narrowed only slightly from eight points.
The evidence shows that the gap is narrowing, but it is still pronounced, and it is a global problem. Finland, with an education system idolized across the world, faces the highest gender deficit, with a gap of 62 points on the Program for International Student Assessment (PISA), compared with a gap of 31 for the US.
Biology may be partially to blame for this drastic gender gap, but maybe it also has to do with what the boys are reading.
In France, researchers tried to answer that question by providing books aligned with boys’ interests, but they ultimately concluded that there was not enough evidence to establish causation from the correlation: enjoyment rose by only .11 points, while literacy actually decreased by 15.26 points. Brookings Senior Fellow Tom Loveless analyzes this data in the 2015 Brown Center Report on American Education and argues that, while the enjoyment of reading can of course increase the desire to read, the evidence is not sufficient to conclude that enjoyment is the root of the issue.
U.S. Secretary of Education Arne Duncan, one of President Obama’s senior cabinet members since 2009, announced his resignation in early October. His tenure and reform initiatives were subjected to waves of bipartisan criticism from seasoned educators. One point on which nearly everyone agreed, however, was the gap in male achievement, not only in K-12 literacy but also in success through college.
In a statement made before he took his position as Secretary, Duncan said of the swelling achievement gap among college men that “it seems to be getting worse,” and it is. Long-term trends indicate that the situation is indeed worsening, and the idea of an affirmative action program for male students has been floated to help balance the disparity between all students regardless of background.
Loveless points to a need to fund better assessments that test smaller populations and that can establish whether raising the enjoyment of reading helps close gaps. I suggest a more bottom-up approach. We need to fund professional development for our teachers in English Language Arts, similar to what we do for girls in STEM, so that they better understand the cognitive differences at play and what they can do to meet the needs of their students. Federal funding needs to be allocated for statistically significant sampling at age nine, and, as Loveless suggests, we need better assessments that can track whether these changes are significant at ages 13 and 17 on the NAEP tests.
We need only take note of our friends in England to see that national programs such as the National Literacy Trust, which provides male role models for male students, are effective, voluntary tools for narrowing these gaps. With better assessment and research, we can address the problem and close the global gaps that riddle even the best programming.
PS21 is a non-national, nonpartisan, nongovernmental organization. All views expressed are the author’s own. |
At the core of the Museum's scientific work lies taxonomy: the description, classification and naming of species. This science is the foundation for all the biological sciences - if we cannot accurately describe the organism, the biological research that we do will not be reliable. Species are essential concepts in describing diversity and exploring evolution - the Museum's collections and research centre on taxonomy, but integrate it with all sorts of other scientific approaches.
Taxonomy is published in the scientific literature in a number of ways - individual species results are published increasingly in short papers, sometimes online. However, there is great value in ambitious works that cover whole groups of organisms - it allows all members of the group to be compared in a systematic way and new ideas and conclusions on diversity and evolution explored.
The final part of Dr Norman Robson’s Hypericum monograph was published in Phytotaxa. This is an important monograph of a species-rich flowering plant genus; Hypericum (approximately 480 species) is one of 100 plant genera which together represent 22% of angiosperm (flowering plant) diversity.
A genus is a classification group for a number of closely related species. Hypericum is a genus of flowering plants that is worldwide in distribution and familiar as a garden plant in the UK, and some species have been used in the past in herbal treatments. (The name St John’s Wort is commonly used for these plants.) A New Zealand species, Hypericum gramineum, is shown below.
The entire work comprises 1,247 pages in 11 parts, the culmination of 27 years of work and more than 50 years of research by Dr Robson on this genus. The editorial in Phytotaxa states that “The size of such genera means that complete monographic treatments to account for species diversity are time-consuming, costly and labour-intensive. Consequently, the species-level taxonomy of most such groups is poorly known [and this] presents a substantial barrier both to the goal of completing the global inventory and to understanding the evolution of the diversity they contain. Hypericum is now a notable exception to this problem” |
In this ESL grammar exercise, students must identify the adjectives in sentences and the nouns that they modify.
Circle the adjectives in each sentence, and draw an arrow to the nouns that they describe.
1. Using a computer is difficult.
2. Paper airplanes are fun.
3. I want to buy that new red car.
4. We can use the old paper in my English notebook.
5. The video game is expensive because it is new.
6. Airplanes are fast, but boats are slow.
7. Don’t draw disgusting pictures!
8. That picture looks like a real tiger!
9. I think math is easy.
10. The expensive, new car is very fast.
Everyone is at risk of homelessness. A job loss, a house fire, a natural disaster or a relationship breakdown all bring with them the risk of losing one’s home and becoming homeless. For most people, structural factors play the biggest role in becoming homeless, although personal history and individual characteristics also play a role. Structural factors include: the growing gap between the rich and the poor, a lack of affordable housing, low social assistance and other income supports, low vacancy rates and discrimination (including racism, sexism, homophobia and ageism). Personal history and individual characteristics include: catastrophic events, loss of employment, family break up, physical or mental health issues, substance use by oneself or family members, a history of physical, sexual or emotional abuse, and current or past involvement in the child welfare system. Homelessness exists in every community across the country even if it is not visible.
Homelessness is experienced differently by various populations. For example, the ‘working poor’ and single-parent families with children often live in sub-standard or overcrowded housing. They are unable to afford a decent place to live in addition to paying other bills including food, health care, clothing and transportation. Often this group is part of the ‘hidden homeless’ population; approximately 50,000 people are considered to be hidden homeless on any given night in Canada.
In this section, the challenges experienced by various sub-populations are detailed. We hope to help you understand the unique needs faced by people experiencing homelessness including youth, the elderly, families with children, newcomers, racialized communities, members of the LGBTQ community, Aboriginal Peoples, women and men.
Understanding the variety of factors that may lead to homelessness is not easy considering the heterogeneity of the population. There are many pathways into and out of homelessness. It is often said that the only commonality amongst people experiencing homelessness is that they lack access to safe, secure and affordable housing. The collection of relevant and valid demographic data is a key factor in developing suitable and relevant programs. Service providers and researchers increasingly recognize that understanding the distinct challenges of sub-populations and providing supports and services directed to these needs will help improve solutions to ending homelessness. |
Name: _________________________    Period: ___________________
This quiz consists of 5 multiple choice and 5 short answer questions through Chapter 4, The Greek Way of Writing.
Multiple Choice Questions
1. The power of the Egyptian priests came from ____________________________.
(a) the ignorance and fear of others
(b) their ability to teach
(c) the people
(d) their power over the king
2. The first games, contests and competitions were held in _____________.
3. The literature indicates that life differed in Athens, as compared to Egypt, by which century?
4. Poetic license ________________________________________.
(a) allows the poet to write in whatever way he wants
(b) requires facts
(c) calls for simple statements
(d) requires briefness
5. The Hindu artist ________________________.
(a) was dictated to by Hindu priests
(b) had limited freedom
(c) had no freedom
(d) had the most freedom
Short Answer Questions
1. Which of the following terms does not describe Greek writing?
2. Which Pharaoh opposed the priests?
3. One of the differences in the society of the Greeks was the __________________.
4. What word best describes the ancient Athenians?
5. Greek writing can be characterized as ______________________.
Humans communicate using language. They can read, write and speak. We may wonder how animals communicate. The first question that arises is "do they really communicate?" The answer is "yes." Besides humans, a few animals, such as chimpanzees, gorillas, and orangutans (a type of great ape found only in Asia), have something approaching language. They do not speak words and they certainly cannot read or write. But this does not mean that animals don't communicate. In reality, it is possible to recognize meaning in a wide variety of sounds made by animals. Animals change the rate and structure of their sound production to convey different messages to one another.
Vocalization is the general term for producing sound with the voice. Birds are among the very best animal communicators. Their vocalizations are often referred to as "calls" or "songs." These sounds generally serve three important purposes. One is to announce to potential mates that the singer is available; another is to announce to rivals, "This location is occupied. Don't come any closer." The third important function of bird vocalizations is to warn other birds in the area that a predator is nearby and that they should be wary of it.
Frogs also produce sounds. They use "songs" or "calls" to mark their territories and to inform potential mates of their presence. Bullfrogs often claim small territories on communal breeding grounds and guard those spots throughout the night, backing up threatening postures with vocalizations that can be more easily interpreted in complete darkness. Lions also use menacing roars to establish dominance in their territory, and they attract mates by roaring.
Social creatures, including chimpanzees, whales and seals, also rely heavily on vocalizations to communicate. Calls or songs are often the best way for individuals to locate and recognize others from their social group. For example, whales often swim outside of visual contact with members of their group; with nearly constant clicks and whistles, a whale can keep track of the whereabouts of every other whale in the group. Most marine animals rely on sound for their survival and depend on unique adaptations that enable them to protect themselves, locate food, communicate and navigate underwater. The use of sound for navigation by whales and dolphins is discussed below.
Similar to the SONAR systems on navy ships, some whales use sound to detect, localize, and differentiate between objects, including obstacles and other whales nearby. By emitting short pulses of sound (called clicks), these marine mammals can listen for echoes and detect objects underwater. Some whales and dolphins use echolocation to locate food and mates. They send out high-intensity, high-frequency pulsed sounds that are reflected back when they strike a target. The echo helps the whale or dolphin identify the size and shape of an object, determine the direction in which it is moving, and estimate its distance. Echolocation is a very sophisticated way of locating prey and can even be used to find prey hidden in the sand.
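To make the echo arithmetic concrete, here is a minimal sketch (an illustration, not from the original article) that estimates a target's range from the round-trip delay of a click, assuming a typical speed of sound in seawater of roughly 1,500 m/s.

```python
# Rough sketch: estimating target range from an echo delay, the same principle
# behind both a whale's biological sonar and a ship's SONAR.

SPEED_OF_SOUND_SEAWATER = 1500.0  # metres per second (typical approximate value)


def range_from_echo(delay_seconds: float) -> float:
    """Approximate one-way distance to a target given the round-trip echo delay.

    The click travels out and back, so the distance is speed * delay / 2.
    """
    return SPEED_OF_SOUND_SEAWATER * delay_seconds / 2.0


if __name__ == "__main__":
    # An echo that returns after 0.2 s implies a target roughly 150 m away.
    print(f"Target at about {range_from_echo(0.2):.0f} m")
```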
Chimpanzees also use vocalizations to keep track of members of their group. However, chimpanzees and other great apes, which include gorillas and orangutans, make far more than simple contact sounds. Scientists who study chimpanzees have identified about three dozen chimpanzee vocalizations, and each vocalization has its own meaning. In fact, if we consider their sophisticated body language and wide range of facial expressions together with the sounds they make, chimpanzees seem to have very little difficulty communicating just about anything they need to say. Different animals' sounds also go by different names.
In the summer of 2012, scientist and entrepreneur Russ George sailed purposefully past the coast of Vancouver to the archipelago of Haida Gwaii. There, he proceeded to dump 100 tons of iron sulfate into 10,000 square miles of ocean.
The Haida Indians had given him their blessing. George was the director of the Haida Salmon Restoration Corporation, and the Haida Indians were told that this iron would fertilize the plankton, a valuable feedstock for the native salmon. But George’s intentions went beyond fish farming: adding iron would allow swarms of plankton to blossom, which would draw down massive amounts of carbon dioxide. Russ George claimed to have found a solution for amending the starving salmon population and mitigating the rising concentration of greenhouse gases in one fell swoop.
Most experts, however, were infuriated.
Since then, George has become an infamous case of the dangerous line between ingenuity and recklessness. Supporters argue that such drastic measures may be needed in the future unless we somehow reduce our greenhouse gas emission. But most scientists and policymakers argue that his hasty deed had no scientific merit, and could cause irreversible damage to the ocean environment.
How could an experiment with such good intentions have gone so wrong?
The idea of “ocean iron fertilization” as a means of climate control dates back to 1988, when oceanographer John Martin quipped, “Give me half a tanker of iron and I will give you another ice age.” Since then, scientists have researched the consequences of adding iron to fertilize oceanic plankton, with promising results. Small-scale experiments demonstrated that simply adding iron to certain ocean waters, such as the Indian Ocean, would result in a bloom of plankton. A high concentration of iron led some plankton blooms to grow so large that they could be seen by satellite. Furthermore, these plankton blooms were accompanied by measurably lower levels of CO2 at the surface of the water.
According to Martin’s theory, plankton would consume the CO2 and incorporate the carbon into their bodies. Then, after a healthy life, the plankton would die, sinking to the bottom of the ocean and dragging that carbon with them. Ocean iron fertilization would therefore be a means of converting CO2 into biomass buried deep in the ocean.
Only one problem stands between Russ George’s solutions and a greener, happier world: it won’t work. Scientists have now all but disregarded ocean iron fertilization as a means of climate control. Sitting right beneath the planktonic bloom is a ravenous microbial community, just waiting for dead cells to fall on their dinner platters. Once these microbes consume the fallen plankton, they convert its carbon back to CO2, which bubbles right back up to the atmosphere. In fact, for the plankton to drag converted carbon to the bottom of the ocean, it first must make it past a whole kilometer of hungry microbes.
But even if the dying plankton cells managed to dodge all the scavengers in the water for that full kilometer, new research shows that ocean circulation would still bring that converted carbon right back to the surface in 38 years or less. This is much shorter than the hundreds of years CO2 would need to stay at the bottom of the ocean to make any noticeable difference in the climate.
To make matters worse, Russ George violated the international moratorium on geoengineering – the practice of manipulating Earth’s climate through large-scale perturbations of the environment. He also violated an international treaty on ocean pollution. Strict rules were in place to prevent this type of experiment, as large-scale ocean iron fertilization has the potential for many unexpected and terrifying consequences.
For example, as a sinking plankton bloom is consumed by the underlying microbes, the microbes’ demand for oxygen will be fierce, which may lead to oxygen-deficient areas of the ocean. This lack of oxygen could be devastating to other sea life. Plankton can also directly secrete toxins; while not all plankton are harmful, large plankton blooms in the Gulf of Mexico have previously led to murky red tides of excreted toxins, which left hordes of dead fish washing up on shore.
Most critical scientists know not to let these horrific possibilities get too far in their heads; the fact is we don’t have enough data to know whether any of these catastrophic side effects will occur. What most scientists are more concerned with is the irreversibility of such experimentation. With large-scale ocean iron fertilization, any unforeseen consequences may not emerge until it’s too late.
The real battle
Anecdotally, within a year of the iron dump, the salmon population did appear vibrant, although it’s possible this was simply because of salmon migration patterns and not the iron. Nonetheless, due to the outrage over the potential irreversible effects of iron fertilization, Russ George was fired as the head of the Haida Salmon Restoration Corporation in May 2013. Since then, Russ George’s name has trickled in and out of the media, bringing mumblings of a public still undecided on the merits of his deeds.
But the most troubling part of Russ George’s antics had nothing to do with ocean fertilization. The real scare was a well-educated man initiating a high-risk solution without waiting for the proper lines of peer approval. For every scientific hypothesis, there will be scores of highly qualified researchers checking the counter-points. This is normally a good thing, preventing falsities and promoting new discovery. But navigating this system takes extreme patience; researchers can be long in the grave before a peer-approved solution is agreed upon. What happens when an impatient scientist just says “to hell with it?”
In the end, Russ George’s solution did not result in any of the potentially devastating impacts, but it also had zero effect on the climate. The experimenter himself, however, is still hotly debated to this day. Is Russ George merely a man of action with the balls to leapfrog academic and policy gridlock and experiment on a scale that actually matters? Or is he a rogue scientist, a reckless cowboy experimenting on the environment with no care for tomorrow’s consequences? The debate may continue until another scientist-gone-rogue becomes our savior or our doom.
Environmental Law 15 - Hazardous wastes
- What is hazardous waste?
- What legislation controls its disposal?
- What forms of hazardous waste are found in the home?
- What type of landfill sites can accommodate hazardous wastes?
There are no universally applied definitions of “hazardous waste”. The Department of Water Affairs and Forestry defines hazardous waste as “An inorganic or organic element or compound that, because of its toxicology, physical, chemical or persistency properties, may exercise detrimental acute or chronic impacts on human health and the environment” (Source: The minimum Requirements for the Handling, Classification & Disposal of Hazardous Wastes).
The definition further describes hazardous waste as a waste that directly or indirectly represents a threat to human health or the environment by introducing one or more of the following risks: (i) explosion or fire; (ii) infections, pathogens, parasites or their vectors; (iii) chemical instability, reactions or corrosion; (iv) acute or chronic mammalian toxicity; (v) cancer, mutations or birth defects; (vi) toxicity, or damage to the ecosystems or natural resources; (vii) accumulation in biological food chains, persistence in the environment, or multiple effects to the extent that it requires special attention and cannot be released into the environment or be added to sewage. E.g. chemicals used in industry such as mercuric sulphate (toxic), sodium hydroxide (corrosive), ethanol alcohol (ignitable/flammable) or hydrogen peroxide (reactive). Many of these substances are well known to us and used by us on a daily basis.
Legislation which controls the storage, handling, treatment, transport and disposal of hazardous waste includes:
- National Environmental Management Act (Act 107 of 1998)
- Environmental Conservation Act 73 of 1989
- Hazardous Substances Act 15 of 1973
- The Occupational Health and Safety Act 85 of 1993
- National Road Traffic Act 93 of 1996
- DWAF Minimum Requirements
In order to identify which waste substances in the home are “hazardous” one needs to refer to the SANS (South African National Standards) 10228 booklet. This booklet classes the waste in to 1 of 9 classes. The 9 classes includes the following: (i) Explosives, (ii) Gases, (iii) Flammable liquids, (iv) Flammable solids, (v) Oxidisers, (vi) Toxic & infectious substances, (vii) Radioactive, (viii) Corrosives and (ix) Miscellaneous dangerous substances and goods.
Typical examples of hazardous wastes which would need to be disposed of at a registered hazardous landfill site are: zinc batteries (Class 6), used fluorescent tubes (Class 6), broken mercury thermometers (Class 6), used printer cartridges (Class 6), paint wastes (Class 3), household cleaners (Classes 3, 6 and 8), antifreeze (Class 6) and spent disinfectants (Class 8), amongst others. It is of interest that the oils trapped within grease traps are also considered hazardous, and this waste should be disposed of with greater care.
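As a toy illustration of the SANS 10228 scheme described above (a sketch written for this article, not an official classification tool), the nine classes can be encoded as a simple lookup and applied to the household examples just listed.

```python
# Illustrative sketch only: a lookup of the nine SANS 10228 classes listed above,
# applied to some of the household examples mentioned in the text.

SANS_10228_CLASSES = {
    1: "Explosives",
    2: "Gases",
    3: "Flammable liquids",
    4: "Flammable solids",
    5: "Oxidisers",
    6: "Toxic & infectious substances",
    7: "Radioactive",
    8: "Corrosives",
    9: "Miscellaneous dangerous substances and goods",
}

# Household examples and their classes, as given in the text.
HOUSEHOLD_EXAMPLES = {
    "zinc batteries": 6,
    "used fluorescent tubes": 6,
    "broken mercury thermometers": 6,
    "paint wastes": 3,
    "antifreeze": 6,
    "spent disinfectants": 8,
}

if __name__ == "__main__":
    for item, cls in HOUSEHOLD_EXAMPLES.items():
        print(f"{item}: Class {cls} ({SANS_10228_CLASSES[cls]})")
```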
Hazardous wastes are then classified into four classes according to their Hazard Rating: HR 1 – Extreme Hazard, HR 2 – High Hazard, HR 3 – Moderate Hazard and HR 4 – Low Hazard. Classification of wastes into these types depends on their toxicity value, the LC50, which is a statistical estimate of the concentration of a chemical that will kill 50% of a given population of aquatic organisms under standard control conditions. For example, mercury has an LC50 value of 0.22 and would have a Hazard Rating of HR 3 (Source: Waste Classification Course Handout – Institute of Waste Management). One can obtain LC50 values from the Minimum Requirements for the Handling, Classification & Disposal of Hazardous Wastes (DWAF publication – available on the web at www.dwaf.co.za). A waste would be disposed of in one of two types of hazardous landfill sites (an H:H or an H:h) depending on its specific Hazard Rating, so it is important to know what wastes you are producing and where to dispose of them.
One can consult their local DWAF or DAEA branch in order to verify if a waste is hazardous and what implications it would mean in terms of handling, classification and disposing of the waste (this would include transportation of the waste).
The author will not be held responsible for misinterpretation of the law, and strongly recommends that readers consult their local DAEA branch for clarification.
Afzelia Environmental Consultants cc
Today’s invasive species: the water hyacinth.
Native to the Amazon basin, but considered an ornamental aquarium plant, the water hyacinth was introduced to Florida in 1884. By the mid-1950s, water hyacinths were clogging Florida’s water ways and interfering with navigation, not to mention displacing the native species. Clean up took millions of dollars, and they’re still spreading on every continent except Antarctica.
Guess how many of the species on the International Union for the Conservation of Nature’s list of the one-hundred worst invasive species are the result of aquarium and ornamental releases?
A full third! You see, when it comes to aquarium animals and plants, we’re dealing with mature adults, and particularly hardy ones at that, since the weaker ones don’t survive transport. So whenever they’re released into the environment, either intentionally or accidentally, they’re better able to establish themselves.
Despite all this, until recently researchers have largely ignored the role of pet fish and aquarium plants when studying the spread of exotic and invasive species. Finally, it’s time for some guidelines, especially ones that encourage the trade of less invasive and aggressive species, or the substitution of native and/or safer species that people could grow instead. |
Several thousand years ago, whether you were an Egyptian with migraines or a feverish Greek, chances are your doctor would try one first-line treatment before all others: bloodletting. He or she would open a vein with a lancet or sharpened piece of wood, causing blood to flow out and into a waiting receptacle. If you got lucky, leeches might perform the gruesome task in place of crude instruments.
Considered one of medicine’s oldest practices, bloodletting is thought to have originated in ancient Egypt. It then spread to Greece, where physicians such as Erasistratus, who lived in the third century B.C., believed that all illnesses stemmed from an overabundance of blood, or plethora. (Erasistratus also thought arteries transported air rather than blood, so at least some of his patients’ blood vessels were spared his eager blade.) In the second century A.D., the influential Galen of Pergamum expanded on Hippocrates’ earlier theory that good health required a perfect balance of the four “humors”—blood, phlegm, yellow bile and black bile. His writings and teachings made bloodletting a common technique throughout the Roman empire. Before long it flourished in India and the Arab world as well.
In medieval Europe, bloodletting became the standard treatment for various conditions, from plague and smallpox to epilepsy and gout. Practitioners typically nicked veins or arteries in the forearm or neck, sometimes using a special tool featuring a fixed blade and known as a fleam. In 1163 a church edict prohibited monks and priests, who often stood in as doctors, from performing bloodletting, stating that the church “abhorred” the procedure. Partly in response to this injunction, barbers began offering a range of services that included bloodletting, cupping, tooth extractions, lancing and even amputations—along with, of course, trims and shaves. The modern striped barber’s pole harkens back to the bloodstained towels that would hang outside the offices of these “barber-surgeons.”
As hairdressers lanced veins in an attempt to cure Europeans’ ailments, in pre-Columbian Mesoamerica bloodletting was believed to serve a very different purpose. Maya priests and rulers used stone implements to pierce their tongues, lips, genitals and other soft body parts, offering their blood in sacrifice to their gods. Blood loss also allowed individuals to enter trance-like states in which they reportedly experienced visions of deities or their ancestors.
Bloodletting as a medical procedure became slightly less agonizing with the advent in the 18th century of spring-loaded lancets and the scarificator, a device featuring multiple blades that delivered a uniform set of parallel cuts. Respected physicians and surgeons extolled the practice, generously prescribing it to their most esteemed patients. Marie-Antoinette, for instance, seemed to benefit from a healthy dose of bloodletting while giving birth to her first child, Marie-Thérèse, in 1778, 14 years before the guillotine would shed more of the queen’s blood. As an excited crowd thronged her bedchamber, hoping to witness a dauphin’s arrival, the mother-to-be fainted, prompting her surgeon to wield his lancet. Marie-Antoinette immediately revived after the bloodletting—perhaps because the windows were simultaneously opened to let in fresh air.
America’s first president was less fortunate than France’s most infamous queen. On December 13, 1799, George Washington awoke with a bad sore throat and began to decline rapidly. A proponent of bloodletting, he asked to be bled the next day, and physicians drained an estimated 5 to 7 pints in less than 16 hours. Despite their best efforts, Washington died on December 17, leading to speculation that excessive blood loss contributed to his demise. Bloodletting has also been implicated in the death of Charles II, who was bled from the arm and neck after suffering a seizure in 1685.
By the late 1800s new treatments and technologies had largely edged out bloodletting, and studies by prominent physicians began to discredit the practice. Today it remains a conventional therapy for a very small number of conditions. The use of leeches, meanwhile, has experienced a renaissance in recent decades, particularly in the field of microsurgery. |
I put a few inches of water in the pool, we gathered supplies from around the house and yard and started throwing them into the pool. After every addition I said, "it floats!" or "it sinks!" Buddy did repeat "sink!" or "foat!" a couple of times, but he was more interested in pouring the water out of the containers we had brought outside.
[Photo: Buddy getting ready to throw in a rock.]
[Photo: "Sink!" The rock sinks and makes a big splash! Buddy liked to watch it splash over and over...]
[Photo: Buddy would rather make waves than conduct the experiment I had laid out.]
Sometimes we just have to go with the flow and let kids be kids.
Young kids don't always have the same thought process as adults. Sometimes it is just better to let them try their own experiment. At this age I just want to see Buddy explore and figure out how things work. The real lesson is cause and effect, not density.
[Photo: Today Buddy is checking out gravity instead of density.]
There are so many ways to adapt this experiment for kids. Get down on your child's level and make it fun for them while they learn. |
Neo-classicism was a movement in painting which reflected political changes in Europe. The French Revolution, which began in 1789, stressed the virtues of Roman civilization, including discipline and high moral principles. Neo-classical artists helped educate the French people in the goals of the new government. They painted inspirational scenes from Roman history to create a feeling of patriotism. The leading neo-classical painters were Jacques-Louis David and Jean-Auguste-Dominique Ingres of France.
Romanticism was a reaction against the neo-classical emphasis on balanced, orderly pictures. Romantic paintings expressed the imagination and emotions of the artists. The painters replaced the clean, bright colors and harmonious compositions of neo-classicism with scenes of violent activity dramatized by vigorous brushstrokes, rich colors, and deep shadows.
Two English painters - John Constable and Joseph M. W. Turner - made important contributions to romanticism. Constable was a master of landscape painting. He developed a style of rough brushstrokes and broken color to catch the effects of light in the air, trees bent in the wind, and pond surfaces moved by a breeze. In his works he tried to capture in oil paintings the fresh quality of watercolor sketches.
Turner was increasingly concerned with the effects of color. In his late works, color became one dazzling swirl of paint on the canvas. The influence of Constable and Turner appeared during the late 1800's in the works of the French impressionists.
Realism. As neo-classicism and romanticism declined, a new movement - realism - developed in France. Gustave Courbet became the first great master of realistic painting. Courbet painted landscapes, but his vision of nature was not so idealized as that of other painters. He recorded the world around him so sharply that many of his works were considered social protests. In one painting, for example, he portrayed an old man and a youth in the agonizing work of breaking rocks with hammers. The artist implied that something is wrong with a society that allows people to spend their lives at such labor. The neo-classicists called Courbet's paintings low and vulgar. But Courbet's works helped change the course of art. The paintings were based on the artist's honest, unsentimental observations of life around him. From Courbet's time to the present day, many painters have adopted his approach.
The Pre-Raphaelite Brotherhood was an English art and literary movement founded in 1848. The leading painters of the movement were William Holman Hunt, Sir John Everett Millais, and Dante Gabriel Rossetti. The Pre-Raphaelite painters stood apart from the major art movements of their century. They wanted to return to what they believed was the purity and innocence of painting before Raphael. Most Pre-Raphaelite art carries a strong moral message, often delivered through religious paintings.
Edouard Manet was a French artist who revolutionized painting in the mid-1800's. He developed a new approach to art, believing that paintings do not have to express messages or portray emotions. Manet was chiefly interested in painting beautiful pictures. To him, beauty resulted from a combination of brushstrokes, colors, patterns, and tones. Since Manet's time, most painters have emphasized the picture itself rather than its storytelling function. His "Luncheon on the Grass" illustrates this lack of concern for story.
Impressionism was developed by a group of French painters who did their major work between about 1870 and 1910. The impressionists included Claude Monet, Pierre Auguste Renoir, and Edgar Degas. Like Manet, the impressionists chose to paint scenes from everyday life, including buildings, landscapes, people, and scenes of city traffic. Most of the people in their pictures were ordinary middle-class city dwellers - like the painters themselves.
The impressionists developed a revolutionary painting style. They based it on the fact that nature changes continually. Leaves move in the wind, light transforms the appearance of objects, reflections alter color and form. As the viewer moves, the perspective of what is seen changes. The impressionists tried to create paintings that capture ever-changing reality at a particular moment - much as a camera does.
Postimpressionism describes a group of artists who attempted in various ways to extend the visual language of painting beyond impressionism. The most influential postimpressionists were Paul Cezanne, Paul Gauguin, and Vincent van Gogh. All were French except van Gogh, who was Dutch. Unlike the impressionists, who emphasized light, Cezanne stressed form and mass. The distortions in his pictures add force to the composition and give the subject an appearance of permanence and strength. Gauguin's pictures are highly decorative. Gauguin stressed flat color, strong patterns, unshaded shapes, and curved lines. He constantly searched for purity and simplicity in life. His search led him to the South Seas, where he settled on the island of Tahiti. Like Gauguin, van Gogh wanted to express his innermost feelings through his art. He believed he could achieve this goal through the use of brilliant color and violent brushstrokes. He applied his oil colors directly from the tube, without mixing them. The result was an art of passionate intensity. Artists of the 1900's have continued the search for new approaches to painting that characterized the work of the impressionists and postimpressionists. Many art movements appeared during the 1900's. Each lasted only a few years but added to the richness and variety of modern art. They include fauvism, cubism, futurism, expressionism, dadaism, surrealism, and others. As time passed, painters of the 1900's increasingly emphasized purely visual impact rather than recognizable subject matter or storytelling.
Some art critics say that too much of today's painting is concerned only with originality and novelty. These critics agree that artists should discard traditions that no longer meet their needs. But they point out that most great advances in style and technique were achieved because artists believed they needed new methods to express beliefs or ideas. Sometimes artists strive only to create original painting styles. But originality for its own sake becomes boring unless the painting has qualities that help it remain significant and interesting after its novelty has worn off. |
Highlights DNA Replication (continued)
1. Replication of linear eukaryotic chromosomes is more complex than replicating the circular chromosomes of prokaryotic cells. During each round of eukaryotic DNA replication, a small portion of the DNA at the end of the chromosome (known as a telomere) is lost. Telomeres consist of thousands of copies of the same short nucleotide sequence. Telomere length may be related to cellular lifespan.
2. Telomerases are enzymes that make telomeres, and they are active in fetal cells. They serve to elongate the ends of linear chromosomal DNAs, adding thousands of repeats of a short sequence of non-coding ("junk") DNA. This repeated region is the telomere. At each round of eukaryotic DNA replication, a short stretch at the end of the DNA is lost, shortening the telomere. The longer a telomere is, the more times a cell can divide before it starts losing important DNA sequences.
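As a rough illustration of that last point, the toy simulation below (every number in it is hypothetical, not taken from these notes) counts how many divisions a cell can undergo before its telomere drops below a critical length.

```python
# Illustrative sketch only (hypothetical numbers): each replication trims the
# telomere, and division stops once the telomere is too short to spare.

def divisions_before_senescence(telomere_bp: int,
                                loss_per_division_bp: int,
                                critical_length_bp: int) -> int:
    """Count divisions until the telomere would fall below the critical length.
    A longer starting telomere allows more divisions."""
    divisions = 0
    while telomere_bp - loss_per_division_bp >= critical_length_bp:
        telomere_bp -= loss_per_division_bp
        divisions += 1
    return divisions


if __name__ == "__main__":
    # Hypothetical figures chosen only to show the trend.
    print(divisions_before_senescence(10000, 100, 4000))  # 60
    print(divisions_before_senescence(6000, 100, 4000))   # 20
```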
3. Tumor cells are another cell type that has an active telomerase. This probably is a factor that enables them to be "immortal".
4. Telomerase acts as a reverse transcriptase, using an RNA template that it carries with it to copy and so make the repetitive sequences of the telomere.
5. Eukaryotic cells tightly control the process that leads to their division. The cycle is called the cell cycle and the protein p53 plays an important role. If p53 detects that replication has not completed properly, it stimulates production of repair proteins that try to fix the damage. If the damage is fixed, the cell cycle continues and the cell ultimately divides. If the damage cannot be fixed, p53 stimulates the cell to commit suicide - a phenomenon called apoptosis.
1. Transcription is the making of RNA using DNA as a template. Transcription requires an RNA polymerase, a DNA template and 4 ribonucleoside triphosphates (ATP, GTP, UTP, and CTP). Prokaryotic cells have only a single RNA polymerase. Transcription occurs in the 5' to 3' direction. RNA polymerases differ from DNA polymerases in that RNA polymerases do NOT require a primer.
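A minimal sketch of this base-pairing logic (an illustration added here, not part of the original notes): the RNA is built 5' to 3' as the complement of the DNA template strand, with U placed opposite A.

```python
# Minimal sketch: transcribing a DNA template strand into RNA. The polymerase
# reads the template 3'->5' and builds the RNA 5'->3', pairing A->U, T->A,
# G->C and C->G; unlike DNA polymerase, no primer is needed.

TEMPLATE_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}


def transcribe(template_3_to_5: str) -> str:
    """Return the RNA transcript (5'->3') for a DNA template strand supplied
    in the 3'->5' direction."""
    return "".join(TEMPLATE_TO_RNA[base] for base in template_3_to_5.upper())


if __name__ == "__main__":
    # Template (3'->5'): TACGGT  ->  RNA (5'->3'): AUGCCA
    print(transcribe("TACGGT"))
```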
2. Transcription requires DNA strands to be opened to allow the RNA polymerase to enter and begin making RNA. Transcription starts near special DNA sequences called promoters.
3. A factor known as sigma associates with the RNA polymerase in E. coli and helps it to recognize and bind to the promoter. A promoter is a sequence in DNA that is recognized by the RNA Polymerase-Sigma complex. (Note that sigma factor binds to BOTH the RNA Polymerase and to the promoter sequence in the DNA. Note also that sigma factor is a PROTEIN). Genes that are to be transcribed have a promoter close by to facilitate RNA Polymerase binding to begin transcription.
4. Promoters in E. coli have two common features. The first is a sequence usually located about 10 base pairs "upstream" of the transcription start site (the location where the first base of RNA is made). This sequence is known as the "-10" sequence or the Pribnow (TATA) box, so named because the most common version of it (known as the consensus sequence) is 5'-TATAAT-3'. The second common feature of E. coli promoters is located about 35 base pairs upstream of the transcription start site. Eukaryotic promoters also frequently have a TATA box, but in a slightly different position.
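To show what "consensus" means in practice, the sketch below (an added illustration, not from the notes) scans a made-up sequence for hexamers within one mismatch of the -10 consensus; real promoters usually only resemble TATAAT rather than match it exactly.

```python
# Rough sketch: scanning a DNA sequence for near-matches to the E. coli -10
# (Pribnow box) consensus 5'-TATAAT-3', allowing a limited number of mismatches.

CONSENSUS = "TATAAT"


def find_pribnow_boxes(sequence: str, max_mismatches: int = 1):
    """Yield (position, hexamer, mismatches) for windows close to the consensus."""
    sequence = sequence.upper()
    for i in range(len(sequence) - len(CONSENSUS) + 1):
        window = sequence[i:i + len(CONSENSUS)]
        mismatches = sum(a != b for a, b in zip(window, CONSENSUS))
        if mismatches <= max_mismatches:
            yield i, window, mismatches


if __name__ == "__main__":
    # Made-up sequence containing one near-consensus hexamer (TATACT).
    for hit in find_pribnow_boxes("GGCCTATACTGGAGGC"):
        print(hit)  # (4, 'TATACT', 1)
```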
5. Transcription occurs in three phases - initiation, elongation, and termination. Binding of RNA Polymerase and sigma is the first step in transcription (initiation). After polymerization starts, sigma factor leaves the RNA polymerase and the elongation process continues.
6. Termination of transcription in E. coli occurs by several mechanisms. One I discussed in class is factor independent transcription termination, which occurs as a result of a hairpin loop forming in the sequence of an RNA. When it forms, it "lifts" the RNA polymerase off the DNA and everything falls apart and transcription stops at that point.
7. Factor-dependent termination is caused by a protein called rho. Rho works by binding to the 5' end of the RNA and sliding along the RNA faster than the RNA polymerase makes it. When rho catches the RNA polymerase, it causes the polymerase to dissociate from (come off of) the DNA and release the RNA.
8. An operon is a collection of genes all under the control of the same promoter. When an operon is transcribed, all of the genes on the operon are on the same mRNA. Operons occur in prokaryotes, but not eukaryotes. In eukaryotes, each gene is made on individual mRNAs and each gene has its own promoter. |
Salvestrols are plant derived compounds (phytonutrients) essential for wellbeing that cannot be made in the body and must therefore be supplied through our diet. As a group, these substances are chemically unrelated but nevertheless confer their benefits in a similar manner by reacting with a particular enzyme. This enzyme converts salvestrols into a form that is toxic to malfunctioning cells but because it is only present in sick cells salvestrols do not harm healthy cells.
One reason for the disappearance of salvestrols from the diet is that they all have a bitter taste. As a result of the modern trend toward sweet flavours, plant sources that would normally be rich in salvestrols are shunned as sweeter tasting varieties are bred or selected to suit modern tastes. Furthermore, the trend towards producing foods without adding sugars or sweeteners is also causing salvestrols to be removed by manufacturing processes that filter out bitter substances so that the finished product will taste sweeter.
However, the use of many modern fungicides and crop protection chemicals means that plants which are not organically grown will not express high concentrations of salvestrols, because they are never exposed to the attacks which cause the plant to produce them at such levels. It is estimated that about 100 years ago we consumed roughly ten times the amount of salvestrols in our diet that we do now.
We can find high levels in the following fruits and vegetables:
Apples, Blackcurrants, Blueberries, Cranberries, Grapes ( wine) Oranges, Strawberries and Tangerines.
Aubergines, Artichokes ( globe) Avocado, Broccoli, Brussel Sprouts, Cabbage, Cauliflower, Olives, Red/Yellow Peppers. |
The centrifuge problem states that a centrifuge with some arbitrary number of test tubes in it must be radially balanced — that is, its center of mass must be in the center of the centrifuge. An unbalanced, spinning centrifuge will shake and "walk" across a desk like an unbalanced washing machine, and the vibrations caused by the unbalanced load will damage the spinning mechanism. So it is critical to balance it as well as possible, especially in very high-speed applications.
The specific problem given in the node Hard interview questions states that the centrifuge has 12 evenly spaced slots for test tubes, and that all the test tubes have equal mass. The solutions for 0, 2, 3, 4, and 6 tubes are trivial since they divide evenly into 12 and can therefore be evenly spaced. Likewise the solutions for 8, 9, 10, and 12 are exactly the same if you consider evenly spacing 0, 2, 3, or 4 empty slots rather than full slots.
Clearly the 1 and 11 test tube cases are not possible because there is nothing left with which to balance one and only one full or empty slot. This is also obvious and trivial.
However, the 5 and 7 test tube cases are nontrivial. 5 and 7 do not divide evenly into 12 and therefore cannot be evenly spaced around the centrifuge. Here is one possible arrangement which might work (treating the slots as clock positions, with C standing for slot 12 and B for slot 11 below): tubes in slots 12, 3, 4, 8, and 9.
One commonly given proof that this arrangement is valid is that it is the superposition of two other valid arrangements, the solutions for 2 and 3. But this doesn't tell you anything if you don't understand superposition.
We can check to see if it is valid by calculating its center of mass. We can do this with basic trigonometry. If this arrangement is balanced both left to right and top to bottom, then it is balanced. Trigonometry allows us to find the horizontal and vertical components of each item's mass with respect to the center by drawing triangles. The hypotenuse of the triangle is the actual mass of the test tube and the horizontal and vertical legs represent the horizontal and vertical components of its mass.
First we see that the mass at C (slot 12) has only a vertical component. The masses at 3 and 9 likewise have only a horizontal component. So we only need triangles to find the horizontal and vertical components of 4 and 8. Recall the trigonometric identities:
sinθ = opposite/hypotenuse        cosθ = adjacent/hypotenuse
Each slot represents 30° (360°/12). For convenience, let us assume the mass of each test tube is one (zero being an easier but far less interesting case). This sets the hypotenuse equal to 1, which therefore drops out of the equations, leaving us with:
Vertical components:   sin(-30°) = -1/2  and  sin(-150°) = -1/2
Horizontal components: cos(-30°) = √3/2  and  cos(-150°) = -√3/2
Now we add up all the vertical and horizontal components of the masses, defining up and right as positive.
Vertical:   C + 9 + 3 + 8 + 4 = 1 + 0 + 0 + (-1/2) + (-1/2) = 0   Balance!
Horizontal: C + 9 + 3 + 8 + 4 = 0 + (-1) + 1 + (-√3/2) + √3/2 = 0   Balance!
Therefore this solution is valid. The 7 test tube case is the same, just with the empty and full slots reversed.
In fact, this is only one valid solution to the problem. By running the math the same way, we see that any combination of any solution for 2 test tubes and 3 test tubes will work, so long as you don't try to put two test tubes in the same slot. For example, the case C, 4, 8, B, 5, although it looks unbalanced at first glance, still works:
Vertical:   C + 4 + 8 + B + 5 = 1 + (-1/2) + (-1/2) + √3/2 + (-√3/2) = 0   Balance!
Horizontal: C + 4 + 8 + B + 5 = 0 + √3/2 + (-√3/2) + (-1/2) + 1/2 = 0   Balance!
Really this is just the same case rotated 120 degrees. C, 4, 8, 1, 7 is the mirror image of that and also works. Superposition would have told us this immediately.
As mentioned earlier, the 5 test tube case is just the 2 and 3 test tube cases put together. Consider the cases separately. A valid 3 test tube case balances, so the center of mass doesn't need to be considered when looking at the 2 test tube case. Likewise a valid 2 test tube case balances, so its center of mass doesn't need to be considered when looking at the 3 test tube case. Each case completely drops out of the analysis of the problem because its center of mass is at the center, exactly where we need it to be.
Essentially, we are saying that two arrangements that each equal zero separately also equal zero when considered together. |
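For readers who would rather let a computer do the bookkeeping, here is a short script (a sketch, not part of the original write-up) that applies the same test: sum a unit vector for each occupied slot and check whether the resultant is zero.

```python
# Sketch: check whether unit test tubes in the given clock-position slots of a
# 12-slot centrifuge leave the centre of mass at the centre.

import math


def is_balanced(slots, tol=1e-9):
    """slots: iterable of clock positions 1..12, one unit mass per slot."""
    x = y = 0.0
    for slot in slots:
        angle = math.radians(90 - 30 * slot)  # slot 12 -> 90 deg, slot 3 -> 0 deg
        x += math.cos(angle)
        y += math.sin(angle)
    return abs(x) < tol and abs(y) < tol


if __name__ == "__main__":
    print(is_balanced([12, 3, 4, 8, 9]))   # True  (the 5-tube solution above)
    print(is_balanced([12, 4, 8, 11, 5]))  # True  (the case C, 4, 8, B, 5)
    print(is_balanced([12, 1, 2, 3, 4]))   # False (five adjacent tubes)
```

Looping the same function over all subsets of a given size would also let you enumerate every valid loading for that number of tubes.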
A huge and growing amount of research has now shown that vitamin D deficiency is very common (at least 50% of the general population and 80% of infants) and plays a major role in the development of many of the chronic degenerative diseases. In fact, vitamin D deficiency may be the most common medical condition in the world, and vitamin D supplementation may be the most cost-effective strategy for improving health, reducing disease, and living longer. Those deficient in vitamin D have twice the rate of death and a doubled risk of many diseases, such as cancer, cardiovascular disease, diabetes, asthma and autoimmune diseases such as multiple sclerosis.
The optimum blood levels of vitamin D and what constitutes vitamin D deficiency are somewhat controversial in mainstream medicine. For optimum health, most experts recommend blood levels of vitamin D3 (25(OH)D3) between 50-80 ng/mL (125-200 nmol/L).
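Because both unit systems appear side by side in the literature, a quick conversion helper is handy. The sketch below (not from the article) uses the approximate factor of 2.5 nmol/L per ng/mL for 25(OH)D, which is why 50-80 ng/mL corresponds to roughly 125-200 nmol/L.

```python
# Quick sketch: converting 25(OH)D3 levels between ng/mL and nmol/L using the
# approximate factor of 2.5 nmol/L per ng/mL.

NMOL_PER_L_PER_NG_PER_ML = 2.5  # approximate conversion factor for 25(OH)D


def ng_ml_to_nmol_l(value_ng_ml: float) -> float:
    return value_ng_ml * NMOL_PER_L_PER_NG_PER_ML


def nmol_l_to_ng_ml(value_nmol_l: float) -> float:
    return value_nmol_l / NMOL_PER_L_PER_NG_PER_ML


if __name__ == "__main__":
    print(ng_ml_to_nmol_l(50))   # 125.0
    print(nmol_l_to_ng_ml(200))  # 80.0
```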
Although an individual’s vitamin D requirement may be met through synthesis of vitamin D from 7-dehydrocholesterol in the skin via exposure to sunlight, most people have serum concentrations of 25-hydroxyvitamin D in the subnormal range and require treatment with supplemental vitamin D.
A new study published in the Journal of the American Board of Family Medicine demonstrates quite clearly that the RDA for vitamin D is grossly inadequate, and considerably higher dosages than the RDA are required to help meet a person’s vitamin D requirement.
Vitamin D3 acts as a vital key to unlock binding sites on the human genome for the expression of the genetic code. The human genome contains more than 2,700 binding sites for D3; those binding sites are near genes that are involved in virtually every known major disease of humans.
Vitamin D Deficiency Syndrome (VDDS) is a newly designated disorder linked to blood levels of D3 less than 25 ng/ml and the presence of at least two of the following conditions:
- Heart Disease
- High blood pressure
- Autoimmune disease
- Chronic fatigue
- Psoriasis or Eczema
- Recurrent infections
Risk Factors for Vitamin D Deficiency
- Insufficient exposure to sunlight- working and playing indoors, covering up with clothes or sunscreen when outside, residing at a high latitude.
- Aging – seniors are at greater risk due to lack of mobility and skin that is less responsive to ultraviolet light.
- Darker skin – high incidence of vitamin D deficiency and its associated conditions in Blacks is widely documented. Blacks are at greatest risk of vitamin D deficiency, due to higher skin melanin content.
- Breastfeeding – breastfeeding will result in vitamin D deficiency in the baby if the mother fails to ensure her own levels are high enough to provide for her baby’s needs. When the mother is deficient, the breast-fed child will be deficient due to the low vitamin D content of the mother’s breast milk.
- Obesity – fat-soluble vitamin D gets trapped in fat tissue, preventing its utilization by the body.
Researchers from the University of Missouri conducted a study to determine whether the recommended doses of vitamin D3 are adequate to correct deficiency and maintain normal blood levels. They also sought to develop a predictive equation for replacement doses of vitamin D.
They reviewed the response to vitamin D supplementation in 1,327 patients and 3,885 episodes of vitamin D supplementation. For the whole population, the average daily dose resulting in any increase in blood levels of vitamin D3 was 4,707 IU/day; corresponding values for ambulatory and nursing home patients were 4229 and 6103 IU/day, respectively.
The authors concluded that the recommended daily allowance for vitamin D (600 to 800 IU) is grossly inadequate for correcting low blood levels of D3 in many adult patients. They estimated that 5000 IU vitamin D3 per day is usually needed to correct deficiency, and the maintenance dose in adults should be equal to or greater than 2000 IU per day. Furthermore, for people living in nursing homes or not getting any direct sunlight, slightly higher dosages may be necessary.
This new study confirms that most adults need to be supplementing with 2,000 to 5,000 IU of vitamin D3 each day. Of course, the ideal method for determining the exact optimal dosage of vitamin D3 is to get a blood test for 25-hydroxyvitamin D3 or 25(OH)D3. Many doctors are now routinely checking vitamin D status in their patients, which is a great service. You can also order a test from www.vitaminDcouncil.org where you collect a small blood sample by skin prick and send it in to the lab. Again, for optimum health, 25(OH)D3 blood levels should be around 50-80 ng/mL (125-200 nmol/L).
Singh G, Bonham AJ. A predictive equation to guide vitamin d replacement dose in patients. J Am Board Fam Med. 2014 Jul-Aug;27(4):495-509. |
Suppose that there are three point masses arranged as shown in the figure at right. Where is the center of mass of this 3-object system?
So, the center of mass of the system is at the point (2.0 m, 1.7 m).
Note: It would have been quicker and easier to notice that the masses in the diagram at left are symmetric about x = 2 m, so the x-coordinate of the center of mass has to be 2.0 m.
Note: Given this arrangement of masses, you would reduce the amount of calculation by placing the origin of the coordinate system at the location of one of the points. I didn't do that for reasons you will soon see.
Suppose that a 2 m by 3 m rectangle is cut from a square piece of plywood which originally had sides of length 4 m (as shown at right). What is the center of mass of the resulting "U-shaped" piece of plywood?
Well, the center of mass of a homogeneous rectangle is at the geometric center of the rectangle (by symmetry). Why not divide the U-shaped piece into three rectangles as shown in the diagram at right - two 1 m x 3 m rectangles and one 4 m x 1 m rectangle. (This is not the only way to divide the U-shape into rectangles, and any of the other ways will work just as well.)
The centers of mass of the three rectangles are indicated in the figure at right. (Does it look familiar?) We can now replace the rectangular sheets by their centers of mass, and using the area of each rectangle as a "stand-in" for its mass (with the origin at the lower-left corner of the original square), you get:

x = [(3 m²)(0.5 m) + (3 m²)(3.5 m) + (4 m²)(2.0 m)]/(10 m²) = 2.0 m

y = [(3 m²)(2.5 m) + (3 m²)(2.5 m) + (4 m²)(0.5 m)]/(10 m²) = 1.7 m
Note: Compare this to the example above.
Moral: If you can "chop up" a continuous object into pieces whose centers of mass are easy to find, you can reduce the problem to finding the center of mass of a system of discrete points.
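The "chop it up" strategy boils down to a mass-weighted average, which the short sketch below applies to the U-shaped plywood (an added illustration, not part of the original page; the piece coordinates assume the decomposition above, with the origin at the lower-left corner of the original square).

```python
# Sketch: centre of mass of a system of point masses (or of pieces replaced by
# point masses at their own centres of mass).

def center_of_mass(pieces):
    """pieces: iterable of (mass, x, y) tuples. Returns (X, Y) of the system."""
    pieces = list(pieces)
    total_mass = sum(m for m, _, _ in pieces)
    x_cm = sum(m * x for m, x, _ in pieces) / total_mass
    y_cm = sum(m * y for m, _, y in pieces) / total_mass
    return x_cm, y_cm


if __name__ == "__main__":
    # The U-shaped plywood: two 1 m x 3 m sides and one 4 m x 1 m bottom,
    # using area as a stand-in for mass.
    pieces = [(3, 0.5, 2.5), (3, 3.5, 2.5), (4, 2.0, 0.5)]
    print(center_of_mass(pieces))  # (2.0, 1.7)
```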
Perhaps chopping perfectly-good objects into little pieces does not appeal to you. Here's an alternative, based on the idea that the center of mass of an object is in the same place no matter how you calculate it.
The center of mass of the original 4 m x 4 m piece of plywood is at its geometric center (middle dot at right), so the y-coordinate of the center of mass of the original square is 2 m.
On the other hand, you could find the center of mass of the original square by finding the center of mass of the 2-point system consisting of the centers of mass of the U-shape and the center of mass of the plywood that originally fit in the "hole."
Y = (au·yu + ah·yh)/A

where Y is the y-coordinate of the center of mass of the original square, au is the area of the U-shape, yu is the y-coordinate of the center of mass of the U-shape, ah is the area of the "hole", yh is the y-coordinate of the center of mass of the plywood that originally filled the hole, and A is the area of the original square (which equals au + ah). A little algebra gives:

yu = (A·Y - ah·yh)/au = [(16 m²)(2 m) - (6 m²)(2.5 m)]/(10 m²) = 1.7 m
which matches the results above.
When we think of solar energy, we tend to picture shiny, bluish-silver photovoltaic cells, lined up in intricate patterns over enormous surfaces, all laid out in the heat of beating sunlight. These solar arrays may look cool, but the technology they employ is quite expensive and, in the end, not very efficient.
Mother Nature herself, as it turns out, does a much better job dealing with the energy available in sunlight. So what if we could employ sunlight the way nature does? What if we could use photosynthesis to create a better, more efficient and more versatile type of solar energy than current technology allows?
Researchers have recently found a way to alter the photosynthetic process so that its chemical reactions produce liquid hydrogen, which can be used in turn to create energy — and ultimately the electricity that powers all our stuff.
Using hydrogen as a fuel isn’t a new idea; it’s a proven energy source. The problem is making enough liquid hydrogen for it to be viable on a large scale. Producing pure hydrogen, which is extremely rare on Earth, is now energy- and time-intensive. Photosynthesis produces hydrogen naturally by using sunlight to break water molecules (H2O) into hydrogen and oxygen. If we could harness this natural process, we could produce hydrogen cleanly and abundantly.
England’s Daily Mail recently reported that a team at Australian National University created a protein that, when exposed to light, displayed a process similar to that of a plant’s leaves. It replicates “the primary capture of energy from sunlight” needed to break down H2O. The great thing about this protein is that, unlike hydrogen, it is naturally occurring, so it doesn’t require lots of expensive raw materials to create.
The resources needed to produce hydrogen this way are the same as those in photosynthesis: water and sunlight. This opens the technology to possible widespread use in developing nations, where exotic raw materials and expensive technology aren’t easy to come by.
Moreover, photosynthesis doesn’t produce carbon emissions, which makes it much better for the environment than current fossil fuels. The already available technology for turning hydrogen into electricity is also carbon neutral. So, overall, photosynthesis-aided hydrogen power would be a great breakthrough for the environment.
Despite the recent advances, the technology isn’t quite where it needs to be — not yet, anyway. As with all solar-based energy, making efficient use of the photons available in sunlight is a challenge. Stability is another issue in this process, which requires some sort of catalyst to get the sunlight to break down the water molecules. These catalysts can be either organic, like the naturally occurring protein used by the Australian team, or inorganic, like various metal oxides.
The problem with organic catalysts is that they tend to wear out over time. What’s more, they often wind up damaging photosynthetic cells through secondary effects they generate. Metal oxides, on the other hand, seem less efficient than organic catalysts, and they aren’t as wonderfully abundant as sunlight and water.
Given these and other difficulties, an efficient, widely available and carbon-neutral photosynthetic technology isn’t just around the corner. But it is a real possibility — sometime in the next decade.
Tides and tidal streams
In a tide wave the horizontal motion, i.e. the particle velocity, is called the tidal stream. The vertical tide is said to rise and fall, and the tidal stream is said to flood and ebb. If the tide is progressive, the flood direction is that of the wave propagation; if the tide is a standing wave, the flood direction is inland or toward the coast, i.e. «upstream.» The flow is the net horizontal motion of the water at a given time from whatever cause. The single word «current» is frequently used synonymously with «flow», but the term residual current is used for the portion of the flow not accounted for by the tidal streams. A tidal stream is rectilinear if it flows back and forth in a straight line, and is rotary if its velocity vector traces out an ellipse. Except in restricted coastal passages, most tidal streams are rotary, although the shape of the ellipse and the direction of rotation may vary. The ellipse traced out by a tidal stream vector is called the tidal ellipse. Slack water refers to zero flow in a tidal regime. The stand of the tide is the interval around high or low water in which there is little change of water level; this need not coincide with slack water.
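To make the rectilinear/rotary distinction concrete, here is a small illustrative sketch; the amplitudes and the roughly semidiurnal period are assumed values, not figures from the manual:

```python
import math

# Illustrative rotary tidal stream: the velocity vector (u, v) traces an
# ellipse over one tidal period.  Amplitudes are invented for this sketch;
# setting V_MINOR = 0 would collapse the ellipse into a straight line, i.e.
# a rectilinear stream.
U_MAJOR, V_MINOR = 1.0, 0.4   # knots, along the major and minor axes
PERIOD_H = 12.42              # hours, roughly a semidiurnal period

for hour in range(0, 13, 2):
    phase = 2 * math.pi * hour / PERIOD_H
    u = U_MAJOR * math.cos(phase)    # component along the major axis
    v = V_MINOR * math.sin(phase)    # component along the minor axis
    print(f"t = {hour:2d} h   u = {u:+.2f} kn   v = {v:+.2f} kn   "
          f"speed = {math.hypot(u, v):.2f} kn")
```

Note that the speed of this rotary stream never falls to zero over the cycle, whereas a purely rectilinear stream passes through slack water twice per cycle.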
Since the observed tide consists not of a single wave, but of the superposition of many tide waves of different frequency and amplitude, it will never fit exactly any of our simple descriptions. Because of this, we cannot expect the heights of successive High Waters (HWs) or of successive Low Waters (LWs) to be identical, even when they occur in the same day. Thus, the two HWs and two LWs occurring in the same day are designated as higher and lower high water (HHW and LHW), and higher and lower low water (HLW and LLW). It is likewise only the tidal stream associated with a single-frequency tide wave that traces a perfect tidal ellipse. The composite tidal stream each day traces a path more closely resembling a double spiral, with no two days' patterns identical. Also, no tide is ever a purely progressive or a purely standing wave, so that slack water should not be expected to occur at the same interval before HW or LW at all locations. Canadian Tidal Manual
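As a rough numerical illustration of that superposition (the two amplitudes below are invented; the periods, about 12.42 h for the principal semidiurnal wave and 23.93 h for a diurnal one, are standard values), adding just two constituents already produces a higher and a lower high water in the same day:

```python
import math

# Illustrative superposition of one semidiurnal and one diurnal tide wave.
# Amplitudes (metres) are invented; the periods are the usual M2 (12.42 h)
# and K1 (23.93 h) values.
def height(t_hours):
    semidiurnal = 1.0 * math.cos(2 * math.pi * t_hours / 12.42)
    diurnal = 0.4 * math.cos(2 * math.pi * t_hours / 23.93)
    return semidiurnal + diurnal

# Sample a bit more than a day and pick out the local maxima (high waters).
times = [i / 100 for i in range(2500)]          # 0 to 25 h in 0.01 h steps
levels = [height(t) for t in times]
highs = [(times[i], levels[i]) for i in range(1, len(levels) - 1)
         if levels[i] > levels[i - 1] and levels[i] > levels[i + 1]]
for t, level in highs:
    print(f"high water near t = {t:5.2f} h, height = {level:+.2f} m")
# The two highs differ noticeably: a higher high water and a lower high water.
```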
Tides in the open ocean are usually of much smaller amplitude than those along the coast. This is partly due to amplification by reflection and resonance. It is, however, more generally the result of shoaling: as the wave propagates into shallower water, its wave speed decreases and the energy contained between crests is compressed both into a smaller depth and a shorter wavelength. The tide height and the tidal stream strength must increase accordingly. If, in addition, the tide propagates into an inlet whose width diminishes toward the head, the wave energy is further compressed laterally. This effect, called funneling, also causes the tide height to increase. Canadian Tidal Manual
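A small numerical sketch of the speed part of that argument, using the long-wave speed √(gD) that the manual quotes later for tsunamis (the depths below are arbitrary sample values):

```python
import math

G = 9.81  # m/s^2

# Long-wave (shallow-water) speed c = sqrt(g * D): as the depth D shrinks,
# the speed drops, so the energy between crests is squeezed into a shorter
# wavelength and a smaller depth, and the tide height must grow.
for depth_m in (4000, 1000, 200, 50, 10):
    c = math.sqrt(G * depth_m)   # m/s
    print(f"depth {depth_m:5d} m -> long-wave speed {c:6.1f} m/s "
          f"({c * 3.6:5.0f} km/h)")
```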
Sometimes the front of the rising tide propagates up a river as a bore, a churning and tumbling wall of water advancing up the river not unlike a breaking surf riding up a beach (Fig. 1).
Creation of a bore requires a large rise of tide at the mouth of the river, some sandbars or other restrictions at the entrance to impede the initial advance of the tide, and a shallow and gently sloping river bed. Simply stated, the water cannot spread uniformly over the vast shallow interior area fast enough to match the rapid rise at the entrance. Friction at the base of the advancing front, plus resistance from the last of the ebb flow still leaving the river, causes the top of the advancing front to tumble forward, sometimes giving the bore the appearance of a travelling waterfall.
There are spectacular bores a meter or more high in several rivers and estuaries of the world. The best-known bore in Canada is that in the Petitcodiac River near Moncton, N.B., but there are others in the Shubenacadie River and in the Salmon River near Truro, N.S., all driven by the large Bay of Fundy tides. These are impressive (about a meter) only at the time of the highest monthly tides, and may be no more than a large ripple during the smallest tides. Canadian Tidal Manual
The reversing falls near the mouth of the St. John River at Saint John, N.B. is also caused by the large Bay of Fundy tides and the configuration of the river. A narrow gorge at Saint John separates the outer harbour from a large inner basin. When the tide is rising most rapidly outside, water cannot pass quickly enough through the gorge to raise the level of the inner basin at the same rate, so on this stage of the tide the water races in through the gorge, dropping several meters over the length of the gorge. When the outside tide is falling most rapidly, the situation is reversed, and the water races out through the gorge in the opposite direction, again dropping several meters in surface elevation. Twice during each tidal cycle, when the water levels inside and out are the same, the water in the gorge is placid and navigable. The surface of the water in the gorge near the peak flows is violently agitated and the velocity of flow is too rapid and turbulent to permit navigation through the gorge. Canadian Tidal Manual
The upper photo is an aerial view at slack water, showing the inner basin, the outer harbour, and the bridge over the gorge that separates them. Lower left shows the inflow through the gorge at high water in the outer harbour (7.6 m above chart datum at time of photo). Lower right shows the outflow through the gorge at low water in the outer harbour (0.9 m above chart datum at time of photo). The recorded extreme high and low waters at Saint John are 9.0 m and -0.4 m, respectively, above chart datum, and at these times the flows would have been correspondingly greater. Canadian Tidal Manual
A tide rip or overfall is an area of breaking waves or violent surface agitation that may occur at certain stages of the tide in the presence of strong tidal flow. They may be caused by a rapid flow over an irregular bottom, by the conjunction of two opposing flows, or by the piling up of waves or swell against an oppositely directed tidal flow. If waves run up against a current, the wave form and the wave energy are compressed into a shorter wavelength, causing a growth and steepening of the waves. If the current is strong enough, the waves may steepen to the point of breaking, and dissipate their energy in a wild fury at sea. Canadian Tidal Manual
Because the tide usually dominates the spectrum of water level and current fluctuations along the ocean coasts, it is common to think of non-tidal fluctuations mostly in connection with inland waters. The tide in the deep ocean, however, can be quite insignificant and the tidal streams completely negligible from the standpoint of navigation. Wind-driven surface currents in the deep ocean, on the other hand, are of major importance to navigation. Water levels along ocean coasts are just as surely affected by atmospheric pressure and wind as are water levels along the shores of inland bodies of water.
This is the case with storm surges. As the name suggests, storm surges are pronounced increases in water level associated with the passage of storms. Much of the increase is the direct result of wind set-up and the inverted barometer effect under the low pressure area near the center of the storm. There is, however, another process by which the surge may become more exaggerated than would be anticipated from these two effects alone. As the storm depression travels over the water surface, a long surface wave travels along with it. If the storm path is such as to direct this wave up on shore, the wave may steepen and grow as a result of shoaling and funneling (see Shallow-water effects). The term «negative surge» is sometimes used to describe a pronounced non-tidal decrease in water level. These could be associated with offshore winds and travelling high pressure systems, and are not usually as extreme as storm surges. Negative surges may, however, be of considerable concern to mariners, since they can create unusually shallow water if they occur near the low tide stage.
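For a rough feel of the inverted-barometer contribution alone, here is a hydrostatic back-of-the-envelope sketch (the roughly 1 cm of rise per hPa of pressure drop is a standard estimate, not a figure from the manual, and the 50 hPa drop is invented):

```python
RHO_SEAWATER = 1025.0   # kg/m^3, a typical value
G = 9.81                # m/s^2

def inverted_barometer_rise_m(pressure_drop_hpa):
    """Static sea-level rise under a low-pressure center: dh = dp / (rho * g)."""
    return (pressure_drop_hpa * 100.0) / (RHO_SEAWATER * G)

# Assumed example: pressure near the storm center is 50 hPa below ambient.
print(f"{inverted_barometer_rise_m(50):.2f} m of rise")   # about 0.5 m
```

Wind set-up and the shoaling of the travelling wave can add to this, which is why the observed surge can be considerably larger than the inverted-barometer rise alone.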
The range of these non-tidal fluctuations, however, is generally small compared to that of the tide on the coast, and their importance may not be fully realized until an extreme of the non-tidal fluctuations coincides with a corresponding extreme (high or low) of the tidal fluctuation. In using tidal predictions, such as those in the Canadian Tide and Current Tables, it should be borne in mind that they contain no allowance for non-tidal effects, other than for the average seasonal change in mean water level. Canadian Tidal Manual
A seiche is the free oscillation of the water in a closed or semi-enclosed basin at its natural period. Seiches are frequently observed in harbours, lakes, bays, and almost any distinct basin of moderate size. They may be caused by the passage of a pressure system over the basin or by the build-up and subsequent relaxation of a wind set-up in the basin. Following initiation of the seiche, the water sloshes back and forth until the oscillation is damped out by friction.
Seiches are not apparent in the main ocean basins, probably because there is no force sufficiently coordinated over the ocean to set a seiche in motion. The tides are not seiches, being forced oscillations at tidal frequencies. If the natural period, or seiche period, is close to the period of one of the tidal species, the constituents of that species (diurnal or semidiurnal) will be amplified by resonance more than those of other species. The constituent closest to the seiche period will be amplified most of all, but the response is still a forced oscillation, whereas a seiche is a free oscillation. A variety of seiche periods may appear in the same water level record because the main body of water may oscillate longitudinally or laterally at different periods; it may also oscillate in both the open and closed modes if the open end is somewhat restricted; and bays and harbours off the main body of water may oscillate locally at their particular seiche periods. Seiches generally have half-lives of only a few periods, but may be frequently regenerated. The largest amplitude seiches are usually found in shallow bodies of water of large horizontal extent, probably because the initiating wind set-up can be greater under these conditions. Canadian Tidal Manual
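For a feel of typical seiche periods, a common estimate that the manual does not quote (Merian's formula for the fundamental mode of a closed rectangular basin, T = 2L/√(gD)) can be sketched as follows; the basin dimensions are invented:

```python
import math

G = 9.81  # m/s^2

def closed_basin_seiche_period_s(length_m, depth_m):
    """Merian's formula for the fundamental seiche of a closed rectangular basin."""
    return 2.0 * length_m / math.sqrt(G * depth_m)

# Assumed example: a shallow basin 30 km long and 20 m deep.
period_h = closed_basin_seiche_period_s(30_000, 20) / 3600
print(f"fundamental seiche period: about {period_h:.1f} h")   # roughly 1.2 h
```

Longer and shallower basins give longer natural periods.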
A tsunami is a disturbance of the water surface caused by a displacement of the sea-bed or an underwater landslide, usually triggered by an earthquake or an underwater volcanic eruption. The surface disturbance travels out from the center of origin in much the same pattern as do the ripples from the spot where a pebble lands in a pond. In some directions the waves may almost immediately dissipate their energy against a nearby shore, while in other directions they may be free to travel for thousands of kilometers across the ocean as a train of several tens of long wave crests. Being long waves, they travel at the speed √(gD), giving them a speed of over 700 km/h (almost 400 knots) when travelling in a depth of 4 000 m. The period between crests may vary from a few minutes to the order of 1 h, so that in a depth of 4 000 m the distance between crests might range from less than a hundred to several hundred kilometers.
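A quick sketch that reproduces the manual's speed and crest-spacing figures (the specific periods below are simply sample values within the stated few-minutes-to-an-hour range):

```python
import math

G = 9.81          # m/s^2
DEPTH_M = 4000.0  # open-ocean depth used in the manual's example

speed = math.sqrt(G * DEPTH_M)                       # long-wave speed, m/s
print(f"speed: about {speed * 3.6:.0f} km/h "
      f"({speed * 3.6 / 1.852:.0f} knots)")          # roughly 713 km/h, 385 kn

# Crest-to-crest distance = speed x period, for periods in the stated range.
for period_min in (5, 15, 30, 60):
    wavelength_km = speed * period_min * 60 / 1000
    print(f"period {period_min:2d} min -> crest spacing about {wavelength_km:.0f} km")
```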
The wave heights at sea are only of the order of a metre, and over a wavelength of several hundred kilometers this does not constitute a significant distortion of the sea surface. When these waves arrive in shallow water, however, their energy is concentrated by shoaling and possibly funneling, causing them to steepen and rise to many meters in height. Not only are the tsunami waves high, but they are also massive when they arrive on shore, and are capable of tremendous destruction in populated areas. Because of the relative gentleness of tsunamis in deeper water, ships should always leave harbour and head for deep offshore safety when warned of an approaching tsunami.
The origin of the word is, in fact, from the Japanese expression for «harbour wave.» This name has been adopted to replace the popular expression «tidal wave,» whose use is to be discouraged since there is nothing tidal in the origin of a tsunami. Another expression sometimes used for these waves is «seismic sea wave,» suggesting the seismic, or earthquake, origin of most tsunamis.
A tsunami warning system for the Pacific has been established by the United States, with its headquarters in Honolulu, Hawaii. Other countries, including Canada, that border on the Pacific have since been recruited into the system. Canada's direct contribution consists of two automatic water level gauges programmed to recognize unusual water level changes that could indicate the passage of a tsunami, and to transmit this advice to Honolulu. The gauges are at Tofino on the west coast of Vancouver Island, and at Langara Island off the northwest tip of the Queen Charlotte Islands group. The tsunami warning center at Honolulu receives immediate information from seismic recording stations around the Pacific of any earthquake that could possibly generate a tsunami; it calculates the epicenter and intensity of the quake and the arrival time of the as yet hypothetical tsunami at the water level sensing stations in the network; it initiates a «tsunami watch» at all water level stations in the path, for a generous time interval around the ETA of the hypothetical tsunami; and it issues tsunami warnings to the appropriate authorities in threatened locations if the water level interpretation indicates that a tsunami has indeed been generated. Canadian Tidal Manual
When seawater freezes, it is only the water that forms into ice crystals. The salt becomes trapped between the crystals in a concentrated brine that eventually leaches out, leaving mostly pure ice floating on the surface, surrounded by sea water of increased salinity and density. Since the ice displaces its own weight in this denser water, it does not displace as much volume as it occupied before freezing. Because of this, freezing has an effect similar to that of evaporation - it lowers the water level and increases the surface salinity and density. Surface water must therefore flow toward a region of freezing, while the cold salty water that is formed must sink and flow away from the region. In the polar regions, particularly in the Antarctic, freezing produces cold salty water that sinks and flows along the ocean bottom for thousands of kilometers. When sea ice melts, mostly fresh water is released, and this decreases the salinity and the density of the surrounding water. Melting thus has an effect similar to that of precipitation - it raises the water level and decreases the surface salinity and density. Surface water must therefore flow away from a region of melting ice. The speeds of currents associated with freezing and melting in the ocean are never great. Canadian Tidal Manual